Columns: text (string, lengths 60 to 353k); source (string, 2 classes)
**Icosagon** Icosagon: In geometry, an icosagon or 20-gon is a twenty-sided polygon. The sum of any icosagon's interior angles is 3240 degrees. Regular icosagon: The regular icosagon has Schläfli symbol {20}, and can also be constructed as a truncated decagon, t{10}, or a twice-truncated pentagon, tt{5}. One interior angle in a regular icosagon is 162°, meaning that one exterior angle would be 18°. The area of a regular icosagon with edge length t is approximately 31.5687 t². In terms of the radius R of its circumcircle, the area is A = (5/2)R²(√5 − 1); since the area of the circle is πR², the regular icosagon fills approximately 98.36% of its circumcircle. Uses: The Big Wheel on the popular US game show The Price Is Right has an icosagonal cross-section. The Globe, the outdoor theater used by William Shakespeare's acting company, was discovered to have been built on an icosagonal foundation when a partial excavation was done in 1989. As a golygonal path, the swastika is considered to be an irregular icosagon. A regular square, pentagon, and icosagon can completely fill a plane vertex. Construction: As 20 = 2² × 5, the regular icosagon is constructible using a compass and straightedge, or by an edge-bisection of a regular decagon, or a twice-bisected regular pentagon. The golden ratio in an icosagon: In the construction with given side length, the circular arc around C with radius CD divides the segment E20F in the ratio of the golden ratio, (1 + √5)/2 ≈ 1.618. Symmetry: The regular icosagon has Dih20 symmetry, order 40. There are 5 subgroup dihedral symmetries: (Dih10, Dih5), and (Dih4, Dih2, and Dih1), and 6 cyclic group symmetries: (Z20, Z10, Z5), and (Z4, Z2, Z1). Symmetry: These 10 symmetries can be seen in 16 distinct symmetries on the icosagon, a larger number because the lines of reflection can either pass through vertices or edges. John Conway labels these by a letter and group order. Full symmetry of the regular form is r40 and no symmetry is labeled a1. The dihedral symmetries are divided depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars), and i when reflection lines pass through both edges and vertices. Cyclic symmetries in the middle column are labeled as g for their central gyration orders. Symmetry: Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g20 subgroup has no degrees of freedom but can be seen as directed edges. The highest symmetry irregular icosagons are d20, an isogonal icosagon constructed by ten mirrors which can alternate long and short edges, and p20, an isotoxal icosagon, constructed with equal edge lengths, but vertices alternating two different internal angles. These two forms are duals of each other and have half the symmetry order of the regular icosagon. Dissection: Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into m(m-1)/2 parallelograms. Dissection: In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. For the icosagon, m = 10, and it can be divided into 45 parallelograms: 5 squares and 4 sets of 10 rhombs. This decomposition is based on a Petrie polygon projection of a 10-cube, with 45 of 11520 faces. The list OEIS: A006245 enumerates the number of solutions as 18,410,581,880, including up to 20-fold rotations and chiral forms in reflection. Related polygons: An icosagram is a 20-sided star polygon, represented by symbol {20/n}.
There are three regular forms given by Schläfli symbols: {20/3}, {20/7}, and {20/9}. There are also six regular star figures (compounds) using the same vertex arrangement: 2{10}, 4{5}, 5{4}, 2{10/3}, 4{5/2}, and 10{2}. Deeper truncations of the regular decagon and decagram can produce isogonal (vertex-transitive) intermediate icosagram forms with equally spaced vertices and two edge lengths. A regular icosagram, {20/9}, can be seen as a quasitruncated decagon, t{10/9}={20/9}. Similarly, a decagram, {10/3}, has a quasitruncation t{10/7}={20/7}, and finally a simple truncation of a decagram gives t{10/3}={20/3}. Petrie polygons: The regular icosagon is the Petrie polygon for a number of higher-dimensional polytopes, shown in orthogonal projections in Coxeter planes. It is also the Petrie polygon for the icosahedral 120-cell, small stellated 120-cell, great icosahedral 120-cell, and great grand 120-cell.
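The figures quoted above (the 162° interior angle, the ≈31.5687 area coefficient, and the ≈98.36% circumcircle coverage) all follow from the standard regular-polygon formulas; the short Python sketch below, added here purely for illustration, reproduces them numerically.

```python
import math

n = 20  # number of sides of a regular icosagon

# Interior and exterior angles of a regular n-gon
interior = (n - 2) * 180 / n          # 162.0 degrees
exterior = 180 - interior             # 18.0 degrees

# Area with edge length t: A = (n/4) * t^2 * cot(pi/n); the coefficient of t^2:
area_coeff = (n / 4) / math.tan(math.pi / n)           # ~31.569

# Area with circumradius R: A = (n/2) * R^2 * sin(2*pi/n) = (5/2) * R^2 * (sqrt(5) - 1)
area_per_R2 = (n / 2) * math.sin(2 * math.pi / n)      # ~3.0902
closed_form = 2.5 * (math.sqrt(5) - 1)                 # same value, closed form

# Fraction of the circumcircle (area pi * R^2) covered by the icosagon
fill = area_per_R2 / math.pi                           # ~0.9836

print(interior, exterior, round(area_coeff, 3), round(fill, 4))
```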
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Design studio** Design studio: A design studio or drawing office is a workplace for designers and artisans engaged in conceiving, designing and developing new products or objects. Facilities in a design studio include clothes, furniture and art equipment best suited for design work, extending to work benches, small machines, computer equipment, paint shops and large presentation boards. Size: The size and conveniences also depend upon the type of the studio. Freelance designers engaged in product design often have a small set-up of their own, the smallest being within private residences. The ambiance of a design studio is often noted for its informality. The number of designers working in a typical design studio may vary widely, from a single individual to up to 1000 members. In such large studios, apart from designers, the staff may also consist of other technicians and artisans engaged in prototyping and engineering detailing, in addition to administrative staff. Such studios are composed of flexible work spaces where design thinking thrives. Ownership: The smallest studios are operated by individuals, while the medium to bigger ones may be owned and operated by manufacturers involved in consumer goods or by design firms engaged in design services catering to different firms and industries. Such independent studios may function both as design studios and as design firms. Types: Automotive design studios Automotive design studios are usually large, as space is required for multiple cars under development, in addition to clay modeling surface tables, large scanners and clay milling machines. Such studios also have a presentation area to accommodate at least 20 to 30 people for presentations and design briefings with clients. Automobile manufacturer studios are often treated as a separate entity and housed within a compound. These design studios are often located in a different part of the city or country and are isolated from the manufacturing and engineering environment. Such studios are often high-security areas, where even internal access to most areas is severely restricted. Types: OKB OKB is a transliteration of the Russian initials of "опытное конструкторское бюро" – opytnoye konstruktorskoye byuro, meaning 'experimental design bureau'. During the Soviet era, OKBs were closed institutions working on design and prototyping of advanced technology, usually for military applications.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kapla** Kapla: Kapla is a construction set for children and adults. The sets consist only of identical wood planks measuring 11.7 cm x 2.34 cm x 0.78 cm. This 15:3:1 ratio of length:width:thickness differs from the proportions used for traditionally proportioned building blocks (such as unit blocks), and the planks are used for building features such as lintels, roofs and floors. They are known for their stability in the absence of fastening devices. Name origin: "KAPLA" is an abbreviated form of the Dutch phrase "kabouter plankjes," which means "Gnome Planks." History: KAPLA was invented in 1987 by Dutchman Tom van der Bruggen. A student of art history, Van der Bruggen had hopes of building a castle from an early age. Inspired by an old abandoned farm on the river Tarn in the South of France, Van der Bruggen converted the farm into his dream castle, complete with carriage entrance, fountains, and towers. To help him visualize the finished construction of his castle, Tom van der Bruggen used wooden blocks, but soon realized that they would not be suitable for certain aspects of the construction, such as the lintels, roofs and floors. Assembly: KAPLA requires no glue, screws or clips to fix the planks. Each plank is placed one on top of the other, and held in place by weight and balance alone. There are three possible ways to use Kapla planks: flat, on the side, or standing up (vertically). Also, similar KAPLA constructions can create different assemblages: piled up as bricks (embedding), or piled up as spiral stairs (stacking). KAPLA is intended for children to safely build, create and experiment by using their imagination. Varieties: KAPLA bricks are made of pine wood and are available in many different colors. They are sold in sets ranging in size from 40 to 1000 pieces. Construction and art: KAPLA has created four educational art books, intended to inspire children who use their products. They are meant to encourage the use of geometry, physics and technology, while exposing children to the world of art, forms and volumes. The books are also created with the intention of helping children understand the unique and demanding nature of construction.
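As a purely illustrative arithmetic check (not part of the source text), the stated 15:3:1 proportions follow directly from the quoted plank dimensions when each edge is divided by the thickness:

```python
# Plank dimensions quoted in the passage, in centimetres
length, width, thickness = 11.7, 2.34, 0.78

# Dividing each edge by the thickness recovers the length:width:thickness ratio
ratio = (length / thickness, width / thickness, thickness / thickness)
print([round(r, 6) for r in ratio])  # [15.0, 3.0, 1.0]
```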
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Poisons Act 1972** Poisons Act 1972: The Poisons Act 1972 (c 66) is an Act of the Parliament of the United Kingdom making provisions for the sale of non-medicinal poisons, and the involvement of Local Authorities and the Royal Pharmaceutical Society of Great Britain in their regulation. The Act refers to the Pharmacy and Poisons Act 1933, and the Poisons List. Non-medicinal poisons are divided into two separate lists. List one substances may only be sold by a registered pharmacist, and list two substances may be sold by a registered pharmacist or a licensed retailer. Further provisions are made to enable the Royal Pharmaceutical Society to enforce pharmacists' compliance with the Act and to impose fines for breaches. Local Authorities are responsible for vetting applications for list two substances, for law enforcement, and for control of licensed premises. Section 7: The Poison Rules 1982 (SI 1982/218) were made under this section.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Electrical resistance survey** Electrical resistance survey: Electrical resistance surveys (also called earth resistance or resistivity surveys) are one of a number of methods used in archaeological geophysics, as well as in engineering geological investigations. In this type of survey, electrical resistance meters are used to detect and map subsurface archaeological features and patterning. Overview: Electrical resistance meters can be thought of as similar to the ohmmeters used to test electrical circuits. Archaeological features can be mapped when they are of higher or lower resistivity than their surroundings. A stone foundation might impede the flow of electricity, while the organic deposits within a midden might conduct electricity more easily than surrounding soils. Although generally used in archaeology for planview mapping, resistance methods also have a limited ability to discriminate depth and create vertical profiles (see Electrical resistivity tomography). Further applications include the measurement of the electrical resistivity of concrete to determine the corrosion potential in concrete structures. Electrical resistance surveying is one of the most popular geophysical methods because it is nondestructive and economically favorable. Instrumentation: In most systems, metal probes (electrodes) are inserted into the ground to obtain a reading of the local electrical resistance. A variety of probe configurations are used, most having four probes, often mounted on a rigid frame. In these systems, two of the probes, called current probes, are used to introduce a current (either direct or low-frequency switching current) into the earth. The other two probes, called voltage or potential probes, are used to measure the voltage, which indicates the local resistivity. In general, greater probe spacings yield greater depth of investigation, but at the cost of sensitivity and spatial resolution. Early surveys (beginning in the mid-20th century) often used the Wenner array, which was a linear array of four probes. These were arranged current-voltage-voltage-current, at equal distances across the array. Probes were mounted on a rigid frame, or placed individually. While quite sensitive, this array has a very wide span for its depth of investigation, leading to problems with horizontal resolution. A number of experimental arrays attempted to overcome the shortcomings of the Wenner array, the most successful of these being the twin-probe array, which has become the standard for archaeological use. The twin-probe array - despite its name - has four probes: one current and one voltage probe mounted on a mobile frame to collect survey readings, and the other current probe placed remotely along with a voltage reference probe. These fixed remote probes are connected to the mobile survey probes by a trailing cable. This configuration is very compact for its depth of investigation, resulting in superior horizontal resolution. The logistical advantage of the more compact array is somewhat offset by the trailing cable. Instrumentation: A disadvantage of the systems described above is a relatively slow rate of survey. One solution to this has been wheeled arrays. These use spiked wheels or metal disks as electrodes, and may use a square array (a variation of the Wenner array) to avoid the encumbrance of a trailing cable.
Wheeled arrays may be towed by vehicles or by human power.Systems having long linear arrays of many electrodes are often used in geological applications, and less commonly in archaeology. These take repeated measurements (often computer controlled) using different electrode spacings at multiple points along the extended line of electrodes. Data collected in this way may be used for tomography, or generating vertical profiles.Capacitively coupled systems that do not require direct physical contact with the soil have also been developed. These systems are capable of tomographic studies as well as mapping horizontal patterning. They may also be used on hard or very dry surfaces that preclude electrical contact necessary for probe resistance systems. While these show promise for archaeological applications, currently available systems operating on this principle lack sufficient spatial resolution and sensitivity. Data collection: Survey usually involves walking with the instrument along closely spaced parallel traverses, taking readings at regular intervals. In most cases, the area to be surveyed is staked into a series of square or rectangular survey "grids" (terminology can vary). With the corners of the grids as known reference points, the instrument operator uses tapes or marked ropes as a guide when collecting data. In this way, positioning error can be kept to within a few centimeters for high-resolution mapping. Early surveys recorded readings by hand, but computer controlled data logging and storage are now the norm.
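For the four-probe Wenner geometry described above, the textbook conversion from a reading to apparent resistivity is rho_a = 2*pi*a*(V/I), where a is the equal probe spacing. The sketch below is a minimal illustration of that standard relation only; the example numbers are invented, and other arrays (such as the twin-probe configuration) use a different geometric factor.

```python
import math

def wenner_apparent_resistivity(spacing_m: float, voltage_v: float, current_a: float) -> float:
    """Apparent resistivity (ohm*m) for a Wenner array: rho_a = 2 * pi * a * V / I."""
    return 2 * math.pi * spacing_m * voltage_v / current_a

# Illustrative reading: 1 m probe spacing, 50 mV across the potential probes,
# 10 mA injected through the current probes.
print(wenner_apparent_resistivity(1.0, 0.05, 0.01))  # ~31.4 ohm*m
```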
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**A2 Key** A2 Key: A2 Key, previously known as Cambridge English: Key and the Key English Test (KET), is an English language examination provided by Cambridge Assessment English (previously known as Cambridge English Language Assessment and University of Cambridge ESOL examinations). A2 Key is targeted at novice students of English. It tests for proficiency in simple communication at Level A2 of the Common European Framework of Reference (CEFR). A2 Key is offered in two versions: one for school-aged learners and one for general education. History: A2 Key (previously known as the Key English Test (KET) and Cambridge English: Key) was developed through trials conducted between 1991 and 1994. It was created to offer students a basic qualification in English and provide the first step for those wishing to progress towards higher level qualifications, such as B1 Preliminary, B2 First, C1 Advanced, and C2 Proficiency. An updated version of A2 Key was launched in March 2004, following a review with stakeholders. Comparison of two versions of A2 Key: A2 Key is designed for adult learners. It is one of the exams that make up Cambridge English Qualifications for general and higher education. A2 Key for Schools is designed for school-aged learners. It is one of the exams that make up Cambridge English Qualifications for Schools. The two tests have the same exam format; e.g. number of papers, number of questions, and time allowance. They both help students to develop real-life communication skills, and both lead to the same certificate. The exams use different topics and content: A2 Key is targeted at the interests and experiences of adult learners and is designed to support a wide range of learners, whether they want to get into university, start their own business or develop their career. A2 Key for Schools is designed specifically for school-aged students. The topics and tasks on the exam are designed to reinforce the learning students do in class. Format: Both versions of the exam (A2 Key and A2 Key for Schools) are made up of three papers, which cover all four language skills (Reading, Writing, Listening and Speaking). The Speaking paper is taken face-to-face, and candidates have the choice of taking the Reading and Writing paper and the Listening paper on a computer or on paper. 1. Reading and Writing (1 hour 10 minutes – 50% of total marks) The Reading and Writing paper has nine parts and 56 questions. Candidates are expected to be able to read and understand simple written information such as signs, brochures, newspapers, and magazines. Format: Parts 1 to 5 focus on reading skills, including underlying knowledge of vocabulary and grammar. The exam includes tasks such as supplying missing words, matching statements with given texts, selecting the right word for each gap in a given text, and completing multiple-choice questions about a given text. Parts 6 and 7 focus on writing skills. Part 6 requires writing an email of 25 words minimum, whereas part 7 requires a story of 35 words minimum based on three picture prompts. Candidates can choose either of the two parts. 2. Listening (approximately 30 minutes – 25% of total marks) The Listening paper has five parts comprising 25 questions. Candidates are expected to understand spoken material in both informal and neutral settings on a range of everyday topics when spoken reasonably slowly. Part 1 has five short conversations and three pictures.
Candidates listen for information such as prices, numbers, times, dates, locations, directions, shapes, sizes, weather, descriptions, etc. They then answer five multiple-choice questions. Part 2 has a recording of a monologue. Candidates write down information from the monologue to complete a message or notes. Part 3 has a longer conversation than those in part 1. Candidates listen for key information in the conversation and answer five multiple-choice questions. Part 4 has five short monologues and dialogues. Candidates identify the main idea, gist, topic, or message in the recordings and then answer five multiple-choice questions. Part 5 has another long conversation. Candidates identify simple factual information in the conversation and match together two lists of words (e.g. names of people and the food they like to eat). 3. Speaking (8–10 minutes – 25% of total marks) The Speaking test has two parts and is conducted face-to-face with one or two other candidates and two examiners. Candidates are expected to demonstrate conversation skills by answering and asking simple questions. Part 1 is a conversation with the examiner. Candidates give factual/personal information about themselves, e.g. about their daily life, interests, etc. Format: Part 2 has two phases: the first is a collaborative task with the other candidate(s), and the second is a further discussion with the examiner. In the first phase, the examiner gives each candidate a prompt card and asks them to talk with the other candidate(s) and ask and answer questions related to the prompt card. In phase 2, the examiner asks the candidates questions related to the prompt card. Scoring: In February 2016, Cambridge English Scale scores replaced the candidate profile and standardised scores used for pre-2016 results. All candidates (pre- and post-2016) receive a Statement of Results, with those scoring high enough also receiving a certificate. Scoring: Scoring from February 2016 From 2016, the Statement of Results and the Certificate have the following information about the candidate's performance: A score on the Cambridge English Scale for each of the three papers (Reading and Writing, Listening and Speaking) A score on the Cambridge English Scale for the overall exam A grade (A, B, C or Level A1) for the overall exam A CEFR level for the overall exam. The candidate's overall score is averaged from the individual scores for each paper (Reading and Writing, Listening and Speaking). Scoring: Cambridge English: Key is targeted at CEFR Level A2, but also provides reliable assessment at the level above A2 (Level B1) and the level below (Level A1). Scores under 100 are reported on the Statement of Results, but candidates will not receive the A2 Key certificate. Scoring: Scoring pre-February 2016 Pre-2016, the Statement of Results had the following information, reflecting the total combined score from all three papers: A grade (Pass with Distinction, Pass with Merit and Pass) for the overall exam A score (out of 100) for the overall exam A CEFR level for the overall exam. Pre-2016, the Statement of Results had a Candidate Profile, which showed the candidate's performance on each of the individual papers against the following scale: exceptional, good, borderline and weak. Scoring: Pre-2016, candidates who achieved a score of 45 or more (out of 100) received a certificate. Timing and results: Candidates take the Reading and Writing and the Listening papers on the same day.
The Speaking paper is often taken a few days before or after the Reading and Writing and the Listening papers, or on the same day. The exam is available to be taken at test centres in paper-based and computer-based formats. Both versions of the exam award the same internationally accepted certificate. The Speaking test is only available to be taken face-to-face with an examiner. The paper-based exam and the computer-based exam are offered at test centres throughout the calendar year. A directory of all global exam centres and their contact details can be accessed on the Cambridge Assessment English website. Successful candidates receive two documents: a Statement of Results and a certificate. Universities, employers, and other organisations may require either of these documents as proof of English language skills. An online Statement of Results is available to candidates four to six weeks after the paper-based exam and two weeks after the computer-based exam. Successful candidates (those scoring above 45) receive a hard-copy certificate within two months of the paper-based exam and within four weeks of the computer-based exam. Usage: A2 Key demonstrates language proficiency at Level A2 of the Common European Framework of Reference (CEFR). Usage: It is designed to show that a successful candidate has English language skills to deal with basic situations, e.g. they can understand simple written English such as short notices, understand simple spoken directions, communicate in familiar situations, use basic phrases and expressions, write short, simple notes and interact with English speakers who talk slowly and clearly.Learners can use this qualification for education or work purposes, as well as to progress to higher-level English language qualifications, such as B1 Preliminary, B2 First, C1 Advanced and C2 Proficiency. Usage: Many higher education institutions around the world recognise A2 Key as an indication of English language ability. This includes universities based in: Brazil (e.g. Universidade Católica de Brasilia) Chile (e.g. Universidad de Santiago de Chile) Egypt (e.g. Alexandria University) Mexico (e.g. Universidad Autonoma del Estado Mexico) Myanmar (e.g. University of Computer Studies, Yangon) Vietnam (e.g. Tra Vinh University) Spain (e.g. Universidad Salamanca).A full list of organisations can be accessed on the Cambridge Assessment English website. Usage: Additionally, many global companies and brands accept A2 Key as part of their recruitment processes including Chelsea Football Club Academy. Preparation: A comprehensive list of authorised exam centres can be found on the Cambridge Assessment English website. Free preparation materials, such as sample tests, are available from the website for A2 Key and A2 Key for Schools . There is also a wide range of official support materials, jointly developed by Cambridge Assessment English and Cambridge University Press.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Propionyl-CoA carboxylase** Propionyl-CoA carboxylase: Propionyl-CoA carboxylase (EC 6.4.1.3, PCC) catalyses the carboxylation reaction of propionyl-CoA in the mitochondrial matrix. PCC has been classified both as a ligase and a lyase. The enzyme is biotin-dependent. The product of the reaction is (S)-methylmalonyl-CoA. ATP + propionyl-CoA + HCO3− <=> ADP + phosphate + (S)-methylmalonyl-CoA. (S)-Methylmalonyl-CoA cannot be directly utilized by animals. It is acted upon by a racemase, yielding (R)-methylmalonyl-CoA, which is then converted into succinyl-CoA by methylmalonyl-CoA mutase (one of the few metabolic enzymes which requires vitamin B12 as a cofactor). Succinyl-CoA, a Krebs cycle intermediate, is further metabolized into fumarate, then malate, and then oxaloacetate. Oxaloacetate may be transported into the cytosol to form phosphoenolpyruvate and other gluconeogenic intermediates. Propionyl-CoA is therefore an important precursor to glucose. Propionyl-CoA carboxylase: Propionyl-CoA is the end product of odd-chain fatty acid metabolism, including most methylated fatty acids. The amino acids valine, isoleucine, and methionine are also substrates for propionyl-CoA metabolism. Structure: Propionyl-CoA carboxylase (PCC) is a 750 kDa alpha(6)-beta(6)-dodecamer. (Only approximately 540 kDa is native enzyme.) The alpha subunits are arranged as monomers, decorating the central beta-6 hexameric core. This core is oriented as a short cylinder with a hole along its axis. Structure: The alpha subunit of PCC contains the biotin carboxylase (BC) and biotin carboxyl carrier protein (BCCP) domains. A domain known as the BT domain is also located on the alpha subunit and is essential for interactions with the beta subunit. The 8-stranded anti-parallel beta barrel fold of this domain is a notable feature. The beta subunit contains the carboxyltransferase (CT) activity. Structure: The BC and CT sites are approximately 55 Å apart, indicative of the entire BCCP domain translocating during catalysis of the carboxylation of propionyl-CoA. This provides clear evidence of crucial dimeric interaction between alpha and beta subunits. Structure: The biotin-binding pocket of PCC is hydrophobic and highly conserved. Biotin and propionyl-CoA bind perpendicular to each other in the oxyanion hole-containing active site. The native enzyme to biotin ratio has been determined to be one mole of native enzyme to four moles of biotin. The N1 of biotin is thought to be the active site base. Site-directed mutagenesis at D422 shows a change in the substrate specificity of the propionyl-CoA binding site, thus indicating this residue's importance in PCC's catalytic activity. In 1979, inhibition by phenylglyoxal determined that a phosphate group from either propionyl-CoA or ATP reacts with an essential arginine residue in the active site during catalysis. Later (2004), it was suggested that Arginine-338 serves to orient the carboxyphosphate intermediate for optimal carboxylation of biotin. The KM values for ATP, propionyl-CoA, and bicarbonate have been determined to be 0.08 mM, 0.29 mM, and 3.0 mM, respectively. The isoelectric point falls at pH 5.5. PCC's structural integrity is conserved over the temperature range of -50 to 37 degrees Celsius and the pH range of 6.2 to 8.8. Optimum pH was shown to be between 7.2 and 8.8 without biotin bound. With biotin, optimum pH is 8.0-8.5. Mechanism: The normal catalytic reaction mechanism involves a carbanion intermediate and does not proceed through a concerted process.
A probable pathway has been described. The reaction has been shown to be slightly reversible at low propionyl-CoA flux. Subunit genes: Human propionyl-CoA carboxylase contains two subunits, each encoded by a separate gene. Pathology: A deficiency is associated with propionic acidemia. PCC activity is the most sensitive indicator of biotin status tested to date. In future pregnancy studies, the use of lymphocyte PCC activity data should prove valuable in assessment of biotin status. Pathology: Intragenic complementation When multiple copies of a polypeptide encoded by a gene form an aggregate, this protein structure is referred to as a multimer. When a multimer is formed from polypeptides produced by two different mutant alleles of a particular gene, the mixed multimer may exhibit greater functional activity than the unmixed multimers formed by each of the mutants alone. In such a case, the phenomenon is referred to as intragenic complementation. Pathology: PCC is a heteropolymer composed of α and β subunits in an α6β6 structure. Mutations in PCC, either in the α subunit (PCCα) or β subunit (PCCβ), can cause propionic acidemia in humans. When different mutant skin fibroblast cell lines defective in PCCβ were fused in pairwise combinations, the β heteromultimeric protein formed as a result often exhibited a higher level of activity than would be expected based on the activities of the parental enzymes. This finding of intragenic complementation indicated that the multimeric structure of PCC allows cooperative interactions between the constituent PCCβ monomers that can generate a more functional form of the holoenzyme. Regulation: Of propionyl-CoA carboxylase: a. Carbamazepine (antiepileptic drug): significantly lowers enzyme levels in the liver. b. E. coli chaperonin proteins GroES and GroEL: essential for folding and assembly of human PCC heteromeric subunits. c. Bicarbonate: negative cooperativity. d. Mg2+ and MgATP2−: allosteric activation. By propionyl-CoA carboxylase: a. 6-Deoxyerythronolide B: a decrease in PCC levels leads to increased production. b. Glucokinase in pancreatic beta cells: the precursor of beta-PCC was shown to decrease KM and increase Vmax; activation.
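The KM values quoted above can be used to gauge how far the enzyme is from saturation at a given substrate level via the standard Michaelis-Menten relation v/Vmax = [S]/(KM + [S]). The sketch below is only a single-substrate simplification (PCC is a multi-substrate, biotin-dependent enzyme), and the substrate concentrations are hypothetical:

```python
# Michaelis-Menten saturation: v / Vmax = [S] / (Km + [S])
def fraction_of_vmax(s_mM: float, km_mM: float) -> float:
    return s_mM / (km_mM + s_mM)

# Km values reported in the passage (mM)
km = {"ATP": 0.08, "propionyl-CoA": 0.29, "bicarbonate": 3.0}

# Hypothetical substrate concentrations (mM), chosen only for illustration
conc = {"ATP": 1.0, "propionyl-CoA": 0.29, "bicarbonate": 3.0}

for substrate, km_val in km.items():
    f = fraction_of_vmax(conc[substrate], km_val)
    print(f"{substrate}: {f:.0%} of Vmax")  # a substrate held at [S] = Km runs at 50% of Vmax
```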
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Protein-disulfide reductase (glutathione)** Protein-disulfide reductase (glutathione): In enzymology, a protein-disulfide reductase (glutathione) (EC 1.8.4.2) is an enzyme that catalyzes the chemical reaction 2 glutathione + protein-disulfide ⇌ glutathione disulfide + protein-dithiolThus, the two substrates of this enzyme are glutathione and protein disulfide, whereas its two products are glutathione disulfide and protein dithiol. Protein-disulfide reductase (glutathione): This enzyme belongs to the family of oxidoreductases, specifically those acting on a sulfur group of donors with a disulfide as acceptor. The systematic name of this enzyme class is glutathione:protein-disulfide oxidoreductase. Other names in common use include glutathione-insulin transhydrogenase, insulin reductase, reductase, protein disulfide (glutathione), protein disulfide transhydrogenase, glutathione-protein disulfide oxidoreductase, protein disulfide reductase (glutathione), GSH-insulin transhydrogenase, protein-disulfide interchange enzyme, protein-disulfide isomerase/oxidoreductase, thiol:protein-disulfide oxidoreductase, and thiol-protein disulphide oxidoreductase. This enzyme participates in glutathione metabolism. Structural studies: As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2IJY.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Additive disequilibrium and z statistic** Additive disequilibrium and z statistic: Additive disequilibrium (D) is a statistic that estimates the difference between observed genotypic frequencies and the genotypic frequencies that would be expected under Hardy–Weinberg equilibrium. At a biallelic locus with alleles 1 and 2, the additive disequilibrium is defined by the equations f11 = p1² + D, f12 = 2p1(1 − p1) − 2D, f22 = (1 − p1)² + D, where fij is the frequency of genotype ij in the population, p1 is the frequency of allele 1 in the population, and D is the additive disequilibrium coefficient. Having a value of D > 0 indicates an excess of homozygotes/deficiency of heterozygotes in the population, whereas D < 0 indicates an excess of heterozygotes/deficiency of homozygotes. When D = 0, the genotypes are considered to be in Hardy–Weinberg equilibrium. In practice, the estimated additive disequilibrium from a sample, D̂, will rarely be exactly 0, but it may be small enough to conclude that it is not significantly different from 0. Finding the value of the additive disequilibrium coefficient provides an alternative assessment in accepting or rejecting Hardy–Weinberg equilibrium in a set of genotypic frequencies. Because the genotype and allele frequencies must be positive numbers in the interval (0,1), there exists a constraint on the range of possible values for D, which is as follows: max{−p1², −p2²} ≤ D ≤ p1(1 − p1), where p2 = 1 − p1. To estimate D from a sample, use the formula: D̂ = n11/n − ((2n11 + n12)/(2n))², where n11 (n12) is the number of individuals in the sample with that particular genotype and n is the total number of individuals in the sample. Note that n11/n and p̂1 = (2n11 + n12)/(2n) are sample estimates of the population genotype and allele frequencies. Additive disequilibrium and z statistic: The approximate sampling variance of D̂ (given by var(D̂)) is: var(D̂) = p̂1²(1 − p̂1)²/n. From this an estimated 95% confidence interval can be calculated, which is D̂ ± 1.96·√var(D̂). Note: √var(D̂) is the estimated standard deviation of D̂. Additive disequilibrium and z statistic: If the confidence interval for D̂ does not include zero, we can reject the null hypothesis of Hardy–Weinberg equilibrium. Similarly, we can also test for Hardy–Weinberg equilibrium using the z-statistic, which uses information from the estimate of additive disequilibrium to determine significance. When using the z-statistic, however, the goal is to transform the statistic in a way such that asymptotically, it has a standard normal distribution. To do this, divide D̂ by its standard deviation, which gives the simplified equation: z = D̂·√n / (p̂1(1 − p̂1)). When z is large, D̂ and thus the departure from Hardy–Weinberg equilibrium are also large. If the value of z is sufficiently large, it is unlikely that the deviations would occur by chance and thus the hypothesis of Hardy–Weinberg equilibrium can be rejected. To determine if z is significantly larger or smaller than expected under Hardy–Weinberg equilibrium, find the probability of observing a value as or more extreme than the observed z under the null hypothesis. The tail probability is normally used, P(y > z), where y is a standard normal random variable. When z is positive, the tail probability is 1 − P(y ≤ z). Because normal distributions are symmetric, the upper and lower tail probabilities will be equal, and thus you can find the upper probability and multiply by 2 to find the combined tail probabilities.
Additive disequilibrium and z statistic: If z is negative, find the negative tail probability, P(y ≤ z), and multiply by 2 to find the combined probability in both upper and lower tails. Additive disequilibrium and z statistic: The probability values calculated from these equations can be analyzed by comparison to a pre-specified value of α. When the observed probability p ≤ α, we can reject the null hypothesis of Hardy–Weinberg equilibrium. If p > α, we fail to reject the null hypothesis. Commonly used values of α are 0.05, 0.01, and 0.001. At a significance level of α = 0.05, we can reject the hypothesis of Hardy–Weinberg equilibrium if the absolute value of z is greater than or equal to the critical value 1.96 for the two-sided test.
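Putting the estimator, sampling variance, and z statistic above together, the following minimal sketch runs the test from raw genotype counts (the counts themselves are hypothetical):

```python
import math
from statistics import NormalDist

def hwe_z_test(n11: int, n12: int, n22: int):
    """Additive-disequilibrium z test for Hardy-Weinberg equilibrium at a biallelic locus."""
    n = n11 + n12 + n22
    p1 = (2 * n11 + n12) / (2 * n)            # estimated frequency of allele 1
    f11 = n11 / n                             # observed frequency of genotype 11
    d_hat = f11 - p1 ** 2                     # estimated additive disequilibrium
    var_d = (p1 ** 2) * (1 - p1) ** 2 / n     # approximate sampling variance
    z = d_hat / math.sqrt(var_d)
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # combined two-tailed probability
    return d_hat, z, p_value

# Hypothetical sample of 100 individuals: 30 of genotype 11, 40 heterozygotes, 30 of genotype 22
d_hat, z, p = hwe_z_test(30, 40, 30)
print(f"D_hat = {d_hat:.4f}, z = {z:.2f}, p = {p:.3f}")  # here p <= 0.05, so HWE is rejected at alpha = 0.05
```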
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Elevator paradox** Elevator paradox: The elevator paradox is a paradox first noted by Marvin Stern and George Gamow, physicists who had offices on different floors of a multi-story building. Gamow, who had an office near the bottom of the building, noticed that the first elevator to stop at his floor was most often going down, while Stern, who had an office near the top, noticed that the first elevator to stop at his floor was most often going up. This creates the false impression that elevator cars are more likely to be going in one direction than the other depending on which floor the observer is on. Modeling the elevator problem: Several attempts (beginning with Gamow and Stern) were made to analyze the reason for this phenomenon: the basic analysis is simple, while detailed analysis is more difficult than it would at first appear. Modeling the elevator problem: Simply, if one is on the top floor of a building, all elevators will come from below (none can come from above), and then depart going down, while if one is on the second from top floor, an elevator going to the top floor will pass first on the way up, and then shortly afterward on the way down – thus, while an equal number will pass going up as going down, downwards elevators will generally shortly follow upwards elevators (unless the elevator idles on the top floor), and thus the first elevator observed will usually be going up. The first elevator observed will be going down only if one begins observing in the short interval after an elevator has passed going up, while the rest of the time the first elevator observed will be going up. Modeling the elevator problem: In more detail, the explanation is as follows: a single elevator spends most of its time in the larger section of the building, and thus is more likely to approach from that direction when the prospective elevator user arrives. An observer who remains by the elevator doors for hours or days, observing every elevator arrival, rather than only observing the first elevator to arrive, would note an equal number of elevators traveling in each direction. This then becomes a sampling problem — the observer is stochastically sampling a non-uniform interval. To help visualize this, consider a thirty-story building, plus lobby, with only one slow elevator. The elevator is so slow because it stops at every floor on the way up, and then on every floor on the way down. It takes a minute to travel between floors and wait for passengers. The resulting arrival schedule forms a triangle wave: the elevator climbs floor by floor to the top and then descends floor by floor, over and over. If you were on the first floor and walked up to the elevator at a random time, chances are the next elevator would be heading down. The next elevator would be heading up only during the first two minutes of each hour, e.g., at 9:00 and 9:01. The number of elevator stops going upwards and downwards is the same, but the probability that the next elevator is going up is only 2 in 60. Modeling the elevator problem: A similar effect can be observed in railway stations where a station near the end of the line will likely have the next train headed for the end of the line.
More than one elevator: If there is more than one elevator in a building, the bias decreases — since there is a greater chance that the intending passenger will arrive at the elevator lobby during the time that at least one elevator is below them; with an infinite number of elevators, the probabilities would be equal.In the example above, if there are 30 floors and 58 elevators, so at every minute there are 2 elevators on each floor, one going up and one going down (save at the top and bottom), the bias is eliminated – every minute, one elevator arrives going up and another going down. This also occurs with 30 elevators spaced 2 minutes apart – on odd floors they alternate up/down arrivals, while on even floors they arrive simultaneously every two minutes. The real-world case: In a real building, there are complicated factors such as: the tendency of elevators to be frequently required on the ground or first floor, and to return there when idle; lopsided demand where everyone wants to go down at the end of the day; people on the lower floors being more willing to take the stairs; or the way full elevators ignore external floor-level calls. These factors tend to shift the frequency of observed arrivals, but do not eliminate the paradox entirely. In particular, a user very near the top floor will perceive the paradox even more strongly, as elevators are infrequently present or required above their floor.
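The triangle-wave argument above is easy to verify with a small Monte Carlo experiment: with a single elevator shuttling between floor 1 and floor 30 at one minute per floor, an observer just above the bottom (floor 2 here) who arrives at a random moment sees the next elevator heading up only about 2 minutes out of every 58-minute cycle, in line with the passage's rough "2 in 60". A minimal sketch under those simplified assumptions:

```python
import random

FLOORS = 30                     # the elevator shuttles between floor 1 and floor 30
PERIOD = 2 * (FLOORS - 1)       # 58-minute round trip at one minute per floor

def next_direction(observer_floor: int, arrival: float) -> str:
    """Direction of the next elevator pass at an intermediate observer_floor after `arrival` minutes."""
    up_pass = observer_floor - 1               # minute (mod PERIOD) at which the elevator passes going up
    down_pass = PERIOD - (observer_floor - 1)  # minute (mod PERIOD) at which it passes going down
    t = arrival % PERIOD
    wait_up = (up_pass - t) % PERIOD
    wait_down = (down_pass - t) % PERIOD
    return "up" if wait_up < wait_down else "down"

random.seed(0)
trials = 100_000
ups = sum(next_direction(2, random.uniform(0, PERIOD)) == "up" for _ in range(trials))
print(ups / trials)             # ~0.034, i.e. about 2 minutes in every 58
```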
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Comic timing** Comic timing: Comic timing or comedic timing emerges from a performer's joke delivery: they interact with an audience—through intonation, rhythm, cadence, tempo, and pausing—to guide the audience's laughter, which then guides the comedic narrative. The pacing of the delivery of a joke can have a strong impact on its comedic effect, even altering its meaning; the same can also be true of more physical comedy such as slapstick. Comic timing is also crucial for comedic video editing to maximize the impact of a joke, for example, through a smash cut. History: The use of comic timing can be first observed in the comic plays of the ancient Greeks. Specifically, Aristophanes indicated brief pauses in his works, such as The Clouds, in order to elicit laughter from the unfolding events. William Shakespeare, along with comic playwrights before him, also utilized comic timing in many of his plays. For example, Cleopatra's strategic interjections during Mark Antony's speech in Act 1 Scene 2 of Antony and Cleopatra shift an otherwise serious scene to a comic one. George Bernard Shaw notably continued the usage of comic timing into the late 19th century. In his 1894 play Arms and the Man, for instance, Shaw triggers laughter near the end of Act 2 through Nicola's calculated eruptions of composure. While the use of comic timing continued to flourish on stage, by the mid-20th century comic timing became integral to comedy film, television and stand-up comedy. In movies, comedians such as Charlie Chaplin, Laurel and Hardy and Buster Keaton perfected their comedic performances through precise timing in films like One A.M., The Lucky Dog, and The Playhouse respectively. In television, Lucille Ball notably utilized comic timing in her show I Love Lucy. For example, in the episode "Lucy Does a TV Commercial" Ball acts out an advertisement within a fake television set, but ruins the illusion by a comically timed break of the TV's fourth wall. In stand-up, George Carlin's routine "Seven Words You Can't Say on Television" gets a laugh from the timing difference between the delivery of the first six words and the seventh. Additionally, Rowan Atkinson's routine "No One Called Jones" utilized slow comic timing in his list of students' names to reveal multiple double entendres. History: While the above history highlights specific writers and performers, all workers in comedy, from Victor Borge to Sacha Baron Cohen and beyond, have utilized comic timing to deliver their humour most effectively. Beat: A beat is a pause taken for the purposes of comic timing, often to allow the audience time to recognize the joke and react, or to heighten the suspense before delivery of the expected punch line. Pauses—sometimes called "dramatic pauses"—in this context can be used to distinguish subtext or even unconscious content—that is, what the speaker is really thinking about. A pause can also be used to heighten a switch in direction. As a speaker talks, the audience naturally "fills in the blanks", finishing the expected end of the thought. The pause allows this to happen before the comedian delivers a different outcome, thus surprising the listener and (hopefully) evoking laughter. Pregnant pause: A pregnant pause (as in the classical definition, "many possibilities") is a technique of comic timing used to accentuate a comedy element, which uses comic pauses at the end of a phrase to build up suspense.
It is often used at the end of a comically awkward statement or in the silence after a seemingly non-comic phrase to build up a comeback. Refined by Jack Benny, who introduced specific body language and a phrase in his pregnant pauses, the pregnant pause has become a staple of stand-up comedy.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gas-rich meteorites** Gas-rich meteorites: Gas-rich meteorites are meteorites with high levels of primordial gases, such as helium, neon, argon, krypton, xenon and sometimes other elements. Though these gases are present "in virtually all meteorites," the Fayetteville meteorite has ~2,000,000 x10−8 ccSTP/g helium, or ~2% helium by volume equivalent. In comparison, the background level is a few ppm. The identification of gas-rich meteorites is based on the presence of light noble gases in large amounts, at levels which cannot be explained without involving an additional component over and above the well-known noble gas components that are present in all meteorites. History: William Ramsay was the first to report helium in an iron meteorite, in 1895, not long after helium's first detection in an Earth sample rather than via solar observation. The use of decay products to date meteorites was suggested by Bauer in 1947, and explicitly published by Gerling and Pavlova in 1951. However, this soon resulted in wildly varying ages; it was realized that excess helium (including helium-3, rare on Earth) was generated by radiation, too. The first explicit publication of a gas-rich meteorite was Staroe Pesyanoe (often shortened to Pesyanoe), by Gerling and Levskii in 1956. Like the later Fayetteville, Pesyanoe has a helium level of ~1 million x10−8 ccSTP/g. Reynolds' publication of a "general Xe anomaly", including 129I decay products and more, touched off the subfield of xenology, which continues to today. The first publication of presolar grains in the 1980s was precipitated by workers searching for the carriers of anomalous noble gases; the grains were found by tracing their gas contents, not merely checked for gases after the fact. Lines of inquiry: As unreactive components, these gases are tracers of processes throughout and predating the Solar System: Material age can be determined by relative exposure to direct solar and cosmic radiation (by cosmic ray tracks), and indirect creation of resultant nuclides. This includes Ar-Ar dating, I-Xe dating, and the decay of U to its various products, including helium. The parent body of a meteorite can be traced in part via comparison of trace elements. That meteorites are fragments of asteroids, and conditions on such asteroids, were partially deduced from gas evidence. This includes meteorite pairing, the re-association of meteorites which had split before recovery. Meteorite, parent, and Solar System histories are indicated by tracer elements, including thermometry, a record of material temperature. Lines of inquiry: Presolar activity. A supernova thought to have preceded the Solar System. The history of the Sun. This record extends to billion-year timescales, back to "very early in the life of the Sun". The history of cosmic ray fluence. Meteorites do not show significant variation of cosmic rays over time. The Lost City Meteor was tracked, allowing an orbit determination back to the asteroid belt. Measurement of relatively short-half-life isotopes in the subsequent Lost City Meteorite then indicates radiation levels in that region of the Solar System. Gas study: The field of meteoritic gases follows progress in analytical methods. The first analyses were basic laboratory chemistry, such as acid dissolution. Various acids were necessary, due to mixtures of various soluble and insoluble minerals. Stepped etching gave higher levels of resolution and discrimination. Pyrolysis was used, such as on highly acid-resistant minerals.
These two methods were alternately lauded and derided as "burning the haystack to find the needle." Meteoritical studies have tracked the progress of mass spectrometry, a continual and rapid progression comparable to or greater than Moore's Law. More recently, laser extraction has come into use. Meteorites: Interplanetary dust, like carbonaceous chondrites and enstatites, contains hosts for these gases and often measurable gas contents. So too do a fraction of micrometeorites. Gas: Gas components were first named by descriptors, then letter codes; the letter taxonomy "has become increasingly complicated and confusing with time." By element and isotope: primordial/trapped: 36Ar, 132Xe; solar wind/solar flare: 4He, 20Ne, 36Ar; cosmic ray/spallogenic: 3He, 83Kr, 126Xe; radiogenic/fissile: 3He, 36Ar, 40Ar, 129Xe, 132Xe, 134Xe, 136Xe, 128Xe. By component: Planetary: "Planetary" gases (P, Q, P1) are depleted in light elements (He, Ne) compared to solar abundances (see below), or conversely, enriched in Kr, Xe. This name originally implied an origin, the gas blend observed in terrestrial planets. Scientists wished to stop implying this, but the habit was retained. Solar, subsolar: This gas component corresponds to the solar wind. Solar flare gas can be distinguished by its greater depth, and a slightly variant composition. "Subsolar" is intermediary between solar and planetary. E: "Exotic" neon, with aberrant 20Ne/22Ne values. H: "Heavy" isotopes of xenon, primarily r-process isotopes, plus p-process. Thus, sometimes seen as "HL," anomalous heavy and light isotopes. Gas: G: "Giant", after the asymptotic giant branch (while A and B had been taken); contains their s-process isotopes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**World English Bible** World English Bible: The World English Bible (WEB) is an English translation of the Bible freely shared online. The translation work began in 1994 and was deemed complete in 2020. Created by volunteers with oversight by Michael Paul Johnson, the WEB is an updated revision of the American Standard Version from 1901. The WEB has two main versions of the Old Testament: one with Deuterocanonical books and the other limited to Protocanonical books. The New Testament is the same in both versions. History: In 1994, Michael Paul Johnson felt commissioned by God "to create a new modern English translation of the Holy Bible that would be forever free to use, publish, and distribute." Since he did not have formal training in this regard, he started to study Greek and Hebrew and how to use scholarly works. His first translated books were the gospel and letters of John, which he shared drafts of on Usenet and a mailing list, receiving some suggestions and incorporating them. Estimating he would be 150 years old by the time this style of work would be finished, Johnson prayed for guidance. The answer was to use the American Standard Version (ASV) of 1901 because it is regarded as an accurate and reliable translation that is fully in the public domain. Johnson's main goal became modernizing the language of the ASV, and he made custom computer programs to organize the process. This resulted in an initial draft in 1997 that "was not quite modern English, in that it still lacked quotation marks and still had some word ordering that sounded more like Elizabethan English or maybe Yoda than modern English." This draft was soon named World English Bible (WEB), since Johnson intended it for any English speaker, and the acronym indicates that the Web is the means of distribution. History: Over the years, a number of volunteers assisted Johnson. The entire translation effort was deemed complete in 2020, and the only subsequent changes have been fixing a few typos. Features: The translation philosophy of the WEB is to be mostly formally equivalent, like the American Standard Version it is based on, but with modernized English. The WEB also follows the ASV's decision to transliterate the Tetragrammaton, but uses "Yahweh" instead of "Jehovah" throughout the Old Testament. The British and Messianic editions of the WEB, as well as the New Testament and Deuterocanonical books, use more traditional forms (e.g., the LORD). Features: As noted above, the WEB has two main versions of the Old Testament: one limited to the Protocanonical books, while the other also includes the Deuterocanon (a.k.a. the Apocrypha). The New Testament is the same for both versions. There is a modest number of footnotes for cross-references and brief translation notes. Licensing: All of the text of the World English Bible is dedicated to the public domain. The ebible.org project maintains a trademark on the phrase "World English Bible" and forbids any derivative work that substantially alters the text from using the name "World English Bible" to describe it. The reasons given were that they felt copyright was an ineffective way of protecting the text's integrity and that Creative Commons licenses did not exist at the time the project began.
Critical reception: The Provident Planning web site uses the World English Bible because it is free of copyright restrictions and because the author considers it to be a good translation.The Bible Megasite review of the World English Bible says it is a good revision of the American Standard Version of 1901 (ASV) into contemporary English, which also corrects some textual issues with the ASV.The World English Bible is widely published in digital formats by a variety of publishers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aluminium magnesium boride** Aluminium magnesium boride: Aluminium magnesium boride or Al3Mg3B56, colloquially known as BAM, is a chemical compound of aluminium, magnesium and boron. Whereas its nominal formula is AlMgB14, the chemical composition is closer to Al0.75Mg0.75B14. It is a ceramic alloy that is highly resistant to wear and has an extremely low coefficient of sliding friction, reaching a record value of 0.04 in unlubricated and 0.02 in lubricated AlMgB14−TiB2 composites. First reported in 1970, BAM has an orthorhombic structure with four icosahedral B12 units per unit cell. This ultrahard material has a coefficient of thermal expansion comparable to that of other widely used materials such as steel and concrete. Synthesis: BAM powders are commercially produced by heating a nearly stoichiometric mixture of elemental boron (low grade because it contains magnesium) and aluminium for a few hours at a temperature in the range 900 °C to 1500 °C. Spurious phases are then dissolved in hot hydrochloric acid. To ease the reaction and make the product more homogeneous, the starting mixture can be processed in a high-energy ball mill. All pretreatments are carried out in a dry, inert atmosphere to avoid oxidation of the metal powders. BAM films can be coated on silicon or metals by pulsed laser deposition, using AlMgB14 powder as a target, whereas bulk samples are obtained by sintering the powder. BAM usually contains small amounts of impurity elements (e.g., oxygen and iron) that enter the material during preparation. It is thought that the presence of iron (most often introduced as wear debris from mill vials and media) serves as a sintering aid. BAM can be alloyed with silicon, phosphorus, carbon, titanium diboride (TiB2), aluminium nitride (AlN), titanium carbide (TiC) or boron nitride (BN). Properties: BAM has the lowest known unlubricated coefficient of friction (0.04), possibly due to self-lubrication. Properties: Structure Most superhard materials have simple, high-symmetry crystal structures, e.g., diamond cubic or zinc blende. However, BAM has a complex, low-symmetry crystal structure with 64 atoms per unit cell. The unit cell is orthorhombic and its most salient feature is four boron-containing icosahedra. Each icosahedron contains 12 boron atoms. Eight more boron atoms connect the icosahedra to the other elements in the unit cell. The occupancy of metal sites in the lattice is lower than one, and thus, while the material is usually identified with the formula AlMgB14, its chemical composition is closer to Al0.75Mg0.75B14. Such non-stoichiometry is common for borides (see crystal structure of boron-rich metal borides and boron carbide). The unit cell parameters of BAM are a = 1.0313 nm, b = 0.8115 nm, c = 0.5848 nm, Z = 4 (four structure units per unit cell), space group Imma, Pearson symbol oI68, density 2.59 g/cm3. The melting point is roughly estimated as 2000 °C. Properties: Optoelectronic BAM has a bandgap of about 1.5 eV. Significant absorption is observed at sub-bandgap energies and attributed to metal atoms. Electrical resistivity depends on the sample purity and is about 10⁴ Ohm·cm. The Seebeck coefficient is relatively high, between −5.4 and −8.0 mV/K. This property originates from electron transfer from metal atoms to the boron icosahedra and is favorable for thermoelectric applications. Properties: Hardness & Fracture toughness The microhardness of BAM powders is 32–35 GPa.
It can be increased to 45 GPa by alloying with boron-rich titanium boride. Fracture toughness can be increased with TiB2 or by depositing a quasi-amorphous BAM film. Addition of AlN or TiC to BAM reduces its hardness. By definition, a hardness value exceeding 40 GPa makes BAM a superhard material. In the BAM−TiB2 composite, the maximum hardness and toughness are achieved at ~60 vol.% of TiB2. The wear rate is improved by increasing the TiB2 content to 70–80% at the expense of ~10% hardness loss. The TiB2 additive is a wear-resistant material itself with a hardness of 28–35 GPa. Properties: Thermal expansion The thermal expansion coefficient (TEC, also known as the coefficient of thermal expansion, COTE) for AlMgB14 was measured as 9×10⁻⁶ K⁻¹ by dilatometry and by high-temperature X-ray diffraction using synchrotron radiation. This value is fairly close to the COTE of widely used materials such as steel, titanium and concrete. Because AlMgB14 is used as a wear-resistant coating, its COTE is relevant to choosing coating application methods and to predicting the performance of coated parts once in service. Properties: Friction A composite of BAM and TiB2 (70 volume percent of TiB2) has one of the lowest known friction coefficients, 0.04–0.05 in dry scratching by a diamond tip (cf. 0.04 for Teflon), decreasing to 0.02 in water-glycol-based lubricants. Applications: BAM is commercially available and is being studied for potential applications. For example, pistons, seals and blades on pumps could be coated with BAM or BAM + TiB2 to reduce friction between parts and to increase wear resistance. The reduction in friction would reduce energy use. BAM could also be coated onto cutting tools. The reduced friction would lessen the force necessary to cut an object, extend tool life, and possibly allow increased cutting speeds. Coatings only 2–3 micrometers thick have been found to improve efficiency and reduce wear in cutting tools.
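The quoted cell parameters and composition allow a quick consistency check of the reported density. The minimal Python sketch below (not from the source; atomic masses are standard reference values) computes the theoretical density for both the nominal AlMgB14 and the metal-deficient Al0.75Mg0.75B14 compositions; only the latter reproduces the reported 2.59 g/cm3 closely, which is one reason the non-stoichiometric formula is preferred.

```python
# Consistency check of BAM density from its unit-cell data (illustrative sketch).
AVOGADRO = 6.02214076e23                          # atoms per mole
MASS = {"Al": 26.982, "Mg": 24.305, "B": 10.811}  # standard atomic masses, g/mol

def density(a_nm, b_nm, c_nm, z, composition):
    """Theoretical density in g/cm^3 for an orthorhombic cell with z formula units."""
    volume_cm3 = (a_nm * 1e-7) * (b_nm * 1e-7) * (c_nm * 1e-7)   # nm -> cm
    formula_mass = sum(MASS[el] * n for el, n in composition.items())
    return z * formula_mass / (AVOGADRO * volume_cm3)

# Nominal AlMgB14 vs. the metal-deficient composition reported for real samples.
nominal   = density(1.0313, 0.8115, 0.5848, 4, {"Al": 1.0,  "Mg": 1.0,  "B": 14})
deficient = density(1.0313, 0.8115, 0.5848, 4, {"Al": 0.75, "Mg": 0.75, "B": 14})
print(f"AlMgB14:         {nominal:.2f} g/cm^3")    # ~2.75
print(f"Al0.75Mg0.75B14: {deficient:.2f} g/cm^3")  # ~2.58, close to the reported 2.59
```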
**Object hyperlinking** Object hyperlinking: Object hyperlinking is a term that refers to extending the Internet to objects and locations in the real world. Object hyperlinking aims to extend the Internet to the physical world by attaching tags with URLs to tangible objects or locations. These object tags can then be read by a wireless mobile device and information about objects and locations retrieved and displayed. Object hyperlinking: However, object hyperlinking may also be useful in contexts other than the Internet (e.g., with data objects in database administration or in text content management). System components: Linking an object or a location to the Internet is a more involved process than linking two web pages. An object hyperlinking system requires seven components: A virtual or physical object tag to identify objects and locations. Some tagging systems are described below. To allow the object tags to be located they must be physically embedded in visual markers. For example, the Yellow arrow scheme (see below) prints SMS tags on large adhesive yellow arrows, which can then be stuck on buildings, etc. System components: A means of reading physical tags, or locating virtual tags. A mobile device such as a mobile telephone, a PDA or a portable computer. Additional software for the mobile device. A widely available wireless network, such as the existing 2G and 3G networks, for communication between the portable device and the server containing the information linked to the tagged object. Information on each linked object. This information could be in existing WWW pages, in existing databases of price information, etc., or be specially created. A display to view the information on the linked object. At the present time this is most likely to be the screen of a mobile telephone. Tags and tag-reading systems: There are a number of different competing tagging systems. RFID tags A radio frequency identification device (also known as an 'Arphid') is a small transponder which can be read at short range by a transceiver (reader). Since RFID tags can be very small, they are often embedded in a more visible marker to allow them to be located. An RFID reader can be added to an existing mobile telephone as a shell. Nokia produce such a shell for their 3220 mobile phone. More and more mobile phones have RFID/NFC capability, since such RFID/NFC-enabled mobiles may be used for cashless payments and other purposes. Tags and tag-reading systems: Since 2005, travelers in the city of Hanau, near Frankfurt, Germany, have been able to pay for bus tickets by passing their Nokia phones over a smartcard reader installed on the buses. Other applications for RFID-enabled mobiles include swapping electronic business cards between phones, and using a mobile to check in at an airport or hotel. Two RFID-enabled devices may also be used to enable peer-to-peer transfer of data such as music or images, or for synchronizing address books. Graphical tags A graphical tag consists of an image on a marker, which can be read by a mobile telephone camera. There are a number of competing systems, including open standards like Quick Response (QR) codes, Datamatrix, Semacodes (based on Datamatrix), and barcodes; or proprietary systems like ShotCodes. The design of such coding schemes needs to be rich enough to include much information and robust enough for the tag to be readable, even when partly obscured or damaged: tags might be on the outside of buildings and exposed to wear and the weather.
Tags and tag-reading systems: Graphical tags have a number of advantages. They are easy to understand and cheap to produce. They can also be printed on almost anything, including t-shirts. Barcodes are a particularly attractive form of tagging because they are already very widely used, and camera phones can easily read them. SMS tags An SMS tag comprises a short alphanumerical code, which can be printed on a marker or chalked on a wall. The Short Message Service is then used to send the code and return a message. Yellow arrows are an example of this form of tagging. Virtual tags In a virtual tagging system there is no physical tag at a location. Instead, a URL as a meta-object is associated with a set of geographical coordinates. Using location-based services, a mobile phone that enters a particular area can be used to retrieve all URLs associated with that area. The area can be set to a few metres or to a much wider region. Hardlink A hardlink is an alphanumeric combination, such as an object's common name or part number, that, when entered into a cell phone's web browser targeting a hardlink database, returns information that may have been stored about the target object. It is one of several methods of object hyperlinking including graphical tags (2D barcodes), SMS tags and RFID tags. The hardlink method establishes a reference link between a physical world object and a .mobi web page just as a traditional hyperlink establishes an electronic reference to information on a Web page. A common cell phone is the medium of this information exchange, which is initiated whenever a user makes a connection with a hardlink database, such as Objecs.mobi, and enters some alphanumeric sequence found on the target object. This alphanumeric sequence may be the object's part number or common name. This concept is also known as 'physical world connection', Object hyperlinking and Physical world hyperlink, or simply phylink; a number of companies are developing what are currently non-standardized methods of creating this connection. This topic is not to be confused with a hard link (two words), which is Unix terminology for a pointer to physical data on a storage volume. The hardlink method does not require a graphical object tag or any special software to be loaded on the user's cell phone, but it does require the phone to be internet-enabled. The consumer use of and market for object hyperlinking methods is very small and limited in the U.S., with a slightly larger audience of users in some eastern countries. Unlike in Japan, few US cell phone providers currently offer graphical tag readers or other support for object hyperlinking methods, and this will likely continue until a clear linking method becomes dominant. Applications: The object hyperlinking systems described above will make it possible to link comprehensive and editable information to any object or location. How this capability can best be used remains to be seen. What has emerged so far is a mixture of social and commercial applications. The publishers of the Lonely Planet guidebooks are issuing yellow arrows with one of their guidebooks and encouraging travellers to leave tags linking to stories and comments wherever they go. Siemens see their virtual tagging system being used to tag tourist sites, and also to leave messages for friends. They also suggest that virtual tags could be used to link advertisements with locations.
Nokia have demonstrated that when a 3220 phone with the RFID shell attached is tapped against an RFID-enabled advertisement, a URL can be read and information about the advertised product or service returned to the phone. Applications: Japanese consumers are able to read barcodes with their mobiles and download comparative prices from Amazon. Semapedia have created a system for linking physical objects and Wikipedia articles using the Semacode tagging scheme. Graphical tags can be created that link to the URLs of individual Wikipedia articles. These tags can then be attached to the physical objects mentioned in the Wikipedia articles. Reading a tag with a camera phone will then retrieve an article from Wikipedia and display it on the phone screen, creating a "Mobile Wikipedia". Applications: An alternative to using 2D barcodes is to apply computer vision techniques to identify more complex patterns and images. Companies like kooaba, Daem, or Neven Vision (acquired by Google in 2006) develop image recognition platforms to turn any image into an object hyperlink. Microsoft has developed a system for creating hyperlinks using image matching. Google was planning in 2018 to tag 100,000 businesses in the United States with QR codes.
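To make the virtual-tagging idea described above more concrete (a URL bound to geographic coordinates and retrieved when a device enters the tagged area), here is a minimal, hypothetical Python sketch. The tag entries, radii and distance calculation are assumptions made purely for illustration and do not describe any deployed system.

```python
import math

# Hypothetical virtual-tag store: each tag binds a URL to a point and a radius.
VIRTUAL_TAGS = [
    {"url": "https://en.wikipedia.org/wiki/Object_hyperlinking",
     "lat": 52.5200, "lon": 13.4050, "radius_m": 50},
    {"url": "https://example.org/cafe-menu",
     "lat": 52.5201, "lon": 13.4060, "radius_m": 10},
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def tags_near(lat, lon):
    """Return the URLs of all virtual tags whose area contains the device position."""
    return [t["url"] for t in VIRTUAL_TAGS
            if haversine_m(lat, lon, t["lat"], t["lon"]) <= t["radius_m"]]

# A phone reporting this position falls inside the first tag's 50 m area only.
print(tags_near(52.52005, 13.40510))
```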
**United States of America Mathematical Talent Search** United States of America Mathematical Talent Search: The United States of America Mathematical Talent Search (USAMTS) is a mathematics competition open to all United States students in or below high school. History: Professor George Berzsenyi initiated the contest in 1989 under the KöMaL model and under joint sponsorship of the Rose–Hulman Institute of Technology and the Consortium for Mathematics and its Applications. As of 2021, the USAMTS is sponsored by the National Security Agency and administered by the Art of Problem Solving foundation. There were 718 participants in the 2004–2005 school year, with an average score of 49.25 out of 100. Format: The competition is proof- and research-based. Students write up proofs within the round's timeframe (usually a month) and submit their solutions by mail or as a PDF file uploaded through the USAMTS website. During this time, students are free to use any mathematical resources that are available, so long as they do not receive help from another person. Carefully written justifications are required for each problem. Prior to the academic year 2010–2011, the competition consisted of four rounds of five problems each, covering all non-calculus topics. Students were given approximately one month to solve the questions. Each question was scored out of five points; thus, a perfect score was 100. In the academic year 2010–2011, the USAMTS briefly changed its format to two rounds of six problems each, with approximately six weeks allotted for each round. Format: The current format consists of three problem sets, each with five problems and each lasting about a month. Every question is still worth 5 points, making a perfect score 75. Scoring: Every problem on the USAMTS is graded on a scale of 0 to 5, where a 0 is an answer that is highly flawed or incomplete and a 5 is a rigorous and well-written proof. As a result, possible scores over the three rounds range from 0 to 75. The solutions are graded every year by a volunteer group of university students and other people with professional mathematical experience. In addition to their scores, students receive detailed feedback on how they could improve their solutions. Prizes: Prizes are given to all contestants who place within a certain range. These prizes include a shirt from AoPS, software, and one or two mathematical books of varying difficulty. Prizes are also awarded to students with outstanding solutions in individual rounds. Further, after the third round, given a high enough score, a student may qualify to take the AIME exam instead of qualifying through the AMC 10 or 12 competitions.
**Freethought** Freethought: Freethought (sometimes spelled free thought) is an epistemological viewpoint which holds that beliefs should not be formed on the basis of authority, tradition, revelation, or dogma, and that beliefs should instead be reached by other methods such as logic, reason, and empirical observation. According to the Oxford English Dictionary, a freethinker is "a person who forms their own ideas and opinions rather than accepting those of other people, especially in religious teaching." In some contemporary thought in particular, free thought is strongly tied to the rejection of traditional social or religious belief systems. The cognitive application of free thought is known as "freethinking", and practitioners of free thought are known as "freethinkers". Modern freethinkers consider free thought to be a natural freedom from all negative and illusive thoughts acquired from society. The term first came into use in the 17th century to refer to people who inquired into the basis of traditional beliefs which were often accepted unquestioningly. Today, freethinking is most closely linked with deism, secularism, humanism, anti-clericalism, and religious critique. The Oxford English Dictionary defines freethinking as "The free exercise of reason in matters of religious belief, unrestrained by deference to authority; the adoption of the principles of a free-thinker." Freethinkers hold that knowledge should be grounded in facts, scientific inquiry, and logic. The skeptical application of science implies freedom from the intellectually limiting effects of confirmation bias, cognitive bias, conventional wisdom, popular culture, urban myth, prejudice, or sectarianism. Definition: Atheist author Adam Lee defines free thought as thinking which is independent of revelation, tradition, established belief, and authority, and considers it as a "broader umbrella" than atheism "that embraces a rainbow of unorthodoxy, religious dissent, skepticism, and unconventional thinking." The basic summarizing statement of the essay The Ethics of Belief by the 19th-century British mathematician and philosopher William Kingdon Clifford is: "It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence." The essay became a rallying cry for freethinkers when published in the 1870s, and has been described as a point when freethinkers grabbed the moral high ground. Clifford was himself an organizer of free thought gatherings, the driving force behind the Congress of Liberal Thinkers held in 1878. Definition: Regarding religion, freethinkers typically hold that there is insufficient evidence to support the existence of supernatural phenomena. According to the Freedom from Religion Foundation, "No one can be a freethinker who demands conformity to a bible, creed, or messiah. To the freethinker, revelation and faith are invalid, and orthodoxy is no guarantee of truth." and "Freethinkers are convinced that religious claims have not withstood the tests of reason. Not only is there nothing to be gained by believing an untruth, but there is everything to lose when we sacrifice the indispensable tool of reason on the altar of superstition. Most freethinkers consider religion to be not only untrue, but harmful." However, philosopher Bertrand Russell wrote the following in his 1944 essay The Value of Free Thought: What makes a freethinker is not his beliefs but the way in which he holds them.
If he holds them because his elders told him they were true when he was young, or if he holds them because if he did not he would be unhappy, his thought is not free; but if he holds them because, after careful thought he finds a balance of evidence in their favour, then his thought is free, however odd his conclusions may seem. Definition: A freethinker, according to Russell, is not necessarily an atheist or an agnostic, as long as he or she satisfies this definition: The person who is free in any respect is free from something; what is the free thinker free from? To be worthy of the name, he must be free of two things: the force of tradition, and the tyranny of his own passions. No one is completely free from either, but in the measure of a man's emancipation he deserves to be called a free thinker. Definition: Fred Edwords, former executive of the American Humanist Association, suggests that by Russell's definition, liberal religionists who have challenged established orthodoxies can be considered freethinkers. On the other hand, according to Bertrand Russell, atheists and/or agnostics are not necessarily freethinkers. As an example, he mentions Stalin, whom he compares to a "pope": what I am concerned with is the doctrine of the modern Communistic Party, and of the Russian Government to which it owes allegiance. According to this doctrine, the world develops on the lines of a Plan called Dialectical Materialism, first discovered by Karl Marx, embodied in the practice of a great state by Lenin, and now expounded from day to day by a Church of which Stalin is the Pope. […] Free discussion is to be prevented wherever the power to do so exists; […] If this doctrine and this organization prevail, free inquiry will become as impossible as it was in the middle ages, and the world will relapse into bigotry and obscurantism. Definition: In the 18th and 19th centuries, many thinkers regarded as freethinkers were deists, arguing that the nature of God can only be known from a study of nature rather than from religious revelation. In the 18th century, "deism" was as much of a 'dirty word' as "atheism", and deists were often stigmatized as either atheists or at least as freethinkers by their Christian opponents. Deists today regard themselves as freethinkers, but are now arguably less prominent in the free thought movement than atheists.
The reasoning behind the pansy as the symbol of free thought lies both in the flower's name and in its appearance. The pansy derives its name from the French word pensée, which means "thought". It allegedly received this name because the flower is perceived by some to bear resemblance to a human face, and in mid-to-late summer it nods forward as if deep in thought. In the 1880s, following examples set by freethinkers in France, Belgium, Spain and Sweden, it was proposed in the United States as "the symbol of religious liberty and freedom of conscience". History: Pre-modern movement Critical thought has flourished in the Hellenistic Mediterranean, in the repositories of knowledge and wisdom in Ireland and in the Iranian civilizations (for example in the era of Khayyam (1048–1131) and his unorthodox Sufi Rubaiyat poems). Later societies made advances in freedom of thought such as the Chinese (note for example the seafaring renaissance of the Southern Song dynasty of 1127–1279), on through heretical thinkers on esoteric alchemy or astrology, to the Renaissance and the Protestant Reformation pioneered by Martin Luther. French physician and writer Rabelais celebrated "rabelaisian" freedom as well as good feasting and drinking (an expression and a symbol of freedom of the mind) in defiance of the hypocrisies of conformist orthodoxy in his utopian Thelema Abbey (from θέλημα: free "will"), the device of which was Do What Thou Wilt: So had Gargantua established it. In all their rule and strictest tie of their order there was but this one clause to be observed, Do What Thou Wilt; because free people ... act virtuously and avoid vice. They call this honor. History: When Rabelais's hero Pantagruel journeys to the "Oracle of The Div(in)e Bottle", he learns the lesson of life in one simple word: "Trinch!", Drink! Enjoy the simple life, learn wisdom and knowledge, as a free human. Beyond puns, irony, and satire, Gargantua's prologue-metaphor instructs the reader to "break the bone and suck out the substance-full marrow" ("la substantifique moëlle"), the core of wisdom. History: Modern movements The year 1600 is considered a landmark in the era of modern free thought. It was the year of the execution in Italy of Giordano Bruno, a former Dominican friar, by the Inquisition. History: Australia Prior to World War II, Australia had high rates of Protestantism and Catholicism. Post-war Australia has become a highly secularised country. Donald Horne, one of Australia's well-known public intellectuals, believed rising prosperity in post-war Australia influenced the decline in church-going and general lack of interest in religion. "Churches no longer matter very much to most Australians. If there is a happy eternal life it's for everyone ... For many Australians the pleasures of this life are sufficiently satisfying that religion offers nothing of great appeal", said Horne in his landmark work The Lucky Country (1964). History: Belgium The Université Libre de Bruxelles and the Vrije Universiteit Brussel, along with the two Circles of Free Inquiry (Dutch and French speaking), defend the freedom of critical thought, lay philosophy and ethics, while rejecting the argument of authority. History: Canada In 1873 a handful of secularists founded the earliest known secular organization in English Canada, the Toronto Freethought Association.
Reorganized in 1877 and again in 1881, when it was renamed the Toronto Secular Society, the group formed the nucleus of the Canadian Secular Union, established in 1884 to bring together freethinkers from across the country. A significant number of the early members appear to have come from the educated labour "aristocracy", including Alfred F. Jury, J. Ick Evans and J. I. Livingstone, all of whom were leading labour activists and secularists. The second president of the Toronto association, T. Phillips Thompson, became a central figure in the city's labour and social-reform movements during the 1880s and 1890s and arguably Canada's foremost late nineteenth-century labour intellectual. By the early 1880s scattered free thought organizations operated throughout southern Ontario and parts of Quebec, eliciting both urban and rural support. History: The principal organ of the free thought movement in Canada was Secular Thought (Toronto, 1887–1911). Founded and edited during its first several years by English freethinker Charles Watts (1835–1906), it came under the editorship of Toronto printer and publisher James Spencer Ellis in 1891 when Watts returned to England. In 1968 the Humanist Association of Canada (HAC) formed to serve as an umbrella group for humanists, atheists, and freethinkers, and to champion social justice issues and oppose religious influence on public policy—most notably in the fight to make access to abortion free and legal in Canada. History: England The term freethinker emerged towards the end of the 17th century in England to describe those who stood in opposition to the institution of the Church, and the literal belief in the Bible. The beliefs of these individuals were centered on the concept that people could understand the world through consideration of nature. Such positions were formally documented for the first time in 1697 by William Molyneux in a widely publicized letter to John Locke, and more extensively in 1713, when Anthony Collins wrote his Discourse of Free-thinking, which gained substantial popularity. This essay attacks the clergy of all churches and is a plea for deism. History: The Freethinker magazine was first published in Britain in 1881; it continued in print until 2014, and still exists as a web-based publication. History: France In France, the concept first appeared in publication in 1765 when Denis Diderot, Jean le Rond d'Alembert, and Voltaire included an article on Liberté de penser in their Encyclopédie. The concept of free thought spread so widely that even places as remote as the Jotunheimen, in Norway, had well-known freethinkers such as Jo Gjende by the 19th century. François-Jean Lefebvre de la Barre (1745–1766) was a young French nobleman, famous for having been tortured and beheaded before his body was burnt on a pyre along with Voltaire's Philosophical Dictionary. La Barre is often said to have been executed for not saluting a Roman Catholic religious procession, but the elements of the case were far more complex. In France, Lefebvre de la Barre is widely regarded as a symbol of the victims of Christian religious intolerance; La Barre, along with Jean Calas and Pierre-Paul Sirven, was championed by Voltaire. A second replacement statue to de la Barre stands near the Basilica of the Sacred Heart of Jesus of Paris at the summit of the butte Montmartre (itself named from the Temple of Mars), the highest point in Paris, and an 18th arrondissement street near the Sacré-Cœur is also named after Lefebvre de la Barre.
History: The 19th century saw the emergence of a specific notion of Libre-Pensée ("free thought"), with writer Victor Hugo as one of its major early proponents. French Freethinkers (Libre-Penseurs) combine freedom of thought, political anti-clericalism and socialist leanings. The main organisation referring to this tradition to this day is the Fédération nationale de la libre pensée, created in 1890. History: Germany In Germany, during the period 1815–1848 and before the March Revolution, the resistance of citizens against the dogma of the church increased. In 1844, under the influence of Johannes Ronge and Robert Blum, belief in the rights of man, tolerance among men, and humanism grew, and by 1859 they had established the Bund Freireligiöser Gemeinden Deutschlands (literally Union of Free Religious Communities of Germany), an association of persons who consider themselves to be religious without adhering to any established and institutionalized church or sacerdotal cult. This union still exists today, and is included as a member in the umbrella organization of free humanists. In 1881 in Frankfurt am Main, Ludwig Büchner established the Deutscher Freidenkerbund (German Freethinkers League) as the first German organization for atheists and agnostics. In 1892 the Freidenker-Gesellschaft and in 1906 the Deutscher Monistenbund were formed. Free thought organizations developed the "Jugendweihe" (literally Youth consecration), a secular "confirmation" ceremony, and atheist funeral rites. The Union of Freethinkers for Cremation was founded in 1905, and the Central Union of German Proletariat Freethinker in 1908. The two groups merged in 1927, becoming the German Freethinking Association in 1930. More "bourgeois" organizations declined after World War I, and "proletarian" free thought groups proliferated, becoming an organization of socialist parties. European socialist free thought groups formed the International of Proletarian Freethinkers (IPF) in 1925. Activists agitated for Germans to disaffiliate from their respective Church and for secularization of elementary schools; between 1919–21 and 1930–32 more than 2.5 million Germans, for the most part supporters of the Social Democratic and Communist parties, gave up church membership. Conflict developed between radical forces including the Soviet League of the Militant Godless and Social Democratic forces in Western Europe led by Theodor Hartwig and Max Sievers. In 1930 the Soviet and allied delegations, following a walk-out, took over the IPF and excluded the former leaders. History: Following Hitler's rise to power in 1933, most free thought organizations were banned, though some right-wing groups that worked with so-called Völkische Bünde (literally "ethnic" associations with nationalist, xenophobic and very often racist ideology) were tolerated by the Nazis until the mid-1930s. History: Ireland In the 19th century, received opinion was scandalised by George Ensor (1769–1843). His Review of the Miracles, Prophecies, & Mysteries of the Old and New Testaments (1835) argued that, far from being a source of moral teaching, revealed religion and its divines regarded questions of morality as "incidental", a "mundane and merely philosophical" topic. History: Netherlands In the Netherlands, free thought has existed in organized form since the establishment of De Dageraad (now known as De Vrije Gedachte) in 1856. Among its most notable subscribing 19th century individuals were Johannes van Vloten, Multatuli, Adriaan Gerhard and Domela Nieuwenhuis.
In 2009, Frans van Dongen established the Atheist-Secular Party, which takes a considerably restrictive view of religion and public religious expressions. History: Since the 19th century, free thought in the Netherlands has become better known as a political phenomenon through at least three currents: liberal freethinking, conservative freethinking, and classical freethinking. In other words, parties which identify as freethinking tend to favor non-doctrinal, rational approaches to their preferred ideologies, and arose as secular alternatives to both clerically aligned and labor-aligned parties. Common themes among freethinking political parties are "freedom", "liberty", and "individualism". History: Switzerland With the introduction of cantonal church taxes in the 1870s, anti-clericals began to organise themselves. Around 1870, a "freethinkers club" was founded in Zürich. During the debate on the Zürich church law in 1883, professor Friedrich Salomon Vögelin and city council member Kunz proposed to separate church and state. History: Turkey In the last years of the Ottoman Empire, free thought made its voice heard through the works of distinguished people such as Ahmet Rıza, Tevfik Fikret, Abdullah Cevdet, Kılıçzade Hakkı, and Celal Nuri İleri. These intellectuals affected the early period of the Turkish Republic. Mustafa Kemal Atatürk – field marshal, revolutionary statesman, author, and founder of the secular Turkish nation state, serving as its first President from 1923 until his death in 1938 – was the practitioner of their ideas. He made many reforms that modernized the country. Sources point out that Atatürk was a religious skeptic and a freethinker. He was a non-doctrinaire deist or an atheist, who was antireligious and anti-Islamic in general. According to Atatürk, the Turkish people do not know what Islam really is and do not read the Quran. People are influenced by Arabic sentences that they do not understand, and because of their customs they go to mosques. When the Turks read the Quran and think about it, they will leave Islam. Atatürk described Islam as the religion of the Arabs in his own work, Vatandaş için Medeni Bilgiler, reflecting his critical and nationalist views. The Association of Atheism (Ateizm Derneği), the first official atheist organisation in the Middle East and the Caucasus, was founded in 2014. It serves to support irreligious people and freethinkers in Turkey who are discriminated against based on their views. In 2018 it was reported in some media outlets that the Ateizm Derneği would close down because of the pressure on its members and attacks by pro-government media, but the association itself issued a clarification that this was not the case and that it was still active. History: United States The Free Thought movement first organized itself in the United States as the "Free Press Association" in 1827 in defense of George Houston, publisher of The Correspondent, an early journal of Biblical criticism in an era when blasphemy convictions were still possible. Houston had helped found an Owenite community at Haverstraw, New York in 1826–27. The short-lived Correspondent was superseded by the Free Enquirer, the official organ of Robert Owen's New Harmony community in Indiana, edited by Robert Dale Owen and by Fanny Wright between 1828 and 1832 in New York. During this time Robert Dale Owen sought to introduce the philosophic skepticism of the Free Thought movement into the Workingmen's Party in New York City.
The Free Enquirer's annual civic celebrations of Paine's birthday after 1825 finally coalesced in 1836 in the first national Free Thinkers organization, the "United States Moral and Philosophical Society for the General Diffusion of Useful Knowledge". It was founded on August 1, 1836, at a national convention at the Lyceum in Saratoga Springs with Isaac S. Smith of Buffalo, New York, as president. Smith was also the 1836 Equal Rights Party's candidate for Governor of New York and had also been the Workingmen's Party candidate for Lt. Governor of New York in 1830. The Moral and Philosophical Society published The Beacon, edited by Gilbert Vale. History: Driven by the revolutions of 1848 in the German states, the 19th century saw an immigration of German freethinkers and anti-clericalists to the United States (see Forty-Eighters). In the United States, they hoped to be able to live by their principles, without interference from government and church authorities. Many Freethinkers settled in German immigrant strongholds, including St. Louis, Indianapolis, Wisconsin, and Texas, where they founded the town of Comfort, Texas, as well as others. These groups of German Freethinkers referred to their organizations as Freie Gemeinden, or "free congregations". The first Freie Gemeinde was established in St. Louis in 1850. Others followed in Pennsylvania, California, Washington, D.C., New York, Illinois, Wisconsin, Texas, and other states. Freethinkers tended to be liberal, espousing ideals such as racial, social, and sexual equality, and the abolition of slavery. The "Golden Age of Freethought" in the US came in the late 1800s. The dominant organization was the National Liberal League, which formed in 1876 in Philadelphia. This group re-formed itself in 1885 as the American Secular Union under the leadership of the eminent agnostic orator Robert G. Ingersoll. Following Ingersoll's death in 1899 the organization declined, in part due to lack of effective leadership. Free thought in the United States declined in the early twentieth century. By the early twentieth century, most free thought congregations had disbanded or joined other mainstream churches. The longest continuously operating free thought congregation in America is the Free Congregation of Sauk County, Wisconsin, which was founded in 1852 and is still active as of 2020. It affiliated with the American Unitarian Association (now the Unitarian Universalist Association) in 1955. D. M. Bennett was the founder and publisher of The Truth Seeker in 1873, a radical free thought and reform American periodical. History: German Freethinker settlements were located in: Burlington, Racine County, Wisconsin; Belleville, St. Clair County, Illinois; Castell, Llano County, Texas; Comfort, Kendall County, Texas; Davenport, Scott County, Iowa; Fond du Lac, Fond du Lac County, Wisconsin; Frelsburg, Colorado County, Texas; Hermann, Gasconade County, Missouri; Jefferson, Jefferson County, Wisconsin; Indianapolis, Indiana; Latium, Washington County, Texas; Manitowoc, Manitowoc County, Wisconsin; Meyersville, DeWitt County, Texas; Milwaukee, Wisconsin; Millheim, Austin County, Texas; Oshkosh, Winnebago County, Wisconsin; Ratcliffe, DeWitt County, Texas; Sauk City, Sauk County, Wisconsin; Shelby, Austin County, Texas; Sisterdale, Kendall County, Texas; St.
Louis, Missouri; Tusculum, Kendall County, Texas; Two Rivers, Manitowoc County, Wisconsin; and Watertown, Dodge County, Wisconsin. Anarchism: United States tradition Free thought influenced the development of anarchism in the United States of America. In the U.S., "free thought was a basically anti-Christian, anti-clerical movement, whose purpose was to make the individual politically and spiritually free to decide for himself on religious matters. A number of contributors to Liberty were prominent figures in both free thought and anarchism. The American individualist anarchist George MacDonald (1857–1944) was a co-editor of Freethought and, for a time, The Truth Seeker. E.C. Walker was co-editor of the freethought/free love journal Lucifer, the Light-Bearer." "Many of the anarchists were ardent freethinkers; reprints from free thought papers such as Lucifer, the Light-Bearer, Freethought and The Truth Seeker appeared in Liberty...The church was viewed as a common ally of the state and as a repressive force in and of itself." Anarchism: European tradition In Europe, a similar development occurred in French and Spanish individualist anarchist circles: "Anticlericalism, just as in the rest of the libertarian movement, is another of the frequent elements which will gain relevance related to the measure in which the (French) Republic begins to have conflicts with the church...Anti-clerical discourse, frequently called for by the French individualist André Lorulot (1885–1963), will have its impacts in Estudios (a Spanish individualist anarchist publication). There will be an attack on institutionalized religion for the responsibility that it had in the past on negative developments, for its irrationality which makes it a counterpoint of philosophical and scientific progress. There will be a criticism of proselytism and ideological manipulation which happens on both believers and agnostics". These tendencies would continue in French individualist anarchism in the work and activism of Charles-Auguste Bontemps (1893–1981) and others. In the Spanish individualist anarchist magazines Ética and Iniciales "there is a strong interest in publishing scientific news, usually linked to a certain atheist and anti-theist obsession, philosophy which will also work for pointing out the incompatibility between science and religion, faith, and reason. In this way there will be a lot of talk on Darwin's theories or on the negation of the existence of the soul". History: In 1901 the Catalan anarchist and freethinker Francesc Ferrer i Guàrdia established "modern" or progressive schools in Barcelona in defiance of an educational system controlled by the Catholic Church. The schools had the stated goal to "educate the working class in a rational, secular and non-coercive setting". Fiercely anti-clerical, Ferrer believed in "freedom in education", education free from the authority of church and state. Ferrer's ideas, generally, formed the inspiration for a series of Modern Schools in the United States, Cuba, South America and London. The first of these started in New York City in 1911. Ferrer also inspired the Italian newspaper Università popolare, founded in 1901.
**Metal–organic framework** Metal–organic framework: Metal–organic frameworks (MOFs) are a class of compounds consisting of metal clusters (also known as SBUs) coordinated to organic ligands to form one-, two-, or three-dimensional structures. The organic ligands included are sometimes referred to as "struts" or "linkers", one example being 1,4-benzenedicarboxylic acid (BDC). Metal–organic framework: More formally, a metal–organic framework is an organic-inorganic porous extended structure. An extended structure is a structure whose sub-units occur in a constant ratio and are arranged in a repeating pattern. MOFs are a subclass of coordination networks, a coordination network being a coordination compound extending, through repeating coordination entities, in one dimension, but with cross-links between two or more individual chains, loops, or spiro-links, or a coordination compound extending through repeating coordination entities in two or three dimensions. Coordination networks, including MOFs, further belong to coordination polymers, a coordination polymer being a coordination compound with repeating coordination entities extending in one, two, or three dimensions. Most of the MOFs reported in the literature are crystalline compounds, but there are also amorphous MOFs and other disordered phases. In most cases for MOFs, the pores are stable during the elimination of the guest molecules (often solvents) and can be refilled with other compounds. Because of this property, MOFs are of interest for the storage of gases such as hydrogen and carbon dioxide. Other possible applications of MOFs are in gas purification, in gas separation, in water remediation, in catalysis, as conducting solids and as supercapacitors. The synthesis and properties of MOFs constitute the primary focus of the discipline called reticular chemistry (from Latin reticulum, "small net"). In contrast to MOFs, covalent organic frameworks (COFs) are made entirely from light elements (H, B, C, N, and O) with extended structures. Structure: MOFs are composed of two main components: an inorganic metal cluster (often referred to as a secondary-building unit or SBU) and an organic molecule called a linker. For this reason, the materials are often referred to as hybrid organic-inorganic materials. The organic units are typically mono-, di-, tri-, or tetravalent ligands. The choice of metal and linker dictates the structure and hence properties of the MOF. For example, the metal's coordination preference influences the size and shape of pores by dictating how many ligands can bind to the metal, and in which orientation. Structure: To describe and organize the structures of MOFs, a system of nomenclature has been developed. Subunits of a MOF, called secondary building units (SBUs), can be described by topologies common to several structures. Each topology, also called a net, is assigned a symbol, consisting of three lower-case letters in bold. MOF-5, for example, has a pcu net. Attached to the SBUs are bridging ligands. For MOFs, typical bridging ligands are di- and tricarboxylic acids. These ligands typically have rigid backbones. Examples are benzene-1,4-dicarboxylic acid (BDC or terephthalic acid), biphenyl-4,4′-dicarboxylic acid (BPDC), and the tricarboxylic acid trimesic acid. Synthesis: General synthesis The study of MOFs has roots in coordination chemistry and solid-state inorganic chemistry, but it developed into a new field. In addition, MOFs are constructed from bridging organic ligands that remain intact throughout the synthesis.
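As a rough illustration of the node-and-linker description and the pcu net assigned to MOF-5 above, here is a small, hypothetical Python sketch; the graph model and block size are illustrative assumptions, not taken from the source. It builds a periodic primitive-cubic network with SBUs as nodes and ditopic linkers as edges and checks that every node is six-connected, as expected for a pcu topology.

```python
import itertools

# Toy periodic pcu (primitive cubic) net on a 3x3x3 block with periodic
# boundary conditions: nodes stand for SBUs, edges for ditopic linkers such as BDC.
N = 3
nodes = list(itertools.product(range(N), repeat=3))        # SBU positions
edges = set()
for x, y, z in nodes:
    for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):   # three lattice directions
        neighbour = ((x + dx) % N, (y + dy) % N, (z + dz) % N)
        edges.add(frozenset([(x, y, z), neighbour]))

# In an ideal pcu net every SBU is connected to six neighbours through linkers.
coordination = {node: sum(node in edge for edge in edges) for node in nodes}
print(len(nodes), "SBUs,", len(edges), "linkers,",
      "coordination numbers:", set(coordination.values()))   # 27 SBUs, 81 linkers, {6}
```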
Zeolite synthesis often makes use of a "template". Templates are ions that influence the structure of the growing inorganic framework. Typical templating ions are quaternary ammonium cations, which are removed later. In MOFs, the framework is templated by the SBU (secondary building unit) and the organic ligands. A templating approach that is useful for MOFs intended for gas storage is the use of metal-binding solvents such as N,N-diethylformamide and water. In these cases, metal sites are exposed when the solvent is evacuated, allowing hydrogen to bind at these sites. Four developments were particularly important in advancing the chemistry of MOFs. (1) The geometric principle of construction where metal-containing units were kept in rigid shapes. Early MOFs contained single atoms linked to ditopic coordinating linkers. The approach not only led to the identification of a small number of preferred topologies that could be targeted in designed synthesis, but was also central to achieving permanent porosity. (2) The use of the isoreticular principle, in which the size and nature of a structure change without changing its topology, led to MOFs with ultrahigh porosity and unusually large pore openings. (3) Post-synthetic modification of MOFs increased their functionality by reacting organic units and metal-organic complexes with linkers. (4) Multifunctional MOFs incorporated multiple functionalities in a single framework. Synthesis: Since ligands in MOFs typically bind reversibly, the slow growth of crystals often allows defects to be redissolved, resulting in a material with millimeter-scale crystals and a near-equilibrium defect density. Solvothermal synthesis is useful for growing crystals suitable for structure determination, because crystals grow over the course of hours to days. However, the use of MOFs as storage materials for consumer products demands an immense scale-up of their synthesis. Scale-up of MOFs has not been widely studied, though several groups have demonstrated that microwaves can be used to nucleate MOF crystals rapidly from solution. This technique, termed "microwave-assisted solvothermal synthesis", is widely used in the zeolite literature, and produces micron-scale crystals in a matter of seconds to minutes, in yields similar to the slow growth methods. Synthesis: Some MOFs, such as the mesoporous MIL-100(Fe), can be obtained under mild conditions at room temperature and in green solvents (water, ethanol) through scalable synthesis methods. Synthesis: A solvent-free synthesis of a range of crystalline MOFs has been described. Usually the metal acetate and the organic proligand are mixed and ground up with a ball mill. Cu3(BTC)2 can be quickly synthesised in this way in quantitative yield. In the case of Cu3(BTC)2 the morphology of the solvent-free synthesised product was the same as the industrially made Basolite C300. It is thought that localised melting of the components due to the high collision energy in the ball mill may assist the reaction. The formation of acetic acid as a by-product in the reactions in the ball mill may also help the reaction by having a solvent effect in the ball mill. It has been shown that the addition of small quantities of ethanol for the mechanochemical synthesis of Cu3(BTC)2 significantly reduces the amounts of structural defects in the obtained material. A recent advancement in the solvent-free preparation of MOF films and composites is their synthesis by chemical vapor deposition.
This process, MOF-CVD, was first demonstrated for ZIF-8 and consists of two steps. In the first step, metal oxide precursor layers are deposited. In the second step, these precursor layers are exposed to sublimed ligand molecules that induce a phase transformation to the MOF crystal lattice. Formation of water during this reaction plays a crucial role in directing the transformation. This process was successfully scaled up to an integrated cleanroom process, conforming to industrial microfabrication standards. Numerous methods have been reported for the growth of MOFs as oriented thin films. However, these methods are suitable only for the synthesis of a small number of MOF topologies. One such example is vapor-assisted conversion (VAC), which can be used for the thin film synthesis of several UiO-type MOFs. Synthesis: High-throughput synthesis High-throughput (HT) methods are a part of combinatorial chemistry and a tool for increasing efficiency. There are two synthetic strategies within the HT-methods: In the combinatorial approach, all reactions take place in one vessel, which leads to product mixtures. In the parallel synthesis, the reactions take place in different vessels. Furthermore, a distinction is made between thin films and solvent-based methods. Solvothermal synthesis can be carried out conventionally in a teflon reactor in a convection oven or in glass reactors in a microwave oven (high-throughput microwave synthesis). The use of a microwave oven changes the reaction parameters, in some cases dramatically. Synthesis: In addition to solvothermal synthesis, there have been advances in using supercritical fluid as a solvent in a continuous flow reactor. Supercritical water was first used in 2012 to synthesize copper and nickel-based MOFs in just seconds. In 2020, supercritical carbon dioxide was used in a continuous flow reactor along the same time scale as the supercritical water-based method, but the lower critical point of carbon dioxide allowed for the synthesis of the zirconium-based MOF UiO-66. Synthesis: High-throughput solvothermal synthesis In high-throughput solvothermal synthesis, a solvothermal reactor with (e.g.) 24 cavities for teflon reactors is used. Such a reactor is sometimes referred to as a multiclav. The reactor block or reactor insert is made of stainless steel and contains 24 reaction chambers, which are arranged in four rows. With the miniaturized teflon reactors, volumes of up to 2 mL can be used. The reactor block is sealed in a stainless steel autoclave; for this purpose, the filled reactors are inserted into the bottom of the reactor, the teflon reactors are sealed with two teflon films and the reactor top side is put on. The autoclave is then closed in a hydraulic press. The sealed solvothermal reactor can then be subjected to a temperature-time program. The reusable teflon film serves to withstand the mechanical stress, while the disposable teflon film seals the reaction vessels. After the reaction, the products can be isolated and washed in parallel in a vacuum filter device. On the filter paper, the products are then present separately in a so-called sample library and can subsequently be characterized by automated X-ray powder diffraction. The information obtained is then used to plan further syntheses. Synthesis: Pseudomorphic replication Pseudomorphic mineral replacement events occur whenever a mineral phase comes into contact with a fluid with which it is out of equilibrium.
Re-equilibration will tend to take place to reduce the free energy and transform the initial phase into a more thermodynamically stable phase, involving dissolution and reprecipitation subprocesses. Inspired by such geological processes, MOF thin films can be grown through the combination of atomic layer deposition (ALD) of aluminum oxide onto a suitable substrate (e.g. FTO) and subsequent solvothermal microwave synthesis. The aluminum oxide layer serves both as an architecture-directing agent and as a metal source for the backbone of the MOF structure. The construction of the porous 3D metal-organic framework takes place during the microwave synthesis, when the atomic layer deposited substrate is exposed to a solution of the requisite linker in a DMF/H2O 3:1 mixture (v/v) at elevated temperature. Analogously, Kornienko and coworkers described in 2015 the synthesis of a cobalt-porphyrin MOF (Al2(OH)2TCPP-Co; TCPP-H2 = 4,4′,4″,4‴-(porphyrin-5,10,15,20-tetrayl)tetrabenzoate), the first MOF catalyst constructed for the electrocatalytic conversion of aqueous CO2 to CO. Synthesis: Post-synthetic modification Although the three-dimensional structure and internal environment of the pores can in theory be controlled through proper selection of nodes and organic linking groups, the direct synthesis of such materials with the desired functionalities can be difficult due to the high sensitivity of MOF systems. Thermal and chemical sensitivity, as well as high reactivity of reaction materials, can make forming desired products challenging to achieve. The exchange of guest molecules and counter-ions and the removal of solvents allow for some additional functionality but are still limited to the integral parts of the framework. The post-synthetic exchange of organic linkers and metal ions is an expanding area of the field and opens up possibilities for more complex structures, increased functionality, and greater system control. Synthesis: Ligand exchange Post-synthetic modification techniques can be used to exchange an existing organic linking group in a prefabricated MOF with a new linker by ligand exchange or partial ligand exchange. This exchange allows the pores and, in some cases, the overall framework of MOFs to be tailored for specific purposes. Some of these uses include fine-tuning the material for selective adsorption, gas storage, and catalysis. To perform ligand exchange, prefabricated MOF crystals are washed with solvent and then soaked in a solution of the new linker. The exchange often requires heat and occurs on the time scale of a few days. Post-synthetic ligand exchange also enables the incorporation of functional groups into MOFs that otherwise would not survive MOF synthesis, due to temperature, pH, or other reaction conditions, or hinder the synthesis itself by competition with donor groups on the loaning ligand. Synthesis: Metal exchange Post-synthetic modification techniques can also be used to exchange an existing metal ion in a prefabricated MOF with a new metal ion by metal ion exchange. The complete metal metathesis from an integral part of the framework has been achieved without altering the framework or pore structure of the MOF. Similarly to post-synthetic ligand exchange, post-synthetic metal exchange is performed by washing prefabricated MOF crystals with solvent and then soaking the crystal in a solution of the new metal. Post-synthetic metal exchange allows for a simple route to the formation of MOFs with the same framework yet different metal ions.
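The soaking procedures just described (wash with solvent, soak in a solution of the new linker or metal, often with heating, over a few days) can be caricatured by a simple saturating-exchange model. The sketch below is a toy illustration only: the functional form, the equilibrium fraction x_eq, and the time constant tau are assumptions made for illustration, not measured values from the source.

```python
import math

# Toy model of post-synthetic ligand or metal exchange during soaking:
# the exchanged fraction approaches an (assumed) equilibrium value x_eq
# with an (assumed) characteristic soak time tau, here expressed in days.
def exchanged_fraction(t_days, x_eq=0.9, tau_days=2.0):
    """Fraction of linkers (or metal ions) replaced after soaking for t_days."""
    return x_eq * (1.0 - math.exp(-t_days / tau_days))

for t in (0.5, 1, 2, 4, 7):
    print(f"after {t:>4} d: {exchanged_fraction(t):.0%} exchanged")
# Qualitatively consistent with exchanges that run over a few days of soaking.
```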
Synthesis: Stratified synthesis In addition to modifying the functionality of the ligands and metals themselves, post-synthetic modification can be used to expand upon the structure of the MOF. Using post-synthetic modification, MOFs can be converted from highly ordered crystalline materials into heterogeneous porous materials. Using post-synthetic techniques, it is possible to install, in a controlled way, domains within a MOF crystal that exhibit unique structural and functional characteristics. Core-shell MOFs and other layered MOFs have been prepared where layers have unique functionalization but in most cases are crystallographically compatible from layer to layer. Synthesis: Open coordination sites In some cases MOF metal nodes have an unsaturated environment, and it is possible to modify this environment using different techniques. If the size of the ligand matches the size of the pore aperture, it is possible to install additional ligands into an existing MOF structure. Sometimes metal nodes have a good binding affinity for inorganic species. For instance, it has been shown that metal nodes can be extended to form a bond with the uranyl cation. Composite materials: Another approach to increasing adsorption in MOFs is to alter the system in such a way that chemisorption becomes possible. This functionality has been introduced by making a composite material, which contains a MOF and a complex of platinum with activated carbon. In an effect known as hydrogen spillover, H2 can bind to the platinum surface through a dissociative mechanism which cleaves the hydrogen molecule into two hydrogen atoms and enables them to travel down the activated carbon onto the surface of the MOF. This innovation produced a threefold increase in the room-temperature storage capacity of a MOF; however, desorption can take upwards of 12 hours, and reversible desorption is sometimes observed for only two cycles. The relationship between hydrogen spillover and hydrogen storage properties in MOFs is not well understood but may prove relevant to hydrogen storage. Catalysis: MOFs have potential as heterogeneous catalysts, although applications have not been commercialized. Their high surface area, tunable porosity, and diversity in metals and functional groups make them especially attractive for use as catalysts. Zeolites are extraordinarily useful in catalysis. Zeolites are limited by the fixed tetrahedral coordination of the Si/Al connecting points and the two-coordinated oxide linkers. Fewer than 200 zeolites are known. In contrast with this limited scope, MOFs exhibit more diverse coordination geometries, polytopic linkers, and ancillary ligands (F−, OH−, H2O among others). It is also difficult to obtain zeolites with pore sizes larger than 1 nm, which limits the catalytic applications of zeolites to relatively small organic molecules (typically no larger than xylenes). Furthermore, mild synthetic conditions typically employed for MOF synthesis allow direct incorporation of delicate functionalities into the framework structures. Such a process would not be possible with zeolites or other microporous crystalline oxide-based materials because of the harsh conditions typically used for their synthesis (e.g., calcination at high temperatures to remove organic templates). The metal–organic framework MIL-101 is one of the most used MOFs for catalysis, incorporating different transition metals such as Cr.
However, the stability of some MOF photocatalysts in aqueous medium and under strongly oxidizing conditions is very low. Zeolites still cannot be obtained in enantiopure form, which precludes their applications in catalytic asymmetric synthesis, e.g., for the pharmaceutical, agrochemical, and fragrance industries. Enantiopure chiral ligands or their metal complexes have been incorporated into MOFs to lead to efficient asymmetric catalysts. Some MOF materials may even bridge the gap between zeolites and enzymes when they combine isolated polynuclear sites, dynamic host–guest responses, and a hydrophobic cavity environment. MOFs might be useful for making semiconductors. Theoretical calculations show that MOFs are semiconductors or insulators with band gaps between 1.0 and 5.5 eV which can be altered by changing the degree of conjugation in the ligands, indicating their potential as photocatalysts. Catalysis: Design Like other heterogeneous catalysts, MOFs may allow for easier post-reaction separation and recyclability than homogeneous catalysts. In some cases, they also give highly enhanced catalyst stability. Additionally, they typically offer substrate-size selectivity. Nevertheless, while clearly important for reactions in living systems, selectivity on the basis of substrate size is of limited value in abiotic catalysis, as reasonably pure feedstocks are generally available. Catalysis: Metal ions or metal clusters Among the earliest reports of MOF-based catalysis was the cyanosilylation of aldehydes by a 2D MOF (layered square grids) of formula Cd(4,4′-bpy)2(NO3)2. This investigation centered mainly on size- and shape-selective clathration. A second set of examples was based on a two-dimensional, square-grid MOF containing single Pd(II) ions as nodes and 2-hydroxypyrimidinolates as struts. Despite initial coordinative saturation, the palladium centers in this MOF catalyze alcohol oxidation, olefin hydrogenation, and Suzuki C–C coupling. At a minimum, these reactions necessarily entail redox oscillations of the metal nodes between Pd(II) and Pd(0) intermediates accompanied by drastic changes in coordination number, which would certainly lead to destabilization and potential destruction of the original framework if all the Pd centers were catalytically active. The observation of substrate shape- and size-selectivity implies that the catalytic reactions are heterogeneous and are indeed occurring within the MOF. Nevertheless, at least for hydrogenation, it is difficult to rule out the possibility that catalysis is occurring at the surface of MOF-encapsulated palladium clusters/nanoparticles (i.e., partial decomposition sites) or defect sites, rather than at transiently labile, but otherwise intact, single-atom MOF nodes. "Opportunistic" MOF-based catalysis has been described for the cubic compound MOF-5. This material comprises coordinatively saturated Zn4O nodes and fully complexed BDC struts (see above for abbreviation); yet it apparently catalyzes the Friedel–Crafts tert-butylation of both toluene and biphenyl. Furthermore, para alkylation is strongly favored over ortho alkylation, a behavior thought to reflect the encapsulation of reactants by the MOF. Catalysis: Functional struts The porous-framework material [Cu3(btc)2(H2O)3], also known as HKUST-1, contains large cavities having windows of diameter ~6 Å. The coordinated water molecules are easily removed, leaving open Cu(II) sites.
Kaskel and co-workers showed that these Lewis acid sites could catalyze the cyanosilylation of benzaldehyde or acetone. The anhydrous version of HKUST-1 is an acid catalyst. The product selectivities of three test reactions (isomerization of α-pinene oxide, cyclization of citronellal, and rearrangement of α-bromoacetals) are distinctive for Brønsted versus Lewis acid-catalyzed pathways, indicating that [Cu3(btc)2] indeed functions primarily as a Lewis acid catalyst. The product selectivity and yield of catalytic reactions (e.g. cyclopropanation) have also been shown to be impacted by defective sites, such as Cu(I) or incompletely deprotonated carboxylic acid moieties of the linkers. MIL-101, a large-cavity MOF having the formula [Cr3F(H2O)2O(BDC)3], is a cyanosilylation catalyst. The coordinated water molecules in MIL-101 are easily removed to expose Cr(III) sites. As one might expect, given the greater Lewis acidity of Cr(III) vs. Cu(II), MIL-101 is much more active than HKUST-1 as a catalyst for the cyanosilylation of aldehydes. Additionally, the Kaskel group observed that the catalytic sites of MIL-101, in contrast to those of HKUST-1, are immune to unwanted reduction by benzaldehyde. The Lewis-acid-catalyzed cyanosilylation of aromatic aldehydes has also been carried out by Long and co-workers using a MOF of the formula Mn3[(Mn4Cl)3BTT8(CH3OH)10]. This material contains a three-dimensional pore structure, with the pore diameter equaling 10 Å. In principle, either of the two types of Mn(II) sites could function as a catalyst. Noteworthy features of this catalyst are high conversion yields (for small substrates) and good substrate-size selectivity, consistent with channel-localized catalysis. Catalysis: Encapsulated catalysts The MOF encapsulation approach invites comparison to earlier studies of oxidative catalysis by zeolite-encapsulated Fe(porphyrin) as well as Mn(porphyrin) systems. The zeolite studies generally employed iodosylbenzene (PhIO), rather than TBHP, as the oxidant. The difference is likely mechanistically significant, thus complicating comparisons. Briefly, PhIO is a single oxygen atom donor, while TBHP is capable of more complex behavior. In addition, for the MOF-based system, it is conceivable that oxidation proceeds via both oxygen transfer from a manganese oxo intermediate as well as a manganese-initiated radical chain reaction pathway. Regardless of mechanism, the approach is a promising one for isolating and thereby stabilizing the porphyrins against both oxo-bridged dimer formation and oxidative degradation. Catalysis: Metal-free organic cavity modifiers Most examples of MOF-based catalysis make use of metal ions or atoms as active sites. Among the few exceptions are two nickel- and two copper-containing MOFs synthesized by Rosseinsky and co-workers. These compounds employ amino acids (L- or D-aspartate) together with dipyridyls as struts. The coordination chemistry is such that the amine group of the aspartate cannot be protonated by added HCl, but one of the aspartate carboxylates can. Thus, the framework-incorporated amino acid can exist in a form that is not accessible for the free amino acid. While the nickel-based compounds are marginally porous, on account of tiny channel dimensions, the copper versions are clearly porous. Catalysis: The Rosseinsky group showed that the carboxylic acids behave as Brønsted acidic catalysts, facilitating (in the copper cases) the ring-opening methanolysis of a small, cavity-accessible epoxide at up to 65% yield.
However, superior homogeneous catalysts exist. Catalysis: Kitagawa and co-workers have reported the synthesis of a catalytic MOF having the formula [Cd(4-BTAPA)2(NO3)2]. The MOF is three-dimensional, consisting of an identical catenated pair of networks, yet still featuring pores of molecular dimensions. The nodes consist of single cadmium ions, octahedrally ligated by pyridyl nitrogens. From a catalysis standpoint, however, the most interesting feature of this material is the presence of guest-accessible amide functionalities. The amides are capable of base-catalyzing the Knoevenagel condensation of benzaldehyde with malononitrile. Reactions with larger nitriles, however, are only marginally accelerated, implying that catalysis takes place chiefly within the material's channels rather than on its exterior. A noteworthy finding is the lack of catalysis by the free strut in homogeneous solution, evidently due to intermolecular H-bonding between 4-BTAPA molecules. Thus, the MOF architecture elicits catalytic activity not otherwise encountered. Catalysis: In an interesting alternative approach, Férey and coworkers were able to modify the interior of MIL-101 via Cr(III) coordination of one of the two available nitrogen atoms of each of several ethylenediamine molecules. The free non-coordinated ends of the ethylenediamines were then used as Brønsted basic catalysts, again for Knoevenagel condensation of benzaldehyde with nitriles. Catalysis: A third approach has been described by Kimoon Kim and coworkers. Using a pyridine-functionalized derivative of tartaric acid and a Zn(II) source, they were able to synthesize a 2D MOF termed POST-1. POST-1 possesses 1D channels whose cross sections are defined by six trinuclear zinc clusters and six struts. While three of the six pyridines are coordinated by zinc ions, the remaining three are protonated and directed toward the channel interior. When neutralized, the noncoordinated pyridyl groups are found to catalyze transesterification reactions, presumably by facilitating deprotonation of the reactant alcohol. The absence of significant catalysis when large alcohols are employed strongly suggests that the catalysis occurs within the channels of the MOF. Catalysis: Achiral catalysis Metals as catalytic sites The metals in the MOF structure often act as Lewis acids. The metals in MOFs often coordinate to labile solvent molecules or counter-ions which can be removed after activation of the framework. The Lewis acidic nature of such unsaturated metal centers can activate the coordinated organic substrates for subsequent organic transformations. The use of unsaturated metal centers was demonstrated in the cyanosilylation of aldehydes and imines by Fujita and coworkers in 2004. They reported a MOF of composition {[Cd(4,4′-bpy)2(H2O)2] • (NO3)2 • 4H2O} which was obtained by treating the linear bridging ligand 4,4′-bipyridine (bpy) with Cd(NO3)2. The Cd(II) centers in this MOF possess a distorted octahedral geometry having four pyridines in the equatorial positions, and two water molecules in the axial positions to form a two-dimensional infinite network. On activation, the two water molecules were removed, leaving the metal centers unsaturated and Lewis acidic. The Lewis acidic character of the metal center was tested in cyanosilylation reactions of imines, where the imine attaches to the Lewis-acidic metal center, resulting in higher electrophilicity of the imine.
For the cyanosilylation of imines, most of the reactions were complete within 1 h, affording aminonitriles in quantitative yield. Kaskel and coworkers carried out similar cyanosilylation reactions with coordinatively unsaturated metals in three-dimensional (3D) MOFs as heterogeneous catalysts. The 3D framework [Cu3(btc)2(H2O)3] (btc: benzene-1,3,5-tricarboxylate) (HKUST-1) used in this study was first reported by Williams et al. The open framework of [Cu3(btc)2(H2O)3] is built from dimeric cupric tetracarboxylate units (paddle-wheels) with aqua molecules coordinating to the axial positions and btc bridging ligands. The resulting framework, after removal of the two water molecules from the axial positions, possesses porous channels. This activated MOF catalyzes the trimethylcyanosilylation of benzaldehydes with a very low conversion (<5% in 24 h) at 293 K. As the reaction temperature was raised to 313 K, a good conversion of 57% with a selectivity of 89% was obtained after 72 h. In comparison, less than 10% conversion was observed for the background reaction (without MOF) under the same conditions. However, this strategy suffers from some problems: 1) decomposition of the framework as the reaction temperature increases, due to the reduction of Cu(II) to Cu(I) by aldehydes; 2) a strong solvent inhibition effect, in which electron-donating solvents such as THF compete with aldehydes for coordination to the Cu(II) sites, so that no cyanosilylation product is observed in these solvents; and 3) framework instability in some organic solvents. Several other groups have also reported the use of metal centers in MOFs as catalysts. Again, the electron-deficient nature of some metals and metal clusters makes the resulting MOFs efficient oxidation catalysts. Mori and coworkers reported MOFs with Cu2 paddle-wheel units as heterogeneous catalysts for the oxidation of alcohols. The catalytic activity of the resulting MOF was examined by carrying out alcohol oxidation with H2O2 as the oxidant. It also catalyzed the oxidation of primary, secondary, and benzylic alcohols with high selectivity. Hill et al. have demonstrated the sulfoxidation of thioethers using a MOF based on vanadium-oxo cluster V6O13 building units. Catalysis: Functional linkers as catalytic sites Functional linkers can also be utilized as catalytic sites. A 3D MOF {[Cd(4-BTAPA)2(NO3)2] • 6H2O • 2DMF} (4-BTAPA = 1,3,5-benzene tricarboxylic acid tris [N-(4-pyridyl)amide], DMF = N,N-dimethylformamide) constructed from tridentate amide linkers and a cadmium salt catalyzes the Knoevenagel condensation reaction. The pyridine groups on the ligand 4-BTAPA act as ligands binding to the octahedral cadmium centers, while the amide groups can provide the functionality for interaction with the incoming substrates. Specifically, the −NH moiety of the amide group can act as an electron acceptor whereas the C=O group can act as an electron donor to activate organic substrates for subsequent reactions. Férey et al. reported a robust and highly porous MOF [Cr3(μ3-O)F(H2O)2(BDC)3] (BDC: benzene-1,4-dicarboxylate) where, instead of directly using the unsaturated Cr(III) centers as catalytic sites, the authors grafted ethylenediamine (ED) onto the Cr(III) sites. The uncoordinated ends of ED can act as base catalytic sites. The ED-grafted MOF was investigated for Knoevenagel condensation reactions. A significant increase in conversion was observed for the ED-grafted MOF compared to the untreated framework (98% vs. 36%).
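The conversion and selectivity figures quoted in this subsection combine into an overall product yield in a simple way (yield ≈ conversion × selectivity). The following minimal sketch illustrates that arithmetic using the HKUST-1 cyanosilylation numbers above; the assumption that the background reaction has the same selectivity is purely illustrative and is not taken from the cited study.

```python
# Sketch: how conversion and selectivity combine into product yield.
# The 57% conversion / 89% selectivity values are those quoted above for
# HKUST-1-catalyzed trimethylcyanosilylation at 313 K after 72 h; assuming
# the same selectivity for the uncatalyzed background reaction is an
# illustrative assumption, not a reported value.

def product_yield(conversion: float, selectivity: float) -> float:
    """Fractional yield of the desired product.

    conversion  -- fraction of substrate consumed (0-1)
    selectivity -- fraction of consumed substrate converted to the
                   desired product (0-1)
    """
    return conversion * selectivity

hkust1 = product_yield(conversion=0.57, selectivity=0.89)
background = product_yield(conversion=0.10, selectivity=0.89)  # assumed selectivity

print(f"HKUST-1 catalyzed yield : {hkust1:.1%}")     # about 51%
print(f"Background (no MOF)     : {background:.1%}")  # below about 9%
```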
Another example of linker modification to generate catalytic sites is the iodo-functionalization of the well-known Al-based MOFs (MIL-53 and DUT-5) and Zr-based MOFs (UiO-66 and UiO-67) for the catalytic oxidation of diols. Catalysis: Entrapment of catalytically active noble metal nanoparticles The entrapment of catalytically active noble metals can be accomplished by grafting functional groups onto the unsaturated metal sites of MOFs. Ethylenediamine (ED) has been shown to be grafted onto the Cr metal sites and can be further modified to encapsulate noble metals such as Pd. The entrapped Pd has catalytic activity similar to that of Pd/C in the Heck reaction. Ruthenium nanoparticles have catalytic activity in a number of reactions when entrapped in the MOF-5 framework. This Ru-encapsulated MOF catalyzes the oxidation of benzyl alcohol to benzaldehyde, although degradation of the MOF occurs. The same catalyst was used in the hydrogenation of benzene to cyclohexane. In another example, Pd nanoparticles embedded within a defective HKUST-1 framework enable the generation of tunable Lewis basic sites. Therefore, this multifunctional Pd/MOF composite is able to perform stepwise benzyl alcohol oxidation and Knoevenagel condensation. Catalysis: Reaction hosts with size selectivity MOFs might prove useful for both photochemical and polymerization reactions due to the tunability of the size and shape of their pores. A 3D MOF {[Co(bpdc)3(bpy)] • 4DMF • H2O} (bpdc: biphenyldicarboxylate, bpy: 4,4′-bipyridine) was synthesized by Li and coworkers. Using this MOF, the photochemistry of o-methyl dibenzyl ketone (o-MeDBK) was extensively studied. This molecule was found to have a variety of photochemical reaction properties, including the production of cyclopentanol. MOFs have been used to study polymerization in the confined space of MOF channels. Polymerization reactions in confined space might have different properties than polymerization in open space. Styrene, divinylbenzene, substituted acetylenes, methyl methacrylate, and vinyl acetate have all been studied by Kitagawa and coworkers as possible activated monomers for radical polymerization. Depending on the linker, the MOF channel cross-section could be tuned between roughly 25 and 100 Å2. The channels were shown to stabilize propagating radicals and suppress termination reactions when used as radical polymerization sites. Catalysis: Asymmetric catalysis Several strategies exist for constructing homochiral MOFs. Crystallization of homochiral MOFs via self-resolution from achiral linker ligands is one way to accomplish this goal. However, the resulting bulk samples contain both enantiomorphs and are racemic. Aoyama and coworkers successfully obtained homochiral MOFs in the bulk from achiral ligands by carefully controlling nucleation in the crystal growth process. Zheng and coworkers reported the synthesis of homochiral MOFs from achiral ligands by chemically manipulating the statistical fluctuation of the formation of enantiomeric pairs of crystals. Growing MOF crystals under chiral influences is another approach to obtain homochiral MOFs using achiral linker ligands. Rosseinsky and coworkers have introduced a chiral coligand to direct the formation of homochiral MOFs by controlling the handedness of the helices during the crystal growth. Morris and coworkers utilized an ionic liquid with chiral cations as the reaction medium for synthesizing MOFs, and obtained homochiral MOFs.
The most straightforward and rational strategy for synthesizing homochiral MOFs is, however, to use readily available chiral linker ligands for their construction. Catalysis: Homochiral MOFs with interesting functionalities and reagent-accessible channels Homochiral MOFs have been made by Lin and coworkers using 2,2′-bis(diphenylphosphino)-1,1′-binaphthyl (BINAP) and 1,1′-bi-2,2′-naphthol (BINOL) as chiral ligands. These ligands can coordinate with catalytically active metal sites to enhance the enantioselectivity. A variety of linking groups, such as pyridine, phosphonic acid, and carboxylic acid, can be selectively introduced at the 3,3′, 4,4′, and 6,6′ positions of the 1,1′-binaphthyl moiety. Moreover, by changing the length of the linker ligands, the porosity and framework structure of the MOF can be selectively tuned. Catalysis: Postmodification of homochiral MOFs Lin and coworkers have shown that the postmodification of MOFs can be used to produce enantioselective homochiral MOFs for use as catalysts. The resulting 3D homochiral MOF {[Cd3(L)3Cl6] • 4DMF • 6MeOH • 3H2O} (L = (R)-6,6′-dichloro-2,2′-dihydroxyl-1,1′-binaphthyl-bipyridine) synthesized by Lin was shown to have a catalytic efficiency for the diethylzinc addition reaction similar to that of the homogeneous analogue when pretreated with Ti(OiPr)4 to generate the grafted Ti-BINOLate species. The catalytic activity of MOFs can vary depending on the framework structure. Lin and others found that MOFs synthesized from the same materials could have drastically different catalytic activities depending on the framework structure present. Catalysis: Homochiral MOFs with precatalysts as building blocks Another approach to constructing catalytically active homochiral MOFs is to incorporate chiral metal complexes, which are either active catalysts or precatalysts, directly into the framework structures. For example, Hupp and coworkers have combined a chiral ligand and bpdc (bpdc: biphenyldicarboxylate) with Zn(NO3)2 and obtained twofold interpenetrating 3D networks. The orientation of the chiral ligand in the framework makes all Mn(III) sites accessible through the channels. The resulting open frameworks showed catalytic activity towards asymmetric olefin epoxidation reactions. No significant decrease of catalyst activity was observed during the reaction, and the catalyst could be recycled and reused several times. Lin and coworkers have reported zirconium phosphonate-derived Ru-BINAP systems. Zirconium phosphonate-based chiral porous hybrid materials containing the Ru(BINAP)(diamine)Cl2 precatalysts showed excellent enantioselectivity (up to 99.2% ee) in the asymmetric hydrogenation of aromatic ketones. Catalysis: Biomimetic design and photocatalysis Some MOF materials may resemble enzymes when they combine isolated polynuclear sites, dynamic host–guest responses, and a hydrophobic cavity environment, which are characteristic of an enzyme. Well-known examples of cooperative catalysis involving two metal ions in biological systems include the diiron sites in methane monooxygenase, dicopper in cytochrome c oxidase, and tricopper oxidases, which have analogues in the polynuclear clusters found in 0D coordination polymers, such as the binuclear Cu2 paddlewheel units found in MOP-1 and in [Cu3(btc)2] (btc = benzene-1,3,5-tricarboxylate) in HKUST-1, or trinuclear units such as {Fe3O(CO2)6} in MIL-88 and IRMOP-51. Thus, 0D MOFs have accessible biomimetic catalytic centers.
In enzymatic systems, protein units show "molecular recognition", i.e. high affinity for specific substrates. It seems that molecular recognition effects are limited in zeolites by the rigid zeolite structure. In contrast, dynamic features and guest-shape response make MOFs more similar to enzymes. Indeed, many hybrid frameworks contain organic parts that can rotate as a result of stimuli, such as light and heat. The porous channels in MOF structures can be used as photocatalysis sites. In photocatalysis, the use of mononuclear complexes is usually limited either because they undergo only single-electron processes or because they require high-energy irradiation. In this case, binuclear systems have a number of attractive features for the development of photocatalysts. For 0D MOF structures, polycationic nodes can act as semiconductor quantum dots which can be activated upon photostimulation, with the linkers serving as photon antennae. Theoretical calculations show that MOFs are semiconductors or insulators with band gaps between 1.0 and 5.5 eV which can be altered by changing the degree of conjugation in the ligands. Experimental results show that the band gap of IRMOF-type samples can be tuned by varying the functionality of the linker. An integrated MOF nanozyme was developed for anti-inflammation therapy. Mechanical properties: Implementing MOFs in industry necessitates a thorough understanding of their mechanical properties, since most processing techniques (e.g. extrusion and pelletization) expose the MOFs to substantial mechanical compressive stresses. The mechanical response of porous structures is of interest as these structures can exhibit unusual response to high pressures. While zeolites (microporous aluminosilicate minerals) can give some insight into the mechanical response of MOFs, the presence of organic linkers, which zeolites lack, gives rise to novel mechanical responses. MOFs are very structurally diverse, meaning that it is challenging to classify all of their mechanical properties. Additionally, batch-to-batch variability in MOFs and extreme experimental conditions (e.g. diamond anvil cells) mean that experimental determination of the mechanical response to loading is limited; however, many computational models have been developed to determine structure-property relationships. The main MOF systems that have been explored are zeolitic imidazolate frameworks (ZIFs), carboxylate MOFs, and zirconium-based MOFs, among others. Generally, MOFs undergo three processes under compressive loading (which is relevant in a processing context): amorphization, hyperfilling, and/or pressure-induced phase transitions. During amorphization, linkers buckle and the internal porosity of the MOF collapses. During hyperfilling, a MOF that is hydrostatically compressed in a liquid (typically a solvent) will expand rather than contract because its pores fill with the loading medium. Finally, pressure-induced phase transitions, in which the crystal structure is altered during loading, are possible. The response of the MOF is predominantly dependent on the linker species and the inorganic nodes. Mechanical properties: Zeolitic imidazolate frameworks (ZIFs) Several different mechanical phenomena have been observed in zeolitic imidazolate frameworks (ZIFs), the most widely studied MOFs for mechanical properties due to their many similarities to zeolites. A general trend for the ZIF family is that the Young's modulus and hardness decrease as the accessible pore volume increases.
The bulk moduli of the ZIF-62 series increase with increasing benzimidazolate (bim−) concentration. ZIF-62 shows a continuous phase transition from an open-pore (op) to a closed-pore (cp) phase when the bim− concentration exceeds 0.35 per formula unit. The accessible pore size and volume of ZIF-62-bim0.35 can be precisely tuned by applying adequate pressures. Another study has shown that under hydrostatic loading in solvent, the ZIF-8 material expands as opposed to contracting. This is a result of hyperfilling of the internal pores with solvent. A computational study demonstrated that ZIF-4 and ZIF-8 materials undergo a shear-softening mechanism upon amorphization (at ~0.34 GPa) under hydrostatic loading, while still possessing a bulk modulus on the order of 6.5 GPa. Additionally, the ZIF-4 and ZIF-8 MOFs are subject to many pressure-dependent phase transitions. Mechanical properties: Carboxylate-based MOFs Carboxylate MOFs come in many forms and have been widely studied. Herein, HKUST-1, MOF-5, and the MIL series are discussed as representative examples of the carboxylate MOF class. Mechanical properties: HKUST-1 HKUST-1 is built from dimeric Cu paddlewheels and possesses two pore types. Under pelletization, MOFs such as HKUST-1 exhibit pore collapse. Although most carboxylate MOFs have a negative thermal expansion (they densify during heating), it was found that the hardness and Young's modulus unexpectedly decrease with increasing temperature, owing to disordering of the linkers. It was also found computationally that a more mesoporous structure has a lower bulk modulus. However, an increased bulk modulus was observed in systems with a few large mesopores versus many small mesopores, even though both pore size distributions had the same total pore volume. HKUST-1 shows a "hyperfilling" phenomenon similar to that of the ZIF structures under hydrostatic loading. Mechanical properties: MOF-5 MOF-5 has tetranuclear nodes in an octahedral configuration with an overall cubic structure. MOF-5 has a compressibility and Young's modulus (~14.9 GPa) comparable to wood, which was confirmed with density functional theory (DFT) and nanoindentation. While MOF-5 can also exhibit the hyperfilling phenomenon in a solvent loading medium, these MOFs are very sensitive to pressure and undergo amorphization/pressure-induced pore collapse at a pressure of 3.5 MPa when there is no fluid in the pores. Mechanical properties: MIL-53 MIL-53 MOFs possess a "wine rack" structure. These MOFs have been explored for anisotropy in the Young's modulus, owing to the flexibility of the framework under loading, and for negative linear compressibility when compressed in one direction, owing to the ability of the wine-rack motif to open during loading. Mechanical properties: Zirconium-based MOFs Zirconium-based MOFs such as UiO-66 are a very robust class of MOFs (attributed to the strong hexanuclear Zr6 metallic nodes) with increased resistance to heat, solvents, and other harsh conditions, which makes them of interest in terms of mechanical properties. Determinations of the shear modulus and pelletization studies have shown that UiO-66 MOFs are very mechanically robust and have a high tolerance for pore collapse compared to ZIFs and carboxylate MOFs. Although the UiO-66 MOF shows increased stability under pelletization, UiO-66 MOFs amorphized fairly rapidly under ball-milling conditions due to destruction of the linker-node coordination.
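Because the compressive response is typically summarized as a bulk modulus, K = -V (dP/dV), a minimal sketch of how such a value can be estimated from a hydrostatic pressure-volume curve is given below. The (P, V) data points are invented placeholders chosen only to give a modulus of a few GPa, roughly the order quoted above for ZIFs; they are not experimental values.

```python
import numpy as np

# Sketch: estimating a bulk modulus K = -V * dP/dV from a few hydrostatic
# pressure-volume points. The (P, V) values below are hypothetical,
# chosen only to give a modulus of a few GPa (the order reported for ZIFs
# above); they are not measured data.

pressure_gpa = np.array([0.00, 0.05, 0.10, 0.15, 0.20])   # applied pressure, GPa
volume_a3    = np.array([4900, 4865, 4830, 4796, 4762])   # unit-cell volume, Å^3

dP_dV = np.gradient(pressure_gpa, volume_a3)  # numerical derivative dP/dV
bulk_modulus = -volume_a3 * dP_dV             # K at each data point, GPa

print(f"Estimated bulk modulus near ambient pressure: {bulk_modulus[0]:.1f} GPa")
```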
Applications: Hydrogen storage Molecular hydrogen has the highest specific energy of any fuel. Applications: However, unless the hydrogen gas is compressed, its volumetric energy density is very low, so the transportation and storage of hydrogen require energy-intensive compression and liquefaction processes. Therefore, the development of new hydrogen storage methods which decrease the pressure required for practical volumetric energy density is an active area of research. MOFs attract attention as materials for adsorptive hydrogen storage because of their high specific surface areas and surface-to-volume ratios, as well as their chemically tunable structures. Compared to an empty gas cylinder, a MOF-filled gas cylinder can store more hydrogen at a given pressure because hydrogen molecules adsorb to the surface of MOFs. Furthermore, MOFs are free of dead volume, so there is almost no loss of storage capacity as a result of space-blocking by non-accessible volume. Also, because the hydrogen uptake is based primarily on physisorption, many MOFs have fully reversible uptake-and-release behavior. No large activation barriers are required when liberating the adsorbed hydrogen. The storage capacity of a MOF is limited by the liquid-phase density of hydrogen because the benefits provided by MOFs can be realized only if the hydrogen is in its gaseous state. The extent to which a gas can adsorb to a MOF's surface depends on the temperature and pressure of the gas. In general, adsorption increases with decreasing temperature and increasing pressure (until a maximum is reached, typically at 20–30 bar, after which the adsorption capacity decreases). However, MOFs to be used for hydrogen storage in automotive fuel cells need to operate efficiently at ambient temperature and pressures between 1 and 100 bar, as these are the values that are deemed safe for automotive applications. Applications: The U.S. Department of Energy (DOE) has published a list of yearly technical system targets for on-board hydrogen storage for light-duty fuel cell vehicles which guide researchers in the field (5.5 wt %/40 g L−1 by 2017; 7.5 wt %/70 g L−1 ultimate). Materials with high porosity and high surface area such as MOFs have been designed and synthesized in an effort to meet these targets. These adsorptive materials generally work via physical adsorption rather than chemisorption due to the large HOMO-LUMO gap and low HOMO energy level of molecular hydrogen. A benchmark material to this end is MOF-177, which was found to store hydrogen at 7.5 wt % with a volumetric capacity of 32 g L−1 at 77 K and 70 bar. MOF-177 consists of [Zn4O]6+ clusters interconnected by 1,3,5-benzenetribenzoate organic linkers and has a measured BET surface area of 4630 m2 g−1. Another exemplary material is PCN-61, which exhibits a hydrogen uptake of 6.24 wt % and 42.5 g L−1 at 35 bar and 77 K and 2.25 wt % at atmospheric pressure. PCN-61 consists of [Cu2]4+ paddle-wheel units connected through 5,5′,5′′-benzene-1,3,5-triyltris(1-ethynyl-2-isophthalate) organic linkers and has a measured BET surface area of 3000 m2 g−1. Despite these promising MOF examples, the classes of synthetic porous materials with the highest performance for practical hydrogen storage are activated carbon and covalent organic frameworks (COFs). Applications: Design principles Practical applications of MOFs for hydrogen storage are met with several challenges. For hydrogen adsorption near room temperature, the hydrogen binding energy would need to be increased considerably.
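A rough way to see why the binding energy matters is to compare the Boltzmann factor exp(-ΔH/RT) for typical adsorption enthalpies at 77 K and at room temperature. The sketch below is only an order-of-magnitude illustration of that argument, using the enthalpy ranges quoted in this section; it is not a quantitative adsorption model.

```python
import math

# Order-of-magnitude sketch: the relative tendency of a site to hold H2
# scales roughly with exp(-dH/RT) (dH < 0 for exothermic adsorption).
# The enthalpies are the typical ranges quoted in the text; the comparison
# is illustrative only, not a full adsorption isotherm.

R = 8.314e-3  # gas constant, kJ/(mol*K)

def boltzmann_factor(dH_kj_per_mol: float, temperature_k: float) -> float:
    """exp(-dH/RT) for an adsorption enthalpy dH (negative = exothermic)."""
    return math.exp(-dH_kj_per_mol / (R * temperature_k))

for dH, label in [(-5.0, "weak physisorption (~5 kJ/mol)"),
                  (-22.0, "target interaction (~22 kJ/mol)")]:
    factor_77 = boltzmann_factor(dH, 77.0)
    factor_298 = boltzmann_factor(dH, 298.0)
    print(f"{label}: exp(-dH/RT) = {factor_77:.2e} at 77 K, {factor_298:.2e} at 298 K")

# A ~5 kJ/mol interaction gives a large factor only at 77 K, while a
# ~22 kJ/mol interaction retains an appreciable factor even at 298 K,
# which is the qualitative reason stronger (but still reversible) binding
# is sought for ambient-temperature storage.
```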
Several classes of MOFs have been explored, including carboxylate-based MOFs, heterocyclic azolate-based MOFs, metal-cyanide MOFs, and covalent organic frameworks. Carboxylate-based MOFs have by far received the most attention because the carboxylic acid linkers are either commercially available or easily synthesized; their high acidity (pKa ~ 4) allows facile in situ deprotonation; metal-carboxylate bond formation is reversible, facilitating the formation of well-ordered crystalline MOFs; and the bridging bidentate coordination ability of carboxylate groups favors the high degree of framework connectivity and strong metal-ligand bonds necessary to maintain the MOF architecture under the conditions required to evacuate the solvent from the pores. The most common transition metals employed in carboxylate-based frameworks are Cu2+ and Zn2+. Lighter main group metal ions have also been explored. Be12(OH)12(BTB)4, the first successfully synthesized and structurally characterized MOF consisting of a light main group metal ion, shows high hydrogen storage capacity, but it is too toxic to be employed practically. There is considerable effort being put into developing MOFs composed of other light main group metal ions, such as magnesium in Mg4(BDC)3. The following is a list of several MOFs that are considered to have the best properties for hydrogen storage as of May 2012 (in order of decreasing hydrogen storage capacity). While each MOF described has its advantages, none of these MOFs reaches all of the standards set by the U.S. DOE. Therefore, it is not yet known whether materials with high surface areas, small pores, or di- or trivalent metal clusters produce the most favorable MOFs for hydrogen storage. Applications: Structural impacts on hydrogen storage capacity To date, hydrogen storage in MOFs at room temperature is a battle between maximizing storage capacity and maintaining reasonable desorption rates, while conserving the integrity of the adsorbent framework (e.g. completely evacuating pores, preserving the MOF structure, etc.) over many cycles. There are two major strategies governing the design of MOFs for hydrogen storage: 1) to increase the theoretical storage capacity of the material, and 2) to bring the operating conditions closer to ambient temperature and pressure. Rowsell and Yaghi have identified several directions to these ends in some of the early papers. Applications: Surface area The general trend in MOFs used for hydrogen storage is that the greater the surface area, the more hydrogen the MOF can store. High surface area materials tend to exhibit increased micropore volume and inherently low bulk density, allowing for more hydrogen adsorption to occur. Applications: Hydrogen adsorption enthalpy High hydrogen adsorption enthalpy is also important. Theoretical studies have shown that 22–25 kJ/mol interactions are ideal for hydrogen storage at room temperature, as they are strong enough to adsorb H2, but weak enough to allow for quick desorption. The interaction between hydrogen and uncharged organic linkers is not this strong, and so a considerable amount of work has gone into the synthesis of MOFs with exposed metal sites, to which hydrogen adsorbs with an enthalpy of 5–10 kJ/mol. Synthetically, this may be achieved by using ligands whose geometries prevent the metal from being fully coordinated, by removing volatile metal-bound solvent molecules over the course of synthesis, and by post-synthetic impregnation with additional metal cations.
(C5H5)V(CO)3(H2) and Mo(CO)5(H2) are great examples of increased binding energy due to open metal coordination sites; however, their high metal-hydrogen bond dissociation energies result in a tremendous release of heat upon loading with hydrogen, which is not favorable for fuel cells. MOFs, therefore, should avoid orbital interactions that lead to such strong metal-hydrogen bonds and employ simple charge-induced dipole interactions, as demonstrated in Mn3[(Mn4Cl)3(BTT)8]2. Applications: An association energy of 22–25 kJ/mol is typical of charge-induced dipole interactions, and so there is interest in the use of charged linkers and metals. The metal–hydrogen bond strength is diminished in MOFs, probably due to charge diffusion, so 2+ and 3+ metal ions are being studied to strengthen this interaction even further. A problem with this approach is that MOFs with exposed metal surfaces have lower concentrations of linkers; this makes them difficult to synthesize, as they are prone to framework collapse. This may diminish their useful lifetimes as well. Applications: Sensitivity to airborne moisture MOFs are frequently sensitive to moisture in the air. In particular, IRMOF-1 degrades in the presence of small amounts of water at room temperature. Studies on metal analogues have revealed the ability of metals other than Zn to withstand higher water concentrations at high temperatures. To compensate for this, specially constructed storage containers are required, which can be costly. Strong metal-ligand bonds, such as in metal-imidazolate, -triazolate, and -pyrazolate frameworks, are known to decrease a MOF's sensitivity to air, reducing the expense of storage. Applications: Pore size In a microporous material where physisorption and weak van der Waals forces dominate adsorption, the storage density is greatly dependent on the size of the pores. Calculations of idealized homogeneous materials, such as graphitic carbons and carbon nanotubes, predict that a microporous material with 7 Å-wide pores will exhibit maximum hydrogen uptake at room temperature. At this width, exactly two layers of hydrogen molecules adsorb on opposing surfaces with no space left in between. Applications: 10 Å-wide pores are also of ideal size because at this width, exactly three layers of hydrogen can exist with no space in between. (A hydrogen molecule has a bond length of 0.74 Å and a van der Waals radius of 1.17 Å for each atom; therefore, its effective van der Waals length is 0.74 Å + 2 × 1.17 Å = 3.08 Å.) Structural defects Structural defects also play an important role in the performance of MOFs. Room-temperature hydrogen uptake via bridged spillover is mainly governed by structural defects, which can have two effects: 1) a partially collapsed framework can block access to pores, thereby reducing hydrogen uptake, and 2) lattice defects can create an intricate array of new pores and channels, causing increased hydrogen uptake. Structural defects can also leave metal-containing nodes incompletely coordinated. This enhances the performance of MOFs used for hydrogen storage by increasing the number of accessible metal centers. Finally, structural defects can affect the transport of phonons, which affects the thermal conductivity of the MOF. Applications: Hydrogen adsorption Adsorption is the process of trapping atoms or molecules that are incident on a surface; therefore the adsorption capacity of a material increases with its surface area.
In three dimensions, the maximum surface area will be obtained by a structure which is highly porous, such that atoms and molecules can access internal surfaces. This simple qualitative argument suggests that the highly porous metal-organic frameworks (MOFs) should be excellent candidates for hydrogen storage devices. Applications: Adsorption can be broadly classified as being one of two types: physisorption or chemisorption. Physisorption is characterized by weak van der Waals interactions, with bond enthalpies typically less than 20 kJ/mol. Chemisorption, alternatively, is defined by stronger covalent and ionic bonds, with bond enthalpies between 250 and 500 kJ/mol. In both cases, the adsorbate atoms or molecules (i.e. the particles which adhere to the surface) are attracted to the adsorbent (solid) surface because of the surface energy that results from unoccupied bonding locations at the surface. The degree of orbital overlap then determines if the interactions will be physisorptive or chemisorptive. Adsorption of molecular hydrogen in MOFs is physisorptive. Since molecular hydrogen has only two electrons, dispersion forces are weak, typically 4–7 kJ/mol, and are only sufficient for adsorption at temperatures below 298 K. A complete explanation of the H2 sorption mechanism in MOFs was achieved by statistical averaging in the grand canonical ensemble, exploring a wide range of pressures and temperatures. Applications: Determining hydrogen storage capacity Two hydrogen-uptake measurement methods are used for the characterization of MOFs as hydrogen storage materials: gravimetric and volumetric. To obtain the total amount of hydrogen in the MOF, both the amount of hydrogen adsorbed on its surface and the amount of hydrogen residing in its pores should be considered. To calculate the absolute adsorbed amount (Nabs), the surface excess amount (Nex) is added to the product of the bulk density of hydrogen (ρbulk) and the pore volume of the MOF (Vpore), as shown in the following equation: Nabs = Nex + ρbulk × Vpore (a numerical sketch of this bookkeeping is given after the discussion of storage methods below). Gravimetric method The increased mass of the MOF due to the stored hydrogen is directly measured by a highly sensitive microbalance. Due to buoyancy, the detected mass of adsorbed hydrogen decreases again when a sufficiently high pressure is applied to the system because the density of the surrounding gaseous hydrogen becomes more and more important at higher pressures. Thus, this "weight loss" has to be corrected using the volume of the MOF's frame and the density of hydrogen. Applications: Volumetric method The change in the amount of hydrogen stored in the MOF is measured by detecting the change in hydrogen pressure at constant volume. The volume of adsorbed hydrogen in the MOF is then calculated by subtracting the volume of hydrogen in free space from the total volume of dosed hydrogen. Applications: Other methods of hydrogen storage There are six possible methods that can be used for the reversible storage of hydrogen with a high volumetric and gravimetric density, which are summarized in the following table (where ρm is the gravimetric density, ρv is the volumetric density, T is the working temperature, and P is the working pressure): Of these, high-pressure gas cylinders and liquid hydrogen in cryogenic tanks are the least practical ways to store hydrogen for the purpose of fuel due to the extremely high pressure required for storing hydrogen gas or the extremely low temperature required for storing liquid hydrogen. The other methods are all being studied and developed extensively.
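As a minimal numerical illustration of the uptake bookkeeping described above, the sketch below applies Nabs = Nex + ρbulk·Vpore. All input values are invented placeholders chosen only to show the arithmetic; they are not measurements for any particular MOF.

```python
# Sketch: converting a measured surface-excess hydrogen uptake into the
# absolute (total) uptake via N_abs = N_ex + rho_bulk * V_pore, as described
# in the text. The numbers below are invented placeholders, not measurements.

def absolute_uptake(n_excess_g_per_g: float,
                    rho_bulk_g_per_cm3: float,
                    v_pore_cm3_per_g: float) -> float:
    """Total stored H2 (g per g of MOF): surface excess plus gas held in the pores."""
    return n_excess_g_per_g + rho_bulk_g_per_cm3 * v_pore_cm3_per_g

# Hypothetical example: 4.5 wt% excess uptake, bulk H2 density of ~0.015 g/cm^3
# at the measurement temperature and pressure, and 1.5 cm^3/g pore volume.
n_abs = absolute_uptake(n_excess_g_per_g=0.045,
                        rho_bulk_g_per_cm3=0.015,
                        v_pore_cm3_per_g=1.5)

print(f"Absolute uptake: {n_abs:.3f} g H2 per g MOF "
      f"({n_abs * 100:.1f} g per 100 g of MOF)")
```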
Applications: Electrocatalysis The high surface area and atomically dispersed metal sites of MOFs make them suitable candidates for electrocatalysis, especially energy-related electrocatalysis. Applications: Until now, MOFs have been used extensively as electrocatalysts for water splitting (the hydrogen evolution reaction and oxygen evolution reaction), carbon dioxide reduction, and the oxygen reduction reaction. Currently there are two routes: 1. Using MOFs as precursors to prepare electrocatalysts with carbon support. 2. Using MOFs directly as electrocatalysts. However, some results have shown that some MOFs are not stable in the electrochemical environment. The electrochemical conversion of MOFs during electrocatalysis may produce the real catalyst materials, and the MOFs are precatalysts under such conditions. Therefore, claiming MOFs as the true electrocatalysts requires in situ techniques coupled with electrocatalysis. Applications: Biological imaging and sensing A potential application for MOFs is biological imaging and sensing via photoluminescence. A large subset of luminescent MOFs use lanthanides in the metal clusters. Lanthanide photoluminescence has many unique properties that make lanthanides ideal for imaging applications, such as characteristically sharp and generally non-overlapping emission bands in the visible and near-infrared (NIR) regions of the spectrum, resistance to photobleaching or "blinking", and long luminescence lifetimes. However, lanthanide emissions are difficult to sensitize directly because they must undergo Laporte-forbidden f-f transitions. Indirect sensitization of lanthanide emission can be accomplished by employing the "antenna effect", where the organic linkers act as antennae and absorb the excitation energy, transfer the energy to the excited state of the lanthanide, and yield lanthanide luminescence upon relaxation. A prime example of the antenna effect is demonstrated by MOF-76, which combines trivalent lanthanide ions and 1,3,5-benzenetricarboxylate (BTC) linkers to form infinite rod SBUs coordinated into a three-dimensional lattice. As demonstrated by multiple research groups, the BTC linker can effectively sensitize the lanthanide emission, resulting in a MOF with variable emission wavelengths depending on the lanthanide identity. Additionally, the Yan group has shown that Eu3+- and Tb3+-MOF-76 can be used for selective detection of acetophenone from other volatile monoaromatic hydrocarbons. Upon acetophenone uptake, the MOF shows a very sharp decrease, or quenching, of the luminescence intensity. For use in biological imaging, however, two main obstacles must be overcome: (1) MOFs must be synthesized on the nanoscale so as not to affect the target's normal interactions or behavior, and (2) the absorbance and emission wavelengths must occur in regions with minimal overlap from sample autofluorescence and other absorbing species, and with maximum tissue penetration. Regarding the first point, nanoscale MOF (NMOF) synthesis has been mentioned in an earlier section. The latter obstacle addresses the limitation of the antenna effect. Smaller linkers tend to improve MOF stability, but have higher energy absorptions, predominantly in the ultraviolet (UV) and high-energy visible regions. A design strategy for MOFs with redshifted absorption properties has been accomplished by using large, chromophoric linkers. These linkers are often composed of polyaromatic species, leading to large pore sizes and thus decreased stability.
To circumvent the use of large linkers, other methods are required to redshift the absorbance of the MOF so that lower-energy excitation sources can be used. Post-synthetic modification (PSM) is one promising strategy. Luo et al. introduced a new family of lanthanide MOFs with functionalized organic linkers. The MOFs, named MOF-1114, MOF-1115, MOF-1130, and MOF-1131, are composed of octahedral SBUs bridged by amino-functionalized dicarboxylate linkers. The amino groups on the linkers served as sites for covalent PSM reactions with either salicylaldehyde or 3-hydroxynaphthalene-2-carboxaldehyde. Both of these reactions extend the π-conjugation of the linker, causing a redshift in the absorbance wavelength from 450 nm to 650 nm. The authors also propose that this technique could be adapted to similar MOF systems and that, by increasing pore volumes with increasing linker lengths, larger π-conjugated reactants could be used to further redshift the absorption wavelengths. Biological imaging using MOFs has been realized by several groups, namely Foucault-Collet and co-workers. In 2013, they synthesized a NIR-emitting Yb3+-NMOF using phenylenevinylene dicarboxylate (PVDC) linkers. They observed cellular uptake in both HeLa cells and NIH-3T3 cells using confocal, visible, and NIR spectroscopy. Although low quantum yields persist in water and HEPES buffer solution, the luminescence intensity is still strong enough to image cellular uptake in both the visible and NIR regimes. Applications: Nuclear wasteform materials The development of new pathways for efficient nuclear waste administration is essential in the wake of increased public concern about radioactive contamination due to nuclear plant operation and nuclear weapon decommissioning. Synthesis of novel materials capable of selective actinide sequestration and separation is one of the current challenges acknowledged in the nuclear waste sector. Metal–organic frameworks (MOFs) are a promising class of materials to address this challenge due to their porosity, modularity, crystallinity, and tunability. Every building block of MOF structures can incorporate actinides. First, a MOF can be synthesized starting from actinide salts; in this case the metal nodes are actinides. In addition, the metal nodes can be extended, or actinides can be introduced by cation exchange. Organic linkers can be functionalized with groups capable of actinide uptake. Lastly, the porosity of MOFs can be used to incorporate guest molecules and trap them in the structure by installing additional or capping linkers. Applications: Drug delivery systems The synthesis, characterization, and drug-related studies of low-toxicity, biocompatible MOFs have shown that they have potential for medical applications. Many groups have synthesized various low-toxicity MOFs and have studied their uses in loading and releasing various therapeutic drugs for potential medical applications. A variety of methods exist for inducing drug release, such as pH response, magnetic response, ion response, temperature response, and pressure response. In 2010 Smaldone et al., an international research group, synthesized a biocompatible MOF termed CD-MOF-1 from cheap edible natural products. CD-MOF-1 consists of repeating base units of 6 γ-cyclodextrin rings bound together by potassium ions. γ-Cyclodextrin (γ-CD) is a symmetrical cyclic oligosaccharide that is mass-produced enzymatically from starch and consists of eight asymmetric α-1,4-linked D-glucopyranosyl residues.
The molecular structure of these glucose derivatives, which approximates a truncated cone, bucket, or torus, generates a hydrophilic exterior surface and a nonpolar interior cavity. Cyclodextrins can interact with appropriately sized drug molecules to yield an inclusion complex. Smaldone's group proposed a cheap and simple synthesis of the CD-MOF-1 from natural products. They dissolved sugar (γ-cyclodextrin) and an alkali salt (KOH, KCl, potassium benzoate) in distilled bottled water and allowed 190 proof grain alcohol (Everclear) to vapor diffuse into the solution for a week. The synthesis resulted in a cubic (γ-CD)6 repeating motif with a pore size of approximately 1 nm. Subsequently, in 2017 Hartlieb et al. at Northwestern did further research with CD-MOF-1 involving the encapsulation of ibuprofen. The group studied different methods of loading the MOF with ibuprofen as well as performing related bioavailability studies on the ibuprofen-loaded MOF. They investigated two different methods of loading CD-MOF-1 with ibuprofen; crystallization using the potassium salt of ibuprofen as the alkali cation source for production of the MOF, and absorption and deprotonation of the free-acid of ibuprofen into the MOF. From there the group performed in vitro and in vivo studies to determine the applicability of CD-MOF-1 as a viable delivery method for ibuprofen and other NSAIDs. In vitro studies showed no toxicity or effect on cell viability up to 100 μM. In vivo studies in mice showed the same rapid uptake of ibuprofen as the ibuprofen potassium salt control sample with a peak plasma concentration observed within 20 minutes, and the cocrystal has the added benefit of double the half-life in blood plasma samples. The increase in half-life is due to CD-MOF-1 increasing the solubility of ibuprofen compared to the pure salt form. Applications: Since these developments many groups have done further research into drug delivery with water-soluble, biocompatible MOFs involving common over-the-counter drugs. In March 2018 Sara Rojas and her team published their research on drug incorporation and delivery with various biocompatible MOFs other than CD-MOF-1 through simulated cutaneous administration. The group studied the loading and release of ibuprofen (hydrophobic) and aspirin (hydrophilic) in three biocompatible MOFs (MIL-100(Fe), UiO-66(Zr), and MIL-127(Fe)). Under simulated cutaneous conditions (aqueous media at 37 °C) the six different combinations of drug-loaded MOFs fulfilled "the requirements to be used as topical drug delivery systems, such as released payload between 1 and 7 days" and delivering a therapeutic concentration of the drug of choice without causing unwanted side effects. The group discovered that the drug uptake is "governed by the hydrophilic/hydrophobic balance between cargo and matrix" and "the accessibility of the drug through the framework". The "controlled release under cutaneous conditions follows different kinetics profiles depending on: (i) the structure of the framework, with either a fast delivery from the very open structure MIL-100 or a slower drug release from the narrow 1D pore system of MIL-127 or (ii) the hydrophobic/hydrophilic nature of the cargo, with a fast (Aspirin) and slow (Ibuprofen) release from the UiO-66 matrix." Moreover, a simple ball milling technique is used to efficiently encapsulate the model drugs 5-fluorouracil, caffeine, para-aminobenzoic acid, and benzocaine. 
Both computational and experimental studies confirm the suitability of [Zn4O(dmcapz)3] to incorporate high loadings of the studied bioactive molecules. Recent research involving MOFs as a drug delivery method includes more than just the encapsulation of everyday drugs like ibuprofen and aspirin. In early 2018 Chen et al. published work detailing their use of the MOF ZIF-8 (zeolitic imidazolate framework-8) in antitumor research "to control the release of an autophagy inhibitor, 3-methyladenine (3-MA), and prevent it from dissipating in a large quantity before reaching the target." The group performed in vitro studies and determined that "the autophagy-related proteins and autophagy flux in HeLa cells treated with 3-MA@ZIF-8 NPs show that the autophagosome formation is significantly blocked, which reveals that the pH-sensitive dissociation increases the efficiency of autophagy inhibition at the equivalent concentration of 3-MA." This shows promise for future research and applicability with MOFs as drug delivery methods in the fight against cancer. Applications: Semiconductors In 2014 researchers showed that they could create electrically conductive thin films of MOFs (Cu3(BTC)2, also known as HKUST-1; BTC: benzene-1,3,5-tricarboxylic acid) infiltrated with the molecule 7,7,8,8-tetracyanoquinodimethane, and that these films could be used in applications including photovoltaics, sensors, and electronic materials, offering a path towards creating semiconductors. The team demonstrated tunable, air-stable electrical conductivity with values as high as 7 siemens per meter, comparable to bronze. Ni3(2,3,6,7,10,11-hexaiminotriphenylene)2 was shown to be a metal-organic graphene analogue that has a natural band gap, making it a semiconductor, and is able to self-assemble. It is an example of a conductive metal-organic framework and represents a family of similar compounds. Because of the symmetry and geometry in 2,3,6,7,10,11-hexaiminotriphenylene (HITP), the overall organometallic complex has an almost fractal nature that allows it to perfectly self-organize. By contrast, graphene must be doped to give it the properties of a semiconductor. Ni3(HITP)2 pellets had a conductivity of 2 S/cm, a record for a metal-organic compound. In 2018 researchers synthesized a two-dimensional semiconducting MOF, Fe3(THT)2(NH4)3 (THT: 2,3,6,7,10,11-triphenylenehexathiol), and showed that it exhibits high electrical mobility at room temperature. In 2020 the same material was integrated into a photo-detecting device, detecting a broad wavelength range from UV to NIR (400–1575 nm). This was the first time a two-dimensional semiconducting MOF was demonstrated to be used in opto-electronic devices. Applications: Cu3(HHTP)2 is a 2D MOF structure, and there are limited examples of materials which are intrinsically conductive, porous, and crystalline. Layered 2D MOFs have porous crystalline structures that show electrical conductivity. These materials are constructed from trigonal linker molecules (phenylene or triphenylene) bearing six functional groups (–OH, –NH2, or –SH). The trigonal linker molecules and square-planar coordinated metal ions such as Cu2+, Ni2+, Co2+, and Pt2+ form layers with hexagonal structures that resemble graphene on a larger scale. Stacking of these layers can build one-dimensional pore systems. Graphene-like 2D MOFs have shown decent conductivities.
This makes them good candidates as electrode materials for hydrogen evolution from water, oxygen reduction reactions, supercapacitors, and sensing of volatile organic compounds (VOCs). Among these MOFs, Cu3(HHTP)2 has exhibited the lowest conductivity but the strongest response in VOC sensing. Applications: Bio-mimetic mineralization Biomolecules can be incorporated during the MOF crystallization process. Biomolecules including proteins, DNA, and antibodies could be encapsulated within ZIF-8. Enzymes encapsulated in this way were stable and active even after being exposed to harsh conditions (e.g. aggressive solvents and high temperature). ZIF-8, MIL-88A, HKUST-1, and several luminescent MOFs containing lanthanide metals were used for the biomimetic mineralization process. Applications: Carbon capture Adsorbent MOFs' small, tunable pore sizes and high void fractions make them promising adsorbents for CO2 capture. MOFs could provide a more efficient alternative to traditional amine solvent-based methods in CO2 capture from coal-fired power plants. MOFs could be employed in each of the three main carbon capture configurations for coal-fired power plants: pre-combustion, post-combustion, and oxy-combustion. The post-combustion configuration is the only one that can be retrofitted to existing plants, so it draws the most interest and research. The flue gas would be fed through a MOF in a packed-bed reactor setup. Flue gas is generally 40 to 60 °C with a partial pressure of CO2 of 0.13–0.16 bar. CO2 can bind to the MOF surface through either physisorption (via van der Waals interactions) or chemisorption (via covalent bond formation). Once the MOF is saturated, the CO2 is extracted from the MOF through either a temperature swing or a pressure swing. This process is known as regeneration. In a temperature swing regeneration, the MOF would be heated until CO2 desorbs. To achieve working capacities comparable to the amine process, the MOF must be heated to around 200 °C. In a pressure swing, the pressure would be decreased until CO2 desorbs. Another relevant MOF property is their low heat capacity. Monoethanolamine (MEA) solutions, the leading capture method, have a heat capacity between 3 and 4 J/(g⋅K) since they are mostly water. This high heat capacity contributes to the energy penalty in the solvent regeneration step, i.e. when the adsorbed CO2 is removed from the MEA solution. MOF-177, a MOF designed for CO2 capture, has a heat capacity of 0.5 J/(g⋅K) at ambient temperature (a back-of-the-envelope comparison of the resulting sensible-heat penalty is sketched at the end of this carbon capture discussion). MOFs adsorb 90% of the CO2 using a vacuum pressure swing process. The MOF Mg(dobdc) has a 21.7 wt% CO2 loading capacity. Applied to a large-scale power plant, the cost of energy would increase by 65%, while a U.S. NETL baseline amine-based system would cause an increase of 81% (the goal is 35%). The capture cost would be $57/ton, while for the amine system the cost is estimated to be $72/ton. At that rate, the capital required to implement such a project in a 580 MW power plant would be $354 million. Applications: Catalyst A MOF loaded with propylene oxide can act as a catalyst, converting CO2 into cyclic carbonates (ring-shaped molecules with many applications). Such MOFs can also remove CO2 from biogas. This MOF is based on lanthanides, which provide chemical stability. This is especially important because the gases the MOF will be exposed to are hot, high in humidity, and acidic.
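The heat-capacity argument made above for sorbent regeneration can be made concrete with a back-of-the-envelope sensible-heat estimate, Q ≈ cp × ΔT per unit mass of sorbent. In the sketch below, the heat capacities are the values quoted above, but the temperature swing and CO2 working capacities are rough assumptions chosen only to illustrate the comparison; they are not process data.

```python
# Back-of-the-envelope sketch of the sensible-heat penalty of regeneration,
# Q = cp * dT per gram of sorbent, normalized per gram of CO2 released.
# Heat capacities follow the values quoted above (MEA solution ~3.5 J/(g*K),
# MOF-177 ~0.5 J/(g*K)); the temperature swing and CO2 working capacities
# are rough illustrative assumptions, not process data.

def sensible_heat_per_g_co2(cp_j_per_g_k: float,
                            delta_t_k: float,
                            co2_capacity_g_per_g: float) -> float:
    """Sensible heat (J) spent heating the sorbent per gram of CO2 released."""
    return cp_j_per_g_k * delta_t_k / co2_capacity_g_per_g

mea = sensible_heat_per_g_co2(cp_j_per_g_k=3.5, delta_t_k=80.0,
                              co2_capacity_g_per_g=0.05)   # assumed swing and loading
mof = sensible_heat_per_g_co2(cp_j_per_g_k=0.5, delta_t_k=80.0,
                              co2_capacity_g_per_g=0.15)   # assumed swing and loading

print(f"MEA solution : ~{mea:.0f} J of sensible heat per g CO2")
print(f"MOF sorbent  : ~{mof:.0f} J of sensible heat per g CO2")
```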
Triaminoguanidinium-based POFs and Zn/POFs are new multifunctional materials for environmental remediation and biomedical applications. Applications: Desalination/ion separation MOF membranes can achieve substantial ion selectivity, which offers potential for use in desalination and water treatment. As of 2018, reverse osmosis supplied more than half of global desalination capacity and formed the last stage of most water treatment processes. Reverse osmosis does not exploit the dehydration of ions or the selective ion transport found in biological channels, and it is not energy efficient. The mining industry uses membrane-based processes to reduce water pollution and to recover metals. MOFs could be used to extract metals such as lithium from seawater and waste streams. MOF membranes such as ZIF-8 and UiO-66 membranes with uniform subnanometer pores, consisting of angstrom-scale windows and nanometer-scale cavities, displayed ultrafast selective transport of alkali metal ions. The windows acted as ion selectivity filters for alkali metal ions, while the cavities functioned as pores for transport. The ZIF-8 and UiO-66 membranes showed a LiCl/RbCl selectivity of ~4.6 and ~1.8, respectively, much higher than the 0.6 to 0.8 selectivity of traditional membranes. A 2020 study suggested that a new MOF called PSP-MIL-53 could be used along with sunlight to purify water in just half an hour. Applications: Gas separation MOFs are also predicted to be very effective media for separating gases at low energy cost, based on computational high-throughput screening of their adsorption or gas breakthrough/diffusion properties. One example is NbOFFIVE-1-Ni, also referred to as KAUST-7, which can separate propane and propylene via diffusion at nearly 100% selectivity. The molecular selectivity provided by a Cu-BDC surface-mounted metal-organic framework (SURMOF-2) grown on an alumina layer on top of a back-gated graphene field-effect transistor (GFET) can provide a sensor that is sensitive to ethanol but not to methanol or isopropanol. Applications: Water vapor capture and dehumidification MOFs have been demonstrated to capture water vapor from the air. In 2021, under humid conditions, a polymer-MOF lab prototype yielded 17 liters (4.5 gal) of water per kg per day without added energy. MOFs could also be used to increase energy efficiency in room-temperature space cooling applications. Applications: When cooling outdoor air, a cooling unit must deal with both the air's sensible heat and latent heat. Typical vapor-compression air-conditioning (VCAC) units manage the latent heat in air through cooling fins held below the dew point temperature of the moist air at the intake. These fins condense the water, dehydrating the air and thus substantially reducing the air's heat content. The cooler's energy usage is highly dependent on the cooling coil's temperature and would be improved greatly if the temperature of this coil could be raised above the dew point. This makes it desirable to handle dehumidification through means other than condensation. One such means is to adsorb the water from the air into a desiccant coated onto the heat exchangers, using the waste heat exhausted from the unit to desorb the water from the sorbent and thus regenerate the desiccant for repeated use.
This is accomplished by having two condenser/evaporator units through which the flow of refrigerant can be reversed once the desiccant on the condenser is saturated, making the condenser the evaporator and vice versa. MOFs' extremely high surface areas and porosities have made them the subject of much research in water adsorption applications. Chemical tuning can adjust the optimal relative humidity for adsorption/desorption and the sharpness of the water uptake. Applications: Ferroelectrics and multiferroics Some MOFs also exhibit spontaneous electric polarization, which occurs due to the ordering of electric dipoles (polar linkers or guest molecules) below a certain phase-transition temperature. If this long-range dipolar order can be controlled by an external electric field, the MOF is called ferroelectric. Some ferroelectric MOFs also exhibit magnetic ordering, making them single-structural-phase multiferroics. This material property is highly interesting for the construction of memory devices with high information density. In the type-I molecular multiferroic [(CH3)2NH2][Ni(HCOO)3], the coupling mechanism is indirect coupling mediated by spontaneous elastic strain.
**Magic: The Gathering core sets, 2009–2015** Magic: The Gathering core sets, 2009–2015: Seven Magic: The Gathering core sets have been released since 2009: Magic 2010, Magic 2011, Magic 2012, Magic 2013, Magic 2014, Magic 2015, and Magic Origins. Unlike 10th Edition and previous core sets, roughly half of each core set consisted of entirely new cards. Beginning with Magic 2010, Wizards decided to introduce new cards into the Core Set so that it could be relevant for both new players and veterans. Starting with Magic 2011, core sets have included "returning mechanics", or non-evergreen keywords with cards printed in just one core set. All of these core sets were released in the summer of the year prior to the year in the title; for example, Magic 2010 was released in 2009. Magic: The Gathering core sets, 2009–2015: After Magic Origins, Wizards of the Coast stopped production of core sets, opting for a new model where two blocks with two sets each are made each year, rather than one block of three sets and a core set. Magic head designer Mark Rosewater wrote that the Core Set's dual identity of needing to interest established players while being simple enough for new players led to "odd compromises", and cited the potential upsides of doing two blocks per year, such as visiting new settings and revisiting old ones faster. Later, in 2017, Wizards of the Coast announced that core sets would be returning under a different name, starting with Core Set 2019, released on July 13, 2018. Magic 2010: Magic 2010 was released on July 17, 2009. It is the eleventh core set for Magic: The Gathering. It is the first core set since Limited Edition Beta (which included two cards accidentally left out of the original Limited Edition Alpha) to feature new cards; every core set between Beta and Magic 2010 had contained only reprints from previous sets. About half the cards were new, the rest being reprints. Magic 2010: Magic 2010 (also known as M10) marked a major shift in the way Wizards of the Coast produces and markets the "Core" set of their marquee trading card game, Magic: The Gathering. M10 was the first core set since Revised (the third edition) not to be labeled with an ordinal number. Another important marketing change starting with M10 was Wizards of the Coast's decision to release a new core set every year, instead of every two years as they had done since 1995. Previous policy regarding which cards to reprint in the core sets had led to the Core Set product drifting away from its intended function. There were 112 new cards printed in M10, the remainder being reprints. M10 was the first core set to use the "mythic rare" rarity as well as the first core set to include planeswalkers, a relatively new card type which was first introduced in 2007. All five of the initial set of planeswalkers from Lorwyn were reprinted in M10 as mythic rares. Magic 2010: Rule changes Wizards of the Coast also overhauled the core rules of the game with the introduction of Magic 2010. The changes included the renaming of several zones and actions of the game, the elimination of the 'mana burn' rule, and, more relevant for gameplay, an alteration to the way combat damage is assigned.
This was the first major alteration of the game rules since the introduction of 6th Edition rules in 1999, and was instituted to make the game more streamlined and intuitive; previous damage-assignment rules, for instance, would allow a creature to, in the words of Magic Rules Manager Mark Gottlieb, "swing its fist to punch, vanish from the battlefield, and [still] have that punch land." The rule changes, as with most rules changes, raised some controversy. Magic 2011: Magic 2011 was released on July 16, 2010. It was the twelfth core set for Magic: The Gathering. The set contained 110 new cards and 139 reprints. Magic 2011: Magic 2011 contains the keyword scry. This marks the first time that a mechanic from an expert-level set has been printed in a core set without making that mechanic evergreen, or permanently available for use in all future sets. Also, this set introduced the concept of "planeswalker signature cards": cards of lesser rarities that are tied directly to the central planeswalker characters of the set (e.g., "Ajani's Pridemate" and "Ajani's Mantra" were included as a reference to the planeswalker "Ajani Goldmane"). These cards were made to make the identity of the planeswalkers more accessible to players, as the planeswalker cards themselves are available only at mythic rarity. Magic 2011: A notable cycle first printed in M11 was the "Titan cycle" of Sun Titan, Frost Titan, Grave Titan, Inferno Titan, and Primeval Titan. Magic 2012: Magic 2012 was released on July 15, 2011. It is the thirteenth core set for Magic: The Gathering. The set contains 97 new cards. Magic 2012: Magic 2012 was the first set to use "dies" to mean a creature being put into a graveyard from the battlefield. It is the first core set to use the keyword "Hexproof", a keyword ability replacing the text "cannot be the target of spells or abilities your opponents control" (cards with this ability had been printed in previous sets, but the ability had not been given a keyword). The returning mechanic in Magic 2012 was Bloodthirst. When creatures with Bloodthirst are played, they gain a boost to their power and toughness if an opponent was already dealt damage that turn. For example, a 2/3 creature with Bloodthirst 3 could enter the battlefield as a 5/6. Bloodthirst was previously seen in Guildpact and Future Sight. Magic 2013: Magic 2013 was released on July 13, 2012. The tagline for the set is "Face a Greater Challenge." There were 108 new cards printed in this set. Magic 2013: Magic 2013 is the first core set to have a multicolored card, Nicol Bolas, Planeswalker (Bolas is also referenced on a number of other cards). It is the second Magic core set (Tenth Edition was the first) to feature legendary cards: one legendary creature of each color plus the artifact Akroma's Memorial. Magic 2013 contains the Exalted mechanic, which first appeared in the Shards of Alara block. It is featured as the "returning mechanic" in Magic 2013, as both reprinted Alara cards and new cards with Exalted are in Magic 2013. The Exalted ability gives a creature you control +1/+1 when it is the only creature attacking that combat, and multiple instances of Exalted are cumulative (e.g. 3 sources of Exalted will give a lone attacking creature +3/+3). Magic 2014: Magic 2014 was released on July 19, 2013. The tagline for the set is "Ignite your Spark." As Bolas was the mascot of M13, Chandra was the mascot of M14.
The returning mechanic of Magic 2014 is Slivers, a creature type in which each Sliver grants an ability to other Slivers. Magic 2014 marked a change to the Legend rule. It made the "Indestructible" effect a keyword, and changed the phrasing for unblockable creatures to "can't be blocked." Slivers in Magic 2014 also worked subtly differently from Slivers in earlier Magic; they now only affected Slivers owned by the same controller, rather than all Slivers in the game. Slivers also received an art redesign that de-emphasized their original beak-headed, one-clawed, one-tailed, insect-like appearance; instead they became monstrous humanoids whose appearance varied heavily by card but had "normal" features such as faces and eyes. This redesign proved controversial; one reviewer noted "slivers are one of the most iconic designs in all of Magic: The Gathering. To essentially muddle them down into just another humanoid monster thing is really disappointing." Wizards of the Coast acknowledged the negative feedback, noting that some players disliked the new art style, and included a card in Magic 2015, Sliver Hive, that used the original Sliver appearance. Magic 2015: Magic 2015 was released on July 18, 2014. Magic 2015 made the second major change to the card frame in Magic's history (the first being in Eighth Edition). Changes include a slight font change (starting with Magic 2015, an in-house font known as Beleren is used rather than the Matrix Bold font), the addition of a holofoil stamp in the bottom center of all rare and mythic rare cards, a slightly narrower black border, and a redesign of the collector's info at the bottom of each card. The new border made it easier for machines to read the cards, helping to prevent mispackaging. Advertising for the set featured the planeswalker Garruk Wildspeaker, with a tagline of "Hunt Bigger Game." Magic 2015 includes the mechanic "Convoke", which originally appeared in Ravnica: City of Guilds. This mechanic allows a player to tap their creatures to help pay the cost of spells with Convoke. It also includes 15 cards designed by notable non-employee Magic fans, such as Richard Garriott, George Fan, and Notch, some of which also appear in Duels of the Planeswalkers 2015. Magic Origins: Magic Origins was released on July 17, 2015. Magic Origins told the origin stories of 5 planeswalkers who are featured in sets after Origins. It featured a cycle of double-faced cards (originally used in Innistrad) that have a legendary creature on one side representing the character before their transformation, and a planeswalker on the reverse face that represents them after gaining their new power. Magic Origins: The set introduced the mechanics of renown, spell mastery, and menace. It also features cards with the prowess mechanic, which was introduced in the Khans of Tarkir block, and the scry mechanic, which was introduced in the Mirrodin block. In February 2015, Wizards of the Coast said that it would be an introductory product "like a core set", and it was published in the time period that a core set would have taken up under the pattern established by Magic 2010. After Origins, Magic switched to a new schedule where each year contained 2 blocks, and each block contained 2 sets.
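As a toy illustration of the stat arithmetic behind two of the returning mechanics described above (Bloodthirst and Exalted), the sketch below reproduces the 2/3-with-Bloodthirst-3 and +3/+3 Exalted examples from the text. It is a deliberate simplification for illustration only, not an implementation of the game's comprehensive rules.

```python
# Toy arithmetic for two returning mechanics described above.
# This is a simplification, not an implementation of the actual rules.

def bloodthirst(power: int, toughness: int, n: int, opponent_damaged: bool):
    """Bloodthirst N: the creature enters with N +1/+1 counters if an
    opponent was already dealt damage that turn."""
    bonus = n if opponent_damaged else 0
    return power + bonus, toughness + bonus

def exalted(power: int, toughness: int, sources: int, attacking_alone: bool):
    """Each Exalted source gives a lone attacker +1/+1; bonuses stack."""
    bonus = sources if attacking_alone else 0
    return power + bonus, toughness + bonus

print(bloodthirst(2, 3, 3, opponent_damaged=True))            # (5, 6)
print(exalted(1, 1, sources=3, attacking_alone=True))          # (4, 4), i.e. +3/+3
```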
**Canadian system of soil classification** Canadian system of soil classification: The Canadian System of Soil Classification is more closely related to the American system than any other, but they differ in several ways. The Canadian system is designed to cover only Canadian soils. The Canadian system dispenses with the sub-order hierarchical level. Solonetzic and Gleysolic soils are differentiated at the order level. History: Before 1955, Canadian soil testing was based on systems of classification which were similar to methods being used in the United States. In 1955, a taxonomic system of soil classification specific to Canadian conditions was introduced. This system was designed to differentiate soils created by pedogenic processes in cool climatic environments. Classification process: The land area of Canada (excluding inland waters) is approximately 9 180 000 km2, of which about 1 375 000 km2 (15%) is rock land. The remainder is classified according to the Canadian System of Soil Classification. This system differentiates soil types on the basis of measured properties of the profile and uses a hierarchical scheme to classify soils from general to specific. The most recent version of the classification system has five categories in its hierarchical structure. From general to specific, the major categories in this system are: Orders, Great Groups, Subgroups, Families, and Series. Soil classes are defined as specifically as possible to permit uniformity of classification. Limits between classes are arbitrary, as there are few sharp divisions of the soil continuum in nature. Differences in soils are the result of the interaction of many factors: climate, organisms, parent material, relief and time. The soil classification system changes as knowledge grows through soil mapping and research in Canada and elsewhere. Classification process: Classification involves arranging individual units with similar characteristics into groups. Soils do not occur as discrete entities; thus the unit of measurement for soil is not obvious. This unit of measurement is called the pedon, defined as a 3-dimensional body, commonly with lateral dimensions of 1 metre and a depth of 1 to 2 metres. A vertical section of a pedon displays the more-or-less horizontal layers (horizons) developed by the action of soil-forming processes. Soil classification facilitates the organization and communication of information about soils, as well as the understanding of relationships between soils and environmental factors. At its most general level, the Canadian System recognizes ten different Soil Orders. Classification process: To classify a soil in practice, an identification key in The Canadian System of Soil Classification is used. Decisions are made based on the properties of the horizons, such as thickness, Munsell colour, pH, or evidence of other soil-forming processes (e.g., eluviation). Orders: Cryosolic Order These soils have permafrost (permanently frozen material) within one metre of the surface (2 m if the soil is strongly cryoturbated, i.e., disturbed by frost action). As permafrost is a barrier to roots and water, the active layer (seasonally thawed material) above it may become a saturated, semifluid material in spring. Commonly the permafrost layer near the surface contains abundant ice. Melting of ice and frozen materials, resulting from disturbance of the surface vegetation (boreal forest or tundra), may cause slumping of the soil and disruption of roads, pipelines and buildings.
Cryosolic soils, occupying about 3 672 000 km2 (about 40%) of Canada's land area, are dominant in much of the Yukon, Northwest Territories and Nunavut and occur in northern areas of all but the Atlantic provinces (excluding Labrador). Orders: The order and its 3 great groups were defined in 1973, after soil and terrain surveys in the Mackenzie Valley yielded new knowledge about the properties, genesis and significance of these soils. Turbic Cryosols have a patterned surface (hummocks, stone nets, etc.) and mixed horizons or other evidence of cryoturbation. Static Cryosols lack marked evidence of cryoturbation; they are associated with sandy or gravelly materials. Organic Cryosols are composed dominantly of organic materials (e.g., peat). Because organic material acts as an insulator, Organic Cryosols occur farther south than the boundary of continuous permafrost. Orders: Organic Order These soils are composed predominantly of organic matter in the upper half metre (more than 30% organic matter by weight) and do not have permafrost near the surface. They are the major soils of peatlands (e.g., swamp, bog, fen). Most organic soils develop by the accumulation of plant materials from species that grow well in areas usually saturated with water. Some organic soils are composed largely of plant materials deposited in lakes; others, mainly of forest leaf litter on rocky slopes in areas of high rainfall. Organic soils cover almost 374 000 km2 (4.1%) of Canada's land area: large areas occur in Manitoba, Ontario and northern Alberta, smaller areas in other provinces and territories. Orders: Organic soils are subdivided into 4 great groups. Fibrisols, common in Canada, consist predominantly of relatively undecomposed organic material with clearly visible plant fragments; resistant fibres account for over 40% by volume. Most soils derived from Sphagnum mosses are Fibrisols. Mesisols are more highly decomposed and contain less fibrous material than Fibrisols (10–40% by volume). Humisols consist mainly of humified organic materials and may contain up to 10% fibre by volume. Folisols consist mainly of thick deposits of forest litter overlying bedrock, fractured bedrock or unconsolidated material. They occur commonly in wet mountainous areas of coastal British Columbia. Orders: Vertisolic Order These clay-rich soils shrink and swell markedly on drying and wetting. The physical disruption associated with shrinking and swelling produces shiny shear planes (slickensides) in the subsoil and either prevents the formation of subsurface horizons or severely disrupts and mixes them. When the soil swells on wetting, the former surficial material is mixed with the subsoil. Vertisolic soils develop mainly in clayey materials in semiarid to subhumid areas of the Interior Plains of Saskatchewan, Manitoba and Alberta and occupy less than 1% of the land area of Canada. Orders: The order and its 2 great groups were recognized in the Canadian system in the 1990s after extensive studies of pedons in the Great Plains. The Vertisol great group has a light-coloured A horizon that is not readily distinguishable, and the Humic Vertisol great group has a dark-coloured A horizon enriched in organic matter that is clearly distinguishable from the underlying soil material. Orders: Podzolic Order These acid soils have a B horizon containing accumulations of amorphous materials composed of humified organic matter associated with aluminum and iron.
They develop most commonly in sandy materials in areas of cold, humid climate under forest or shrub vegetation. Water moving downward through the relatively porous material leaches out basic elements (e.g., calcium), and acidic conditions develop. Soluble organic substances formed by decomposition of the forest litter attack soil minerals in surface horizons, and much of the iron and aluminum released combines with this organic material. When the proportion of aluminum and iron to organic matter reaches a critical level, the organic complex becomes insoluble and is deposited in the B horizon. Dissolved aluminum and iron may also move downward in inorganic forms and be deposited as aluminum-silicon complexes and iron oxides. An Ae (light grey, strongly leached) horizon usually overlies the Podzolic B horizon. Orders: Podzolic soils occupy about 1 429 000 km2 (15.6%) of Canada's land area and are dominant in vast areas of the humid Appalachian and Canadian Shield regions and in the humid coastal region of British Columbia. They are divided among 3 great groups on the basis of the kind of Podzolic B horizon. Humic Podzols have a dark B horizon with a low iron content. They occur mainly in wet sites under humid climates and are much less common than other Podzolic soils. Orders: Ferro-Humic Podzols have a dark reddish-brown or black B horizon containing at least 5% organic carbon and appreciable amounts (often 2% or more) of aluminum and iron in organic complexes. They occur commonly in the more humid parts of the area of Podzolic soils; e.g., coastal British Columbia and parts of Newfoundland and southern Quebec. In Labrador along the Churchill River valley Ferro-Humic Podzols comprise about 36% of soils. Humo-Ferric Podzols, the most common Podzolic soils in Canada, have a reddish-brown B horizon containing less than 5% organic carbon associated with aluminum and iron complexes. Orders: Gleysolic Order These soils are periodically or permanently saturated with water and depleted of oxygen. They occur commonly in shallow depressions and level areas of subhumid and humid climate in association with other classes of soil on slopes and hills. After snowmelt or heavy rains, depressions in the landscape may be flooded. If flooding occurs when the soil temperature is above approximately 5 °C, microbial activity results in depletion of oxygen within a few days. Under such conditions, oxidized soil components (e.g., nitrate, ferric oxide) are reduced. Depletion of ferric oxide removes the brownish colour common to many soils, leaving them grey. As the soil dries and oxygen re-enters, the reduced iron may be oxidized locally to bright yellow-brown spots (mottles). Thus, Gleysolic soils are usually identified by their poor drainage and drab grey colour, sometimes accompanied by brown mottles. Gleysolic soils cover about 117 000 km2 (1.3%) of Canada's land area. Orders: Three great groups of Gleysolic soils are defined. Humic Gleysols have a dark A horizon enriched in organic matter. Gleysols lack such a horizon. Luvic Gleysols have a leached (Ae) horizon underlain by a B horizon in which the clay has accumulated; they may have a dark surface horizon. Orders: Solonetzic Order These soils have B horizons that are very hard when dry, swelling to a sticky, compact mass when wet. They usually develop in saline parent materials in semiarid and subhumid regions. 
Properties of the B horizons are associated with sodium ions that cause the clay to disperse readily and swell on wetting, thus closing the large pores and preventing water flow. Solonetzic soils cover almost 73 000 km2 (0.7%) of Canada's land area; most occur in southern Alberta, because of the large areas of saline parent material and the semiarid climate. Orders: The 4 great groups of Solonetzic soils are based on properties reflecting the degree of leaching. Solonetz soils have a dark, organic-matter-enriched A horizon overlying the Solonetzic B, which occurs usually at a depth of 20 cm or less. The Ae (grey, leached) horizon is very thin or absent. Solodized Solonetz have a distinct Ae horizon between the dark A and the Solonetzic B. Solods have a transitional AB or BA horizon formed by degradation of the upper part of the Solonetzic B horizon. Vertisolic Solonetzic soils have features intergrading to the Vertisolic order in addition to any of the above Solonetzic features. The developmental sequence of Solonetzic soils is commonly from saline parent material to Solonetz, Solodized Solonetz and Solod. As leaching progresses, the salts and sodium ions are translocated downward. If leaching progresses long enough and the salts are removed, the Solonetzic B horizon may disintegrate completely. The soil would then be classified in another order. Resalinization may occur and reverse the process associated with leaching. Orders: Chernozemic Order These soils have an A horizon darkened by the addition of organic matter, usually from the decay of grass roots. The A horizon is neutral to slightly acid and is well supplied with bases such as calcium. The C horizon usually contains calcium carbonate (lime); it may contain more soluble salts such as gypsum. Chernozemic soils have mean annual soil temperatures above 0 °C and occur in regions of semiarid and subhumid climates. Covering more than 4% of Canada's land area, they are the major class of soils in the southern Interior Plains, where grass is the dominant native vegetation. Orders: The 4 great groups of Chernozemic soils are distinguished based upon surface horizon colour, associated with the relative dryness of the soil. Brown soils have brownish A horizons and occur in the driest area of the Chernozemic region. Dark Brown soils have a darker A horizon than Brown soils, reflecting a somewhat higher precipitation and associated higher organic-matter content. Black soils, associated with subhumid climates and tall-grass native vegetation, have a black A horizon which is usually thicker than that of Brown or Dark Brown soils. Dark Gray soils are transitional between grassland Chernozemic soils and the more strongly leached soils of forested regions. Orders: Luvisolic Order These soils have eluvial horizons from which clay has been leached after snowmelt or heavy rains and illuvial horizons in which clay has been deposited; these horizons are designated Ae and Bt, respectively. In saline or calcareous materials, clay translocation is preceded by leaching of salts and carbonates. Luvisolic soils occur typically in forested areas of subhumid to humid climate where the parent materials contain appreciable clay. Luvisolic soils cover about 809 000 km2 (8.8%) of Canada's land area. Large areas of Luvisolic soils occur in the central to northern Interior Plains; smaller areas occur in all regions south of the permafrost zone. Orders: The 2 great groups of Luvisolic soils are distinguished mainly on the basis of soil temperature.
Gray Brown Luvisols have a dark Ah horizon in which organic matter has been mixed with the mineral material (commonly by earthworm activity), a brown, often platy eluvial horizon (Ae) and an illuvial horizon (Bt) in which blocky structure is common. Their mean annual soil temperature is 8 °C or higher. The major area of Gray Brown Luvisols is found in the southern part of the Great Lakes-St Lawrence Lowlands. Gray Luvisols have eluvial and illuvial horizons and may have an Ah horizon if the mean annual soil temperature is below 8 °C. Vast areas of Gray Luvisols in the Boreal Forest Zone of the Interior Plains have thick, light grey eluvial horizons underlying the forest litter and thick Bt horizons with clay coating the surface of aggregates. Orders: Brunisolic Order This order includes all soils that have developed B horizons but do not meet the requirements of any of the orders described previously. Many Brunisolic soils have brownish B horizons without much evidence of clay accumulation, as in Luvisolic soils, or of amorphous materials, as in Podzolic soils. With time and stable environmental conditions, some Brunisolic soils will evolve to Luvisolic soils; others, to Podzolic soils. Covering almost 790 000 km2 (8.6%) of Canada's land area, Brunisolic soils occur in association with other soils in all regions south of the permafrost zone. Orders: Four great groups are distinguished on the basis of organic matter enrichment in the A horizon and acidity. Melanic Brunisols have an Ah horizon at least 10 cm thick and a pH above 5.5. They occur commonly in southern Ontario and Quebec. Eutric Brunisols have the same basic properties as Melanic Brunisols, except that the Ah horizon, if any, is less than 10 cm thick. Sombric Brunisols have an Ah horizon at least 10 cm thick and are acid, with a pH below 5.5. Dystric Brunisols are acidic and do not have an Ah horizon at least 10 cm thick. Orders: Regosolic Order These soils are too weakly developed to meet the limits of any other order. The absence or weak development of genetic horizons may result from a lack of time for development or from instability of materials. The properties of Regosolic soils are essentially those of the parent material. Two great groups are defined. Regosols consist essentially of C horizons. Humic Regosols have an Ah horizon at least 10 cm thick. Regosolic soils cover about 73 000 km2 (0.8%) of Canada's land area. Subgroups, families and series: Great Group The 31 great group classes are formed by subdividing order classes on the basis of soil properties that reflect differences in soil-forming processes (e.g., kinds and amounts of organic matter in surface soil horizons). Subgroups, families and series: Subgroups Subgroups are based on the sequence of horizons in the pedon. Many subgroups intergrade to other soil orders. For example, the Gray Luvisol great group includes 12 subgroups; Orthic Gray Luvisol is the typical expression of Gray Luvisols, and other subgroups are intergrades to the Chernozemic order (Dark Gray Luvisol), Podzolic order (Podzolic Gray Luvisol), Gleysolic order (Gleyed Gray Luvisol), Solonetzic and Gleysolic orders (Gleyed Solonetzic Gray Luvisol), etc. Subgroups, families and series: Families Families are based on parent material properties and soil climate. For example, the Orthic Gray Luvisol subgroup includes soils of a wide range of textures (gravelly sandy loam to clay), different mineralogies, and different temperature and water regimes.
The soil family designation is much more specific; e.g., Orthic Gray Luvisol, clayey, mixed (mineralogy), cold, subhumid. Subgroups, families and series: Series Series are defined by a vast array of properties (e.g., horizon thickness and colour, gravel content, structure), each of which falls within a narrow range. Thus, for example, the series name Breton implies all the basic properties of the Luvisolic order, the Gray Luvisol great group, the Orthic Gray Luvisol subgroup and the fine, loamy, mixed, cold subhumid family of that subgroup, as well as series-specific properties. A series name implies so much specific information about soil properties that a wide range of interpretations can be made about the probable suitability of the soil for a variety of uses.
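As a rough illustration of how the hierarchy narrows from Order down to Series, the sketch below encodes the Breton example above as nested labels and applies a toy order-level key using two of the diagnostic criteria described earlier (permafrost within 1 m for Cryosolic soils, more than 30% organic matter in the upper half metre for Organic soils). The data structure and the two-test key are illustrative simplifications; the actual identification key in The Canadian System of Soil Classification uses many more horizon properties.

```python
# Illustrative sketch only: a toy encoding of the five hierarchical levels
# and a drastically simplified order-level identification key.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SoilClassification:
    """The five hierarchical levels, from general to specific."""
    order: str
    great_group: str
    subgroup: str
    family: str
    series: Optional[str] = None

# The Breton series example from the text.
breton = SoilClassification(
    order="Luvisolic",
    great_group="Gray Luvisol",
    subgroup="Orthic Gray Luvisol",
    family="fine, loamy, mixed, cold subhumid",
    series="Breton",
)

def order_key(permafrost_depth_m: Optional[float],
              organic_matter_pct_upper_half_m: float) -> str:
    """Toy order-level key using two of the criteria described above."""
    if permafrost_depth_m is not None and permafrost_depth_m <= 1.0:
        return "Cryosolic"
    if organic_matter_pct_upper_half_m > 30.0:
        return "Organic"
    return "one of the other eight orders (further horizon criteria needed)"

print(breton)
print(order_key(permafrost_depth_m=0.6, organic_matter_pct_upper_half_m=5.0))
```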
**Peanut butter bun** Peanut butter bun: A peanut butter bun is a sweet bun found in Chinatown bakery shops. The bun has layers of peanut butter filling, sometimes with a light sprinkle of sugar mixed into the peanut butter for extra flavor. Unlike other similar buns, its shape varies depending on the bakery. The dough is made of flour, sugar, water, yeast, milk, and cream. Before baking, the bun is often brushed with sugar water in order to develop a nice glaze.
**Project MINARET** Project MINARET: Project MINARET was a domestic espionage project operated by the National Security Agency (NSA), which, after intercepting electronic communications that contained the names of predesignated US citizens, passed them to other government law enforcement and intelligence organizations. Intercepted messages were disseminated to the FBI, CIA, Secret Service, Bureau of Narcotics and Dangerous Drugs (BNDD), and the Department of Defense. The project was a sister project to Project SHAMROCK. History: Starting in 1962, the NSA had a "watch list" of Americans travelling to Cuba, later expanded to include narcotics traffickers. Then, from 1967 onwards, President Lyndon B. Johnson added the names of activists in the anti-war movement. President Richard Nixon further expanded the list to include civil rights leaders, journalists and two senators. The NSA itself added the writer David Kahn to the list. The names were placed on "watch lists" of American citizens, generated by Executive Branch law enforcement and intelligence agencies, to detect communications involving the listed individuals. There was no judicial oversight, and the project had no warrants for interception. The NSA cooperated with FBI and CIA requests for international communications of targeted individuals as long as the recipients distanced the NSA from involvement. This entailed the FBI and CIA either returning the reports to the NSA or destroying them after two weeks, classifying the reports "Top Secret," and not filing them along with other NSA records. The 1972 Keith decision by the U.S. Supreme Court became a controversial issue mainly because, even though the court had confirmed that the government had the authority to protect the nation from subversive activity, it ruled against the government's ability to use warrantless electronic surveillance for domestic espionage purposes. This ruling became a major case against Project MINARET. History: Between 1967 and 1973, over 5,925 foreigners and 1,690 organizations and US citizens were included on the Project MINARET watch lists. NSA Director Lew Allen testified before the Senate Intelligence Committee in 1975 that the NSA had issued over 3,900 reports on the watch-listed Americans. History: According to Stephen Budiansky, a 1977 Department of Justice review concluded that wiretap laws had been violated, but "If the intelligence agencies possessed too much discretionary authority with too little accountability, that would seem to be a 35-year failing of Presidents and the Congress rather than the agencies or their personnel." One result of these investigations was the 1978 creation of the Foreign Intelligence Surveillance Act (FISA), which limited the powers of the NSA and put in place a process of warrants and judicial review. Another internal safeguard was U.S. Signals Intelligence Directive 18, an internal NSA and intelligence community set of procedures, originally issued in 1980 and updated in 1993. USSID 18 was the general guideline for handling signals intelligence (SIGINT) inadvertently collected on US citizens, without a warrant, prior to the George W. Bush Administration. Interpretations of FISA and the principles of USSID 18 by the Bush administration assume the Executive Branch has unitary authority for warrantless surveillance, which is under Congressional investigation as an apparent violation of the intent of FISA. Domestic targets: 1,650 U.S. citizens were targeted. Among those monitored were: U.S.
Senator Howard Baker, Civil Rights Movement leaders Martin Luther King Jr. and Whitney Young, boxer Muhammad Ali, New York Times journalist Tom Wicker, the actress Jane Fonda, and Washington Post humor columnist Art Buchwald. In 1975, Senator Frank Church, himself a target, chaired the Church Committee, which disclosed the program. Role of Britain's GCHQ agency: Britain's intelligence agency Government Communications Headquarters (GCHQ) took part in the program, targeting several anti-Vietnam War dissidents such as Tom Hayden and Jane Fonda. GCHQ handed over the intercepted data on these Americans to the U.S. government.
**Spatial view cells** Spatial view cells: Spatial view cells are neurons in the primate hippocampus; they respond when a certain part of the environment is in the animal's field of view. They are related to place cells and head direction cells. Spatial view cells differ from place cells, since they are not localized in space. They also differ from head direction cells, since they do not represent a global orientation (like a compass) but rather the direction towards a specific object. Spatial view cells are the cells that respond in the hippocampus when a particular location is being recalled. These cells are identified in the hippocampus of test subjects by monitoring individual neurons while the test subject is moved around in a cue-controlled spatial environment. The spatial view cells are the cells that fire consistently when the monkey is looking in a certain direction in the environment; this is independent of the head direction or the location of the monkey. These cells are confirmed to be spatial view cells by observing that there is minimal random firing of the cells without the appropriate stimulus present. Characteristics: Spatial view cells can be characterized by the following features: they respond to a region of visual space being looked at, relatively independently of where the monkey is located; they respond to a small number of visual cues, generally within a 30° receptive field; they are activated during spatial tasks which include active walking in a spatial environment; they fire relatively independently of the place where the monkey is located; they represent the place at which the monkey is looking; they are generally stimulated by at least 3 cues present in the optimal view; they fire uniformly over different areas in space as long as the monkey is looking at the same area; they are able to maintain their spatial properties for periods of up to several minutes in the dark; their responses depend on where the monkey is looking, as measured by eye position; their spatial representation is allocentric; and their responses still occur in some cases even if view details are obscured with curtains. The spatial view cells that respond in the absence of visual cues are generally found in Cornu Ammonis area 1 (CA1), the parahippocampal gyrus, and the presubiculum, while the ones that do not respond are found in Cornu Ammonis region 3 (CA3). The cells found in the CA1, parahippocampal gyrus, and presubiculum regions often continue to respond even after the stimulus is removed, for up to several minutes in complete darkness. Spatial view cells update their representations by the use of idiothetic inputs in the dark, and these cells are commonly found in the CA1, parahippocampal gyrus, and presubiculum regions. Uses: Spatial view cells are used by primates for storing episodic memories that help with remembering where a particular object was in the environment. Imaging studies have shown that the hippocampus plays an important role in spatial navigation and episodic memories. Spatial view cells also enable primates to recall locations of objects even if they are not physically present in the environment. The neurons associated with remembering the location and the object are often found in the primate hippocampus. These spatial view cells do not only recall specific locations; they also encode distances to other landmarks around the place in order to gain a better understanding of where the places are spatially. Uses: In real-world settings, monkeys remember where they saw ripe fruit with the aid of spatial view cells.
Humans use spatial view cells when they try to recall where they may have seen a person or where they left their keys. Primates' highly developed visual and eye-movement control systems enable them to explore and remember information about what is present at places in the environment without having to physically visit those places. These sorts of memories would be useful for spatial navigation, in which the primates visualize everything in an allocentric, or world-centered, manner that allows them to convey directions to others without physically going through the entire route. These cells are used by primates in regular day-to-day life. Removal of spatial view cells: Diseases and illnesses that harm the brain and the hippocampus can also damage spatial view cells, which are located in the hippocampus. Strokes, meningitis, and encephalitis are only a few of the various illnesses that can cause harm to the spatial view cells. Some clinical symptoms present in patients with damage to the central nervous system include fever, altered mental status, and neck stiffness. Lesion studies have shown that damage to the hippocampus, or to some of its connections such as the fornix, in monkeys produces deficits in learning about the places of objects and about the places where responses should be made. This sort of damage to the brain often results in impaired object-place memory. Object-place memory tasks require the monkey not only to remember the object seen but also to remember where the object was seen in the environment. It has been shown that posterior parahippocampal lesions in macaques impair even a simple type of object-place learning in which only one pair of unique stimuli is needed for memory. Removal of spatial view cells: Relationship to other diseases Patients with damage to spatial view cells will often show symptoms similar to those of other diseases such as vascular dementia, Alzheimer's disease, fugue amnesia, macular degeneration, and optic nerve damage. Another condition that reflects signs of spatial view damage is fornix lesions, which impair conditional left–right discrimination learning. Patients with damage to the temporal lobe, which includes the hippocampus, can sometimes have amnesia. Patients with amnesia often have memory impairments in which they have difficulty remembering both what they saw and where they saw the object or event take place. These signs point to possible damage to the spatial view cells found in the hippocampus. Current research involving spatial view cells: Optimal firing rate Current research shows that the maximum firing rate of spatial view cells is obtained when the test subject is allowed to explore the environment freely. Tests in which the monkey was not allowed active locomotion detected very few spatial view cells in the hippocampus. The majority of the experiments conducted on spatial view cells have used macaque monkeys as test subjects. These types of cells are identified by monitoring the hippocampus of the monkeys while various images and objects are presented in the monkey's field of view. Researchers use different methodologies, suited to the experiment being conducted, to identify these spatial view cells. For example, in a delayed spatial response task, the monkey is shown a stimulus on one side of a screen and then the stimulus is taken away.
After a short while, the stimulus is again presented to the monkey in the same location; a hippocampal cell whose firing is specifically associated with the location at which the monkey is looking, independent of the monkey's own location, is thereby identified as a spatial view cell. The monkeys in this type of experiment are encouraged by rewarding them with fruit juice when they correctly identify the same object in the same location twice in a row; if they get it wrong, they receive a saline taste. Current research involving spatial view cells: Association with episodic memories The experiments often use object-place memory tasks because they are representative of episodic memories and engage similar parts of the brain. It is also believed that whenever an episodic memory is stored, part of the context from that event is stored along with it. As a result, recalling a certain place can call up the emotions felt at that time. These recollections do not only happen when a place is recalled; they are also prone to occur if the person is in the same mood as they were at the time of the event. Rewards are also remembered along with the place at which they were received. Spatial view cells have been shown to be independent of head direction cells and place cells. Spatial view cells have been shown to respond even in the dark, without any visual cues, as long as the test subject was facing in the proper direction. It is believed that in the absence of visual cues, spatial view cells respond to the inputs received from head direction cells and place cells, along with the eye position of the primate. The vestibular system and proprioceptive cues also provide a sense of the direction the animal is facing in the dark. Current research involving spatial view cells: Ability to update with new information Research has led to the finding that spatial view cells are consistently updated with other inputs from the body. For example, when a monkey is oriented in a different position spatially, such as being upside down, the spatial view cells still respond when the test subject faces the appropriate direction. This implies that there is a stream of new information being received by the spatial view cells constantly. This integration of various inputs is commonly modeled with continuous attractor networks. Continuous attractor neural networks, also known as CANNs, are routinely used when studying spatial view cells from an idiothetic standpoint. CANNs allow researchers to model the associated head direction cells and place cells, along with the spatial view cells, as one coherent "packet of neural activity".
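A minimal sketch of the kind of one-dimensional continuous attractor dynamics used in such models is given below: a ring of neurons holds a self-sustaining "packet" of activity whose position can be nudged by an idiothetic (e.g., eye- or head-velocity) signal, so the represented view direction can be updated even without visual input. The network size, connectivity profile, and update rule are illustrative choices, not parameters from any particular study.

```python
import numpy as np

# Minimal 1-D ring continuous attractor: N neurons coding a view direction.
N = 100
angles = np.linspace(0, 2 * np.pi, N, endpoint=False)

# Translation-invariant recurrent weights: local excitation, broad inhibition.
diff = np.angle(np.exp(1j * (angles[:, None] - angles[None, :])))
W = np.exp(-diff**2 / (2 * 0.3**2)) - 0.08

def step(rate, velocity_input=0.0, dt=0.1):
    """One update: recurrent drive plus a small asymmetric (idiothetic) push."""
    drive = W @ rate + velocity_input * np.roll(rate, 1)  # bias toward next neuron
    rate = rate + dt * (-rate + np.maximum(drive, 0.0))
    return rate / (np.linalg.norm(rate) + 1e-9)           # simple normalization

# Start with a bump of activity at 90 degrees, then nudge it "in the dark"
# with a constant idiothetic velocity signal.
rate = np.exp(-diff[:, N // 4]**2 / (2 * 0.3**2))
for _ in range(200):
    rate = step(rate, velocity_input=0.05)

print("activity packet now centred near", np.degrees(angles[np.argmax(rate)]), "deg")
```

The point of the sketch is only that the "packet of neural activity" persists without external input and moves when an idiothetic signal is applied, which is the property these models exploit to explain spatial view responses in darkness.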
**BLADE (software)** BLADE (software): BLADE (Block All Drive-by Download Exploits) is a computer program that was developed by Phillip Porras and Vinod Yegneswaran at SRI International; and Long Lu and Wenke Lee at the Georgia Institute of Technology. BLADE is funded by grants from the National Science Foundation, the United States Army Research Laboratory, and the Office of Naval Research. The program is designed to prevent drive-by download malware attacks.
**Thought experiment** Thought experiment: A thought experiment is a hypothetical situation in which a hypothesis, theory, or principle is laid out for the purpose of thinking through its consequences. History: The ancient Greek δείκνυμι, deiknymi, 'thought experiment', "was the most ancient pattern of mathematical proof", and existed before Euclidean mathematics, where the emphasis was on the conceptual, rather than on the experimental part of a thought-experiment. Johann Witt-Hansen established that Hans Christian Ørsted was the first to use the term Gedankenexperiment (from German: 'thought experiment') circa 1812. Ørsted was also the first to use the equivalent term Gedankenversuch in 1820. History: By 1883, Ernst Mach used the term Gedankenexperiment in a different way, to denote exclusively the imaginary conduct of a real experiment that would be subsequently performed as a real physical experiment by his students. Physical and mental experimentation could then be contrasted: Mach asked his students to provide him with explanations whenever the results from their subsequent, real, physical experiment differed from those of their prior, imaginary experiment. History: The English term thought experiment was coined (as a calque) from Mach's Gedankenexperiment, and it first appeared in the 1897 English translation of one of Mach's papers. Prior to its emergence, the activity of posing hypothetical questions that employed subjunctive reasoning had existed for a very long time (for both scientists and philosophers). The irrealis moods are ways to categorize it or to speak about it. This helps explain the extremely wide and diverse range of the application of the term "thought experiment" once it had been introduced into English. History: Galileo's demonstration that falling objects must fall at the same rate regardless of their masses was a significant step forward in the history of modern science. This is widely thought to have been a straightforward physical demonstration, involving climbing up the Leaning Tower of Pisa and dropping two heavy weights off it, whereas in fact, it was a logical demonstration, using the 'thought experiment' technique. The 'experiment' is described by Galileo in Discorsi e dimostrazioni matematiche (1638) (from Italian: 'Mathematical Discourses and Demonstrations') thus: Salviati. If then we take two bodies whose natural speeds are different, it is clear that on uniting the two, the more rapid one will be partly retarded by the slower, and the slower will be somewhat hastened by the swifter. Do you not agree with me in this opinion? Simplicio. You are unquestionably right. History: Salviati. But if this is true, and if a large stone moves with a speed of, say, eight while a smaller moves with a speed of four, then when they are united, the system will move with a speed less than eight; but the two stones when tied together make a stone larger than that which before moved with a speed of eight. Hence the heavier body moves with less speed than the lighter; an effect which is contrary to your supposition. Thus you see how, from your assumption that the heavier body moves more rapidly than the lighter one, I infer that the heavier body moves more slowly. 
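The structure of Salviati's argument is a reductio ad absurdum, and it can be made explicit in a small sketch: assume, with the Aristotelian view, that natural falling speed strictly increases with weight, and then check what that assumption implies about the tied-together stones. The weights and speeds used are just Galileo's illustrative numbers; the code is an explanatory aid, not part of the historical argument.

```python
# Galileo's reductio, made explicit. The assumption under test (Aristotelian)
# is that natural falling speed strictly increases with weight; the speeds
# 8 and 4 are Galileo's own illustrative values.

HEAVY_SPEED = 8.0   # speed of the large stone alone
LIGHT_SPEED = 4.0   # speed of the small stone alone

def premises_hold(combined_speed: float) -> bool:
    """Check both of Salviati's premises for the tied-together stones."""
    # Premise 1: the slower stone retards the faster one, so the pair
    # moves more slowly than the large stone alone.
    slower_than_heavy = combined_speed < HEAVY_SPEED
    # Premise 2 (from the assumption): the pair is heavier than the large
    # stone alone, so it must move faster than the large stone alone.
    faster_than_heavy = combined_speed > HEAVY_SPEED
    return slower_than_heavy and faster_than_heavy

# No candidate speed can satisfy both premises, so the assumption refutes itself.
candidate_speeds = [s / 10 for s in range(0, 201)]          # 0.0 .. 20.0
print(any(premises_hold(s) for s in candidate_speeds))       # -> False
```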
Uses: The common goal of a thought experiment is to explore the potential consequences of the principle in question: A thought experiment is a device with which one performs an intentional, structured process of intellectual deliberation in order to speculate, within a specifiable problem domain, about potential consequents (or antecedents) for a designated antecedent (or consequent). Given the structure of the experiment, it may not be possible to perform it, and even if it could be performed, there need not be an intention to perform it. Examples of thought experiments include Schrödinger's cat, illustrating quantum indeterminacy through the manipulation of a perfectly sealed environment and a tiny bit of radioactive substance, and Maxwell's demon, which attempts to demonstrate the ability of a hypothetical finite being to violate the second law of thermodynamics. Uses: The thought experiment is a common element of science-fiction stories. Thought experiments, which are well-structured, well-defined hypothetical questions that employ subjunctive reasoning (irrealis moods) – "What might happen (or, what might have happened) if . . . " – have been used to pose questions in philosophy at least since Greek antiquity, some pre-dating Socrates. In physics and other sciences many thought experiments date from the 19th and especially the 20th century, but examples can be found at least as early as Galileo. Uses: In thought experiments, we gain new information by rearranging or reorganizing already known empirical data in a new way and drawing new (a priori) inferences from them, or by looking at these data from a different and unusual perspective. In Galileo's thought experiment, for example, the rearrangement of empirical experience consists of the original idea of combining bodies of different weights. Thought experiments have been used in philosophy (especially ethics), physics, and other fields (such as cognitive psychology, history, political science, economics, social psychology, law, organizational studies, marketing, and epidemiology). In law, the synonym "hypothetical" is frequently used for such experiments. Uses: Regardless of their intended goal, all thought experiments display a patterned way of thinking that is designed to allow us to explain, predict and control events in a better and more productive way. Uses: Theoretical consequences In terms of their theoretical consequences, thought experiments generally: challenge (or even refute) a prevailing theory, often involving the device known as reductio ad absurdum (as in Galileo's original argument, a proof by contradiction); confirm a prevailing theory; establish a new theory; or simultaneously refute a prevailing theory and establish a new theory through a process of mutual exclusion. Practical applications Thought experiments can produce some very important and different outlooks on previously unknown or unaccepted theories. However, they may make those theories themselves irrelevant, and could possibly create new problems that are just as difficult, or possibly more difficult, to resolve.
Uses: In terms of their practical application, thought experiments are generally created to: challenge the prevailing status quo (which includes activities such as correcting misinformation or misapprehension, identifying flaws in the argument(s) presented, preserving, for the long term, objectively established fact, and refuting specific assertions that some particular thing is permissible, forbidden, known, believed, possible, or necessary); extrapolate beyond (or interpolate within) the boundaries of already established fact; predict and forecast the (otherwise) indefinite and unknowable future; explain the past; retrodict, postdict and hindcast the (otherwise) indefinite and unknowable past; facilitate decision making, choice, and strategy selection; solve problems, and generate ideas; move current (often insoluble) problems into another, more helpful, and more productive problem space (e.g., functional fixedness); attribute causation, preventability, blame, and responsibility for specific outcomes; assess culpability and compensatory damages in social and legal contexts; ensure the repeat of past success and the (future) avoidance of past failures; or examine the extent to which past events might have occurred differently. Types: Generally speaking, there are seven types of thought experiments in which one reasons from causes to effects, or effects to causes: Prefactual Prefactual (before the fact) thought experiments – the term prefactual was coined by Lawrence J. Sanna in 1998 – speculate on possible future outcomes, given the present, and ask "What will be the outcome if event E occurs?" Counterfactual Counterfactual (contrary to established fact) thought experiments – the term counterfactual was coined by Nelson Goodman in 1947, extending Roderick Chisholm's (1946) notion of a "contrary-to-fact conditional" – speculate on the possible outcomes of a different past, and ask "What might have happened if A had happened instead of B?" (e.g., "If Isaac Newton and Gottfried Leibniz had cooperated with each other, what would mathematics look like today?"). The study of counterfactual speculation has increasingly engaged the interest of scholars in a wide range of domains such as philosophy, psychology, cognitive psychology, history, political science, economics, social psychology, law, organizational theory, marketing, and epidemiology. Types: Semifactual Semifactual thought experiments – the term semifactual was coined by Nelson Goodman in 1947 – speculate on the extent to which things might have remained the same despite there being a different past, and ask the question "Even though X happened instead of E, would Y have still occurred?" (e.g., "Even if the goalie had moved left, rather than right, could he have intercepted a ball that was traveling at such a speed?"). Types: Semifactual speculations are an important part of clinical medicine. Types: Predictive The activity of prediction attempts to project the circumstances of the present into the future.
According to David Sarewitz and Roger Pielke (1999, p. 123), scientific prediction takes two forms: "The elucidation of invariant – and therefore predictive – principles of nature"; and "[Using] suites of observational data and sophisticated numerical models in an effort to foretell the behavior or evolution of complex phenomena". Although they perform different social and scientific functions, the only difference between the qualitatively identical activities of predicting, forecasting, and nowcasting is the distance of the speculated future from the present moment occupied by the user. Whilst the activity of nowcasting, defined as "a detailed description of the current weather along with forecasts obtained by extrapolation up to 2 hours ahead", is essentially concerned with describing the current state of affairs, it is common practice to extend the term "to cover very-short-range forecasting up to 12 hours ahead" (Browning, 1982, p. ix). Types: Hindcasting: The activity of hindcasting involves running a forecast model after an event has happened in order to test whether the model's simulation is valid. Retrodiction: The activity of retrodiction (or postdiction) involves moving backward in time, step-by-step, in as many stages as are considered necessary, from the present into the speculated past to establish the ultimate cause of a specific event (e.g., reverse engineering and forensics). Types: Given that retrodiction is a process in which "past observations, events, and data are used as evidence to infer the process(es) that produced them" and that diagnosis "involve[s] going from visible effects such as symptoms, signs and the like to their prior causes", the essential balance between prediction and retrodiction could be characterized as retrodiction : diagnosis :: prediction : prognosis, regardless of whether the prognosis is of the course of the disease in the absence of treatment, or of the application of a specific treatment regimen to a specific disorder in a particular patient. Types: Backcasting: The activity of backcasting – the term backcasting was coined by John Robinson in 1982 – involves establishing the description of a very definite and very specific future situation. It then involves an imagined moving backward in time, step-by-step, in as many stages as are considered necessary, from the future to the present to reveal the mechanism through which that particular specified future could be attained from the present. Backcasting is not concerned with predicting the future: The major distinguishing characteristic of backcasting analyses is the concern, not with likely energy futures, but with how desirable futures can be attained. It is thus explicitly normative, involving 'working backward' from a particular future end-point to the present to determine what policy measures would be required to reach that future. Types: According to Jansen (1994, p. 503): Within the framework of technological development, "forecasting" concerns the extrapolation of developments towards the future and the exploration of achievements that can be realized through technology in the long term. Conversely, the reasoning behind "backcasting" is: on the basis of an interconnecting picture of demands technology must meet in the future – "sustainability criteria" – to direct and determine the process that technology development must take and possibly also the pace at which this development process must take effect.
Types: Backcasting [is] both an important aid in determining the direction technology development must take and in specifying the targets to be set for this purpose. As such, backcasting is an ideal search toward determining the nature and scope of the technological challenge posed by sustainable development, and it can thus serve to direct the search process toward new – sustainable – technology. Fields: Thought experiments have been used in a variety of fields, including philosophy, law, physics, and mathematics. In philosophy they have been used at least since classical antiquity, some pre-dating Socrates. In law, they were well known to Roman lawyers quoted in the Digest. In physics and other sciences, notable thought experiments date from the 19th and especially the 20th century, but examples can be found at least as early as Galileo. Fields: Philosophy: In philosophy, a thought experiment typically presents an imagined scenario with the intention of eliciting an intuitive or reasoned response about the way things are in the thought experiment. (Philosophers might also supplement their thought experiments with theoretical reasoning designed to support the desired intuitive response.) The scenario will typically be designed to target a particular philosophical notion, such as morality, or the nature of the mind or linguistic reference. The response to the imagined scenario is supposed to tell us about the nature of that notion in any scenario, real or imagined. Fields: For example, a thought experiment might present a situation in which an agent intentionally kills an innocent for the benefit of others. Here, the relevant question is not whether the action is moral or not, but more broadly whether a moral theory that says morality is determined solely by an action's consequences is correct (see Consequentialism). John Searle imagines a man in a locked room who receives written sentences in Chinese, and returns written sentences in Chinese, according to a sophisticated instruction manual. Here, the relevant question is not whether or not the man understands Chinese, but more broadly, whether a functionalist theory of mind is correct. Fields: It is generally hoped that there is universal agreement about the intuitions that a thought experiment elicits. (Hence, in assessing their own thought experiments, philosophers may appeal to "what we should say," or some such locution.) A successful thought experiment will be one in which intuitions about it are widely shared. But often, philosophers differ in their intuitions about the scenario. Fields: Other philosophical uses of imagined scenarios arguably are thought experiments also. In one use of scenarios, philosophers might imagine persons in a particular situation (maybe ourselves), and ask what they would do. Fields: For example, in the veil of ignorance, John Rawls asks us to imagine a group of persons in a situation where they know nothing about themselves, and are charged with devising a social or political organization. The use of the state of nature to imagine the origins of government, as by Thomas Hobbes and John Locke, may also be considered a thought experiment. Søren Kierkegaard explored the possible ethical and religious implications of Abraham's binding of Isaac in Fear and Trembling. Similarly, Friedrich Nietzsche, in On the Genealogy of Morals, speculated about the historical development of Judeo-Christian morality, with the intent of questioning its legitimacy.
Fields: An early written thought experiment was Plato's allegory of the cave. Another historic thought experiment was Avicenna's "Floating Man" thought experiment in the 11th century. He asked his readers to imagine themselves suspended in the air, isolated from all sensations, in order to demonstrate human self-awareness and self-consciousness, and the substantiality of the soul. Science: Scientists tend to use thought experiments as imaginary, "proxy" experiments prior to a real, "physical" experiment (Ernst Mach always argued that these gedankenexperiments were "a necessary precondition for physical experiment"). In these cases, the result of the "proxy" experiment will often be so clear that there will be no need to conduct a physical experiment at all. Fields: Scientists also use thought experiments when particular physical experiments are impossible to conduct (Carl Gustav Hempel labeled these sorts of experiment "theoretical experiments-in-imagination"), such as Einstein's thought experiment of chasing a light beam, leading to special relativity. This is a unique use of a scientific thought experiment, in that it was never carried out, but led to a successful theory, proven by other empirical means. Properties: Further categorization of thought experiments can be attributed to specific properties. Possibility: In many thought experiments, the scenario would be nomologically possible, or possible according to the laws of nature. John Searle's Chinese room is nomologically possible. Properties: Some thought experiments present scenarios that are not nomologically possible. In his Twin Earth thought experiment, Hilary Putnam asks us to imagine a scenario in which there is a substance with all of the observable properties of water (e.g., taste, color, boiling point), but which is chemically different from water. It has been argued that this thought experiment is not nomologically possible, although it may be possible in some other sense, such as metaphysical possibility. It is debatable whether the nomological impossibility of a thought experiment renders intuitions about it moot. Properties: In some cases, the hypothetical scenario might be considered metaphysically impossible, or impossible in any sense at all. David Chalmers says that we can imagine that there are zombies, or persons who are physically identical to us in every way but who lack consciousness. This is supposed to show that physicalism is false. However, some argue that zombies are inconceivable: we can no more imagine a zombie than we can imagine that 1+1=3. Others have claimed that the conceivability of a scenario may not entail its possibility. Properties: Causal reasoning: The first characteristic pattern that thought experiments display is their orientation in time. They are either: Antefactual speculations: experiments that speculate about what might have happened prior to a specific, designated event; or Postfactual speculations: experiments that speculate about what may happen subsequent to (or consequent upon) a specific, designated event. The second characteristic pattern is their movement in time in relation to "the present moment standpoint" of the individual performing the experiment; namely, in terms of: Their temporal direction: are they past-oriented or future-oriented? Their temporal sense: (a) in the case of past-oriented thought experiments, are they examining the consequences of temporal "movement" from the present to the past, or from the past to the present?
or, (b) in the case of future-oriented thought experiments, are they examining the consequences of temporal "movement" from the present to the future, or from the future to the present? Relation to real experiments: The relation to real experiments can be quite complex, as can be seen again from an example going back to Albert Einstein. In 1935, with two coworkers, he published a paper on a newly created subject, later called the EPR effect (EPR paradox). In this paper, starting from certain philosophical assumptions, on the basis of a rigorous analysis of a certain complicated, but in the meantime assertedly realizable, model, he came to the conclusion that quantum mechanics should be described as "incomplete". Niels Bohr asserted a refutation of Einstein's analysis immediately, and his view prevailed. After some decades, it was asserted that feasible experiments could prove the error of the EPR paper. These experiments tested the Bell inequalities published in 1964 in a purely theoretical paper. The above-mentioned EPR philosophical starting assumptions were considered to be falsified by empirical fact (e.g. by the real optical experiments of Alain Aspect). Properties: Thus thought experiments belong to a theoretical discipline, usually to theoretical physics, but often to theoretical philosophy. In any case, a thought experiment must be distinguished from a real experiment, which belongs naturally to the experimental discipline and has "the final decision on true or not true", at least in physics. Interactivity: Thought experiments can also be interactive, where the author invites people into their thought process by providing alternative paths with alternative outcomes within the narrative, or through interaction with a programmed machine, like a computer program. Properties: Thanks to the advent of the Internet, the digital space has lent itself as a new medium for a new kind of thought experiment. The philosophical work of Stefano Gualeni, for example, focuses on the use of virtual worlds to materialize thought experiments and to playfully negotiate philosophical ideas. His arguments were originally presented in his book Virtual Worlds as Philosophical Tools. Properties: Gualeni's argument is that the history of philosophy has, until recently, merely been the history of written thought, and digital media can complement and enrich the limited and almost exclusively linguistic approach to philosophical thought. He considers virtual worlds to be philosophically viable and advantageous in contexts like those of thought experiments, when the recipients of a certain philosophical notion or perspective are expected to objectively test and evaluate different possible courses of action, or in cases where they are confronted with interrogatives concerning non-actual or non-human phenomenologies. Examples: Thought experiments appear across the humanities and the sciences, including physics, philosophy, mathematics, biology (e.g. the Levinthal paradox and rotating locomotion in living systems), computer science, and economics (e.g. the broken window fallacy, illustrating the law of unintended consequences and opportunity cost, and the Laffer curve).
**Zinc bromide** Zinc bromide: Zinc bromide (ZnBr2) is an inorganic compound with the chemical formula ZnBr2. It is a colourless salt that shares many properties with zinc chloride (ZnCl2), namely a high solubility in water forming acidic solutions, and good solubility in organic solvents. It is hygroscopic and forms a dihydrate ZnBr2·2H2O. Production: ZnBr2·2H2O is prepared by treating zinc oxide or zinc metal with hydrobromic acid: ZnO + 2 HBr + H2O → ZnBr2·2H2O, and Zn + 2 HBr → ZnBr2 + H2. The anhydrous material can be produced by dehydration of the dihydrate with hot CO2 or by reaction of zinc metal and bromine. Sublimation in a stream of hydrogen bromide also gives the anhydrous derivative. Structure: ZnBr2 crystallizes in the same structure as ZnI2: four tetrahedral Zn centers share three vertices to form “super-tetrahedra” of nominal composition {Zn4Br10}2−, which are linked by their vertices to form a three-dimensional structure. The dihydrate ZnBr2·2H2O can be described as ([Zn(H2O)6]2+)([Zn2Br6]2−). Gaseous ZnBr2 is linear in accordance with VSEPR theory, with a Zn–Br bond length of 221 pm. Uses: Zinc bromide is used in the following applications: in organic chemistry as a Lewis acid, and as the electrolyte in the zinc bromide battery. Uses: In oil and natural gas wells, solutions containing zinc bromide are used to displace drilling mud when transitioning from the drilling phase to the completion phase or in well workover operations. The extremely dense brine solution gives the fluid its weight of 20 pounds per gallon, which makes it especially useful in holding back flammable oil and gas particles in high-pressure wells. However, the high acidity and osmolarity cause corrosion and handling problems. Crews must be issued slicker suits and rubber boots because the fluid is so dehydrating. Uses: Zinc bromide solutions can be used as a transparent shield against radiation. The space between two glass panes is filled with a strong aqueous solution of zinc bromide with a very high density, to be used as a window on a hot cell. This type of window has the advantage over lead glass in that it will not darken as a result of exposure to radiation. All glass will darken slowly over time due to radiation; however, this is especially true in a hot cell, where exceptional levels of radiation are present. The advantage of an aqueous salt solution is that any radiation damage will last less than a millisecond, so the shield will undergo self-repair. Safety: Safety considerations are similar to those for zinc chloride, for which the toxic dose for humans is 3–5 g.
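As a small worked illustration of the production reaction ZnO + 2 HBr + H2O → ZnBr2·2H2O given above, the following Python sketch computes how much hydrobromic acid a given amount of zinc oxide consumes. The 100 g batch size and the rounded atomic masses are illustrative assumptions, not figures from the article.

```python
# Illustrative stoichiometry for ZnO + 2 HBr + H2O -> ZnBr2·2H2O.
# Atomic masses are rounded standard values; the batch size is hypothetical.

ATOMIC_MASS = {"Zn": 65.38, "Br": 79.904, "O": 15.999, "H": 1.008}

def molar_mass(formula: dict) -> float:
    """Sum atomic masses weighted by the element counts in `formula`."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

M_ZnO = molar_mass({"Zn": 1, "O": 1})
M_HBr = molar_mass({"H": 1, "Br": 1})
M_dihydrate = molar_mass({"Zn": 1, "Br": 2, "O": 2, "H": 4})  # ZnBr2·2H2O

grams_ZnO = 100.0                      # hypothetical batch size
moles_ZnO = grams_ZnO / M_ZnO
grams_HBr = 2 * moles_ZnO * M_HBr      # 2 mol HBr per mol ZnO
grams_product = moles_ZnO * M_dihydrate

print(f"{grams_ZnO} g ZnO needs ~{grams_HBr:.1f} g HBr "
      f"and yields ~{grams_product:.1f} g ZnBr2·2H2O")
```

Running it shows that roughly 2 g of HBr are consumed per gram of ZnO, with the remainder of the product mass coming from the water of hydration.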
**Fordham Experiment** Fordham Experiment: The Fordham Experiment was an experiment done as part of a course on The Effects of Television by Eric McLuhan and Harley Parker at Fordham University in 1967 or 1968. The purpose of the experiment was to demonstrate to the students that there was a difference between the effects of movies and those of TV on an audience, and to try to ascertain what some of those differences might be. Fordham Experiment: The distinction was thought to occur because movies present reflected light ('light on') to the viewer, while a TV picture is back-lit ('light through'). The experimenters showed two movies, a documentary and a film with little story line about horses, sequentially to two groups of equivalent size, and had the viewers write half a page of comments on their reactions. Fordham Experiment: The groups' reactions to one of the films were roughly similar. Distinct reactions, however, were found for the other. Generally, the 'light on' (movie) presentation was perceived as having lowered tactility and heightened visuality, as compared to the heightened tactility and lessened visuality of the 'light through' (TV) presentation. Visuality dropped from 'light on' to 'light through': comments on cinematic technique dropped from 36% with 'light on' to below 20% with 'light through'; comments on specific scenes dropped from 51% to 20%; and objective comments on a 'sense of power' in the animals dropped from 60% to 20%. Tactility increased from 'light on' to 'light through': comments on sensory evocation and a sense of involvement and tenseness increased from 6% with 'light on' to 36% with 'light through'; comments on a feeling of a loss of sense of time rose from 6% to 40%; comments on a sense of total involvement rose from 15% to 64%; and comments on a sense of total emotional involvement rose from 12% to 48%. The researchers concluded that, relative to the 'light on' subjects, the 'light through' subjects exhibited a sensory shift characterized by a drop in visual sense and an increase in tactile sense. Fordham Experiment: Although this experiment has validity, it does not deal directly with the central point made by Marshall McLuhan that the cinema image, typically a 35mm frame, is made up of millions of dots, or emulsion, and is much more 'saturated' than the lines and pixels of the TV image. McLuhan argued that the TV screen invited the audience to 'fill in' a low-intensity image, much like following the bounding lines of a cartoon. That made TV more 'involving' and more tactile. The high-intensity film image allows for much more information on screen, but also demands a higher degree of visual perception and cognition. In that sense, he said, film is a 'hot' medium, TV a 'cool' bath.
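For readers who want the reported shifts side by side, the following Python snippet simply tabulates the percentages quoted above and computes the change from the 'light on' to the 'light through' condition. The labels are paraphrased, and the "below 20%" figure for cinematic technique is treated as 20 for this comparison; nothing here goes beyond the numbers already stated.

```python
# Percentage shifts reported in the Fordham Experiment summary above,
# collected for comparison ('light on' value, 'light through' value).

shifts = {
    "comments on cinematic technique": (36, 20),   # text says "below 20%"
    "comments on specific scenes": (51, 20),
    "'sense of power' in the animals": (60, 20),
    "sensory evocation / involvement / tenseness": (6, 36),
    "feeling of a loss of sense of time": (6, 40),
    "sense of total involvement": (15, 64),
    "sense of total emotional involvement": (12, 48),
}

for label, (light_on, light_through) in shifts.items():
    change = light_through - light_on
    direction = "rose" if change > 0 else "dropped"
    print(f"{label}: {light_on}% -> {light_through}% "
          f"({direction} {abs(change)} points)")
```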
**Syphon Filter** Syphon Filter: Syphon Filter is a third-person shooter stealth video game series developed by Bend Studio (formerly Eidetic) and published by Sony Computer Entertainment (previously 989 Studios) for the PlayStation, PlayStation 2 and PlayStation Portable. In the series, Syphon Filter is the name given to a mysterious biological weapon. Games: Syphon Filter (1999) The plot centers on special agents Gabriel Logan and Lian Xing, who are tasked by the United States government to apprehend Erich Rhoemer, an international terrorist. Syphon Filter 2 (2000) The plot picks up immediately where the previous Syphon Filter ended. Gabe sets out to cure the virus, whilst being targeted as a "terrorist" by the United States government. Syphon Filter 3 (2001) Gabe and his team are suspected of treason. Summoned to prove their innocence, the team recounts the incidents that led to this moment. In the background, Gabe moves to rid the world of Syphon Filter once and for all. Syphon Filter: The Omega Strain (2004) Gabe, now commander of a government agency, leads a global investigation of viral outbreaks in order to stop a deadlier strain of the titular virus from emerging. Unlike previous games, the main protagonist is I.P.C.A. recruit Cobra, while Gabe and Lian Xing appear as supporting NPCs. Games: Syphon Filter: Dark Mirror (2006) Following the mixed reception of The Omega Strain, Dark Mirror is a return to the series' roots. Gabe investigates a terrorist incident in an Alaskan oil refinery, only to discover a big conspiracy around the titular Dark Mirror. This is the first Syphon Filter title developed for the PlayStation Portable. The PlayStation 2 port removed multiplayer and mature content, but restored the roll ability. Games: Syphon Filter: Logan's Shadow (2007) Serving as a direct sequel to Dark Mirror, Gabe receives a mission to retrieve stolen military equipment from Somali pirates, while discovering that his partner, Lian Xing, could be a double agent.
**Dirac spinor** Dirac spinor: In quantum field theory, the Dirac spinor is the spinor that describes all known fundamental particles that are fermions, with the possible exception of neutrinos. It appears in the plane-wave solution to the Dirac equation, and is a certain combination of two Weyl spinors, specifically, a bispinor that transforms "spinorially" under the action of the Lorentz group. Dirac spinor: Dirac spinors are important and interesting in numerous ways. Foremost, they are important as they do describe all of the known fundamental particle fermions in nature; this includes the electron and the quarks. Algebraically they behave, in a certain sense, as the "square root" of a vector. This is not readily apparent from direct examination, but it has slowly become clear over the last 60 years that spinorial representations are fundamental to geometry. For example, effectively all Riemannian manifolds can have spinors and spin connections built upon them, via the Clifford algebra. The Dirac spinor is specific to Minkowski spacetime and Lorentz transformations; the general case is quite similar. Dirac spinor: This article is devoted to the Dirac spinor in the Dirac representation. This corresponds to a specific representation of the gamma matrices, and is best suited for demonstrating the positive and negative energy solutions of the Dirac equation. There are other representations, most notably the chiral representation, which is better suited for demonstrating the chiral symmetry of the solutions to the Dirac equation. The chiral spinors may be written as linear combinations of the Dirac spinors presented below; thus, nothing is lost or gained, other than a change in perspective with regards to the discrete symmetries of the solutions. Dirac spinor: The remainder of this article is laid out in a pedagogical fashion, using notations and conventions specific to the standard presentation of the Dirac spinor in textbooks on quantum field theory. It focuses primarily on the algebra of the plane-wave solutions. The manner in which the Dirac spinor transforms under the action of the Lorentz group is discussed in the article on bispinors. Definition: The Dirac spinor is the bispinor u(p→) in the plane-wave ansatz ψ(x) = u(p→) e^(−ip·x) of the free Dirac equation for a spinor with mass m, which in natural units becomes (iγ^μ ∂_μ − m)ψ = 0 and with Feynman slash notation may be written (i∂/ − m)ψ = 0. An explanation of terms appearing in the ansatz is given below. The Dirac field is ψ(x), a relativistic spin-1/2 field, or concretely a function on Minkowski space R1,3 valued in C4, a four-component complex vector function. Definition: The Dirac spinor related to a plane wave with wave-vector p→ is u(p→), a C4 vector which is constant with respect to position in spacetime but dependent on momentum p→. The inner product on Minkowski space for vectors p and x is p·x ≡ p_μ x^μ ≡ E_p→ t − p→·x→. The four-momentum of a plane wave is p^μ = (±√(m² + p→²), p→) := (±E_p→, p→), where p→ is arbitrary. In a given inertial frame of reference, the coordinates are x^μ. These coordinates parametrize Minkowski space. In this article, when x^μ appears in an argument, the index is sometimes omitted. The Dirac spinor for the positive-frequency solution can be written (up to the normalization fixed below) as u(p→) = (ϕ, (σ→·p→)/(E_p→ + m) ϕ)ᵀ, where ϕ is an arbitrary two-spinor, concretely a C2 vector. Definition: σ→ is the Pauli vector, and E_p→ is the positive square root E_p→ = +√(m² + p→²).
For this article, the p→ subscript is sometimes omitted and the energy simply written E. In natural units, when m² is added to p² or when m is added to p/, m means mc in ordinary units; when m is added to E, m means mc² in ordinary units. When m is added to ∂_μ or to ∇ it means mc/ℏ (which is called the inverse reduced Compton wavelength) in ordinary units. Derivation from Dirac equation: The Dirac equation has the form (−i α→·∇→ + β m) ψ = i ∂ψ/∂t. In order to derive an expression for the four-spinor ω, the matrices α and β must be given in concrete form. The precise form that they take is representation-dependent. For the entirety of this article, the Dirac representation is used. In this representation, the matrices are, in 2×2 block form, β = (I, 0; 0, −I) and α^i = (0, σ^i; σ^i, 0). These two 4×4 matrices are related to the Dirac gamma matrices. Note that 0 and I are 2×2 matrices here. Derivation from Dirac equation: The next step is to look for solutions of the form ψ = ω e^(−ip·x), while at the same time splitting ω into two two-spinors: ω = (ϕ, χ)ᵀ. Results: Using all of the above information to plug into the Dirac equation results in the matrix equation E (ϕ, χ)ᵀ = (m, σ→·p→; σ→·p→, −m)(ϕ, χ)ᵀ. This matrix equation is really two coupled equations: (E − m)ϕ = (σ→·p→)χ and (E + m)χ = (σ→·p→)ϕ. Solving the 2nd equation for χ, one obtains χ = (σ→·p→)/(E + m) ϕ. Note that this solution needs to have E = +√(p→² + m²) in order for the solution to be valid in a frame where the particle has p→ = 0→. Derivation of the sign of the energy in this case: we consider the potentially problematic term (σ→·p→)/(E + m) ϕ. If E = +√(p² + m²), clearly (σ→·p→)/(E + m) → 0 as p→ → 0→. On the other hand, let E = −√(p² + m²) and p→ = p n̂ with n̂ a unit vector, and let p → 0; then the denominator E + m → 0 and the term diverges. Hence the negative solution clearly has to be omitted, and E = +√(p² + m²). End derivation. Derivation from Dirac equation: Assembling these pieces, the full positive energy solution is conventionally written as ψ^(+)(x) = √((E + m)/(2m)) (ϕ, (σ→·p→)/(E + m) ϕ)ᵀ e^(−ip·x). The above introduces a normalization factor √((E + m)/(2m)), derived in the next section. Derivation from Dirac equation: Solving instead the 1st equation for ϕ, a different set of solutions is found. In this case, one needs to enforce that E = −√(p→² + m²) for this solution to be valid in a frame where the particle has p→ = 0→. The proof follows analogously to the previous case. This is the so-called negative energy solution. It can sometimes become confusing to carry around an explicitly negative energy, and so it is conventional to flip the sign on both the energy and the momentum, and to write this as ψ^(−)(x) = √((E + m)/(2m)) ((σ→·p→)/(E + m) χ, χ)ᵀ e^(+ip·x). In further development, the ψ^(+)-type solutions are referred to as the particle solutions, describing a positive-mass spin-1/2 particle carrying positive energy, and the ψ^(−)-type solutions are referred to as the antiparticle solutions, again describing a positive-mass spin-1/2 particle, again carrying positive energy. In the laboratory frame, both are considered to have positive mass and positive energy, although they are still very much dual to each other, with the flipped sign on the antiparticle plane-wave suggesting that it is "travelling backwards in time". The interpretation of "backwards-time" is a bit subjective and imprecise, amounting to hand-waving when one's only evidence is these solutions. It does gain stronger evidence when considering the quantized Dirac field.
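The plane-wave solution derived above lends itself to a quick numerical check. The following Python/NumPy sketch builds the gamma matrices in the Dirac representation, constructs u(p→) with the √((E + m)/(2m)) normalization, and verifies that (p/ − m)u = 0 and ūu = 1. The mass and momentum values are arbitrary test inputs; the code is an illustration of the conventions described here, not part of the standard presentation.

```python
# Minimal numerical sketch of the positive-energy Dirac spinor in the Dirac
# representation (natural units, E = +sqrt(m^2 + p^2), ubar u = 1 normalization).
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Gamma matrices in the Dirac representation, assembled from 2x2 blocks.
gamma0 = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])
gammas = [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]) for s in (sx, sy, sz)]

def u_spinor(p, m, phi):
    """Positive-energy spinor u(p) as a length-4 array, for two-spinor phi."""
    E = np.sqrt(m**2 + np.dot(p, p))
    sigma_dot_p = p[0] * sx + p[1] * sy + p[2] * sz
    upper = phi
    lower = sigma_dot_p @ phi / (E + m)
    return np.sqrt((E + m) / (2 * m)) * np.concatenate([upper, lower])

m = 1.0
p = np.array([0.3, -0.4, 0.5])          # arbitrary test momentum
E = np.sqrt(m**2 + np.dot(p, p))
phi = np.array([1.0, 0.0], dtype=complex)  # spin-up two-spinor
u = u_spinor(p, m, phi)

slash_p = E * gamma0 - sum(pi * g for pi, g in zip(p, gammas))
ubar = u.conj() @ gamma0

print(np.allclose(slash_p @ u, m * u))   # (p-slash - m) u = 0  -> True
print(np.isclose(ubar @ u, 1.0))         # ubar u = 1           -> True
```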
A more precise meaning for these two sets of solutions being "opposite to each other" is given in the section on charge conjugation, below. Chiral basis: In the chiral representation for γ^μ, the solution space is parametrised by a C2 vector ξ, with Dirac spinor solution u(p→) = (√(p·σ) ξ, √(p·σ̄) ξ)ᵀ, where σ^μ = (I2, σ^i) and σ̄^μ = (I2, −σ^i) are Pauli 4-vectors and √· is the Hermitian matrix square root. Spin orientation: Two-spinors: In the Dirac representation, the most convenient definitions for the two-spinors are ϕ¹ = (1, 0)ᵀ and ϕ² = (0, 1)ᵀ, since these form an orthonormal basis with respect to a (complex) inner product. Pauli matrices: The Pauli matrices are σ₁ = (0, 1; 1, 0), σ₂ = (0, −i; i, 0) and σ₃ = (1, 0; 0, −1). Using these, one obtains what is sometimes called the Pauli vector, σ→·p→ = σ₁ p_x + σ₂ p_y + σ₃ p_z = (p_z, p_x − i p_y; p_x + i p_y, −p_z). Orthogonality: The Dirac spinors provide a complete and orthogonal set of solutions to the Dirac equation. This is most easily demonstrated by writing the spinors in the rest frame, where this becomes obvious, and then boosting to an arbitrary Lorentz coordinate frame. In the rest frame, where the three-momentum vanishes (p→ = 0→), one may define four spinors u^(s)(0→) and v^(s)(0→), with s = 1, 2. Introducing the Feynman slash notation p/ ≡ γ^μ p_μ, the boosted spinors can be written as u^(s)(p→) = (p/ + m)/√(2m(E + m)) u^(s)(0→) and v^(s)(p→) = (−p/ + m)/√(2m(E + m)) v^(s)(0→). The conjugate spinors are defined as ψ̄ = ψ†γ⁰, which may be shown to solve the conjugate Dirac equation ψ̄(i∂/ + m) = 0, with the derivative understood to be acting towards the left. The conjugate spinors are then ū^(s)(p→) = ū^(s)(0→) (p/ + m)/√(2m(E + m)) and v̄^(s)(p→) = v̄^(s)(0→) (−p/ + m)/√(2m(E + m)). The normalization chosen here is such that the scalar invariant ψ̄ψ really is invariant in all Lorentz frames. Specifically, this means ū^(s)(p→) u^(r)(p→) = δ_sr, v̄^(s)(p→) v^(r)(p→) = −δ_sr, and ū^(s)(p→) v^(r)(p→) = 0. Completeness: The four rest-frame spinors u^(s)(0→), v^(s)(0→) indicate that there are four distinct, real, linearly independent solutions to the Dirac equation. That they are indeed solutions can be made clear by observing that, when written in momentum space, the Dirac equation has the form (p/ − m) u^(s)(p→) = 0 and (p/ + m) v^(s)(p→) = 0. This follows because p/ p/ = p·p = m², which in turn follows from the anti-commutation relations for the gamma matrices: {γ^μ, γ^ν} = 2η^μν, with η^μν the metric tensor in flat space (in curved space, the gamma matrices can be viewed as being a kind of vielbein, although this is beyond the scope of the current article). It is perhaps useful to note that the Dirac equation, written in the rest frame, takes the form (γ⁰ − 1) u^(s)(0→) = 0 and (γ⁰ + 1) v^(s)(0→) = 0, so that the rest-frame spinors can correctly be interpreted as solutions to the Dirac equation. There are four equations here, not eight. Although 4-spinors are written as four complex numbers, thus suggesting 8 real variables, only four of them have dynamical independence; the other four have no significance and can always be parameterized away. That is, one could take each of the four vectors u^(s)(0→), v^(s)(0→) and multiply each by a distinct global phase e^(iη). Completeness: This phase changes nothing; it can be interpreted as a kind of global gauge freedom. This is not to say that "phases don't matter", as of course they do; the Dirac equation must be written in complex form, and the phases couple to electromagnetism. Phases even have a physical significance, as the Aharonov–Bohm effect implies: the Dirac field, coupled to electromagnetism, is a U(1) fiber bundle (the circle bundle), and the Aharonov–Bohm effect demonstrates the holonomy of that bundle. All this has no direct impact on the counting of the number of distinct components of the Dirac field. In any setting, there are only four real, distinct components. Completeness: With an appropriate choice of the gamma matrices, it is possible to write the Dirac equation in a purely real form, having only real solutions: this is the Majorana equation. However, it has only two linearly independent solutions.
These solutions do not couple to electromagnetism; they describe a massive, electrically neutral spin-1/2 particle. Apparently, coupling to electromagnetism doubles the number of solutions. But of course, this makes sense: coupling to electromagnetism requires taking a real field, and making it complex. With some effort, the Dirac equation can be interpreted as the "complexified" Majorana equation. This is most easily demonstrated in a generic geometrical setting, outside the scope of this article. Energy eigenstate projection matrices: It is conventional to define a pair of projection matrices Λ+ and Λ− that project out the positive and negative energy eigenstates. Given a fixed Lorentz coordinate frame (i.e. a fixed momentum), these are Λ+(p→) = (p/ + m)/(2m) and Λ−(p→) = (−p/ + m)/(2m). These are a pair of 4×4 matrices. They sum to the identity matrix, Λ+ + Λ− = I; are orthogonal, Λ+Λ− = Λ−Λ+ = 0; and are idempotent, (Λ±)² = Λ±. It is convenient to notice their trace: tr Λ± = 2. Note that the trace and the orthonormality properties hold independent of the Lorentz frame; these are Lorentz covariants. Charge conjugation: Charge conjugation transforms the positive-energy spinor into the negative-energy spinor. Charge conjugation is a mapping (an involution) ψ ↦ ψ_c having the explicit form ψ_c = η C (ψ̄)ᵀ, where (⋅)ᵀ denotes the transpose, C is a 4×4 matrix, and η is an arbitrary phase factor with η*η = 1. Charge conjugation: The article on charge conjugation derives the above form, and demonstrates why the word "charge" is the appropriate word to use: it can be interpreted as the electrical charge. In the Dirac representation for the gamma matrices, the matrix C can be written in terms of γ⁰ and γ². Thus, a positive-energy solution (dropping the spin superscript to avoid notational overload) is carried to its charge conjugate. Note the complex conjugates that appear; these can be consolidated using a standard identity for the Pauli matrices, with the resulting 2-spinor built from the complex conjugate of the original ϕ. As this has precisely the form of the negative energy solution, it becomes clear that charge conjugation exchanges the particle and anti-particle solutions. Note that not only is the energy reversed, but the momentum is reversed as well. Spin-up is transmuted to spin-down. It can be shown that the parity is also flipped. Charge conjugation is very much a pairing of Dirac spinor to its "exact opposite".
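A short numerical check of the projection matrices defined above: the following Python/NumPy sketch constructs Λ± = (m ± p/)/(2m) for an on-shell test momentum and verifies that they sum to the identity, are idempotent and mutually orthogonal, and each have trace 2. The gamma-matrix setup repeats the Dirac-representation conventions of the earlier sketch; the specific mass and momentum are arbitrary test inputs.

```python
# Numerical check of the energy projection operators Lambda_± = (m ± p-slash)/(2m)
# for an on-shell momentum (so that p-slash squared equals m^2).
import numpy as np

I2, I4 = np.eye(2), np.eye(4)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gamma0 = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])
gammas = [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]) for s in (sx, sy, sz)]

m = 1.0
p = np.array([0.2, 0.7, -0.1])                         # arbitrary test momentum
E = np.sqrt(m**2 + np.dot(p, p))                       # on-shell energy
slash_p = E * gamma0 - sum(pi * g for pi, g in zip(p, gammas))

lam_plus = (m * I4 + slash_p) / (2 * m)
lam_minus = (m * I4 - slash_p) / (2 * m)

print(np.allclose(lam_plus + lam_minus, I4))                # sum to the identity
print(np.allclose(lam_plus @ lam_plus, lam_plus))           # idempotent
print(np.allclose(lam_plus @ lam_minus, np.zeros((4, 4))))  # mutually orthogonal
print(np.isclose(np.trace(lam_plus), 2.0))                  # trace 2 (two spin states)
```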
**SEPTA Key** SEPTA Key: The SEPTA Key card is a smart card that is used for automated fare collection on the SEPTA public transportation network in the Philadelphia metropolitan area. It can be used throughout SEPTA's transit system (bus, trolley, subway, high speed line), and on Regional Rail. History: Before the Key System, SEPTA's fare collection was almost entirely manual. Monthly and weekly passes were sold by a cashier at a SEPTA sales office. Tokens for bus, trolley and subway fare could be purchased from a vending machine at some stations; however, exact change was required. Paper tickets and passes were used on Regional Rail. In 2012, SEPTA announced the Key project. In 2014, SEPTA began deploying the new hardware necessary for the system at each station. History: The initial rollout of the key card on transit services began with an early adoption program starting on June 13, 2016. Sale of Key Cards was opened to the public on February 9, 2017. As of June 1, 2017, weekly and monthly TransPasses (for urban transit, distinct from the TrailPasses for SEPTA Regional Rail) were no longer available in the old format, and users of those passes had to have a Key Card. However, the sale of weekly TransPasses at third-party locations continued until July 30, 2018. The sale of monthly TransPasses at third-party locations also ended in July 2018. Sales of paper weekly/monthly TransPasses at all Regional Rail stations, token sales at most Regional Rail stations and token sales at all transit sales offices ended by April 30, 2018; however, token sales at third-party locations continued until July 15. Tokens then continued to be sold in bulk to social service agencies, as work continued to implement a new method for those organizations to provide SEPTA fares to their clients. Also in April 2018, SEPTA launched the external retail network for Key Cards, allowing cards to be purchased and reloaded at businesses across the Philadelphia area. On August 1, 2018, SEPTA stopped issuing or honoring paper transfers; the only way to use the reduced transfer fee is through the SEPTA Key card. On August 1, 2018, SEPTA began an early adoption program for SEPTA Key on Regional Rail from select Zone 4 stations for Monthly Zone 4 TrailPass holders. On October 1, SEPTA expanded the program to include select Zone 3 stations for Monthly and Weekly TrailPass holders. The SEPTA Key program was extended to Zone 1 and Zone 2 TrailPass holders on May 1, 2019. Weekly TrailPasses were available only on SEPTA Key starting the week of August 12 for Zones 3 and 4 and the week of September 9 for Zones 1 and 2, marking the end of paper Weekly TrailPass sales. Monthly TrailPasses were available only on SEPTA Key starting in October for Zones 3 and 4 and in November for Zones 1 and 2, marking the end of paper Monthly TrailPass sales. On July 13, 2020, the Travel Wallet feature launched on Regional Rail, replacing tickets and cash, along with the Cross County Pass on a SEPTA Key card. The sale of Monthly Cross County Passes ended at third-party locations in August 2020. Sales of paper single-ride and ten-trip tickets ended on October 2. As of April 2, 2021, previously purchased paper tickets are no longer accepted for travel on Regional Rail. Technology and use: Similar to a debit card issued by a bank, each Key card has a personalized 16-digit account number. A Mastercard PayPass chip is embedded in the card, allowing it to be read wirelessly. Riders simply wave their card near a red fare validator pad.
On buses, trolley routes, and the Norristown High Speed Line, the validator is mounted to the vehicle farebox. On the Broad Street Line and the Market–Frankford Line, the validators are located on the turnstiles that access the boarding area. At certain stations serving both subway and trolley lines (like 30th Street Station), fare is collected at the turnstiles even for trolley routes. The Norristown High Speed Line collects fares at turnstiles at 69th Street Transportation Center and Norristown Transportation Center, while the fare is collected onboard at all other stations along the line. On Regional Rail, there are turnstiles with validators at the Center City Philadelphia stations, while outlying stations have platform validators. Riders tap on at the turnstile or platform validator to open their trip before boarding the train and tap off at the turnstile or platform validator to close their trip after exiting the train. Technology and use: The system also has a Quick Trip feature allowing a single fare for the Broad Street Line or the Market-Frankford Line to be purchased from a Key vending machine. Instead of a plastic card with an embedded chip, the system prints a paper ticket with a magnetic stripe. A rider with a Quick Trip ticket will swipe it at a black card reader mounted next to the red pad to access the boarding area. Quick Trips can also be used on Regional Rail's Airport Line on trips originating from the Philadelphia International Airport; they can be purchased from machines located on the platforms. Quick Trips are also used at the Regional Rail stations in Center City Philadelphia; riders arriving in Center City Philadelphia buy a Quick Trip before exiting the station turnstiles, while riders departing Center City Philadelphia buy a Quick Trip before entering the station turnstiles. A card can be loaded with a weekly, monthly or single-day pass. Unlike the older paper passes, SEPTA Key imposes a limit on how many trips a rider can take on a pass (56 for a weekly pass, 240 for a monthly pass, 8 for a One Day Convenience Pass, and 10 for a One Day Independence Pass). This is designed to prevent sharing of cards. The system also has a "Travel Wallet" feature in which riders can load money on the card and have the fare for each trip deducted from the balance when the card is presented. The Travel Wallet fare is discounted from the cash fare and costs the same as a token on transit and a ticket purchased in advance on Regional Rail. Technology and use: The system was designed to keep most of SEPTA's existing fare collection practices in place. For example, the system can automatically detect if a rider is transferring from another route and charge the transfer fee instead of the full fare. Technology and use: The SEPTA Key Student Fare Card program provides K-12 students with a SEPTA Key card that can be used for up to 8 trips per school day. Cards can be upgraded to be used on Regional Rail. The SEPTA Key University Pass is a discount transit pass for college students at participating colleges. Colleges participating in the SEPTA Key University Pass program include University of Pennsylvania, Temple University, Drexel University, University of the Arts, and University of the Sciences. SEPTA Key is accepted on all SEPTA rapid transit lines (Broad Street, Market-Frankford, Norristown), buses, trolleys, trackless trolleys, and Regional Rail. SEPTA Key cards were formerly accepted on DART First State buses in northern New Castle County, Delaware.
Starting January 1, 2021, SEPTA Key cards were no longer accepted on DART First State buses because the fareboxes could not read the card to confirm the purchase of a TrailPass and due to widespread fraudulent use. Contract and implementation: In 2007, SEPTA announced a plan to award a contract for an updated fare payment system by the end of the year. At the time, it was estimated the project would take about three years and cost approximately $100 million, based on the implementation of similar fare payment systems in other cities. After the bid deadline for contractors was extended several times, in 2011 the SEPTA Board awarded a $129.5 million contract to ACS Transport Solutions Group, a division of Xerox, with 2013 as a target date for completing the implementation. By 2013, the project was said to be a few months behind schedule, with SEPTA's Chief Officer of New Payment Technology John McGee stating "That ball of steam isn't as large as we'd like, but we're still moving along." Roll out was expected first on SEPTA Regional Rail, with transit service to follow. By 2019, total cost of the primary contract was $192.5 million, about $70 million more than planned. As of September 2020, the total cost was $193.3 million. SEPTA Key Tix: In December 2022, SEPTA released a public beta of SEPTA Key Tix after a months-long closed trial. This feature allows occasional riders to buy passes for all modes of rapid transit (except Regional Rail) from a smartphone that can be scanned as a ticket via QR code. The fares for SEPTA Key Tix are the same as those on the SEPTA Key, which is at a discount to cash prices. The fare also includes one free transfer, which has been unavailable with cash fares since SEPTA did away with paper transfers. There have been some complaints about SEPTA Key Tix, which have included difficulties in using the platform, the inability to use money stored in the "travel wallet" to buy tickets, and no integration with mobile payment services such as Apple Pay and Google Pay; however, SEPTA has announced plans to support those and other forms of contactless payment within the program in the near future.
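The per-pass trip limits described in the Technology and use section above can be illustrated with a toy model. The Python sketch below is not SEPTA's actual fare software; the pass names and limits come from the article, while the class and method names are invented for the example.

```python
# Illustrative model of the per-pass trip limits described above.
# Pass names and limits are taken from the article text; everything else
# (class structure, method names) is hypothetical.

PASS_TRIP_LIMITS = {
    "weekly": 56,
    "monthly": 240,
    "one_day_convenience": 8,
    "one_day_independence": 10,
}

class KeyPass:
    def __init__(self, pass_type: str):
        self.pass_type = pass_type
        self.limit = PASS_TRIP_LIMITS[pass_type]
        self.trips_taken = 0

    def tap(self) -> bool:
        """Accept the tap if the pass has trips remaining; otherwise reject it."""
        if self.trips_taken >= self.limit:
            return False
        self.trips_taken += 1
        return True

pass_card = KeyPass("one_day_convenience")
results = [pass_card.tap() for _ in range(10)]
print(results.count(True), "trips accepted,", results.count(False), "rejected")
# -> 8 trips accepted, 2 rejected
```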
**Journal of Engineering Education** Journal of Engineering Education: The Journal of Engineering Education is a quarterly peer-reviewed academic journal covering research on engineering education that is published by the American Society for Engineering Education. The editor-in-chief is Lisa C. Benson (Clemson University). Abstracting and indexing: The journal is abstracted and indexed in the Science Citation Index, the Social Sciences Citation Index, Current Contents/Engineering, Computing and Technology, Current Contents/Social and Behavioral Sciences, EBSCOhost, and Scopus. According to the Journal Citation Reports, the journal has a 2014 impact factor of 2.059.
**Classical Marxism** Classical Marxism: Classical Marxism is the body of economic, philosophical, and sociological theories expounded by Karl Marx and Friedrich Engels in their works, as contrasted with orthodox Marxism, Marxism–Leninism, and autonomist Marxism which emerged after their deaths. The core concepts of classical Marxism include alienation, base and superstructure, class consciousness, class struggle, exploitation, historical materialism, ideology, revolution; and the forces, means, modes, and relations of production. Marx's political praxis (application of theory), including his attempt to organize a professional revolutionary body in the First International, often served as an area of debate for subsequent theorists. Karl Marx: Karl Marx (5 May 1818, Trier, Germany – 14 March 1883, London) was an immensely influential German philosopher, sociologist, political economist and revolutionary socialist. Marx addressed a wide range of issues, including alienation and exploitation of the worker, the capitalist mode of production and historical materialism, although he is most famous for his analysis of history in terms of class struggles, summed up in the opening line of the introduction to The Communist Manifesto: "The history of all hitherto existing society is the history of class struggles". The influence of his ideas, already popular during his life, was given added impetus by the victory of the Russian Bolsheviks in the 1917 October Revolution, and there are few parts of the world which were not significantly touched by Marxian ideas in the course of the twentieth century. Karl Marx: As the American Marx scholar Hal Draper remarked: "[T]here are few thinkers in modern history whose thought has been so badly misrepresented, by Marxists and anti-Marxists alike". Early influences: The early influences on Marx are often grouped into three categories, namely German philosophy, English/Scottish political economy and French socialism. Karl Marx: German philosophy: Main influences include Immanuel Kant, Georg Wilhelm Friedrich Hegel and Ludwig Feuerbach. Marx studied under one of Hegel's pupils, Bruno Bauer, a leader of the circle of Young Hegelians to whom Marx attached himself. However, in 1841 he and Engels came to disagree with Bauer and the rest of the Young Hegelians about socialism and also about the usage of Hegel's dialectic and progressively broke away from German idealism and the Young Hegelians. Marx's early writings are thus a response to Hegel, German idealism, and a break with the rest of the Young Hegelians. Marx, in his own view of his role, "stood Hegel on his head" by turning the idealistic dialectic into a materialistic one, in proposing that material circumstances shape ideas instead of the other way around. In this, Marx was following the lead of Feuerbach. His theory of alienation, developed in the Economic and Philosophical Manuscripts of 1844 (published in 1932), was inspired by Feuerbach's critique of the alienation of Man in God through the objectivation of all his inherent characteristics (thus man projected onto God all the qualities which are in fact man's own and which define "human nature"). But Marx also criticized Feuerbach for being insufficiently materialistic. Karl Marx: English and Scottish political economy: Main influences include Adam Smith and David Ricardo. Marx built on and critiqued the most well-known political economists of his day, the British classical political economists.
Karl Marx: Marx critiqued Smith and Ricardo for not realizing that their economic concepts reflected specifically capitalist institutions, not innate natural properties of human society, and could not be applied unchanged to all societies. He proposed a systematic correlation between labor values and money prices. He claimed that the source of profits under capitalism is the value added by workers not paid out in wages. This mechanism operated through the distinction between "labor power", which workers freely exchanged for their wages, and "labor", over which asset-holding capitalists thereby gained control. This practical and theoretical distinction was Marx's primary insight, and allowed him to develop the concept of "surplus value", which distinguished his works from that of Smith and Ricardo. Karl Marx: French socialism: Main influences include Jean-Jacques Rousseau, Charles Fourier, Henri de Saint-Simon, Pierre-Joseph Proudhon and Louis Blanc. Rousseau was one of the first modern writers to seriously attack the institution of private property and is sometimes considered a forebear of modern socialism and communism, though Marx rarely mentions Rousseau in his writings. Karl Marx: In 1833, France was experiencing a number of social problems arising out of the Industrial Revolution. A number of sweeping plans of reform were developed by thinkers on the political left. Among the more grandiose were the plans of Charles Fourier and the followers of Saint-Simon. Fourier wanted to replace modern cities with utopian communities, while the Saint-Simonians advocated directing the economy by manipulating credit. Although these programs did not have much support, they did expand the political and social imagination of Marx. Louis Blanc is perhaps best known for originating the social principle, later adopted by Marx, of how labor and income should be distributed: "From each according to his abilities, to each according to his needs". Pierre-Joseph Proudhon participated in the French Revolution of 1848 and the composition of what he termed "the first republican proclamation" of the new republic, but he had misgivings about the new government because it was pursuing political reform at the expense of the socio-economic reform, which Proudhon considered basic. Proudhon published his own perspective for reform, Solution du problème social, in which he laid out a program of mutual financial cooperation among workers. He believed this would transfer control of economic relations from capitalists and financiers to workers. It was Proudhon's book What Is Property? that convinced the young Karl Marx that private property should be abolished. Karl Marx: Other influences on Marx: Main influences include Friedrich Engels, ancient Greek materialism, Giambattista Vico and Lewis H. Morgan. Marx's revision of Hegelianism was also influenced by Engels' book The Condition of the Working Class in England in 1844, which led Marx to conceive of the historical dialectic in terms of class conflict and to see the modern working class as the most progressive force for revolution. Marx was influenced by ancient materialism, especially Epicurus (to whom Marx dedicated his thesis, The Difference Between the Democritean and Epicurean Philosophy of Nature, 1841), for his materialism and theory of clinamen which opened up a realm of liberty.
Karl Marx: Giambattista Vico propounded a cyclical theory of history, according to which human societies progress through a series of stages from barbarism to civilization and then return to barbarism. In the first stage—called the Age of the Gods—religion, the family and other basic institutions emerge; in the succeeding Age of Heroes, the common people are kept in subjection by a dominant class of nobles; in the final stage—the Age of Men—the people rebel and win equality, but in the process society begins to disintegrate. Vico's influence on Marx is obvious. Marx drew on Lewis H. Morgan and his social evolution theory. He wrote a collection of notebooks from his reading of Lewis Morgan, but they are regarded as being quite obscure and are available only in scholarly editions. (However, Engels was much more noticeably influenced by Morgan than Marx was.) Friedrich Engels: Friedrich Engels (28 November 1820, Wuppertal, Prussia – 5 August 1895, London) was a 19th-century German political philosopher. He developed communist theory alongside his better-known collaborator, Karl Marx. In 1842, his father sent the young Engels to England to help manage his cotton factory in Manchester. Shocked by the widespread poverty, Engels began writing an account which he published in 1845 as The Condition of the Working Class in England in 1844. Friedrich Engels: In July 1845, Engels went to England, where he met an Irish working-class woman named Mary Burns (Crosby), with whom he lived until her death in 1863 (Carver 2003:19). Later, Engels lived with her sister Lizzie, marrying her the day before she died in 1877 (Carver 2003:42). These women may have introduced him to the Chartist movement, of whose leaders he met several, including George Harney. Friedrich Engels: Engels actively participated in the Revolution of 1848, taking part in the uprising at Elberfeld. Engels fought in the Baden campaign against the Prussians (June/July 1849) as the aide-de-camp of August Willich, who commanded a Free Corps in the Baden-Palatinate uprising. Friedrich Engels: Marx and Engels: Marx and Engels first met in person in September 1844. They discovered that they had similar views on philosophy and on capitalism and decided to work together, producing a number of works including Die heilige Familie (The Holy Family). After the French authorities deported Marx from France in January 1845, Engels and Marx decided to move to Belgium, which then permitted greater freedom of expression than some other countries in Europe. Engels and Marx returned to Brussels in January 1846, where they set up the Communist Correspondence Committee. Friedrich Engels: In 1847, Engels and Marx began writing a pamphlet together, based on Engels' The Principles of Communism. They completed the 12,000-word pamphlet in six weeks, writing it in such a manner as to make communism understandable to a wide audience and published it as The Communist Manifesto in February 1848. In March, Belgium expelled both Engels and Marx. They moved to Cologne, where they began to publish a radical newspaper, the Neue Rheinische Zeitung. By 1849, both Engels and Marx had to leave Germany and moved to London. The Prussian authorities applied pressure on the British government to expel the two men, but Prime Minister Lord John Russell refused. With only the money that Engels could raise, the Marx family lived in extreme poverty. The contributions of Marx and Engels to the formation of Marxist theory have been described as inseparable.
Main ideas: Marx's main ideas included: Alienation: Marx refers to the alienation of people from aspects of their "human nature" (Gattungswesen, usually translated as "species-essence" or "species-being"). He believed that alienation is a systematic result of capitalism. Under capitalism, the fruits of production belong to the employers, who expropriate the surplus created by others and in so doing generate alienated labour. Alienation describes objective features of a person's situation in capitalism—it is not necessary for them to believe or feel that they are alienated. Main ideas: Base and superstructure: Marx and Engels use the “base and superstructure” concept to explain the idea that the totality of relations among people with regard to “the social production of their existence” forms the economic basis, on which arises a superstructure of political and legal institutions. To the base corresponds the social consciousness, which includes religious, philosophical and other main ideas. The base conditions both the superstructure and the social consciousness. A conflict between the development of material productive forces and the relations of production causes social revolutions, and the resulting change in the economic basis will sooner or later lead to the transformation of the superstructure. For Marx, this relationship is not a one-way process; it is reflexive, and the base determines the superstructure in the first instance at the same time as it remains the foundation of a form of social organization which is itself transformed as an element in the overall dialectical process. The relationship between superstructure and base is considered to be a dialectical one, ineffable in a sense except as it unfolds in its material reality in the actual historical process (which scientific socialism aims to explain and ultimately to guide). Main ideas: Class consciousness: class consciousness refers to the awareness, both of itself and of the social world around it, that a social class possesses, and its capacity to act in its own rational interests based on this awareness. Thus class consciousness must be attained before the class may mount a successful revolution. However, other methods of revolutionary action have been developed, such as vanguardism. Main ideas: Exploitation: Marx refers to the exploitation of an entire segment or class of society by another. He sees it as being an inherent feature and key element of capitalism and free markets. The profit gained by the capitalist is the difference between the value of the product made by the worker and the actual wage that the worker receives—in other words, capitalism functions on the basis of paying workers less than the full value of their labor in order to enable the capitalist class to turn a profit. Main ideas: Historical materialism: historical materialism was first articulated by Marx, although he himself never used the term. It looks for the causes of developments and changes in human societies in the way in which humans collectively make the means to life, thus giving an emphasis through economic analysis to everything that co-exists with the economic base of society (e.g. social classes, political structures, ideologies). Main ideas: Means of production: the means of production are a combination of the means of labor and the subject of labor used by workers to make products.
The means of labor include machines, tools, equipment, infrastructure and "all those things with the aid of which man acts upon the subject of labor, and transforms it". The subject of labor includes raw materials and materials directly taken from nature. Means of production by themselves produce nothing—labor power is needed for production to take place. Main ideas: Ideology: without offering a general definition for "ideology", Marx has on several occasions used the term to designate the production of images of social reality. According to Engels, “ideology is a process accomplished by the so-called thinker consciously, it is true, but with a false consciousness. The real motive forces impelling him remain unknown to him; otherwise it simply would not be an ideological process. Hence he imagines false or seeming motive forces”. Because the ruling class controls the society's means of production, the superstructure of society as well as its ruling ideas will be determined according to what is in the ruling class's best interests. As Marx famously said in The German Ideology, “the ideas of the ruling class are in every epoch the ruling ideas, i.e. the class which is the ruling material force of society, is at the same time its ruling intellectual force”. Therefore the ideology of a society is of enormous importance, since it confuses the alienated groups and can create false consciousness such as commodity fetishism (perceiving labor as capital—a degradation of human life). Main ideas: Mode of production: the mode of production is a specific combination of productive forces (including human labour power and the means of production: tools, equipment, buildings and technologies, materials and improved land) and social and technical relations of production (including the property, power and control relations governing society's productive assets, often codified in law, cooperative work relations and forms of association, relations between people and the objects of their work and the relations between social classes). Main ideas: Political economy: the term "political economy" originally meant the study of the conditions under which production was organized in the nation-states of the new-born capitalist system. Political economy then studies the mechanism of human activity in organizing material production and the mechanism of distributing the surplus or deficit that is the result of that activity. Political economy studies the means of production, specifically capital, and how this manifests itself in economic activity. Main ideas: Marx's concept of class: Marx believed that class identity was defined by one's relation to the mode of production. In other words, a class is a collective of individuals who have a similar relationship with the means of production (as opposed to the more common idea that class is determined by wealth alone, i.e. upper class, middle class and lower class). Main ideas: Marx describes several social classes in capitalist societies, including primarily: The proletariat: "those individuals who sell their labor power, (and therefore add value to the products), and who, in the capitalist mode of production, do not own the means of production". According to Marx, the capitalist mode of production establishes the conditions for the bourgeoisie to exploit the proletariat due to the fact that the worker's labor power generates an added value greater than his salary. 
Main ideas: The bourgeoisie: those who "own the means of production" and buy labor power from the proletariat, who are recompensed by a salary, thus exploiting the proletariat. The bourgeoisie may be further subdivided into the very wealthy bourgeoisie and the petty bourgeoisie. The petty bourgeoisie are those who employ labor, but may also work themselves. These may be small proprietors, land-holding peasants, or trade workers. Marx predicted that the petty bourgeoisie would eventually be destroyed by the constant reinvention of the means of production and that the result of this would be the forced movement of the vast majority of the petty bourgeoisie into the proletariat. Marx also identified the lumpenproletariat, a stratum of society completely disconnected from the means of production. Main ideas: Marx also describes the communists as separate from the oppressed proletariat. The communists were to be a unifying party among the proletariat; they were educated revolutionaries who could bring the proletariat to revolution and help them establish the democratic dictatorship of the proletariat. According to Marx, the communists would support any true revolution of the proletariat against the bourgeoisie. Thus the communists aid the proletariat in creating the inevitable classless society. Main ideas: Marx's theory of history: The Marxist theory of historical materialism understands society as fundamentally determined by the material conditions at any given time—this means the relationships which people enter into with one another in order to fulfill their basic needs, for instance to feed and clothe themselves and their families. In general, Marx and Engels identified five successive stages of the development of these material conditions in Western Europe. Main ideas: These stages are primitive communism, the Asiatic mode of production, ancient slave society, feudalism, and modern bourgeois society.
**PLEKHB2** PLEKHB2: Pleckstrin homology domain-containing family B member 2 is a protein that in humans is encoded by the PLEKHB2 gene.
**National Digital Library Program** National Digital Library Program: The Library of Congress National Digital Library Program (NDLP) is assembling a digital library of reproductions of primary source materials to support the study of the history and culture of the United States. Launched in 1995 after a five-year pilot project, the program began digitizing selected collections of Library of Congress archival materials that chronicle the nation's rich cultural heritage. In order to reproduce collections of books, pamphlets, motion pictures, manuscripts and sound recordings, the Library has created a wide array of digital entities: bitonal document images, grayscale and color pictorial images, digital video and audio, and searchable e-texts. To provide access to the reproductions, the project developed a range of descriptive elements: bibliographic records, finding aids, and introductory texts and programs, as well as indexing the full texts for certain types of content. National Digital Library Program: The reproductions were produced with a variety of tools: image scanners, digital cameras, devices that digitize audio and video, and human labor for rekeying and encoding texts. American Memory employs national-standard and well-established industry-standard formats for many digital reproductions, e.g., texts encoded with Standard Generalized Markup Language (SGML) and images stored in Tagged Image File Format (TIFF) files or compressed with the Joint Photographic Experts Group (JPEG) algorithm. In other cases, the lack of well-established standards has led to the use of emerging formats, e.g., RealAudio (for audio), QuickTime (for moving images), and MrSID (for maps). Technical information by types of material and by individual collections is also available at this site. Vision: The Library of Congress is trying to extend its brick-and-mortar library services to the entire web. While the original Library was focused on the needs of the US Congress, it now struggles to serve the whole world through the Internet. The collection includes an eclectic mix of documents, images, videos and sound recordings. Images include maps, sheet music, handwritten documents, drawings and architectural diagrams. The goal of a Library of Congress Internet Library should be to provide access to those materials unique to the Library of Congress as well as a clear guide to any internet materials related to the United States. Vision: If you search "digital library project" + "library of congress" on the web, you will get a cluttered view of what the Library of Congress is providing. The Library of Congress Global Gateway at site:international.loc.gov currently has about 200,000 documents. The main page provides links but no context. The American Memory site at site:memory.loc.gov has about 350,000 documents. Its main page is similarly vague. Vision: An Internet Library is more than a haphazard collection of materials on an internet server. It serves an entire world, not just those who can afford subscription fees, or who receive grants through US government agencies. Likewise it does not discriminate against very young users, or languages other than English. Its purpose, scope, and contents are readily understood at any location within the site. It is not needlessly repetitive. It recognizes the value of the users' time, and makes every effort to constantly improve performance and the users' success. 
Vision: Because materials are available to anyone – of any age or background, in any country – an Internet Library needs to be more open and inclusive. LoC is just beginning to serve the needs of the world's internet users. Vision: Topics mentioned: America – Industry, Technology, Cities, Towns, Culture, Literature, Performing Arts, Music, Folklife, Architecture, Landscape, Environment, Sports, Recreation; America – Government, Military, Law, Religion, Advertising, Conservation; America – Presidents, Women's History, African American History, Native American History, American Expansion, Immigration, War. Missing – Sciences, Universities, Occupations, American Resources other than LoC, Agriculture, Arts; Missing – Wiki tools, User communities to improve the site, Internet Maps. Content mentioned: Bibliographic databases, Online Catalogs, current issues of favorite journals, new acquisitions, indexes to journal literature, references from scholarly publications, lists of readings, classroom presentations, lesson plans, "valuable materials", articles, textbooks. User categories mentioned: School teachers, scholars, students, internet users. User purposes mentioned: Term papers, presentations, reports, online projects. Digital library users: In 1989, to help launch the American Memory pilot project, a consultant surveyed 101 members of the Association of Research Libraries and the 51 state library agencies. The survey disclosed a genuine appetite for on-line collections, especially in research libraries serving higher education. The American Memory pilot (1990–1995) identified multiple audiences for digital collections in a special survey, an end-user evaluation and in thousands of conversations, letters and encounters with visitors. Digital library users: The most thorough audience appraisal carried out by the Library of Congress consisted of an end-user evaluation conducted in 1992–1993. Forty-four school, college and university, and state and public libraries were provided with a dozen American Memory collections on CD-ROMs and videodisks (these formats are no longer being supported). Participating library staff, teachers, students and the public were polled about which digitized materials they had used and how well the delivery systems worked. The evaluation indicated continued interest by institutions of higher education as well as public libraries. The surprising finding, however, was the strong showing of enthusiasm in schools, especially at the secondary level. Library Science students, however, should be more wary of such a development, given the potential for unwarranted changes being made to the collection. Digital library users: The evaluation team learned that recent reforms in education had created a need for primary-source historical materials such as those in the Library's incomparable collections. Teachers welcomed digitized collections to aid in the development of critical thinking skills; school librarians used the electronic resource to inculcate research skills. These findings have been validated in the educational outreach program initiated by the Library of Congress in 1995 and initially funded by the W. K. Kellogg Foundation. Educational outreach: In 1995, in conjunction with the launch of the Library of Congress National Digital Library Program, the Library brought together leading history and social studies K-12 teachers and librarians to consider how archival on-line resources could best be used in the nation's schools. 
The participants at this Educator's Forum validated earlier findings: while the primary sources were in great demand, teachers needed additional materials that framed the collections and the topics represented in them in order to make effective use of them. To this end, in 1996 the Library of Congress developed The Learning Page—a gateway to the digital collections that provides contextual material and search help, and lets educators evaluate materials still under development. The Library continued the American Memory Fellows Program in the summer of 1998 with the goal of building champions for their collections in schools across the country.
**Beier–Neely morphing algorithm** Beier–Neely morphing algorithm: Image morphing is a technique to synthesize a fluid transformation from one image (the source image) to another (the destination image). The source can be a single image or more than one image. An image morphing implementation has two parts: the first is warping and the second is cross-dissolving. The algorithm of Beier and Neely is a method to compute a mapping of coordinates between two images from a set of lines; i.e., the warp is specified by a set of line pairs where the start-points and end-points are given for both images. The algorithm is widely used within morphing software. Notably, the algorithm only addresses the situation with at most two source images; other algorithms handle morphs that combine multiple source images.
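To make the line-pair warp concrete, here is a minimal Python sketch of the per-point mapping described above (the function and parameter names are illustrative, and the usual Beier–Neely constants a, b and p are given placeholder defaults): each destination point is expressed relative to every destination control line as an along-line coordinate u and a perpendicular distance v, the corresponding point is computed relative to the matching source line, and the per-line displacements are blended with distance- and length-based weights.

```python
import numpy as np

def perp(v):
    """Return the 2-D vector v rotated 90 degrees counter-clockwise."""
    return np.array([-v[1], v[0]])

def warp_point(x, dst_lines, src_lines, a=1.0, b=2.0, p=0.5):
    """Map a destination-image point x back to source-image coordinates.

    dst_lines / src_lines are sequences of (P, Q) endpoint pairs, one pair
    per control line; a, b, p are the Beier-Neely weighting constants.
    """
    x = np.asarray(x, dtype=float)
    total_disp = np.zeros(2)
    total_weight = 0.0
    for (P, Q), (Ps, Qs) in zip(dst_lines, src_lines):
        P, Q, Ps, Qs = (np.asarray(v, dtype=float) for v in (P, Q, Ps, Qs))
        PQ = Q - P
        length2 = PQ @ PQ
        # Position of x relative to the destination line: u along it, v off it.
        u = (x - P) @ PQ / length2
        v = (x - P) @ perp(PQ) / np.sqrt(length2)
        # Corresponding point relative to the matching source line.
        PQs = Qs - Ps
        xs = Ps + u * PQs + v * perp(PQs) / np.linalg.norm(PQs)
        # Distance from x to the destination line segment (for the weight).
        if u < 0:
            dist = np.linalg.norm(x - P)
        elif u > 1:
            dist = np.linalg.norm(x - Q)
        else:
            dist = abs(v)
        weight = (np.sqrt(length2) ** p / (a + dist)) ** b
        total_disp += weight * (xs - x)
        total_weight += weight
    return x + total_disp / total_weight
```

A full morph would apply this mapping to every pixel of both images, using control lines interpolated between the source and destination sets, and then cross-dissolve the two warped images.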
**FrontBase** FrontBase: FrontBase is a relational database management system written in ANSI C. FrontBase uses the Unicode character encoding. International standards: FrontBase complies with SQL 92 (fully compliant), Unicode (Unicode 2.0) and TCP/IP (uses sockets). Available platforms: FrontBase is available on the following platforms: Macintosh - Mac OS X, Mac OS X Server 10.x and Mac OS X Server 1.2; Linux - RedHat, SuSE (Intel and Power PC), YellowDog Linux and Mandrake Linux; Unix - FreeBSD, Solaris and HP-UX; Windows - Windows NT and Windows 2000. Drivers and adaptors: Drivers and adaptors include Apple WebObjects, PHP3, PHP4, Perl, ODBC, JDBC, Omnis Studio, REALBasic, Tcl, EOF, FBAccess and FBCAccess. Data types: Data types supported include INTEGER, DECIMAL, TIMESTAMP, BLOB and VARCHAR.
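Since ODBC is among the listed drivers, a connection from Python through a generic ODBC bridge could look like the sketch below. This is an assumption-laden example, not FrontBase documentation: the DSN name, user, password and table are hypothetical, and it presumes a FrontBase ODBC data source has already been configured on the machine. It uses only data types named above (INTEGER, DECIMAL, TIMESTAMP, VARCHAR).

```python
import pyodbc  # generic ODBC bridge; assumes a FrontBase ODBC DSN is configured

# Hypothetical DSN and credentials -- adjust to the local ODBC configuration.
conn = pyodbc.connect("DSN=FrontBaseTest;UID=_SYSTEM;PWD=secret")
cur = conn.cursor()

# SQL 92 table definition using the data types listed in the article.
cur.execute("""
    CREATE TABLE orders (
        id        INTEGER,
        amount    DECIMAL(10,2),
        placed_at TIMESTAMP,
        note      VARCHAR(200)
    )
""")
cur.execute("INSERT INTO orders (id, amount, note) VALUES (?, ?, ?)",
            1, 19.95, "first order")
conn.commit()

for row in cur.execute("SELECT id, amount, note FROM orders"):
    print(row.id, row.amount, row.note)

conn.close()
```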
**Samurai sauce** Samurai sauce: Samurai sauce (French: Sauce samouraï) is a Belgian condiment prepared from mayonnaise, ketchup, and harissa or sambal oelek, commonly served with French fries. The sauce is also popular and widely used throughout France. It should not be confused with the likewise popular Algerian sauce, a similar preparation that adds onions. According to Harry Pearson, author of A Tall Man In A Low Land: Some Time Among the Belgians, mobile friteries in Belgium often have samurai sauce, with some making it their special item. In addition, many kebab restaurants have Samurai sauce as an available condiment.
**Pulse Impact Investing Management Software** Pulse Impact Investing Management Software: Pulse Impact Investing Management Software was a software platform that was available free to non-profit companies and designed to help organizations better demonstrate impact. Pulse was designed to track financial, operational, social and environmental metrics, and featured a range of qualitative reporting to complement quantitative performance management data. It allowed organizations to aggregate and benchmark financial, operating, social and environmental performance metrics at the portfolio and sector level, allowing for meaningful comparisons of performance against a relevant peer group. On September 30, 2013, Acumen Fund and B Lab announced Pulse would be integrated into the B Analytics platform and no longer offered on a standalone basis. History: Pulse was developed by Acumen Fund, which, in 2006, recognized the need to standardize the metrics that social investors track to benchmark and better understand the impact of one social investor versus another. Acumen Fund recruited volunteer engineers from Google to build a prototype portfolio management system, and from this work, a system called PDMS (Portfolio Data Management System) was launched. Google's charitable arm Google.org started using PDMS soon after, and PDMS was exhibited at the 2006 ANDE (Aspen Network of Development Entrepreneurs) annual conference. In 2007 and 2008, PDMS continued to gain industry acceptance as the formal beta test was launched with more flexibility and features. In 2008, Acumen Fund and the developers of PDMS decided to move the system onto the Salesforce.com platform. Concurrently, Acumen Fund engaged The Rockefeller Foundation, PricewaterhouseCoopers, Deloitte, Global Impact Investing Network, Hitachi, and B-Lab to develop the standard taxonomy that PDMS would use, which was dubbed IRIS (Impact Reporting and Investment Standards). The new system, built on the Salesforce.com platform and incorporating the new IRIS standards, was renamed Pulse.
**The Palladium Book of Contemporary Weapons** The Palladium Book of Contemporary Weapons: The Palladium Book of Contemporary Weapons is a 1984 role-playing game supplement published by Palladium Books. Contents: The Palladium Book of Contemporary Weapons is a collection of the world's most famous and favorite firearms from 1930 to the present, and is organized into Automatic Pistols, Sub-Machine Guns, Rifles, Shotguns, and Machine Guns. Reception: Jerry Epperson reviewed The Palladium Book of Contemporary Weapons in Space Gamer No. 70. Epperson commented that "If you have no interest in modern RPGs, obviously Contemporary Weapons will be of little interest to you. However, if you are looking to expand the firearm variety in your game, this aid is right on target."
**Risley (circus act)** Risley (circus act): A Risley or Risley act (also antipode or antipodism) is a circus acrobalance act in which the base lies on their back and supports one or more flyers with their hands, feet and/or other parts of the body, or spins a person or object using only the feet. The act is named after Richard Risley Carlisle (1814–1874), who developed this kind of act in the United States. Risleys can be separated into three general categories of skills: skills that are based with the hands, skills that are based with the feet, and other skills.
**Handloading** Handloading: Handloading, or reloading, is the practice of making firearm cartridges by assembling the individual components (case, primer, propellant, and projectile), rather than purchasing mass-assembled, factory-loaded ammunition. (It should not be confused with the reloading of a firearm with cartridges, such as by swapping magazines or using a speedloader.) The term handloading is the more general term, and refers generically to the manual assembly of ammunition. Reloading refers more specifically to handloading using previously fired cases and shells. The terms are often used interchangeably, however, as the techniques are largely the same, whether the handloader is using new or recycled components. The differences lie in the initial preparation of cases and shells; new components are generally ready to load, while previously fired components often need additional procedures, such as cleaning, removal of expended primers, or the reshaping and resizing of brass cases. Reasons for handloading: Economy, increased performance and accuracy, commercial ammunition shortages, and hobby interests are all common motives for handloading both cartridges and shotshells. Handloading ammunition spares the user the labor costs of commercial production lines, reducing the expenditure to only the cost of purchasing components and equipment. Reloading used cartridge cases can save the shooter money, providing not only a greater quantity, but also a higher quality of ammunition within a given budget. Reloading may not, however, be cost-effective for occasional shooters, as it takes time to recoup the cost of needed equipment, but those who shoot more frequently will see cost savings over time, as the brass cartridge cases and shotgun shell hulls, which are often the most expensive components, can be reused with proper maintenance. Additionally, most handloading components can be acquired at discounted prices when purchased in bulk, so handloaders are often less affected by changes in ammunition availability. Reasons for handloading: The opportunity to customize performance is another common goal for many handloaders. Hunters, for instance, may desire cartridges with specialized bullets with specific terminal performance. Target shooters often experiment extensively with component combinations in an effort to achieve the best and most consistent bullet trajectories, often using cartridge cases that have been fire formed in order to best fit the chamber of a specific firearm. Shotgun enthusiasts can make specialty rounds unavailable through commercial inventories at any price. Some handloaders even customize cartridges and shotshells simply to lower recoil, for instance for younger shooters who might otherwise avoid shooting sports because of the high recoil of certain firearms. It is also a not infrequent practice for handloaders to make increased-power ammunition (i.e. "hot loads") if higher muzzle velocities (hence flatter trajectories) are desired. Rather than purchasing a special purpose rifle, which a novice or adolescent shooter might outgrow, a single rifle can be used with special handloaded rounds until such time as more powerful rounds become appropriate. This use of specialized handloading techniques often provides significant cost savings as well, for instance when a hunter in a family already has a full-power rifle and a new hunter in the family wishes to learn the sport. 
This technique also enables hunters to use the same rifle and caliber to hunt a greater diversity of game. Where the most extreme accuracy is demanded, such as in rifle benchrest shooting, handloading is a fundamental prerequisite for success, but it can only be done consistently and accurately once load development has been done to determine which cartridge parameters work best with a specific rifle. Additionally, collectors of rare, antique and foreign-made firearms must often turn to handloading because the appropriate cartridges and shotshells are no longer commercially available. Handloaders can also create cartridges for which no commercial equivalent has ever existed — the so-called wildcat cartridges, some of which can eventually acquire mainstream acceptance if the ballistic performance proves good enough. However, as with any hobby, the pure enjoyment of the reloading process may be the most important benefit. Reasons for handloading: Recurring shortages of commercial ammunition are also reasons to reload cartridges and shotshells. When commercial supplies dry up, and store-bought ammunition is not available at any price, having the ability to reload one's own cartridges and shotshells economically provides the ability to continue shooting despite shortages. Reasons for handloading: There are three aspects to ballistics: internal ballistics, external ballistics, and terminal ballistics. Internal ballistics refers to things that happen inside the firearm during and after firing, but before the bullet leaves the muzzle. The handloading process can realize increased accuracy and precision through improved consistency of manufacture, by selecting the optimal bullet weight and design, and by tailoring bullet velocity to the purpose. Each cartridge reloaded can have each component carefully matched to the rest of the cartridges in the batch. Brass cases can be matched by volume, weight, and concentricity; bullets by weight and design; and powder charges by weight, type, case filling (amount of total usable case capacity filled by charge), and packing scheme (characteristics of granule packing). In addition to these critical items, the equipment used to assemble the cartridge also has an effect on its uniformity/consistency and optimal shape/size; dies used to size the cartridges can be matched to the chamber of a given gun. Modern handloading equipment enables a firearm owner to tailor fresh ammunition to a specific firearm, to precisely measured tolerances, far exceeding the comparatively wide tolerances within which commercial ammunition manufacturers must operate. Equipment: Inexpensive "tong" tools have been used for reloading since the mid-19th century. They resemble a large pair of pliers and can be caliber-specific or have interchangeable dies. In modern days, however, handloading equipment consists of sophisticated machine tools that emphasize precision and reliability, and often cost more than high-end shooting optics. There is also a myriad of measuring tools and accessory products on the market for use in conjunction with handloading. Equipment: Presses: The quintessential handloading equipment is the press, which uses compound leverage to push the cases into a die that performs the loading operations. 
Presses vary from simple, inexpensive single-stage models, to complex "progressive" models that operate with each pull of the lever like an assembly line, at rates of up to 10 rounds per minute. Loading presses are often categorized by the letter of the English alphabet that they most resemble in shape: "O", "C", and "H". The sturdiest presses, suitable for bullet swaging functions as well as for normal reloading die usage, are of the "O" type. Heavy steel completely encloses the single die on these presses. Equally sturdy presses for all but bullet swaging use often resemble the letter "C". Both steel and aluminum construction are seen with "C" presses. Some users prefer "C" style presses over "O" presses, as there is more room to place bullets into cartridge mouths on "C" presses. Shotshell style presses, intended for non-batch use, for which each shotshell or cartridge is cycled through the dies before commencing onto the next shotshell or cartridge to be reloaded, commonly resemble the letter "H". The single-stage press, generally of the "O" or "C" type, is the simplest press design. These presses can only hold one die and perform a single procedure on a single case at any time. They are usually only used to crimp the case neck onto the bullet, and if the user wants to perform any different procedure with the press (e.g. priming, powder dispensing, neck resizing), the functioning die/module needs to be manually removed and changed. When using a single-stage press, cases are loaded in batches, one step for each cartridge per batch at a time. The batch sizes are kept small, about 20–50 cases at a time, so the cases are never left in a partially completed state for long, because extended exposure to humidity and light can degrade the powder. Single-stage presses are most commonly used for high-precision rifle cartridge handloading, but may be used for high-precision reloading of all cartridge types, and for fine-tuning loads (developing loading recipes) before mass-producing large numbers of cartridges on a progressive press. The turret press, most commonly of the "C" type, is similar to a single-stage press, but has an indexed mounting disc that allows multiple dies to be quickly interchanged, with each die being fastened with lock rings. Batch operations are performed in a manner similar to a single-stage press; different procedures can be switched between by simply rotating the turret and placing a different die into position. Although turret presses operate much like single-stage presses, they eliminate much of the setup time required in positioning individual dies correctly. The progressive press is far more complex in design and can handle several cases at once. These presses have a rotating base that turns with each pull of the lever. All the dies/loading modules needed (often including a case hopper, a primer feed, a powder measure, and sometimes also a bullet feeder) are mounted in alignment with each case slot on the base disc, and often also include an additional vacant station where the powder levels are manually checked to prevent over- or under-charges. 
Progressive presses can load hundreds of cartridges sequentially with streamlined efficiency, and all the user has to do is pull the lever and occasionally provide manual input, such as placing the bullet in position on the case mouth (if a bullet feeder is not used). Primer pocket swages can be either standalone, bench-mounted, specialized presses, or, alternatively, a special swage anvil die that can be mounted into a standard "O" style loading press, along with a special shell holder insert with either a large or a small primer pocket insert swage that is then inserted into the position on the "O" press where a normal shell holder is usually clicked into position. This way, both small and large primer pockets on different types of military cases can be properly processed to remove primer pocket crimps. Both types of presses can be used to remove either ring crimps or stab crimps found on military cartridges when reloading them. Reamers for removing primer pocket crimps are not associated with presses, being an alternative to using a press to remove military case primer pocket crimps. Equipment: Shotshell presses: Shotshell presses are generally a single unit of the "H" configuration that handles all functions, dedicated to reloading just one gauge of shotshell. Shotshell reloading is similar to cartridge reloading, except that, instead of a bullet, a wad and a measure of shot are used, and after loading the shot, the shell is crimped shut. Both 6-fold and 8-fold crimps are in use, for paper hulls and plastic hulls, respectively. Likewise, roll crimps are in use for metallic, paper, and plastic hulls. The shotshell loader contains stations to resize the shell, measure powder, load the wad, measure shot, and crimp the shell. Due to the low cost of modern plastic shotshells, and the additional complexity of reloading fired shells, shotshell handloading is not as popular as cartridge handloading. For example, unlike when handloading rifle and pistol cartridges, where all the various components (cases, gas checks, powder, primers, etc.) from different manufacturers are usually all interchangeable, shotshells typically are loaded for particular brands of shotshell cases (called hulls) only with one specific brand of wad, shot cup (if used), primer, and powder, further increasing the complexity and difficulty of reloading shotshells. Substitution of components is not considered safe, as changing just one component, such as the brand of primer, can increase pressures by as much as 3500 PSI, which may exceed SAAMI pressure limits. Reloading shotshells is therefore more along the lines of precisely following a recipe with non-fungible components. Where shotshell reloading remains popular, however, is for making specialized shotgun shells, such as for providing lowered recoil, when making low-cost "poppers" used for training retrievers before hunting season to acclimate hunting dogs to the sound of a gun firing without actually shooting projectiles, for achieving better shot patterning, or for providing other improvements or features not available in commercially loaded shotshells at any price, such as when handloading obsolete shotshells with brass cases for gauges of shotshells that are no longer commercially manufactured. Equipment: Rifle and pistol loading presses are usually not dedicated to reloading a single caliber of cartridge, although they can be, but are configured for reloading various cartridge calibers as needed. 
In contrast, shotshell presses are most often configured for reloading just one gauge of shotshell, e.g., 12 gauge, and are rarely, if ever, reconfigured for reloading other gauges of shotshells, as the cost of buying all new dies, shot bar, and powder bushing as required to switch gauges on a shotshell press often exceeds the cost of buying a new shotshell press outright, since shotshell presses typically come from the factory already set up to reload one gauge or bore of shotshell. Hence, it is common to use a dedicated shotshell press for reloading each gauge or bore of shotshell used. Likewise, the price of shot for reloading shotshells has risen significantly over the last several years, such that lead shot that was readily available for around $0.50/lb. (c. 2005) now reaches $2.00 per pound (2013). Due to this large increase in the price of lead shot, reloading 12 gauge shotshells, versus just using promotional (low-cost) 12 gauge shotshells, only starts to make economic sense for higher-volume shooters, who may shoot more than 50,000 rounds a year. In contrast, the reloading of shotshells that are usually not available at low-cost, promotional prices, such as .410 bore, 12 ga. slugs, 16 ga., 20 ga., and 28 ga., becomes more economical to reload in much smaller quantities, perhaps with only 3-5 boxes of shells per year. Reloading .410 bore, 12 ga. slugs, 16 ga., 20 ga., and 28 ga. shells, therefore, remains relatively common, more so than the reloading of 12 gauge shotshells, for which promotional shotshells are usually readily available from many retailers. These smaller bore and gauge shotshells also require much less lead shot, further lessening the effect of the rapid rises seen in the price of lead shot. The industry change to steel shot, arising from the US and Canadian Federal bans on using lead shotshells while hunting migratory wildfowl, has also affected the reloading of shotshells, as the shot bar and powder bushing required on a dedicated shotshell press must also be changed for each hull type reloaded, and are different from those that would be used for reloading shotshells with lead shot, further complicating the reloading of shotshells. Equipment: With the recent rampant rise in lead shot prices, though, a major change in handloading shotshells has also occurred: namely, a transition among high-volume 12 gauge shooters from loading traditional 1-1/8 oz. shot loads to 7/8 oz. shot loads, or even 24 gm. (so-called International) shot loads. At 1-1/8 oz. per shotshell, a 25 lb. bag of lead shot can only reload approximately 355 shotshells. At 7/8 oz. per shotshell, a 25 lb. bag of lead shot can reload 457 shotshells. At 24 grams per shotshell, a 25 lb. bag of lead shot can reload approximately 472 shotshells. Stretching the number of hulls that it is possible to reload from an industry-standard 25 lb. bag of lead shot by 117 shells has significantly helped mitigate the large increase in the price of lead shot. That this change has also resulted in minimal changes to scores in shooting sports such as skeet and trap has only expedited the switch among high-volume shooters to shooting 24 gm. shotshells with their lesser amounts of shot. Equipment: With the recent shortages over 2012–2013 of 12 gauge shotshells in the United States (among all other types of rifle and pistol ammunition), the popularity of reloading 12 gauge shotshells has seen a widespread resurgence. Field use of the International 24 gm. 
12 gauge shells has proven them to be effective on small game, while stretching the number of reloads possible from a bag of shot, and they have subsequently become popular for hunting small game. Since shotshells are typically reloaded at least 5 times, although upwards of 15 times is often possible for lightly loaded shells, this transition to field use of 24 gm. loads has helped mitigate ammunition shortages for hunters. Equipment: Shotshell presses typically use a charge bar to drop precise amounts of shot and powder. Most commonly, these charge bars are fixed in their capacities, with a single charge bar rated at, say, 1-1/8 oz. of lead shot, with a switchable powder bushing that permits dropping precisely measured, fixed amounts of different types of powder repetitively (e.g., MEC). On the other hand, some charge bars are drilled to accept bushings for dropping different fixed amounts of both shot and powder (e.g., Texan). For the ultimate in flexibility, though, universal charge bars with micrometers dropping fixed volumes of powder and shot are also available; these are able to select differing fixed amounts of both powder and shot, and are popular with handloaders who load more than just a few published recipes, or, especially, among those who wish to experiment with numerous different published recipes. Fixed charge bars are rated for either lead or steel shot, but not for both. Universal charge bars, on the other hand, are adjustable and capable of reloading both lead and steel shot. Equipment: Like their pistol and rifle counterparts, shotshell presses are available in both single-stage and progressive varieties. For shooters shooting fewer than approximately 500 shells a month, and especially fewer than 100 shells a month, a single-stage press is often found to be adequate. For shooters shooting larger numbers of shells a month, progressive presses are often chosen. A single-stage press can typically reload 100 hulls in approximately an hour. Progressive presses can typically reload upwards of 400 or 500 hulls an hour. Equipment: Shotshell presses are most commonly operated in non-batch modes. That is, a single hull will often be deprimed, reshaped, primed, loaded with powder, have a wad pressed in, be loaded with shot, be pre-crimped, and then be final-crimped before being removed and a new hull being placed on the shotshell press at station 1. An alternative, somewhat faster method, often used on a single-stage press, is to work on 5 hulls in a staggered sequence, with a single in-process hull located at each of the 5 stations available on a single-stage shotshell press; the finished shotshell is removed from station 5 and the 4 in-process hulls are each moved to the next station (1 to 2, 2 to 3, 3 to 4, 4 to 5) before a new hull is added at the deprimer (station 1) location. Both of these modes of shotshell reloading are in distinct contrast to the common practice used with reloading pistol and rifle cartridges on a single-stage press, which is most often done in batch modes, where a common operation will be performed on a batch of up to 50 or 100 cartridges at a time before proceeding to the next processing step. This difference is largely a result of shotshell presses having 5 stations available for use simultaneously, unlike a single-stage cartridge press, which typically has but one station available for use. 
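The shot-per-bag figures quoted above follow directly from the weight of a standard 25 lb. bag; a small Python check (counting whole shells only, so values are truncated rather than rounded) reproduces them:

```python
# Shells obtainable from a standard 25 lb. bag of lead shot at the charge
# weights discussed above (whole shells only, so the results are truncated).
BAG_OZ = 25 * 16          # 400 oz of shot in a 25 lb. bag
BAG_G = 25 * 453.592      # about 11,340 g in a 25 lb. bag

shells_1_1_8_oz = int(BAG_OZ / 1.125)   # traditional 1-1/8 oz. loads -> 355
shells_7_8_oz   = int(BAG_OZ / 0.875)   # 7/8 oz. loads               -> 457
shells_24_g     = int(BAG_G / 24)       # 24 gm. "International" loads -> 472

print(shells_1_1_8_oz, shells_7_8_oz, shells_24_g)
print("extra shells per bag at 24 g vs 1-1/8 oz:",
      shells_24_g - shells_1_1_8_oz)    # 117, as stated above
```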
Equipment: In general, though, shotshell reloading is far more complex than rifle and pistol cartridge reloading, and hence far fewer shotshell presses are used relative to rifle and pistol cartridge reloading presses. Equipment: .50 BMG and larger cartridge presses: Reloading presses for .50 BMG and larger cartridges are also typically caliber-specific, much like shotshell presses, as standard-size rifle and pistol reloading presses are not capable of being pressed into such exotic reloading service. The reloading of such large cartridges is also much more complex, as developing a load using a specific lot of powder can require nearly all of a 5 lb. bottle of powder, and a load must be developed with a single lot of powder for reasons of safety. Equipment: Dies: Dies are generally sold in sets of two or three units, depending on the shape of the case. A three-die set is needed for straight cases, while a two-die set is used for bottlenecked cases. The first die of either set performs the sizing and decapping operation, except in some cases in the 3-die set, where decapping may be done by the second die. The middle die in a three-die set is used to expand the case mouth of straight cases (and decap in the case where this is not done by the first die), while in a two-die set the entire neck is expanded as the case is extracted from the first die. The last die in the set seats the bullet and may apply a crimp. Special crimping dies are often used to apply a stronger crimp after the bullet is seated. Progressive presses sometimes use an additional "die" to meter powder into the case (though it is arguably not a real die, as it does not shape the case). Standard dies are made from hardened steel, and require that the case be lubricated for the resizing operation, which requires a large amount of force. Rifle cartridges require lubrication of every case, due to the large amount of force required, while smaller, thinner handgun cartridges can get away with alternating lubricated and unlubricated cases. Carbide dies have a ring of tungsten carbide, which is far harder and slicker than tool steel, and so carbide dies do not require lubrication. Modern reloading dies are generally standardized with 7/8-14 (or, in the case of .50 BMG dies, 1-1/4×12) threads and are interchangeable with all common brands of presses, although older dies may use other threads and be press-specific. Equipment: Dies for bottleneck cases usually are supplied in sets of at least two dies, though sometimes a third is added for crimping. This is an extra operation and is not needed unless a gun's magazine or action design requires crimped ammunition for safe operation, such as autoloading firearms, where the cycling of the action may push the bullet back in the case, resulting in poor accuracy and increased pressures. Crimping is also sometimes recommended to achieve full velocity for bullets, by increasing pressures so as to make powders burn more efficiently, and for heavy-recoiling loads, to prevent bullets from moving under recoil. For FMJ bullets mounted in bottleneck cases, roll crimping is generally never used unless a cannelure is present on the bullet, to prevent bullet deformation when crimping. Rimless, straight-wall cases, on the other hand, require a taper crimp, because they headspace on the case mouth; roll crimping causes headspacing problems on these cartridges. Rimmed, belted, or bottleneck cartridges, however, generally can safely be roll crimped when needed. 
Three dies are normally supplied for straight-walled cases, with an optional fourth die for crimping. Crimps for straight-wall cases may be taper crimps, suitable for rimless cartridges used in autoloaders, or roll crimps, which are best for rimmed cartridges such as are used in revolvers. There are also specialty dies. Bump dies are designed to move the shoulder of a bottleneck case back just a bit to facilitate chambering. These are frequently used in conjunction with neck dies, as the bump die itself does not manipulate the neck of the case whatsoever. A bump die can be a very useful tool to anyone who owns a fine shooting rifle with a chamber that is cut to minimum headspace dimensions, as the die allows the case to be fitted to this unique chamber. Another die is the "hand die". A hand die has no threads and is operated—as the name suggests—by hand or by use of a hand-operated arbor press. Hand dies are available for most popular cartridges, and although available as full-length resizing dies, they are most commonly seen as neck sizing dies. These use an interchangeable insert to size the neck, and these inserts come in 1/1000-inch steps so that the user can custom fit the neck of the case to his own chamber or have greater control over neck tension on the bullet. Equipment: Shellholders: A shell holder, generally sold separately, is needed to hold the case in place as it is forced into and out of the dies. The reason shellholders are sold separately is that many cartridges share the same base dimensions, and a single shell holder can service many different cases. Shellholders are also specialized, and will generally only fit a certain make of reloading press, while modern dies are standardized and will fit a wide variety of presses. Different shell holders than those used for dies are also required for use with some hand priming tools (e.g., the Lee Autoprime tool). Scale: A precision weighing scale is a near necessity for reloading. While it is possible to load using nothing but a powder measure and a weight-to-volume conversion chart, this greatly limits the precision with which a load can be adjusted, increasing the danger of accidentally overloading cartridges with powder for loads near or at the maximum safe load. With a powder scale, an adjustable powder measure can be calibrated more precisely for the powder in question, and spot checks can be made during loading to make sure that the measure is not drifting. With a powder trickler, a charge can be measured directly into the scale, giving the most accurate measure. A scale also allows bullets and cases to be sorted by weight, which can increase consistency further. Sorting bullets by weight has obvious benefits, as each set of matched bullets will perform more consistently. Sorting cases by weight is done to group cases by case wall thickness, and to match cases with similar interior volumes. Military cases, for example, tend to be thicker, while cases that have been reloaded numerous times will have thinner walls due to brass flowing forward under firing, and excess case length being later trimmed from the case mouth. Equipment: There are three types of reloading scales: mechanical reloading scales (operated manually, with no power required), digital scales (which need mains electricity or batteries to operate), and digital scales with a dispenser (which combine the scale and a powder dispenser in one unit). Equipment: Priming tool: Single-stage presses often do not provide an easy way of installing primers into ("priming") cases. 
Various add-on tools can be used for priming the case on the down-stroke, or a separate tool can be used. Since cases loaded by a single-stage press are done in steps, with the die being changed between steps, a purpose-made priming tool (a so-called "primer tool") is often faster than trying to integrate a priming step into a press step, and it is also often more robust than a model that needs to be mounted and fitted onto a press, resulting in a more consistent primer seating depth. Equipment: Powder measure: Beginning reloading kits often include a weight-to-volume conversion chart for a selection of common powders and a set of powder volume measures graduated in small increments. By adding the various measures of powder, the desired charge can be measured with a safe degree of accuracy. However, since multiple measures of powder are often needed, and since powder lots may vary slightly in density, a powder measure accurate to 1⁄10 grain (6.5 mg) is desirable. Equipment: Bullet puller: Like any complex process, handloading makes mistakes easy to make, and a bullet puller device allows the handloader to disassemble mistakes. Most pullers use inertia to pull the bullet, and are often shaped like hammers. When in use, the case is locked in place in a head-down fashion inside the far end of the "hammer", and then the device is swung and struck against a firm surface. The sharp impact will suddenly decelerate the case, but the inertia exerted by the heavier mass of the bullet will keep it moving and thus pull it free from the case in a few blows, while the powder and bullet are caught by a trapping container within the puller after the separation. Collet-type pullers are also available, which use a caliber-specific clamp to grip the bullet, while a loading press is used to pull the case downwards. It is essential that the collet be a good match for the bullet diameter, because a poor match can result in significant deformation of the bullet. Equipment: Bullet pullers are also used to disassemble loaded ammunition of questionable provenance or undesirable configuration so that the components can be salvaged for re-use. Surplus military ammunition is often pulled for components, particularly cartridge cases, which are often difficult to obtain for older foreign military rifles. Military ammunition is often tightly sealed, to make it resistant to water and rough handling, such as in machine gun feeding mechanisms. In this case, the seal between the bullet and cartridge can prevent the bullet puller from functioning. Pushing the bullet into the case slightly with a seating die will break the seal and allow the bullet to be pulled. Primers are a more problematic issue. If a primer is not seated deeply enough, the cartridge (if loaded) can be pulled, and the primer re-seated with the seating tool. Primers that must be removed are frequently deactivated first—either by firing the primed case in the appropriate firearm or by soaking it in penetrating oil, which penetrates the water-resistant coatings in the primer. Equipment: Components pulled from loaded cartridges should be reused with care. Unknown or potentially contaminated powders, contaminated primers, and bullets that are damaged or incorrectly sized can all cause dangerous conditions upon firing. Equipment: Case trimmer: Cases, especially bottleneck cases, will stretch upon firing. How much a case will stretch depends upon load pressure, cartridge design, chamber size, functional cartridge headspace (usually the most important factor), and other variables. 
Periodically, cases need to be trimmed to bring them back to proper specifications. Most reloading manuals list both a trim size and a max length. Long cases can create a safety hazard through improper headspace and possible increased pressure. Several kinds of case trimmers are available. Die-based trimmers have an open top and allow the case to be trimmed with a file during the loading process. Manual trimmers usually have a base that has a shellholder at one end and a cutting bit at the opposite end, with a locking mechanism to hold the case tight and in alignment with the axis of the cutter, similar to a small lathe. Typically the device is cranked by hand, but sometimes they have attachments to allow the use of a drill or powered screwdriver. Powered case trimmers are also available. They usually consist of a motor (electric drills are sometimes used) and special dies or fittings that hold the case to be trimmed at the appropriate length, letting the motor do the work of trimming. Equipment: Primer pocket tools: Primer pocket cleaning tools are used to remove residual combustion debris remaining in the primer pocket; both brush designs and single-blade designs are commonly used. Dirty primer pockets can prevent seating primers at, or below, the cartridge head. Primer pocket reamers or swagers are used to remove military crimps in primer pockets. Primer pocket uniformer tools are used to achieve a uniform primer pocket depth. These are small endmills with a fixed depth-spacing ring attached, and are mounted either in a handle for use as a handtool, or are sometimes mounted in a battery-operated screwdriver. Some commercial cartridges (notably Sellier & Bellot) use large rifle primers that are thinner than the SAAMI standards common in the United States, and will not permit seating a Boxer primer manufactured to U.S. standards; the use of a primer pocket uniformer tool on such brass avoids setting Boxer primers high when reloading, which would be a safety issue. Two sizes of primer pocket uniformer tools exist: the larger one is for large rifle (0.130-inch nominal depth) primer pockets and the smaller one is used for uniforming small rifle/pistol primer pockets. Flash hole uniforming tools are used to remove any burrs, which are residual brass remaining from the manufacturing punching operation used in creating flash holes. These tools resemble primer pocket uniformer tools, except being thinner, and commonly include deburring, chamfering, and uniforming functions. The purpose of these tools is to achieve a more equal distribution of flame from the primer to ignite the powder charge, resulting in consistent ignition from case to case. Equipment: Headspace gauges and modified case gauges: Bottleneck rifle cartridges are particularly prone to encounter incipient head separations if they are full-length resized and re-trimmed to their maximum permitted case lengths each time they are reloaded. In some such cartridges, such as the .303 British when used in Enfield rifles, as few as 1 or 2 reloadings can be the limit before the head of the cartridge will physically separate from the body of the cartridge when fired. The solution to this problem, of avoiding overstretching of the brass case, and thereby avoiding the excessive thinning of the wall thickness of the brass case due to case stretching, is to use what is called a "headspace gauge". Contrary to its name, it does not actually measure a rifle's headspace. 
Rather, it measures the distance from the head of the cartridge to the middle of the shoulder of the bottleneck cartridge case. For semi-automatic and automatic rifles, the customary practice is to move the midpoint of this shoulder back by no more than 0.005 inches, for reliable operation, when resizing the case. For bolt-action rifles, with their additional camming action, the customary practice is to move this shoulder back by only 0.001 to 0.002 inches when resizing the case. In contrast to full-length resizing of bottleneck rifle cartridges, which can rapidly thin out the case walls due to the stretching that occurs each time the case is fired, partial-length resizing, which pushes the shoulder back only a few thousandths of an inch, will often permit a case to be safely reloaded 5 times or more, even up to 10 times, or more for very light loads. Equipment: Similarly, by using modified case gauges, it is possible to measure precisely the distance from a bullet ogive to the start of rifling in a particular rifle for a given bottleneck cartridge. Maximum accuracy for a rifle is often found to occur at only one particular fixed distance from the start of rifling in a bore to a datum line on a bullet ogive. Measuring the overall cartridge length does not permit setting such fixed distances accurately, as different bullets from different manufacturers will often have a different ogive shape. It is only by measuring from a fixed diameter point on a bullet ogive to the start of a bore's rifling that proper spacing can be determined to maximize accuracy. A modified case gauge can provide the means by which to achieve an improvement in accuracy with precision handloads. Equipment: Such headspace gauges and modified case gauges can, respectively, permit greatly increasing the number of times a rifle bottleneck case can be reloaded safely, and greatly improve the accuracy of such handloads. Unlike the situation with using expensive factory ammunition, handloaded match ammunition can be made that is vastly more accurate and, through reloading, much more affordable than anything that can be purchased, being customized for a particular rifle. Materials required: The following materials are needed for handloading ammunition: Cases or shotshell hulls. For shotshells, plastic or paper cases can be reloaded, though plastic is more durable. Steel and aluminum cases do not have the correct qualities for reloading, so a brass case is essential (although nickel-plated brass cases, while not as reformable as plain brass, can also be reloaded). Propellant of an appropriate type. Generally, handgun cartridges (due to shorter barrels) and shotshells (due to heavier projectile weights) use faster-burning smokeless powders, and rifle cartridges use slower-burning powder. The powder is generally of the "smokeless" type in modern cartridges, although on occasion the older black powder, more commonly known as "gunpowder", may be used. Materials required: Projectiles, such as bullets for handguns and rifles, or shot and wads for shotguns. Materials required: Centerfire primers, most commonly a Boxer-type. Case lubrication may also be needed, depending on the dies used. Carbide pistol dies do not require case lubricant. For this reason, they are preferred by many, being inherently less messy in operation. 
Unlike carbide pistol dies, all dies for bottleneck cartridges, whether made of high-strength steel or carbide, as well as steel dies for pistols, do require the use of a case lubricant to prevent a case from becoming stuck in a die. (If a case does become stuck in a die, stuck-case remover tools are available to remove it, albeit at the loss of the particular case that became stuck.) Powder should always be stored in its original containers, since they are designed to split open at low pressure to prevent a dangerous pressure buildup, and any cabinet they are stored in should similarly prevent pressure buildup by allowing venting and expansion. Reloading process: Pistol/Rifle cartridges The operations performed when handloading cartridges are: Depriming — the removal of any old, expended primers from previously fired cases. Usually done with a thin rod that is inserted into the flash hole via the case mouth and pushes the primer out from inside. Reloading process: Case cleaning — removal of fouling and tarnish from the cases, optional but recommended for reused rifle or pistol cases. Cleaning can be done with an ultrasonic cleaner, or more commonly with a mass finishing device known as a "case tumbler". Tumblers use abrasive granules known as tumbling media (which can be stone or ceramic granules, fragments of corncob or walnut/coconut shells, or small segments of stainless steel wire often called "pins") to burnish the cases, and can be either a vibratory type ("dry tumbling") or a water/detergent-based rotary type ("wet tumbling"). In either type, when the cleaning is completed, a "media separator" is needed to sieve out and remove the abrasive media. In "wet" rotary tumbling, a food dehydrator-like convection dryer is sometimes used to eliminate moisture retention that might later interfere with handloading. Reloading process: Case inspection — looking for cracks or other defects, and discarding visibly imperfect cases. The interior may be inspected with a wire feeler or feeler gauge to detect emerging interior cracks. Bent case mouths may be repaired during resizing. Case lubrication — applying surface lubricant to the exterior of the cases to prevent them from getting stuck inside the die (carbide dies do not require lubrication). Resizing — modifying the shape of the case neck/shoulder and/or removing any dents and deformities. Reloading process: Reaming or swaging the crimp out of the primer pocket (reloading military cases only), or milling the primer pocket depth using a primer pocket uniformer tool Gauging and trimming — measuring the case length and removing excess length from the case neck (as needed; rarely required with handgun cases) Deburring and reaming — smoothing the case mouth edge (optional, as needed; only trimmed cases need to be deburred); some benchrest shooters also do exterior neck turning at this stage in order to give the case neck a uniform wall thickness, so the bullet will be crimped and released with the most uniformity. Reloading process: Primer pocket cleaning and flash hole uniforming (optional) — the primer pockets and flash holes will have deposits from previous primer combustion, as well as occasional deformation, that need fixing; generally only benchrest shooters perform these.
Reloading process: Expanding or chamfering the case mouth — to allow easier, smoother seating of the bullet before pressing (not required for boat-tailed bullets) Cleaning the lubricant off the cases Priming — seating a new primer into the case (primer pockets often become loose after multiple loadings; if little effort is required to seat a new primer, the pocket is loose; cases with loose primer pockets are usually discarded, after crushing the case to prevent its reuse) Powder charging — adding a measured amount of propellant powder into the case. This is a critical step, as incorrect powder charges are extremely dangerous, both undercharged (which can lead to a squib load) and overcharged (which can cause the gun to explode). Reloading process: Bullet seating — positioning the bullet in the case mouth for the correct cartridge overall length (OAL) and for aligning the bullet cannelure (if present) with the case mouth Crimping — pressing and tightening the case mouth to fix the bullet in place; some loads hold the bullet with neck tension alone. Reloading process: Final cartridge inspection When previously fired cases are used, they must be inspected before loading. Cases that are dirty or tarnished are often polished in a tumbler to remove oxidation and allow easier inspection of the case. Cleaning in a tumbler will also clean the interior of cases, which is often considered important for handloading high-precision target rounds. Cracked necks, non-reloadable cases (steel, aluminum, or Berdan-primed cases), and signs of head separation are all reasons to reject a case. Cases are measured for length, and any that are over the recommended length are trimmed down to the minimum length. Competition shooters will also sort cases by brand and weight to ensure consistency. Removal of the primer, called decapping or depriming, is usually done with a die containing a steel pin that punches out the primer from inside the case. Berdan-primed cases require a different technique, either a hydraulic ram or a hook that punctures the case and levers the primer out from the bottom. Military cases often have crimped-in primers, and decapping them leaves a slightly indented ring (most common) or, for some military cartridges, a set of stabbed ridges on the edge of the primer pocket opening that inhibits or prevents seating a new primer into a decapped case. A reamer or a swage is used to remove both of these styles of crimp, whether ring crimps or stab crimps. The purpose of all such primer crimps is to make military ammunition more reliable under more extreme environmental conditions. Some military cartridges also have sealants placed around primers, in addition to crimps, to provide additional protection against moisture intrusion that could deactivate the primer in any ammunition exposed to water under battlefield conditions. Decapping dies, though, easily overcome the additional resistance of sealed primers, with no significant difficulty beyond that encountered when removing non-sealed primers. Reloading process: When a cartridge is fired, the internal pressure expands the case to fit the chamber in a process called obturation. To allow ease of chambering the cartridge when it is reloaded, the case is swaged back down to size. Competition shooters, using bolt-action rifles that are capable of camming a tight case into place, often resize only the neck of the cartridge, called neck sizing, as opposed to the normal full-length resizing process.
Neck sizing is only useful for cartridges to be re-fired in the same firearm, as the brass may be slightly oversized in some dimensions for other chambers, but the precise fit of the case to the chamber will allow greater consistency and therefore greater potential accuracy. Some believe that neck sizing will permit a larger number of reloads with a given case in contrast to full-length resizing, although this is controversial. Semi-automatic rifles and rifles with SAAMI minimum chamber dimensions often require a special small-base resizing die that sizes further down the case than normal dies and allows for more reliable feeding. Once the case is sized down, the inside of the neck of the case will actually be slightly smaller than the bullet's diameter. To allow the bullet to be seated, the end of the neck is slightly expanded to allow the bullet to start into the case. Boat-tailed bullets need very little expansion, while unjacketed lead bullets require more expansion to prevent shaving of lead when the bullet is seated. Reloading process: Priming the case is the most dangerous step of the loading process, since primers are pressure-sensitive. The use of safety glasses or goggles during priming operations can provide valuable protection in the rare event that an accidental detonation takes place. Seating a Boxer primer not only places the primer in the case, but it also seats the anvil of the primer down onto the priming compound, in effect arming the primer. A correctly seated primer will sit slightly below the surface of the case. A primer that protrudes from the case may cause a number of problems, including what is known as a slamfire, which is the firing of a case before the action is properly locked when chambering a round. This may damage the gun, injure the shooter, or both. A protruding primer will also tend to hang up when feeding, and the anvil will not be seated correctly, so the primer may not fire when hit by the firing pin. Primer pockets may need to be cleaned with a primer pocket brush to remove deposits that prevent the primer from being properly seated. Berdan primers must also be seated carefully, and since the anvil is part of the case, the anvil must be inspected before the primer is seated. For reloading cartridges intended for use in military-surplus firearms, rifles especially, "hard" primers are most commonly used instead of commercial "soft" primers. The use of "hard" primers avoids slamfires when loading finished cartridges in a military-surplus firearm. Such primers are available to handloaders commercially. The quantity of gunpowder is specified by weight, but almost always measured by volume, especially in larger-scale operations. A powder scale is needed to determine the correct mass thrown by the powder measure, as loads are specified with a precision of 0.10 grain (6.5 mg). One grain is 1/7000 of a pound. Competition shooters will generally throw a slightly underweight charge, and use a powder trickler to add a few granules of powder at a time to the charge to bring it to the exact weight desired for maximum consistency. Special care is needed when charging large-capacity cases with fast-burning, low-volume powders. In this instance, it is possible to put two charges of powder in a case without overflowing the case, which can lead to dangerously high pressures and a significant chance of bursting the chamber of the firearm.
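The grain arithmetic quoted above is easy to verify. The short Python sketch below only restates the relationships given in the text (1 grain = 1/7000 pound, charges specified to 0.10 grain) together with the standard avoirdupois pound-to-gram conversion; it is not a loading recommendation.

```python
# Verifying the grain arithmetic quoted above.
# 1 grain = 1/7000 avoirdupois pound; 1 pound = 453.59237 grams (standard definition).

GRAMS_PER_POUND = 453.59237
GRAINS_PER_POUND = 7000

grams_per_grain = GRAMS_PER_POUND / GRAINS_PER_POUND          # about 0.0648 g

print(f"1 grain   = {grams_per_grain * 1000:.1f} mg")          # 64.8 mg
print(f"0.1 grain = {grams_per_grain * 1000 * 0.10:.1f} mg")   # 6.5 mg, as quoted above
```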
Non-magnum revolver cartridges are the easiest cartridges to accidentally double-charge, as they generally have relatively large cases and tend to perform well with small charges of fast powders. Some powders meter (dispense by volume) more consistently than others due to the shape of their granules. When using volume to meter each charge, it is important to regularly check the charge weight on a scale throughout the process. Competition shooters also often sort bullets by weight, often down to 0.10 grain (6.5 mg) increments. The bullet is placed in the case mouth by hand and then seated with the press. At this point, the expanded case mouth is also sized back down. A crimp can optionally be added, either by the seating die or with a separate die. Taper crimps are used for cases that are held in the chamber by the case mouth, while roll crimps may be used for cases that headspace on a rim or on the cartridge neck. Roll crimps hold the bullet far more securely, and are preferred in situations, such as magnum revolvers, where recoil velocities are significant. A tight crimp also helps to delay the start of the bullet's motion, which can increase chamber pressures, and helps develop full power from slower-burning powders (see internal ballistics). Reloading process: Shotgun shells Unlike the presses used for reloading metallic cartridges, the presses used for reloading shotgun shells have become standardized around 5 stations, arranged either in a circle or in a straight row. Nonetheless, the operations performed on an industry-standard 5-station shotshell press when handloading shotshells with birdshot, although slightly different, are very similar to those performed when reloading metallic cartridges: Selecting an appropriate charge bar and powder bushing, or charge bar with shot bushing and powder bushing, or a universal charge bar (if used) for measuring shot and powder, for the shotshell press. Reloading process: Verifying that all components are properly selected (hull, primer, powder, wad, and shot). (No substitutions are allowed in components, nor in charge weights of shot and powder. The only substitutions allowed are in the brand of shot and the size of the shot (#8, #9, etc.). Also, no substitutions are allowed in the shot material itself (whether lead shot, Hevi-Shot, steel shot, etc.), as the malleability of lead shot is noticeably different from that of steel.) Loading shot and powder in the press, and verifying with a calibrated scale that the as-dropped weights match an established, published loading recipe. (Typically, 5 to 10 trials each of shot and powder drops are recommended by shotshell press or universal charge bar user manuals.) Adjusting bushings or universal charge bar settings to account for small differences in densities due to lot-to-lot variations in both powder and shot. Reloading process: Inspecting each hull. (Examining for cracks or other hull defects, and discarding any visibly imperfect hulls. Also, turning each hull upside down to remove any foreign object debris before depriming.) Removing the fired primer and sizing/resizing the brass outer diameter at the base of the hull (Station 1). Inserting a primer in the well of the press, and sizing/resizing the inner diameter of the hull while inserting a new primer (Station 2). Verifying that the primer is fully seated, not raised. If the primer is not fully seated, re-running the operation at Station 2 until it is.
Positioning primed hull (at Station 3), pulling handle down, toggling charge bar to drop measured amount of powder, raising handle, inserting wad, dropping handle again to seat wad, toggling charge bar to drop measured amount of shot, raising handle. Pre-crimping of shell (Station 4). Final crimping of shell (Station 5). Inspecting crimping on shell. If crimp is not fully flat, re-crimping (Station 5). Inspecting bottles of shot and powder on the shotshell press, adding more as needed before either runs out. Reloading process: Cutting open 4 or 5 shells randomly selected from a large lot of handloaded shells, and verifying that the as-thrown weights of powder and shot are both within the desired tolerances of the published recipe that was followed. (Optional, but recommended.) The exact details for accomplishing these steps on particular shotshell presses vary depending on the brand of the press, although the presence of 5 stations is standard among all modern presses. Reloading process: The use of safety glasses or goggles while reloading shotshells can provide valuable protection in the rare event that an accidental detonation takes place during priming operations. Reloading process: The quantities of both gunpowder and shot are specified by weight when loading shotshells, but almost always measured solely by volume. A powder scale is therefore needed to determine the correct mass thrown by the powder measure, and by the shot measure, as powder loads are specified with a precision of 0.10 grain (6.5 mg) but are usually thrown with a tolerance of 0.2 to 0.3 grains in most shotshell presses. Similarly, shot payloads in shells are generally held to within a tolerance of plus or minus 3-5 grains. One grain is 1/7000 of a pound. (A simple tolerance check along these lines is sketched below.) Reloading process: Shotshell reloading for specialty purposes, such as buckshot, slugs, or other specialty rounds, is often practiced but varies significantly from the process steps discussed previously for handloading birdshot shotshells. The primary difference is that large shot cannot be metered in a charge bar, and so must be manually dropped, a ball at a time, in a specific configuration. Likewise, the need for specialty wads or extra wads, in order to reach the stackup distance required for a full and proper crimp at a fixed shell length, say 2-3/4", causes the steps to differ slightly when handloading such shells. Reloading process: Modern shotshells are all uniformly sized for Type 209 primers. However, reloaders should be aware that older shotshells were sometimes primed with a Type 57 or Type 69 primer (now obsolete), meaning that shotgun shell reloading tends to be done only with modern (or recently produced) components. Because handloading depends essentially on published recipes, antique shotshell reloading is not widely practiced and remains a specialty, or niche, activity. Of course, when reloading for very old shotguns, such as those with Damascus barrels, special shotshell recipes that limit pressures to less than 4500 psi are still available, and these recipes are used by some shotgunning enthusiasts. Typical shotshell pressures for handloads intended for modern shotguns range from approximately 4700 psi to 10,000 psi. Reloading process: Brass shotshells are also reloaded occasionally, but typically using standard rifle/pistol reloading presses with specialty dies rather than modern shotshell presses.
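As referenced above, the drop-verification step (weighing several trial drops of powder and shot against the published recipe) amounts to a simple tolerance check. The Python sketch below is illustrative only: the recipe targets and measured drops are hypothetical placeholders rather than a published recipe, and the tolerances are the approximate figures quoted above.

```python
# Minimal sketch of the drop-verification step for a shotshell press.
# Recipe targets and measured drops (in grains) are hypothetical placeholders,
# not a published loading recipe; tolerances are the approximate figures above.

def within_tolerance(drops: list[float], target: float, tol: float) -> bool:
    """True if every trial drop is within +/- tol grains of the recipe target."""
    return all(abs(d - target) <= tol for d in drops)

powder_target, powder_tol = 17.0, 0.3      # grains (hypothetical target, ~0.2-0.3 gr tolerance)
shot_target, shot_tol = 480.0, 5.0         # grains (hypothetical target, +/- 3-5 gr tolerance)

powder_drops = [16.9, 17.1, 17.0, 16.8, 17.2]      # 5 to 10 trial drops are recommended
shot_drops = [478.0, 481.5, 479.0, 482.0, 480.5]

print("powder drops OK:", within_tolerance(powder_drops, powder_target, powder_tol))
print("shot drops OK:  ", within_tolerance(shot_drops, shot_target, shot_tol))
```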
When reloading brass shotgun shells, traditional felt and paperboard wads (both over powder and over shot) are generally used rather than plastic wads. Reloading brass shotshells is not widely practiced. Reloading process: Shotguns, in general, operate at much lower pressures than pistols and rifles: typically 10,000 psi or less for 12 gauge shells, whereas rifles and pistols routinely operate at pressures in excess of 35,000 psi, and sometimes upwards of 50,000 psi. The SAAMI maximum permitted pressure is only 11,500 psi for 12 gauge 2-3/4 inch shells, so the typical operating pressures of many shotgun shells are only slightly below the maximum permitted for safe ammunition. Because of this small difference between typical operating pressures and the maximum industry-allowed pressures, and because even small changes in components can cause pressure variances in excess of 4,000 psi, the components used in shotshell reloading must not be varied from published recipes; the margin of safety relative to operating pressure is much lower for shotguns than for pistols and rifles. This lower operating pressure for shotguns and shells is also the reason why shotgun barrels have noticeably thinner walls than rifle and pistol barrels. Legal aspects: Since many countries heavily restrict the civilian possession of ammunition and ammunition components, including primers and smokeless powder, handloading may be explicitly or implicitly illegal in some of them. Even without specific restrictions on powder and primers, they may be covered under other laws governing explosive materials. In some jurisdictions, handloading requires study and the passing of an exam to acquire a handloading permit before ammunition may be handloaded. This is done to minimize catastrophic accidents caused by a lack of knowledge or skill, and it also allows the government to maintain information on who reloads their own cartridges. The standards organization C.I.P. rules that handloaded ammunition which does not comply with the C.I.P. approval rules for commercial ammunition manufacturers cannot be legally sold in C.I.P. member states. Legal aspects: Many firearms manufacturers explicitly advise against the use of handloaded ammunition. Generally, this means that the maker's warranty is void, and the manufacturer is not liable for any damage to the gun or personal injury if handloaded ammunition that exceeds established limits for a particular arm is used. This arises because firearm manufacturers point out that while they have some influence and scope for redress with ammunition manufacturers, they have no such influence over the actions of incompetent or overly ambitious individuals who assemble ammunition. Legal aspects: United States In the United States, handloading is not only legal and requires no permit, but is also quite popular. Experts point to potential legal liabilities (depending on the jurisdiction) that the shooter may incur if using handloaded ammunition for defense, such as an implied malice on the part of the shooter, as the use of handloaded ammunition may give the impression that "regular bullets weren't deadly enough". Additionally, forensic reconstruction of a shooting relies on using identical ammunition from the manufacturer, whereas handloaded ammunition cannot be guaranteed identical to the ammunition used in the shooting, since "the defendant literally manufactured the evidence".
In particular, powder residue patterning is used by law enforcement to validate the distance between the firearm and the person shot, using known facts from the manufacturer about powder type, content, and other factors. Legal aspects: Canada Handloading is legal in Canada. The Explosives Act places limits on the amount of powder (either smokeless or black) that may be stored in a building, on the manner in which it is stored, and on how much powder may be available for use at any time. The Act is the responsibility of Natural Resources Canada. If the quantity of powder stored for personal use exceeds 75 kg, then a Propellant Magazine Licence (Type P) is required. There is no limit on the number of primers that may be stored for non-commercial use. Legal aspects: Germany As an example of a European country, Germany requires handloaders to complete a course, ending in an exam, in handloading and the handling of explosive propellants; often, this is offered in combination with a course and exam in muzzle-loading and black-powder shooting. The state's Ministry of the Interior conducts the exam. Once the exam is passed and the reloader can demonstrate a need to reload ("Bedürfnisprüfung"), he can apply for a permit for a quota of propellant valid for five years (after which the permit has to be extended). Every propellant is recorded in the permit. Primers, cartridges, bullets, and reloading equipment are available without a permit. Legal aspects: As German law gives maximum pressures for every commercial caliber, the handloader is allowed to give away his ammunition non-commercially. He is liable for incorrect loading. His references are data books by propellant manufacturers (like RWS), bullet manufacturers (like Speer), reloading tool manufacturers (like Lyman), or neutral institutions like the DEVA. Firearms manufacturers give guarantees as long as the handloaded ammunition is within the correct parameters. Legal aspects: The relevant rules for non-commercial application can be found in §27 of the Explosives Act ("Sprengstoffgesetz"). In order to investigate a gun's destruction (material fault or incorrectly loaded ammunition), and for handloaders to get data for new loads, the gun and/or handloaded cartridges can be sent to the DEVA institute (the German institute for testing and examining hunting and sporting guns); the DEVA returns a pressure diagram and a report on whether the load is within the legal range for that ammunition. Legal aspects: South Africa Handloading or reloading is allowed in South Africa as long as the handloader holds a competency certificate to possess a firearm as well as a license for that firearm. Sport shooters load to make shooting sports more affordable, and hunters load to obtain greater accuracy. Powder and primers are strictly controlled by law and may not exceed 2 kg of powder and 2,400 primers. The amount of ammunition that may be possessed is also limited to 200 rounds per chambering. For a registered dedicated sportsman, these quantities are unlimited, although storing large amounts of powder is dangerous due to the potential for fire from accidental ignition. A manual from the South African powder manufacturer Rheinmetall Denel Munition (previously Somchem) is available for reloaders, with adequate information and guidelines.
Atypical handloading: Berdan primers, with their off-center flash holes and lack of a self-contained anvil, are more difficult to work with than the easily removed Boxer primers. The primers may be punctured and pried out from the rear, or extracted with hydraulic pressure. Primers must be selected carefully, as there are more sizes of Berdan primers than the standard large and small pistol and large and small rifle sizes of Boxer primers. The case must also be inspected carefully to make sure the anvil has not been damaged, because this could result in a failure to fire. Rimfire cartridges (e.g. 22 Long Rifle) are not generally hand-loaded in modern times, although there are some shooters who unload commercial rimfire cartridges and use the primed case to make their own loads or to generate special rimfire wildcat cartridges. These cartridges are highly labor-intensive to produce. Historically, liquid priming material was available for reloading rimfire ammunition, but the extreme explosive hazard of bulk primer compound and the complexity of the process (including "ironing out" the firing pin strike) caused the practice to decline. Atypical handloading: Some shooters desiring to reload for obsolete rimfire cartridges alter the firearm in question to function as a centerfire, which allows them to reload. Often it is possible to reform cases from similarly sized ammunition which is in production, and this is the most economical way of obtaining brass for obscure or out-of-production calibers. Even if custom brass must be manufactured, this is often far less expensive than purchasing rare, out-of-production ammunition. Cartridges like the 56-50 Spencer, for example, are not readily obtainable in rimfire form, but can be made from shortened 50-70 cartridges or even purchased in loaded form from specialty dealers. An unusual solution to the problem of obtaining ammunition for the very old pinfire cartridges is even available. This solution uses specialized cartridges with a removable pin and anvil which hold a percussion cap of the type used in caplock firearms. To reload a fired case, the pin is removed, allowing the anvil to slide out; a percussion cap is placed in the anvil, the anvil is re-inserted, and the pin serves to lock the anvil in place, as well as to ignite the percussion cap. Atypical handloading: Shotshell reloading is sometimes done for scattershot loads, consisting of multiple wads separating groups of shot, which are intended for short-distance bird hunting. Similarly, buckshot loads and non-lethal "bean bag" loads are sometimes handloaded. These types of shotshells are rarely handloaded. Accuracy considerations: Precision and consistency are key to developing accurate ammunition. Various methods are used to ensure that ammunition components are as consistent as possible. Since the firearm is also a variable in the accuracy equation, careful tuning of the load to a particular firearm can yield significant accuracy improvements. Accuracy considerations: Cases The internal volume of the cartridge case, or case capacity, significantly affects the pressure developed during ignition, which in turn significantly affects the velocity of the bullet. Cases from different manufacturers can vary in wall thickness, and as cases are repeatedly fired and reloaded, the brass flows up to the neck and is trimmed off, increasing capacity as well as weakening the case.
The first step to ensuring consistent case capacity is sorting the cases by headstamp, so each lot of cases is from the same manufacturer and/or year. A further step would be to weigh these cases and sort them by case weight. The neck of the case is another variable, since it determines how tightly the bullet is held in place during ignition. Inconsistent neck thickness and neck tension will result in variations in pressure during ignition. These variables can be addressed by annealing and thinning the neck, as well as by careful control of the crimping operation. Accuracy considerations: Bullets Bullets must be well balanced and consistent in weight, shape, and seating depth to ensure that they correctly engage the rifling, exit the barrel at a consistent velocity, and fly straight. Buying bullets from a high-quality source will help ensure quality, but for ultimate accuracy, some shooters will measure even the best bullets and reject all but the most consistent. Measurement of the weight is the easiest, and bullets that are out of round can be detected by rotating the bullet while measuring with a micrometer. There is even a device available that will detect changes in jacket thickness and internal voids in jacketed rifle bullets, though its high cost makes it prohibitively expensive for all but the most dedicated shooters. The transition from the case to the barrel is also very important. If the bullets have to travel a varying distance from the case to the point where they engage the rifling, this can result in variations in pressure and velocity. The bearing surface of the bullet should ideally be seated as close as possible to the rifling. Since it is the bearing surface that matters here, it is important that the bullets have a consistent bearing surface. Accuracy considerations: Load tuning Tuning a load to a particular gun can also yield great increases in accuracy, especially for standard, non-accurized rifles. Different rifles, even of the same make and model, will often react to the same ammunition in different ways. The handloader is afforded a wider selection of bullet weights than can readily be found in commercially loaded ammunition, and there are many different powders that can be used for any given cartridge. Trying a range of bullets and a variety of powders will determine what combination of bullet and powder gives the most consistent velocities and the best accuracy (a simple way to compare velocity consistency from chronograph data is sketched below). Careful adjustment of the amount of powder can give the velocity that best fits the natural harmonics of the barrel (see accurize and internal ballistics). For ultimate accuracy and performance, the handloader also has the option of using a wildcat cartridge; wildcats are the result of shaping the cartridge and chamber themselves to a specific end, and the results push the envelope of velocity, energy, and accuracy. Most, but not all, reloads perform best when the powder selected fills 95% or more of the case (by volume). Cost considerations: Those who reload with the primary goal of maximizing accuracy or terminal performance may end up paying more per reloaded round than for commercial ammunition; this is especially true for military calibers, which are commonly available as surplus. Maximum performance, however, requires the highest quality components, which are usually the most expensive. Reloaders whose primary goal is saving money on ammunition, however, can make a few tradeoffs to realize significant cost savings with a minimal sacrifice in quality.
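As referenced in the load-tuning discussion above, one common way to judge which trial load gives the most consistent velocities is to compare the spread of chronograph readings. The Python sketch below assumes hypothetical chronograph data; the load names and velocities are placeholders, not measured results, and the standard-deviation comparison is offered only as one reasonable way to summarize consistency.

```python
# Comparing the velocity consistency of two trial loads from chronograph strings.
# The load names and velocities (ft/s) are hypothetical placeholders.
from statistics import mean, pstdev

loads = {
    "trial load A": [2805, 2812, 2798, 2820, 2808],
    "trial load B": [2790, 2835, 2770, 2850, 2805],
}

for name, velocities in loads.items():
    extreme_spread = max(velocities) - min(velocities)
    print(f"{name}: mean {mean(velocities):.0f} ft/s, "
          f"SD {pstdev(velocities):.1f} ft/s, ES {extreme_spread} ft/s")

# The load with the smaller standard deviation and extreme spread is the more
# consistent one, and is usually the better starting point for fine tuning.
```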
Cost considerations: Case life maximization Since the case is the single most expensive part of a loaded round, the more times a case can be re-used, the better. Cases that are loaded to a moderate pressure will generally last longer, as they will not be work hardened or flow under pressure as much as cases loaded to higher pressures. Use of moderate pressure loads extends the life of the case significantly, and also saves quite a bit of wear and tear on the barrel. Work hardening can cause cracks to occur in the neck as the hardened brass loses its malleability and becomes unable to survive being swaged back into shape during the resizing operation. Rifle brass tends to flow towards the neck (this is why rifle brass must be trimmed periodically), and this takes brass away from the rear of the case. Eventually, this will show as a bright ring near the base of the cartridge, just in front of the thick web of brass at the base. If brass is used after this ring appears, it risks a crack, or worse, a complete head separation, which will leave the forward portion of the brass lodged in the chamber of the gun. This generally requires a special stuck case removal tool to extract, so a head separation is very undesirable. With bottlenecked cartridge cases, choosing the right sizing die can also be important. Full-length sizing of cartridges is often thought to greatly shorten case life by work hardening the full length of the case, which can cause the case neck to split, although some studies show that, when the limiting factor is neck hardening, the number of reloads possible with a case is essentially the same for full-length sizing as for neck-only sizing. If the reloaded cartridges are going to be used in the same firearm in which they were previously fired, though, and if that firearm has a bolt action or other action with a strong camming action on closing, then full-length resizing may not be needed. A collet neck sizing die can be used to size just the case neck enough to hold the bullet and leave the rest of the case unsized. The resulting cartridge will chamber in the specific rifle that previously fired it, though the fit might be tight and require more force to chamber than a full-length resized case. The use of a neck-sizing die in conjunction with moderate pressure loads may extend the life of the case significantly by minimizing the amount of the case that is work hardened or stretched. This is especially true for reloads intended for military rifles with intentionally large chambers, such as the Lee–Enfield in .303 British. The use of partial-length or neck sizing for cartridges used in such large chambers effectively switches the headspacing from the rim of a rimmed cartridge to the shoulder of the bottleneck transition instead, increasing the number of times a rimmed military cartridge can be reloaded from once to perhaps 5 or more times, all while avoiding dangerous incipient head separations. One final way of limiting case wear is restricted to benchrest shooters with custom-cut chambers. The chamber of these rifles is cut so that there is just enough room, typically just a few thousandths of an inch, in the neck area. The result of using this type of chamber is that cases do not require any resizing whatsoever after they are fired. The brass will 'spring back' a bit after firing, and will properly hold a new bullet without further manipulation.
Some refer to this as a 'fitted' neck; however, it is a function of both the carefully cut precision chamber neck and a case adjusted to fit with very little clearance. Work hardening happens to all cases, even low-pressure handgun cases. The sudden increase in pressure upon firing hits the brass like a hammer, changing its crystalline structure and making it more brittle. The neck of the case, if it becomes too brittle, will be incapable of withstanding the strain of resizing, expanding, crimping, and firing, and will split during loading or firing. Since the case neck remains in tension while holding the bullet in place, aging ammunition may develop split necks in storage. While a neck split during firing is not a significant danger, a split neck will render the case incapable of holding the bullet in place, so the case must be discarded or recycled as a wildcat cartridge of shorter overall length, allowing the split section to be removed. The simplest way to decrease the effects of work hardening is to decrease the pressure in the case. Loading to the minimum power level listed in the reloading manual, instead of the maximum, can significantly increase case life. Slower powders generally also have lower pressure peaks and may be a good choice. Annealing brass to make it softer and less brittle is fairly easy, but annealing cartridge cases is a more complex matter. Since the base of the case must be hard, it cannot be annealed. What is needed is a form of heat treatment called differential hardening, where heat is carefully applied to part of the case until the desired softness is reached, and then the heat treatment process is halted by rapidly cooling the case. Since annealing brass requires heating it to about 660 °F (350 °C), the heating must be done in such a way as to bring the neck to that temperature while preventing the base of the case from being heated and losing its hardness. The traditional way is to stand the cases in a shallow pan full of water, then heat the necks of the cases with a torch, but this method makes it difficult to get even heating of the entire case neck. A temperature-sensitive crayon can be applied at the point down to which the case is to be annealed, which is just behind the shoulder for bottlenecked cartridges, or at the bottom of the bullet seating depth for straight-wall cartridges. The neck of the case is placed in a propane torch flame and heated until the crayon mark changes color, indicating the correct temperature. Once the correct temperature is reached, the case is quenched completely in water to stop the annealing process at the desired hardness. Failing to keep the base of the case cool can anneal the case near the head, where it must remain hard to function properly. Another approach is to immerse the case mouth for a few seconds in a molten lead alloy that is at the desired annealing temperature, then quickly shake off the lead and quench the case. Cases that have small cracks at the neck may not be a complete loss. Many cartridges, both commercial and wildcat, can be made by shortening a longer cartridge. For example, a .223 Remington can be shortened to become a .222 Remington, which can further be shortened to become a .221 Fireball. Similarly, a .30-06 Springfield can become a .308 Winchester, which can become any number of specialized benchrest shooting cartridges. Since the cracking is likely due to a brittle neck, the cases should be annealed before attempting to reform them, or the crack may propagate and ruin the newly formed shorter case as well.
Cost considerations: Powder cost minimization Powder is another significant cost of reloading, and one over which the handloader has significant control. In addition to the obvious step of using a minimum charge rather than a full-power one, significant cost savings may be obtained through careful powder choice. Given the same bullet and cartridge, a faster-burning powder will generally require a smaller charge than a slower powder. For example, a 44 Magnum firing a 240-grain lead semi-wadcutter could be loaded with either Accurate Arms #2, a very fast pistol powder, or #9, a very slow pistol powder. When using the minimum loads, 9.0 grains (0.58 g) of AA #2 yield a velocity of 1126 ft/s (343 m/s), and 19.5 grains (1.26 g) of #9 yield 1364 ft/s (416 m/s). For the same amount of powder, AA #2 can therefore produce approximately twice as many rounds, yet both powders cost about the same per unit weight (a short rounds-per-pound comparison is sketched below). Cost considerations: The tradeoff comes in terms of power and accuracy; AA #2 is designed for small cases and will burn inconsistently in the large 44 Magnum case. AA #9, however, will fill the case much better, and the slow burn rate of AA #9 is ideal for magnum handgun rounds, producing 20% higher velocities (at maximum levels) while still producing less pressure than the fast-burning AA #2. A medium-burning powder might actually be a better choice, as it could split the difference in powder weights while delivering more power and accuracy than the fastest powder. One solution that is applicable to revolvers, in particular, is the possibility of using a reduced-volume case. Cartridges such as the .357 Magnum and .44 Magnum are just longer versions of their parent rounds, the .38 Special and .44 Special, and the shorter rounds will fire in the longer chambers with no problems. The reduced case capacity allows greater accuracy with even lighter loads. A .44 Special loaded with a minimum load of AA #2 uses only 4.2 grains (0.27 g) of powder, and produces a modest 771 ft/s (235 m/s). It is important to note that when reloading .38 Special and .44 Special, extreme care must be exercised not to exceed maximum powder specifications; a .357 Magnum load must never be used in a .38 Special case, because even though the powder charge may fit, the difference in case volumes will likely create an overpressure scenario and unsafe conditions. Bullets: While the case is usually the most expensive component of a new cartridge, the bullet is usually the most expensive part of each reloaded round, especially with handgun ammunition, because a bullet is used only once while a case lasts for many reloadings. The bullet is therefore also the best place to save money with handgun ammunition. Bullets: Another advantage of casting bullets, or of swaging them from lead wire (which is pricier but avoids many of the quality-control issues of casting), is the ability to precisely control many attributes of the resulting bullet. Custom bullet molds are available from a number of sources, allowing the handloader to pick the exact weight, shape, and diameter of the bullet to fit the cartridge, firearm, and intended use. A good example of where this is useful is for shooters of older military surplus firearms, which often exhibit widely varying bore and groove diameters; by making bullets specifically intended for the firearm in question, the accuracy of the resulting cartridges can be significantly increased.
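Returning briefly to the powder-economy comparison above: with 7,000 grains to the pound, the per-pound yield of each charge weight is a one-line calculation. The Python sketch below reuses only the two charge weights already quoted in the text and is not a loading recommendation.

```python
# Rounds obtainable per pound of powder for the two charge weights quoted above.
# Figures are only those already given in the text; this is not a loading recipe.

GRAINS_PER_POUND = 7000

charges = {"Accurate Arms #2": 9.0, "Accurate Arms #9": 19.5}   # grains per round

for powder, grains_per_round in charges.items():
    rounds_per_pound = GRAINS_PER_POUND / grains_per_round
    print(f"{powder}: {grains_per_round} gr/round -> about {rounds_per_pound:.0f} rounds per pound")

# Roughly 778 rounds versus 359 rounds: about twice as many rounds from the
# faster powder, which is the cost difference described in the text above.
```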
Bullets: Casting For the truly frugal, the cheapest method of obtaining bullets, buckshot, and slugs intended for reloading use at low to moderate velocities is casting them. Bullets: This requires a set of bullet, buckshot, or slug molds, which are available from a number of sources, and a source of known-quality lead. Linotype and automotive wheelweights are often used as sources of lead that are blended together in a molten state to achieve the desired Brinell hardness. Other sources of scrap lead, such as recovered bullets, lead cable sheathing, lead pipe, or even lead–acid battery plates (extreme caution should be used, as modern battery components, when melted, can yield hazardous, even deadly, gases), can yield usable lead with some degree of effort, including purification and measuring of hardness. Cast bullets are also the cheapest bullets to buy, though generally only handgun bullets are available in this form. Some firearms manufacturers, such as those using polygonal rifling like Glock and H&K, advise against the use of cast bullets. For shooters who would like to shoot cast bullets, aftermarket barrels with conventional rifling are generally available for these models, and the cost of the barrel can generally be recouped in ammunition savings after a few thousand rounds. Bullets: Soft lead bullets are generally used in handguns at velocities of 1000 ft/s (300 m/s) or lower, while harder cast bullets may be used, with careful powder selection, in rifles at velocities of 2000 ft/s (600 m/s) or slightly more. The limit is the point at which the powder gas temperature and pressure start to melt the base of the bullet and leave a thin coating of molten and re-solidified lead in the bore of the gun, a process called leading the bore. A modern solution to the velocity limitations of cast projectiles is to powder coat the projectile, encasing it in a protective skin that allows higher velocities to be achieved with softer lead alloys with no lead buildup in the firearm. Cast lead bullets may also be fired in full-power magnum handgun rounds like the 44 Magnum with the addition of a gas check, which is a thin aluminum, zinc, or copper washer or cup that is crimped over a tiny heel on the base of appropriate cast bullets. This provides protection for the base of the bullet, and allows velocities of over 1500 ft/s (450 m/s) in handguns, with little or no leading of the bore. Such cast lead bullets, intended for use with a gas check, will have a reduced diameter at the rear, onto which the gas check can be swaged using a lubricating/resizing press. All cast lead bullets, whether with or without a gas check, must still be lubricated to prevent leading of the rifling of the barrel. A lubricating/resizing press, which is a special-purpose bullet processing press, can be either a standalone press dedicated to lubricating and resizing bullets, or an add-on to a reloading press, at the option of the handloader. Not all handloaders resize cast lead bullets, although all of them do lubricate them. An alternative to using a lubricating press is simply to coat the bullets with bullet lube, which can be done with a spray, in a tumbler, in a plastic bowl with a liquid lube, in a tray with melted bullet lube, or even by hand. Bullets: Slugs are also commonly cast from pure lead by handloaders, for subsequent reloading into shotgun shells.
Roll crimps on the shotgun hull are commonly used when handloading these cast lead slugs, in place of the fold crimps used when reloading shot into shotgun shells, although some published recipes specifically do call for fold crimps. For published recipes using fold crimps, with shot wads serving as sabots, slugs can easily be reloaded using standard shotshell presses and techniques, without requiring any roll crimp tools. Whether roll crimps or fold crimps are used, cast lead slugs are commonly used in jurisdictions where rifles are banned for hunting, under the reasoning that fired slugs travel only short distances, unlike rifle bullets, which can travel up to several miles. Use of cast lead slugs is therefore very common when hunting large game near populated areas. Bullets: Similarly, handloaders often cast lead buckshot for reloading into shotgun shells for hunting larger game animals. Such buckshot is then placed by hand into shotgun shells when handloading, due to the need to stack the buckshot balls into specific configurations depending on the gauge of shotgun shell being reloaded, the choice of wad, the volume of powder, and the size of the buckshot (e.g., 00, 000, 0000 buckshot). Such cast lead buckshot is never simply dropped from a shotshell press charge bar into a shotgun shell when reloading. Bullets: Swaging Most shooters prefer jacketed bullets, especially in rifles and pistols. The hard jacket material, generally copper or brass, resists deformation and handles far higher pressures and temperatures than lead. Several companies offer swaging presses (both manual and hydraulic) that can manufacture, on a small scale, jacketed bullets that rival or surpass the quality of commercial jacketed bullets. Two swaging equipment manufacturers offer equipment and dies designed to turn 22 Long Rifle cases into brass jackets for 22 caliber (5.56 mm) bullets. Example variants of swage dies include: R dies, used for bullet swaging in the reloading press. No expensive special press is needed; however, the reloading press cannot swage all calibers and variants of bullets. Bullets: S dies, steel dies for a manual press. They have a maximum caliber of .458 inches (11.6 mm) and a maximum jacket length of 1.3 inches (33 mm). H dies, designed for hydraulic presses, offered in calibers up to 25 millimetres (0.98 in) and jacket lengths of more than 1.3 inches. In a hydraulic press, bullets can even be swaged from powdered metal. Each bullet diameter, and most bullet types, require their own special dies, making swaging a rather investment-intensive enterprise. Bullets: Purchased Bullets Handloaders have the choice to swage, but most choose to purchase pre-made jacketed bullets, due to the obscure nature of swaging and the specialized, expensive equipment required. The process of manufacturing a jacketed bullet is far more complex than for a cast bullet; first, the jacket must be punched from a metal sheet of precise thickness, filled with a premeasured lead core, and then swaged into shape with a high-pressure press in multiple steps. This involved process makes jacketed bullets far more expensive on average than cast bullets. Further complicating this are the requirements for controlled-expansion bullets (see terminal ballistics), which require a tight bond between the jacket and the core. Premium expanding bullets, along with match-grade bullets, are in the top tier of expense.
Bullets: Plated Bullets A more economical alternative became available to the handloader in the 1980s: the copper-plated bullet. Copper-plated bullets are lead bullets that are electroplated with a copper jacket. While thinner than a swaged bullet jacket, the plated jacket is far thicker than normal electroplate, and provides significant structural integrity to the bullet. Since the jacket provides the strength, soft lead can be used, which allows bullets to be swaged or cast into shape before plating. While not strong enough for most rifle cartridges, plated bullets work well in many handgun rounds, with a recommended maximum velocity of 1250 ft/s (375 m/s). Plated bullets fall between cast and traditional jacketed bullets in price. Bullets: While originally sold only to handloaders as an inexpensive substitute for jacketed bullets, the plated bullet has come a long way. The ammunition manufacturer Speer now offers the Gold Dot line, commercially loaded premium handgun ammunition using copper-plated hollow-point bullets. The strong bond between jacket and core created by the electroplating process makes expanding bullets hold together very well, and the Gold Dot line is now in use by many police departments.
**International Nomenclature of Cosmetic Ingredients** International Nomenclature of Cosmetic Ingredients: The International Nomenclature of Cosmetic Ingredients (INCI) is a system of unique identifiers for cosmetic ingredients such as waxes, oils, pigments, and other chemicals, assigned in accordance with rules established by the Personal Care Products Council (PCPC), previously the Cosmetic, Toiletry, and Fragrance Association (CTFA). INCI names often differ greatly from systematic chemical nomenclature or from more common trivial names, and are a mixture of conventional scientific names, Latin, and English words. INCI nomenclature conventions "are continually reviewed and modified when necessary to reflect changes in the industry, technology, and new ingredient developments". INCI and CAS: The relationship between a CAS Registry Number and an INCI name is not always one-to-one. In some cases, more than one INCI name may have the same CAS number, or more than one CAS number may apply to an INCI name. For example, the CAS number 1245638-61-2 has the CA Index Name of 2-Propenoic acid, reaction products with pentaerythritol. This CAS number can accurately be associated with two INCI names: Pentaerythrityl Tetraacrylate and Pentaerythrityl Triacrylate. Alternatively, the INCI name Glucaric Acid can be associated with two CAS numbers: 87-73-0, which has the CA Index Name of D-Glucaric acid, and 25525-21-7, which has the CA Index Name of DL-Glucaric acid. Both of these examples are accurate associations between CAS and INCI (they are illustrated in the short sketch at the end of this article). Table of common names: A table of several common names and their corresponding INCI names is given in the source; in some cases the common name and the INCI name are identical. INCI labeling: In the U.S., under the Food, Drug, and Cosmetic Act and the Fair Packaging and Labeling Act, certain information is required to appear on labels of cosmetic products. In Canada, the regulatory guideline is the Cosmetic Regulations. In the EU, ingredient names must by law be listed using INCI names. These cosmetic regulations are enforced in the interest of consumer safety. For example, the ingredient declaration allows purchasers to reduce the risk of an allergic reaction by avoiding ingredients to which they have previously reacted. INCI names are mandated on the ingredient statement of every consumer personal care product. The INCI system allows the consumer to identify the ingredient content. In the U.S., true soaps (as defined by the FDA) are specifically exempted from INCI labeling requirements for cosmetics under FDA regulation.
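As noted above, the CAS-to-INCI relationship is many-to-many, which is straightforward to model explicitly. The short Python sketch below encodes only the two examples given in the text; it is illustrative, not an authoritative registry.

```python
# The CAS <-> INCI relationship is not one-to-one, so a mapping must allow
# multiple values on either side. Only the two examples from the text are encoded.

cas_to_inci = {
    "1245638-61-2": ["Pentaerythrityl Tetraacrylate", "Pentaerythrityl Triacrylate"],
}
inci_to_cas = {
    "Glucaric Acid": ["87-73-0", "25525-21-7"],   # D-glucaric acid and DL-glucaric acid
}

print(cas_to_inci["1245638-61-2"])    # one CAS number, two INCI names
print(inci_to_cas["Glucaric Acid"])   # one INCI name, two CAS numbers
```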
**Bushing (isolator)** Bushing (isolator): A bushing or rubber bushing is a type of vibration isolator. It provides an interface between two parts, damping the energy transmitted through the bushing. A common application is in vehicle suspension systems, where a bushing made of rubber (or, more often, synthetic rubber or polyurethane) separates the faces of two metal objects while allowing a certain amount of movement. This movement allows the suspension parts to move freely, for example, when traveling over a large bump, while minimizing transmission of noise and small vibrations through to the chassis of the vehicle. A rubber bushing may also be described as a flexible mounting or antivibration mounting. Bushing (isolator): These bushings often take the form of an annular cylinder of flexible material inside a metallic casing or outer tube. They might also feature an internal crush tube which protects the bushing from being crushed by the fixings which hold it onto a threaded spigot. Many different types of bushing designs exist. An important difference compared with plain bearings is that the relative motion between the two connected parts is accommodated by strain in the rubber, rather than by shear or friction at the interface. Some rubber bushings, such as the D block for a sway bar, do allow sliding at the interface between one part and the rubber. History: Charles E. Sorensen credits Walter Chrysler as being a leader in encouraging the adoption of rubber vibration-isolating mounts. In his memoir (1956), he says that, on March 10, 1932, Chrysler called at Ford headquarters to show off a new Plymouth model. History: "The most radical feature of his car was the novel suspension of its six-cylinder engine so as to cut down vibration. The engine was supported on three points and rested on rubber mounts. Noise and vibration were much less. There was still a lot of movement of the engine when idling, but under a load it settled down. Although it was a great success in the Plymouth, Henry Ford did not like it. For no given reason, he just didn't like it, and that was that. I told Walter that I felt it was a step in the right direction, that it would smooth out all noises and would adapt itself to axles and springs and steering-gear mounts, which would stop the transfer of road noises into the body. Today rubber mounts are used on all cars. They are also found on electric-motor mounts, in refrigerators, radios, television sets—wherever mechanical noises are apparent, rubber is used to eliminate them. We can thank Walter Chrysler for a quieter way of life. Mr. Ford could have installed this new mount at once in the V-8, but he missed the value of it. Later Edsel and I persuaded him. Rubber mounts are now found also in doors, hinges, windshields, fenders, spring hangers, shackles, and lamps—all with the idea of eliminating squeaks and rattles."Lee Iacocca credits Chrysler's chief of engineering during that era, Frederick Zeder, with leading the effort. Iacocca said that Zeder "was the first man to figure out how to get the vibrations out of cars. His solution? He mounted their engines on a rubber base." In Vehicles: A bushing is a type of bearing that is used in the suspension system of a vehicle. It is typically used to connect moving parts such as control arms and sway bars to the frame of the vehicle, and also to isolate these parts from each other and from the frame. 
The main function of a bushing is to reduce the transmission of vibrations and shocks from the road to the rest of the vehicle, which helps to improve the overall ride comfort and reduce noise and harshness inside the vehicle.
**Retinol-binding protein** Retinol-binding protein: Retinol-binding proteins (RBP) are a family of proteins with diverse functions. They are carrier proteins that bind retinol. Assessment of retinol-binding protein is used to determine visceral protein mass in health-related nutritional studies. Retinol-binding protein: Retinol and retinoic acid play crucial roles in the modulation of gene expression and the overall development of an embryo. However, a deficit or excess of either of these substances can cause early embryo mortality or developmental malformations. The regulation of retinol transport and metabolism necessary for a successful pregnancy is accomplished via RBP. Retinol-binding proteins have been identified within the uterus, embryo, and extraembryonic tissue of the bovine, ovine, and porcine, clearly indicating that RBP plays a role in proper retinol exposure of the embryo and successful transport at the maternal-fetal interface. Further research is necessary to determine the exact effects of poor RBP expression on pregnancy and threshold levels for said expression. Genes: Cellular: RBP1, RBP2, RBP5, RBP7 Interstitial: RBP3 Plasma: RBP4 RBP in pregnancy: Retinol plays a crucial role in the growth and differentiation of various body tissues, and it has previously been characterized that embryos are extremely sensitive to alterations in retinol concentration, which can lead to spontaneous abortion and malformations during development. Within a mature animal, retinol is transported, bound to RBP, from the liver to the desired target tissue via the circulatory system. RBP is also bound to a carrier protein, transthyretin. The process by which RBP releases retinol for cellular availability is still unknown and has not been conclusively determined. RBP in pregnancy: Sites of synthesis Traditionally, RBP is synthesized within the liver, with secretion being dependent upon retinol concentrations. However, retinol concentration does not appear to have an effect upon transcription of RBP messenger RNA (mRNA), which remains constant. The literature reveals that the bovine endometrium has also been identified as a site of RBP synthesis, as have the conceptus and extraembryonic tissues of various livestock species. RBP in pregnancy: Types Plasma retinol-binding protein, the retinol transport vehicle in serum. CRBP I/II, cellular retinol-binding proteins involved in the transport of retinol and its metabolism into retinyl esters for storage or into retinoic acid. CRABPs, cellular retinoic acid–binding proteins capable of binding retinol and retinoic acid with high affinity. It has also been characterized that CRABPs are involved in many aspects of the retinoic acid signaling pathway, such as the regulation and availability of retinoic acid to nuclear receptors. RBP in pregnancy: Presence in livestock species during gestation Bovine/Ovine RBP, identical to that found in plasma, has been identified in the placental tissues of both the ovine and the bovine, suggesting that RBP may be highly involved in retinol transport and metabolism during pregnancy. However, the exact timing of expression has yet to be identified. An antiserum specific for bovine conceptus RBP, together with immunohistochemistry, has been utilized to identify the presence of RBP at different stages of early pregnancy. Strong immunostaining and hybridization were observed in the trophectoderm of tubular, but not spherical, blastocysts at day 13. RBP mRNA was localized to epithelial cells of the chorion, allantois, and amnion at day 45 of pregnancy.
Lastly, RBP mRNA was detected in the cotyledons, the fetal contribution to the placenta and the site of attachment to the uterine epithelium for fetal/maternal exchange. Expression of RBP in developing conceptuses, extraembryonic membranes, and at the fetal-maternal interface indicates that the extraembryonic membranes may regulate retinol transport and metabolism via RBP. Within the uterus of pregnant bovines, it has been found that RBP synthesis in the luminal and glandular epithelium is quite similar to that of a cyclic animal; however, after day 17 of the estrous cycle, RBP levels are maintained and continue to rise gradually throughout gestation. It has also been suggested that ovarian steroids may play a role in regulating uterine RBP expression. RBP in pregnancy: Porcine All three previously mentioned types of retinol-binding proteins (RBP, CRBP, CRABP) have been identified within the porcine placenta during pregnancy via immunohistochemistry. As previously mentioned, retinol and retinoic acid are modulators of gene expression and are necessary for the proper development and growth of a conceptus. The porcine placenta is of a diffuse type with areolar-gland subunits, which allow for the transport of larger molecules between dam and fetus. RBP and CRBP have been identified in the endometrial glands and areolar trophoblasts, suggesting that RBP is crucial in the transport of retinol from the gland to the trophectoderm of the conceptus. RBP expression has also been identified within the yolk sac, myometrium, oviduct, and numerous other fetal tissues.
**Atom shell** Atom shell: Atom shell may refer to either what is properly called an electron shell or an atomic orbital that makes up an electron subshell. Atom shell may also refer to: The final track of the album A City Dressed in Dynamite by American experimental rock band That Handsome Devil Electron (software framework), originally named Atom Shell
**Artisan cheese** Artisan cheese: Artisanal cheese refers to cheeses produced by hand using the traditional craftsmanship of skilled cheesemakers. As a result, the cheeses are often more complex in taste and variety. Many are aged and ripened to achieve certain aesthetics. This contrasts with the milder flavors of mass-produced cheeses made in large-scale operations, which are often shipped and sold right away. Part of the artisanal cheese-making process is the aging and ripening of the cheeses to develop flavor and textural characteristics. One type of artisanal cheese is known as farmstead cheese, made traditionally with milk from the producer's own herds of cows, sheep, and goats. Artisan cheeses may be made by mixing milk from multiple farms, whereas the stricter definition of farmstead cheese (or farmhouse cheese) requires that milk come only from one farm. Definition: There has been considerable discussion about what truly defines artisanal cheese. According to the American Cheese Society, “The word ‘artisan’ or ‘artisanal’ implies that a cheese is produced primarily by hand, in small batches, with particular attention paid to the tradition of the cheesemaker's art and thus using as little mechanization as possible in production of the cheese. Artisan, or artisanal, cheese may be made from all types of milk and may include various flavorings.” While the exact definition remains debated, those involved in the industry share a passion for making hand-crafted products, with or without the aid of some manufacturing equipment, to be enjoyed by many consumers. Process: The artisanal cheesemaking process can be quite extensive and resembles modern chemistry in many respects. Many different factors affect a finished artisanal cheese product; these include, but are not limited to, the species of grass consumed by the cattle that provide the milk, sudden changes in temperature, loss of cultivated yeast, and changes in barometric pressure. To an extent these factors differ from those facing large commercial cheesemakers, and they affect artisanal cheese more heavily. Popularity: In the last decade, the American artisanal cheese industry has seen more artisan creameries licensed for commercial business than in the twenty years prior. This translates to approximately 450 different artisan cheesemakers in the United States today. Three regions have come to lead the way in this category: New England, Wisconsin, and California. This rise in the popularity of artisan cheesemaking has also coincided with a rise in the number of dairy farms, while traditional cattle ranching has been declining. Legal concerns: In January 2014, Monica Metz, Branch Chief of the Food and Drug Administration's Center for Food Safety and Applied Nutrition's Dairy and Egg Branch, responded to a New York State Department of Agriculture request asking the FDA to clarify whether using wooden surfaces to age cheese was acceptable. In her response, Metz said the use of wooden surfaces to ripen cheese does not conform to the Current Good Manufacturing Practices, citing 21 CFR 110.40(a) to support her stance on the issue. Legal concerns: This statement caused concern among those involved in the artisanal cheesemaking process, as well as consumers who enjoy such cheeses. It was feared that this FDA position would not only harm local American cheesemakers but also affect imported cheeses made using the same practices.
Many groups, including the American Cheese Society, a nonprofit trade association which promotes and supports American cheeses, sent a letter on June 10, 2014, arguing against the FDA's stance on using wooden surfaces to ripen cheese. The American Cheese Society stressed its commitment to strict safety standards in the American cheesemaking process. Additionally, it commented on how such a ruling would affect the non-industrial cheese industry, limiting U.S. consumers' access to a multitude of different cheeses, whether made locally or abroad. Legal concerns: On June 11, 2014, the FDA issued an update regarding its earlier stance on the issue. In the update, the FDA stressed that it was not prohibiting or banning the use of wooden surfaces in the cheesemaking process. Furthermore, it stated that nothing in the Food Safety Modernization Act specifically addresses the use of wooden surfaces in the cheesemaking process. The FDA advised that it would reach out to and engage the artisanal cheesemaking community to resolve the issue.
**Ikeda map** Ikeda map: In physics and mathematics, the Ikeda map is a discrete-time dynamical system given by the complex map z_{n+1} = A + B z_n exp(i(|z_n|^2 + C)). The original map was first proposed by Kensuke Ikeda, in a more general form, as a model of light going around a nonlinear optical resonator (a ring cavity containing a nonlinear dielectric medium); it was reduced to the above simplified "normal" form by Ikeda, Daido and Akimoto. Here z_n stands for the electric field inside the resonator at the n-th step of rotation in the resonator, and A and C are parameters which indicate the laser light applied from the outside and the linear phase across the resonator, respectively. In particular, the parameter B ≤ 1 is called the dissipation parameter characterizing the loss of the resonator, and in the limit B = 1 the Ikeda map becomes a conservative map. Ikeda map: The original Ikeda map is often used in another modified form in order to take the saturation effect of the nonlinear dielectric medium into account: z_{n+1} = A + B z_n exp(i(C − K/(1 + |z_n|^2))), where the constant K sets the scale of the nonlinear phase shift. A 2D real example of the above form is x_{n+1} = 1 + u(x_n cos t_n − y_n sin t_n), y_{n+1} = u(x_n sin t_n + y_n cos t_n), where u is a parameter and t_n = 0.4 − 6/(1 + x_n^2 + y_n^2). For u ≥ 0.6, this system has a chaotic attractor. Attractor: This animation shows how the attractor of the system changes as the parameter u is varied from 0.0 to 1.0 in steps of 0.01. The Ikeda dynamical system is simulated for 500 steps, starting from 20000 randomly placed starting points. The last 20 points of each trajectory are plotted to depict the attractor. Note the bifurcation of attractor points as u is increased. Point trajectories: The plots below show trajectories of 200 random points for various values of u. The inset plot on the left shows an estimate of the attractor while the inset on the right shows a zoomed-in view of the main trajectory plot. Octave/MATLAB and Python code for point trajectories: the original listings that generate these plots are not reproduced here; a minimal sketch follows.
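The following is a minimal Python sketch of the point-trajectory computation described above (it is not the original Octave/MATLAB or Python listing). It assumes NumPy and Matplotlib are available; the value of u and the numbers of starting points and iterations are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def ikeda_trajectory(x0, y0, u, n_steps=1000):
    """Iterate the 2D real form of the Ikeda map from (x0, y0)."""
    xs = np.empty(n_steps + 1)
    ys = np.empty(n_steps + 1)
    xs[0], ys[0] = x0, y0
    for n in range(n_steps):
        t = 0.4 - 6.0 / (1.0 + xs[n] ** 2 + ys[n] ** 2)
        xs[n + 1] = 1.0 + u * (xs[n] * np.cos(t) - ys[n] * np.sin(t))
        ys[n + 1] = u * (xs[n] * np.sin(t) + ys[n] * np.cos(t))
    return xs, ys

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = 0.918  # illustrative value; a chaotic attractor appears for u >= 0.6
    for x0, y0 in rng.uniform(-10.0, 10.0, size=(200, 2)):
        xs, ys = ikeda_trajectory(x0, y0, u)
        plt.plot(xs, ys, ".", markersize=0.5, alpha=0.3)
    plt.title(f"Ikeda map trajectories, u = {u}")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.show()
```

Plotting only the last few points of each trajectory instead of the full paths would give an estimate of the attractor itself, as in the animation described above.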
**BIMx** BIMx: BIMx is a set of desktop and mobile software tools to interactively present the 3D model and 2D documentation of Building Information Models created with ArchiCAD through a much simpler and intuitive interface than ArchiCAD's complex BIM authoring environment's UI. 3D models with 2D drawing sheets exported to BIMx document format can be viewed with native viewer applications developed for Apple iOS, Android, Mac OS X, and Microsoft Windows operating systems. BIMx presents three dimensional building models in an interactive way similar to First-person shooter video games. Clients, consultants and builders can virtually walk through and make measurements in the 3D model without the need for installing ArchiCAD. The real-time cutaway function can help to discover the construction details of the displayed building model. 2D construction documentation can be accessed directly from the BIMx Hyper-model's 3D model views providing more detailed information about the building. BIMx authoring and viewer applications: The Graphisoft BIMx software suite consists of three different applications: the desktop publisher software, the viewer apps for desktop and mobile and the web viewer application: ArchiCAD (OS X / Windows): The BIM authoring tool, a commercial application, which can publish BIMx Hyper-models. BIMx authoring and viewer applications: BIMx App (iOS / Android): A free app that can be downloaded from the iTunes App Store for displaying 3D models on iPad, iPhone, or other Android-based smartphones or tablets. In-app purchase is also available at the free BIMx iOS app for buying PRO license to view hyper-models or a model-sharing license to share a specific model with any stakeholder. BIMx authoring and viewer applications: BIMx PRO App (iOS): A commercial application that can be purchased from the iTunes App Store for displaying the entire BIMx Hyper-model: not only the 3D model of the building, but also its two dimensional architectural documentation set as drawn and detailed with ARCHICAD. In-app purchase is available for buying a model-sharing license to share a specific model with any stakeholder. BIMx authoring and viewer applications: BIMx Desktop Viewer (OS X / Windows): A desktop application that can be freely downloaded to view BIMx models. (3D models only, as the desktop viewer cannot display the drawing sheets.) BIMx Web Viewer: A free web service to access BIMx Hyper-models in a browser without any installation. It is an integrated part of the GRAPHISOFT BIMx Model Transfer service. History: The core engine of BIMx was originally developed by a Swedish developer: Zermatt Virtual Reality Software. The original product was released as an add-on for ArchiCAD 9. Graphisoft acquired Zermatt in 2010 and released Virtual Building Explorer for ArchiCAD 13 and ArchiCAD 14. Virtual Building Explorer for ArchiCAD 15 was renamed to BIMx or BIM Explorer. BIMx Hyper-models: The BIMx Hyper-model concept provides easy access to drawing sheets (such as floor plans and sections) directly from the virtual environment generated from the 3D building models. The 2D drawing sheets of a BIMx Hyper-model can only be opened with either the paid BIMx PRO application or by an in-app purchase. There is also an in-app purchase available for sharing a Hyper-model with unlimited number of stakeholders. 
Awards: Construction Computing Awards 2013 — Mobile Technology of the Year Architizer A+ Awards 2016 - Product/Apps Construction Computing Awards 2018 - Mobile / Field Technology App of 2018 AIA ‘Best of Show’ awards - 2019 Reviews: BIMx Docs — AECbytes Product Review Top 10 Apps for Architects — archdaily.com
**X.121** X.121: X.121 is the ITU-T address format of the X.25 protocol suite used as part of call setup to establish a switched virtual circuit between Public Data Networks (PDNs), connecting two network user addresses (NUAs). It consists of a maximum of fourteen binary-coded decimal digits and is sent over the Packet Layer Protocol (PLP) after the packet type identifier (PTI). The address is made up of the international data number (IDN), which consists of two fields: the 4 digit data network identification code (DNIC) and the (up to) 10 digit national terminal number (NTN). X.121: The DNIC has three digits to identify the country (one to identify a zone and two to identify the country within the zone) and one to identify the PDN (allowing only ten in each country). The NTN identifies the exact network device (DTE, data terminal equipment) in the packet-switched network (PSN) and is often provided as an NUA. There are no rules to the structure of the NTN. X.121: IPv4 addresses can be mapped to X.121 as described in RFC 1236. The 14.0.0.0/8 block used to be reserved for X.121 use but was returned to IANA in 2008 to stave off IPv4 address exhaustion.
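As a concrete illustration of the address layout just described, here is a minimal Python sketch (a hypothetical helper, not part of any standard library) that splits an X.121 international data number into its DNIC and NTN fields; the example address is made up.

```python
def parse_x121(address: str) -> dict:
    """Split an X.121 international data number into its fields.

    Layout, per the description above:
      - DNIC: 4 digits = 1 zone digit + 2 country digits + 1 PDN digit
      - NTN:  the remaining digits, up to 10, with no mandated structure
    """
    if not address.isdigit() or not (4 <= len(address) <= 14):
        raise ValueError("an X.121 address is 4 to 14 decimal digits")
    dnic, ntn = address[:4], address[4:]
    return {
        "dnic": dnic,
        "zone": dnic[0],
        "country": dnic[1:3],
        "pdn": dnic[3],
        "ntn": ntn,
    }

# Illustrative 14-digit address: a 4-digit DNIC followed by a 10-digit NTN.
print(parse_x121("31102012345678"))
```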
**Guanine deaminase** Guanine deaminase: Guanine deaminase also known as cypin, guanase, guanine aminase, GAH, and guanine aminohydrolase is an aminohydrolase enzyme which converts guanine to xanthine. Cypin is a major cytosolic protein that interacts with PSD-95. It promotes localized microtubule assembly in neuronal dendrites.
**Metre** Metre: The metre (or meter in American spelling; symbol: m) is the base unit of length in the International System of Units (SI). Metre: The metre was originally defined in 1791 as one ten-millionth of the distance from the equator to the North Pole along a great circle, so the Earth's circumference is approximately 40000 km. In 1799, the metre was redefined in terms of a prototype metre bar. The actual bar used was changed in 1889. In 1960, the metre was redefined in terms of a certain number of wavelengths of a certain emission line of krypton-86. The current definition was adopted in 1983 and modified slightly in 2002 to clarify that the metre is a measure of proper length. From 1983 until 2019, the metre was formally defined as the length of the path travelled by light in a vacuum in 1/299792458 of a second. After the 2019 redefinition of the SI base units, this definition was rephrased to include the definition of a second in terms of the caesium frequency ΔνCs. Spelling: Metre is the standard spelling of the metric unit for length in nearly all English-speaking nations; the exceptions are the United States and the Philippines, which use meter. Other West Germanic languages, such as German and Dutch, and North Germanic languages, such as Danish, Norwegian, and Swedish, likewise spell the word Meter or meter. Measuring devices (such as ammeter, speedometer) are spelled "-meter" in all variants of English. The suffix "-meter" has the same Greek origin as the unit of length. Etymology: The etymological roots of metre can be traced to the Greek verb μετρέω (metreo) (to measure, count or compare) and noun μέτρον (metron) (a measure), which were used for physical measurement, for poetic metre and by extension for moderation or avoiding extremism (as in "be measured in your response"). This range of uses is also found in Latin (metior, mensura), French (mètre, mesure), English and other languages. The Greek word is derived from the Proto-Indo-European root *meh₁- 'to measure'. The motto ΜΕΤΡΩ ΧΡΩ (metro chro) in the seal of the International Bureau of Weights and Measures (BIPM), which was a saying of the Greek statesman and philosopher Pittacus of Mytilene and may be translated as "Use measure!", thus calls for both measurement and moderation. The use of the word metre (for the French unit mètre) in English began at least as early as 1797. History of definition: Pendulum or meridian In 1671, Jean Picard measured the length of a "seconds pendulum" and proposed a unit of measurement twice that length to be called the universal toise (French: Toise universelle). In 1675, Tito Livio Burattini suggested the term metre for a unit of length based on a pendulum length, but then it was discovered that the length of a seconds pendulum varies from place to place.Since Eratosthenes, geographers had used meridian arcs to assess the size of the Earth, which in 1669, Jean Picard determined to have a radius of 3269000 toises, treated as a simple sphere. In the 18th century, geodesy grew in importance as a means of empirically demonstrating the theory of gravity, which Émilie du Châtelet promoted in France in combination with Leibniz's mathematical work, and because the radius of the Earth was the unit to which all celestial distances were to be referred. History of definition: Meridional definition As a result of the Lumières and during the French Revolution, the French Academy of Sciences charged a commission with determining a single scale for all measures. 
On 7 October 1790 that commission advised the adoption of a decimal system, and on 19 March 1791 advised the adoption of the term mètre ("measure"), a basic unit of length, which they defined as equal to one ten-millionth of the quarter meridian, the distance between the North Pole and the Equator along the meridian through Paris. On 26 March 1791, the French National Constituent Assembly adopted the proposal.The French Academy of Sciences commissioned an expedition led by Jean Baptiste Joseph Delambre and Pierre Méchain, lasting from 1792 to 1799, which attempted to accurately measure the distance between a belfry in Dunkirk and Montjuïc castle in Barcelona at the longitude of the Paris Panthéon (see meridian arc of Delambre and Méchain). The expedition was fictionalised in Denis Guedj, Le Mètre du Monde. Ken Alder wrote factually about the expedition in The Measure of All Things: the seven year odyssey and hidden error that transformed the world.This portion of the Paris meridian was to serve as the basis for the length of the half meridian connecting the North Pole with the Equator. From 1801 to 1812 France adopted this definition of the metre as its official unit of length based on results from this expedition combined with those of the Geodesic Mission to Peru. The latter was related by Larrie D. Ferreiro in Measure of the Earth: The Enlightenment Expedition That Reshaped Our World.In the 19th century, geodesy underwent a revolution through advances in mathematics as well as improvements in the instruments and methods of observation, for instance accounting for individual bias in terms of the personal equation. The application of the least squares method to meridian arc measurements demonstrated the importance of the scientific method in geodesy. On the other hand, the invention of the telegraph made it possible to measure parallel arcs, and the improvement of the reversible pendulum gave rise to the study of the Earth's gravitational field. A more accurate determination of the Figure of the Earth would soon result from the measurement of the Struve Geodetic Arc (1816–1855) and would have given another value for the definition of this standard of length. This did not invalidate the metre but highlighted that progress in science would allow better measurement of Earth's size and shape.In 1832, Carl Friedrich Gauss studied the Earth's magnetic field and proposed adding the second to the basic units of the metre and the kilogram in the form of the CGS system (centimetre, gram, second). In 1836, he founded the Magnetischer Verein, the first international scientific association, in collaboration with Alexander von Humboldt and Wilhelm Edouard Weber. The coordination of the observation of geophysical phenomena such as the Earth's magnetic field, lightning and gravity in different points of the globe stimulated the creation of the first international scientific associations. The foundation of the Magnetischer Verein was followed by that of the Central European Arc Measurement (German: Mitteleuropaïsche Gradmessung) on the initiative of Johann Jacob Baeyer in 1863, and by that of the International Meteorological Organisation whose second president, the Swiss meteorologist and physicist, Heinrich von Wild represented Russia at the International Committee for Weights and Measures (CIPM). History of definition: International prototype metre bar The influence of the intellect transcends mountains and leaps across oceans. 
At the time when George Washington warned his fellow countrymen against entangling political alliances with European countries, there was started a movement of far reaching importance in a small country in the heart of the Alps which (as we shall see) exerted a silent, yet potent scientific influence upon the young republic on the eastern shores of North America. In 1816, Ferdinand Rudolph Hassler was appointed first Superintendent of the Survey of the Coast. Trained in geodesy in Switzerland, France and Germany, Hassler had brought a standard metre made in Paris to the United States in 1805. He designed a baseline apparatus which instead of bringing different bars in actual contact during measurements, used only one bar calibrated on the metre and optical contact. Thus the metre became the unit of length for geodesy in the United States.Since 1830, Hassler was also head of the Bureau of Weights and Measures which became a part of the Coast Survey. He compared various units of length used in the United States at that time and measured coefficients of expansion to assess temperature effects on the measurements.In 1841, Friedrich Wilhelm Bessel, taking into account errors which had been recognized by Louis Puissant in the French meridian arc comprising the arc measurement of Delambre and Méchain which had been extended southward by François Arago and Jean-Baptiste Biot, recalculated the flattening of the Earth ellipsoid making use of nine more arc measurements, namely Peruan, Prussian, first East-Indian, second East-Indian, English, Hannover, Danish, Russian and Swedish covering almost 50 degrees of latitude, and stated that the Earth quadrant used for determining the length of the metre was nothing more than a rather imprecise conversion factor between the toise and the metre.Regarding the precision of the conversion from the toise to the metre, both units of measurement were then defined by primary standards, and unique artifacts made of different alloys with distinct coefficients of expansion were the legal basis of units of length. A wrought iron ruler, the Toise of Peru, also called Toise de l'Académie, was the French primary standard of the toise, and the metre was officially defined by the Mètre des Archives made of platinum. Besides the latter, another platinum and twelve iron standards of the metre were made in 1799.One of them became known as the Committee Meter in the United States and served as standard of length in the Coast Survey until 1890. According to geodesists, these standards were secondary standards deduced from the Toise of Peru. In Europe, surveyors continued to use measuring instruments calibrated on the Toise of Peru. Among these, the toise of Bessel and the apparatus of Borda were respectively the main references for geodesy in Prussia and in France. A French scientific instrument maker, Jean Nicolas Fortin, had made two direct copies of the Toise of Peru, the first for Friedrich Georg Wilhelm von Struve in 1821 and a second for Friedrich Bessel in 1823.On the subject of the theoretical definition of the metre, it had been inaccessible and misleading at the time of Delambre and Mechain arc measurement, as the geoid is a ball, which on the whole can be assimilated to an oblate spheroid, but which in detail differs from it so as to prohibit any generalization and any extrapolation. 
As early as 1861, after Friedrich von Schubert showed that the different meridians were not of equal length, Elie Ritter, a mathematician from Geneva, deduced from a computation based on eleven meridian arcs covering 86 degrees that the meridian equation differed from that of the ellipse: the meridian was swelled about the 45th degree of latitude by a layer whose thickness was difficult to estimate because of the uncertainty of the latitude of some stations, in particular that of Montjuïc in the French meridian arc. By measuring the latitude of two stations in Barcelona, Méchain had found that the difference between these latitudes was greater than predicted by direct measurement of distance by triangulation. We know now that, in addition to other errors in the survey of Delambre and Méchain, an unfavourable vertical deflection gave an inaccurate determination of Barcelona's latitude, a metre "too short" compared to a more general definition taken from the average of a large number of arcs.Nevertheless Ferdinand Rudolph Hassler's use of the metre in coastal survey contributed to the introduction of the Metric Act of 1866 allowing the use of the metre in the United States, and also played an important role in the choice of the metre as international scientific unit of length and the proposal by the European Arc Measurement (German: Europäische Gradmessung) to "establish a European international bureau for weights and measures". However, in 1866, the most important concern was that the Toise of Peru, the standard of the toise constructed in 1735 for the French Geodesic Mission to the Equator, might be so much damaged that comparison with it would be worthless, while Bessel had questioned the accuracy of copies of this standard belonging to Altona and Koenigsberg Observatories, which he had compared to each other about 1840. Indeed when the primary Imperial yard standard was partially destroyed in 1834, a new standard of reference had been constructed using copies of the "Standard Yard, 1760" instead of the pendulum's length as provided for in the Weights and Measures Act of 1824.In 1864, Urbain Le Verrier refused to join the first general conference of the Central European Arc Measurement because the French geodetic works had to be verified. History of definition: In 1866, at the meeting of the Permanent Commission of the association in Neuchâtel, Antoine Yvon Villarceau announced that he had checked eight points of the French arc. He confirmed that the metre was too short. It then became urgent to undertake a complete revision of the meridian arc. Moreover, while the extension of the French meridian arc to the Balearic Islands (1803–1807) had seemed to confirm the length of the metre, this survey had not been secured by any baseline in Spain. For that reason, Carlos Ibáñez e Ibáñez de Ibero's announcement, at this conference, of his 1858 measurement of a baseline in Madridejos was of particular importance. Indeed surveyors determined the size of triangulation networks by measuring baselines which concordance granted the accuracy of the whole survey.In 1867 at the second general conference of the International Association of Geodesy held in Berlin, the question of an international standard unit of length was discussed in order to combine the measurements made in different countries to determine the size and shape of the Earth. 
The conference recommended the adoption of the metre in replacement of the toise and the creation of an international metre commission, according to the proposal of Johann Jacob Baeyer, Adolphe Hirsch and Carlos Ibáñez e Ibáñez de Ibero who had devised two geodetic standards calibrated on the metre for the map of Spain.Ibáñez adopted the system which Ferdinand Rudolph Hassler used for the United States Survey of the Coast, consisting of a single standard with lines marked on the bar and microscopic measurements. Regarding the two methods by which the effect of temperature was taken into account, Ibáñez used both the bimetallic rulers, in platinum and brass, which he first employed for the central baseline of Spain, and the simple iron ruler with inlaid mercury thermometers which was utilized in Switzerland. These devices, the first of which is referred to as either Brunner apparatus or Spanish Standard, were constructed in France by Jean Brunner, then his sons. Measurement traceability between the toise and the metre was ensured by comparison of the Spanish Standard with the standard devised by Borda and Lavoisier for the survey of the meridian arc connecting Dunkirk with Barcelona.Hassler's metrological and geodetic work also had a favourable response in Russia. In 1869, the Saint Petersburg Academy of Sciences sent to the French Academy of Sciences a report drafted by Otto Wilhelm von Struve, Heinrich von Wild and Moritz von Jacobi inviting his French counterpart to undertake joint action to ensure the universal use of the metric system in all scientific work. History of definition: In the 1870s and in light of modern precision, a series of international conferences was held to devise new metric standards. When a conflict broke out regarding the presence of impurities in the metre-alloy of 1874, a member of the Preparatory Committee since 1870 and Spanish representative at the Paris Conference in 1875, Carlos Ibáñez e Ibáñez de Ibero intervened with the French Academy of Sciences to rally France to the project to create an International Bureau of Weights and Measures equipped with the scientific means necessary to redefine the units of the metric system according to the progress of sciences.The Metre Convention (Convention du Mètre) of 1875 mandated the establishment of a permanent International Bureau of Weights and Measures (BIPM: Bureau International des Poids et Mesures) to be located in Sèvres, France. This new organisation was to construct and preserve a prototype metre bar, distribute national metric prototypes, and maintain comparisons between them and non-metric measurement standards. The organisation distributed such bars in 1889 at the first General Conference on Weights and Measures (CGPM: Conférence Générale des Poids et Mesures), establishing the International Prototype Metre as the distance between two lines on a standard bar composed of an alloy of 90% platinum and 10% iridium, measured at the melting point of ice. History of definition: The comparison of the new prototypes of the metre with each other and with the Committee metre (French: Mètre des Archives) involved the development of special measuring equipment and the definition of a reproducible temperature scale. The BIPM's thermometry work led to the discovery of special alloys of iron-nickel, in particular invar, for which its director, the Swiss physicist Charles-Edouard Guillaume, was granted the Nobel Prize for physics in 1920. 
History of definition: As Carlos Ibáñez e Ibáñez de Ibero stated, the progress of metrology combined with those of gravimetry through improvement of Kater's pendulum led to a new era of geodesy. If precision metrology had needed the help of geodesy, the latter could not continue to prosper without the help of metrology. It was then necessary to define a single unit to express all the measurements of terrestrial arcs and all determinations of the force of gravity by the mean of pendulum. Metrology had to create a common unit, adopted and respected by all civilized nations.Moreover, at that time, statisticians knew that scientific observations are marred by two distinct types of errors, constant errors on the one hand, and fortuitous errors, on the other hand. The effects of the latter can be mitigated by the least-squares method. Constant or regular errors on the contrary must be carefully avoided, because they arise from one or more causes that constantly act in the same way and have the effect of always altering the result of the experiment in the same direction. They therefore deprive of any value the observations that they impinge. However, the distinction between systematic and random errors is far from being as sharp as one might think at first assessment. In reality, there are no or very few random errors. As science progresses, the causes of certain errors are sought out, studied, their laws discovered. These errors pass from the class of random errors into that of systematic errors. The ability of the observer consists in discovering the greatest possible number of systematic errors in order to be able, once he has become acquainted with their laws, to free his results from them using a method or appropriate corrections.For metrology the matter of expansibility was fundamental; as a matter of fact the temperature measuring error related to the length measurement in proportion to the expansibility of the standard and the constantly renewed efforts of metrologists to protect their measuring instruments against the interfering influence of temperature revealed clearly the importance they attached to the expansion-induced errors. It was thus crucial to compare at controlled temperatures with great precision and to the same unit all the standards for measuring geodetic baselines and all the pendulum rods. Only when this series of metrological comparisons would be finished with a probable error of a thousandth of a millimetre would geodesy be able to link the works of the different nations with one another, and then proclaim the result of the measurement of the Globe.As the figure of the Earth could be inferred from variations of the seconds pendulum length with latitude, the United States Coast Survey instructed Charles Sanders Peirce in the spring of 1875 to proceed to Europe for the purpose of making pendulum experiments to chief initial stations for operations of this sort, in order to bring the determinations of the forces of gravity in America into communication with those of other parts of the world; and also for the purpose of making a careful study of the methods of pursuing these researches in the different countries of Europe. In 1886 the association of geodesy changed name for the International Geodetic Association, which Carlos Ibáñez e Ibáñez de Ibero presided up to his death in 1891. 
During this period the International Geodetic Association (German: Internationale Erdmessung) gained worldwide importance with the joining of United States, Mexico, Chile, Argentina, and Japan. History of definition: Efforts to supplement the various national surveying systems, which began in the 19th century with the foundation of the Mitteleuropäische Gradmessung, resulted in a series of global ellipsoids of the Earth (e.g., Helmert 1906, Hayford 1910 and 1924) which would later lead to develop the World Geodetic System. Nowadays the practical realisation of the metre is possible everywhere thanks to the atomic clocks embedded in GPS satellites. History of definition: Wavelength definition In 1873, James Clerk Maxwell suggested that light emitted by an element be used as the standard both for the metre and for the second. These two quantities could then be used to define the unit of mass.In 1893, the standard metre was first measured with an interferometer by Albert A. Michelson, the inventor of the device and an advocate of using some particular wavelength of light as a standard of length. By 1925, interferometry was in regular use at the BIPM. However, the International Prototype Metre remained the standard until 1960, when the eleventh CGPM defined the metre in the new International System of Units (SI) as equal to 1650763.73 wavelengths of the orange-red emission line in the electromagnetic spectrum of the krypton-86 atom in a vacuum. History of definition: Speed of light definition To further reduce uncertainty, the 17th CGPM in 1983 replaced the definition of the metre with its current definition, thus fixing the length of the metre in terms of the second and the speed of light: The metre is the length of the path travelled by light in vacuum during a time interval of 1/299792458 of a second.This definition fixed the speed of light in vacuum at exactly 299792458 metres per second (≈300000 km/s or ≈1.079 billion km/hour). An intended by-product of the 17th CGPM's definition was that it enabled scientists to compare lasers accurately using frequency, resulting in wavelengths with one-fifth the uncertainty involved in the direct comparison of wavelengths, because interferometer errors were eliminated. To further facilitate reproducibility from lab to lab, the 17th CGPM also made the iodine-stabilised helium–neon laser "a recommended radiation" for realising the metre. For the purpose of delineating the metre, the BIPM currently considers the HeNe laser wavelength, λHeNe, to be 632.99121258 nm with an estimated relative standard uncertainty (U) of 2.1×10−11.This uncertainty is currently one limiting factor in laboratory realisations of the metre, and it is several orders of magnitude poorer than that of the second, based upon the caesium fountain atomic clock (U = 5×10−16). Consequently, a realisation of the metre is usually delineated (not defined) today in labs as 1579800.762042(33) wavelengths of helium-neon laser light in a vacuum, the error stated being only that of frequency determination. This bracket notation expressing the error is explained in the article on measurement uncertainty. History of definition: Practical realisation of the metre is subject to uncertainties in characterising the medium, to various uncertainties of interferometry, and to uncertainties in measuring the frequency of the source. 
A commonly used medium is air, and the National Institute of Standards and Technology (NIST) has set up an online calculator to convert wavelengths in vacuum to wavelengths in air. As described by NIST, in air, the uncertainties in characterising the medium are dominated by errors in measuring temperature and pressure. Errors in the theoretical formulas used are secondary. By implementing a refractive index correction such as this, an approximate realisation of the metre can be implemented in air, for example, using the formulation of the metre as 1579800.762042(33) wavelengths of helium–neon laser light in a vacuum, and converting the wavelengths in a vacuum to wavelengths in air. Air is only one possible medium to use in a realisation of the metre, and any partial vacuum can be used, or some inert atmosphere like helium gas, provided the appropriate corrections for refractive index are implemented. The metre is defined as the path length travelled by light in a given time, and practical laboratory length measurements in metres are determined by counting the number of wavelengths of laser light of one of the standard types that fit into the length, and converting the selected unit of wavelength to metres. Three major factors limit the accuracy attainable with laser interferometers for a length measurement: uncertainty in vacuum wavelength of the source, uncertainty in the refractive index of the medium, least count resolution of the interferometer. Of these, the last is peculiar to the interferometer itself. The conversion of a length in wavelengths to a length in metres is based upon the relation λ = c/(nf), which converts the unit of wavelength λ to metres using c, the speed of light in vacuum in m/s. Here n is the refractive index of the medium in which the measurement is made, and f is the measured frequency of the source. Although conversion from wavelengths to metres introduces an additional error in the overall length due to measurement error in determining the refractive index and the frequency, the measurement of frequency is one of the most accurate measurements available. The CIPM issued a clarification in 2002: Its definition, therefore, applies only within a spatial extent sufficiently small that the effects of the non-uniformity of the gravitational field can be ignored (note that, at the surface of the Earth, this effect in the vertical direction is about 1 part in 10^16 per metre). In this case, the effects to be taken into account are those of special relativity only. Early adoptions of the metre internationally: In France, the metre was adopted as an exclusive measure in 1801 under the Consulate. This continued under the First French Empire until 1812, when Napoleon decreed the introduction of the non-decimal mesures usuelles, which remained in use in France up to 1840 in the reign of Louis Philippe. Meanwhile, the metre was adopted by the Republic of Geneva. After the joining of the canton of Geneva to Switzerland in 1815, Guillaume Henri Dufour published the first official Swiss map, for which the metre was adopted as the unit of length. Louis Napoléon Bonaparte, a Swiss–French binational officer, was present when a baseline was measured near Zürich for the Dufour map, which would win the gold medal for a national map at the Exposition Universelle of 1855.
Among the scientific instruments calibrated on the metre that were displayed at the Exposition Universelle was Brunner's apparatus, a geodetic instrument devised for measuring the central baseline of Spain, whose designer, Carlos Ibáñez e Ibáñez de Ibero, would represent Spain at the International Statistical Institute. In 1855, in addition to the Exposition Universelle and the second Statistical Congress held in Paris, an International Association for Obtaining a Uniform Decimal System of Measures, Weights, and Coins was created there. Copies of the Spanish standard were made for Egypt, France and Germany. These standards were compared to each other and with the Borda apparatus, which was the main reference for measuring all geodetic bases in France. In 1869, Napoleon III convened the International Metre Commission, which met in Paris in 1870. The Franco-Prussian War broke out, the Second French Empire collapsed, but the metre survived. Early adoptions of the metre internationally: Metre adoption dates by country: France: 1801–1812, then 1840; Republic of Geneva, Switzerland: 1813; Kingdom of the Netherlands: 1820; Kingdom of Belgium: 1830; Chile: 1848; Kingdom of Sardinia, Italy: 1850; Spain: 1852; Portugal: 1852; Colombia: 1853; Ecuador: 1856; Mexico: 1857; Brazil: 1862; Argentina: 1863; Italy: 1863; German Empire, Germany: 1872; Austria: 1875; Switzerland: 1877. SI prefixed forms of metre: SI prefixes can be used to denote decimal multiples and submultiples of the metre, as shown in the table below. Long distances are usually expressed in km, astronomical units (149.6 Gm), light-years (10 Pm), or parsecs (31 Pm), rather than in Mm, Gm, Tm, Pm, Em, Zm or Ym; "30 cm", "30 m", and "300 m" are more common than "3 dm", "3 dam", and "3 hm", respectively. SI prefixed forms of metre: The terms micron and millimicron have been used instead of micrometre (μm) and nanometre (nm), respectively, but this practice is discouraged. Equivalents in other units: Within this table, "inch" and "yard" mean "international inch" and "international yard" respectively, though approximate conversions in the left column hold for both international and survey units. "≈" means "is approximately equal to"; "=" means "is exactly equal to". One metre is exactly equivalent to 5 000/127 inches and to 1 250/1 143 yards. Equivalents in other units: A simple mnemonic aid exists to assist with conversion, as three "3"s: 1 metre is nearly equivalent to 3 feet 3+3⁄8 inches. This gives an overestimate of 0.125 mm; however, the practice of memorising such conversion formulas has been discouraged in favour of practice and visualisation of metric units. The ancient Egyptian cubit was about 0.5 m (surviving rods are 523–529 mm). Scottish and English definitions of the ell (two cubits) were 941 mm (0.941 m) and 1143 mm (1.143 m) respectively. The ancient Parisian toise (fathom) was slightly shorter than 2 m and was standardised at exactly 2 m in the mesures usuelles system, such that 1 m was exactly 1⁄2 toise. The Russian verst was 1.0668 km. The Swedish mil was 10.688 km, but was changed to 10 km when Sweden converted to metric units.
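The conversions quoted above are easy to check numerically. The following plain-Python sketch verifies the exact inch and yard equivalents, the "three 3s" mnemonic, and the helium–neon wavelength count used in laboratory realisations, using only figures given in this article (the exact definitions 1 inch = 25.4 mm and 1 yard = 0.9144 m are assumed).

```python
from fractions import Fraction

# Exact equivalents: the international inch is 25.4 mm, the yard 0.9144 m.
inch = Fraction(254, 10_000)              # metres per inch
yard = Fraction(9_144, 10_000)            # metres per yard
print(1 / inch == Fraction(5_000, 127))   # True: 1 m = 5000/127 inches
print(1 / yard == Fraction(1_250, 1_143)) # True: 1 m = 1250/1143 yards

# Mnemonic "3 feet 3 3/8 inches" overestimates the metre by about 0.125 mm.
mnemonic = (3 * 12 + 3 + Fraction(3, 8)) * inch
print(float(mnemonic - 1) * 1000)         # 0.125 (mm)

# Laboratory realisation: 1579800.762042 wavelengths of 632.99121258 nm
# helium-neon light in vacuum should come out at very nearly one metre.
n_wavelengths = 1_579_800.762042
lambda_vacuum = 632.99121258e-9           # metres
print(n_wavelengths * lambda_vacuum)      # ~1.000000000 m

# The corresponding frequency follows from lambda = c / f (n = 1 in vacuum).
c = 299_792_458                           # m/s, exact by definition
print(c / lambda_vacuum)                  # ~4.736e14 Hz
```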
**Telik** Telik: Telik, Inc. was set up in 1988 and was reverse merged into privately held MabVax in May 2014. The major drug of the company was TELINTRA, an investigational agent that was in development for the treatment of myelodysplastic syndrome (MDS) and idiopathic chronic neutropenia. Controversies: In 2007, a class action was filed against Telik, Inc., alleging that they made false and misleading statements about the Company’s business and prospects during the Class Period.
**Dual segmented Langmuir probe** Dual segmented Langmuir probe: The Dual Segmented Langmuir Probe (DSLP) is an instrument developed primarily by Czech researchers and engineers to study the magnetospheric background plasma; it is flown on board Proba 2, a spacecraft of the European Space Agency (ESA). Data acquired by DSLP will be used to reach these specific scientific goals: Directional Measurements: Contrary to classical Langmuir probes, the new DSLP concept of data acquisition from independent segments will also enable the study of plasma characteristics in different directions. This should provide, for example, estimates of plasma flow velocity. Typically, in the presence of a magnetic field, electron temperatures are observed to be slightly different in the directions parallel and perpendicular to the magnetic field lines. This temperature anisotropy should be measured with DSLP by way of directional data acquisition. Dual segmented Langmuir probe: Non-Maxwellian Features in Ionospheric Plasma: Classical theories for LPs are typically developed for plasmas in thermodynamic equilibrium, that is, for particle populations possessing Maxwellian velocity distribution functions. However, thermodynamic equilibrium, and thus a Maxwellian distribution, is an idealized case, while the real distribution in many plasma environments often exhibits various non-Maxwellian features, like loss-cone or flat-top distributions or high-energy tails. The team intends to adapt the DSLP theoretical model in order to see whether such features also exist in ionospheric plasmas. Dual segmented Langmuir probe: Ionospheric Irregularities: The ionosphere, especially in the equatorial region, exhibits several phenomena such as the equatorial ionization anomaly, as well as ionospheric perturbations in the auroral and cusp regions. The latitudinal distribution of these anomalies should be mapped during the whole mission. The effects are also highly dependent on space weather, that is, on magnetospheric forces induced by solar, interplanetary and magnetospheric disturbances. Hence coordination with measurements from LYRA and SWAP (other Proba 2 payloads) would also be useful to find a correlation between particular solar events and ionospheric disturbances. Dual segmented Langmuir probe: Ionospheric Perturbations by Solar Events (CMEs): This scientific objective will rely on cooperation with the LYRA and SWAP experiments and furthermore enhance the sphere of interest. If possible, a detected solar event should trigger a DSLP burst measurement when the event affects the Earth. Dual segmented Langmuir probe: Mapping Bulk Plasma Parameters: All acquired DSLP data will be used to map the bulk plasma parameters (primarily electron density and temperature) and to study their latitudinal and seasonal variations. The DSLP instrument consists of two Langmuir probes, electronics and a small data processing unit. DSLP shares some interface, power and processing resources with the TPMU experiment. DSLP has been developed on the basis of its predecessor ISL (Instrument Sonde de Langmuir), flown on the Demeter mission of CNES. Dual segmented Langmuir probe: DSLP was developed by a consortium of the Astronomical Institute and the Institute of Atmospheric Physics, Academy of Sciences of the Czech Republic, Prague, Czech Republic; the Research and Scientific Support Department (RSSD), ESA ESTEC, Noordwijk, The Netherlands; the Czech Space Research Centre (CSRC), Brno, Czech Republic; and SPRINX Systems, Prague, Czech Republic. The team has been led by the Principal Investigator Pavel Trávníček.
**PPP2R5D** PPP2R5D: Serine/threonine-protein phosphatase 2A 56 kDa regulatory subunit delta isoform is an enzyme that in humans is encoded by the PPP2R5D gene. Mutations in PPP2R5D cause Jordan's Syndrome. Function: The product of this gene belongs to the phosphatase 2A regulatory subunit B family. Protein phosphatase 2A is one of the four major Ser/Thr phosphatases, and it is implicated in the negative control of cell growth and division. It consists of a common heteromeric core enzyme, which is composed of a catalytic subunit and a constant regulatory subunit, that associates with a variety of regulatory subunits. The B regulatory subunit might modulate substrate selectivity and catalytic activity. This gene encodes a delta isoform of the regulatory subunit B56 subfamily. Alternatively spliced transcript variants encoding different isoforms have been identified. Interactions: PPP2R5D has been shown to interact with: HAND2, PPP2CA, PPP2R1B, and liprin-alpha-1.
**Diacope** Diacope: Diacope ( dy-AK-ə-pee) is a rhetorical term meaning repetition of a word or phrase that is broken up by a single intervening word, or a small number of intervening words. It derives from a Greek word diakopḗ, which means "cut in two". Examples: "Bond. James Bond." — James Bond "Put out the light, and then put out the light." — Shakespeare, Othello, Act V, scene 2. Examples: "A horse! a horse! my kingdom for a horse! — Richard III "You think you own whatever land you land on" — Second verse from the song "Colors of the Wind" from the movie Pocahontas Leo Marks's poem "The Life That I Have", memorably used in the film Odette, is an extended example of diacope:The life that I have Is all that I have And the life that I have Is yours.The love that I have Of the life that I have Is yours and yours and yours.A sleep I shall have A rest I shall have Yet death will be but a pause.For the peace of my years In the long green grass Will be yours and yours and yours.The first line in the poem not to deploy diacope is the one about death being "a pause." "In times like these, it helps to recall that there have always been times like these." — Paul Harvey. This is also an example of an epanalepsis.
**MIR660** MIR660: MicroRNA 660 is a miRNA that in humans is encoded by the MIR660 gene. Function: microRNAs (miRNAs) are short (20-24 nt) non-coding RNAs that are involved in post-transcriptional regulation of gene expression in multicellular organisms by affecting both the stability and translation of mRNAs. miRNAs are transcribed by RNA polymerase II as part of capped and polyadenylated primary transcripts (pri-miRNAs) that can be either protein-coding or non-coding. The primary transcript is cleaved by the Drosha ribonuclease III enzyme to produce an approximately 70-nt stem-loop precursor miRNA (pre-miRNA), which is further cleaved by the cytoplasmic Dicer ribonuclease to generate the mature miRNA and antisense miRNA star (miRNA*) products. The mature miRNA is incorporated into a RNA-induced silencing complex (RISC), which recognizes target mRNAs through imperfect base pairing with the miRNA and most commonly results in translational inhibition or destabilization of the target mRNA. The RefSeq represents the predicted microRNA stem-loop.
**Desktop Architect** Desktop Architect: Desktop Architect is a third-party replacement for the Desktop Themes control panel in Windows 95, 98, ME and 2000. It is also compatible with Windows XP and Vista, although in Vista the startup sound does not work and the Network Neighborhood icon has to be changed manually. It is not known at this time if this program works with Windows 7. Desktop Architect: On Windows 8.1 (Pro 64-bit) the fonts may be garbled to the point that the system is rendered unusable. Changing the desktop colors (window borders, 3D object bevels, scrollbars, etc.) does work, but the problem with text display is hard to correct, since all of Windows' and its applications' screens are affected, and even rebooting or rolling back to a previous system restore point may be difficult. Caution is advised; since Desktop Architect allows the user to select which changes to apply, making small changes and testing the results in steps, such as marking only the Colors checkbox, is recommended. Features: Appearance Appearance allows users to customize the Windows Classic theme by changing the color of various objects, such as scrollbars, active and inactive windows, the menu bar, message boxes, window borders, window frames, selected items, font colors, 3D objects, and a few other things as well. Wallpaper Users can change the desktop wallpaper image. Sounds System sounds can be customized and changed. Default sounds can be removed, or can be changed to a different sound file. When browsing for sound files, users can preview the sound. Icons Users can change the system icons, or restore default icons. These include folder icons, printers, My Documents, My Computer, Recycle Bin, Network Neighborhood, and more. It can also import and install third-party icon packages. Pointers The mouse pointers can be changed with custom cursor files, or restored to the default cursors. Animated cursors can also be used. Screen Savers Screen savers can be changed or imported, and saved with the theme file. Wizards Desktop Architect has two wizards: the Theme Package Wizard and the Theme Install Wizard. The Theme Package Wizard compresses the theme into a zip file for distribution. The Theme Install Wizard imports and installs themes downloaded from another source. Theme Scheduler This feature allows users to have themes applied automatically at a specific time of year. Requirements: Desktop Architect requires version 4.2 or higher of comctl32.dll, which is located in the computer's system folder. It also requires 64 MB of RAM and 16-bit color graphics at 1024×768 resolution.
**FOSDIC** FOSDIC: FOSDIC (Film Optical Sensing Device for Input to Computers) is a family of optical scanners for converting data on microfilm to computer-readable magnetic tape. FOSDIC was designed and built by the United States National Bureau of Standards for use by the United States Census Bureau and other government agencies. Although the Census Bureau entered the computer age with the introduction of UNIVAC I in 1951, its data processing speed was hampered by the continued reliance upon punched cards. Transferring questionnaire data to punch cards that UNIVAC "read" and stored on magnetic tape was a time-consuming process that had remained relatively unchanged since the late nineteenth century. To take advantage of UNIVAC's speed, National Bureau of Standards scientists and Census Bureau engineers began development of FOSDIC. Completed in 1954, the first generation of FOSDIC read the position of pencil-filled circles on questionnaires and translated the responses to computer code stored on magnetic computer tape. FOSDIC: The Census Bureau first used FOSDIC to process a decennial census in 1960. Enumerators transferred data collected on questionnaires to a "FOSDIC-readable schedule" on which questionnaire responses were recorded as pencilled-in circles. At the Census Bureau, technicians used extremely sensitive photography equipment to convert these forms into microfilm. In 1970 and later censuses, all questionnaires were FOSDIC readable, eliminating the need to have enumerators transfer data from questionnaires to FOSDIC schedules. FOSDIC: The shaded circles appeared as light dots on the microfilm. When the microfilm passed through the Census Bureau's new fleet of FOSDIC III machines (FOSDIC II had been designed for the Weather Bureau), they read the placement of the bright marks on the microfilm and translated them into computer code. The Census Bureau used updated versions of FOSDIC for the 1970, 1980, and 1990 censuses. FOSDIC proved so successful that it was not replaced until the introduction of optical character recognition for the 2000 Census. Technology: A series of systems were developed for use in the 1960, 1970, 1980 and 1990 U.S. censuses. The first system, delivered in 1954, used vacuum tubes and analog processing. Later versions used software control with a PDP-11 minicomputer. FOSDIC used a flying-spot scanner to detect marks on forms that had previously been photographed on microfilm. Other applications included digitizing unemployment data, EPA pollutant charts, and NOAA underwater current meter records. The U.S. Postal Research Laboratory used a surplus FOSDIC system to make high-resolution images of dead-letter mail to create a database for evaluating character-recognition techniques. The FOSDIC system was also used by the National Archives to digitize the images of Army enlistment records on punched cards that were stored on 1,586 rolls of microfilm. Sources: This article incorporates text from a free content work in the public domain as a work of the U.S. Government, taken from the U.S. Census Bureau.
**Autoregressive integrated moving average** Autoregressive integrated moving average: In statistics and econometrics, and in particular in time series analysis, an autoregressive integrated moving average (ARIMA) model is a generalization of an autoregressive moving average (ARMA) model. To better comprehend the data or to forecast upcoming series points, both of these models are fitted to time series data. ARIMA models are applied in some cases where data show evidence of non-stationarity in the sense of the mean (but not the variance/autocovariance), where an initial differencing step (corresponding to the "integrated" part of the model) can be applied one or more times to eliminate the non-stationarity of the mean function (i.e., the trend). When seasonality is present in a time series, seasonal differencing can be applied to eliminate the seasonal component. Since the ARMA model, according to Wold's decomposition theorem, is theoretically sufficient to describe a regular (a.k.a. purely nondeterministic) wide-sense stationary time series, we are motivated to transform a non-stationary time series into a stationary one, e.g., by differencing, before we can use the ARMA model. Note that if the time series contains a predictable sub-process (a.k.a. a pure sine or complex-valued exponential process), the predictable component is treated as a non-zero-mean but periodic (i.e., seasonal) component in the ARIMA framework, so that it is eliminated by the seasonal differencing. Autoregressive integrated moving average: The AR part of ARIMA indicates that the evolving variable of interest is regressed on its own lagged (i.e., prior) values. The MA part indicates that the regression error is actually a linear combination of error terms whose values occurred contemporaneously and at various times in the past. The I (for "integrated") indicates that the data values have been replaced with the difference between their values and the previous values (and this differencing process may have been performed more than once). The purpose of each of these features is to make the model fit the data as well as possible. Autoregressive integrated moving average: Non-seasonal ARIMA models are generally denoted ARIMA(p,d,q) where parameters p, d, and q are non-negative integers, p is the order (number of time lags) of the autoregressive model, d is the degree of differencing (the number of times the data have had past values subtracted), and q is the order of the moving-average model. Seasonal ARIMA models are usually denoted ARIMA(p,d,q)(P,D,Q)m, where m refers to the number of periods in each season, and the uppercase P, D, Q refer to the autoregressive, differencing, and moving average terms for the seasonal part of the ARIMA model. When two out of the three terms are zeros, the model may be referred to based on the non-zero parameter, dropping "AR", "I" or "MA" from the acronym describing the model. For example, ARIMA(1,0,0) is AR(1), ARIMA(0,1,0) is I(1), and ARIMA(0,0,1) is MA(1). Autoregressive integrated moving average: ARIMA models can be estimated following the Box–Jenkins approach. Definition: Given time series data X_t where t is an integer index and the X_t are real numbers, an ARMA(p′,q) model is given by X_t − α_1 X_{t−1} − ⋯ − α_{p′} X_{t−p′} = ε_t + θ_1 ε_{t−1} + ⋯ + θ_q ε_{t−q}, or equivalently by (1 − Σ_{i=1}^{p′} α_i L^i) X_t = (1 + Σ_{i=1}^{q} θ_i L^i) ε_t, where L is the lag operator, the α_i are the parameters of the autoregressive part of the model, the θ_i are the parameters of the moving average part and the ε_t are error terms.
The error terms ε_t are generally assumed to be independent, identically distributed variables sampled from a normal distribution with zero mean. Definition: Assume now that the polynomial (1 − Σ_{i=1}^{p′} α_i L^i) has a unit root (a factor (1 − L)) of multiplicity d. Then it can be rewritten as: (1 − Σ_{i=1}^{p′} α_i L^i) = (1 − Σ_{i=1}^{p′−d} φ_i L^i)(1 − L)^d. Definition: An ARIMA(p,d,q) process expresses this polynomial factorisation property with p = p′ − d, and is given by: (1 − Σ_{i=1}^{p} φ_i L^i)(1 − L)^d X_t = (1 + Σ_{i=1}^{q} θ_i L^i) ε_t, and thus can be thought of as a particular case of an ARMA(p+d,q) process whose autoregressive polynomial has d unit roots. (For this reason, no process that is accurately described by an ARIMA model with d > 0 is wide-sense stationary.) The above can be generalized as follows. Definition: (1 − Σ_{i=1}^{p} φ_i L^i)(1 − L)^d X_t = δ + (1 + Σ_{i=1}^{q} θ_i L^i) ε_t. This defines an ARIMA(p,d,q) process with drift δ/(1 − Σ φ_i). Other special forms: The explicit identification of the factorization of the autoregression polynomial into factors as above can be extended to other cases, firstly to apply to the moving average polynomial and secondly to include other special factors. For example, having a factor (1 − L^s) in a model is one way of including a non-stationary seasonality of period s into the model; this factor has the effect of re-expressing the data as changes from s periods ago. Another example is the factor (1 − √3·L + L²), which includes a (non-stationary) seasonality of period 12. The effect of the first type of factor is to allow each season's value to drift separately over time, whereas with the second type values for adjacent seasons move together. Identification and specification of appropriate factors in an ARIMA model can be an important step in modeling, as it can allow a reduction in the overall number of parameters to be estimated while allowing the imposition on the model of types of behavior that logic and experience suggest should be there. Differencing: A stationary time series's properties do not depend on the time at which the series is observed. Specifically, for a wide-sense stationary time series, the mean and the variance/autocovariance remain constant over time. Differencing in statistics is a transformation applied to a non-stationary time series in order to make it stationary in the mean sense (viz., to remove the non-constant trend); it does not address non-stationarity of the variance or autocovariance. Likewise, seasonal differencing is applied to a seasonal time series to remove the seasonal component. From the perspective of signal processing, especially the Fourier spectral analysis theory, the trend is the low-frequency part of the spectrum of a non-stationary time series, while the season is the periodic-frequency part of it. Therefore, differencing works as a high-pass (i.e., low-stop) filter and seasonal differencing as a comb filter, suppressing the low-frequency trend and the periodic-frequency season in the spectral domain (rather than directly in the time domain), respectively. To difference the data, the difference between consecutive observations is computed.
Mathematically, this is shown as y′_t = y_t − y_{t−1}. Differencing removes the changes in the level of a time series, eliminating trend and seasonality and consequently stabilizing the mean of the time series. Sometimes it may be necessary to difference the data a second time to obtain a stationary time series, which is referred to as second-order differencing: y*_t = y′_t − y′_{t−1} = (y_t − y_{t−1}) − (y_{t−1} − y_{t−2}) = y_t − 2y_{t−1} + y_{t−2}. Another method of differencing data is seasonal differencing, which involves computing the difference between an observation and the corresponding observation in the previous season, e.g. a year. This is shown as y′_t = y_t − y_{t−m}, where m is the duration of the season. Differencing: The differenced data are then used for the estimation of an ARMA model. Examples: Some well-known special cases arise naturally or are mathematically equivalent to other popular forecasting models. For example: An ARIMA(0, 1, 0) model (or I(1) model) is given by X_t = X_{t−1} + ε_t — which is simply a random walk. An ARIMA(0, 1, 0) with a constant, given by X_t = c + X_{t−1} + ε_t — which is a random walk with drift. An ARIMA(0, 0, 0) model is a white noise model. An ARIMA(0, 1, 2) model is a damped Holt's model. An ARIMA(0, 1, 1) model without constant is a basic exponential smoothing model. An ARIMA(0, 2, 2) model is given by X_t = 2X_{t−1} − X_{t−2} + (α + β − 2)ε_{t−1} + (1 − α)ε_{t−2} + ε_t — which is equivalent to Holt's linear method with additive errors, or double exponential smoothing. Choosing the order: The orders p and q can be determined using the sample autocorrelation function (ACF), partial autocorrelation function (PACF), and/or extended autocorrelation function (EACF) method. Other alternative methods include AIC, BIC, etc. To determine the order of a non-seasonal ARIMA model, a useful criterion is the Akaike information criterion (AIC). It is written as AIC = −2 log(L) + 2(p + q + k), where L is the likelihood of the data, p is the order of the autoregressive part and q is the order of the moving average part. The k represents the intercept of the ARIMA model: for AIC, if k = 1 then there is an intercept in the ARIMA model (c ≠ 0) and if k = 0 then there is no intercept in the ARIMA model (c = 0). Choosing the order: The corrected AIC for ARIMA models can be written as AICc = AIC + 2(p + q + k)(p + q + k + 1)/(T − p − q − k − 1). The Bayesian information criterion (BIC) can be written as BIC = AIC + (log(T) − 2)(p + q + k). Choosing the order: The objective is to minimize the AIC, AICc or BIC values for a good model. The lower the value of one of these criteria for a range of models being investigated, the better the model will suit the data. The AIC and the BIC are used for two completely different purposes. While the AIC tries to approximate models towards the reality of the situation, the BIC attempts to find the perfect fit. The BIC approach is often criticized as there never is a perfect fit to real-life complex data; however, it is still a useful method for selection as it penalizes models more heavily for having more parameters than the AIC would. Choosing the order: AICc can only be used to compare ARIMA models with the same orders of differencing. For ARIMAs with different orders of differencing, RMSE can be used for model comparison. Forecasts using ARIMA models: The ARIMA model can be viewed as a "cascade" of two models. The first is non-stationary: Y_t = (1 − L)^d X_t, while the second is wide-sense stationary: (1 − Σ_{i=1}^{p} φ_i L^i) Y_t = (1 + Σ_{i=1}^{q} θ_i L^i) ε_t. Now forecasts can be made for the process Y_t, using a generalization of the method of autoregressive forecasting.
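As a brief illustration of differencing, order selection by AIC, and forecasting in practice, here is a minimal sketch; the simulated random-walk series, the candidate orders, and the use of the Python statsmodels package are illustrative assumptions and not part of the original text.

```python
# Minimal sketch: differencing a series and fitting ARIMA(p,d,q) with statsmodels.
# The simulated data and the chosen orders are illustrative assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))        # random walk: non-stationary in the mean

y_diff = np.diff(y)                        # first difference  y'_t = y_t - y_{t-1}
y_diff2 = np.diff(y, n=2)                  # second-order difference y*_t

# Fit two candidate models and compare their AIC values.
fit_011 = ARIMA(y, order=(0, 1, 1)).fit()  # basic exponential smoothing analogue
fit_110 = ARIMA(y, order=(1, 1, 0)).fit()
print("AIC(0,1,1) =", fit_011.aic)
print("AIC(1,1,0) =", fit_110.aic)

# Forecast 10 steps ahead from the model with the lower AIC.
best = fit_011 if fit_011.aic < fit_110.aic else fit_110
print(best.forecast(steps=10))
```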
Forecast intervals The forecast intervals (confidence intervals for forecasts) for ARIMA models are based on assumptions that the residuals are uncorrelated and normally distributed. If either of these assumptions does not hold, then the forecast intervals may be incorrect. For this reason, researchers plot the ACF and histogram of the residuals to check the assumptions before producing forecast intervals. The 95% forecast interval is ŷ_{T+h|T} ± 1.96·√(v_{T+h|T}), where v_{T+h|T} is the variance of y_{T+h} given y_1, …, y_T. For h = 1, v_{T+h|T} = σ̂² for all ARIMA models regardless of parameters and orders. For ARIMA(0,0,q), y_t = e_t + Σ_{i=1}^{q} θ_i e_{t−i}, and v_{T+h|T} = σ̂²(1 + θ_1² + ⋯ + θ_{h−1}²) for h = 2, 3, … In general, forecast intervals from ARIMA models will increase as the forecast horizon increases. Variations and extensions: A number of variations on the ARIMA model are commonly employed. If multiple time series are used then the X_t can be thought of as vectors and a VARIMA model may be appropriate. Sometimes a seasonal effect is suspected in the model; in that case, it is generally considered better to use a SARIMA (seasonal ARIMA) model than to increase the order of the AR or MA parts of the model. If the time series is suspected to exhibit long-range dependence, then the d parameter may be allowed to have non-integer values in an autoregressive fractionally integrated moving average model, which is also called a Fractional ARIMA (FARIMA or ARFIMA) model. Software implementations: Various packages that apply methodology like Box–Jenkins parameter optimization are available to find the right parameters for the ARIMA model. EViews: has extensive ARIMA and SARIMA capabilities. Julia: contains an ARIMA implementation in the TimeModels package. Mathematica: includes the ARIMAProcess function. MATLAB: the Econometrics Toolbox includes ARIMA models and regression with ARIMA errors. NCSS: includes several procedures for ARIMA fitting and forecasting. Python: the "statsmodels" package includes models for time series analysis – univariate time series analysis: AR, ARIMA – vector autoregressive models, VAR and structural VAR – descriptive statistics and process models for time series analysis. Software implementations: R: the standard R stats package includes an arima function, which is documented in "ARIMA Modelling of Time Series". Besides the ARIMA(p,d,q) part, the function also includes seasonal factors, an intercept term, and exogenous variables (xreg, called "external regressors"). The CRAN task view on Time Series is the reference with many more links. The "forecast" package in R can automatically select an ARIMA model for a given time series with the auto.arima() function and can also simulate seasonal and non-seasonal ARIMA models with its simulate.Arima() function. Software implementations: Ruby: the "statsample-timeseries" gem is used for time series analysis, including ARIMA models and Kalman filtering. JavaScript: the "arima" package includes models for time series analysis and forecasting (ARIMA, SARIMA, SARIMAX, AutoARIMA). C: the "ctsa" package includes ARIMA, SARIMA, SARIMAX, AutoARIMA and multiple methods for time series analysis. SAFE TOOLBOXES: includes ARIMA modelling and regression with ARIMA errors. SAS: includes extensive ARIMA processing in its Econometric and Time Series Analysis system: SAS/ETS. Software implementations: IBM SPSS: includes ARIMA modeling in the Professional and Premium editions of its Statistics package as well as its Modeler package.
The default Expert Modeler feature evaluates a range of seasonal and non-seasonal autoregressive (p), integrated (d), and moving average (q) settings and seven exponential smoothing models. The Expert Modeler can also transform the target time-series data into its square root or natural log. The user also has the option to restrict the Expert Modeler to ARIMA models, or to manually enter ARIMA nonseasonal and seasonal p, d, and q settings without Expert Modeler. Automatic outlier detection is available for seven types of outliers, and the detected outliers will be accommodated in the time-series model if this feature is selected. Software implementations: SAP: the APO-FCS package in SAP ERP from SAP allows creation and fitting of ARIMA models using the Box–Jenkins methodology. SQL Server Analysis Services: from Microsoft includes ARIMA as a Data Mining algorithm. Stata includes ARIMA modelling (using its arima command) as of Stata 9. StatSim: includes ARIMA models in the Forecast web app. Teradata Vantage has the ARIMA function as part of its machine learning engine. TOL (Time Oriented Language) is designed to model ARIMA models (including SARIMA, ARIMAX and DSARIMAX variants) [1]. Scala: spark-timeseries library contains ARIMA implementation for Scala, Java and Python. Implementation is designed to run on Apache Spark. PostgreSQL/MadLib: Time Series Analysis/ARIMA. X-12-ARIMA: from the US Bureau of the Census
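In the spirit of the automatic order-selection tools listed above (auto.arima, the SPSS Expert Modeler), a small AIC-driven grid search can be sketched as follows; the search ranges, the synthetic data, and the use of statsmodels are illustrative assumptions rather than any vendor's actual implementation.

```python
# Minimal sketch of AIC-based order selection plus forecast intervals,
# loosely analogous to auto.arima; ranges and data are illustrative assumptions.
import itertools
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=300))

best_aic, best_order = np.inf, None
for p, d, q in itertools.product(range(3), range(2), range(3)):
    try:
        res = ARIMA(y, order=(p, d, q)).fit()
    except Exception:
        continue                      # skip orders that fail to estimate
    if res.aic < best_aic:
        best_aic, best_order = res.aic, (p, d, q)

print("selected order:", best_order, "AIC:", best_aic)

# 95% forecast intervals widen with the horizon, as described above.
res = ARIMA(y, order=best_order).fit()
fc = res.get_forecast(steps=5)
print(fc.predicted_mean)
print(fc.conf_int(alpha=0.05))        # lower/upper 95% bounds per horizon
```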
**Gypsum recycling** Gypsum recycling: Gypsum recycling is the process of turning gypsum waste (from construction) into recycled gypsum, thereby generating a raw material that can replace virgin gypsum raw materials in the manufacturing of new products. Gypsum waste definition and types: Gypsum waste primarily consists of waste from gypsum boards, which are wall or ceiling panels made of a gypsum core between paper lining. Such boards are also referred to as sheetrock, plasterboards, drywall, wallboards and gyprock. Gypsum waste in some countries also consists of gypsum blocks and plaster, among others. Three main types of gypsum waste can be distinguished based on their origin: Gypsum waste from the manufacturing of gypsum products. Gypsum waste definition and types: This waste, which arises at the industrial gypsum production sites, consists of rejects and non-spec materials generated during the manufacturing of gypsum products. The recycling of this waste stream is usually part of the waste avoidance activity of the gypsum plants. The waste is referred to as gypsum manufacturing or production waste, and the recycled gypsum obtained from recycling it is known as “production waste derived recycled gypsum”. Gypsum waste definition and types: Gypsum waste from new construction activities is typically a clean waste, and primarily consists of off-cuts of plasterboard (drywall, wallboard or gyprock) left when the boards have been cut to fit the dimensions of the wall or ceiling. The waste may constitute 15% of the gypsum materials used on the site. Such waste is generally referred to as new construction gypsum waste, and can be reduced by ordering boards “made-to-measure”, but in most markets less than 10% of all orders are “made-to-measure”. Gypsum waste definition and types: Gypsum waste from demolition and reconstruction This waste arises when already installed plasterboards (drywalls, wallboards or gyprock boards), usually installed many years earlier, are taken out when the building is demolished or renovated. For this reason some refer to this waste as “old gypsum waste”, whereas the trade usually refers to it as “demolition waste”. Unlike the two other types of gypsum waste described above, this type of gypsum waste from renovation, refurbishment and demolition works is more likely to present a certain degree of contamination, which can be in the form of nails, screws, wood, insulation, wall coverings etc. For this waste to be recyclable, the equipment processing it must be capable of separating such contamination from the gypsum to arrive at a pure recycled gypsum. Both new construction and demolition gypsum waste arise after the gypsum products have left the manufacturing sites, and together these two waste types are referred to as post-consumer gypsum waste. The recycled gypsum obtained from them is known as post-consumer recycled gypsum. Gypsum recycling process: Gypsum waste can be turned into recycled gypsum by processing the gypsum waste in such a way that the contaminants are removed and the paper facing of the plasterboard is separated from the gypsum core through mechanical processes including grinding and sieving in specialised equipment. Gypsum waste such as gypsum blocks and plaster does not require the removal of paper, as these products are not made with paper in the first place. Gypsum recycling process: It is typical for the gypsum recyclers to accept up to 3 per cent of contamination from other materials.
The professional recyclers are capable of handling gypsum waste with nails, screws, wall coverings etc. Why should gypsum waste be recycled?: Gypsum materials consist of calcium sulfate dihydrate (CaSO4·2H2O). Sulfate-reducing bacteria convert sulfates to toxic hydrogen sulfide gas; they are killed by exposure to air, but the moist, airless, carbon-containing environment in a landfill is a good habitat for them. So gypsum put into landfill will decompose, releasing up to a quarter of its weight in hydrogen sulfide. Moreover, methanogenic bacteria also thrive in such an environment, and convert the paper in the plasterboard to methane gas, which is a potent greenhouse gas. Recycling gypsum waste also reduces the need for the quarrying and production of virgin gypsum raw materials. Why should gypsum waste be recycled?: Recycling one ton of ordinary gypsum will save 1,000 pounds of black alkali, 1 ton of lactic acid and 500 kWh of energy. Recycling one metric ton of gypsum will save 28 kWh of energy and 4 pounds of aluminium. Rationale for choosing closed loop recycling: Gypsum is fully and eternally recyclable and, as a consequence, gypsum waste is one of the few construction materials for which closed loop recycling is possible. Rationale for choosing closed loop recycling: Closed loop recycling of gypsum products involves the collection and processing of the gypsum waste, and the delivery of the obtained recycled gypsum to the manufacturer of gypsum products. It is therefore essential that the recycled gypsum achieves a pre-determined quality suitable for the manufacturing of new gypsum products. Presently there is no European or American standard pre-determining the recycled gypsum's quality, and the criteria vary from plant to plant. Rationale for choosing closed loop recycling: By choosing closed loop recycling, the need for manufacturers to acquire virgin gypsum is reduced, contributing to a more sustainable manufacturing process. The most advanced plants, most of which are found in the Nordic countries in Europe, have substituted up to 30 per cent of virgin gypsum raw materials with recycled gypsum. Gypsum recycling in Europe: Gypsum recycling in Europe was started by the Danish company Gypsum Recycling International A/S in Denmark, in 2001. After a few years the recycling system received waste from approximately 85 per cent of all public civic amenity/recycling centres, and a recycling rate of 60 per cent of all gypsum waste was achieved. The system has been exported to other European countries. Gypsum recycling in Europe: Today new recyclers have also emerged and gypsum recycling systems have been introduced in more countries, such as the UK, France and the Benelux, but the highest recycling rates for gypsum waste are still found in Denmark, Norway and Sweden. On January 1, 2013, the European Life+ project “Gypsum to Gypsum” started, with the overall aim of transforming the gypsum demolition waste market to achieve higher recycling rates of gypsum waste, thereby helping to achieve a resource-efficient economy. One of the drivers for the project is the European Union target that 70 per cent of construction and demolition waste be recycled by 2020. Gypsum recycling in North America: Urban Gypsum Recycling Urban Gypsum is a division of Laneco, Inc. and provides gypsum wallboard recycling services for the Pacific Northwest of the United States.
This recovered gypsum is then distributed to agricultural and industrial customers in the region, keeping the wallboard from ending up in the landfill. Gypsum recycling in North America: New West Gypsum Recycling began recycling of wallboard waste in Canada in 1985. The recycled material is a blend of pre- and post-consumer, wet and dry gypsum waste that is a source of raw material for use in the manufacture of new drywall products. Gypsum Agri-cycle, a North American recycler of new construction drywall located in Pennsylvania, is one of the first companies to recycle drywall in the USA. Pennsylvania does not allow Gypsum Agri-cycle to recycle demolition drywall. Gypsum recycling in North America: Zanker Recycling began recycling gypsum in the form of sheetrock in 1999. In the recycling process, materials such as wood, metals, and trash are removed on-site, where a dozer is used to crush the materials. American Gypsum Recycling American Gypsum Recycling was founded in 2018 by Chris Stapleton. His vision for the company is to transform the Northwest drywall waste stream into a valuable product for agriculture and industry. USA Gypsum, located in Denver, PA, provides both closed loop recycling and up-cycling of reclaimed gypsum into higher-value gypsum products such as agricultural gypsum.
**Double overhand knot** Double overhand knot: The double overhand knot or barrel knot is simply an extension of the regular overhand knot, made with one additional pass. The result is slightly larger and more difficult to untie. It forms the first part of the surgeon's knot and both sides of a double fisherman's knot. According to The Ashley Book of Knots, "A double overhand knot tied in a cat-o'-nine-tails is termed a blood knot." When weighted, it can be difficult to untie, especially when wet. The strangle knot is a rearranged double overhand knot made around an object. It is sometimes used to secure items to posts. Instructions for tying: Tie an overhand knot at the end of a rope but do not tighten the knot down. Pass the end of the line through the loop created by the first overhand knot. Instructions for tying: Tighten the knot down while sliding it into place at the end of the line. Be sure to leave some tail sticking out from the end of the knot. Alternatively, the working end of the rope can be wrapped around the standing end twice, and then passed through both resulting loops. Both methods result in the same knot, though the latter is easier to dress in the compact finished form. Instructions for tying: With either method, more loops can be included to make a longer multiple overhand knot (which is also known as a barrel knot or blood knot).
**Blade battery** Blade battery: The Blade battery is a type of lithium iron phosphate (LFP) battery for electric vehicles designed and manufactured by FinDreams Battery, a subsidiary of the Chinese manufacturing company BYD. The Blade Battery is a single-cell battery, most commonly 96 centimetres (37.8 in) long and 9 centimetres (3.5 in) wide, with a special design that allows cells to be placed in an array and inserted into a battery pack like blades. It is made in various lengths and thicknesses. The space utilization of the battery pack is increased by over 50% compared to most conventional lithium iron phosphate block batteries. The driving range of some electric vehicles equipped with the Blade Battery can reach more than 600 kilometres (373 mi). Blade battery: In the nail penetration test, a stringent safety test in the battery industry, the Blade Battery emitted no smoke or fire after being penetrated, and its surface temperature reached only 30 to 60 °C (86 to 140 °F). It is currently the only power battery in the world that can safely pass the test. In addition, it successfully passed an extreme safety test in which it was rolled over by a 46-tonne heavy-duty truck. The Blade Battery also passed other extreme test conditions, such as being crushed, bent, heated in a furnace to 300 °C (572 °F) and overcharged by 260%. None of these resulted in a fire or explosion. Blade battery: The Blade Battery was officially launched by BYD in 2020. Compared with ternary lithium batteries and traditional lithium iron phosphate batteries, it holds notable advantages in safety, range, longevity, strength and power. To address users' concerns about the safety of EV power batteries, BYD has used only the Blade Battery in all its pure electric passenger vehicles since July 2021. Safety controversies: BYD claims that "EVs equipped with the Blade Battery would be far less susceptible to catching fire – even when they are severely damaged." However, in July 2021, a BYD Han EV with Blade batteries was crash-tested in China in a car-to-car test against an Arcfox Alpha-S. About 48 hours after the test, the Arcfox Alpha-S was unaffected, while the BYD Han caught fire and burned to the ground. On November 15, 2021, a BYD Tang EV (with Blade batteries) caught fire in a workshop in Kristiansand, Norway.
**Jordan–Pólya number** Jordan–Pólya number: In mathematics, the Jordan–Pólya numbers are the numbers that can be obtained by multiplying together one or more factorials, not required to be distinct from each other. For instance, 480 is a Jordan–Pólya number because 480 = 2!⋅2!⋅5!. Every tree has a number of symmetries that is a Jordan–Pólya number, and every Jordan–Pólya number arises in this way as the order of an automorphism group of a tree. These numbers are named after Camille Jordan and George Pólya, who both wrote about them in the context of symmetries of trees. These numbers grow more quickly than polynomials but more slowly than exponentials. As well as in the symmetries of trees, they arise as the numbers of transitive orientations of comparability graphs and in the problem of finding factorials that can be represented as products of smaller factorials. Sequence and growth rate: The sequence of Jordan–Pólya numbers begins 1, 2, 4, 6, 8, 12, 16, 24, 32, 36, 48, … They form the smallest multiplicatively closed set containing all of the factorials. The nth Jordan–Pólya number grows more quickly than any polynomial of n, but more slowly than any exponential function of n. More precisely, for every ε > 0, and every sufficiently large x (depending on ε), the number J(x) of Jordan–Pólya numbers up to x obeys the inequalities Factorials that are products of smaller factorials: Every Jordan–Pólya number n, except 2, has the property that its factorial n! can be written as a product of smaller factorials. This can be done simply by expanding n! = n⋅(n−1)! and then replacing n in this product by its representation as a product of factorials. It is conjectured, but unproven, that the only numbers n whose factorial n! equals a product of smaller factorials are the Jordan–Pólya numbers (except 2) and the two exceptional numbers 9 and 10, for which 9! = 2!⋅3!⋅3!⋅7! and 10! = 6!⋅7! = 3!⋅5!⋅7!. The only other known representation of a factorial as a product of smaller factorials, not obtained by replacing n in the product expansion of n!, is 16! = 14!⋅5!⋅2!, but as 16 is itself a Jordan–Pólya number, it also has the representation 16! = 2!⋅2!⋅2!⋅2!⋅15!
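A direct way to test membership in this set is to try dividing out factorials recursively; the following short sketch is illustrative (the function name and recursion strategy are not from the original text).

```python
# Minimal sketch: test whether n is a Jordan-Pólya number by recursively
# dividing out factorials >= 2!. Function name and approach are illustrative.
from math import factorial

def is_jordan_polya(n: int) -> bool:
    if n == 1:
        return True          # the empty product
    k = 2
    while factorial(k) <= n:
        f = factorial(k)
        if n % f == 0 and is_jordan_polya(n // f):
            return True
        k += 1
    return False

# 480 = 2! * 2! * 5! is a Jordan-Pólya number; 7 is not.
print(is_jordan_polya(480), is_jordan_polya(7))   # True False
```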
**ElGamal encryption** ElGamal encryption: In cryptography, the ElGamal encryption system is an asymmetric key encryption algorithm for public-key cryptography which is based on the Diffie–Hellman key exchange. It was described by Taher Elgamal in 1985. ElGamal encryption is used in the free GNU Privacy Guard software, recent versions of PGP, and other cryptosystems. The Digital Signature Algorithm (DSA) is a variant of the ElGamal signature scheme, which should not be confused with ElGamal encryption. ElGamal encryption: ElGamal encryption can be defined over any cyclic group G, such as the multiplicative group of integers modulo n. Its security depends upon the difficulty of a certain problem in G related to computing discrete logarithms. The algorithm: The algorithm can be described as first performing a Diffie–Hellman key exchange to establish a shared secret s, then using this as a one-time pad for encrypting the message. ElGamal encryption is performed in three phases: the key generation, the encryption, and the decryption. The first is purely key exchange, whereas the latter two mix key exchange computations with message computations. The algorithm: Key generation The first party, Alice, generates a key pair as follows: Generate an efficient description of a cyclic group G of order q with generator g. Let e represent the identity element of G. It is not necessary to come up with a group and generator anew for each new key. Indeed, one may expect a specific implementation of ElGamal to be hardcoded to use a specific group, or a group from a specific suite. The choice of group mostly determines how large the keys are. The algorithm: Choose an integer x randomly from {1, …, q−1}. Compute h := g^x. The public key consists of the values (G, q, g, h). Alice publishes this public key and retains x as her private key, which must be kept secret. Encryption A second party, Bob, encrypts a message M to Alice under her public key (G, q, g, h) as follows: Map the message M to an element m of G using a reversible mapping function. Choose an integer y randomly from {1, …, q−1}. Compute s := h^y. This is called the shared secret. The algorithm: Compute c_1 := g^y. Compute c_2 := m⋅s. Bob sends the ciphertext (c_1, c_2) to Alice. Note that if one knows both the ciphertext (c_1, c_2) and the plaintext m, one can easily find the shared secret s, since c_2⋅m^{−1} = s. Therefore, a new y and hence a new s is generated for every message to improve security. For this reason, y is also called an ephemeral key. The algorithm: Decryption Alice decrypts a ciphertext (c_1, c_2) with her private key x as follows: Compute s := c_1^x. Since c_1 = g^y, c_1^x = g^{xy} = h^y, and thus it is the same shared secret that was used by Bob in encryption. The algorithm: Compute s^{−1}, the inverse of s in the group G. This can be computed in one of several ways. If G is a subgroup of a multiplicative group of integers modulo n, where n is prime, the modular multiplicative inverse can be computed using the extended Euclidean algorithm. An alternative is to compute s^{−1} as c_1^{q−x}. This is the inverse of s because of Lagrange's theorem, since s⋅c_1^{q−x} = g^{xy}⋅g^{(q−x)y} = (g^q)^y = e^y = e. Compute m := c_2⋅s^{−1}. This calculation produces the original message m, because c_2 = m⋅s; hence c_2⋅s^{−1} = (m⋅s)⋅s^{−1} = m⋅e = m. Map m back to the plaintext message M. Practical use Like most public key systems, the ElGamal cryptosystem is usually used as part of a hybrid cryptosystem, where the message itself is encrypted using a symmetric cryptosystem, and ElGamal is then used to encrypt only the symmetric key.
This is because asymmetric cryptosystems like ElGamal are usually slower than symmetric ones for the same level of security, so it is faster to encrypt the message, which can be arbitrarily large, with a symmetric cipher, and then use ElGamal only to encrypt the symmetric key, which usually is quite small compared to the size of the message. Security: The security of the ElGamal scheme depends on the properties of the underlying group G as well as any padding scheme used on the messages. If the computational Diffie–Hellman assumption (CDH) holds in the underlying cyclic group G, then the encryption function is one-way. If the decisional Diffie–Hellman assumption (DDH) holds in G, then ElGamal achieves semantic security. Semantic security is not implied by the computational Diffie–Hellman assumption alone. See Decisional Diffie–Hellman assumption for a discussion of groups where the assumption is believed to hold. Security: ElGamal encryption is unconditionally malleable, and therefore is not secure under chosen ciphertext attack. For example, given an encryption (c_1, c_2) of some (possibly unknown) message m, one can easily construct a valid encryption (c_1, 2c_2) of the message 2m. To achieve chosen-ciphertext security, the scheme must be further modified, or an appropriate padding scheme must be used. Depending on the modification, the DDH assumption may or may not be necessary. Security: Other schemes related to ElGamal which achieve security against chosen ciphertext attacks have also been proposed. The Cramer–Shoup cryptosystem is secure under chosen ciphertext attack assuming DDH holds for G. Its proof does not use the random oracle model. Another proposed scheme is DHIES, whose proof requires an assumption that is stronger than the DDH assumption. Efficiency: ElGamal encryption is probabilistic, meaning that a single plaintext can be encrypted to many possible ciphertexts, with the consequence that a general ElGamal encryption produces a 1:2 expansion in size from plaintext to ciphertext. Encryption under ElGamal requires two exponentiations; however, these exponentiations are independent of the message and can be computed ahead of time if needed. Decryption requires one exponentiation and one computation of a group inverse, which can, however, be easily combined into just one exponentiation.
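As a toy illustration of the key generation, encryption and decryption steps described above, here is a minimal sketch over the multiplicative group modulo a small prime; the tiny parameters p = 23 and g = 5 are purely illustrative and offer no security.

```python
# Toy ElGamal over the multiplicative group mod a small prime.
# Parameters are illustrative only and far too small to be secure.
import random

p = 23          # small prime; the group is Z_p^* of order q = p - 1
q = p - 1
g = 5           # a generator of Z_p^*

# Key generation (Alice)
x = random.randrange(1, q)      # private key
h = pow(g, x, p)                # public key component h = g^x

# Encryption (Bob) of a message m already mapped into the group
m = 14
y = random.randrange(1, q)      # ephemeral key
c1 = pow(g, y, p)               # c1 = g^y
s = pow(h, y, p)                # shared secret s = h^y
c2 = (m * s) % p                # c2 = m * s

# Decryption (Alice)
s_dec = pow(c1, x, p)           # same shared secret, c1^x = g^{xy}
m_dec = (c2 * pow(s_dec, -1, p)) % p   # multiply by s^{-1} mod p
assert m_dec == m
print(c1, c2, m_dec)
```

The malleability mentioned in the Security section is visible directly in this sketch: replacing c2 with (2 * c2) % p yields a ciphertext that decrypts to (2 * m) % p.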
**Blend time** Blend time: Blend time, sometimes termed mixing time, is the time to achieve a predefined level of homogeneity of a tracer in a mixing vessel. Blend time is an important parameter to evaluate the mixing efficiency of mixing devices. In order to make this definition valid, the tracer should be in the same physical phase (e.g. liquid) as the bulk material. Blend time can be determined either with experiments or numerical modeling, such as computational fluid dynamics (CFD). The experimental methods to determine the blend time in liquids include the conductivity method and the discoloration method. The conductivity method requires a conductivity probe to be present in the target system, which makes it an intrusive method because the existence of the probe might change the mixing efficiency of the mixing device. The discoloration method does not require any probe, which makes it a non-intrusive method. However, the color detection device (sometimes the human eye) needs to be calibrated against the conductivity method. Both methods are usually applied to monitor the concentration of the tracer in the most difficult-to-mix locations, such as the area adjacent to the impeller shaft. Blend time: The benefit of numerical modeling is that once the modeling is completed, the blend time for any predetermined level of homogeneity at any location within the mixing system can be predicted, which is impossible to accomplish by experimental methods. However, numerical modeling needs to be validated against experimental methods.
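As a small illustration of how a blend time is read off a measured tracer signal, here is a sketch under assumptions that are not in the original text: a 95% homogeneity criterion (deviation within ±5% of the final value) and a synthetic first-order mixing response standing in for a real probe trace.

```python
# Minimal sketch: estimate blend time as the first instant after which the
# normalized tracer concentration stays within +/-5% of its final value.
# The 95% criterion and the synthetic exponential response are assumptions.
import numpy as np

t = np.linspace(0.0, 60.0, 601)                 # seconds
c_final = 1.0
c = c_final * (1.0 - np.exp(-t / 8.0))          # idealized probe reading

def blend_time(t, c, c_final, tol=0.05):
    dev = np.abs(c - c_final) / c_final
    inside = dev <= tol
    # first index from which the signal never leaves the tolerance band again
    for i in range(len(t)):
        if inside[i:].all():
            return t[i]
    return None

print("t95 ~", blend_time(t, c, c_final), "s")  # about 8*ln(20) ~ 24 s
```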
**Millwork** Millwork: Millwork is, historically, any decorative material produced in a wood mill and used in building construction. Stock profiled and patterned millwork building components fabricated by milling at a planing mill can usually be installed with minimal alteration. Today, millwork may encompass items that are made using alternatives to wood, including synthetics, plastics, and wood-adhesive composites. Often specified by architects and designers, millwork products are considered a design element within a room or on a building to create a mood or design theme. Millwork products are used in both interior and exterior applications and can serve as either decorative or functional features of a building. Historical context: Woodworking skills originally formed around wood carving, carpentry, parquetry, and cabinet making in ancient China. Historically, the term millwork applied to building elements made specifically from wood. During the "Golden Age" of mill working (1880–1910), virtually everything in the house was made from wood. During this time, the millwork produced in the United States became standardized nationwide. Today, the increase in the use of synthetic materials has led many professionals to consider any item that is composed of a combination of wood and synthetic elements to also be properly defined as millwork. This includes products that make use of pressed-wood chips in the design, such as melamine-coated shelving. Specifics: Millwork building materials include the ready-made carpentry elements usually installed in any building. Many of the specific features in a space are created using different types of architectural millwork: doors, windows, transoms, sidelights, molding, trim, stair parts, and cabinetry, to name just a few. Millwork items today are most often produced from softwood or hardwood lumber. Other materials used in millwork products include MDF (medium density fiberboard), finger-jointed wood, composite materials, particle board and fiberglass. Some millwork products like doors, windows and stair parts now incorporate the use of steel, stainless steel, aluminum, and glass components. Specifics: Most wood products used for millwork require decorative finish coatings. These finishes include stain and semi-transparent finishes or paint. The finishes protect the wood from decay, warping, splitting, and fading. Millwork building materials can usually be installed with little or no modification as part of the construction process. Fabrication: There are two types of manufacturers of millwork goods. In one, referred to as "stock millwork", commodity fabricators mass-produce trims and building components, with the end product being low-cost, interchangeable items for commercial or home builders. In another, the product is custom produced for individuals or individual building projects, usually a costlier option which is referred to as "architectural millwork". Uses: Millwork building materials are used for both decoration and function in buildings. Exterior doors and windows are typically tested by independent agencies and rated for energy efficiency. They can also be impact-rated, fire-rated, and can be specified to reduce sound transference. Interior millwork products are not rated for energy efficiency. These products are used primarily as a decorative feature, but will often serve functions for privacy, storage, and sound-deadening.
**HP OpenView Storage Area Manager** HP OpenView Storage Area Manager: HP OpenView Storage Area Manager (OVSAM) is a Hewlett-Packard software suite for management of storage resources and infrastructure. HP OpenView Storage Area Manager: HP OpenView Storage Area Manager provides comprehensive, centralized management across distributed, heterogeneous storage networks. The HP OpenView Storage Area Manager suite includes the following applications, which share common core services, a GUI, a host agent, and a repository: Storage Node Manager (Device Management, Health/Status), Storage Optimizer (Performance), Storage Builder (Capacity), Storage Accountant (Chargeback/Metering), and Storage Allocator (LUN Access Control). HP Storage Essentials Enterprise Edition has effectively replaced HP OpenView Storage Area Manager in the HP Storage Management Software portfolio. Major Releases: HP OpenView Storage Area Manager 3.2, July 2004; HP OpenView Storage Area Manager 3.1; HP OpenView Storage Area Manager 3.0; HP OpenView Storage Area Manager 2.2, February 2002. Current Release: SANMGR_00017 Patch (aka HP OVSAM v3.2.5), December 2005. External Product Links: HP OpenView Storage Area Manager (OVSAM) QuickSpecs; HP OpenView Storage Area Manager (OVSAM) Device Plug-Ins (DPI); HP OVSAM Software Patches (passport login required); HP Product Manuals Search Page. Related Product External Links: HP Systems Insight Manager; HP Storage Essentials Software; HP Storage Essentials Enterprise Edition. See also: List of SAN Network Management Systems.
**Origin and occurrence of fluorine** Origin and occurrence of fluorine: Fluorine is relatively rare in the universe compared to other elements of nearby atomic weight. On Earth, fluorine is essentially found only in mineral compounds because of its reactivity. The main commercial source, fluorite, is a common mineral. In the universe: At 400 ppb, fluorine is estimated to be the 24th most common element in the universe. It is comparably rare for a light element (elements tend to be more common the lighter they are). All of the elements from atomic number 6 (carbon) to atomic number 12 (magnesium) are hundreds or thousands of times more common than fluorine except for 11 (sodium). One science writer described fluorine as a "shack amongst mansions" in terms of abundance. Fluorine is so rare because it is not a product of the usual nuclear fusion processes in stars, and any fluorine created within stars is rapidly eliminated through strong nuclear fusion reactions—either with hydrogen to form oxygen and helium, or with helium to make neon and hydrogen. The presence of fluorine at all—outside of temporary existence in stars—is somewhat of a mystery because of the need to escape these fluorine-destroying reactions. Three theoretical solutions to the mystery exist: In type II supernovae, atoms of neon could be hit by neutrinos during the explosion and converted to fluorine. In Wolf-Rayet stars (blue stars over 40 times heavier than the Sun), a strong solar wind could blow the fluorine out of the star before hydrogen or helium could destroy it. Finally, in asymptotic giant branch (a type of red giant) stars, fusion reactions occur in pulses and convection could lift fluorine out of the inner star. Only the red giant hypothesis has supporting evidence from observations. In space, fluorine commonly combines with hydrogen to form hydrogen fluoride. (This compound has been suggested as a tracer to enable tracking reservoirs of hydrogen in the universe.) In addition to HF, monatomic fluorine has been observed in the interstellar medium. Fluorine cations have been seen in planetary nebulae and in stars, including the Sun. On Earth: Fluorine is the thirteenth most common element in Earth's crust, comprising between 600 and 700 ppm of the crust by mass. Because of its reactivity, it is essentially only found in compounds. Commercial sources Three minerals exist that are industrially relevant sources of fluorine: fluorite, fluorapatite, and cryolite. On Earth: Fluorite Fluorite (CaF2), also called fluorspar, is the main source of commercial fluorine. Fluorite is a colorful mineral associated with hydrothermal deposits. It is common and found worldwide. China supplies more than half of the world's demand and Mexico is the second-largest producer in the world. The United States produced most of the world's fluorite in the early 20th century, but its last mine, in Illinois, shut down in 1995. Canada also exited production in the 1990s. The United Kingdom has declining fluorite mining and has been a net importer since the 1980s. On Earth: Fluorapatite Fluorapatite (Ca5(PO4)3F) is mined along with other apatites for its phosphate content and is used mostly for production of fertilizers. Most of the Earth's fluorine is bound in this mineral, but because the percentage within the mineral is low (3.5%), the fluorine is discarded as waste. Only in the United States is there significant recovery. There, the hexafluorosilicates produced as byproducts are used to supply water fluoridation.
On Earth: Cryolite Cryolite (Na3AlF6) is the least abundant of the three major fluorine-containing minerals, but is a concentrated source of fluorine. It was formerly used directly in aluminium production. However, the main commercial mine, on the west coast of Greenland, closed in 1987. On Earth: Minor occurrences Several other minerals, such as the gemstone topaz, contain fluoride. Fluoride is not significant in seawater or brines, unlike the other halides, because the alkaline earth fluorides precipitate out of water. Commercially insignificant quantities of organofluorines have been observed in volcanic eruptions and in geothermal springs. Their ultimate origin (from biological sources or geological formation) is unclear. The possibility of small amounts of gaseous fluorine within crystals has been debated for many years. One form of fluorite, antozonite, has a smell suggestive of fluorine when crushed. The mineral also has a dark black color, perhaps from free calcium (not bonded to fluoride). In 2012, a study reported detection of trace quantities (0.04% by weight) of diatomic fluorine in antozonite. It was suggested that radiation from small amounts of uranium within the crystals had caused the free fluorine defects.
**Arthur Kollmann** Arthur Kollmann: Arthur Kollmann (1858–1941) was a German medical researcher from Hamburg who studied the fingerprint characteristics of friction ridges and volar pads. In the 1880s (1883, 1885), Kollmann was the first researcher to address the formation of friction ridges on the fetus and the random physical stresses and tensions which may have played a part in their growth. Kollmann may have been the first researcher to study the development of friction ridges. He grouped the volar pads of humans and also grouped the volar pads of many primates. Kollmann is credited with establishing and then naming ten volar pads in humans, and he was the first to study epidermic markings in different races. Alfred R. Hale described Kollmann as the first researcher (1883) to suggest that mechanical stresses inherent in fetal growth may influence the ultimate dermatoglyphic configuration. Arthur Kollmann: He is buried in the Nordfriedhof, Leipzig.
**Relevance vector machine** Relevance vector machine: In mathematics, a Relevance Vector Machine (RVM) is a machine learning technique that uses Bayesian inference to obtain parsimonious solutions for regression and probabilistic classification. The RVM has an identical functional form to the support vector machine, but provides probabilistic classification. Relevance vector machine: It is actually equivalent to a Gaussian process model with covariance function: k(x, x′) = Σ_{j=1}^{N} (1/α_j) φ(x, x_j) φ(x′, x_j), where φ is the kernel function (usually Gaussian), the α_j are the precisions (inverse variances) of the prior on the weight vector w ∼ N(0, α^{−1}I), and x_1, …, x_N are the input vectors of the training set. Compared to that of support vector machines (SVM), the Bayesian formulation of the RVM avoids the set of free parameters of the SVM (that usually require cross-validation-based post-optimizations). However, RVMs use an expectation maximization (EM)-like learning method and are therefore at risk of local minima. This is unlike the standard sequential minimal optimization (SMO)-based algorithms employed by SVMs, which are guaranteed to find a global optimum (of the convex problem). Relevance vector machine: The relevance vector machine was patented in the United States by Microsoft (patent expired September 4, 2019). Software: dlib C++ Library The Kernel-Machine Library rvmbinary: R package for binary classification scikit-rvm fast-scikit-rvm, rvm tutorial
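To make the equivalent Gaussian-process view concrete, the covariance above can be assembled directly from the basis functions; in this sketch the training points, the RBF basis width, and the α values are arbitrary illustrative choices rather than fitted RVM quantities.

```python
# Minimal sketch: build the GP covariance k(x, x') = sum_j (1/alpha_j) phi(x, x_j) phi(x', x_j)
# implied by an RVM. The data, basis width, and alpha values are illustrative.
import numpy as np

def phi(x, xj, gamma=1.0):
    """Gaussian (RBF) basis function centred on the training point xj."""
    return np.exp(-gamma * np.sum((x - xj) ** 2))

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5, 2))                 # N = 5 training inputs
alpha = np.array([1.0, 10.0, 0.5, 2.0, 100.0])    # prior precisions per weight

def rvm_kernel(x, x_prime):
    return sum((1.0 / alpha[j]) * phi(x, X_train[j]) * phi(x_prime, X_train[j])
               for j in range(len(X_train)))

# Covariance matrix over a few test points
X_test = rng.normal(size=(3, 2))
K = np.array([[rvm_kernel(a, b) for b in X_test] for a in X_test])
print(K)
```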
**Logitech Unifying receiver** Logitech Unifying receiver: The Logitech Unifying Receiver is a small dedicated USB wireless receiver, based on the nRF24L-family of RF devices, that allows up to six compatible Logitech human interface devices (such as mice, trackballs, touchpads, and keyboards; headphones are not compatible) to be linked to the same computer using 2.4 GHz band radio communication. Receivers that are bundled with a Logitech product are paired with the device at the factory. When purchasing a replacement receiver or connecting multiple devices to one receiver, pairing requires the free-of-charge Logitech Unifying software, available for Microsoft Windows and Mac OS X. On Linux the Solaar software can be used to adjust the configuration. Although not compatible with Bluetooth, devices pair to Unifying Receivers in a similar way. Peripherals remain paired, and can then be used on systems not supporting the software. Logitech receivers compatible with the Unifying protocol can be identified by the orange Unifying logo, which distinguishes them from Logitech Nano receivers of similar appearance, which pair in a similar manner but only with a single device, without using the Unifying protocol. Logitech Unifying receiver: Logitech Unifying Receivers (LURs) are often included in wireless Logitech keyboard, mouse, and combo sets, and may be purchased separately. Some Logitech peripherals allow a receiver to be stored inside. Compatibility and use: Each peripheral device can pair to one receiver per profile. While most peripherals only store one profile, newer products such as the Logitech MX Master, MX Anywhere series, and M720 Triathlon allow multiple profiles. These devices can be connected to multiple receivers simultaneously. This allows the use of receivers in several computers, e.g., a desktop and a laptop computer, selecting the computer to use by changing profiles on the mouse. This multi-computer function is further augmented by Logitech Flow (a software KVM solution), which is similar to Synergy. For devices without multi-computer support, the receiver and input devices can be moved together from one computer to another, maintaining their paired status after being unplugged, as the pairing information is held in the little USB receiver—this is much simpler than transferring the peripheral from one receiver to another by changing the setup in software, and also avoids the limitation to 45 pairings of older devices. This also allows the use of peripherals on computing devices that do not support Unifying Software, e.g. devices supporting USB OTG with operating systems such as Android: first pair to the receiver on a PC or Mac. Some older Unifying devices limit the number of allowable pairing changes to a maximum of 45 times. Once the 45th connection is made, it is no longer possible to connect such a device to a different receiver. For users who often switch a Unifying device between multiple PCs or laptops with individual receivers, this connection limit can become an issue. For example, a user who frequently switches a mouse between two receivers (e.g. at work and home) will quickly exhaust the limit of available pairing switches. Logitech advises customers with this issue to contact their Customer Care. Newer devices can switch pairings an unlimited number of times. Compatibility and use: Pairing software is available from Logitech for Microsoft Windows and Mac OS X. Wireless devices using the Unifying Receiver have been supported since Linux 3.2.
Software to manage Unifying devices on Linux is available from third-party developers, such as Solaar. Many companies have made peripherals that connect via USB wireless receivers very similar to Logitech's; Logitech devices are incompatible with many of these "off-brand" receivers. There are many different hardware versions of the Unifying receiver. The most common version is used for daily use and is marked CU-0007 on the metal jacket. CU-0008 is distributed with gaming devices, and features lower latency. Security: Several security vulnerabilities of the Logitech Unifying system were reported in 2016 and 2019, and patches were released. Security: MouseJacking and keyjacking MouseJacking, first reported by Bastille Networks, Inc., is the sending of malicious radio signals (packets) wirelessly to an unsuspecting user through Logitech Unifying wireless technology. The exploit takes advantage of a user's vulnerable Logitech Unifying Receiver and unencrypted signals within a range of about 100 meters. Possible exploits include keystroke injection (by spoofing a paired mouse or keyboard) and forced pairing. Logitech has released Unifying receiver firmware updates as new exploits were reported. Linux users can use fwupd to flash an updated firmware. It will automatically detect available updates for any connected Unifying receivers and many other firmware-updatable devices. An outdated alternative is MouseJack. Flashing on a Linux/UNIX host via a hypervisor such as VirtualBox, along with a Windows virtual guest image and the Windows Logitech update executable, is also possible. If using a Windows virtual guest, it is recommended to have a second available pointing device while the dongle is being updated. The second pointing device may be needed to allow the user to select and enable pass-through of the Unifying receiver via the hypervisor taskbar after executing the firmware updater, so that the device is found and updated. Security: Updating the Unifying receiver firmware to versions RQR12.08 or greater and RQR24.06 or greater can limit some functionality of certain paired devices unless the devices' firmware is also updated. Security: Other vulnerabilities On July 9, 2019, another set of vulnerabilities was disclosed and documented by a different researcher. A firmware update for Unifying receivers addressing the "Encryption Key Extraction Through USB" vulnerability (CVE-2019-13054/55) was released on 28 August 2019. Some users reported in 2019 that some Unifying devices were still being sold that were vulnerable to the original 2016 MouseJacking attack.
**Pesticide detection kit** Pesticide detection kit: A pesticide detection kit is a scientific test kit that detects the presence of pesticide residues. Various organizations create them, among them the Defence Food Research Laboratory of India.
**Word addressing** Word addressing: In computer architecture, word addressing means that addresses of memory on a computer uniquely identify words of memory. It is usually used in contrast with byte addressing, where addresses uniquely identify bytes. Almost all modern computer architectures use byte addressing, and word addressing is largely only of historical interest. A computer that uses word addressing is sometimes called a word machine. Basics: Consider a computer which provides 524,288 (219) bits of memory. If that memory is arranged in a byte-addressable flat address space using 8-bit bytes, then there are 65,536 (216) valid addresses, from 0 to 65,535, each denoting an independent 8 bits of memory. If instead it is arranged in a word-addressable flat address space using 32-bit words, then there are 16,384 (214) valid addresses, from 0 to 16,383, each denoting an independent 32 bits. Basics: More generally, the minimum addressable unit (MAU) is a property of a specific memory abstraction. Different abstractions within a computer may use different MAUs, even when they are representing the same underlying memory. For example, a computer might use 32-bit addresses with byte addressing in its instruction set, but the CPU's cache coherence system might work with memory only at a granularity of 64-byte cache lines, allowing any particular cache line to be identified with only a 26-bit address and decreasing the overhead of the cache. Basics: The address translation done by virtual memory often affects the structure and width of the address space, but it does not change the MAU. Trade-offs of different minimum addressable units: The size of the minimum addressable unit of memory can have complex trade-offs. Using a larger MAU allows the same amount of memory to be covered with a smaller address, which can substantially decrease the memory requirements of a program. However, using a smaller MAU makes it easier to work efficiently with small items of data. Trade-offs of different minimum addressable units: Suppose a program wishes to store one of the 12 traditional signs of Western astrology. A single sign can be stored in 4 bits. If a sign is stored in its own MAU, then 4 bits will be wasted with byte addressing (50% efficiency), while 28 bits will be wasted with 32-bit word addressing (12.5% efficiency). If a sign is "packed" into a MAU with other data, then it may be relatively more expensive to read and write. For example, to write a new sign into a MAU that other data has been packed into, the computer must read the current value of the MAU, overwrite just the appropriate bits, and then store the new value back. This will be especially expensive if it is necessary for the program to allow other threads to concurrently modify the other data in the MAU. Trade-offs of different minimum addressable units: A more common example is a string of text. Common string formats such as UTF-8 and ASCII store strings as a sequence of 8-bit code points. With byte addressing, each code point can be placed in its own independently-addressable MAU with no overhead. With 32-bit word addressing, placing each code point in a separate MAU would increase the memory usage by 300%, which is not viable for programs that work with large amounts of text. Packing adjacent code points into a single word avoids this cost. 
However, many algorithms for working with text prefer to be able to independently address code points; to do this with packed code points, the algorithm must use a "wide" address which also stores the offset of the character within the word. If this wide address needs to be stored elsewhere within the program's memory, it may require more memory than an ordinary address. Trade-offs of different minimum addressable units: To evaluate these effects on a complete program, consider a web browser displaying a large and complex page. Some of the browser's memory will be used to store simple data such as images and text; the browser will likely choose to store this data as efficiently as possible, and it will occupy about the same amount of memory regardless of the size of the MAU. Other memory will represent the browser's model of various objects on the page, and these objects will include many references: to each other, to the image and text data, and so on. The amount of memory needed to store these objects will depend greatly on the address width of the computer. Trade-offs of different minimum addressable units: Suppose that, if all the addresses in the program were 32-bit, this web page would occupy about 10 Gigabytes of memory. Trade-offs of different minimum addressable units: If the web browser is running on a computer with 32-bit addresses and byte-addressable memory, the address space will cover 4 Gigabytes of memory, which is insufficient. The browser will either be unable to display this page, or it will need to be able to opportunistically move some of the data to slower storage, which will substantially hurt its performance. Trade-offs of different minimum addressable units: If the web browser is running on a computer with 64-bit addresses and byte-addressable memory, it will require substantially more memory in order to store the larger addresses. The exact overhead will depend on how much of the 10 Gigabytes is simple data and how much is object-like and dense with references, but a figure of 40% is not implausible, for a total of 14 Gigabytes required. This is, of course, well within the capabilities of a 64-bit address space. However, the browser will generally exhibit worse locality and make worse use of the computer's memory caches, assuming resources equal to those of the alternatives. Trade-offs of different minimum addressable units: If the web browser is running on a computer with 32-bit addresses and 32-bit-word-addressable memory, it will likely require extra memory because of suboptimal packing and the need for a few wide addresses. This impact is likely to be relatively small, as the browser will use packing and non-wide addresses for most important purposes, and the browser will fit comfortably within the maximum addressable range of 16 Gigabytes. However, there may be a significant runtime overhead due to the widespread use of packed data for images and text. More importantly, 16 Gigabytes is a relatively low limit, and if the web page grows significantly, this computer will exhaust its address space and begin to have some of the same difficulties as the byte-addressed computer.
Trade-offs of different minimum addressable units: If the web browser is running on a computer with 64-bit addresses and 32-bit-word-addressable memory, it will suffer from both of the above runtime overheads: it will require substantially more memory to accommodate the larger 64-bit addresses, hurting locality, while also incurring the runtime overhead of working with extensive packing of text and image data. Word addressing means that the program can theoretically address up to 64 Exabytes of memory instead of only 16 Exabytes, but since the program is nowhere near needing this much memory (and in practice no real computer is capable of providing it), this provides no benefit. Thus, word addressing allows a computer to address substantially more memory without increasing its address width and incurring the corresponding large increase in memory usage. However, this is valuable only within a relatively narrow range of working set sizes, and it can introduce substantial runtime overheads depending on the application. Programs which do relatively little work with byte-oriented data like images, text, files, and network traffic may be able to benefit most. Sub-word accesses and wide addresses: A program running on a computer that uses word addressing can still work with smaller units of memory by emulating an access to the smaller unit. For a load, this requires loading the enclosing word and then extracting the desired bits. For a store, this requires loading the enclosing word, shifting the new value into place, overwriting the desired bits, and then storing the enclosing word. Sub-word accesses and wide addresses: Suppose that four consecutive code points from a UTF-8 string need to be packed into a 32-bit word. The first code point might occupy bits 0–7, the second 8–15, the third 16–23, and the fourth 24–31. (If the memory were byte-addressable, this would be a little endian byte order.) In order to clearly elucidate the code necessary for sub-word accesses without tying the example too closely to any particular word-addressed architecture, the following examples use MIPS assembly. In reality, MIPS is a byte-addressed architecture with direct support for loading and storing 8-bit and 16-bit values, but the example will pretend that it only provides 32-bit loads and stores and that offsets within a 32-bit word must be stored separately from an address. MIPS has been chosen because it is a simple assembly language with no specialized facilities that would make these operations more convenient. Sub-word accesses and wide addresses: Suppose that a program wishes to read the third code point into register r1 from the word at an address in register r2. In the absence of any other support from the instruction set, the program must load the full word, right-shift by 16 to drop the first two code points, and then mask off the fourth code point:
ldw $r1, 0($r2)      # Load the full word
srl $r1, $r1, 16     # Shift right by 16
andi $r1, $r1, 0xFF  # Mask off other code points
If the offset is not known statically, but instead a bit-offset is stored in the register r3, a slightly more complex approach is required:
ldw $r1, 0($r2)      # Load the full word
srlv $r1, $r1, $r3   # Shift right by the bit offset
andi $r1, $r1, 0xFF  # Mask off other code points
Suppose instead that the program wishes to assign the code point in register r1 to the third code point in the word at the address in r2.
In the absence of any other support from the instruction set, the program must load the full word, mask off the old value of that code point, shift the new value into place, merge the values, and store the full word back:
sll $r1, $r1, 16     # Shift the new value left by 16
lhi $r5, 0x00FF      # Construct a constant mask to select the third byte
nor $r5, $r5, $zero  # Flip the mask so that it clears the third byte
ldw $r4, 0($r2)      # Load the full word
and $r4, $r5, $r4    # Clear the third byte from the word
or $r4, $r4, $r1     # Merge the new value into the word
stw $r4, 0($r2)      # Store the result as the full word
Again, if the offset is instead stored in r3, a more complex approach is required:
sllv $r1, $r1, $r3   # Shift the new value left by the bit offset
llo $r5, 0x00FF      # Construct a constant mask to select a byte
sllv $r5, $r5, $r3   # Shift the mask left by the bit offset
nor $r5, $r5, $zero  # Flip the mask so that it clears the selected byte
ldw $r4, 0($r2)      # Load the full word
and $r4, $r5, $r4    # Clear the selected byte from the word
or $r4, $r4, $r1     # Merge the new value into the word
stw $r4, 0($r2)      # Store the result as the full word
This code sequence assumes that another thread cannot modify other bytes in the word concurrently. If concurrent modification is possible, then one of the modifications might be lost. To solve this problem, the last few instructions must be turned into an atomic compare-exchange loop so that a concurrent modification will simply cause it to repeat the operation with the new value. No memory barriers are required in this case. Sub-word accesses and wide addresses: A pair of a word address and an offset within the word is called a wide address (also known as a fat address or fat pointer). (This should not be confused with other uses of wide addresses for storing other kinds of supplemental data, such as the bounds of an array.) The stored offset may be either a bit offset or a byte offset. The code sequences above benefit from the offset being denominated in bits because they use it as a shift count; an architecture with direct support for selecting bytes might prefer to just store a byte offset. Sub-word accesses and wide addresses: In these code sequences, the additional offset would have to be stored alongside the base address, effectively doubling the overall storage requirements of an address. This is not always true on word machines, primarily because addresses themselves are often not packed with other data to make accesses more efficient. For example, the Cray X1 uses 64-bit words, but addresses are only 32 bits; when an address is stored in memory, it is stored in its own word, and so the byte offset can be placed in the upper 32 bits of the word. The inefficiency of using wide addresses on that system is just all the extra logic to manipulate this offset and extract and insert bytes within words; it has no memory-use impact. Related concepts: The minimum addressable unit of a computer isn't necessarily the same as the minimum memory access size of the computer's instruction set. For example, a computer might use byte addressing without providing any instructions to directly read or write a single byte. Programs would be expected to emulate those operations in software with bit-manipulations, just like the example code sequences above do. This is relatively common in 64-bit computer architectures designed as successors to 32-bit supercomputers or minicomputers, such as the DEC Alpha and the Cray X1.
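For readers who do not follow the assembly, the read-modify-write emulation and the wide-address layout described above can be restated as a short sketch (illustrative Python, not from the original article; the field positions of the wide address are an assumption modelled on the Cray X1 description).
# Illustrative sketch of the sub-word access emulation described above.
# A 32-bit "word" holds four 8-bit code points; bit_offset selects one of them.

def load_byte(word, bit_offset):
    # Mirror of: ldw / srlv / andi
    return (word >> bit_offset) & 0xFF

def store_byte(word, bit_offset, value):
    # Mirror of: sllv / (build mask) / nor / ldw / and / or / stw
    mask = 0xFF << bit_offset
    return (word & ~mask & 0xFFFFFFFF) | ((value & 0xFF) << bit_offset)

w = 0x44332211                     # code points 0x11, 0x22, 0x33, 0x44 in bits 0-7 .. 24-31
assert load_byte(w, 16) == 0x33    # read the third code point
assert store_byte(w, 16, 0xAA) == 0x44AA2211

# A wide (fat) address in the style described for the Cray X1: a 32-bit word
# address in the low half of a 64-bit word, with the byte offset kept in the
# otherwise-unused upper half. The exact field positions are an assumption
# made for illustration only.
def pack_wide(word_address, byte_offset):
    return (byte_offset << 32) | word_address

def unpack_wide(wide):
    return wide & 0xFFFFFFFF, wide >> 32

assert unpack_wide(pack_wide(0x1000, 3)) == (0x1000, 3)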
Related concepts: The C standard states that a pointer is expected to have the usual representation of an address. C also allows a pointer to be formed to any object except a bit-field; this includes each individual element of an array of bytes. C compilers for computers that use word addressing often use different representations for pointers to different types depending on their size. A pointer to a type that's large enough to fill a word will be a simple address, while a pointer such as char* or void* will be a wide pointer: a pair of the address of a word and the offset of a byte within that word. Converting between pointer types is therefore not necessarily a trivial operation and can lose information if done incorrectly. Related concepts: Because the size of a C struct is not always known when deciding the representation of a pointer to that struct, it is not possible to reliably apply the rule above. Compilers may need to align the start of a struct so that it can use a more efficient pointer representation. Examples: The ERA 1103 uses word addressing with 36-bit words. Only addresses 0-1023 refer to random-access memory; others are either unmapped or refer to drum memory. The PDP-10 uses word addressing with 36-bit words and 18-bit addresses. Most Cray supercomputers from the 1980s and 1990s use word addressing with 64-bit words. The Cray-1 and Cray X-MP use 24-bit addresses, while most others use 32-bit addresses. The Cray X1 uses byte addressing with 64-bit addresses. It does not directly support memory accesses smaller than 64 bits, and such accesses must be emulated in software. The C compiler for the X1 was the first Cray compiler to support emulating 16-bit accesses. Examples: The DEC Alpha uses byte addressing with 64-bit addresses. Early Alpha processors do not provide any direct support for 8-bit and 16-bit memory accesses, and programs are required to e.g. load a byte by loading the containing 64-bit word and then separately extracting the byte. Because the Alpha uses byte addressing, this offset is still represented in the least significant bits of the address (rather than separately as a wide address), and the Alpha conveniently provides load and store unaligned instructions (ldq_u and stq_u) which ignore those bits and simply load and store the containing aligned word. The later byte-word extensions to the architecture (BWX) added 8-bit and 16-bit loads and stores, starting with the Alpha 21164a. Again, this extension was possible without serious software incompatibilities because the Alpha had always used byte addressing.
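The pointer-representation point made under Related concepts, that converting between pointer types on a word machine can lose information, can also be sketched briefly (illustrative Python, not tied to any real compiler's conventions).
# Illustrative sketch of why pointer conversions on a word machine can lose
# information. A plain pointer is just a word address; a "wide" char-style
# pointer is a (word address, byte offset) pair.

def int_ptr_to_char_ptr(word_address):
    return (word_address, 0)             # widening is always safe: the offset starts at 0

def char_ptr_to_int_ptr(wide_pointer):
    word_address, byte_offset = wide_pointer
    # A word-sized pointer has no field for the offset, so it is dropped.
    # If byte_offset was nonzero, this conversion has lost information.
    return word_address

p = int_ptr_to_char_ptr(0x1000)          # points at byte 0 of word 0x1000
p = (p[0], p[1] + 3)                     # advance the char-style pointer by 3 bytes
assert char_ptr_to_int_ptr(p) == 0x1000  # the 3-byte offset is silently lost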
**Trade item** Trade item: A trade item is an item that is the subject of trade. It is a term used primarily by people in supply chain management and logistics engineering. It is also an often-used term in the Journals of the Lewis and Clark Expedition.
**Salinity** Salinity: Salinity is the saltiness or amount of salt dissolved in a body of water, called saline water (see also soil salinity). It is usually measured in g/L or g/kg (grams of salt per liter/kilogram of water; the latter is dimensionless and equal to ‰). Salinity is an important factor in determining many aspects of the chemistry of natural waters and of biological processes within them, and is a thermodynamic state variable that, along with temperature and pressure, governs physical characteristics like the density and heat capacity of the water. Salinity: A contour line of constant salinity is called an isohaline, or sometimes isohale. Definitions: Salinity in rivers, lakes, and the ocean is conceptually simple, but technically challenging to define and measure precisely. Conceptually, the salinity is the quantity of dissolved salt in the water. Salts are compounds like sodium chloride, magnesium sulfate, potassium nitrate, and sodium bicarbonate which dissolve into ions. The concentration of dissolved chloride ions is sometimes referred to as chlorinity. Operationally, dissolved matter is defined as that which can pass through a very fine filter (historically a filter with a pore size of 0.45 μm, but nowadays usually 0.2 μm). Salinity can be expressed in the form of a mass fraction, i.e. the mass of the dissolved material in a unit mass of solution. Definitions: Seawater typically has a mass salinity of around 35 g/kg, although lower values are typical near coasts where rivers enter the ocean. Rivers and lakes can have a wide range of salinities, from less than 0.01 g/kg to a few g/kg, although there are many places where higher salinities are found. The Dead Sea has a salinity of more than 200 g/kg. Precipitation typically has a total dissolved solids (TDS) content of 20 mg/kg or less. Whatever pore size is used in the definition, the resulting salinity value of a given sample of natural water will not vary by more than a few percent. Physical oceanographers working in the abyssal ocean, however, are often concerned with precision and intercomparability of measurements by different researchers, at different times, to almost five significant digits. A bottled seawater product known as IAPSO Standard Seawater is used by oceanographers to standardize their measurements with enough precision to meet this requirement. Definitions: Composition Measurement and definition difficulties arise because natural waters contain a complex mixture of many different elements from different sources (not all from dissolved salts) in different molecular forms. The chemical properties of some of these forms depend on temperature and pressure. Many of these forms are difficult to measure with high accuracy, and in any case complete chemical analysis is not practical when analyzing multiple samples. Different practical definitions of salinity result from different attempts to account for these problems, to different levels of precision, while still remaining reasonably easy to use. Definitions: For practical reasons salinity is usually related to the sum of masses of a subset of these dissolved chemical constituents (so-called solution salinity), rather than to the unknown mass of salts that gave rise to this composition (an exception is when artificial seawater is created). For many purposes this sum can be limited to a set of eight major ions in natural waters, although for seawater at highest precision an additional seven minor ions are also included.
The major ions dominate the inorganic composition of most (but by no means all) natural waters. Exceptions include some pit lakes and waters from some hydrothermal springs. Definitions: The concentrations of dissolved gases like oxygen and nitrogen are not usually included in descriptions of salinity. However, carbon dioxide gas, which when dissolved is partially converted into carbonates and bicarbonates, is often included. Silicon in the form of silicic acid, which usually appears as a neutral molecule in the pH range of most natural waters, may also be included for some purposes (e.g., when salinity/density relationships are being investigated). Definitions: Seawater The term 'salinity' is, for oceanographers, usually associated with one of a set of specific measurement techniques. As the dominant techniques evolve, so do different descriptions of salinity. Salinities were largely measured using titration-based techniques before the 1980s. Titration with silver nitrate could be used to determine the concentration of halide ions (mainly chlorine and bromine) to give a chlorinity. The chlorinity was then multiplied by a factor to account for all other constituents. The resulting 'Knudsen salinities' are expressed in units of parts per thousand (ppt or ‰). Definitions: The use of electrical conductivity measurements to estimate the ionic content of seawater led to the development of the scale called the practical salinity scale 1978 (PSS-78). Salinities measured using PSS-78 do not have units. The suffix psu or PSU (denoting practical salinity unit) is sometimes added to PSS-78 measurement values. The addition of PSU as a unit after the value is "formally incorrect and strongly discouraged".In 2010 a new standard for the properties of seawater called the thermodynamic equation of seawater 2010 (TEOS-10) was introduced, advocating absolute salinity as a replacement for practical salinity, and conservative temperature as a replacement for potential temperature. This standard includes a new scale called the reference composition salinity scale. Absolute salinities on this scale are expressed as a mass fraction, in grams per kilogram of solution. Salinities on this scale are determined by combining electrical conductivity measurements with other information that can account for regional changes in the composition of seawater. They can also be determined by making direct density measurements. Definitions: A sample of seawater from most locations with a chlorinity of 19.37 ppt will have a Knudsen salinity of 35.00 ppt, a PSS-78 practical salinity of about 35.0, and a TEOS-10 absolute salinity of about 35.2 g/kg. The electrical conductivity of this water at a temperature of 15 °C is 42.9 mS/cm.On the global scale, it is extremely likely that human-caused climate change has contributed to observed surface and subsurface salinity changes since the 1950s, and projections of surface salinity changes throughout the 21st century indicate that fresh ocean regions will continue to get fresher and salty regions will continue to get saltier. Definitions: Lakes and rivers Limnologists and chemists often define salinity in terms of mass of salt per unit volume, expressed in units of mg/L or g/L. It is implied, although often not stated, that this value applies accurately only at some reference temperature because solution volume varies with temperature. Values presented in this way are typically accurate to the order of 1%. 
Limnologists also use electrical conductivity, or "reference conductivity", as a proxy for salinity. This measurement may be corrected for temperature effects, and is usually expressed in units of μS/cm. Definitions: A river or lake water with a salinity of around 70 mg/L will typically have a specific conductivity at 25 °C of between 80 and 130 μS/cm. The actual ratio depends on the ions present. The actual conductivity usually changes by about 2% per degree Celsius, so the measured conductivity at 5 °C might only be in the range of 50–80 μS/cm. Definitions: Direct density measurements are also used to estimate salinities, particularly in highly saline lakes. Sometimes density at a specific temperature is used as a proxy for salinity. At other times an empirical salinity/density relationship developed for a particular body of water is used to estimate the salinity of samples from a measured density. Classification of water bodies based upon salinity: Marine waters are those of the ocean, another term for which is euhaline seas. The salinity of euhaline seas is 30 to 35 ‰. Brackish seas or waters have salinity in the range of 0.5 to 29 ‰ and metahaline seas from 36 to 40 ‰. These waters are all regarded as thalassic because their salinity is derived from the ocean and defined as homoiohaline if salinity does not vary much over time (essentially constant). This classification, modified from Por (1972), follows the "Venice system" (1959). In contrast to homoiohaline environments are certain poikilohaline environments (which may also be thalassic) in which the salinity variation is biologically significant. Poikilohaline water salinities may range anywhere from 0.5 to greater than 300 ‰. The important characteristic is that these waters tend to vary in salinity over some biologically meaningful range seasonally or on some other roughly comparable time scale. Put simply, these are bodies of water with quite variable salinity. Classification of water bodies based upon salinity: Highly saline water, from which salts crystallize (or are about to), is referred to as brine. Environmental considerations: Salinity is an ecological factor of considerable importance, influencing the types of organisms that live in a body of water. Salinity also influences the kinds of plants that will grow either in a water body or on land fed by a body of water (or by groundwater). A plant adapted to saline conditions is called a halophyte. Halophytes that are tolerant of residual sodium carbonate salinity are called glasswort, saltwort, or barilla plants. Organisms (mostly bacteria) that can live in very salty conditions are classified as extremophiles, or halophiles specifically. An organism that can withstand a wide range of salinities is euryhaline. Environmental considerations: Salts are expensive to remove from water, and salt content is an important factor in water use, factoring into potability and suitability for irrigation. Increases in salinity have been observed in lakes and rivers in the United States, due to common road salt and other salt de-icers in runoff. The degree of salinity in oceans is a driver of the world's ocean circulation, where density changes due to both salinity changes and temperature changes at the surface of the ocean produce changes in buoyancy, which cause the sinking and rising of water masses. Changes in the salinity of the oceans are thought to contribute to global changes in carbon dioxide because carbon dioxide is less soluble in more saline water.
In addition, during glacial periods the hydrography was such that the oceans may have become stratified, a possible cause of reduced circulation. In such cases, it is more difficult to subduct water through the thermohaline circulation. Not only is salinity a driver of ocean circulation, but changes in ocean circulation also affect salinity, particularly in the subpolar North Atlantic, where from 1990 to 2010 increased contributions of Greenland meltwater were counteracted by increased northward transport of salty Atlantic waters. However, North Atlantic waters have become fresher since the mid-2010s due to increased Greenland meltwater flux.
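Two of the quantitative relationships mentioned in the definitions above lend themselves to a short worked sketch (illustrative Python, not part of the original text; the chlorinity factor 1.80655 is the conventional conversion, and the 2%-per-degree compensation is the approximate figure quoted above for lake and river conductivity, both stated here as assumptions).
# Illustrative sketch of two conversions described above.
def knudsen_salinity_ppt(chlorinity_ppt):
    # Chlorinity multiplied by the conventional factor (an assumption here;
    # the text only says chlorinity is "multiplied by a factor").
    return 1.80655 * chlorinity_ppt

def conductivity_at_25c(measured_us_cm, temperature_c, percent_per_degree=2.0):
    # Linear temperature compensation back to the 25 degC reference.
    return measured_us_cm / (1 + percent_per_degree / 100.0 * (temperature_c - 25.0))

print(round(knudsen_salinity_ppt(19.37), 1))      # ~35.0 ppt, matching the seawater sample above
print(round(conductivity_at_25c(60.0, 5.0)))      # a 60 uS/cm reading at 5 degC is roughly 100 uS/cm at 25 degC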
**.scot** .scot: .scot is a GeoTLD for Scotland and Scottish culture, including the Gaelic and Scots languages. ICANN later decided to allow the introduction of almost any new top-level domain some time in 2013, and a list of applications for these was published in June 2012; the domain .scot was included. On 27 January 2014, dotScot Registry, a not-for-profit organization established in 2009, announced that it had agreed terms to operate the .scot domain name, with plans to get it up and running in the summer of 2014. On 15 July 2014, .scot was officially launched. The first .scot domain name to go live was calico.scot, registered by hosting company Calico Internet Ltd. On 17 February 2015, the Scottish Government migrated its website from scotland.gov.uk to gov.scot. Likewise, the Scottish Parliament moved from scottish.parliament.uk to parliament.scot in May 2016, to coincide with the 2016 elections. The 2017 Global Amendment to the base New GeoTLD Registry Agreement became effective on 31 July 2017. On 3 May 2018, the dotScot Registry lifted registration restrictions on locality domains (based on towns, etc.) and other premium names.
**Teaching machine** Teaching machine: Teaching machines were originally mechanical devices that presented educational materials and taught students. They were first invented by Sidney L. Pressey in the mid-1920s. His machine originally administered multiple-choice questions. The machine could be set so it moved on only when the student got the right answer. Tests showed that learning had taken place. This was an example of how knowledge of results causes learning. Much later, Norman Crowder developed the Pressey idea further.B. F. Skinner was responsible for a different type of machine called GLIDER, which used his ideas on how learning should be directed with positive reinforcement. Skinner advocated the use of teaching machines for a broad range of students (e.g., preschool aged to adult) and instructional purposes (e.g., reading and music). The instructional potential of the teaching machine stemmed from several factors: it provided automatic, immediate and regular reinforcement without the use of aversive control; the material presented was coherent, yet varied and novel; the pace of learning could be adjusted to suit the individual. As a result, students were interested, attentive, and learned efficiently by producing the desired behavior, "learning by doing".There is extensive experience that both methods worked well, and so did programmed learning in other forms, such as books. Teaching machine: The ideas of teaching machines and programmed learning provided the basis for later ideas such as open learning and computer-assisted instruction. Illustrations of early teaching machines can be found in the 1960 sourcebook, Teaching Machines and Programmed Learning. An "Autotutor" was demonstrated at the 1964 World's Fair. Quotes: Edward L. Thorndike in 1912: "If, by a miracle of mechanical ingenuity, a book could be so arranged that only to him who had done what was directed on page one would page two become visible, and so on, much that now requires personal instruction could be managed by print". Sidney L. Pressey in 1932: "Education was the one major activity in this country which has thus far not systematically applied ingenuity to the solution of its problems" (p. 668). He thought the machine he developed would lead to an "industrial revolution in education" (p. 672).
**MPEG-4 Part 11** MPEG-4 Part 11: MPEG-4 Part 11 Scene description and application engine was published as ISO/IEC 14496-11 in 2005. MPEG-4 Part 11 is also known as BIFS, XMT, MPEG-J. It defines: the coded representation of the spatio-temporal positioning of audio-visual objects as well as their behaviour in response to interaction (scene description); the coded representation of synthetic two-dimensional (2D) or three-dimensional (3D) objects that can be manifested audibly or visually; the Extensible MPEG-4 Textual (XMT) format - a textual representation of the multimedia content described in MPEG-4 using the Extensible Markup Language (XML); and a system level description of an application engine (format, delivery, lifecycle, and behaviour of downloadable Java byte code applications). (The MPEG-J Graphics Framework eXtensions (GFX) is defined in MPEG-4 Part 21 - ISO/IEC 14496-21.) Binary Format for Scenes (BIFS) is a binary format for two- or three-dimensional audiovisual content. It is based on VRML and is specified in part 11 of the MPEG-4 standard. MPEG-4 Part 11: BIFS is the MPEG-4 scene description protocol used to compose MPEG-4 objects, describe interaction with them, and animate them. MPEG-4 Part 11: MPEG-4 Binary Format for Scenes (BIFS) is used in Digital Multimedia Broadcasting (DMB). The XMT framework accommodates substantial portions of SMIL, W3C Scalable Vector Graphics (SVG) and X3D (the new name of VRML). Such a representation can be directly played back by a SMIL or VRML player, but can also be binarised to become a native MPEG-4 representation that can be played by an MPEG-4 player. Another bridge has been created with BiM (Binary MPEG format for XML).
**NDFIP1** NDFIP1: Nedd4 family interacting protein 1 is a protein that in humans is encoded by the NDFIP1 gene. Function: The protein encoded by this gene belongs to a small group of evolutionarily conserved proteins with three transmembrane domains. It is a potential target for ubiquitination by the Nedd4 family of proteins. This protein is thought to be part of a family of integral Golgi membrane proteins. [provided by RefSeq, Jul 2008].
**Gluten-free diet** Gluten-free diet: A gluten-free diet (GFD) is a nutritional plan that strictly excludes gluten, which is a mixture of proteins found in wheat (and all of its species and hybrids, such as spelt, kamut, and triticale), as well as barley, rye, and oats. The inclusion of oats in a gluten-free diet remains controversial, and may depend on the oat cultivar and the frequent cross-contamination with other gluten-containing cereals.Gluten may cause both gastrointestinal and systemic symptoms for those with gluten-related disorders, including coeliac disease (CD), non-coeliac gluten sensitivity (NCGS), gluten ataxia, dermatitis herpetiformis (DH), and wheat allergy. In these people, the gluten-free diet is demonstrated as an effective treatment, but several studies show that about 79% of the people with coeliac disease have an incomplete recovery of the small bowel, despite a strict gluten-free diet. This is mainly caused by inadvertent ingestion of gluten. People with a poor understanding of a gluten-free diet often believe that they are strictly following the diet, but are making regular errors.In addition, a gluten-free diet may, in at least some cases, improve gastrointestinal or systemic symptoms in diseases like irritable bowel syndrome, rheumatoid arthritis, or HIV enteropathy, among others. There is no good evidence that gluten-free diets are an alternative medical treatment for people with autism.Gluten proteins have low nutritional and biological value and the grains that contain gluten are not essential in the human diet. However, an unbalanced selection of food and an incorrect choice of gluten-free replacement products may lead to nutritional deficiencies. Replacing flour from wheat or other gluten-containing cereals with gluten-free flours in commercial products may lead to a lower intake of important nutrients, such as iron and B vitamins. Some gluten-free commercial replacement products are not as enriched or fortified as their gluten-containing counterparts, and often have greater lipid/carbohydrate content. Children especially often over-consume these products, such as snacks and biscuits. Nutritional complications can be prevented by a correct dietary education.A gluten-free diet may be based on gluten-free foods, such as meat, fish, eggs, milk and dairy products, legumes, nuts, fruits, vegetables, potatoes, rice, and corn. Gluten-free processed foods may be used. Pseudocereals (quinoa, amaranth, and buckwheat) and some minor cereals are alternative choices. Rationale behind adoption of the diet: Coeliac disease Coeliac disease (American English: celiac) (CD) is a chronic, immune-mediated, and mainly intestinal process, that appears in genetically predisposed people of all ages. It is caused by the ingestion of gluten, which is present in wheat, barley, rye and derivatives. Coeliac disease is not only a gastrointestinal disease, because it may affect several organs and cause an extensive variety of non-gastrointestinal symptoms, and most importantly, it may often be completely asymptomatic. Added difficulties for diagnosis are the fact that serological markers (anti-tissue transglutaminase [TG2]) are not always present, and many people with coeliac may have minor mucosal lesions, without atrophy of the intestinal villi.Coeliac disease affects approximately 1–2% of the general population all over the world and is on the increase, but most cases remain unrecognized, undiagnosed and untreated, exposing patients to the risk of long-term complications. 
People may develop severe disease symptoms and be subjected to extensive investigations for many years before a proper diagnosis is achieved. Untreated coeliac disease may cause malabsorption, reduced quality of life, iron deficiency, osteoporosis, obstetric complications (stillbirth, intrauterine growth restriction, preterm birth, low birthweight, and small for gestational age), an increased risk of intestinal lymphomas and greater mortality. Coeliac disease is associated with some autoimmune diseases, such as diabetes mellitus type 1, thyroiditis, gluten ataxia, psoriasis, vitiligo, autoimmune hepatitis, dermatitis herpetiformis, primary sclerosing cholangitis, and more. Coeliac disease with "classic symptoms", which include gastrointestinal manifestations such as chronic diarrhea and abdominal distention, malabsorption, loss of appetite, and impaired growth, is currently the least common presentation form of the disease and predominantly affects small children, generally younger than two years of age. Coeliac disease with "non-classic symptoms" is the most common clinical type and occurs in older children (over two years old), adolescents and adults. It is characterized by milder or even absent gastrointestinal symptoms and a wide spectrum of non-intestinal manifestations that can involve any organ of the body, and very frequently may be completely asymptomatic both in children (at least in 43% of the cases) and adults. Following a lifelong gluten-free diet is the only medically-accepted treatment for people with coeliac disease. Rationale behind adoption of the diet: Non-coeliac gluten sensitivity Non-coeliac gluten sensitivity (NCGS) is described as a condition of multiple symptoms that improves when switching to a gluten-free diet, after coeliac disease and wheat allergy are excluded. People with NCGS may develop gastrointestinal symptoms, which resemble those of irritable bowel syndrome (IBS), or a variety of non-gastrointestinal symptoms. Gastrointestinal symptoms may include any of the following: abdominal pain, bloating, bowel habit abnormalities (either diarrhoea or constipation), nausea, aerophagia, gastroesophageal reflux disease, and aphthous stomatitis. A range of extra-intestinal symptoms, said to be the only manifestation of NCGS in the absence of gastrointestinal symptoms, have been suggested, but remain controversial. These include: headache, migraine, "brain fog", fatigue, fibromyalgia, joint and muscle pain, leg or arm numbness, tingling of the extremities, dermatitis (eczema or skin rash), atopic disorders such as asthma, rhinitis, other allergies, depression, anxiety, iron-deficiency anemia, folate deficiency or autoimmune diseases. NCGS has also been controversially implicated in some neuropsychiatric disorders, including schizophrenia, eating disorders, autism, peripheral neuropathy, ataxia and attention deficit hyperactivity disorder (ADHD). More than 20% of people with NCGS have an IgE-mediated allergy to one or more inhalants, foods or metals, the most common of which are mites, graminaceae, parietaria, cat or dog hair, shellfish and nickel. Approximately 35% of people with NCGS have other food intolerances, mainly lactose intolerance. The pathogenesis of NCGS is not yet well understood. For this reason, it is a controversial syndrome and some authors still question it.
There is evidence that not only gliadin (the main cytotoxic antigen of gluten), but also other proteins, named ATIs, which are present in gluten-containing cereals (wheat, rye, barley, and their derivatives) may have a role in the development of symptoms. ATIs are potent activators of the innate immune system. FODMAPs, especially fructans, are present in small amounts in gluten-containing grains and have been identified as a possible cause of some gastrointestinal symptoms in persons with NCGS. As of 2019, reviews have concluded that although FODMAPs may play a role in NCGS, they only explain certain gastrointestinal symptoms, such as bloating, but not the extra-digestive symptoms that people with NCGS may develop, such as neurological disorders, fibromyalgia, psychological disturbances, and dermatitis. After exclusion of coeliac disease and wheat allergy, the subsequent step for diagnosis and treatment of NCGS is to start a strict gluten-free diet to assess if symptoms improve or resolve completely. This may occur within days to weeks of starting a GFD, but improvement may also be due to a non-specific, placebo response. Recommendations may resemble those for coeliac disease, for the diet to be strict and maintained, with no transgression. The degree of gluten cross contamination tolerated by people with NCGS is not clear but there is some evidence that they can present with symptoms even after consumption of small amounts. It is not yet known whether NCGS is a permanent or a transient condition. A trial of gluten reintroduction to observe any reaction after 1–2 years of strict gluten-free diet might be performed. A subgroup of people with NCGS may not improve by eating commercially available gluten-free products, which are usually rich in preservatives and additives, because chemical additives (such as sulphites, glutamates, nitrates and benzoates) might have a role in evoking functional gastrointestinal symptoms of NCGS. These people may benefit from a diet with a low content of preservatives and additives. NCGS, which is possibly immune-mediated, now appears to be more common than coeliac disease, with prevalence rates between 0.5 and 13% in the general population. Rationale behind adoption of the diet: Wheat allergy People can also experience adverse effects of wheat as a result of a wheat allergy. Gastrointestinal symptoms of wheat allergy are similar to those of coeliac disease and non-coeliac gluten sensitivity, but there is a different interval between exposure to wheat and onset of symptoms. Other symptoms, such as dermal reactions like rashes or hyperpigmentation, may also occur in some people. Wheat allergy has a fast onset (from minutes to hours) after the consumption of food containing wheat and can include anaphylaxis. The management of wheat allergy consists of complete withdrawal of any food containing wheat and other gluten-containing cereals. Nevertheless, some people with wheat allergy can tolerate barley, rye or oats. Rationale behind adoption of the diet: Gluten ataxia Gluten ataxia is an autoimmune disease triggered by the ingestion of gluten. With gluten ataxia, damage takes place in the cerebellum, the balance center of the brain that controls coordination and complex movements like walking, speaking and swallowing, with loss of Purkinje cells. People with gluten ataxia usually present with gait abnormality or incoordination and tremor of the upper limbs. Gaze-evoked nystagmus and other ocular signs of cerebellar dysfunction are common.
Myoclonus, palatal tremor, and opsoclonus-myoclonus may also appear. Early diagnosis and treatment with a gluten-free diet can improve ataxia and prevent its progression. The effectiveness of the treatment depends on the elapsed time from the onset of the ataxia until diagnosis, because the death of neurons in the cerebellum as a result of gluten exposure is irreversible. Gluten ataxia accounts for 40% of ataxias of unknown origin and 15% of all ataxias. Less than 10% of people with gluten ataxia present any gastrointestinal symptom, yet about 40% have intestinal damage. Rationale behind adoption of the diet: As a popular diet Since the beginning of the 21st century, the gluten-free diet has become the most popular fad diet in the United States and other countries. Clinicians worldwide have been challenged by an increasing number of people who have neither coeliac disease nor wheat allergy, but who have digestive or extra-digestive symptoms which improve after removing wheat/gluten from the diet. Many of these persons began a gluten-free diet on their own, without having been previously evaluated. Another reason that contributed to this trend was the publication of several books that demonize gluten and point to it as a cause of type 2 diabetes, weight gain and obesity, and a broad list of diseases ranging from depression and anxiety to arthritis and autism. The book that has had the most impact is Grain Brain: The Surprising Truth About Wheat, Carbs, and Sugar—Your Brain's Silent Killers, by the American neurologist David Perlmutter, published in September 2013. Another book that has had great impact is Wheat Belly: Lose the Wheat, Lose the Weight, and Find Your Path Back to Health, by the cardiologist William Davis, which refers to wheat as a "chronic poison" and became a New York Times bestseller within a month of publication in 2011. The gluten-free diet has been advocated and followed by many celebrities seeking to lose weight, such as Miley Cyrus, Gwyneth Paltrow, and Kourtney Kardashian, and is used by some professional athletes, who believe the diet can improve energy and health. It became popular in the US, as the popularity of low-carbohydrate diets faded. Estimates suggest that in 2014, 30% of people in the US and Australia were consuming gluten-free foods, a growing number; surveys suggested that by 2016 approximately 100 million Americans would be consuming gluten-free products. Data from a 2015 Nielsen survey of 30,000 adults in 60 countries around the world indicate that 21% of people prefer to buy gluten-free foods, with interest highest among the younger generations. In the US, it was estimated that more than half of people who buy foods labeled gluten-free do not have a clear reaction to gluten, and they do so "because they think it will help them lose weight, because they seem to feel better or because they mistakenly believe they are sensitive to gluten." Although gluten is highly immunologically reactive and humans appear not to have evolved to digest it well, a gluten-free diet is not a healthier option for the general population, other than people with gluten-related disorders or other associated conditions which improve with a gluten-free diet in some cases, such as irritable bowel syndrome and certain autoimmune and neurological disorders. There is no published experimental evidence to support that the gluten-free diet contributes to weight loss. In a May 2015 review published in Gastroenterology, Fasano et al.
conclude that, although there is an evident "fad component" to the recent rise in popularity of the gluten-free diet, there is also growing and unquestionable evidence of the existence of non-coeliac gluten sensitivity.In some cases, the popularity of the gluten-free diet may harm people who must eliminate gluten for medical reasons. For example, servers in restaurants may not take dietary requirements seriously, believing them to be merely a preference. This could prevent appropriate precautions in food handling to prevent gluten cross-contamination. Medical professionals may also confuse medical explanations for gluten intolerance with patient preference. On the other hand, the popularity of the gluten-free diet has increased the availability of commercial gluten-free replacement products and gluten-free grains.Gluten-free commercial replacement products, such as gluten-free cakes, are more expensive than their gluten-containing counterparts, so their purchase adds a financial burden. They are also typically higher in calories, fat, and sugar, and lower in dietary fibre. In less developed countries, wheat can represent an important source of protein, since it is a substantial part of the diet in the form of bread, noodles, bulgur, couscous, and other products.In the British National Health Service, gluten-free foods have been supplied on prescription. For many patients, this meant at no cost. When it was proposed to alter this in 2018, the Department of Health and Social Care made an assessment of the costs and benefits. The potential annual financial saving to the service was estimated at £5.3 million, taking into account the reduction in cost spending and the loss of income from prescription charges. The proposed scenario was actually that patients could still be prescribed gluten-free breads and mixes but would have to buy any other gluten-free products themselves. The savings would only amount to £700,000 a year. Local initiatives by clinical commissioning groups had already reduced the cost of gluten-free foods to the NHS by 39% between 2015 and 2017.Healthcare professionals recommend against undertaking a gluten-free diet as a form of self-diagnosis, because tests for coeliac disease are reliable only if the person has been consuming gluten recently. There is a consensus in the medical community that people should consult a physician before going on a gluten-free diet, so that a medical professional can accurately test for coeliac disease or any other gluten-induced health issues.Although popularly used as an alternative treatment for people with autism, there is no good evidence that a gluten-free diet is of benefit in reducing the symptoms of autism. Rationale behind adoption of the diet: Research In a 2013 double-blind, placebo-controlled challenge (DBPC) by Biesiekierski et al. in a few people with irritable bowel syndrome, the authors found no difference between gluten or placebo groups and the concept of non-celiac gluten sensitivity as a syndrome was questioned. 
Nevertheless, this study had design flaws and an incorrect selection of participants, and the reintroduction of both gluten and whey protein probably had a similar nocebo effect in all participants, which could have masked the true effect of gluten/wheat reintroduction. In a 2015 double-blind placebo cross-over trial, small amounts of purified wheat gluten triggered gastrointestinal symptoms (such as abdominal bloating and pain) and extra-intestinal manifestations (such as foggy mind, depression and aphthous stomatitis) in people with self-reported non-celiac gluten sensitivity. Nevertheless, it remains elusive whether these findings specifically implicate gluten or other proteins present in gluten-containing cereals. In a 2018 double-blind, crossover research study on 59 persons on a gluten-free diet with challenges of gluten, fructans or placebo, intestinal symptoms (specifically bloating) were borderline significantly higher after challenge with fructans, in comparison with gluten proteins (P=0.049). Although the differences between the three interventions were very small, the authors concluded that fructans (the specific type of FODMAP found in wheat) are more likely to be the cause of gastrointestinal symptoms of non-celiac gluten sensitivity, rather than gluten. On the basis of this study, experts recommend a low-FODMAP diet instead of a gluten-free diet for patients suffering from functional gastrointestinal symptoms such as bloating. In addition, the fructans used in the study were extracted from chicory root, so it remains to be seen whether the wheat fructans produce the same effect. Eating gluten-free: A gluten-free diet is a diet that strictly excludes gluten, proteins present in wheat (and all wheat varieties such as spelt and kamut), barley, rye, oats, and derivatives of these grains such as malt and triticale, and foods that may include them or that have shared transportation or processing facilities with them. The inclusion of oats in a gluten-free diet remains controversial. Oat toxicity in people with gluten-related disorders depends on the oat cultivar consumed because the immunoreactivities of toxic prolamins are different among oat varieties. Furthermore, oats are frequently cross-contaminated with the other gluten-containing cereals. Pure oat (labelled as "pure oat" or "gluten-free oat") refers to oats uncontaminated with any of the other gluten-containing cereals. Some cultivars of pure oat could be a safe part of a gluten-free diet, requiring knowledge of the oat variety used in food products for a gluten-free diet. Nevertheless, the long-term effects of pure oat consumption are still unclear and further studies identifying the cultivars used are needed before making final recommendations on their inclusion in the gluten-free diet. Other grains, although gluten-free in themselves, may contain gluten by cross-contamination with gluten-containing cereals during grain harvesting, transporting, milling, storing, processing, handling or cooking. Processed foods commonly contain gluten as an additive (as emulsifiers, thickeners, gelling agents, fillers, and coatings), so they would need specific labeling. Unexpected sources of gluten are, among others, processed meat, vegetarian meat substitutes, reconstituted seafood, stuffings, butter, seasonings, marinades, dressings, confectionery, candies, and ice cream. Eating gluten-free: Cross-contamination in the home is also a consideration for those who have gluten-related disorders.
There can be many sources of cross-contamination, for example when family members prepare gluten-free and gluten-containing foods on the same surfaces (countertops, tables, etc.) or share utensils that have not been cleaned after being used to prepare gluten-containing foods (cutting boards, colanders, cutlery, etc.), kitchen equipment (toaster, cupboards, etc.) or certain packaged foods (butter, peanut butter, etc.). Eating gluten-free: Restaurants prove to be another source of cross-contamination for those following a strict gluten-free diet. A study conducted by Columbia University Medical Center found that 32% of foods labeled gluten-free at restaurants contain more than 20 parts per million of gluten, meaning that they contain enough gluten that they are no longer considered gluten-free by the Codex Alimentarius. Cross-contamination occurs in these areas frequently because of a general lack of knowledge about the needed level of caution and the prevalence of gluten in restaurant kitchens. If cooks are unaware of the severity of their guests' dietary restrictions or of the important practices needed to limit cross-contamination, they can unknowingly deliver contaminated food. However, some restaurants use a training program for their employees to educate them about the gluten-free diet. The accuracy of the training varies. One resource to find these safer restaurants is an app and website called "Find Me Gluten Free" that allows people following a gluten-free diet to rate the safety of different restaurants from their point of view and describe their experience to help future customers. Eating gluten-free: Easily locating gluten-free items is one of the main difficulties in following a gluten-free diet. To assist in this process, many restaurants and grocery stores choose to label food items. Restaurants often add a gluten-free section to their menu, or specifically mark gluten-free items with a symbol of some kind. Grocery stores often have a gluten-free aisle, or they will attach labels on the shelf underneath gluten-free items. Though the food is labeled gluten-free in this way, it does not necessarily mean that the food is safe for those with gluten-related disorders, as a compilation of studies suggests. Medications and dietary supplements are made using excipients that may contain gluten. Eating gluten-free: The gluten-free diet includes naturally gluten-free food, such as meat, fish, seafood, eggs, milk and dairy products, nuts, legumes, fruit, vegetables, potatoes, pseudocereals (in particular amaranth, buckwheat, chia seed, quinoa), only certain cereal grains (corn, rice, sorghum), minor cereals (including fonio, Job's tears, millet, teff, called "minor" cereals as they are "less common and are only grown in a few small regions of the world"), some other plant products (arrowroot, mesquite flour, sago, tapioca) and products made from these gluten-free foods. Most Indian cuisine, particularly South Indian cuisine, is for the most part inherently gluten-free, and in fact most Indian vegetarian options are inherently vegan. Eating gluten-free: Risks An unbalanced selection of food and an incorrect choice of gluten-free replacement products may lead to nutritional deficiencies. Replacing flour from wheat or other gluten-containing cereals with gluten-free flours in commercial products may lead to a lower intake of important nutrients, such as iron and B vitamins, and a higher intake of sugars and saturated fats.
Some gluten-free commercial replacement products are not as enriched or fortified as their gluten-containing counterparts, and often have greater lipid/carbohydrate content. Children especially often over-consume these products, such as snacks and biscuits. These nutritional complications can be prevented by a correct dietary education. Pseudocereals (quinoa, amaranth, and buckwheat) and some minor cereals are healthy alternatives to these prepared products and have higher biological and nutritional value. Advances towards higher nutrition-content gluten-free bakery products, improved for example in terms of fiber content and glycemic index, have been made by not relying exclusively on corn starch or other starches to substitute for flour. To this end, for example, the dietary fibre inulin (which acts as a prebiotic) or quinoa or amaranth wholemeal have been used as substitutes for part of the flour. Similarly, xanthan gum can be used in up to gram quantities per serving in some gluten-free baked goods and can be fermented by specific microbiomes in the gastrointestinal tract. Such substitution has also been found to yield improved bread crust and texture. It is recommended that anyone embarking on a gluten-free diet check with a registered dietitian to make sure they are getting the required amount of key nutrients like iron, calcium, fiber, thiamin, riboflavin, niacin and folate. Vitamins often contain gluten as a binding agent. Experts have advised that it is important to always read the content label of any product that is intended to be swallowed. Up to 30% of people with known coeliac disease continue to have or redevelop symptoms. Also, a lack of symptoms or negative blood antibody levels are not reliable indicators of intestinal recovery. Several studies show an incomplete recovery of the small bowel despite a strict gluten-free diet, and about 79% of such people have persistent villous atrophy. This lack of recovery is mainly caused by inadvertent exposure to gluten. People with poor basic education and understanding of the gluten-free diet often believe that they are strictly following the diet, but are making regular errors. In addition, some people often deliberately continue eating gluten because of limited availability, inferior taste, higher price, and inadequate labelling of gluten-free products. Poor compliance with the regimen is also influenced by age at diagnosis (adolescents), ignorance of the consequences of non-strict treatment and certain psychological factors. Ongoing gluten intake can cause severe disease complications, such as various types of cancers (both intestinal and extra-intestinal) and osteoporosis. Regulation and labels: The term gluten-free is generally used to indicate a supposed harmless level of gluten rather than a complete absence. The exact level at which gluten is harmless is uncertain and controversial. A 2008 systematic review tentatively concluded that consumption of less than 10 mg of gluten per day is unlikely to cause histological abnormalities, although it noted that few reliable studies had been done. Regulation of the label gluten-free varies by country. Most countries derive key provisions of their gluten-free labelling regulations from the Codex Alimentarius international standard relating to the labelling of products as gluten-free. It only applies to foods that would normally contain gluten. Gluten-free is defined as 20 ppm (= 20 mg/kg) or less.
It categorizes gluten-free food as: food that is gluten-free by composition, and food that has become gluten-free through special processing. Regulation and labels: A third category, reduced gluten content, covers food products with between 20 and 100 ppm of gluten; what counts as reduced gluten content is left up to individual nations to define more specifically. The Codex Standard suggests the enzyme-linked immunosorbent assay (ELISA) R5 Mendez method for indicating the presence of gluten, but allows for other relevant methods, such as DNA-based ones. The Codex Standard specifies that the gluten-free claim must appear in the immediate proximity of the name of the product, to ensure visibility. Regulation and labels: There is no general agreement on the analytical method used to measure gluten in ingredients and food products. The ELISA method was designed to detect ω-gliadins, but it suffered from the setback that it lacked sensitivity for barley prolamins. The use of highly sensitive assays is mandatory to certify gluten-free food products. The European Union, World Health Organization, and Codex Alimentarius require reliable measurement of the wheat prolamins (gliadins) rather than all wheat proteins. Regulation and labels: Australia The Australian government recommends that: food labelled gluten-free contain no detectable gluten (<3 ppm) and no oats or their products, nor cereals containing gluten that have been malted, or their products; food labelled low gluten contain no more than 20 mg of gluten per 100 g of the food. Brazil All food products must be clearly labelled as to whether they contain gluten or are gluten-free. Since April 2016, the declaration of the possibility of cross-contamination is mandatory when the product does not intentionally add any allergenic food or its derivatives, but the Good Manufacturing Practices and allergen control measures adopted are not sufficient to prevent the presence of accidental trace amounts. When a product contains the warning of cross-contamination with wheat, rye, barley, oats and their hybridised strains, the warning "contains gluten" is mandatory. The law does not establish a gluten threshold for the declaration of its absence. Regulation and labels: Canada Health Canada considers that foods containing levels of gluten not exceeding 20 ppm as a result of contamination meet the health and safety intent of section B.24.018 of the Food and Drug Regulations when a gluten-free claim is made. Any intentionally added gluten, even at low levels, must be declared on the packaging, and a gluten-free claim would then be considered false and misleading. Labels for all food products sold in Canada must clearly identify the presence of gluten if it is present at a level greater than 10 ppm. Regulation and labels: European Union The European Commission delineates the categories as: gluten-free, 20 ppm or less of gluten; very low gluten, foodstuffs with 20–100 ppm of gluten. All foods containing gluten as an ingredient must be labelled accordingly, as gluten is defined as one of the 14 recognised EU allergens. Regulation and labels: United States Until 2012 anyone could use the gluten-free claim with no repercussion. In 2008, Wellshire Farms chicken nuggets labelled gluten-free were purchased and samples were sent to a food allergy laboratory where they were found to contain gluten. After this was reported in the Chicago Tribune, the products continued to be sold. The manufacturer has since replaced the batter used in its chicken nuggets. The U.S. 
first addressed gluten-free labelling in the 2004 Food Allergen Labeling and Consumer Protection Act (FALCPA). The Alcohol and Tobacco Tax and Trade Bureau published interim rules and proposed mandatory labelling for alcoholic products in 2006. The FDA issued its Final Rule on August 5, 2013. Regulation and labels: When a food producer voluntarily chooses to use a gluten-free claim for a product, the food bearing the claim in its labelling may not contain: an ingredient that is a gluten-containing grain; an ingredient that is derived from a gluten-containing grain and has not been processed to remove gluten; or an ingredient that is derived from a gluten-containing grain and has been processed to remove gluten, but that results in the presence of 20 ppm or more gluten in the food. Any food product claiming to be gluten-free and also bearing the term "wheat" in its ingredient list or in a separate "Contains wheat" statement must also include the language "*the wheat has been processed to allow this food to meet the FDA requirements for gluten-free foods," in close proximity to the ingredient statement. Any food product that inherently does not contain gluten may use a gluten-free label where any unavoidable presence of gluten in the food bearing the claim in its labelling is below 20 ppm gluten.
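To make the regulatory thresholds quoted above more concrete, the sketch below converts them from ppm (mg of gluten per kg of food) into the absolute amount of gluten in a portion. This is illustrative arithmetic only; the serving size and the helper function are assumptions, not values from the regulations themselves.

```python
# Illustrative sketch (not from the source): converting the ppm thresholds
# quoted above (ppm = mg of gluten per kg of food) into the absolute amount
# of gluten in a given portion. The serving size is a hypothetical example.

THRESHOLDS_PPM = {
    "Codex / EU / FDA 'gluten-free'": 20,            # 20 ppm = 20 mg/kg
    "EU 'very low gluten' (upper bound)": 100,
    "Australia 'gluten-free' (detection limit)": 3,
}

def gluten_mg(ppm: float, portion_g: float) -> float:
    """Gluten in milligrams for a portion, given a concentration in ppm (mg/kg)."""
    return ppm * portion_g / 1000.0

portion_g = 250  # hypothetical daily portion of a single labelled product, in grams
for label, ppm in THRESHOLDS_PPM.items():
    print(f"{label}: up to {gluten_mg(ppm, portion_g):.1f} mg of gluten in {portion_g} g")

# At exactly 20 ppm, 250 g of food carries at most 5 mg of gluten, which is
# below the <10 mg/day level discussed in the 2008 systematic review above.
```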
**Skateboard** Skateboard: A skateboard is a type of sports equipment used for skateboarding. It is usually made of a specially designed 7–8-ply maple plywood deck and has polyurethane wheels attached to the underside by a pair of skateboarding trucks. Skateboard: The skateboarder moves by pushing with one foot while the other foot remains balanced on the board, or by pumping one's legs in structures such as a bowl or half pipe. A skateboard can also be used by standing on the deck while on a downward slope and allowing gravity to propel the board and the rider. If the rider's leading foot is their right foot, they are said to ride "goofy".The two main types of skateboards are the longboard and shortboard. The shape of the board is also important: the skateboard must be concaved to perform tricks. History: Skateboarding started in California in the 1950s. The first skateboards were made from roller skates attached to a board. Skateboarding gained in popularity because of surfing: in fact, skateboarding was initially referred to as "sidewalk surfing". The very first skateboards were handmade from wooden boxes and planks by individuals. Companies started manufacturing skateboards in 1959, as the sport became more popular. In postwar America, society was carefree with children commonly playing in the streets.Skateboarding is a very individual activity, and it continues to evolve. Since 2000, due to attention in the media and products like skateboarding video games, children's skateboards and commercialization, skateboarding has been pulled into the mainstream. As more interest and money has been invested into skateboarding, more skate parks, and better skateboards have become available. In addition, the continuing interest has motivated skateboarding companies to keep innovating and inventing new things. Skateboarding appeared for the first time in the 2020 Summer Olympics. Parts: Deck "Long" boards are usually over 36 inches (91 cm) long. Plastic "penny" boards are typically about 22 inches (56 cm) long. Some larger penny boards over 27 inches (69 cm) long are called "nickel" boards.The longboard, a common variant of the skateboard, is used for higher speed and rough surface boarding, and they are much more expensive. "Old school" boards (those made in the 1970s–80s or modern boards that mimic their shape) are generally wider and often have only one kicktail. Variants of the 1970s often have little or no concavity. Parts: Wheels The wheels allow for movement on the skateboard and helps determine the speed while riding. There are typically four wheels on a skateboard that are attached to the trucks. Ranging in size from around 48mm to around 60mm, smaller wheels are lighter in weight and are used for shorter distances and tricks. The wheels are typically made of polyurethane (PU) and come in different grades of PU. Higher-grade PU is more durable and provides a smoother ride, while lower-grade PU is more affordable but wears out faster. Larger wheels are heavier in weight, which are better for maintaining speed and longer distances. Wheels that are larger than 60mm are typically used for longboards. Parts: Trucks The metal parts known as skateboard trucks are what hold a skateboard's wheels to the deck. They are made up of a hanger that holds the axle and wheels and a baseplate that is mounted to the board. 
The hanger and baseplate are joined by a kingpin, allowing the truck to swivel and turn. Trucks for skateboards come in a variety of forms and sizes and can be modified to the rider's preferences. The truck's height can have an impact on the board's stability and turning ability. The truck's width should equal the width of the deck. Parts: To manage the looseness or tightness of the trucks, the kingpin's tightness can also be adjusted. This is a matter of taste and has an impact on the board's stability and ability to turn. Parts: Bearings Each skateboard wheel is mounted on its axle via two ball bearings. With few exceptions, the bearings are the industrial standard "608" size, with a bore of 8 or 10 mm (0.315 or 0.394 inches, depending on the axle), an outer diameter of 22 mm (0.866 inches), and a width of 7 mm (0.276 inches). These are usually made of steel, though silicon nitride, a high-tech ceramic, is sometimes used. Many skateboard bearings are graded according to the ABEC scale. The scale starts with ABEC 1, which has the least precise manufacturing tolerance, followed by 3, 5, 7, and ABEC 9, which has the strictest tolerance. Bearing performance is determined by how well maintained the bearings are. Maintenance on bearings includes periodically cleaning and lubricating them. Optional components: Risers/wedges Wedges can be used to change the turning characteristics of a truck. Skateboard multi-tool While not part of a skateboard, an all-in-one skateboard tool capable of mounting and removing trucks and wheels and adjusting truck kingpins is commonly sold by skate shops. Deck rails Deck rails are thin plastic strips usually screwed into the bottom section of a skateboard to decrease friction while performing slide tricks and to protect the board’s graphic from damage.
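The wheel sizes mentioned earlier (roughly 48 mm to 60 mm) translate directly into how far the board travels per wheel revolution, which is one way to see why larger wheels hold speed better over distance. The sketch below is plain illustrative arithmetic; only the diameters come from the text.

```python
# Minimal sketch: distance travelled per wheel revolution for the wheel
# diameters mentioned above. Purely illustrative arithmetic, not an
# industry formula.
import math

for diameter_mm in (48, 54, 60):
    rollout_mm = math.pi * diameter_mm  # circumference = distance covered per revolution
    print(f"{diameter_mm} mm wheel: {rollout_mm:.0f} mm per revolution")

# A 60 mm wheel covers about 25% more ground per revolution than a 48 mm wheel,
# so at the same spin rate it carries the rider farther and holds speed longer.
```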
**Canadian Shift** Canadian Shift: The Canadian Shift is a chain shift of vowel sounds found in Canadian English, beginning among speakers in the last quarter of the 20th century and most significantly involving the lowering and backing of the front vowels. This lowering and backing is structurally identical to the California Shift reported in California English and some younger varieties of Western New England English, Western American English, Pacific Northwest English, and Midland American English; whether the similarly structured shifts in these regional dialects have a single unified cause or not is still not entirely clear. Similar, though not identical, changes to the short front vowels are attested in many English dialects as of 21st-century research, including RP, Indian English, Hiberno-English, South African English, and Australian English (the last two dialects traditionally defined by a chain shift moving in the opposite direction of the Canadian Shift). Canadian Shift: The back and downward movement of all the front vowels was first noted in some California speakers in 1987, then in some Canadian speakers in 1995 (initially reported as two separate phenomena), and later documented among some speakers in Western and Midland U.S. cities born after 1980, based on impressionistic analysis. Assuming the similar chain shifts found in Canada and various parts of the U.S. are a phenomenon with a single common origin, a variety of names have been proposed for this trans-regional chain shift, including the Third Dialect Shift, Elsewhere Shift, Low Back Merger Shift, Short Front Vowel Shift, and North American Shift. Canadian Shift in Canada: The shift involves the lowering of the tongue in the front lax vowels /æ/ (the short-a of trap), /ɛ/ (the short-e of dress), and /ɪ/ (the short-i of kit). Canadian Shift in Canada: It is triggered by the cot–caught merger: /ɑ/ (as in cot) and /ɔ/ (as in caught) merge as [ɒ], a low back rounded vowel. As each space opens up, the next vowel along moves into it. Thus, the short a /æ/ retracts from a near-low front position to a low central position, with a quality similar to the vowel heard in Northern England [a]. The retraction of /æ/ was independently observed in Vancouver and is more advanced for Ontarians and women than for people from the Prairies or Atlantic Canada and men. /æ/ also retracts more before /l/ than other consonants. In Toronto, /æ/-retraction is inhibited by a following nasal, but it isn't in Vancouver.However, scholars disagree on the behaviour of /ɛ/ and /ɪ/: According to Clarke et al. (1995), who impressionistically studied the speech of a few young Ontarians, /ɛ/ and /ɪ/ tend to lower in the direction of [æ] and [ɛ], respectively. Hence, bet and bit tend to sound, respectively, like bat and bet as pronounced by a speaker without the shift. Canadian Shift in Canada: Labov et al. (2006), through acoustic analysis of 33 subjects from all over the country, noted a backward and downward movement of /ɛ/ in apparent time in all of Canada except the Atlantic Provinces. No movement of /ɪ/ was detected. Canadian Shift in Canada: Boberg (2005) considers the primary movement of /ɛ/ and /ɪ/ to be retraction, at least in Montreal. He studied a diverse range of English-speaking Montrealers and found that younger speakers had a significantly retracted /ɛ/ and /ɪ/ compared with older speakers but did not find that the vowels were significantly lower. 
A small group of young people from Ontario were also studied, and there too retraction was most evident. Under this scenario, a similar group of vowels (short front) are retracting in a parallel manner, with /ɛ/ and /ʌ/ approaching each other. Therefore, with Boberg's results, bet approaches but remains different from but, and bit sounds different but remains distinct. Canadian Shift in Canada: Hagiwara (2006), through acoustic analysis, noted that /ɛ/ and /ɪ/ do not seem to be lowered in Winnipeg, although the lowering and retraction of /æ/ has caused a redistribution of backness values for the front lax vowels. Canadian Shift in Canada: Sadlier-Brown and Tamminga (2008) studied a few speakers from Vancouver and Halifax and found the shift to be active in Halifax as well, although not as advanced as in Vancouver. For these speakers, the movement of /ɛ/ and /ɪ/ in apparent time was diagonal, and Halifax had /æ/ diagonal movement too; in Vancouver, however, the retraction of /æ/ was not accompanied by lowering.Due to the Canadian Shift, the short-a and the short-o are shifted in opposite directions to that of the Northern Cities Shift, found across the border in the Inland Northern U.S. and Western New England, which is causing these two dialects to diverge: the Canadian short-a is very similar in quality to the Inland Northern short-o. For example, the production [map] would be recognized as map in Canada but mop in the Inland North. U.S. Third Dialect Shifts: In the United States, the cot-caught merger is widespread across many regions of the United States, particularly in the Midland and West, but speakers with the merger are often not affected by the shift, possibly due to the fact that the merged vowel is less rounded, less back and slightly lower than the Canadian vowel. This means that there is less space for the retraction of the vowel /æ/, which is a key feature of the Canadian Shift. However, there are many regions of the United States where the Canadian Shift can be observed. U.S. Third Dialect Shifts: California The California Shift in progress in California English contains features similar to the Canadian Shift, including the lowering/retraction of the front lax vowels. However, the retraction of /æ/ has happened in California even though the Californian /ɑ/ may be more centralized and not as rounded as the Canadian /ɒ/, leading some scholars suggest that the two phenomena are distinct, while others suggest that it was backed "just enough" to allow the shift to happen. U.S. Third Dialect Shifts: Other Western States The Atlas of North American English finds that, in the Western United States, one out of every four speakers exhibits the Canadian Shift, as defined quantitatively by Labov et al. based on the formant values for /æ/, /ɑ/, and /ɛ/. More recent data, however, suggests that the shift is widespread among younger speakers throughout the West. U.S. Third Dialect Shifts: Stanley (2020) found evidence of the shift in Cowlitz County, Washington, where the formant trajectories of /æ/, /ɛ/, and /ɪ/ flattened, causing the onset of /æ/ to lower and slightly retract, the onset of /ɛ/ to lower and retract, and the onset of /ɪ/ to retract. However, the speakers in the study tended to pronounce /ɑ/ and /ɔ/ "close" but distinct, with /ɔ/ being further back and more diphthongal. Furthermore, this state of near merger had persisted for all 4 generations in the study. 
An explanation for this is that while the merger itself was not the trigger for the shift, the backing of /ɑ/ leading to the near-merger of /ɑ/ and /ɔ/ was the trigger. U.S. Third Dialect Shifts: The Midlands Durian (2008) found evidence of the Canadian Shift in the vowel systems of men born in 1965 and later in Columbus, Ohio. This is located in the U.S. Midland. The Midland dialect is a mix of Northern and Southern dialect features. In Columbus, /ʌ/ is undergoing fronting without lowering, while still remaining distinct from the space occupied by /ɛ/. At the same time, historical /ɒ/ (the vowel in "lot") is merged with the /ɑ/ class, which is raising and backing towards /ɔ/, such that the two are merged or "close." This allows a "free space" for the retraction of /æ/, which is also suggested as a possibility for Western U.S. dialects by Boberg (2005). In Columbus, the Canadian Shift closely resembles the version found by Boberg (2005) in Montreal, where /ɑ/ and /ɔ/ are either merged or "close," and /æ/, /ɛ/, and /ɪ/ show retraction of the nucleus without much lowering (with /æ/ also showing "rising diphthong" behavior). However, the retraction of /ɪ/ was not found among all speakers and is more mild among the speakers that do show it than the retraction of /ɛ/ among those speakers. Additionally, the outcome of low back merger-like behavior is more like the California Shift outcome noted above than the rounded variant found in most of Canada. U.S. Third Dialect Shifts: Western Pennsylvania In Pittsburgh, another region where the cot-caught merger is prevalent, the mouth vowel /aʊ/ is usually a monophthong that fills the lower central space, which prevents retracting. However, as /aʊ/ monophthongization declines, some younger speakers are retracting /æ/. U.S. Third Dialect Shifts: NCVS Reversal As noted above, the first two stages of the Northern Cities Vowel Shift (NCVS) shift /æ/ and /ɑ/ in the exact opposite direction of the Canadian shift. However, the NCVS is gaining stigma among younger speakers, which can trigger the lowering of /æ/ and the backing of /ɑ/. In fact, Savage et al. (2015) found that, while the raising of /æ/ and fronting of /ɑ/ are stigmatized, the lowering and backing of /ɛ/, a feature of both shifts, is considered prestigious. Nesbitt et al. (2019) say that the Canadian shift may be replacing the NCVS.Jacewicz (2011) found the shift in parts of Wisconsin, where, despite the NCVS, /æ/ is lowered and backed, and /ɑ/ raises, backs, and diphthongizes to approach /ɔ/, although, like in Columbus and in Cowlitz County, the merger isn't actually complete for most of the speakers in the study, and the lowering of /æ/ is more linked with the raising of /ɑ/. In addition, /ɛ/ is lowered and backed which is alignment with both the NCVS and the Canadian shift. U.S. Third Dialect Shifts: The South Jacewicz (2011) also found evidence for the shift in parts of North Carolina, where the vowels /ɪ/, /ɛ/, and /æ/ lower and monophthongize, undoing the Southern shift. /ɑ/ raises, backs and diphthongizes to approach /ɔ/, although the low back merger is not complete for any of the speakers in the study. U.S. Third Dialect Shifts: In the ANAE, the speech of Atlanta, Georgia is classified as a typologically Midland dialect because it had already lacked the monopthongization of /aɪ/. 
However, it appears that the monophthongization of /aɪ/ was a feature of Atlantan speech in the early 20th century, and that much younger speakers have undone the reversal of the front lax and tense vowels that is part of the Southern shift, retracted /ɪ/, /ɛ/, and /æ/, and have a near merger of /ɑ/ and /ɔ/.
**Strontium oxalate** Strontium oxalate: Strontium oxalate is a compound with the chemical formula SrC2O4. Strontium oxalate can exist either in a hydrated form (SrC2O4•nH2O) or as the acidic salt of strontium oxalate (SrC2O4•mH2C2O4•nH2O). Use in pyrotechnics: With the addition of heat, strontium oxalate will decompose based on the following reaction: SrC2O4 → SrO + CO2 + CO. Strontium oxalate is a good agent for use in pyrotechnics since it decomposes readily with the addition of heat. When it decomposes into strontium oxide, it will produce a red color. Since this reaction produces carbon monoxide, which can undergo a further reduction with magnesium oxide, strontium oxalate is an excellent red color producing agent in the presence of magnesium. If it is not in the presence of magnesium, strontium carbonate has been found to be a better option to produce an even greater effect.
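A worked mass balance for the decomposition reaction above can make the stoichiometry concrete. The atomic masses are standard values; the 10 g sample size is simply an example quantity, not a figure from the article.

```python
# Worked stoichiometry for SrC2O4 -> SrO + CO2 + CO, using standard atomic
# masses (g/mol). The 10 g sample is an arbitrary example quantity.
ATOMIC_MASS = {"Sr": 87.62, "C": 12.011, "O": 15.999}

M_SrC2O4 = ATOMIC_MASS["Sr"] + 2 * ATOMIC_MASS["C"] + 4 * ATOMIC_MASS["O"]  # ~175.6 g/mol
M_SrO    = ATOMIC_MASS["Sr"] + ATOMIC_MASS["O"]                              # ~103.6 g/mol

sample_g = 10.0
moles = sample_g / M_SrC2O4  # 1 mol oxalate -> 1 mol SrO (+ 1 mol CO2 + 1 mol CO)
print(f"{sample_g} g SrC2O4 = {moles:.4f} mol")
print(f"-> {moles * M_SrO:.2f} g SrO, the red-colour-producing oxide")
```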
**Bánh mì** Bánh mì: In Vietnamese cuisine, bánh mì or banh mi (, ; Vietnamese: [ɓǎjŋ̟ mì], 'bread') is a short baguette with thin, crisp crust and soft, airy texture. It is often split lengthwise and filled with savory ingredients like a submarine sandwich and served as a meal, called bánh mì thịt. Plain bánh mì is also eaten as a staple food. Bánh mì: A typical Vietnamese roll or sandwich is a fusion of meats and vegetables from native Vietnamese cuisine such as chả lụa (Vietnamese sausage), coriander leaf (cilantro), cucumber, pickled carrots, and pickled daikon combined with condiments from French cuisine such as pâté, along with red chili and mayonnaise. However, a wide variety of popular fillings are used, from xíu mại (a Chinese cuisine) to even ice cream. In Vietnam, bread rolls and sandwiches are typically eaten for breakfast or as a snack. Bánh mì: The baguette was introduced to Vietnam by the French in the mid-19th century, during the Nguyễn dynasty, and became a staple food by the early 20th century. In the 1950s, a distinctly Vietnamese style of sandwich developed in Saigon, becoming a popular street food, also known as bánh mì Sài Gòn ('Saigon sandwich' or 'Saigon-style bánh mì'). Following the Vietnam War, overseas Vietnamese popularized the bánh mì sandwich in countries such as Australia, Canada and the United States. In these countries, they are commonly sold in Asian bakeries. Terminology: In Vietnamese, the word bánh mì is derived from bánh (which can refer to many kinds of food, primarily baked goods, including bread) and mì (wheat). It may also be spelled bánh mỳ in northern Vietnam. Taken alone, bánh mì means any kind of bread, but it could refer to the Vietnamese baguette, or the sandwich made from it. To distinguish the un-filled bread from the sandwich with fillings, the term bánh mì không ("plain bread") can be used. To distinguish the Vietnamese-style bread from other kinds of bread, the term bánh mì Sài Gòn ("Saigon-style bread") or bánh mì Việt Nam ("Vietnam-style bread") can be used. Terminology: A folk etymology claims that the word bánh mì is a corruption of the French pain de mie, meaning soft, white bread. However, bánh (or its Nôm form, 餅) has referred to rice cakes and other pastries since as early as the 13th century, long before French contact. History: The word bánh mì, meaning "bread", is attested in Vietnamese as early as the 1830s, in Jean-Louis Taberd's dictionary Dictionarium Latino-Annamiticum. The French introduced Vietnam to the baguette, along with other baked goods such as pâté chaud, in the 1860s, at the start of their imperialism in Vietnam. Northern Vietnamese initially called the baguette bánh tây, literally "Western bánh", while Southern Vietnamese called it bánh mì, "wheat bánh". Nguyễn Đình Chiểu mentions the baguette in his 1861 poem "Văn tế nghĩa sĩ Cần Giuộc". Due to the price of imported wheat at the time, French baguettes and sandwiches were considered a luxury. During World War I, an influx of French soldiers and supplies arrived. At the same time, disruptions of wheat imports led bakers to begin mixing in inexpensive rice flour (which also made the bread fluffier). As a result, it became possible for ordinary Vietnamese to enjoy French staples such as bread. Many shops baked twice a day, because bread tends to go stale quickly in the hot, humid climate of Vietnam. Baguettes were mainly eaten for breakfast with some butter and sugar. 
History: Until the 1950s, sandwiches hewed closely to French tastes, typically a jambon-beurre moistened with a mayonnaise or liver pâté spread. The 1954 Partition of Vietnam sent over a million migrants from North Vietnam to South Vietnam, transforming Saigon's local cuisine. Among the migrants were Lê Minh Ngọc and Nguyễn Thị Tịnh, who opened a small bakery named Hòa Mã in District 3. In 1958, Hòa Mã became one of the first shops to sell bánh mì thịt. Around this time, another migrant from the North began selling chả sandwiches from a basket on a mobylette, and a stand in Gia Định Province (present-day Phú Nhuận District) began selling phá lấu sandwiches. Some shops stuffed sandwiches with inexpensive Cheddar cheese, which came from French food aid that migrants from the North had rejected. Vietnamese communities in France also began selling bánh mì.After the Fall of Saigon in 1975, bánh mì sandwiches became a luxury item once again. During the so-called "subsidy period", state-owned phở eateries often served bread or cold rice as a side dish, leading to the present-day practice of dipping quẩy in phở. In the 1980s, Đổi Mới market reforms led to a renaissance in bánh mì, mostly as street food.Meanwhile, Vietnamese Americans brought bánh mì sandwiches to cities across the United States. In Northern California, Lê Văn Bá and his sons are credited with popularizing bánh mì among Vietnamese and non-Vietnamese Americans alike through their food truck services provider and their fast-food chain, Lee's Sandwiches, beginning in the 1980s. Sometimes bánh mì was likened to local sandwiches. In New Orleans, a "Vietnamese po' boy" recipe won the 2009 award for best po' boy at the annual Oak Street Po-Boy Festival. A restaurant in Philadelphia also sells a similar sandwich, marketed as a "Vietnamese hoagie". History: Since the 1970s Vietnamese refugees from the Vietnam War arrived in London and were hosted at community centres in areas of London such as De Beauvoir Town eventually founding a string of successful Vietnamese-style canteens in Shoreditch where bánh mì alongside phở, were popularised from the 1990s. Bánh mì sandwiches were featured in the 2002 PBS documentary Sandwiches That You Will Like. The word bánh mì was added to the Oxford English Dictionary on 24 March 2011. As of 2017, bánh mì is included in about 2% of U.S. restaurant sandwich menus, a nearly fivefold increase from 2013. On March 24, 2020, Google celebrated bánh mì with a Google Doodle. Ingredients: Bread A Vietnamese baguette has a thin crust and white, airy crumb. It may consist of both wheat flour and rice flour.Besides being made into a sandwich, it is eaten alongside meat dishes, such as bò kho (a beef stew), curry, and phá lấu. It can also be dipped in condensed milk (see Sữa Ông Thọ). Fillings A bánh mì sandwich typically consists of one or more meats, accompanying vegetables, and condiments. Accompanying vegetables typically include fresh cucumber slices, cilantro (leaves of the coriander plant) and pickled carrots and white radishes in shredded form (đồ chua). Common condiments include spicy chili sauce, sliced chilis, Maggi seasoning sauce, and mayonnaise. Varieties: Many fillings are used. A typical bánh mì shop in the United States offers at least 10 varieties.The most popular variety is bánh mì thịt, thịt meaning "meat". 
Bánh mì thịt nguội (also known as bánh mì pâté chả thịt, bánh mì đặc biệt, or "special combo") is made with various Vietnamese cold cuts, such as sliced pork or pork belly, chả lụa (Vietnamese sausage), and head cheese, along with the liver pâté and vegetables like carrot or cucumbers. Other varieties include: Bánh mì bì (shredded pork sandwich) – shredded pork or pork skin, doused with fish sauce; Bánh mì chà bông (pork floss sandwich); Bánh mì xíu mại (minced pork meatball sandwich) – smashed pork meatballs; bánh mì thịt nguội (ham sandwich); Bánh mì cá mòi (sardine sandwich); Bánh mì pa-tê (pâté sandwich); Bánh mì xá xíu or bánh mì thịt nướng (barbecue pork sandwich); Bánh mì chả lụa or bánh mì giò lụa (Vietnamese sausage sandwich); Bánh mì gà nướng (grilled chicken sandwich); Bánh mì chay (vegetarian sandwich) – made with tofu or seitan; in Vietnam, usually made at Buddhist temples during special religious events, but uncommon on the streets; Bánh mì chả cá (fish patty sandwich); Bánh mì bơ (margarine or buttered sandwich) – margarine / butter and sugar; Bánh mì trứng ốp-la (fried egg sandwich) – contains fried eggs with onions, sprinkled with soy sauce, sometimes buttered; served for breakfast in Vietnam; Bánh mì kẹp kem (ice cream sandwich) – contains scoops of ice cream topped with crushed peanuts. Another type of bánh mì that has become popular is bánh mì que, which is thinner and longer than a normal roll but can be filled with the same ingredients as a normal bánh mì. Notable vendors: Prior to the Fall of Saigon in 1975, well-known South Vietnamese bánh mì vendors included Bánh mì Ba Lẹ and Bánh mì Như Lan (which opened in 1968). Notable vendors: In regions of the United States with significant populations of Vietnamese Americans, numerous bakeries and fast food restaurants specialize in bánh mì. Lee's Sandwiches, a fast food chain with locations in several states, specializes in Vietnamese sandwiches served on French baguettes (or traditional bánh mì at some locations) as well as Western-style sandwiches served on croissants. In New Orleans, Dong Phuong Oriental Bakery is known for the bánh mì bread that it distributes to restaurants throughout the city. After 1975, Ba Lẹ owner Võ Văn Lẹ fled to the United States and, along with Lâm Quốc Thanh, founded Bánh mì Ba Lê. The Eden Center shopping center in Northern Virginia has several well-known bakeries specializing in bánh mì. Mainstream fast food chains have also incorporated bánh mì and other Vietnamese dishes into their portfolios. Yum! Brands operates a chain of bánh mì cafés called Bánh Shop. The former Chipotle-owned ShopHouse Southeast Asian Kitchen chain briefly sold bánh mì. Jack in the Box offers a "bánh mì–inspired" fried chicken sandwich as part of its Food Truck Series. McDonald's and Paris Baguette locations in Vietnam offer bánh mì.
**Solutions for cavitation in marine propellers** Solutions for cavitation in marine propellers: With the introduction of the marine propeller in the early 19th century, cavitation during operation has always been a limiting factor in the efficiency of ships. Cavitation in marine propellers develops when the propeller operates at a high speed and reduces the efficiency of the propeller. Ever since the introduction of the propeller, solutions for cavitation have been developed and tested. Nozzle System: As the name suggests, this system uses a set of nozzles to help reduce and prevent the likelihood of cavitation in propellers. This system was developed by Samsung Shipping, which is based in South Korea. To reduce the possibility of cavitation happening in marine propellers, a set of nozzles is placed on the hull of the ship directly in front of the propeller. These nozzles spray out compressed air over the propeller, creating “a macro bubble”. This bubble completely encompasses the propeller that is in operation. With the differing characteristics of the seawater outside of the bubble and the air inside, a zone develops that has the ability to reduce the “resonance frequency”. Due to this reduction, cavitation is less likely to occur during the operation of a marine propeller. To determine how effective this nozzle system could be, multiple tests were carried out with the nozzles and without them. In these tests, it was discovered that the resonance frequencies and the likelihood of cavitation could be reduced by up to 75%. Those who conducted these tests also tried two different arrangements of the nozzles to find out which one was more effective. The first arrangement used only one nozzle, and though it used considerably less power than the other option, it was not nearly as successful. The multi-nozzle system, on the other hand, gave much better results but required more power to operate. While this nozzle system has major drawbacks, particularly in its power requirements, the possibility of cavitation in the operation of marine propellers is reduced considerably. Thus, to some ship owners and operators, the cost of installing these nozzles and operating them is outweighed by the benefits of increased efficiency in their propellers. Air-Filled Rubber Membrane: The Air-Filled Rubber Membrane uses the same principles as the Nozzle System to reduce cavitation in marine propellers. Since the Nozzle System requires a large source of energy to operate, the creators sought to create a system that has the same results but is cheaper to operate. This membrane builds on the lessons learned in designing the Nozzle System and uses a pocket of air to prevent cavitation but does not require nozzles or compressors. While limiting the cost of operation, this membrane provides just as much protection against cavitation as the nozzles do. The Air-Filled Rubber Membrane is placed in the hull directly behind an operating marine propeller. As described before, the differing characteristics of the air in the membrane and the seawater around it reduce the resonance frequency, which in turn raises the point at which cavitation is encountered. The membrane is specially designed, which, along with the use of rubber, furthers the effect of reducing the frequency. This membrane is cheaper to operate than the Propeller Control System+ and the Nozzle System but is not as effective as the PCS+ in reducing cavitation. 
Different Materials for Propellers: This solution focuses on the materials that marine propellers are created from which is a direct factor in cavitation. While redesigning propellers would only garner an extra one or two percent efficiency in operation, changing the materials a propeller is made from has greater effects. The most common blend that marine propellers are created from is the nickel-aluminum bronze blend. While this blend can resist erosion which is why it is so common, it cannot properly handle cavitation.However, this is beginning to change. The Royal Netherlands Navy for one is starting to experiment with composite materials like resins or carbon fiber. These materials, when formed into a propeller, are flexible enough under pressure to “deflect,” which can reduce cavitation. Other options are made from carbon fiber, epoxy resin, or even glass, and can produce “a hydroelastic effect”. Since these new propellers can flex and are not nearly as rigid under pressure, the risk of cavitation is reduced.While replacing propellers would be the most efficient on ships that are currently under construction, the benefits from newer propeller materials could outweigh the costs of replacing current marine propellers. Despite the initial cost of the propellers, this solution costs nothing to operate making it more feasible to ship around the globe.
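The article notes that cavitation appears when a propeller operates at high speed. One conventional way to quantify this, not given in the article itself, is the dimensionless cavitation number; the sketch below uses that textbook definition with placeholder values only, purely to illustrate why higher blade speeds make cavitation more likely.

```python
# Hedged sketch (not from the article): the standard cavitation number used to
# judge how prone a blade section is to cavitation. All numeric values below
# are illustrative placeholders, not measurements from the systems described.
RHO_SEAWATER = 1025.0   # kg/m^3
P_ATM = 101_325.0       # Pa
P_VAPOUR = 2_300.0      # Pa, approximate vapour pressure of seawater near 20 C
G = 9.81                # m/s^2

def cavitation_number(depth_m: float, flow_speed_ms: float) -> float:
    """sigma = (p_static - p_vapour) / (0.5 * rho * V^2); lower sigma means cavitation is more likely."""
    p_static = P_ATM + RHO_SEAWATER * G * depth_m
    return (p_static - P_VAPOUR) / (0.5 * RHO_SEAWATER * flow_speed_ms ** 2)

for speed in (5.0, 10.0, 20.0):  # illustrative flow speeds at the blade section, m/s
    print(f"{speed:>4} m/s: sigma = {cavitation_number(depth_m=3.0, flow_speed_ms=speed):.2f}")
# The sharp drop in sigma with speed is the basic reason cavitation appears
# at high propeller speeds, which is what the solutions above try to counter.
```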
**Simplified Cangjie** Simplified Cangjie: Simplified Cangjie, known as Quick (Chinese: 簡易) or Sucheng (Chinese: 速成) is a stroke based keyboard input method based on the Cangjie IME (倉頡輸入法) but simplified with select lists. Unlike full Cangjie, the user enters only the first and last keystrokes used in the Cangjie system, and then chooses the desired character from a list of candidate Chinese characters that pops up. This method is popular in Taiwan and Hong Kong, the latter in particular. Simplified Cangjie is one of the few input methods which has an IME pre-installed on Traditional Chinese-capable personal computers. Performance and learning: Although described as having an easier learning curve with less errors, Simplified Cangjie users have slower typing speed compared to full Cangjie. The user must choose from a list of candidate characters, which can be compared to "hunt and peck" vs. ordinary touch typing. Because Simplified Cangjie does not promote the full sequence of keystrokes of standard Cangjie, it could leave simplified Cangjie users without knowledge of how to code a character without the disambiguation lists. Implementations: Windows In Windows, Simplified Cangjie is called 'Quick'. Microsoft Quick IME is bundled with all Traditional Chinese editions of Windows 98 or higher. Since Office 2007 and Windows 7, Microsoft offers two types of Quick: 'Quick' and 'New Quick'. Both are found under the section for Chinese (Traditional, Taiwan). The main difference between the two is that after the second keystroke, traditional Quick shows its drop down list while 'New Quick' will guess and output a character depending on the context (the New-Quick list needs to be manually invoked with an arrow key). 'New Quick' may also change previous characters of the sentence depending on whether the context changes. Microsoft also claims New-Quick to have an improved learning algorithm. Implementations: macOS Sucheng input is part of the standard installation of macOS. Adoption: Hong Kong In Cantonese-speaking Hong Kong, average computer users tend to prefer Simplified Cangjie over the full Cangjie largely due to its ease of use, and also the lack of other input methods available. The Cangjie IME itself has evidence of a strong presence in Hong Kong with it being available on most operating systems and keyboard layouts. As Hong Kong people are generally unfamiliar with phonetic-based input methods designed for Mandarin speakers such as pinyin and zhuyin, these methods are not widely used. Children in Hong Kong learn Chinese in a very different way from their peers in Mainland China and Taiwan, not only that they generally learn Chinese in Cantonese instead of Mandarin, but they do not learn any transliteration system until perhaps much later in their lives when they start learning Mandarin. Indeed, children in Hong Kong learn Chinese characters from the very beginning in kindergartens; in contrast, in mainland China and Taiwan, transliteration systems like pinyin or zhuyin are taught first before introducing any Chinese characters to children.
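The first-plus-last-keystroke lookup described above can be sketched in a few lines. The tiny character table below is illustrative only, not a real IME database, and the candidate ordering is arbitrary.

```python
# Minimal sketch of the Quick/Sucheng idea described above: keep only the first
# and last keystrokes of each full Cangjie code and let the user pick from the
# resulting candidate list. The tiny table is illustrative, not a real IME database.

FULL_CANGJIE = {
    "明": "AB",   # 日 + 月
    "昌": "AA",   # 日 + 日
    "晶": "AAA",  # 日 + 日 + 日
}

def quick_code(full_code: str) -> str:
    """Collapse a full Cangjie code to its Quick form: first keystroke + last keystroke."""
    return full_code if len(full_code) < 2 else full_code[0] + full_code[-1]

def candidates(first: str, last: str) -> list[str]:
    """Characters whose full Cangjie code reduces to the two keystrokes entered."""
    return [ch for ch, code in FULL_CANGJIE.items() if quick_code(code) == first + last]

print(candidates("A", "A"))  # ['昌', '晶'] with this toy table; the user then picks one
print(candidates("A", "B"))  # ['明']
```

This also illustrates the trade-off noted above: the ambiguity introduced by collapsing codes is what forces the candidate list and the extra selection step that slows typing relative to full Cangjie.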
**Braille Challenge** Braille Challenge: The Braille Challenge is an annual two-stage Braille literacy competition designed to motivate blind students to emphasize their study of Braille. The program parallels with the importance and educational purpose of a spelling bee for sighted children. Braille is a reading and writing method that breaks language into a code of raised dots. There are three grades of braille: Grade 1, which consists of the 26 standard letters of the alphabet and punctuation. Braille Challenge: This grade of braille is only used by people who are first starting to read Braille. Grade 2, which consists of the 26 standard letters of the alphabet, punctuation and contractions. In this grade of braille contractions are used to save space. A normal Braille page cannot fit as much text as a standard printed page. Books, signs in public places, menus, and most other Braille materials are written in Grade 2 Braille. Grade 3, which is used only in personal letters, diaries, and notes. Braille Challenge: This grade is a type of shorthand that shortens entire words to a few letters.The Braille Challenge started locally in 2000 sponsored by Braille Institute to help encourage and promote students’ braille skills. In 2003 Braille Institute began partnering with other organizations and formed an advisory committee in order to make the Braille Challenge accessible to all kids across the United States and Canada. Two hundred students from twenty-eight states and four Canadian provinces traveled to participate in the regional events, sending fifty-five finalists to Los Angeles to compete for the 2003 Braille Challenge title. Participation in the contest has doubled since 2003. By 2005 the institute received 775 requests for the preliminary contest, representing students from forty states and six Canadian provinces.In 2009, thirty-one blind service agencies and schools for the blind and visually impaired throughout the United States and Canada hosted regional events. Over five hundred students participated regionally in 2009, and the national top twelve scores in each of the five age groups competed nationally at the final round held at the Braille Institute in Los Angeles on June 20, 2009.In 2016, the Braille Challenge finals were held in Los Angeles on June 17–18. Regional competitions: Regional events offer parent workshops, entertainment, speakers, and adaptive technology demonstrations. The regional contests give parents of blind children the opportunity to meet other blind students and parents, and also gives students the opportunity to experience performing in a live competition as well as receive acknowledgement for the hard work they put into preparing for the event. The process builds community awareness about the importance of braille literacy. Contest categories and sample questions: The Braille Challenge includes four categories, each lasting fifty minutes. Students with the top twelve scores nationally in each of the five age groups advance to the Final Round in June, held at the Braille Institute in Los Angeles. Contest categories and sample questions: Following the final 2009 competition, an awards ceremony will be held at the Universal Hilton Hotel. The first through third place winners in each age group receive a savings bond, ranging in value from $500 for the youngest group, to $5,000 for the oldest. 
In addition to these prizes, Freedom Scientific has donated the latest adaptive equipment for the winners—a pocket PC with a braille display called a PacMate. Braille Speed and Accuracy: In this event, contestants listen to a tape-recorded story and must transcribe it into braille. Contestants are ranked from lowest to highest, based on the number of correct words (including punctuation) they transcribe from the page. A point is subtracted for each word that contains one or more mistakes, including missing or extra words. Students can download sample contest questions for each level formatted as MP3 files from the Braille Challenge website. Braille Spelling: Contestants are asked to spell braille vocabulary words correctly. Points are earned for each correctly spelled word. Extra points are given for additionally brailling the contracted version of the word correctly. Sample contests are formatted as generic BRF files, which can be opened in any of the commonly used braille translation software programs and then output on the student's own braille embosser. They can also download text versions of each of the sample contests in PDF format. Chart and Graph Reading: Contestants read raised-line images called tactile graphs and earn points by correctly answering a series of multiple-choice questions about the content. Contestants are ranked based on the most points earned. Both Braille and text versions are available online at the Braille Challenge website. Contest categories and sample questions: Proofreading: Contestants read a series of braille sentences, some with grammar, punctuation or spelling errors. Contestants are asked to choose the multiple-choice option that is brailled correctly. Reading Comprehension: Contestants read a story in braille to themselves and then answer 10 multiple-choice questions. Contestants are ranked in order based on the number of questions they answer correctly.
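The Speed and Accuracy scoring rule described above (count correct words, subtract a point for every word with one or more mistakes, including missing or extra words) can be read as the rough sketch below. The word-by-word alignment and the sample sentences are assumptions for illustration, not the contest's official scoring code.

```python
# Rough sketch of the Speed and Accuracy rule described above: count correctly
# transcribed words, then subtract a point for every word with one or more
# mistakes (missing or extra words also cost a point). The word-level
# alignment is deliberately naive and the sample text is invented.

def speed_accuracy_score(reference: str, transcription: str) -> int:
    ref_words = reference.split()
    got_words = transcription.split()
    pairs = list(zip(ref_words, got_words))
    correct = sum(1 for r, g in pairs if r == g)
    wrong = sum(1 for r, g in pairs if r != g)
    wrong += abs(len(ref_words) - len(got_words))  # missing or extra words
    return correct - wrong

print(speed_accuracy_score("the quick brown fox jumps", "the quick brown fox jumps"))  # 5
print(speed_accuracy_score("the quick brown fox jumps", "the quick browm fox"))        # 3 - 2 = 1
```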
**BIAS Peak** BIAS Peak: Peak is a digital audio editing application for the Macintosh, used primarily for stereo/mono recording, sample editing, loop creation, and CD mastering. It is commonly used by amateur and professional audio and video editors, mastering engineers, musicians, sound designers, artists, educators, and hobbyists. It was published by the now-defunct company BIAS Inc. in several editions, with varying levels of features. BIAS Peak: Peak differs from digital audio workstation (DAW)-type audio editing applications in that most of its editing is done directly at the file level, without having to first create a project and import the audio to be edited into it. Peak can be assigned to many DAW-type applications as a supplemental external sample editor. When used in this capacity, it is similar to having Peak's editing capabilities available as a plug-in within the other application. BIAS Inc. ceased all business operations as of June 2012. Reviews: BIAS Peak Pro 6 XT review, Sound on Sound magazine (January 2009); BIAS Peak v5 review, Sound on Sound magazine (July 2006); BIAS Peak v4 review, Sound on Sound magazine (May 2004); BIAS Peak v3.1 review, Sound on Sound magazine (January 2003); BIAS Peak v2.02 review, Sound on Sound magazine (June 1999); BIAS Peak v1.6 review, Sound on Sound magazine (October 1997); BIAS Peak v1.0 review, Sound on Sound magazine (September 1996)
**Maize (video game)** Maize (video game): Maize is a video game by Finish Line Games in Toronto. It was released on December 1, 2016. Gameplay: GameSpot says the game is an "absurdist" game based on Monty Python and X-Files humor. It tells the story of sentient corn created by government scientists who misinterpreted a memo.During gameplay, players encounter talking objects and solve puzzles. It takes place at an abandoned farm, nearby an active underground research facility. The game also requires you to collect pieces of information. Release: Toronto-based studio Finish Line Games worked on it, after they had made Cel Damage HD. A trailer came out in May 2016. It later came out for PC in the fall. It was released on December 1, 2016. Reception: Metacritic gave it a compiled score of 65/100 from 17 critics.City Weekly liked the game and found it entertaining, but disliked the frame rate stalling.
**Intermediate filament** Intermediate filament: Intermediate filaments (IFs) are cytoskeletal structural components found in the cells of vertebrates, and many invertebrates. Homologues of the IF protein have been noted in an invertebrate, the cephalochordate Branchiostoma.Intermediate filaments are composed of a family of related proteins sharing common structural and sequence features. Initially designated 'intermediate' because their average diameter (10 nm) is between those of narrower microfilaments (actin) and wider myosin filaments found in muscle cells, the diameter of intermediate filaments is now commonly compared to actin microfilaments (7 nm) and microtubules (25 nm). Animal intermediate filaments are subcategorized into six types based on similarities in amino acid sequence and protein structure. Most types are cytoplasmic, but one type, Type V is a nuclear lamin. Unlike microtubules, IF distribution in cells show no good correlation with the distribution of either mitochondria or endoplasmic reticulum. Structure: The structure of proteins that form intermediate filaments (IF) was first predicted by computerized analysis of the amino acid sequence of a human epidermal keratin derived from cloned cDNAs. Analysis of a second keratin sequence revealed that the two types of keratins share only about 30% amino acid sequence homology but share similar patterns of secondary structure domains. As suggested by the first model, all IF proteins appear to have a central alpha-helical rod domain that is composed of four alpha-helical segments (named as 1A, 1B, 2A and 2B) separated by three linker regions.The central building block of an intermediate filament is a pair of two intertwined proteins that is called a coiled-coil structure. This name reflects the fact that the structure of each protein is helical, and the intertwined pair is also a helical structure. Structural analysis of a pair of keratins shows that the two proteins that form the coiled-coil bind by hydrophobic interactions. The charged residues in the central domain do not have a major role in the binding of the pair in the central domain.Cytoplasmic IFs assemble into non-polar unit-length filaments (ULFs). Identical ULFs associate laterally into staggered, antiparallel, soluble tetramers, which associate head-to-tail into protofilaments that pair up laterally into protofibrils, four of which wind together into an intermediate filament. Structure: Part of the assembly process includes a compaction step, in which ULF tighten and assume a smaller diameter. The reasons for this compaction are not well understood, and IF are routinely observed to have diameters ranging between 6 and 12 nm. The N-terminus and the C-terminus of IF proteins are non-alpha-helical regions and show wide variation in their lengths and sequences across IF families. Structure: The N-terminal "head domain" binds DNA. Vimentin heads are able to alter nuclear architecture and chromatin distribution, and the liberation of heads by HIV-1 protease may play an important role in HIV-1 associated cytopathogenesis and carcinogenesis. Phosphorylation of the head region can affect filament stability. 
The head has been shown to interact with the rod domain of the same protein.C-terminal "tail domain" shows extreme length variation between different IF proteins.The anti-parallel orientation of tetramers means that, unlike microtubules and microfilaments, which have a plus end and a minus end, IFs lack polarity and cannot serve as basis for cell motility and intracellular transport. Structure: Also, unlike actin or tubulin, intermediate filaments do not contain a binding site for a nucleoside triphosphate. Cytoplasmic IFs do not undergo treadmilling like microtubules and actin fibers, but are dynamic. Biomechanical properties: IFs are rather deformable proteins that can be stretched several times their initial length. The key to facilitate this large deformation is due to their hierarchical structure, which facilitates a cascaded activation of deformation mechanisms at different levels of strain. Initially the coupled alpha-helices of unit-length filaments uncoil as they're strained, then as the strain increases they transition into beta-sheets, and finally at increased strain the hydrogen bonds between beta-sheets slip and the ULF monomers slide along each other. Types: There are about 70 different human genes coding for various intermediate filament proteins. However, different kinds of IFs share basic characteristics: In general, they are all polymers that measure between 9–11 nm in diameter when fully assembled. Types: Animal IFs are subcategorized into six types based on similarities in amino acid sequence and protein structure: Types I and II – acidic and basic keratins These proteins are the most diverse among IFs and constitute type I (acidic) and type II (basic) IF proteins. The many isoforms are divided in two groups: epithelial keratins (about 20) in epithelial cells (image to right) trichocytic keratins (about 13) (hair keratins), which make up hair, nails, horns and reptilian scales.Regardless of the group, keratins are either acidic or basic. Acidic and basic keratins bind each other to form acidic-basic heterodimers and these heterodimers then associate to make a keratin filament.Cytokeratin filaments laterally associate with each other to create a thick bundle of ~50 nm radius. The optimal radius of such bundles is determined by the interplay between the long range electrostatic repulsion and short range hydrophobic attraction. Subsequently, these bundles would intersect through junctions to form a dynamic network, spanning the cytoplasm of epithelial cells. Types: Type III There are four proteins classed as type III intermediate filament proteins, which may form homo- or heteropolymeric proteins. Desmin IFs are structural components of the sarcomeres in muscle cells and connect different cell organells like the desmosomes with the cytoskeleton. Glial fibrillary acidic protein (GFAP) is found in astrocytes and other glia. Peripherin found in peripheral neurons. Vimentin, the most widely distributed of all IF proteins, can be found in fibroblasts, leukocytes, and blood vessel endothelial cells. They support the cellular membranes, keep some organelles in a fixed place within the cytoplasm, and transmit membrane receptor signals to the nucleus. Syncoilin is an atypical type III IF protein. Type IV Alpha-internexin Neurofilaments – the type IV family of intermediate filaments that is found in high concentrations along the axons of vertebrate neurons. Synemin Syncoilin Type V – nuclear lamins LaminsLamins are fibrous proteins having structural function in the cell nucleus. 
In metazoan cells, there are A and B type lamins, which differ in their length and pI. Human cells have three differentially regulated genes. B-type lamins are present in every cell. B type lamins, lamin B1 and B2, are expressed from the LMNB1 and LMNB2 genes on 5q23 and 19q13, respectively. A-type lamins are only expressed following gastrulation. Lamin A and C are the most common A-type lamins and are splice variants of the LMNA gene found at 1q21. These proteins localize to two regions of the nuclear compartment, the nuclear lamina—a proteinaceous structure layer subjacent to the inner surface of the nuclear envelope and throughout the nucleoplasm in the nucleoplasmic veil. Types: Comparison of the lamins to vertebrate cytoskeletal IFs shows that lamins have an extra 42 residues (six heptads) within coil 1b. The c-terminal tail domain contains a nuclear localization signal (NLS), an Ig-fold-like domain, and in most cases a carboxy-terminal CaaX box that is isoprenylated and carboxymethylated (lamin C does not have a CAAX box). Lamin A is further processed to remove the last 15 amino acids and its farnesylated cysteine. Types: During mitosis, lamins are phosphorylated by MPF, which drives the disassembly of the lamina and the nuclear envelope. Type VI Beaded filaments: Filensin, Phakinin. Nestin (was once proposed for reclassification but due to differences, remains as a type VI IF protein)Vertebrate-only. Related to type I-IV. Used to contain other newly discovered IF proteins not yet assigned to a type. Function: Cell adhesion At the plasma membrane, some keratins or desmin interact with desmosomes (cell-cell adhesion) and hemidesmosomes (cell-matrix adhesion) via adapter proteins. Associated proteins Filaggrin binds to keratin fibers in epidermal cells. Plectin links vimentin to other vimentin fibers, as well as to microfilaments, microtubules, and myosin II. Kinesin is being researched and is suggested to connect vimentin to tubulin via motor proteins. Keratin filaments in epithelial cells link to desmosomes (desmosomes connect the cytoskeleton together) through plakoglobin, desmoplakin, desmogleins, and desmocollins; desmin filaments are connected in a similar way in heart muscle cells. Diseases arising from mutations in IF genes: Dilated cardiomyoathy (DCM), mutations in the DES gene Arrhythmogenic cardiomyopathy (ACM), mutations in the DES gene Restrictive cardiomyopathy (RCM), mutations in the DES gene Non-compaction cardiomyopathy, mutations in the DES genes Cardiomyopathy in combination with skeletal myopathy (DES) Epidermolysis bullosa simplex; keratin 5 or keratin 14 mutation Laminopathies are a family of diseases caused by mutations in nuclear lamins and include Hutchinson-Gilford progeria syndrome and various lipodystrophies and cardiomyopathies among others. In other organisms: IF proteins are universal among animals in the form of a nuclear lamin. The Hydra has an additional "nematocilin" derived from the lamin. Cytoplasmic IFs (type I-IV) are only found in Bilateria; they also arose from a gene duplication event involving "type V" nuclear lamin. In addition, a few other diverse types of eukaryotes have lamins, suggesting an early origin of the protein.There was not really a concrete definition of an "intermediate filament protein", in the sense that the size or shape-based definition does not cover a monophyletic group. 
With the inclusion of unusual proteins like the network-forming beaded lamins (type VI), the current classification is moving toward a clade containing nuclear lamin and its many descendants, characterized by sequence similarity as well as exon structure. Functionally similar proteins outside this clade, like crescentins, alveolins, tetrins, and epiplasmins, are therefore only "IF-like"; they likely arose through convergent evolution.
**Lipid IVA 4-amino-4-deoxy-L-arabinosyltransferase** Lipid IVA 4-amino-4-deoxy-L-arabinosyltransferase: Lipid IVA 4-amino-4-deoxy-L-arabinosyltransferase (EC 2.4.2.43, undecaprenyl phosphate-alpha-L-Ara4N transferase, 4-amino-4-deoxy-L-arabinose lipid A transferase, polymyxin resistance protein PmrK, arnT (gene)) is an enzyme with systematic name 4-amino-4-deoxy-alpha-L-arabinopyranosyl ditrans, octacis-undecaprenyl phosphate:lipid IVA 4-amino-4-deoxy-L-arabinopyranosyltransferase. This enzyme catalyses the following chemical reactions:
(1) 4-amino-4-deoxy-alpha-L-arabinopyranosyl ditrans, octacis-undecaprenyl phosphate + alpha-Kdo-(2->4)-alpha-Kdo-(2->6)-lipid A ⇌ alpha-Kdo-(2->4)-alpha-Kdo-(2->6)-[4-P-L-Ara4N]-lipid A + ditrans, octacis-undecaprenyl phosphate
(2) 4-amino-4-deoxy-alpha-L-arabinopyranosyl ditrans, octacis-undecaprenyl phosphate + lipid IVA ⇌ lipid IIA + ditrans, octacis-undecaprenyl phosphate
(3) 4-amino-4-deoxy-alpha-L-arabinopyranosyl ditrans, octacis-undecaprenyl phosphate + alpha-Kdo-(2->4)-alpha-Kdo-(2->6)-lipid IVA ⇌ 4'-alpha-L-Ara4N-alpha-Kdo-(2->4)-alpha-Kdo-(2->6)-lipid IVA + ditrans, octacis-undecaprenyl phosphate
This integral membrane protein is present in the inner membrane of certain Gram-negative endobacteria.
**Neuro (video game)** Neuro (video game): Neuro is a cyberpunk first-person shooter video game developed by Revolt Games and published by Russobit-M. It was released on 10 March 2006.The game's plot and world is tech-noir and cyberpunk-themed, as well as dystopian, with inspiration drawn from Blade Runner and Akira, and the works of writers such as William Gibson and Philip K. Dick. Plot: Neuro is a low-key crime drama with a cyberpunk theme and backdrop that philosophizes on the devolution of humankind: Even though humans have spread themselves out amongst the stars and developed technology to improve and enrich their lives, they are still likely to exploit each other whenever possible. James Gravesen is a law officer who is attempting to arrest an elusive smuggler with government connections, Ramone, who is dealing in "Lilac Death," a highly dangerous weaponized substance that can "wipe out Sorgo three times". James has biotechnology implanted in his brain that gives him a handful of psi-weapons: From 30 feet away and only using his mind, he can light enemies on fire, blow them off their feet and crush them, and make them go berserk and kill their allies. He can also see through walls to identify where enemies lurk, and he can heal himself. All of this takes a psi-energy which depletes with each use but resets over time. The enemies are mostly crooks trying to stop you from completing your various missions. History: Prior to release, Neuro had been in development since 2002 and was demonstrated at E3s 2003 and 2004. While intended for worldwide release, it was only released in Russia, the CIS (dubbed into Russian) and Taiwan (dubbed into English). In 2010, an academic, Keith Duffy, found out about Neuro and, not knowing about the official English-language release in Taiwan, translated it into English. His translation was released for free on his blog.The Taiwanese release features a GFI Russia logo in the intro despite being distributed by Miracle Express, probably indicating that GFI would have been responsible for European and other Western distribution of the game.
**ABS methods** ABS methods: ABS methods, where the acronym contains the initials of Jozsef Abaffy, Charles G. Broyden and Emilio Spedicato, have been developed since 1981 to generate a large class of algorithms for the following applications: solution of general linear algebraic systems, determined or underdetermined, full or deficient rank; solution of linear Diophantine systems, i.e. equation systems where the coefficient matrix and the right hand side are integer valued and an integer solution is sought; this is a special but important case of Hilbert's tenth problem, the only one in practice soluble; solution of nonlinear algebraic equations; solution of continuous unconstrained or constrained optimization.At the beginning of 2007 ABS literature consisted of over 400 papers and reports and two monographs, one due to Abaffy and Spedicato and published in 1989, one due to Xia and Zhang and published, in Chinese, in 1998. Moreover, three conferences had been organized in China. ABS methods: Research on ABS methods has been the outcome of an international collaboration coordinated by Spedicato of university of Bergamo, Italy. It has involved over forty mathematicians from Hungary, UK, China, Iran and other countries. ABS methods: The central element in such methods is the use of a special matrix transformation due essentially to the Hungarian mathematician Jenő Egerváry, who investigated its main properties in some papers that went unnoticed. For the basic problem of solving a linear system of m equations in n variables, where m≤n , ABS methods use the following simple geometric idea: Given an arbitrary initial estimate of the solution, find one of the infinite solutions, defining a linear variety of dimension n − 1, of the first equation. ABS methods: Find a solution of the second equation that is also a solution of the first, i.e. find a solution lying in the intersection of the linear varieties of the solutions of the first two equations considered separately. ABS methods: By iteration of the above approach after m' steps one gets a solution of the last equation that is also a solution of the previous equations, hence of the full system. Moreover, it is possible to detect equations that are either redundant or incompatible.Among the main results obtained so far: unification of algorithms for linear, nonlinear algebraic equations and for linearly constrained nonlinear optimization, including the LP problem as a special case; the method of Gauss has been improved by reducing the required memory and eliminating the need for pivoting; new methods for nonlinear systems with convergence properties better than for Newton method; derivation of a general algorithm for Hilbert tenth problem, linear case, with the extension of a classic Euler theorem from one equation to a system; solvers have been obtained that are more stable than classical ones, especially for the problem arising in primal-dual interior point method; ABS methods are usually faster on vector or parallel machines; ABS methods provide a simpler approach for teaching for a variety of classes of problems, since particular methods are obtained just by specific parameter choices.Knowledge of ABS methods is still quite limited among mathematicians, but they have great potential for improving the methods currently in use.
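The stepwise geometric idea described above can be illustrated with a short numerical sketch. The code below is a minimal, illustrative implementation of one member of the ABS class (the parameter choice H1 = I, zi = wi = ai, sometimes referred to in the ABS literature as the Huang algorithm), assuming a consistent system with m ≤ n and linearly independent rows; it omits the redundancy and incompatibility checks that full ABS implementations provide, and the function name abs_solve is purely illustrative.

```python
import numpy as np

def abs_solve(A, b):
    """Sketch of one ABS-class iteration (H_1 = I, z_i = w_i = a_i).

    Solves A x = b for an m x n system with m <= n, assuming the system is
    consistent and the rows of A are linearly independent."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    x = np.zeros(n)          # arbitrary initial estimate of the solution
    H = np.eye(n)            # the "Abaffian" matrix, updated after every equation
    for i in range(m):
        a = A[i]
        p = H @ a                             # search direction (H stays symmetric for this choice)
        x = x - (a @ x - b[i]) / (a @ p) * p  # x now satisfies equations 1..i+1
        H = H - np.outer(p, p) / (a @ p)      # keeps later steps inside the solution set found so far
    return x

# Example: an underdetermined 2 x 3 system.
A = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 1.0]]
b = [3.0, 2.0]
x = abs_solve(A, b)
print(x, np.asarray(A) @ x)   # A x should reproduce b
```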
**Binomial regression** Binomial regression: In statistics, binomial regression is a regression analysis technique in which the response (often referred to as Y) has a binomial distribution: it is the number of successes in a series of n independent Bernoulli trials, where each trial has probability of success p . In binomial regression, the probability of a success is related to explanatory variables: the corresponding concept in ordinary regression is to relate the mean value of the unobserved response to explanatory variables. Binomial regression: Binomial regression is closely related to binary regression: a binary regression can be considered a binomial regression with n=1 , or a regression on ungrouped binary data, while a binomial regression can be considered a regression on grouped binary data (see comparison). Binomial regression models are essentially the same as binary choice models, one type of discrete choice model: the primary difference is in the theoretical motivation (see comparison). In machine learning, binomial regression is considered a special case of probabilistic classification, and thus a generalization of binary classification. Example application: In one published example of an application of binomial regression, the details were as follows. The observed outcome variable was whether or not a fault occurred in an industrial process. There were two explanatory variables: the first was a simple two-case factor representing whether or not a modified version of the process was used and the second was an ordinary quantitative variable measuring the purity of the material being supplied for the process. Specification of model: The response variable Y is assumed to be binomially distributed conditional on the explanatory variables X. The number of trials n is known, and the probability of success for each trial p is specified as a function θ(X). This implies that the conditional expectation and conditional variance of the observed fraction of successes, Y/n, are E(Y/n∣X)=θ(X) Var ⁡(Y/n∣X)=θ(X)(1−θ(X))/n The goal of binomial regression is to estimate the function θ(X). Typically the statistician assumes θ(X)=m(βTX) , for a known function m, and estimates β. Common choices for m include the logistic function.The data are often fitted as a generalised linear model where the predicted values μ are the probabilities that any individual event will result in a success. The likelihood of the predictions is then given by L(μ∣Y)=∏i=1n(1yi=1(μi)+1yi=0(1−μi)), where 1A is the indicator function which takes on the value one when the event A occurs, and zero otherwise: in this formulation, for any given observation yi, only one of the two terms inside the product contributes, according to whether yi=0 or 1. The likelihood function is more fully specified by defining the formal parameters μi as parameterised functions of the explanatory variables: this defines the likelihood in terms of a much reduced number of parameters. Fitting of the model is usually achieved by employing the method of maximum likelihood to determine these parameters. In practice, the use of a formulation as a generalised linear model allows advantage to be taken of certain algorithmic ideas which are applicable across the whole class of more general models but which do not apply to all maximum likelihood problems. Specification of model: Models used in binomial regression can often be extended to multinomial data. 
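As a concrete illustration of the likelihood above for ungrouped binary data (the product over observations equals μi when yi = 1 and 1 − μi when yi = 0, i.e. μi^yi (1 − μi)^(1−yi)), the sketch below evaluates the log-likelihood for a logistic choice of m on simulated data. It is a minimal example; the function names and the toy coefficients are illustrative assumptions only.

```python
import numpy as np

def logistic(eta):
    """Maps any real linear predictor eta into a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-eta))

def log_likelihood(beta, X, y):
    """log L(mu | Y) for ungrouped binary data with mu_i = m(beta^T x_i), m logistic."""
    mu = logistic(X @ beta)
    return np.sum(y * np.log(mu) + (1 - y) * np.log(1 - mu))

# Simulated toy data: an intercept plus one explanatory variable.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
beta_true = np.array([-0.5, 1.2])
y = rng.binomial(1, logistic(X @ beta_true))

# The log-likelihood is higher near the generating coefficients than at an arbitrary guess.
print(log_likelihood(beta_true, X, y))
print(log_likelihood(np.zeros(2), X, y))
```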
There are many methods of generating the values of μ in systematic ways that allow for interpretation of the model; they are discussed below. Link functions: There is a requirement that the modelling linking the probabilities μ to the explanatory variables should be of a form which only produces values in the range 0 to 1. Many models can be fitted into the form μ=g(η). Link functions: Here η is an intermediate variable representing a linear combination, containing the regression parameters, of the explanatory variables. The function g is the cumulative distribution function (cdf) of some probability distribution. Usually this probability distribution has a support from minus infinity to plus infinity so that any finite value of η is transformed by the function g to a value inside the range 0 to 1. Link functions: In the case of logistic regression, the link function is the log of the odds ratio or logistic function. In the case of probit, the link is the cdf of the normal distribution. The linear probability model is not a proper binomial regression specification because predictions need not be in the range of zero to one; it is sometimes used for this type of data when the probability space is where interpretation occurs or when the analyst lacks sufficient sophistication to fit or calculate approximate linearizations of probabilities for interpretation. Comparison with binary regression: Binomial regression is closely connected with binary regression. If the response is a binary variable (two possible outcomes), then these alternatives can be coded as 0 or 1 by considering one of the outcomes as "success" and the other as "failure" and considering these as count data: "success" is 1 success out of 1 trial, while "failure" is 0 successes out of 1 trial. This can now be considered a binomial distribution with n=1 trial, so a binary regression is a special case of a binomial regression. If these data are grouped (by adding counts), they are no longer binary data, but are count data for each group, and can still be modeled by a binomial regression; the individual binary outcomes are then referred to as "ungrouped data". An advantage of working with grouped data is that one can test the goodness of fit of the model; for example, grouped data may exhibit overdispersion relative to the variance estimated from the ungrouped data. Comparison with binary choice models: A binary choice model assumes a latent variable Un, the utility (or net benefit) that person n obtains from taking an action (as opposed to not taking the action). The utility the person obtains from taking the action depends on the characteristics of the person, some of which are observed by the researcher and some are not: Un=β⋅sn+εn where β is a set of regression coefficients and sn is a set of independent variables (also known as "features") describing person n, which may be either discrete "dummy variables" or regular continuous variables. εn is a random variable specifying "noise" or "error" in the prediction, assumed to be distributed according to some distribution. Normally, if there is a mean or variance parameter in the distribution, it cannot be identified, so the parameters are set to convenient values — by convention usually mean 0, variance 1. Comparison with binary choice models: The person takes the action, yn = 1, if Un > 0. The unobserved term, εn, is assumed to have a logistic distribution. 
Comparison with binary choice models: The specification is written succinctly as: Un = β⋅sn + εn, with Yn = 1 if Un > 0 and Yn = 0 if Un ≤ 0, and ε ∼ logistic, standard normal, etc. Let us write it slightly differently: Un = β⋅sn − en, with Yn = 1 if Un > 0 and Yn = 0 if Un ≤ 0, and e ∼ logistic, standard normal, etc. Here we have made the substitution en = −εn. This changes a random variable into a slightly different one, defined over a negated domain. As it happens, the error distributions we usually consider (e.g. logistic distribution, standard normal distribution, standard Student's t-distribution, etc.) are symmetric about 0, and hence the distribution over en is identical to the distribution over εn. Comparison with binary choice models: Denote the cumulative distribution function (CDF) of e as Fe, and the quantile function (inverse CDF) of e as Fe−1. Note that Pr(Yn = 1) = Pr(Un > 0) = Pr(β⋅sn − en > 0) = Pr(en ≤ β⋅sn) = Fe(β⋅sn). Since Yn is a Bernoulli trial, where E[Yn] = Pr(Yn = 1), we have E[Yn] = Fe(β⋅sn), or equivalently Fe−1(E[Yn]) = β⋅sn. Note that this is exactly equivalent to the binomial regression model expressed in the formalism of the generalized linear model. If en ∼ N(0,1), i.e. distributed as a standard normal distribution, then Φ−1(E[Yn]) = β⋅sn, which is exactly a probit model. If en ∼ Logistic(0,1), i.e. distributed as a standard logistic distribution with mean 0 and scale parameter 1, then the corresponding quantile function is the logit function, and logit(E[Yn]) = β⋅sn, which is exactly a logit model. Comparison with binary choice models: Note that the two different formalisms, generalized linear models (GLMs) and discrete choice models, are equivalent in the case of simple binary choice models, but can be extended in differing ways: GLMs can easily handle arbitrarily distributed response variables (dependent variables), not just categorical variables or ordinal variables, which discrete choice models are limited to by their nature. GLMs are also not limited to link functions that are quantile functions of some distribution, unlike the use of an error variable, which must by assumption have a probability distribution. Comparison with binary choice models: On the other hand, because discrete choice models are described as types of generative models, it is conceptually easier to extend them to complicated situations with multiple, possibly correlated, choices for each person, or other variations. Latent variable interpretation / derivation: A latent variable model involving a binomial observed variable Y can be constructed such that Y is related to the latent variable Y* via Y = 1 if Y* > 0 and Y = 0 otherwise. The latent variable Y* is then related to a set of regression variables X by the model Y∗ = Xβ + ϵ. This results in a binomial regression model. The variance of ϵ can not be identified and, when it is not of interest, is often assumed to be equal to one. If ϵ is normally distributed, then a probit is the appropriate model and if ϵ is log-Weibull distributed, then a logit is appropriate. If ϵ is uniformly distributed, then a linear probability model is appropriate.
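The latent-variable construction just described can be checked numerically: thresholding Y* = Xβ + ϵ at zero, with standard normal ϵ, reproduces probit probabilities Φ(Xβ). The sketch below is an illustration on simulated data only; the coefficients and bin choices are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200_000
beta = np.array([0.3, -0.8])
X = np.column_stack([np.ones(n), rng.normal(size=n)])

# Latent variable Y* = X beta + eps with standard normal eps; observed Y = 1 when Y* > 0.
y = (X @ beta + rng.normal(size=n) > 0).astype(int)

# Empirical Pr(Y = 1) within covariate bins should match the probit mean Phi(X beta).
edges = np.quantile(X[:, 1], [0.0, 0.25, 0.5, 0.75, 1.0])
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (X[:, 1] >= lo) & (X[:, 1] <= hi)
    print(round(y[mask].mean(), 3), round(norm.cdf(X[mask] @ beta).mean(), 3))
```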
**Reputation system** Reputation system: Reputation systems are programs or algorithms that allow users to rate each other in online communities in order to build trust through reputation. Some common uses of these systems can be found on E-commerce websites such as eBay, Amazon.com, and Etsy as well as online advice communities such as Stack Exchange. These reputation systems represent a significant trend in "decision support for Internet mediated service provisions". With the popularity of online communities for shopping, advice, and exchange of other important information, reputation systems are becoming vitally important to the online experience. The idea of reputation systems is that even if the consumer can't physically try a product or service, or see the person providing information, that they can be confident in the outcome of the exchange through trust built by recommender systems.Collaborative filtering, used most commonly in recommender systems, are related to reputation systems in that they both collect ratings from members of a community. The core difference between reputation systems and collaborative filtering is the ways in which they use user feedback. In collaborative filtering, the goal is to find similarities between users in order to recommend products to customers. The role of reputation systems, in contrast, is to gather a collective opinion in order to build trust between users of an online community. Types: Online Howard Rheingold states that online reputation systems are "computer-based technologies that make it possible to manipulate in new and powerful ways an old and essential human trait". Rheingold says that these systems arose as a result of the need for Internet users to gain trust in the individuals they transact with online. The trait he notes in human groups is that social functions such as gossip "keeps us up to date on who to trust, who other people trust, who is important, and who decides who is important". Internet sites such as eBay and Amazon, he argues, seek to make use of this social trait and are "built around the contributions of millions of customers, enhanced by reputation systems that police the quality of the content and transactions exchanged through the site". Types: Reputation banks The emerging sharing economy increases the importance of trust in peer-to-peer marketplaces and services. Users can build up reputation and trust in individual systems but usually don't have the ability to carry those reputations to other systems. Rachel Botsman and Roo Rogers argue in their book What's Mine is Yours (2010), that "it is only a matter of time before there is some form of network that aggregates reputation capital across multiple forms of Collaborative Consumption". These systems, often referred to as reputation banks, try to give users a platform to manage their reputation capital across multiple systems. Maintaining effective reputation systems: The main function of reputation systems is to build a sense of trust among users of online communities. As with brick and mortar stores, trust and reputation can be built through customer feedback. Paul Resnick from the Association for Computing Machinery describes three properties that are necessary for reputation systems to operate effectively. Entities must have a long lifetime and create accurate expectations of future interactions. They must capture and distribute feedback about prior interactions. 
They must use feedback to guide trust.These three properties are critically important in building reliable reputations, and all revolve around one important element: user feedback. User feedback in reputation systems, whether it be in the form of comments, ratings, or recommendations, is a valuable piece of information. Without user feedback, reputation systems cannot sustain an environment of trust. Eliciting user feedback can have three related problems. The first of these problems is the willingness of users to provide feedback when the option to do so is not required. If an online community has a large stream of interactions happening, but no feedback is gathered, the environment of trust and reputation cannot be formed. The second of these problems is gaining negative feedback from users. Many factors contribute to users not wanting to give negative feedback, the most prominent being a fear of retaliation. When feedback is not anonymous, many users fear retaliation if negative feedback is given. Maintaining effective reputation systems: The final problem related to user feedback is eliciting honest feedback from users. Although there is no concrete method for ensuring the truthfulness of feedback, if a community of honest feedback is established, new users will be more likely to give honest feedback as well.Other pitfalls to effective reputation systems described by A. Josang et al. include change of identities and discrimination. Again these ideas tie back to the idea of regulating user actions in order to gain accurate and consistent user feedback. When analyzing different types of reputation systems it is important to look at these specific features in order to determine the effectiveness of each system. Maintaining effective reputation systems: Standardization attempt The IETF proposed a protocol to exchange reputation data. It was originally aimed at email applications, but it was subsequently developed as a general architecture for a reputation-based service, followed by an email-specific part. However, the workhorse of email reputation remains with DNSxL's, which do not follow that protocol. Those specification don't say how to collect feedback —in fact, the granularity of email sending entities makes it impractical to collect feedback directly from recipients— but are only concerned with reputation query/response methods. Notable examples of practical applications: Search: web (see PageRank) eCommerce: eBay, Epinions, Bizrate, Trustpilot Social news: Reddit, Digg, Imgur Programming communities: Advogato, freelance marketplaces, Stack Overflow Wikis: Increase contribution quantity and quality Internet Security: TrustedSource Question-and-Answer sites: Quora, Yahoo! Answers, Gutefrage.net, Stack Exchange Email: DNSBL and DNSWL provide global reputation about email senders Personal Reputation: CouchSurfing (for travelers), Non Governmental organizations (NGOs): GreatNonProfits.org, GlobalGiving Professional reputation of translators and translation outsourcers: BlueBoard at ProZ.com All purpose reputation system: Yelp, Inc. Notable examples of practical applications: Academia: general bibliometric measures, e.g. the h-index of a researcher. Reputation as a resource: High reputation capital often confers benefits upon the holder. For example, a wide range of studies have found a positive correlation between seller rating and selling price on eBay, indicating that high reputation can help users obtain more money for their items. 
High product reviews on online marketplaces can also help drive higher sales volumes. Reputation as a resource: Abstract reputation can be used as a kind of resource, to be traded away for short-term gains or built up by investing effort. For example, a company with a good reputation may sell lower-quality products for higher profit until their reputation falls, or they may sell higher-quality products to increase their reputation. Some reputation systems go further, making it explicitly possible to spend reputation within the system to derive a benefit. For example, on the Stack Overflow community, reputation points can be spent on question "bounties" to incentivize other users to answer the question.Even without an explicit spending mechanism in place, reputation systems often make it easier for users to spend their reputation without harming it excessively. For example, a ridesharing company driver with a high ride acceptance score (a metric often used for driver reputation) may opt to be more selective about his or her clientele, decreasing the driver's acceptance score but improving his or her driving experience. With the explicit feedback provided by the service, drivers can carefully manage their selectivity to avoid being penalized too heavily. Attacks and defense: Reputation systems are in general vulnerable to attacks, and many types of attacks are possible. As the reputation system tries to generate an accurate assessment based on various factors including but not limited to unpredictable user size and potential adversarial environments, the attacks and defense mechanisms play an important role in the reputation systems. Attack classification of reputation system is based on identifying which system components and design choices are the targets of attacks. While the defense mechanisms are concluded based on existing reputation systems. Attacks and defense: Attacker model The capability of the attacker is determined by several characteristics, e.g., the location of the attacker related to the system (insider attacker vs. outsider attacker). An insider is an entity who has legitimate access to the system and can participate according to the system specifications, while an outsider is any unauthorized entity in the system who may or may not be identifiable. Attacks and defense: As the outsider attack is much more similar to other attacks in a computer system environment, the insider attack gets more focus in the reputation system. Usually, there are some common assumptions: the attackers are motivated either by selfish or malicious intent and the attackers can either work alone or in coalitions. Attack classification Attacks against reputation systems are classified based on the goals and methods of the attacker. Attacks and defense: Self-promoting Attack. The attacker falsely increases their own reputation. A typical example is the so-called Sybil attack where an attacker subverts the reputation system by creating a large number of pseudonymous entities, and using them to gain a disproportionately large influence. A reputation system's vulnerability to a Sybil attack depends on how cheaply Sybils can be generated, the degree to which the reputation system accepts input from entities that do not have a chain of trust linking them to a trusted entity, and whether the reputation system treats all entities identically. Attacks and defense: Whitewashing Attack. The attacker uses some system vulnerability to update their reputation. 
This attack usually targets the reputation system's formulation that is used to calculate the reputation result. The whitewashing attack can be combined with other types of attacks to make each one more effective. Slandering Attack. The attacker reports false data to lower the reputation of the victim nodes. It can be achieved by a single attacker or a coalition of attackers. Orchestrated Attack. The attacker orchestrates their efforts and employs several of the above strategies. One famous example of an orchestrated attack is known as an oscillation attack. Denial of Service Attack. The attacker prevents the calculation and dissemination of reputation values by denial-of-service methods. Defense strategies: Strategies to prevent the above attacks include preventing multiple identities, mitigating the generation of false rumors, mitigating the spreading of false rumors, preventing short-term abuse of the system, and mitigating denial-of-service attacks.
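As a very simple illustration of feedback aggregation (not the algorithm of any particular system named above), the sketch below smooths binary ratings with a fixed prior, so that a newcomer's score cannot be pushed to the extremes by a handful of ratings; this is a crude mitigation of the self-promoting and slandering behaviour discussed here. The function name and constants are illustrative assumptions.

```python
def reputation_score(positive, negative, prior_strength=10, prior_mean=0.5):
    """Aggregate binary feedback into a score in [0, 1] using a smoothing prior.

    The prior behaves like `prior_strength` pseudo-ratings fixed at `prior_mean`,
    so a few ratings (honest or malicious) move the score only gradually."""
    return (positive + prior_strength * prior_mean) / (positive + negative + prior_strength)

# A new seller with 3 positive ratings versus an established one with 95 positive / 5 negative:
print(reputation_score(3, 0))    # ~0.62: promising, but not yet fully trusted
print(reputation_score(95, 5))   # ~0.91: a long history dominates the prior
```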
**4Pi microscope** 4Pi microscope: A 4Pi microscope is a laser scanning fluorescence microscope with an improved axial resolution. With it the typical range of the axial resolution of 500–700 nm can be improved to 100–150 nm, which corresponds to an almost spherical focal spot with 5–7 times less volume than that of standard confocal microscopy. Working principle: The improvement in resolution is achieved by using two opposing objective lenses, which both are focused to the same geometrical location. Also the difference in optical path length through each of the two objective lenses is carefully aligned to be minimal. By this method, molecules residing in the common focal area of both objectives can be illuminated coherently from both sides and the reflected or emitted light can also be collected coherently, i.e. coherent superposition of emitted light on the detector is possible. The solid angle Ω that is used for illumination and detection is increased and approaches its maximum. In this case the sample is illuminated and detected from all sides simultaneously. Working principle: The operation mode of a 4Pi microscope is shown in the figure. The laser light is divided by a beam splitter and directed by mirrors towards the two opposing objective lenses. At the common focal point superposition of both focused light beams occurs. Excited molecules at this position emit fluorescence light, which is collected by both objective lenses, combined by the same beam splitter and deflected by a dichroic mirror onto a detector. There superposition of both emitted light pathways can take place again. Working principle: In the ideal case each objective lens can collect light from a solid angle of Ω=2π . With two objective lenses one can collect from every direction (solid angle Ω=4π ). The name of this type of microscopy is derived from the maximal possible solid angle for excitation and detection. Practically, one can achieve only aperture angles of about 140° for an objective lens, which corresponds to 1.3 π The microscope can be operated in three different ways: In a 4Pi microscope of type A, the coherent superposition of excitation light is used to generate the increased resolution. The emission light is either detected from one side only or in an incoherent superposition from both sides. In a 4Pi microscope of type B, only the emission light is interfering. When operated in the type C mode, both excitation and emission light are allowed to interfere, leading to the highest possible resolution increase (~7-fold along the optical axis as compared to confocal microscopy). Working principle: In a real 4Pi microscope light cannot be applied or collected from all directions equally, leading to so-called side lobes in the point spread function. Typically (but not always) two-photon excitation microscopy is used in a 4Pi microscope in combination with an emission pinhole to lower these side lobes to a tolerable level. History: In 1971, Christoph Cremer and Thomas Cremer proposed the creation of a perfect hologram, i.e. one that carries the whole field information of the emission of a point source in all directions, a so-called 4π hologram. However the publication from 1978 had drawn an improper physical conclusion (i.e. a point-like spot of light) and had completely missed the axial resolution increase as the actual benefit of adding the other side of the solid angle. The first description of a practicable system of 4Pi microscopy, i.e. 
the setup with two opposing, interfering lenses, was given by Stefan Hell in 1991, and he demonstrated it experimentally in 1994. In the following years the number of applications of this microscope grew. For example, parallel excitation and detection at 64 spots in the sample simultaneously, combined with the improved spatial resolution, resulted in the successful recording of the dynamics of mitochondria in yeast cells with a 4Pi microscope in 2002. A commercial version was launched by the microscope manufacturer Leica Microsystems in 2004 and later discontinued. History: To date, the best quality in a 4Pi microscope has been reached in conjunction with super-resolution techniques such as the stimulated emission depletion (STED) principle. Using a 4Pi microscope with appropriate excitation and de-excitation beams, it was possible to create a uniform spot of 50 nm size, which corresponds to a focal volume decreased by a factor of 150–200 compared with confocal microscopy in fixed cells. With the combination of 4Pi microscopy and RESOLFT microscopy with switchable proteins, it is now possible to take images of living cells at low light levels with isotropic resolutions below 40 nm.
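The solid-angle figures quoted in the working-principle section can be reproduced with a short calculation. The sketch below assumes the quoted 140° is the full aperture angle of the objective (half-angle 70°) and uses the cone formula Ω = 2π(1 − cos θ).

```python
import math

def cone_solid_angle(full_aperture_deg):
    """Solid angle of a cone with the given full aperture angle: 2*pi*(1 - cos(theta/2))."""
    return 2.0 * math.pi * (1.0 - math.cos(math.radians(full_aperture_deg) / 2.0))

omega_single = cone_solid_angle(140.0)
print(omega_single / math.pi)               # ~1.32, i.e. roughly the 1.3*pi quoted above
print(2.0 * omega_single / (4.0 * math.pi)) # two opposing objectives still cover only ~66% of the full 4*pi
```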
**Electric dipole moment** Electric dipole moment: The electric dipole moment is a measure of the separation of positive and negative electrical charges within a system, that is, a measure of the system's overall polarity. The SI unit for electric dipole moment is the coulomb-meter (C⋅m). The debye (D) is another unit of measurement used in atomic physics and chemistry. Theoretically, an electric dipole is defined by the first-order term of the multipole expansion; it consists of two equal and opposite charges that are infinitesimally close together, although real dipoles have separated charge. Elementary definition: Often in physics the dimensions of a massive object can be ignored and can be treated as a pointlike object, i.e. a point particle. Point particles with electric charge are referred to as point charges. Two point charges, one with charge +q and the other one with charge −q separated by a distance d, constitute an electric dipole (a simple case of an electric multipole). For this case, the electric dipole moment has a magnitude and is directed from the negative charge to the positive one. Some authors may split d in half and use s = d/2 since this quantity is the distance between either charge and the center of the dipole, leading to a factor of two in the definition. Elementary definition: A stronger mathematical definition is to use vector algebra, since a quantity with magnitude and direction, like the dipole moment of two point charges, can be expressed in vector form where d is the displacement vector pointing from the negative charge to the positive charge. The electric dipole moment vector p also points from the negative charge to the positive charge. With this definition the dipole direction tends to align itself with an external electric field (and note that the electric flux lines produced by the charges of the dipole itself, which point from positive charge to negative charge then tend to oppose the flux lines of the external field). Note that this sign convention is used in physics, while the opposite sign convention for the dipole, from the positive charge to the negative charge, is used in chemistry.An idealization of this two-charge system is the electrical point dipole consisting of two (infinite) charges only infinitesimally separated, but with a finite p. This quantity is used in the definition of polarization density. Energy and torque: An object with an electric dipole moment p is subject to a torque τ when placed in an external electric field E. The torque tends to align the dipole with the field. A dipole aligned parallel to an electric field has lower potential energy than a dipole making some angle with it. For a spatially uniform electric field across the small region occupied by the dipole, the energy U and the torque τ are given by The scalar dot "⋅" product and the negative sign shows the potential energy minimises when the dipole is parallel with field and is maximum when antiparallel while zero when perpendicular. The symbol "×" refers to the vector cross product. The E-field vector and the dipole vector define a plane, and the torque is directed normal to that plane with the direction given by the right-hand rule. Note that a dipole in such a uniform field may twist and oscillate but receives no overall net force with no linear acceleration of the dipole. The dipole twists to align with the external field. 
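The elementary definitions above can be made concrete with a short numerical sketch using the standard expressions p = qd (with d pointing from the negative to the positive charge), U = −p·E and τ = p × E for a uniform field; the particular charge, separation and field values below are arbitrary illustrations.

```python
import numpy as np

q = 1.602e-19                        # charge magnitude in coulombs
d = np.array([0.0, 0.0, 1.0e-10])    # displacement from -q to +q, metres
p = q * d                            # dipole moment p = q d, in C*m

E = np.array([1.0e5, 0.0, 1.0e5])    # uniform external field, V/m

U = -np.dot(p, E)                    # potential energy, minimal when p is parallel to E
tau = np.cross(p, E)                 # torque, zero once p is aligned with E
print(p, U, tau)
```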
Energy and torque: However in a non-uniform electric field a dipole may indeed receive a net force since the force on one end of the dipole no longer balances that on the other end. It can be shown that this net force is generally parallel to the dipole moment. Expression (general case): More generally, for a continuous distribution of charge confined to a volume V, the corresponding expression for the dipole moment is: where r locates the point of observation and d3r′ denotes an elementary volume in V. For an array of point charges, the charge density becomes a sum of Dirac delta functions: where each ri is a vector from some reference point to the charge qi. Substitution into the above integration formula provides: This expression is equivalent to the previous expression in the case of charge neutrality and N = 2. For two opposite charges, denoting the location of the positive charge of the pair as r+ and the location of the negative charge as r−: showing that the dipole moment vector is directed from the negative charge to the positive charge because the position vector of a point is directed outward from the origin to that point. Expression (general case): The dipole moment is particularly useful in the context of an overall neutral system of charges, for example a pair of opposite charges, or a neutral conductor in a uniform electric field. For such a system of charges, visualized as an array of paired opposite charges, the relation for electric dipole moment is: where r is the point of observation, and di = r'i − ri, ri being the position of the negative charge in the dipole i, and r'i the position of the positive charge. Expression (general case): This is the vector sum of the individual dipole moments of the neutral charge pairs. (Because of overall charge neutrality, the dipole moment is independent of the observer's position r.) Thus, the value of p is independent of the choice of reference point, provided the overall charge of the system is zero. Expression (general case): When discussing the dipole moment of a non-neutral system, such as the dipole moment of the proton, a dependence on the choice of reference point arises. In such cases it is conventional to choose the reference point to be the center of mass of the system, not some arbitrary origin. This choice is not only a matter of convention: the notion of dipole moment is essentially derived from the mechanical notion of torque, and as in mechanics, it is computationally and theoretically useful to choose the center of mass as the observation point. For a charged molecule the center of charge should be the reference point instead of the center of mass. For neutral systems the reference point is not important, and the dipole moment is an intrinsic property of the system. Potential and field of an electric dipole: An ideal dipole consists of two opposite charges with infinitesimal separation. We compute the potential and field of such an ideal dipole starting with two opposite charges at separation d > 0, and taking the limit as d → 0. Two closely spaced opposite charges ±q have a potential of the form: corresponding to the charge density by Coulomb's law, where the charge separation is: Let R denote the position vector relative to the midpoint r++r−2 , and R^ the corresponding unit vector: Taylor expansion in dR (see multipole expansion and quadrupole) expresses this potential as a series. Potential and field of an electric dipole: where higher order terms in the series are vanishing at large distances, R, compared to d. 
Here, the electric dipole moment p is, as above: The result for the dipole potential also can be expressed as: which relates the dipole potential to that of a point charge. A key point is that the potential of the dipole falls off faster with distance R than that of the point charge. Potential and field of an electric dipole: The electric field of the dipole is the negative gradient of the potential, leading to: Thus, although two closely spaced opposite charges are not quite an ideal electric dipole (because their potential at short distances is not that of a dipole), at distances much larger than their separation, their dipole moment p appears directly in their potential and field. Potential and field of an electric dipole: As the two charges are brought closer together (d is made smaller), the dipole term in the multipole expansion based on the ratio d/R becomes the only significant term at ever closer distances R, and in the limit of infinitesimal separation the dipole term in this expansion is all that matters. As d is made infinitesimal, however, the dipole charge must be made to increase to hold p constant. This limiting process results in a "point dipole". Dipole moment density and polarization density: The dipole moment of an array of charges, determines the degree of polarity of the array, but for a neutral array it is simply a vector property of the array with no information about the array's absolute location. The dipole moment density of the array p(r) contains both the location of the array and its dipole moment. When it comes time to calculate the electric field in some region containing the array, Maxwell's equations are solved, and the information about the charge array is contained in the polarization density P(r) of Maxwell's equations. Depending upon how fine-grained an assessment of the electric field is required, more or less information about the charge array will have to be expressed by P(r). As explained below, sometimes it is sufficiently accurate to take P(r) = p(r). Sometimes a more detailed description is needed (for example, supplementing the dipole moment density with an additional quadrupole density) and sometimes even more elaborate versions of P(r) are necessary. Dipole moment density and polarization density: It now is explored just in what way the polarization density P(r) that enters Maxwell's equations is related to the dipole moment p of an overall neutral array of charges, and also to the dipole moment density p(r) (which describes not only the dipole moment, but also the array location). Only static situations are considered in what follows, so P(r) has no time dependence, and there is no displacement current. First is some discussion of the polarization density P(r). That discussion is followed with several particular examples. Dipole moment density and polarization density: A formulation of Maxwell's equations based upon division of charges and currents into "free" and "bound" charges and currents leads to introduction of the D- and P-fields: where P is called the polarization density. In this formulation, the divergence of this equation yields: and as the divergence term in E is the total charge, and ρf is "free charge", we are left with the relation: with ρb as the bound charge, by which is meant the difference between the total and the free charge densities. 
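The preceding expressions can be illustrated numerically. The sketch below computes p = Σ qi ri for the two charges (confirming that, for this neutral pair, the result does not depend on the chosen origin) and compares the exact two-charge potential with the dipole approximation φ ≈ p·R̂/(4πε0R²), which dominates as R grows relative to the separation d; all numerical values are arbitrary illustrations.

```python
import numpy as np

eps0 = 8.8541878128e-12                     # vacuum permittivity, F/m
q, d = 1.0e-9, 1.0e-3                       # 1 nC charges separated by 1 mm along z
r_plus, r_minus = np.array([0.0, 0.0, d/2]), np.array([0.0, 0.0, -d/2])

# p = sum_i q_i r_i; shifting the origin changes nothing because the net charge is zero.
def dipole_moment(origin):
    return q * (r_plus - origin) + (-q) * (r_minus - origin)

p = dipole_moment(np.zeros(3))
print(p, dipole_moment(np.array([1.0, 2.0, 3.0])))   # identical vectors

def phi_exact(r):
    return q / (4*np.pi*eps0) * (1/np.linalg.norm(r - r_plus) - 1/np.linalg.norm(r - r_minus))

def phi_dipole(r):
    R = np.linalg.norm(r)
    return np.dot(p, r/R) / (4*np.pi*eps0 * R**2)

# The dipole term dominates the potential once R >> d.
for R in (2*d, 10*d, 100*d):
    r = np.array([0.0, R/np.sqrt(2), R/np.sqrt(2)])  # 45 degrees off the dipole axis
    print(R/d, phi_exact(r), phi_dipole(r))
```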
Dipole moment density and polarization density: As an aside, in the absence of magnetic effects, Maxwell's equations specify that which implies Applying Helmholtz decomposition: for some scalar potential φ, and: Suppose the charges are divided into free and bound, and the potential is divided into Satisfaction of the boundary conditions upon φ may be divided arbitrarily between φf and φb because only the sum φ must satisfy these conditions. It follows that P is simply proportional to the electric field due to the charges selected as bound, with boundary conditions that prove convenient. In particular, when no free charge is present, one possible choice is P = ε0 E. Dipole moment density and polarization density: Next is discussed how several different dipole moment descriptions of a medium relate to the polarization entering Maxwell's equations. Dipole moment density and polarization density: Medium with charge and dipole densities As described next, a model for polarization moment density p(r) results in a polarization restricted to the same model. For a smoothly varying dipole moment distribution p(r), the corresponding bound charge density is simply as we will establish shortly via integration by parts. However, if p(r) exhibits an abrupt step in dipole moment at a boundary between two regions, ∇·p(r) results in a surface charge component of bound charge. This surface charge can be treated through a surface integral, or by using discontinuity conditions at the boundary, as illustrated in the various examples below. Dipole moment density and polarization density: As a first example relating dipole moment to polarization, consider a medium made up of a continuous charge density ρ(r) and a continuous dipole moment distribution p(r). The potential at a position r is: where ρ(r) is the unpaired charge density, and p(r) is the dipole moment density. Using an identity: the polarization integral can be transformed: where the vector identity was used in the last steps. The first term can be transformed to an integral over the surface bounding the volume of integration, and contributes a surface charge density, discussed later. Putting this result back into the potential, and ignoring the surface charge for now: where the volume integration extends only up to the bounding surface, and does not include this surface. Dipole moment density and polarization density: The potential is determined by the total charge, which the above shows consists of: showing that: In short, the dipole moment density p(r) plays the role of the polarization density P for this medium. Notice, p(r) has a non-zero divergence equal to the bound charge density (as modeled in this approximation). Dipole moment density and polarization density: It may be noted that this approach can be extended to include all the multipoles: dipole, quadrupole, etc. Using the relation: the polarization density is found to be: where the added terms are meant to indicate contributions from higher multipoles. Evidently, inclusion of higher multipoles signifies that the polarization density P no longer is determined by a dipole moment density p alone. For example, in considering scattering from a charge array, different multipoles scatter an electromagnetic wave differently and independently, requiring a representation of the charges that goes beyond the dipole approximation. Dipole moment density and polarization density: Surface charge Above, discussion was deferred for the first term in the expression for the potential due to the dipoles. 
Integrating the divergence results in a surface charge. The figure at the right provides an intuitive idea of why a surface charge arises. The figure shows a uniform array of identical dipoles between two surfaces. Internally, the heads and tails of dipoles are adjacent and cancel. At the bounding surfaces, however, no cancellation occurs. Instead, on one surface the dipole heads create a positive surface charge, while at the opposite surface the dipole tails create a negative surface charge. These two opposite surface charges create a net electric field in a direction opposite to the direction of the dipoles. Dipole moment density and polarization density: This idea is given mathematical form using the potential expression above. Ignoring the free charge, the potential is: Using the divergence theorem, the divergence term transforms into the surface integral: with dA0 an element of surface area of the volume. In the event that p(r) is a constant, only the surface term survives: with dA0 an elementary area of the surface bounding the charges. In words, the potential due to a constant p inside the surface is equivalent to that of a surface charge which is positive for surface elements with a component in the direction of p and negative for surface elements pointed oppositely. (Usually the direction of a surface element is taken to be that of the outward normal to the surface at the location of the element.) If the bounding surface is a sphere, and the point of observation is at the center of this sphere, the integration over the surface of the sphere is zero: the positive and negative surface charge contributions to the potential cancel. If the point of observation is off-center, however, a net potential can result (depending upon the situation) because the positive and negative charges are at different distances from the point of observation. The field due to the surface charge is: which, at the center of a spherical bounding surface is not zero (the fields of negative and positive charges on opposite sides of the center add because both fields point the same way) but is instead: If we suppose the polarization of the dipoles was induced by an external field, the polarization field opposes the applied field and sometimes is called a depolarization field. In the case when the polarization is outside a spherical cavity, the field in the cavity due to the surrounding dipoles is in the same direction as the polarization.In particular, if the electric susceptibility is introduced through the approximation: where E, in this case and in the following, represent the external field which induces the polarization. Dipole moment density and polarization density: Then: Whenever χ(r) is used to model a step discontinuity at the boundary between two regions, the step produces a surface charge layer. For example, integrating along a normal to the bounding surface from a point just interior to one surface to another point just exterior: where An, Ωn indicate the area and volume of an elementary region straddling the boundary between the regions, and n^ a unit normal to the surface. The right side vanishes as the volume shrinks, inasmuch as ρb is finite, indicating a discontinuity in E, and therefore a surface charge. 
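The emergence of surface charge from the divergence of the dipole moment density can be seen in a one-dimensional sketch: for a polarization that is nearly uniform inside a slab and drops smoothly to zero over a thin transition layer at each face, the bound charge ρb = −dP/dz integrates to zero overall but concentrates in thin layers of opposite sign at the two faces, as described above. The profile used below is an arbitrary illustration.

```python
import numpy as np

z = np.linspace(-2.0, 2.0, 4001)
dz = z[1] - z[0]
width = 0.02    # thickness of the transition layer at each face of the slab

# P ~ 1 inside |z| < 1 and ~ 0 outside, with smooth edges (polarization along +z).
P = 0.5 * (np.tanh((z + 1.0) / width) - np.tanh((z - 1.0) / width))

rho_b = -np.gradient(P, dz)                 # bound charge density rho_b = -dP/dz

print(np.trapz(rho_b, z))                   # ~0: the slab stays neutral overall
print(np.trapz(rho_b[z < 0], z[z < 0]))     # ~ -1: dipole tails give negative charge at the z = -1 face
print(np.trapz(rho_b[z > 0], z[z > 0]))     # ~ +1: dipole heads give positive charge at the z = +1 face
```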
That is, where the modeled medium includes a step in permittivity, the polarization density corresponding to the dipole moment density necessarily includes the contribution of a surface charge.A physically more realistic modeling of p(r) would have the dipole moment density drop off rapidly, but smoothly to zero at the boundary of the confining region, rather than making a sudden step to zero density. Then the surface charge will not concentrate in an infinitely thin surface, but instead, being the divergence of a smoothly varying dipole moment density, will distribute itself throughout a thin, but finite transition layer. Dipole moment density and polarization density: Dielectric sphere in uniform external electric field The above general remarks about surface charge are made more concrete by considering the example of a dielectric sphere in a uniform electric field. The sphere is found to adopt a surface charge related to the dipole moment of its interior. Dipole moment density and polarization density: A uniform external electric field is supposed to point in the z-direction, and spherical-polar coordinates are introduced so the potential created by this field is: The sphere is assumed to be described by a dielectric constant κ, that is, and inside the sphere the potential satisfies Laplace's equation. Skipping a few details, the solution inside the sphere is: while outside the sphere: At large distances, φ> → φ∞ so B = −E∞ . Continuity of potential and of the radial component of displacement D = κε0E determine the other two constants. Supposing the radius of the sphere is R, As a consequence, the potential is: which is the potential due to applied field and, in addition, a dipole in the direction of the applied field (the z-direction) of dipole moment: or, per unit volume: The factor (κ − 1)/(κ + 2) is called the Clausius–Mossotti factor and shows that the induced polarization flips sign if κ < 1. Of course, this cannot happen in this example, but in an example with two different dielectrics κ is replaced by the ratio of the inner to outer region dielectric constants, which can be greater or smaller than one. The potential inside the sphere is: leading to the field inside the sphere: showing the depolarizing effect of the dipole. Notice that the field inside the sphere is uniform and parallel to the applied field. The dipole moment is uniform throughout the interior of the sphere. The surface charge density on the sphere is the difference between the radial field components: This linear dielectric example shows that the dielectric constant treatment is equivalent to the uniform dipole moment model and leads to zero charge everywhere except for the surface charge at the boundary of the sphere. Dipole moment density and polarization density: General media If observation is confined to regions sufficiently remote from a system of charges, a multipole expansion of the exact polarization density can be made. By truncating this expansion (for example, retaining only the dipole terms, or only the dipole and quadrupole terms, or etc.), the results of the previous section are regained. In particular, truncating the expansion at the dipole term, the result is indistinguishable from the polarization density generated by a uniform dipole moment confined to the charge region. To the accuracy of this dipole approximation, as shown in the previous section, the dipole moment density p(r) (which includes not only p but the location of p) serves as P(r). 
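The dielectric-sphere results above can be evaluated numerically. The sketch below uses the standard expressions from that derivation, p = 4πε0R³ (κ − 1)/(κ + 2) E∞ for the induced dipole moment and Ein = 3E∞/(κ + 2) for the uniform interior field; the particular values of κ, R and E∞ are arbitrary.

```python
import numpy as np

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

def dielectric_sphere(kappa, radius, E_applied):
    """Induced dipole moment and interior field of a linear dielectric sphere
    in a uniform applied field, via the Clausius-Mossotti factor."""
    cm = (kappa - 1.0) / (kappa + 2.0)                   # Clausius-Mossotti factor
    p = 4.0 * np.pi * eps0 * radius**3 * cm * E_applied  # induced dipole moment, C*m
    E_inside = 3.0 * E_applied / (kappa + 2.0)           # uniform field inside the sphere
    return cm, p, E_inside

# A 1 cm sphere with kappa = 5 in a 10 kV/m field:
print(dielectric_sphere(kappa=5.0, radius=0.01, E_applied=1.0e4))
```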
Dipole moment density and polarization density: At locations inside the charge array, to connect an array of paired charges to an approximation involving only a dipole moment density p(r) requires additional considerations. The simplest approximation is to replace the charge array with a model of ideal (infinitesimally spaced) dipoles. In particular, as in the example above that uses a constant dipole moment density confined to a finite region, a surface charge and depolarization field results. A more general version of this model (which allows the polarization to vary with position) is the customary approach using electric susceptibility or electrical permittivity. Dipole moment density and polarization density: A more complex model of the point charge array introduces an effective medium by averaging the microscopic charges; for example, the averaging can arrange that only dipole fields play a role. A related approach is to divide the charges into those nearby the point of observation, and those far enough away to allow a multipole expansion. The nearby charges then give rise to local field effects. In a common model of this type, the distant charges are treated as a homogeneous medium using a dielectric constant, and the nearby charges are treated only in a dipole approximation. The approximation of a medium or an array of charges by only dipoles and their associated dipole moment density is sometimes called the point dipole approximation, the discrete dipole approximation, or simply the dipole approximation. Electric dipole moments of fundamental particles: Not to be confused with spin which refers to the magnetic dipole moments of particles, much experimental work is continuing on measuring the electric dipole moments (EDM; or anomalous electric dipole moment) of fundamental and composite particles, namely those of the electron and neutron, respectively. As EDMs violate both the parity (P) and time-reversal (T) symmetries, their values yield a mostly model-independent measure of CP-violation in nature (assuming CPT symmetry is valid). Therefore, values for these EDMs place strong constraints upon the scale of CP-violation that extensions to the standard model of particle physics may allow. Current generations of experiments are designed to be sensitive to the supersymmetry range of EDMs, providing complementary experiments to those done at the LHC.Indeed, many theories are inconsistent with the current limits and have effectively been ruled out, and established theory permits a much larger value than these limits, leading to the strong CP problem and prompting searches for new particles such as the axion.We know at least in the Yukawa sector from neutral kaon oscillations that CP is broken. Experiments have been performed to measure the electric dipole moment of various particles like the electron and the neutron. Many models beyond the standard model with additional CP-violating terms generically predict a nonzero electric dipole moment and are hence sensitive to such new physics. Instanton corrections from a nonzero θ term in quantum chromodynamics predict a nonzero electric dipole moment for the neutron and proton, which have not been observed in experiments (where the best bounds come from analysing neutrons). This is the strong CP problem and is a prediction of chiral perturbation theory. Dipole moments of molecules: Dipole moments in molecules are responsible for the behavior of a substance in the presence of external electric fields. 
The dipoles tend to align with the external field, which can be constant or time-dependent. This effect forms the basis of a modern experimental technique called dielectric spectroscopy. Dipole moments of molecules: Dipole moments can be found in common molecules such as water and also in biomolecules such as proteins. By means of the total dipole moment of some material one can compute the dielectric constant, which is related to the more intuitive concept of conductivity. If MTot is the total dipole moment of the sample, then the dielectric constant is given by a constant k times the time correlation function ⟨MTot(t)·MTot(t=0)⟩ of the total dipole moment. In general the total dipole moment has contributions coming from translations and rotations of the molecules in the sample; therefore, the dielectric constant (and the conductivity) has contributions from both terms. This approach can be generalized to compute the frequency-dependent dielectric function. It is possible to calculate dipole moments from electronic structure theory, either as a response to constant electric fields or from the density matrix. Such values, however, are not directly comparable to experiment due to the potential presence of nuclear quantum effects, which can be substantial for even simple systems like the ammonia molecule. Coupled cluster theory (especially CCSD(T)) can give very accurate dipole moments, although it is possible to get reasonable estimates (within about 5%) from density functional theory, especially if hybrid or double hybrid functionals are employed. The dipole moment of a molecule can also be calculated based on the molecular structure using the concept of group contribution methods.
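The correlation-function route described above is straightforward to sketch numerically. The following Python fragment uses a hypothetical data layout and made-up partial charges (not taken from any particular simulation package): it forms the total dipole moment MTot(t) from point charges and evaluates its time autocorrelation.

```python
import numpy as np

def total_dipole(positions, charges):
    """positions: (n_frames, n_atoms, 3); charges: (n_atoms,). Returns M_Tot(t), shape (n_frames, 3)."""
    return np.einsum("i,tij->tj", charges, positions)

def dipole_autocorrelation(M, max_lag):
    """Average <M(t0+lag) . M(t0)> over time origins t0, for lag = 0 .. max_lag-1."""
    n = len(M)
    return np.array([np.mean(np.sum(M[lag:n] * M[0:n - lag], axis=1))
                     for lag in range(max_lag)])

# Toy data purely for illustration: three point charges over 100 frames.
rng = np.random.default_rng(0)
charges = np.array([-0.8, 0.4, 0.4])        # assumed partial charges (e)
positions = rng.normal(size=(100, 3, 3))    # assumed coordinates
M = total_dipole(positions, charges)
print(dipole_autocorrelation(M, max_lag=5))
```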
**Cilansetron** Cilansetron: Cilansetron is an experimental drug, a 5-HT3 antagonist under development by Solvay Pharmaceuticals. 5-HT3 receptors mediate a range of effects, from nausea to excess bowel movements. In conditions such as irritable bowel syndrome (IBS), the receptors have become faulty or oversensitive. 5-HT3 antagonists work by blocking the nervous and chemical signals from reaching these receptors. Cilansetron: Studies have shown that the drug can improve quality of life in men and women with diarrhea-predominant IBS. Cilansetron is the first 5-HT3 antagonist specifically designed for IBS that is effective in men as well as women. In 2005, Solvay received a response from the U.S. Food and Drug Administration that cilansetron is not approvable without additional clinical trials; further development has been discontinued.
**COVID-19 pandemic in Guyana** COVID-19 pandemic in Guyana: The COVID-19 pandemic in Guyana was a part of the worldwide pandemic of coronavirus disease 2019 (COVID-19) caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The virus was confirmed to have reached Guyana on 11 March 2020. The first case was a 52-year-old woman who had travelled from New York and had underlying health conditions, including diabetes and hypertension. The woman died at the Georgetown Public Hospital. Background: On 12 January 2020, the World Health Organization (WHO) confirmed that a novel coronavirus was the cause of a respiratory illness in a cluster of people in Wuhan City, Hubei Province, China, which was reported to the WHO on 31 December 2019. The case fatality ratio for COVID-19 has been much lower than that of SARS in 2003, but the transmission has been significantly greater, with a significant total death toll. Timeline: March On 11 March 2020, the first case of coronavirus was recorded in Guyana: a 52-year-old woman with underlying health conditions, including diabetes and hypertension. The woman died at the Georgetown Public Hospital. On 18 March, the Guyana Civil Aviation Authority closed the country's airports to incoming international passenger flights for 14 days. All schools were closed. On 19 March, the Guyana Civil Aviation Authority (GCAA) closed Guyanese airspace to all international arrivals. On 23 March, the Courts of Guyana announced limited or suspended operations. On 25 March, Karen Gordon-Boyle, Deputy Chief Medical Officer, announced that only people exhibiting signs of COVID-19 infection or who had traveled abroad would be tested. The Pan American Health Organization had supplied Guyana with 700 testing kits and 400 screening kits. On 31 March, Ubraj Narine, the Mayor of Georgetown, said that he would not be implementing lockdowns or curfews, in contrast to neighboring cities. Timeline: April On 1 April 2020, a second death was announced. The deceased was a 38-year-old former Emergency Medical Technician. The total number of cases was 12: 10 cases in Region 4, 1 in Region 3 and 1 in Region 6. 52 people had been tested thus far. On 2 April, President David Granger announced the closure of bars, restaurants and other places of entertainment between 18:00 and 06:00. On 3 April, Guyana had reported 19 cases and 4 deaths, giving the country the world's highest COVID-19 case fatality rate at 21.05% (a quick check of this figure appears below). Timeline: The Minister of Health announced that all residents of Guyana would be restricted to their homes/yards. A national curfew would come into effect from 6 PM until 6 AM. The curfew had already been declared on 30 March in Region 10. A limited number of essential services would be operating daily with reduced hours of service. Timeline: The Civil Defence Commission (CDC) started a relief program consisting of food and cleaning essentials for the most vulnerable communities. On 6 April, Guyana had reported 29 cases. On 8 April, it was announced that Colonel John Lewis, who had died on 7 April, had contracted COVID-19. He had not been tested until after he died. His wife had died from pneumonia 12 days earlier. Timeline: All post offices would be closed from 9 April onward. Arrangements were being made for pensioners to collect pensions. On 9 April, the European Union announced a grant of €8 million (US$8.6 million), which would be implemented by the Caribbean Public Health Agency, for the fight against the coronavirus. Guyana is one of the 24 members of CARPHA.
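The 3 April case fatality rate quoted above follows directly from the counts reported in the same entry; a minimal arithmetic check:

```python
# Quick check of the case fatality rate quoted for 3 April 2020:
# 4 deaths among 19 confirmed cases, as reported above.
deaths, cases = 4, 19
print(f"{deaths / cases * 100:.2f}%")  # 21.05%
```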
Timeline: A 6-year-old girl was rushed to the Linden Hospital Complex with serious medical conditions and had been scheduled to be transferred to Georgetown; however, she died within 90 minutes. She would be tested for COVID-19 because she had a fever and trouble breathing, symptoms of the virus. The result of the test was negative. Volda Lawrence, Public Health Minister, announced that there had been no new cases on 9 April and that a total of 152 people had been tested. On 11 April, the Civil Defence Commission announced that there were four quarantine facilities with a total capacity of 254. On 12 April, the Ministry of Health allowed private hospitals to test for COVID-19. Timeline: At least 34 Guyanese had died of COVID-19 in New York City by early April according to the Consulate General of Guyana in New York. Prime Minister Moses Nagamootoo said that 10,000 to 12,000 people were stranded in New York alone, but that at the time no repatriation flights would take place. 200 American citizens were repatriated on 14 April by Eastern Airlines. Timeline: Guyana was set to receive 30,000 masks and ventilators from China. On 15 April, the Ministry of Health announced that of the infected cases, 14 were from the East Coast of Demerara, five from the East Bank of Demerara and 17 within central Georgetown, which meant that Region 4 had 86% of all COVID-19 cases. On 18 April, indigenous villages throughout the regions were concerned about food shortages due to significant increases in costs, especially of freight, caused by the pandemic. Up to that time the CDC had not delivered any aid packages due to a reconstruction of their long-term care program. Timeline: A seventh death was recorded at the ICU of Georgetown Public Hospital. On 19 April, PAHO announced 7,000 additional COVID-19 test kits would be sent, adding to the initial 2,000 test kits the country had. On 21 April, Marvin Pearce, a Guyanese political activist and supporter of APNU+AFC, died in the United States from COVID-19 at the age of 44. Suriname and Guyana agreed to allow legitimate trade over the Courantyne River. The river, which forms the border between the countries, had been closed, which had resulted in food and fuel shortages in the Amerindian villages of Orealla and Siparuta. The border would remain closed to people. On 23 April, Guyana dispatched mobile COVID-19 testing units across the country, because there were suspicions that there were more cases due to the limited amount of testing. Guyana now had 9,000 test kits. On 24 April, Moses Nagamootoo, Chairman of the COVID-19 task force, said that foreign aid had been halted by the irregularities surrounding the 2020 Guyanese general election. Guyana was excluded by the World Bank from the first batch of aid packages. The lack of a budget for 2020 made matters worse. On 27 April, the Public Health Ministry announced that 464 tests had been performed, an increase of nine tests compared to the day before. The ninth death from COVID-19 was a 67-year-old man who died at approximately 20:20 on 29 April. On 30 April, ExxonMobil and its partners donated GY$60 million (~US$290,000) for the fight against COVID-19. Forty million Guyanese dollars would go to the CDC, while the Salvation Army and Rotary Guyana would receive GY$10 million each. Timeline: May On 6 May 2020, it was disclosed that the tenth person to die of COVID-19 had been tested only after passing away due to complications from the virus. The victim was a 64-year-old man in the Palms Geriatric Home.
Twelve staff members and 24 other bedridden persons were quarantined. The number of tests began improving, and up to that time, 714 persons had been tested. Timeline: Ten Guyanese were arrested for trying to cross into Brazil illegally and were placed in quarantine. The situation in Brazil with regard to COVID-19 did not deter crossings. On 10 May, another resident of the Palms Geriatric Home tested positive for the virus. On 11 May, it was reported that an 11-year-old girl, who was one of the first cases, still tested positive after 56 days and had to remain in quarantine. On 12 May, the virus was present in seven regions. Region 7 (Cuyuni-Mazaruni) was the newest region. The other regions had been 1 (Barima-Waini), 3 (Essequibo Islands-West Demerara), 4 (Demerara-Mahaica), 6 (East Berbice-Corentyne), 9 (Upper Takutu-Upper Essequibo) and 10 (Upper Demerara-Berbice). Timeline: A man who was quarantined with COVID-19 in Lethem escaped and was captured after crossing the closed border with Brazil that same day, raising concerns about legalities and his contacts during the escape. On 20 May, random testing performed among the Presidential Guard resulted in eight members testing positive for COVID-19. Timeline: August On 18 August 2020, seventeen people were arrested at a bar in Montrose for violating COVID-19 restrictions put in place by police. It was announced on 19 August that President Irfaan Ali would address the nation on the government's response to the COVID-19 pandemic that night, amid a sharp rise in cases. The Ministry of Health (MOH) reported that another person who tested positive for the novel coronavirus (COVID-19) had died at the Georgetown Public Hospital Corporation (GPHC). The deceased was a 43-year-old woman and a patient of the transitional ward at the GPHC. Upon admission to the GPHC, a swab test was done and, following her death, the results came back positive. Another two persons who tested positive for the novel coronavirus (COVID-19) had died at the Bartica Hospital (Cuyuni-Mazaruni), Region 7. The patients who died at the hospital were a man and a woman, aged 55 and 41 respectively. By that date, the country had recorded 776 confirmed cases of COVID-19, of which 381 had recovered. There had been 29 COVID-19-related deaths (an illustrative active-case calculation appears below). Preventive measures: All borders, airports, and ports are closed for passengers; all schools have been closed; a curfew has been instituted between 18:00 and 06:00; all non-essential businesses must close; all post offices are closed; everybody should stay at home except for essential journeys; public transport may only carry half the number of passengers they are licensed to carry. Disputed territory with Venezuela: The International Court of Justice planned to discuss the Guyana–Venezuela border dispute over Guayana Esequiba in March 2020. The hearing was postponed due to the pandemic. The first hearing was finally carried out on 30 June 2020, but Venezuela did not participate, saying that the ICJ lacked jurisdiction. The hearing was held by video conference due to the pandemic. Notable deaths: John Percy Leon Lewis (13 February 1943 – 7 April 2020), Guyanese military officer and president of the Guyana Rugby Football Union; Samuel Wilson (c.1942 – 9 September 2020), former toshao (indigenous village chief) of Batavia, Cuyuni-Mazaruni. Statistics: Charts of new cases per day, deaths per day, active cases per day, and the chronology of active cases were compiled; gaps in data were completed using Consulytic Caribbean.
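For illustration only, the active-case count implied by the cumulative figures quoted above for 19 August 2020 can be derived as follows, assuming active = confirmed − recovered − deaths (this relation is an assumption, not a figure stated in the article):

```python
# Illustrative arithmetic, not an official statistic: active cases implied by
# the 19 August 2020 cumulative figures reported above,
# assuming active = confirmed - recovered - deaths.
confirmed, recovered, deaths = 776, 381, 29
print(confirmed - recovered - deaths)  # 366
```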
**Crop rotation** Crop rotation: Crop rotation is the practice of growing a series of different types of crops in the same area across a sequence of growing seasons. This practice reduces the reliance of crops on one set of nutrients, reduces pest and weed pressure, and lowers the probability of developing resistant pests and weeds. Crop rotation: Growing the same crop in the same place for many years in a row, known as monocropping, gradually depletes the soil of certain nutrients and selects for both a highly competitive pest and weed community. Without balancing nutrient use and diversifying pest and weed communities, the productivity of monocultures is highly dependent on external inputs that may be harmful to the soil's fertility. Conversely, a well-designed crop rotation can reduce the need for synthetic fertilizers and herbicides by better using ecosystem services from a diverse set of crops. Additionally, crop rotations can improve soil structure and organic matter, which reduces erosion and increases farm system resilience. History: Agriculturalists have long recognized that suitable rotations such as planting spring crops for livestock in place of grains for human consumption make it possible to restore or to maintain productive soils. Ancient Near Eastern farmers practiced crop rotation in 6000 BC without understanding the chemistry, alternately planting legumes and cereals. Unknowingly, this was the start of a practice that would soon benefit many farmers. History: Two-field systems Under a two-field rotation, half the land was planted in a year, while the other half lay fallow. Then, in the next year, the two fields were reversed. In China both the two-field and three-field systems had been used since the Eastern Zhou period. From the times of Charlemagne (died 814), farmers in Europe transitioned from a two-field crop rotation to a three-field crop rotation. History: Three-field systems From the end of the Middle Ages until the 20th century, Europe's farmers practiced a three-field rotation, where available lands were divided into three sections. One section was planted in the autumn with rye or winter wheat, followed by spring oats or barley; the second section grew crops such as peas, lentils, or beans; and the third field was left fallow. The three fields were rotated in this manner so that every three years, one of the fields would rest and lie fallow. Under the two-field system, if one has a total of 600 acres (2.4 km2) of fertile land, one would only plant 300 acres. Under the new three-field rotation system, one would plant (and therefore harvest) 400 acres (see the short example below). But the additional crops had a more significant effect than mere quantitative productivity. Since the spring crops were mostly legumes, they increased the overall nutrition of the people of Northern Europe. History: Four-field rotations Farmers in the region of Waasland (in present-day northern Belgium) pioneered a four-field rotation in the early 16th century, and the British agriculturist Charles Townshend (1674–1738) popularised this system in the 18th century. The sequence of four crops (wheat, turnips, barley and clover) included a fodder crop and a grazing crop, allowing livestock to be bred year-round. The four-field crop rotation became a key development in the British Agricultural Revolution. The rotation between arable and ley is sometimes called ley farming.
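The planted-acreage comparison above is simple arithmetic; a minimal restatement with the numbers given in the text (600 acres total):

```python
# Planted acreage under the two rotation schemes described above,
# for the 600-acre example given in the text.
total_acres = 600
two_field = total_acres * 1 // 2    # half fallow      -> 300 acres planted
three_field = total_acres * 2 // 3  # one third fallow -> 400 acres planted
print(two_field, three_field)
```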
History: Modern developments George Washington Carver (1860s–1943) studied crop-rotation methods in the United States, teaching southern farmers to rotate soil-depleting crops like cotton with soil-enriching crops like peanuts and peas. History: In the Green Revolution of the mid-20th century the traditional practice of crop rotation gave way in some parts of the world to the practice of supplementing the chemical inputs to the soil through topdressing with fertilizers, adding (for example) ammonium nitrate or urea and restoring soil pH with lime. Such practices aimed to increase yields, to prepare soil for specialist crops, and to reduce waste and inefficiency by simplifying planting, harvesting, and irrigation. Crop choice: A preliminary assessment of crop interrelationships can be found in how each crop: contributes to soil organic matter (SOM) content; provides for pest management; manages deficient or excess nutrients; contributes to or controls for soil erosion; interbreeds with other crops to produce hybrid offspring; and impacts surrounding food webs and field ecosystems. Crop choice is often related to the goal the farmer is looking to achieve with the rotation, which could be weed management, increasing available nitrogen in the soil, controlling for erosion, or increasing soil structure and biomass, to name a few. When discussing crop rotations, crops are classified in different ways depending on what quality is being assessed: by family, by nutrient needs/benefits, and/or by profitability (i.e. cash crop versus cover crop). For example, giving adequate attention to plant family is essential to mitigating pests and pathogens. However, many farmers have success managing rotations by planning sequencing and cover crops around desirable cash crops. The following is a simplified classification based on crop quality and purpose. Crop choice: Row crops Many crops which are critical for the market, like vegetables, are row crops (that is, grown in tight rows). While often the most profitable for farmers, these crops are more taxing on the soil. Row crops typically have low biomass and shallow roots: this means the plant contributes little residue to the surrounding soil and has limited effects on structure. With much of the soil around the plant exposed to disruption by rainfall and traffic, fields with row crops experience faster breakdown of organic matter by microbes, leaving fewer nutrients for future plants. In short, while these crops may be profitable for the farm, they are nutrient-depleting. Crop rotation practices exist to strike a balance between short-term profitability and long-term productivity. Crop choice: Legumes A great advantage of crop rotation comes from the interrelationship of nitrogen-fixing crops with nitrogen-demanding crops. Legumes, like alfalfa and clover, collect available nitrogen from the atmosphere and store it in nodules on their root structure. When the plant is harvested, the biomass of uncollected roots breaks down, making the stored nitrogen available to future crops. In addition, legumes have heavy tap roots that burrow deep into the ground, lifting soil for better tilth and absorption of water. Crop choice: Grasses and cereals Cereals and grasses are frequent cover crops because of the many advantages they supply to soil quality and structure. Their dense and far-reaching root systems give ample structure to surrounding soil and provide significant biomass for soil organic matter.
Grasses and cereals are key in weed management as they compete with undesired plants for soil space and nutrients. Green manure Green manure is a crop that is mixed into the soil. Both nitrogen-fixing legumes and nutrient scavengers, like grasses, can be used as green manure. Green manure of legumes is an excellent source of nitrogen, especially for organic systems; however, legume biomass does not contribute to lasting soil organic matter like grasses do. Planning a rotation: There are numerous factors that must be taken into consideration when planning a crop rotation. Planning an effective rotation requires weighing fixed and fluctuating production circumstances: market, farm size, labor supply, climate, soil type, growing practices, etc. Moreover, a crop rotation must consider in what condition one crop will leave the soil for the succeeding crop and how one crop can be seeded with another crop. For example, a nitrogen-fixing crop, like a legume, should always precede a nitrogen-depleting one; similarly, a low-residue crop (i.e. a crop with low biomass) should be offset with a high-biomass cover crop, like a mixture of grasses and legumes (a toy check of these two rules is sketched below). There is no limit to the number of crops that can be used in a rotation, or the amount of time a rotation takes to complete. Decisions about rotations are made years prior, seasons prior, or even at the last minute when an opportunity to increase profits or soil quality presents itself. Implementation: Crop rotation systems may be enriched by the influences of other practices such as the addition of livestock and manure, intercropping or multiple cropping, and are common in organic cropping systems. Implementation: Incorporation of livestock Introducing livestock makes the most efficient use of critical sod and cover crops; livestock (through manure) are able to distribute the nutrients in these crops throughout the soil rather than removing nutrients from the farm through the sale of hay. Mixed farming, or the practice of crop cultivation with the incorporation of livestock, can help manage crops in a rotation and cycle nutrients. Crop residues provide animal feed, while the animals provide manure for replenishing crop nutrients and draft power. These processes promote internal nutrient cycling and minimize the need for synthetic fertilizers and large-scale machinery. As an additional benefit, cattle, sheep and/or goats provide milk and can act as a cash crop in times of economic hardship. Implementation: Intercropping Multiple cropping systems, such as intercropping or companion planting, offer more diversity and complexity within the same season or rotation. An example of companion planting is the three sisters, the inter-planting of corn with pole beans and vining squash or pumpkins. In this system, the beans provide nitrogen; the corn provides support for the beans and a "screen" against squash vine borer; the vining squash provides a weed-suppressive canopy and a discouragement for corn-hungry raccoons. Double-cropping is common where two crops, typically of different species, are grown sequentially in the same growing season, or where one crop (e.g. vegetable) is grown continuously with a cover crop (e.g. wheat). This is advantageous for small farms, which often cannot afford to leave cover crops to replenish the soil for extended periods of time, as larger farms can. When multiple cropping is implemented on small farms, these systems can maximize benefits of crop rotation on available land resources.
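As a toy illustration of the sequencing rules stated under "Planning a rotation" above (a nitrogen-fixing crop before a nitrogen-demanding one; a low-residue crop offset by a high-biomass cover crop), the sketch below uses made-up crop attributes and is not an agronomic tool:

```python
# Hedged sketch of the two sequencing rules described in the text,
# with made-up crop attributes purely for illustration.
CROPS = {
    "corn":   {"fixes_n": False, "demands_n": True,  "biomass": "low"},
    "clover": {"fixes_n": True,  "demands_n": False, "biomass": "high"},
    "wheat":  {"fixes_n": False, "demands_n": True,  "biomass": "high"},
    "potato": {"fixes_n": False, "demands_n": True,  "biomass": "low"},
}

def rotation_warnings(sequence):
    """Return warnings for a proposed rotation (a list of crop names)."""
    warnings = []
    for prev, nxt in zip(sequence, sequence[1:]):
        # Rule 1: a nitrogen-demanding crop should follow a nitrogen-fixing one.
        if CROPS[nxt]["demands_n"] and not CROPS[prev]["fixes_n"]:
            warnings.append(f"{nxt} follows {prev}: no nitrogen-fixing crop in between")
        # Rule 2: avoid two low-residue (low-biomass) crops in a row.
        if CROPS[prev]["biomass"] == "low" and CROPS[nxt]["biomass"] == "low":
            warnings.append(f"{prev} -> {nxt}: two low-residue crops in a row")
    return warnings

print(rotation_warnings(["corn", "potato", "clover", "wheat"]))
```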
Implementation: Organic farming Crop rotation is a required practice in the United States for farms seeking organic certification. The “Crop Rotation Practice Standard” for the National Organic Program under the U.S. Code of Federal Regulations, section §205.205, states that farmers are required to implement a crop rotation that maintains or builds soil organic matter, works to control pests, manages and conserves nutrients, and protects against erosion. Producers of perennial crops that aren’t rotated may utilize other practices, such as cover crops, to maintain soil health. Implementation: In addition to lowering the need for inputs (by controlling for pests and weeds and increasing available nutrients), crop rotation helps organic growers increase the amount of biodiversity on their farms. Biodiversity is also a requirement of organic certification; however, there are no rules in place to regulate or reinforce this standard. Increasing the biodiversity of crops has beneficial effects on the surrounding ecosystem and can host a greater diversity of fauna, insects, and beneficial microorganisms in the soil, as found by McDaniel et al. 2014 and Lori et al. 2017. Some studies point to increased nutrient availability from crop rotation under organic systems compared to conventional practices, as organic practices are less likely to inhibit beneficial microbes in soil organic matter. While multiple cropping and intercropping benefit from many of the same principles as crop rotation, they do not satisfy the requirement under the NOP. Benefits: Agronomists describe the benefits to yield in rotated crops as "The Rotation Effect". There are many benefits of rotation systems. The factors related to the increase are broadly due to alleviation of the negative factors of monoculture cropping systems. Specifically, improved nutrition; pest, pathogen, and weed stress reduction; and improved soil structure have been found in some cases to be correlated to beneficial rotation effects. Benefits: Other benefits of rotation cropping systems include production cost advantages. Overall financial risks are more widely distributed over more diverse production of crops and/or livestock. Less reliance is placed on purchased inputs and over time crops can maintain production goals with fewer inputs. This, in tandem with greater short- and long-term yields, makes rotation a powerful tool for improving agricultural systems. Benefits: Soil organic matter The use of different species in rotation allows for increased soil organic matter (SOM), greater soil structure, and improvement of the chemical and biological soil environment for crops. With more SOM, water infiltration and retention improves, providing increased drought tolerance and decreased erosion. Benefits: Soil organic matter is a mix of decaying material from biomass with active microorganisms. Crop rotation, by nature, increases exposure to biomass from sod, green manure, and various other plant debris. The reduced need for intensive tillage under crop rotation allows biomass aggregation to lead to greater nutrient retention and utilization, decreasing the need for added nutrients. With tillage, disruption and oxidation of soil creates a less conducive environment for diversity and proliferation of microorganisms in the soil. These microorganisms are what make nutrients available to plants.
So, where "active" soil organic matter is a key to productive soil, soil with low microbial activity provides significantly fewer nutrients to plants; this is true even though the quantity of biomass left in the soil may be the same. Benefits: Soil microorganisms also decrease pathogen and pest activity through competition. In addition, plants produce root exudates and other chemicals which manipulate their soil environment as well as their weed environment. Thus rotation allows increased yields from nutrient availability but also alleviation of allelopathy and competitive weed environments. Benefits: Carbon sequestration Studies have shown that crop rotations greatly increase soil organic carbon (SOC) content, the main constituent of soil organic matter. Carbon, along with hydrogen and oxygen, is a macronutrient for plants. Highly diverse rotations spanning long periods of time have been shown to be even more effective in increasing SOC, while soil disturbances (e.g. from tillage) are responsible for exponential decline in SOC levels. In Brazil, conversion to no-till methods combined with intensive crop rotations has been shown to produce an SOC sequestration rate of 0.41 tonnes per hectare per year (see the short example below). In addition to enhancing crop productivity, sequestration of atmospheric carbon has great implications in reducing rates of climate change by removing carbon dioxide from the air. Benefits: Nitrogen fixing Rotating crops adds nutrients to the soil. Legumes, plants of the family Fabaceae, for instance, have nodules on their roots which contain nitrogen-fixing bacteria called rhizobia. During a process called nodulation, the rhizobia bacteria use nutrients and water provided by the plant to convert atmospheric nitrogen into ammonia, which is then converted into an organic compound that the plant can use as its nitrogen source. It therefore makes good sense agriculturally to alternate them with cereals (family Poaceae) and other plants that require nitrates. How much nitrogen is made available to the plants depends on factors such as the kind of legume, the effectiveness of rhizobia bacteria, soil conditions, and the availability of elements necessary for plant food. Benefits: Pathogen and pest control Crop rotation is also used to control pests and diseases that can become established in the soil over time. The changing of crops in a sequence decreases the population level of pests by (1) interrupting pest life cycles and (2) interrupting pest habitat. Plants within the same taxonomic family tend to have similar pests and pathogens. By regularly changing crops and keeping the soil occupied by cover crops instead of lying fallow, pest cycles can be broken or limited, especially cycles that benefit from overwintering in residue. For example, root-knot nematode is a serious problem for some plants in warm climates and sandy soils, where it slowly builds up to high levels in the soil, and can severely damage plant productivity by cutting off circulation from the plant roots. Growing a crop that is not a host for root-knot nematode for one season greatly reduces the level of the nematode in the soil, thus making it possible to grow a susceptible crop the following season without needing soil fumigation. Benefits: This principle is of particular use in organic farming, where pest control must be achieved without synthetic pesticides. Benefits: Weed management Integrating certain crops, especially cover crops, into crop rotations is of particular value to weed management. These crops crowd out weeds through competition.
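Purely as an illustration of scale for the Brazilian no-till figure quoted above (0.41 tonnes of SOC per hectare per year), using a hypothetical field size and time span of my own choosing:

```python
# Illustration of the sequestration rate quoted above, applied to a
# hypothetical 50-hectare field over 10 years (both values are assumptions).
rate_t_per_ha_per_yr = 0.41
hectares, years = 50, 10
print(rate_t_per_ha_per_yr * hectares * years, "tonnes of SOC")  # 205.0
```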
In addition, the sod and compost from cover crops and green manure slows the growth of what weeds are still able to make it through the soil, giving the crops further competitive advantage. By slowing the growth and proliferation of weeds while cover crops are cultivated, farmers greatly reduce the presence of weeds for future crops, including shallow-rooted and row crops, which are less resistant to weeds. Cover crops are, therefore, considered conservation crops because they protect otherwise fallow land from becoming overrun with weeds. This system has advantages over other common practices for weed management, such as tillage. Tillage is meant to inhibit growth of weeds by overturning the soil; however, this has a countering effect of exposing weed seeds that may have gotten buried and burying valuable crop seeds. Under crop rotation, the number of viable seeds in the soil is reduced through the reduction of the weed population. Benefits: In addition to their negative impact on crop quality and yield, weeds can slow down the harvesting process. Weeds make farmers less efficient when harvesting, because weeds like bindweed and knotgrass can become tangled in the equipment, resulting in a stop-and-go type of harvest. Benefits: Preventing soil erosion Crop rotation can significantly reduce the amount of soil lost from erosion by water. In areas that are highly susceptible to erosion, farm management practices such as zero and reduced tillage can be supplemented with specific crop rotation methods to reduce raindrop impact, sediment detachment, sediment transport, surface runoff, and soil loss. Protection against soil loss is maximized with rotation methods that leave the greatest mass of crop stubble (plant residue left after harvest) on top of the soil. Stubble cover in contact with the soil minimizes erosion from water by reducing overland flow velocity, stream power, and thus the ability of the water to detach and transport sediment. Preventing soil erosion and surface sealing avoids the disruption and detachment of soil aggregates that cause macropores to become blocked, infiltration to decline, and runoff to increase. This significantly improves the resilience of soils when subjected to periods of erosion and stress. Benefits: When a forage crop breaks down, binding products are formed that act like an adhesive on the soil, which makes particles stick together and form aggregates. The formation of soil aggregates is important for erosion control, as they are better able to resist raindrop impact and water erosion. Soil aggregates also reduce wind erosion, because they are larger particles and are more resistant to abrasion through tillage practices. The effect of crop rotation on erosion control varies by climate. In regions under relatively consistent climate conditions, where annual rainfall and temperature levels can be assumed, rigid crop rotations can produce sufficient plant growth and soil cover. In regions where climate conditions are less predictable, and unexpected periods of rain and drought may occur, a more flexible approach for soil cover by crop rotation is necessary. An opportunity cropping system promotes adequate soil cover under these erratic climate conditions. In an opportunity cropping system, crops are grown when soil water is adequate and there is a reliable sowing window.
This form of cropping system is likely to produce better soil cover than a rigid crop rotation because crops are only sown under optimal conditions, whereas rigid systems are not necessarily sown in the best conditions available. Crop rotations also affect the timing and length of when a field is subject to fallow. This is very important because, depending on a particular region's climate, a field could be the most vulnerable to erosion when it is under fallow. Efficient fallow management is an essential part of reducing erosion in a crop rotation system. Zero tillage is a fundamental management practice that promotes crop stubble retention under longer unplanned fallows when crops cannot be planted. Such management practices that succeed in retaining suitable soil cover in areas under fallow will ultimately reduce soil loss. In a recent study that lasted a decade, it was found that a common winter cover crop after potato harvest, such as fall rye, can reduce soil run-off by as much as 43%, and the soil lost to run-off is typically the most nutrient-rich. Benefits: Biodiversity Increasing the biodiversity of crops has beneficial effects on the surrounding ecosystem and can host a greater diversity of fauna, insects, and beneficial microorganisms in the soil, as found by McDaniel et al. 2014 and Lori et al. 2017. Some studies point to increased nutrient availability from crop rotation under organic systems compared to conventional practices, as organic practices are less likely to inhibit beneficial microbes in soil organic matter, such as arbuscular mycorrhizae, which increase nutrient uptake in plants. Increasing biodiversity also increases the resilience of agro-ecological systems. Benefits: Farm productivity Crop rotation contributes to increased yields through improved soil nutrition. By requiring planting and harvesting of different crops at different times, more land can be farmed with the same amount of machinery and labour. Risk management Different crops in the rotation can reduce the risks of adverse weather for the individual farmer. Challenges: While crop rotation requires a great deal of planning, crop choice must respond to a number of fixed conditions (soil type, topography, climate, and irrigation) in addition to conditions that may change dramatically from one year to the next (weather, market, labor supply). For this reason, it is unwise to plan crops years in advance. Improper implementation of a crop rotation plan may lead to imbalances in the soil nutrient composition or a buildup of pathogens affecting a critical crop. The consequences of faulty rotation may take years to become apparent even to experienced soil scientists and can take just as long to correct. Many challenges exist within the practices associated with crop rotation. For example, green manure from legumes can lead to an invasion of snails or slugs, and the decay from green manure can occasionally suppress the growth of other crops.