**Tinware** Tinware: Tinware is any item made of prefabricated tinplate. Usually tinware refers to kitchenware made of tinplate, often crafted by tinsmiths. Many cans used for canned food are tinware as well. An item that is tinned after being shaped and fabricated is not considered tinware. Similar industrial products are called tin-sheet products or tinwork. Properties: Tinware is strong, easily shaped, solderable, and non-toxic. In addition, it has a good appearance, which can be further enhanced by lacquering. Its corrosion resistance, especially against attack by food products, is of particular importance. These properties derive from the tinplate of which tinware is made. Tinplate: Tinplate originated in Bohemia in the Middle Ages. Sources differ as to when this happened, ranging from the late thirteenth century to the fourteenth century. The technique for making tinplate spread to nearby regions of Germany, and by the sixteenth century Germany was the only source of tinplate in Europe. Tinsmiths throughout Europe were dependent on German suppliers of tinplate, and when events such as the Thirty Years' War interrupted tinplate production, tinwares became much more expensive. This led many European nations, including Great Britain, to attempt to start tinplate manufacturing industries of their own. Successful creation of a non-German tinplate industry was hampered by both technical difficulties and the cheapness of German tinplate. Though a widely acclaimed expedition by Andrew Yarranton assisted in the transfer of technical knowledge, it was not until innovations such as the water-powered rolling mill established by Major Hanbury in 1728 that a successful English tinplate industry was created. Britain dominated the tinplate industry until 1890, with an output exceeding 13 million boxes of plate, of which 70% were exported to the United States. This helps explain why the United States passed the McKinley Tariff bill, which placed a tariff of 2.2 cents per pound on tinplate. After this tariff, among other causes, the US tinplate industry became the largest in the world. History of Tinware in the United States: Tinware production in the United States is generally held to have begun when a Scottish immigrant named Edward Pattison settled in Berlin, Hartford County, Connecticut. His tinware goods became extremely popular due to their ease of use and ease of cleaning, and to help fulfill tinware orders he took on apprentices, which later made Berlin, Connecticut, the center of tinware manufacturing in the American Colonies. During the Industrial Revolution, many inventors turned their attention to tinware; a good example is the invention of circular shears by Calvin Whiting in 1804. Tinware was often sold by traveling salesmen called Yankee Peddlers, some of whom were employees of tinware shops while others operated independently. Often, tinware was traded for “truck”, or bartered items, which were then sold at the tinware shop. It was often preferable for Yankee Peddlers to receive truck, as Harvey Filley wrote in 1822: “I don’t take but a little cash when I can get truck, for it is better these times than cash. Most all the truck is in demand, more can be made by having quantities and knowing the market.” Applications of Tinware: Most kitchenware items that are made of aluminum, stainless steel, and plastic in the 20th and 21st centuries were made of tinware in the 18th and 19th centuries.
Its uses range from ale tasters and coffee pots to cookie cutters and boxes. An advertisement for tinware posted by Thomas Passmore on November 30, 1793 in the Federal Gazette (Philadelphia) is notable for the alphabetical arrangement of his tinware goods: 19 letters of the alphabet are represented in the list, showcasing the astounding variety of tinware available. Tinware featured prominently in the 1897 Sears Roebuck and Co. Catalogue, including many pots, pails, pans, and snuff boxes, to name a few. However, since aluminum and plastic became affordable in the 20th century, most kitchenware is no longer made of tinplate. Tin cans remain a major commodity: in 1970, annual production of tinplate was 12 to 13 million tons, about 90% of which was used to manufacture packaging such as tin cans.
**Gershgorin circle theorem** Gershgorin circle theorem: In mathematics, the Gershgorin circle theorem may be used to bound the spectrum of a square matrix. It was first published by the Soviet mathematician Semyon Aronovich Gershgorin in 1931. Gershgorin's name has been transliterated in several different ways, including Geršgorin, Gerschgorin, Gershgorin, Hershhorn, and Hirschhorn. Statement and proof: Let A be a complex n×n matrix, with entries aij. For i∈{1,…,n} let Ri be the sum of the absolute values of the non-diagonal entries in the i-th row: Ri = ∑j≠i |aij|. Let D(aii, Ri) ⊆ C be a closed disc centered at aii with radius Ri. Such a disc is called a Gershgorin disc. Theorem. Every eigenvalue of A lies within at least one of the Gershgorin discs D(aii, Ri). Proof. Let λ be an eigenvalue of A with corresponding eigenvector x = (xj). Choose i such that the element of x with the largest absolute value is xi. Since Ax = λx, taking the i-th component of that equation gives: ∑j aij xj = λ xi. Moving the aii term to the other side: ∑j≠i aij xj = (λ − aii) xi. Therefore, dividing by xi (which is nonzero, since x is nonzero and |xi| is maximal), applying the triangle inequality, and recalling that |xj|/|xi| ≤ 1 based on how we picked i: |λ − aii| = |∑j≠i aij xj/xi| ≤ ∑j≠i |aij| |xj|/|xi| ≤ ∑j≠i |aij| = Ri. Corollary. The eigenvalues of A must also lie within the Gershgorin discs Cj corresponding to the columns of A. Proof. Apply the Theorem to the transpose Aᵀ, recognizing that the eigenvalues of the transpose are the same as those of the original matrix. Example. For a diagonal matrix, the Gershgorin discs coincide with the spectrum. Conversely, if the Gershgorin discs coincide with the spectrum, the matrix is diagonal. Discussion: One way to interpret this theorem is that if the off-diagonal entries of a square matrix over the complex numbers have small norms, the eigenvalues of the matrix cannot be "far from" the diagonal entries of the matrix. Therefore, by reducing the norms of off-diagonal entries one can attempt to approximate the eigenvalues of the matrix. Of course, diagonal entries may change in the process of minimizing off-diagonal entries. Discussion: The theorem does not claim that there is one disc for each eigenvalue; if anything, the discs rather correspond to the axes in Cn, and each expresses a bound on precisely those eigenvalues whose eigenspaces are closest to one particular axis. Consider the matrix

\begin{pmatrix} 3 & 2 & 2 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} \begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix} \begin{pmatrix} 3 & 2 & 2 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} -3a+2b+2c & 6a-2b-4c & 6a-4b-2c \\ b-a & a+(a-b) & 2(a-b) \\ c-a & 2(a-c) & a+(a-c) \end{pmatrix},

which by construction has eigenvalues a, b, and c with eigenvectors (3, 1, 1), (2, 1, 0), and (2, 0, 1). It is easy to see that the disc for row 2 covers a and b while the disc for row 3 covers a and c. This is however just a happy coincidence; working through the steps of the proof one finds that in each eigenvector the first element is the largest (every eigenspace is closer to the first axis than to any other axis), so the theorem only promises that the disc for row 1 (whose radius can be twice the sum of the other two radii) covers all three eigenvalues. Strengthening of the theorem: If one of the discs is disjoint from the others then it contains exactly one eigenvalue. If however it meets another disc it is possible that it contains no eigenvalue (for example, A = \begin{pmatrix} 0 & 1 \\ 4 & 0 \end{pmatrix} or A = \begin{pmatrix} 1 & -2 \\ 1 & -1 \end{pmatrix}). In the general case the theorem can be strengthened as follows: Theorem: If the union of k discs is disjoint from the union of the other n − k discs, then the former union contains exactly k and the latter exactly n − k eigenvalues of A, when the eigenvalues are counted with their algebraic multiplicities.
Proof: Let D be the diagonal matrix with entries equal to the diagonal entries of A and let B(t) = (1−t)D + tA. Strengthening of the theorem: We will use the fact that the eigenvalues are continuous in t, and show that if any eigenvalue moves from one of the unions to the other, then it must be outside all the discs for some t, which is a contradiction. Strengthening of the theorem: The statement is true for D = B(0). The diagonal entries of B(t) are equal to those of A, so the centers of the Gershgorin circles are the same; however, the off-diagonal entries, and hence the radii, are t times those of A. Therefore, the union of the corresponding k discs of B(t) is disjoint from the union of the remaining n−k for all t ∈ [0, 1]. The discs are closed, so the distance between the two unions for A is some d > 0. The distance between the unions for B(t) is a decreasing function of t, so it is always at least d. Since the eigenvalues of B(t) are a continuous function of t, for any eigenvalue λ(t) of B(t) in the union of the k discs its distance d(t) from the union of the other n−k discs is also continuous. Obviously d(0) ≥ d; now assume λ(1) lies in the union of the n−k discs. Then d(1) = 0, so by the intermediate value theorem there exists 0 < t0 < 1 such that 0 < d(t0) < d. But this means λ(t0) lies outside all the Gershgorin discs, which is impossible. Therefore λ(1) lies in the union of the k discs, and the theorem is proven. Strengthening of the theorem: Remarks: It is necessary to count the eigenvalues with respect to their algebraic multiplicities. Here is an illustration: consider the 5×5 matrix consisting of a 3×3 Jordan block with eigenvalue 0 and a 2×2 Jordan block with eigenvalue 5. The union of the first 3 discs does not intersect the last 2, but the matrix has only 2 eigenvectors, e1 and e4, and hence only 2 distinct eigenvalues. If the theorem were read as counting distinct eigenvalues, it would be false: the first union contains only one distinct eigenvalue, not 3. The count comes out right only when the eigenvalues 0 and 5 are counted with their algebraic multiplicities, 3 and 2 respectively. Strengthening of the theorem: The continuity of λ(t) should be understood in the topological sense. It is sufficient to show that the roots of a polynomial (viewed as a point in the space Cn) are a continuous function of its coefficients. Note that the inverse map that sends roots to coefficients is described by Vieta's formulas (note that for a characteristic polynomial an ≡ 1), which can be proved to be an open map. This proves that the roots as a whole are a continuous function of the coefficients. Since the composition of continuous functions is again continuous, λ(t), as the composition of the root solver with B(t), is also continuous. Strengthening of the theorem: An individual eigenvalue λ(t) can merge with other eigenvalues or appear from the splitting of a previous eigenvalue, which may seem to call the notion of continuity into question. However, when viewed in the space of eigenvalue sets Cn, the trajectory is still a continuous curve, although not necessarily smooth everywhere. Added remark: The proof given above is arguably incomplete. There are two types of continuity concerning eigenvalues: (1) each individual eigenvalue is a usual continuous function (such a representation does exist on a real interval but may not exist on a complex domain); (2) the eigenvalues are continuous as a whole in the topological sense (a mapping from the matrix space, with the metric induced by a norm, to unordered tuples, i.e., the quotient space of Cn under permutation equivalence with the induced metric). Whichever continuity is used in a proof of the Gershgorin disc theorem, it should be justified that the sum of algebraic multiplicities of the eigenvalues remains unchanged on each connected region.
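The homotopy B(t) = (1−t)D + tA and the disc containment can also be checked numerically. Below is a minimal sketch (Python with NumPy; the matrix is an arbitrary example invented for this illustration, not taken from the text above) that verifies every eigenvalue of B(t) stays inside the union of the Gershgorin discs of B(t), and that exactly one eigenvalue remains in the isolated disc for all t:

```python
import numpy as np

# An arbitrary illustrative matrix whose first disc, D(10, 0.8), is
# disjoint from the union of the other three discs.
A = np.array([[10.0, 0.5, 0.2, 0.1],
              [ 0.3, 1.0, 0.2, 0.1],
              [ 0.1, 0.4, 1.5, 0.2],
              [ 0.2, 0.1, 0.3, 0.5]])
D = np.diag(np.diag(A))

def gershgorin_discs(M):
    """Return (center, radius) pairs for the row discs of M."""
    radii = np.abs(M).sum(axis=1) - np.abs(np.diag(M))
    return list(zip(np.diag(M), radii))

for t in np.linspace(0.0, 1.0, 11):
    B = (1 - t) * D + t * A
    # Centers of B(t)'s discs equal those of A; radii are t times A's radii.
    discs = gershgorin_discs(B)
    eigenvalues = np.linalg.eigvals(B)
    for lam in eigenvalues:
        assert any(abs(lam - c) <= r + 1e-9 for c, r in discs)
    # Exactly one eigenvalue of B(t) stays in the isolated disc D(10, 0.8).
    assert sum(abs(lam - 10.0) <= 0.8 for lam in eigenvalues) == 1
```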
A proof using the argument principle of complex analysis requires no eigenvalue continuity of any kind. Application: The Gershgorin circle theorem is useful in solving matrix equations of the form Ax = b for x, where b is a vector and A is a matrix with a large condition number. Application: In this kind of problem, the error in the final result is usually of the same order of magnitude as the error in the initial data multiplied by the condition number of A. For instance, if b is known to six decimal places and the condition number of A is 1000, then we can only be confident that x is accurate to three decimal places. For very high condition numbers, even very small errors due to rounding can be magnified to such an extent that the result is meaningless. Application: It would therefore be good to reduce the condition number of A. This can be done by preconditioning: a matrix P such that P ≈ A−1 is constructed, and then the equation PAx = Pb is solved for x. Using the exact inverse of A would be ideal, but computing the inverse of a matrix is generally avoided because of the computational expense. Now, since PA ≈ I, where I is the identity matrix, the eigenvalues of PA should all be close to 1. By the Gershgorin circle theorem, every eigenvalue of PA lies within a known area, and so we can form a rough estimate of how good our choice of P was. Example: Use the Gershgorin circle theorem to estimate the eigenvalues of

A = \begin{pmatrix} 10 & -1 & 0 & 1 \\ 0.2 & 8 & 0.2 & 0.2 \\ 1 & 1 & 2 & 1 \\ -1 & -1 & -1 & -11 \end{pmatrix}.

Starting with row one, we take the element on the diagonal, aii, as the center of the disc. We then take the remaining elements in the row and apply the formula ∑j≠i |aij| = Ri to obtain the following four discs: D(10, 2), D(8, 0.6), D(2, 3), and D(−11, 3). Example: Note that we can improve the accuracy of the last two discs by applying the formula to the corresponding columns of the matrix, obtaining D(2, 1.2) and D(−11, 2.2). The eigenvalues are 9.8218, 8.1478, 1.8995, −10.86. Note that this is a (column) diagonally dominant matrix: |aii| > ∑j≠i |aji|. This means that most of the matrix is concentrated in the diagonal, which explains why the eigenvalues are so close to the centers of the circles and why the estimates are very good. For a random matrix, we would expect the eigenvalues to be substantially further from the centers of the circles.
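The example can be reproduced with a minimal sketch (Python with NumPy, using the matrix reconstructed above) that computes the row and column discs and compares them with the true eigenvalues:

```python
import numpy as np

# The example matrix from above.
A = np.array([[10.0, -1.0,  0.0,   1.0],
              [ 0.2,  8.0,  0.2,   0.2],
              [ 1.0,  1.0,  2.0,   1.0],
              [-1.0, -1.0, -1.0, -11.0]])

centers = np.diag(A)
row_radii = np.abs(A).sum(axis=1) - np.abs(centers)  # R_i over rows
col_radii = np.abs(A).sum(axis=0) - np.abs(centers)  # R_i over columns

print(list(zip(centers, row_radii)))  # D(10, 2), D(8, 0.6), D(2, 3), D(-11, 3)
print(list(zip(centers, col_radii)))  # D(10, 2.2), D(8, 3), D(2, 1.2), D(-11, 2.2)

# Every eigenvalue lies in the union of the row discs and also in the
# union of the column discs, so the tighter disc can be used per index.
print(np.sort(np.linalg.eigvals(A).real))  # approx. -10.86, 1.8995, 8.1478, 9.8218
```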
**Conroe (microprocessor)** Conroe (microprocessor): Conroe is the code name for many Intel processors sold as Core 2 Duo, Xeon, Pentium Dual-Core and Celeron. It was the first desktop processor to be based on the Core microarchitecture, replacing the NetBurst-based Cedar Mill processor. It has product code 80557, which is shared with Allendale and Conroe-L; those cores are very similar but have a smaller L2 cache. Conroe-L has only one processor core and a new CPUID model. The mobile version of Conroe is Merom, the dual-socket server version is Woodcrest, and the quad-core desktop version is Kentsfield. Conroe (microprocessor): Conroe was replaced by the 45 nm Wolfdale processor. Variants: Conroe The first Intel Core 2 Duo branded processor cores, code-named Conroe, were launched on July 27, 2006, at Fragapalooza, a yearly gaming event in Edmonton, Alberta, Canada. These processors were fabricated on 300 mm wafers using a 65 nm manufacturing process and were intended for desktop computers, as a replacement for the Pentium 4 and Pentium D branded CPUs. Intel claimed that Conroe provides 40% more performance at 40% less power compared to the Pentium D; the E6300, the lowest end of the initial Conroe lineup, is able to match or even exceed the former flagship Pentium Extreme Edition 965 in performance despite a massive 50% clock frequency deficit. All Conroe processors are manufactured with 4 MB of L2 cache; however, due to manufacturing defects or possibly for marketing purposes, the E6300 and E6400 versions based on this core have half their cache disabled, leaving them with only 2 MB of usable L2 cache. These Conroe-based E6300 and E6400 CPUs have the B2 stepping. Variants: The lower-end E6300 (1.86 GHz) and E6400 (2.13 GHz), both with a 1066 MHz FSB (64 bits wide), were released on July 27, 2006. Traditionally, CPUs of the same family with less cache simply have the unavailable cache disabled, since this allows parts that fail quality control to be sold at a lower rating. When yields improve, they may be replaced with versions that have only the needed amount of cache on the die, to bring down manufacturing cost. At launch time, Intel's prices for the Core 2 Duo E6300 and E6400 processors were US$183 and US$224, respectively, in quantities of 1000. Conroe CPUs have improved capabilities over previous models with similar processor clock rates. According to reviews, the larger 4 MB L2 cache (versus the smaller 2 MB L2 cache at the same frequency and FSB) can provide a 0–9% performance gain with certain applications and a 0–16% performance gain with certain games. Variants: The higher-end Conroe processors are the E6600 (2.4 GHz) and E6700 (2.67 GHz) Core 2 Duo models. The family has a 1066 MHz front side bus, 4 MB of shared L2 cache, and a 65 watt TDP. These processors were tested against AMD's then-current top-performing processors (the Athlon 64 FX series), which were, until this Intel release, the highest-performance x86 CPUs available. Conroe chips also produce less heat than their predecessors, a benefit of the new 65 nm technology and the more efficient microarchitecture. At launch time, Intel's prices for the Core 2 Duo E6600 and E6700 processors were US$316 and US$530, respectively, in quantities of 1000. Variants: E6320 and E6420 Conroe CPUs at 1.86 and 2.13 GHz respectively were launched on April 22, 2007, featuring the full 4 MB of cache, and are considered Conroes. Variants: Intel released four additional Core 2 Duo processors on July 22, 2007.
The release coincided with that of the Intel Bearlake (x3x) chipsets. The new processors are named Core 2 Duo E6540, E6550, E6750, and E6850. Processors with a model number ending in "50" have a 1333 MHz FSB. The processors all have 4 MB of L2 cache. Their clock frequencies are similar to those of the already released processors with the same first two digits (E6600, E6700, X6800). An additional model, the E6540, was launched with specifications similar to the E6550 but lacking Intel Trusted Execution Technology and vPro support. These processors were stated to compete with AMD's Phenom processor line and were therefore priced below corresponding processors with a 1066 MHz FSB. All remaining Conroe Core 2 processors were phased out in March 2009. Variants: Conroe XE The Core 2 Extreme was officially released on July 29, 2006, although some retailers appear to have released it on July 13, 2006, at a higher premium. The Core 2 Extreme X6800 is the only dual-core Core 2 Extreme processor. The less powerful E6x00 models of the Core 2 Duo were scheduled for simultaneous release with the X6800, and both became available at that time. It uses the Conroe XE core and replaces the dual-core Pentium Extreme Edition processors. The Core 2 Extreme X6800 has a clock rate of 2.93 GHz and a 1066 MHz FSB, although it was initially expected to be released with a 3.33 GHz clock rate and a 1333 MHz FSB. The TDP of the X6800 is 75 watts, higher than that of regular Core 2 Duo CPUs, which have a 65 watt TDP. With SpeedStep enabled, the average temperature of the CPU when idle is essentially that of the ambient atmosphere, with its fan running at 1500 RPM. At launch time, Intel's price for the Core 2 Extreme X6800 was US$999 in quantities of 1000. Like the desktop Core 2 Duo, it has 4 MB of shared L2 cache available. This means that the only major differences between the regular Core 2 Duo and the Core 2 Extreme are the higher clock rate and the unlocked multiplier, the usual advantages of the "Extreme Edition". The fully unlocked multiplier is of use to enthusiasts, as it allows the user to set the clock rate higher than the shipping frequency without modifying the FSB frequency, unlike mainstream Core 2 Duo models, whose multipliers are unlocked only downward. Variants: Allendale Allendale was originally the name for the E4000 processors, which use a low-cost version of the Conroe core. They feature a lower front side bus frequency of 800 MHz instead of 1066 MHz and only half the L2 cache (2 MB, similar to the Core 2 Duo E6300 and E6400), offering a smaller die size and therefore greater yields. Most media have subsequently applied the name Allendale to all LGA 775 processors with steppings L2 and M0, while Intel refers to all of these as Conroe. Variants: The Core 2 Duo E4300, released on January 21, 2007, uses an Allendale core. Allendale processors are produced in the LGA 775 form factor on the 65 nm process node. Variants: The initial list price per processor in quantities of one thousand for the E4300 was US$163. The standard OEM price was US$175, or US$189 for a retail package. The price was cut on April 22, 2007, when the E4400 was released at $133 and the E4300 dropped to $113. A new E2000 series of Allendale processors with half their L2 cache disabled was released in mid-June 2007 under the Pentium Dual-Core brand name.
The working cache memory was halved again when the Allendale core was released under Intel's Celeron brand; the Celeron E1000 processors have a 512 KB L2 cache shared between their two cores. Variants: Subsequent E4000 Allendale processors were introduced as the E4500 and E4600. The final E4700 processor used the G0 stepping instead of M0, which makes it a Conroe core. The E4000 processors were discontinued on March 6, 2009. Variants: E6300 and E6400 CPUs, as well as their Xeon 3040 and 3050 counterparts, were made using the original 4 MB B2 stepping with half their L2 cache disabled prior to Q1 2007, but using the 2 MB L2 stepping later. This caused contention regarding whether or not the previously available versions were specimens of the Allendale core. Only the newer cores are now commonly referred to as Allendale. Variants: Quoted from The Tech Report: You'll find plenty of sources that will tell you the code name for these 2 MB Core 2 Duo processors is "Allendale," but Intel says otherwise. These CPUs are still code-named "Conroe," which makes sense since they're the same physical chips with half of their L2 cache disabled. Intel may well be cooking up a chip code-named Allendale with 2 MB of L2 cache natively, but this is not that chip. Variants: Conroe-L The Conroe-L Celeron is a single-core processor built on the Core microarchitecture; although clocked much lower than the Cedar Mill Celerons, it still outperforms them. It is based on the 65 nm Conroe-L core and uses a 400-series model number sequence. The FSB was increased from 533 MHz to 800 MHz in this generation, and the TDP was decreased from 65 W to 35 W. As is traditional with Celerons, it has neither Intel VT-x support nor SpeedStep. All Conroe-L models are single-core processors for the value segment of the market, much like the AMD K8-based Sempron. The product line was launched on June 5, 2007. Variants: On October 21, 2007, Intel presented a new processor for its Intel Essential Series. The processor, the Celeron 220, is soldered onto the D201GLY2 motherboard. Running at 1.2 GHz with a 512 KB second-level cache, it has a TDP of 19 W and can be cooled passively. The Celeron 220 is the successor to the Celeron 215, which is based on a Yonah core and used on the D201GLY motherboard. This processor is used exclusively on mini-ITX boards targeted at the sub-value market segment. Variants: Conroe-CL Conroe-CL is a version of Conroe with the LGA 771 socket otherwise used by Woodcrest. Unlike Woodcrest proper, Conroe-CL processors are only usable in single-CPU configurations. The three known Conroe-CL processors are sold as the Core 2 Duo E6305, E6405 and Celeron 445. These processors will not work in regular LGA 775 mainboards but are typically used in blade servers that also use Woodcrest or other DP server processors.
**OGS (electronic toll collection)** OGS (electronic toll collection): OGS (Otomatik Geçiş Sistemi; English: Automatic Pass System) was an RFID transponder-based electronic toll collection system available on toll roads and toll bridges in Turkey. The system was adopted to avoid traffic congestion at toll plazas. The successor to OGS is the HGS system, an RFID tag-based system implemented later at the same toll plazas. OGS (electronic toll collection): It was launched in 1998 and was first installed on the Fatih Sultan Mehmet Bridge on the O-2 over the Bosporus in Istanbul. OGS was later extended to the intercity motorways O-3, O-4, O-32, O-51 and O-52 and to the other toll bridge, the Bosphorus Bridge on the O-1 over the Istanbul Strait. The system was administered by the General Directorate of Highways (Turkish: TC Karayolları Genel Müdürlüğü, KGM). OGS was retired on March 31, 2022, and HGS is now the sole means of electronic toll collection. Tolling: The radio transponder, also known as a tag, could be obtained from toll plaza offices, authorized major banks, mobile points of sale and on the internet, after signing an agreement. The tag was issued for a specific licence plate and thus for a specific vehicle category. It could not be transferred without notice to the provider. The tags differed slightly in size and form depending on the provider, and could be replaced within the guarantee period, which varied from three to five years. Before using OGS, the tag's account had to be credited. The OGS system involved fixing the tag to the inside of the vehicle's windshield, usually behind the rear-view mirror; on some car brands, however, the tag had to be attached to the inside base of the windshield. At the toll plaza, the vehicle had to drive to a toll booth on a lane assigned exclusively to OGS. OGS lanes were always the leftmost passing lanes, and they were designated at some distance before the toll plaza; there were also additional OGS booths on the right side of toll plazas dedicated to heavy vehicles. The booth was marked with an OGS sign at the top, and a flashing green light at the top signaled its usability. The battery-driven tag communicated with the reader device built into the toll collection equipment at the toll booth as the vehicle passed through. The vehicle was not required to stop; however, a speed limit of 30 km/h (19 mph) was recommended to avoid accidents before the booth. Vehicles were classified for toll purposes into five categories based on the wheelbase and the number of axles. If the number of axles of a vehicle was less than its maximum (because some axles of a truck were lifted up, or a tractor was running without its trailer), the integrated automatic vehicle classification system changed the vehicle category read from the tag to the actual one during the passage, allowing a lower toll tariff. The bridge toll was fixed for each vehicle category and was charged only once, in the eastward direction, on the two bridges in Istanbul. The road toll was further rated by the distance driven between the entry and the exit of the motorway for each vehicle category; for this reason, a signal had to be initiated on the OGS lane when entering the motorway. OGS users benefited from a 20% discount on all toll roads and bridges in Turkey. The tag could sound in two different modes: one beep indicated that tolling was successful; four consecutive beeps meant that there was insufficient credit on the tag's account.
If the tag did not sound during the next three or four passes, it was most likely that either the tag was out of order or it was being used incorrectly. In this case, the tag had to be tested by the highway authority, since more than ten untolled passes were not allowed. Violation: In order to allow uninterrupted passage, there were no gate arms at toll booths on OGS lanes (except toll booths on the O-5 motorway). If the tolling process was not successful, a lamp under the electronic message display at the exit of the OGS booth turned yellow for a short time, and the display showed the message "CEZALI GEÇİŞ, KAÇIŞ" (violation, escape) together with the vehicle's category and the amount of the fine. Further, the driver was warned by a loud electronic horn at the booth, and the vehicle was photographed for evidence. In the case of an unmatched vehicle category, the tag's account was charged at the actual category; the driver was warned by the booth horn, and the vehicle was photographed. The tag's account was charged an additional fine of double the maximum toll for each such violation, and the vehicle was blacklisted when this type of violation occurred more than ten times. Vehicles on the blacklist, or vehicles without a tag or with another vehicle's tag, were charged a fine as high as eleven times the toll amount. Vehicle owners or drivers could query information about their violations online or at a call center. Payment of the fine within 15 days following receipt of the violation notice entitled the payer to a 25% discount according to Turkish law.
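The charging and fine rules described above can be summarized in a short sketch. The following Python fragment is a hypothetical illustration only; the function and parameter names are invented for this example and do not correspond to any actual OGS software:

```python
def ogs_toll(base_toll):
    """OGS users benefited from a 20% discount on the posted toll."""
    return 0.8 * base_toll

def ogs_fine(toll, max_category_toll, has_valid_tag, blacklisted,
             category_mismatch, paid_within_15_days):
    """Hypothetical sketch of the violation fines described in the text."""
    if not has_valid_tag or blacklisted:
        # No tag, another vehicle's tag, or a blacklisted vehicle:
        # a fine as high as eleven times the toll amount.
        fine = 11 * toll
    elif category_mismatch:
        # The account is charged at the actual category, plus an additional
        # fine of double the maximum toll for each such violation.
        fine = 2 * max_category_toll
    else:
        fine = 0.0
    if paid_within_15_days:
        fine *= 0.75  # 25% discount under Turkish law
    return fine
```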
**Gadolinium** Gadolinium: Gadolinium is a chemical element with the symbol Gd and atomic number 64. Gadolinium is a silvery-white metal when oxidation is removed. It is only slightly malleable and is a ductile rare-earth element. Gadolinium reacts slowly with atmospheric oxygen or moisture to form a black coating. Below its Curie point of 20 °C (68 °F), gadolinium is ferromagnetic, with an attraction to a magnetic field higher than that of nickel. Above this temperature it is the most paramagnetic element. It is found in nature only in an oxidized form. When separated, it usually has impurities of the other rare earths because of their similar chemical properties. Gadolinium: Gadolinium was discovered in 1880 by Jean Charles de Marignac, who detected its oxide by using spectroscopy. It is named after the mineral gadolinite, one of the minerals in which gadolinium is found, itself named for the Finnish chemist Johan Gadolin. Pure gadolinium was first isolated by the chemist Paul-Émile Lecoq de Boisbaudran around 1886. Gadolinium possesses unusual metallurgical properties, to the extent that as little as 1% of gadolinium can significantly improve the workability and resistance to oxidation at high temperatures of iron, chromium, and related metals. Gadolinium as a metal or a salt absorbs neutrons and is therefore sometimes used for shielding in neutron radiography and in nuclear reactors. Like most of the rare earths, gadolinium forms trivalent ions with fluorescent properties, and salts of gadolinium(III) are used as phosphors in various applications. Gadolinium: Gadolinium(III) ions in water-soluble salts are highly toxic to mammals. However, chelated gadolinium(III) compounds prevent the gadolinium(III) from being exposed to the organism, and the majority is excreted by healthy kidneys before it can deposit in tissues. Because of its paramagnetic properties, solutions of chelated organic gadolinium complexes are used as intravenously administered gadolinium-based MRI contrast agents in medical magnetic resonance imaging. Varying amounts deposit in the tissues of the brain, cardiac muscle, kidneys, other organs, and the skin, depending mainly on kidney function, the structure of the chelates (linear or macrocyclic), and the dose administered. Characteristics: Physical properties Gadolinium is the eighth member of the lanthanide series. In the periodic table, it appears between the elements europium to its left and terbium to its right, and above the actinide curium. It is a silvery-white, malleable, ductile rare-earth element. Its 64 electrons are arranged in the configuration [Xe]4f75d16s2, of which the ten 4f, 5d, and 6s electrons are valence electrons. Characteristics: As in most other metals of the lanthanide series, usually only three electrons are available as valence electrons. The remaining 4f electrons are too strongly bound: this is because the 4f orbitals penetrate the most through the inert xenon core of electrons to the nucleus, followed by 5d and 6s, and this increases with higher ionic charge. Gadolinium crystallizes in the hexagonal close-packed α-form at room temperature. At temperatures above 1,235 °C (2,255 °F), it transforms into its β-form, which has a body-centered cubic structure. The isotope gadolinium-157 has the highest thermal-neutron capture cross-section of any stable nuclide: about 259,000 barns.
Only xenon-135 has a higher capture cross-section, about 2.0 million barns, but that isotope is radioactive. Gadolinium is believed to be ferromagnetic at temperatures below 20 °C (68 °F) and is strongly paramagnetic above this temperature. There is evidence that gadolinium is a helical antiferromagnet, rather than a ferromagnet, below 20 °C (68 °F). Gadolinium demonstrates a magnetocaloric effect whereby its temperature increases when it enters a magnetic field and decreases when it leaves the magnetic field. A significant magnetocaloric effect is observed at higher temperatures, up to about 300 kelvins, in the compounds Gd5(Si1−xGex)4. Individual gadolinium atoms can be isolated by encapsulating them into fullerene molecules, where they can be visualized with a transmission electron microscope. Individual Gd atoms and small Gd clusters can be incorporated into carbon nanotubes. Characteristics: Chemical properties Gadolinium combines with most elements to form Gd(III) derivatives. It also combines with nitrogen, carbon, sulfur, phosphorus, boron, selenium, silicon, and arsenic at elevated temperatures, forming binary compounds. Unlike the other rare-earth elements, metallic gadolinium is relatively stable in dry air. However, it tarnishes quickly in moist air, forming a loosely adhering gadolinium(III) oxide (Gd2O3): 4 Gd + 3 O2 → 2 Gd2O3, which spalls off, exposing more surface to oxidation. Characteristics: Gadolinium is a strong reducing agent, which reduces oxides of several metals into their elements. Gadolinium is quite electropositive and reacts slowly with cold water and quite quickly with hot water to form gadolinium(III) hydroxide (Gd(OH)3): 2 Gd + 6 H2O → 2 Gd(OH)3 + 3 H2. Gadolinium metal is attacked readily by dilute sulfuric acid to form solutions containing the colorless Gd(III) ions, which exist as [Gd(H2O)9]3+ complexes: 2 Gd + 3 H2SO4 + 18 H2O → 2 [Gd(H2O)9]3+ + 3 SO42− + 3 H2. Characteristics: Chemical compounds In the great majority of its compounds, like many rare-earth metals, gadolinium adopts the oxidation state +3. However, gadolinium can be found on rare occasions in the 0, +1 and +2 oxidation states. All four trihalides are known. All are white, except for the iodide, which is yellow. The most commonly encountered of the halides is gadolinium(III) chloride (GdCl3). The oxide dissolves in acids to give the salts, such as gadolinium(III) nitrate. Characteristics: Gadolinium(III), like most lanthanide ions, forms complexes with high coordination numbers. This tendency is illustrated by the use of the chelating agent DOTA, an octadentate ligand. Salts of [Gd(DOTA)]− are useful in magnetic resonance imaging. A variety of related chelate complexes have been developed, including gadodiamide. Reduced gadolinium compounds are known, especially in the solid state. Gadolinium(II) halides are obtained by heating Gd(III) halides in the presence of metallic Gd in tantalum containers. Gadolinium also forms the sesquichloride Gd2Cl3, which can be further reduced to GdCl by annealing at 800 °C (1,470 °F). This gadolinium(I) chloride forms platelets with a layered graphite-like structure. Isotopes Naturally occurring gadolinium is composed of six stable isotopes, 154Gd, 155Gd, 156Gd, 157Gd, 158Gd and 160Gd, and one radioisotope, 152Gd, with the isotope 158Gd being the most abundant (24.8% natural abundance). The predicted double beta decay of 160Gd has never been observed (an experimental lower limit on its half-life of more than 1.3×1021 years has been measured).
Characteristics: Thirty-three radioisotopes of gadolinium have been observed, the most stable being 152Gd (naturally occurring), with a half-life of about 1.08×1014 years, and 150Gd, with a half-life of 1.79×106 years. All of the remaining radioactive isotopes have half-lives of less than 75 years, and the majority of these have half-lives of less than 25 seconds. Gadolinium isotopes have four metastable isomers, with the most stable being 143mGd (t1/2 = 110 seconds), 145mGd (t1/2 = 85 seconds) and 141mGd (t1/2 = 24.5 seconds). Characteristics: The isotopes with atomic masses lower than the most abundant stable isotope, 158Gd, primarily decay by electron capture to isotopes of europium. At higher atomic masses, the primary decay mode is beta decay, and the primary products are isotopes of terbium. History: Gadolinium is named after the mineral gadolinite, in turn named after the Finnish chemist and geologist Johan Gadolin. In 1880, the Swiss chemist Jean Charles Galissard de Marignac observed the spectroscopic lines from gadolinium in samples of gadolinite (which actually contains relatively little gadolinium, but enough to show a spectrum) and in the separate mineral cerite. The latter mineral proved to contain far more of the element with the new spectral line. De Marignac eventually separated a mineral oxide from cerite, which he realized was the oxide of this new element. He named the oxide "gadolinia". Because he realized that "gadolinia" was the oxide of a new element, he is credited with the discovery of gadolinium. The French chemist Paul-Émile Lecoq de Boisbaudran carried out the separation of gadolinium metal from gadolinia in 1886. Occurrence: Gadolinium is a constituent in many minerals such as monazite and bastnäsite. The metal is too reactive to exist naturally. Paradoxically, as noted above, the mineral gadolinite actually contains only traces of this element. The abundance in the Earth's crust is about 6.2 mg/kg. The main mining areas are in China, the US, Brazil, Sri Lanka, India, and Australia, with reserves expected to exceed one million tonnes. World production of pure gadolinium is about 400 tonnes per year. The only known mineral with essential gadolinium, lepersonnite-(Gd), is very rare. Production: Gadolinium is produced both from monazite and from bastnäsite. Crushed minerals are extracted with hydrochloric acid or sulfuric acid, which converts the insoluble oxides into soluble chlorides or sulfates. The acidic filtrates are partially neutralized with caustic soda to pH 3–4. Thorium precipitates as its hydroxide and is then removed. The remaining solution is treated with ammonium oxalate to convert the rare earths into their insoluble oxalates. The oxalates are converted to oxides by heating. The oxides are dissolved in nitric acid, which excludes one of the main components, cerium, whose oxide is insoluble in HNO3. The solution is treated with magnesium nitrate to produce a crystallized mixture of double salts of gadolinium, samarium and europium. The salts are separated by ion exchange chromatography. Production: The rare-earth ions are then selectively washed out by a suitable complexing agent. Gadolinium metal is obtained from its oxide or salts by heating it with calcium at 1,450 °C (2,640 °F) in an argon atmosphere. Sponge gadolinium can be produced by reducing molten GdCl3 with an appropriate metal at temperatures below 1,312 °C (2,394 °F) (the melting point of Gd) at reduced pressure.
Applications: Gadolinium has no large-scale applications, but it has a variety of specialized uses. Applications: Neutron absorber Because gadolinium has a high neutron cross-section, it is effective for use in neutron radiography and in the shielding of nuclear reactors. It is used as a secondary, emergency shut-down measure in some nuclear reactors, particularly of the CANDU reactor type. Gadolinium is also used in nuclear marine propulsion systems as a burnable poison. The use of gadolinium in neutron capture therapy to target tumors has been investigated, and gadolinium-containing compounds have shown promise. Applications: Alloys Gadolinium possesses unusual metallurgical properties, with as little as 1% of gadolinium improving the workability and the resistance of iron, chromium, and related alloys to high temperatures and oxidation. Applications: Magnetic contrast agent Gadolinium is paramagnetic at room temperature, with a ferromagnetic Curie point of 20 °C (68 °F). Paramagnetic ions, such as gadolinium, increase nuclear spin relaxation rates, making gadolinium useful as a contrast agent for magnetic resonance imaging (MRI). Solutions of organic gadolinium complexes and gadolinium compounds are used as intravenous contrast agents to enhance images in medical magnetic resonance imaging and magnetic resonance angiography (MRA) procedures. Magnevist is the most widespread example. Nanotubes packed with gadolinium, called "gadonanotubes", are 40 times more effective than the usual gadolinium contrast agent. Traditional gadolinium-based contrast agents are un-targeted, generally distributing throughout the body after injection, but will not readily cross the intact blood–brain barrier. Brain tumors, and other disorders that degrade the blood–brain barrier, allow these agents to penetrate into the brain and facilitate their detection by contrast-enhanced MRI. Similarly, delayed gadolinium-enhanced magnetic resonance imaging of cartilage uses an ionic compound agent, originally Magnevist, that is excluded from healthy cartilage by electrostatic repulsion but will enter proteoglycan-depleted cartilage in diseases such as osteoarthritis. Applications: Phosphors Gadolinium is used as a phosphor in medical imaging. It is contained in the phosphor layer of X-ray detectors, suspended in a polymer matrix. Terbium-doped gadolinium oxysulfide (Gd2O2S:Tb) in the phosphor layer converts the X-rays released from the source into light. This material emits green light at 540 nm because of the presence of Tb3+, which is very useful for enhancing the imaging quality. The energy conversion efficiency of the Gd phosphor is up to 20%, meaning that one fifth of the X-ray energy striking the phosphor layer can be converted into visible photons. Gadolinium oxyorthosilicate (Gd2SiO5, GSO; usually doped with 0.1–1.0% Ce) is a single crystal that is used as a scintillator in medical imaging such as positron emission tomography, and for detecting neutrons. Gadolinium compounds are also used for making green phosphors for color TV tubes. Applications: Gamma ray emitter Gadolinium-153 is produced in a nuclear reactor from elemental europium or enriched gadolinium targets. It has a half-life of 240±10 days and emits gamma radiation with strong peaks at 41 keV and 102 keV. It is used in many quality-assurance applications, such as line sources and calibration phantoms, to ensure that nuclear-medicine imaging systems operate correctly and produce useful images of radioisotope distribution inside the patient.
It is also used as a gamma-ray source in X-ray absorption measurements and in bone density gauges for osteoporosis screening. Applications: Electronic and optical devices Gadolinium is used for making gadolinium yttrium garnet (Gd:Y3Al5O12), which has microwave applications and is used in the fabrication of various optical components and as a substrate material for magneto-optical films. Applications: Electrolyte in fuel cells Gadolinium can also serve as an electrolyte in solid oxide fuel cells (SOFCs). Using gadolinium as a dopant for materials like cerium oxide (in the form of gadolinium-doped ceria) gives an electrolyte with both high ionic conductivity and low operating temperatures. Applications: Magnetic refrigeration Research is being conducted on magnetic refrigeration near room temperature, which could provide significant efficiency and environmental advantages over conventional refrigeration methods. Gadolinium-based materials, such as Gd5(SixGe1−x)4, are currently the most promising materials, owing to their high Curie temperature and giant magnetocaloric effect. Pure Gd itself exhibits a large magnetocaloric effect near its Curie temperature of 20 °C (68 °F), and this has sparked interest in producing Gd alloys having a larger effect and a tunable Curie temperature. In Gd5(SixGe1−x)4, the Si and Ge compositions can be varied to adjust the Curie temperature. Applications: Superconductors Gadolinium barium copper oxide (GdBCO) is a superconductor with applications in superconducting motors or generators, such as in wind turbines. It can be manufactured in the same way as the most widely researched cuprate high-temperature superconductor, yttrium barium copper oxide (YBCO), and uses an analogous chemical composition (GdBa2Cu3O7−δ). It was used in 2014 to set a new world record for the highest trapped magnetic field in a bulk high-temperature superconductor, with a field of 17.6 T being trapped within two GdBCO bulks. Applications: Niche and former applications Gadolinium is used for antineutrino detection in the Japanese Super-Kamiokande detector in order to sense supernova explosions. Low-energy neutrons that arise from antineutrino absorption by protons in the detector's ultrapure water are captured by gadolinium nuclei, which subsequently emit gamma rays that are detected as part of the antineutrino signature. Gadolinium gallium garnet (GGG, Gd3Ga5O12) was used for imitation diamonds and for computer bubble memory. Safety: As a free ion, gadolinium is often reported to be highly toxic, but MRI contrast agents are chelated compounds and are considered safe enough to be used in most persons. The toxicity of free gadolinium ions in animals is due to interference with a number of calcium-ion-channel-dependent processes. The 50% lethal dose is about 0.34 mmol/kg (IV, mouse) or 100–200 mg/kg. Toxicity studies in rodents show that chelation of gadolinium (which also improves its solubility) decreases its toxicity with regard to the free ion by a factor of 31 (i.e., the lethal dose for the Gd-chelate increases 31 times). It is believed therefore that the clinical toxicity of gadolinium-based contrast agents (GBCAs) in humans will depend on the strength of the chelating agent; however, this research is still not complete. About a dozen different Gd-chelated agents have been approved as MRI contrast agents around the world. In patients with kidney failure, there is a risk of a rare but serious illness called nephrogenic systemic fibrosis (NSF) that is caused by the use of gadolinium-based contrast agents.
The disease resembles scleromyxedema and, to some extent, scleroderma. It may occur months after a contrast agent has been injected. Its association with gadolinium, and not the carrier molecule, is confirmed by its occurrence with various contrast materials in which gadolinium is carried by very different carrier molecules. Because of this, it is not recommended to use these agents in any individual with end-stage kidney failure, as they will require emergent dialysis. Similar but not identical symptoms to NSF may occur in subjects with normal or near-normal renal function within hours to 2 months following the administration of GBCAs; the name "gadolinium deposition disease" (GDD) has been proposed for this condition, which occurs in the absence of pre-existent disease or subsequently developed disease of an alternate known process. A 2016 study reported numerous anecdotal cases of GDD. However, in that study, participants were recruited from online support groups for subjects self-identified as having gadolinium toxicity, and no relevant medical history or data were collected. There have yet to be definitive scientific studies proving the existence of the condition. Safety: The current guidelines from the Canadian Association of Radiologists state that dialysis patients should only receive gadolinium agents where essential and that they should receive dialysis after the exam. If a contrast-enhanced MRI must be performed on a dialysis patient, it is recommended that certain high-risk contrast agents be avoided, but not that a lower dose be considered. The American College of Radiology recommends that contrast-enhanced MRI examinations be performed as closely before dialysis as possible as a precautionary measure, although this has not been proven to reduce the likelihood of developing NSF. The FDA recommends that the potential for gadolinium retention be considered when choosing the type of GBCA used in patients requiring multiple lifetime doses, pregnant women, children, and patients with inflammatory conditions. Anaphylactoid reactions are rare, occurring in approximately 0.03–0.1% of cases. The long-term environmental impacts of gadolinium contamination due to human usage are a topic of ongoing research. Biological use: Gadolinium has no known native biological role, but its compounds are used as research tools in biomedicine. Gd3+ compounds are components of MRI contrast agents. Gadolinium is used in various ion-channel electrophysiology experiments to block sodium leak channels and stretch-activated ion channels. It has recently been used to measure the distance between two points in a protein via electron paramagnetic resonance, something gadolinium is especially amenable to thanks to EPR sensitivity at W-band (95 GHz) frequencies.
**Higher local field** Higher local field: In mathematics, a higher (-dimensional) local field is an important example of a complete discrete valuation field. Such fields are also sometimes called multi-dimensional local fields. Higher local field: On the usual local fields (typically completions of number fields or the quotient fields of local rings of algebraic curves) there is a unique surjective discrete valuation (of rank 1) associated to a choice of a local parameter of the field, unless they are archimedean local fields such as the real numbers and complex numbers. Similarly, there is a discrete valuation of rank n on almost all n-dimensional local fields, associated to a choice of n local parameters of the field. In contrast to one-dimensional local fields, higher local fields have a sequence of residue fields. There are different integral structures on higher local fields, depending on how much information about the residue fields one wants to take into account. Geometrically, higher local fields appear via a process of localization and completion of local rings of higher-dimensional schemes. Higher local fields are an important part of the subject of higher-dimensional number theory, forming the appropriate collection of objects for local considerations. Definition: Finite fields have dimension 0, and complete discrete valuation fields with finite residue field have dimension one (it is natural to also define archimedean local fields such as R or C to have dimension 1); we then say a complete discrete valuation field has dimension n if its residue field has dimension n−1. Higher local fields are those of dimension greater than one, while one-dimensional local fields are the traditional local fields. We call the residue field of a finite-dimensional higher local field the 'first' residue field; its residue field is then the second residue field, and the pattern continues until we reach a finite field. Examples: Two-dimensional local fields are divided into the following classes: Fields of positive characteristic: these are formal power series in a variable t over a one-dimensional local field, i.e. Fq((u))((t)). Equicharacteristic fields of characteristic zero: formal power series F((t)) over a one-dimensional local field F of characteristic zero. Examples: Mixed-characteristic fields: finite extensions of fields of type F{{t}}, where F is a one-dimensional local field of characteristic zero. This field is defined as the set of formal power series, infinite in both directions, with coefficients from F, such that the minimum of the valuations of the coefficients is an integer, and such that the coefficients tend to zero (their valuations tend to infinity) as the index goes to minus infinity. Examples: Archimedean two-dimensional local fields: formal power series over the real numbers R or the complex numbers C. Constructions: Higher local fields appear in a variety of contexts. A geometric example is as follows. Given a surface over a finite field of characteristic p, a curve on the surface and a point on the curve, take the local ring at the point. Then complete this ring, localise it at the curve and complete the resulting ring. Finally, take the quotient field. The result is a two-dimensional local field over a finite field. There is also a construction using commutative algebra, which becomes technical for non-regular rings. The starting point is a Noetherian, regular, n-dimensional ring and a full flag of prime ideals such that their corresponding quotient rings are regular.
A series of completions and localisations takes place as above until an n-dimensional local field is reached. Topologies on higher local fields: One-dimensional local fields are usually considered in the valuation topology, in which the discrete valuation is used to define open sets. This does not suffice for higher-dimensional local fields, since one needs to take the topology at the residue level into account too. Higher local fields can be endowed with appropriate topologies (not uniquely defined) which address this issue. Such topologies are not the topologies associated with discrete valuations of rank n, if n > 1. In dimension two and higher the additive group of the field becomes a topological group which is not locally compact, and the base of the topology is not countable. Most surprisingly, the multiplication is not continuous; however, it is sequentially continuous, which suffices for all reasonable arithmetic purposes. There are also iterated Ind–Pro approaches to replace topological considerations by more formal ones. Measure, integration and harmonic analysis on higher local fields: There is no translation-invariant measure on two-dimensional local fields. Instead, there is a finitely additive translation-invariant measure defined on the ring of sets generated by closed balls with respect to two-dimensional discrete valuations on the field, taking values in formal power series R((X)) over the reals. This measure is also countably additive in a certain refined sense. It can be viewed as a higher Haar measure on higher local fields. The additive group of every higher local field is non-canonically self-dual, and one can define a higher Fourier transform on appropriate spaces of functions. This leads to higher harmonic analysis. Higher local class field theory: Local class field theory in dimension one has its analogues in higher dimensions. The appropriate replacement for the multiplicative group is the nth Milnor K-group, where n is the dimension of the field, which then appears as the domain of a reciprocity map to the Galois group of the maximal abelian extension of the field. Even better is to work with the quotient of the nth Milnor K-group by its subgroup of elements divisible by every positive integer. By a theorem of Fesenko, this quotient can also be viewed as the maximal separated topological quotient of the K-group endowed with the appropriate higher-dimensional topology. The higher local reciprocity homomorphism from this quotient of the nth Milnor K-group to the Galois group of the maximal abelian extension of the higher local field has many features similar to those of one-dimensional local class field theory. Higher local class field theory: Higher local class field theory is compatible with class field theory at the residue-field level, using the boundary map of Milnor K-theory to create a commutative diagram involving the reciprocity maps on the level of the field and the residue field. General higher local class field theory was developed by Kazuya Kato and by Ivan Fesenko. Higher local class field theory in positive characteristic was proposed by Aleksei Parshin.
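To make the rank-2 valuation concrete, here is a minimal sketch (Python, written for this article rather than drawn from any library) of the rank-2 discrete valuation on the two-dimensional local field Fq((u))((t)). A nonzero element is truncated to finitely many terms and represented as a dictionary mapping exponent pairs (j, i) of (t, u) to nonzero coefficients; the valuation takes values in Z × Z with the lexicographic order, reflecting the choice of the two local parameters t and u:

```python
def rank2_valuation(f):
    """Rank-2 valuation on (a truncation of) F_q((u))((t)).

    f is a dict {(j, i): c} representing a sum of c * t**j * u**i with c != 0.
    Returns (v_t, v_u): the t-adic valuation first, then the u-adic valuation
    of the leading t-coefficient, ordered lexicographically.
    """
    if not f:
        raise ValueError("v(0) is undefined (conventionally +infinity)")
    vt = min(j for (j, i) in f)             # t-adic valuation
    vu = min(i for (j, i) in f if j == vt)  # valuation in the residue direction
    return (vt, vu)

# v(fg) = v(f) + v(g) componentwise, and v(f + g) >= min(v(f), v(g))
# lexicographically, as expected of a discrete valuation of rank 2.
f = {(-1, 2): 1, (0, 0): 3}   # t^-1 * u^2 + 3
g = {(0, -5): 1, (1, 0): 7}   # u^-5 + 7*t
print(rank2_valuation(f))     # (-1, 2)
print(rank2_valuation(g))     # (0, -5)
```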
**Dive Coaster** Dive Coaster: The Dive Coaster is a steel roller coaster model developed and engineered by Bolliger & Mabillard. The design features one or more near-vertical drops of approximately 90 degrees, which provide a moment of free-falling for passengers. The experience is enhanced by unique trains that seat up to ten riders per row, spanning only two or three rows in total. Unlike traditional train designs, this distinguishing aspect gives all passengers virtually the same experience throughout the course of the ride. Another defining characteristic of Dive Coasters is the holding brake at the top of the lift hill, which holds the train momentarily just as it enters the first drop, suspending some passengers with a view looking straight down before releasing suddenly moments later. Dive Coaster: Development of the Dive Coaster began between 1994 and 1995, with Oblivion at Alton Towers opening on March 14, 1998, as the world's first Dive Coaster. The trains for this type of coaster are relatively short, consisting of two to three cars. B&M also uses floorless trains on this model to enhance the experience. As of July 30, 2022, sixteen Dive Coasters have been built, the newest being Dr. Diabolical's Cliffhanger at Six Flags Fiesta Texas, which features a drop of 150 feet (46 meters). Featuring a height of 68 m (223 ft), a length of 1,105 m (3,625 ft), and a maximum speed of 130 km/h (81 mph), Yukon Striker, as of 2019, was the world's tallest, longest, and fastest Dive Coaster. History: According to Walter Bolliger, development of the Dive Coaster began between 1994 and 1995. On March 14, 1998, the world's first Dive Coaster, Oblivion, opened at Alton Towers. Though Oblivion is classified as a Dive Coaster, it does not have a true vertical drop, as the drop angle is 87 degrees. Two years later, the second Dive Coaster built, Diving Machine G5, opened at Janfusun Fancyworld; it also lacks a vertical drop. In 2005, SheiKra opened at Busch Gardens Tampa Bay and was the first Dive Coaster to feature a 90-degree drop and a splashdown element. In 2007, Busch Gardens Williamsburg announced that Griffon would be the first Dive Coaster to feature floorless trains and that SheiKra would have its trains replaced with floorless ones. In 2011, the first 'mini' Dive Coaster, named Krake, opened at Heide Park Resort. Unlike other Dive Coasters, Krake has smaller trains consisting of three rows of six riders. In 2019, Yukon Striker at Canada's Wonderland became the first Dive Coaster to feature a vertical loop, giving it the most inversions on a Dive Coaster, with four in total. On July 30, 2022, Dr. Diabolical's Cliffhanger opened at Six Flags Fiesta Texas as the first B&M Dive Coaster to feature a beyond-vertical (95°) drop and 7-across seating. Design: The design of a Dive Coaster can vary slightly from one installation to another. Depending on the amusement park's request, one row on the train can seat anywhere from 6 to 10 riders. Stadium seating is also used to give every rider a clear view. Compared to standard Bolliger & Mabillard 4-abreast cars, the extra weight of each train on a Dive Coaster requires the track to be larger than on other B&M models (such as the Hyper Coaster) to support the weight. At the top of the primary vertical drop, a braking system holds the train for 3 to 5 seconds, giving riders a view of the drop ahead before being released into it. In the station, Dive Coasters that use non-floorless trains simply use a standard station.
With Dive Coasters that use floorless trains, a movable floor is necessary to allow riders to load and unload the train. Because the front row has nothing in front of it to stop riders from walking over the edge of the station, a gate is placed in front of the train to prevent this. Once all the over-the-shoulder restraints are locked, the gate opens and the floor separates into several pieces that move underneath the station. When the next train enters the station, the gate is closed and the floor is brought back up so the next riders can board. Installations: Bolliger & Mabillard has built sixteen Dive Coasters as of 2023, with only one no longer in operation. Similar coasters: HangTime, a Gerstlauer roller coaster located at Knott's Berry Farm in Buena Park, California, has been marketed as "the first dive coaster on the West Coast". In 2018, Golden Horse, a Chinese amusement ride manufacturer infamous for creating knock-off coasters and rides, installed a Dive Coaster at Great Xingdong Tourist World. Its trains contain four cars, each seating 6 riders per row, compared to B&M Dive Coasters, which have two or three cars per train.
**Epidemiological method** Epidemiological method: The science of epidemiology has matured significantly since the times of Hippocrates, Semmelweis and John Snow. The techniques for gathering and analyzing epidemiological data vary depending on the type of disease being monitored, but each study will have overarching similarities. Outline of the process of an epidemiological study: Establish that a problem exists: Full epidemiological studies are expensive and laborious undertakings, so before any study is started, a case must be made for the importance of the research. Confirm the homogeneity of the events: Any conclusions drawn from inhomogeneous cases will be suspect, so all events or occurrences of the disease must be true cases of the disease. Collect all the events: It is important to collect as much information as possible about each event in order to inspect a large number of possible risk factors. The events may be collected from varied methods of epidemiological study or from censuses or hospital records. The events can be characterized by incidence rates and prevalence rates. Often, occurrence of a single disease entity is set as an event; given the inherently heterogeneous nature of any given disease (i.e., the unique disease principle), a single disease entity may be treated as disease subtypes. This framework is well conceptualized in the interdisciplinary field of molecular pathological epidemiology (MPE). Characterize the events as to epidemiological factors: Predisposing factors are non-environmental factors that increase the likelihood of getting a disease; genetic history, age, and gender are examples. Enabling/disabling factors are factors relating to the environment that either increase or decrease the likelihood of disease; exercise and good diet are examples of disabling factors, while a weakened immune system and poor nutrition are examples of enabling factors. Precipitating factors are the most important in that they identify the source of exposure, which may be a germ, toxin or gene. Reinforcing factors are factors that compound the likelihood of getting a disease; they may include repeated exposure or excessive environmental stresses. Look for patterns and trends: Here one looks for similarities in the cases which may identify major risk factors for contracting the disease. Epidemic curves may be used to identify such risk factors. Formulate a hypothesis: If a trend has been observed in the cases, the researcher may postulate as to the nature of the relationship between the potential disease-causing agent and the disease. Test the hypothesis: Because epidemiological studies can rarely be conducted in a laboratory, the results are often confounded by uncontrollable variations in the cases, which can make them difficult to interpret. Two methods have evolved to assess the strength of the relationship between the disease-causing agent and the disease. Koch's postulates were the first criteria developed for epidemiological relationships; because they only work well for highly contagious bacteria and toxins, this method is largely out of favor. The Bradford Hill criteria are the current standards for epidemiological relationships; a relationship may fill all, some, or none of the criteria and still be true. Publish the results. Measures: Epidemiologists make extensive use of rates. Each measure serves to characterize the disease, giving valuable information about contagiousness, incubation period, duration, and mortality of the disease.
Measures: Measures of occurrence include incidence measures (incidence rate, where cases included are defined using a case definition; hazard rate; cumulative incidence) and prevalence measures (point prevalence; period prevalence). Measures of association include relative measures (risk ratio, rate ratio, odds ratio, hazard ratio) and absolute measures (absolute risk reduction, attributable risk, attributable risk in exposed, percent attributable risk, Levin's attributable risk). Other measures include virulence and infectivity, mortality rate and morbidity rate, case fatality, and sensitivity and specificity of tests. Limitations: Epidemiological (and other observational) studies typically highlight associations between exposures and outcomes, rather than causation. While some consider this a limitation of observational research, epidemiological models of causation (e.g. the Bradford Hill criteria) contend that an entire body of evidence is needed before determining if an association is truly causal. Moreover, many research questions are impossible to study in experimental settings, due to concerns around ethics and study validity. For example, the link between cigarette smoke and lung cancer was uncovered largely through observational research; research ethics would certainly prohibit conducting a randomized trial of cigarette smoking once it had already been identified as a potential health threat.
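As an illustration of the measures of association listed above, here is a minimal Python sketch (not part of the original text) computing the risk ratio, odds ratio and attributable risk from a hypothetical two-by-two table of exposure versus disease; the counts are invented for the example.

```python
# A minimal sketch (not from the article) of three measures of association,
# computed from a hypothetical 2x2 table:
#
#                 disease   no disease
#   exposed          a           b
#   unexposed        c           d

def risk_ratio(a: int, b: int, c: int, d: int) -> float:
    """Risk ratio: incidence proportion in the exposed over the unexposed."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio: odds of disease in the exposed over the unexposed."""
    return (a * d) / (b * c)

def attributable_risk(a: int, b: int, c: int, d: int) -> float:
    """Absolute difference in risk between the exposed and the unexposed."""
    return a / (a + b) - c / (c + d)

# Invented example: 30/100 exposed and 10/100 unexposed develop the disease.
print(risk_ratio(30, 70, 10, 90))         # 3.0
print(odds_ratio(30, 70, 10, 90))         # ~3.86
print(attributable_risk(30, 70, 10, 90))  # 0.2
```

A risk ratio of 3.0 is read as the exposed group having three times the risk of disease of the unexposed group; whether that association is causal is what frameworks like the Bradford Hill criteria are used to assess.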
**Discovery and development of direct thrombin inhibitors** Discovery and development of direct thrombin inhibitors: Direct thrombin inhibitors (DTIs) are a class of anticoagulant drugs that can be used to prevent and treat embolisms and blood clots caused by various diseases. They inhibit thrombin, a serine protease which affects the coagulation cascade in many ways. DTIs have undergone rapid development since the 1990s. Technological advances in genetic engineering made the production of recombinant hirudin possible, which opened the door to this new group of drugs. Before the use of DTIs, the therapy and prophylaxis for anticoagulation had stayed the same for over 50 years with the use of heparin derivatives and warfarin, which have some well-known disadvantages. DTIs are still under development, but the research focus has shifted towards factor Xa inhibitors, or even dual thrombin and fXa inhibitors that have a broader mechanism of action by inhibiting both factor IIa (thrombin) and factor Xa. A recent review of patents and literature on thrombin inhibitors has demonstrated that the development of allosteric and multi-mechanism inhibitors might lead the way to a safer anticoagulant. History: Anticoagulation therapy has a long history. In 1884 John Berry Haycraft described a substance found in the saliva of the leech Hirudo medicinalis that had anticoagulant effects; he named the substance 'hirudine' after the leech's Latin name. The use of medicinal leeches can be dated back all the way to ancient Egypt. In the early 20th century Jay McLean, L. Emmet Holt Jr. and William Henry Howell discovered the anticoagulant heparin, which they isolated from the liver (hepar). Heparin remains one of the most effective anticoagulants and is still used today, although it has its disadvantages, such as requiring intravenous administration and having a variable dose-response curve due to substantial protein binding. In the 1980s low-molecular-weight heparins (LMWHs) were developed. They are derived from heparin by enzymatic or chemical depolymerization and have better pharmacokinetic properties than heparin. In 1955 the first clinical use of warfarin, a vitamin K antagonist, was reported. Warfarin was originally introduced as a rat poison in 1948 and thought to be unsafe for humans, but a survived suicide attempt suggested that it was relatively safe for humans. Vitamin K antagonists are the most commonly used oral anticoagulants today; warfarin was the 11th most prescribed drug in the United States in 1999 and is the most widely prescribed oral anticoagulant worldwide. Warfarin has its disadvantages though, just like heparin, such as a narrow therapeutic index and multiple food and drug interactions, and it requires routine anticoagulation monitoring and dose adjustment. Since both heparin and warfarin have their downsides, the search for alternative anticoagulants has been ongoing, and DTIs are proving to be worthy competitors. The first DTI was actually hirudin, which became more easily available with genetic engineering. It is now available in a recombinant form as lepirudin (Refludan) and desirudin (Revasc, Iprivask). Development of other DTIs followed with the hirudin analog bivalirudin, and then the small molecular DTIs. However, these DTIs also had side effects such as bleeding complications and liver toxicity, and their long-term effects were in doubt.
Mechanism of action: Blood clotting cascade. When a blood vessel ruptures or gets injured, factor VII comes into contact with tissue factor, which starts a process called the blood coagulation cascade. Its purpose is to stop bleeding and repair tissue damage. When this process is too active due to various problems, the risk of blood clots or embolisms increases. As the name indicates, the cascade is a multi-step procedure in which the main product, thrombin, is made by activating various proenzymes (mainly serine proteases) at each step. Thrombin has multiple purposes, but mainly it converts soluble fibrinogen to an insoluble fibrin complex. Furthermore, it activates factors V, VIII and XI, in each case by cleaving the sequences GlyGlyGlyValArg-GlyPro and PhePheSerAlaArg-GlyHis selectively between arginine (Arg) and glycine (Gly). These factors generate more thrombin. Thrombin also activates factor XIII, which stabilizes the fibrin complex and therefore the clot, and it stimulates platelets, which help with the coagulation. Given this broad action of thrombin, it stands as a good drug target for anticoagulant drugs such as heparin, warfarin and DTIs, and for antiplatelet drugs like aspirin. Mechanism of action: Binding sites. Thrombin is in the serine protease family. It has 3 binding domains to which thrombin-inhibiting drugs bind. These proteases have a deep, narrow gap as an active binding site, consisting of two β-barrel subdomains that form the surface gap which binds substrate peptides. The surface of the gap limits access of molecules by steric hindrance; the catalytic site consists of 3 amino acids, Asp-102, His-57 and Ser-195. Thrombin also has two exosites (1 and 2). Thrombin is a little different from other serine proteases in that exosite 1 is anion-binding and binds to fibrin and other similar substrates, while exosite 2 is a heparin-binding domain. Mechanism of action: DTIs inhibit thrombin in two ways: bivalent DTIs simultaneously block the active site and exosite 1 and act as competitive inhibitors with respect to fibrin, while univalent DTIs block only the active site and can therefore inhibit both unbound and fibrin-bound thrombin. In contrast, heparin drugs bind to exosite 2 and form a bridge between thrombin and antithrombin, a natural anticoagulant substrate formed in the body, strongly catalyzing its function. But heparin can also form a bridge between thrombin and fibrin by binding to exosite 1, which protects the thrombin from the inhibiting function of the heparin-antithrombin complex and increases thrombin's affinity for fibrin. DTIs that bind to the anion-binding site have been shown to inactivate thrombin without disconnecting thrombin from fibrin, which points to a separate binding site for fibrin. DTIs are not dependent on cofactors like antithrombin to inhibit thrombin, so unlike heparins they can inhibit free/soluble thrombin as well as fibrin-bound thrombin. The inhibition is either irreversible or reversible; reversible inhibition is often linked to a lesser risk of bleeding. Due to this action of DTIs, they can be used both for prophylaxis and for treatment of embolisms/clots. Mechanism of action: Active site's pockets. DTIs that fit in the active binding site have to fit in the hydrophobic pocket (S1), which contains an aspartic acid residue at the bottom that recognizes basic side chains. The S2 site has a loop around tryptophan which occludes a hydrophobic pocket that can recognize larger aliphatic residues.
The S3 site is flat, and the S4 site is hydrophobic, with tryptophan lined by leucine and isoleucine. Mechanism of action: Nα-(2-naphthyl-sulphonyl-glycyl)-DL-p-amidinophenylalanyl-piperidine (NAPAP) binds thrombin in the S1, S2 and S4 pockets. The amidine group of NAPAP forms a bidentate salt bridge with Asp deep in the S1 pocket, the piperidine group takes the role of a proline residue and binds in the S2 pocket, and the naphthyl rings of the molecule form a hydrophobic interaction with Trp in the S4 pocket. Pharmaceutical companies have used the structural knowledge of NAPAP to develop DTIs. Dabigatran, like NAPAP, binds to the S1, S2 and S4 pockets: the benzamidine group of the dabigatran structure binds deep in the S1 pocket, the methylbenzimidazole fits nicely in the hydrophobic S2 pocket, and the Ile and Leu at the bottom of the S4 pocket bind to the aromatic group of dabigatran. Drug development: Hirudin derivatives. Hirudin derivatives are all bivalent DTIs; they block both the active site and exosite 1 in an irreversible 1:1 stoichiometric complex. The active site is the binding site for the globular amino-terminal domain, and exosite 1 is the binding site for the acidic carboxy-terminal domain of hirudin. Native hirudin, a 65-amino-acid polypeptide, is produced in the parapharyngeal glands of medicinal leeches. Hirudins today are produced by recombinant biotechnology using yeast. These recombinant hirudins lack a sulfate group at Tyr-63 and are therefore called desulfatohirudins. They have a 10-fold lower binding affinity to thrombin compared to native hirudin, but remain highly specific inhibitors of thrombin, with an inhibition constant for thrombin in the picomolar range. Renal clearance and degradation account for most of the systemic clearance of desulfatohirudins, and the drug accumulates in patients with chronic kidney disease. These drugs should not be used in patients with impaired renal function, since there is no specific antidote available to reverse their effects. Hirudins are given parenterally, usually by intravenous injection. 80% of hirudin is distributed in the extravascular compartment and only 20% is found in the plasma. The most common desulfatohirudins today are lepirudin and desirudin. Drug development: Hirudin. In a meta-analysis of 11 randomized trials of hirudin and other DTIs versus heparin in the treatment of acute coronary syndrome (ACS), it was found that hirudin has a significantly higher incidence of bleeding compared with heparin. Hirudin is therefore not recommended for treatment of ACS and currently has no clinical indications. Drug development: Lepirudin. Lepirudin is approved for the treatment of heparin-induced thrombocytopenia (HIT) in the USA, Canada, Europe and Australia. HIT is a very serious adverse event related to heparin and occurs with both unfractionated heparin and LMWH, although to a lesser extent with the latter. It is an immune-mediated, prothrombotic complication which results from a platelet-activating immune response triggered by the interaction of heparin with platelet factor 4 (PF4). The PF4-heparin complex can activate platelets and may cause venous and arterial thrombosis. When lepirudin binds to thrombin it hinders its prothrombotic activity. Three prospective studies, called the Heparin-Associated Thrombocytopenia (HAT) studies 1, 2, and 3, compared lepirudin with historical controls in the treatment of HIT.
All three studies showed that the risk of new thrombosis was decreased with the use of lepirudin, but the risk of major bleeding was increased. The higher incidence of major bleeding is thought to be the result of an approved dosing regimen that was too high; consequently, the recommended dose was halved from the initial dose. Drug development: As of May 2012, Bayer HealthCare, the only manufacturer of lepirudin injections, discontinued its production, expecting supplies from wholesalers to be depleted by mid-2013. Drug development: Desirudin. Desirudin is approved for treatment of venous thromboembolism (VTE) in Europe, and multiple phase III trials are presently ongoing in the USA. Two studies comparing desirudin with enoxaparin (a LMWH) or unfractionated heparin have been performed. In both studies desirudin was considered superior in preventing VTE. Desirudin also reduced the rate of proximal deep vein thrombosis. Bleeding rates were similar with desirudin and heparin. Drug development: Bivalirudin. Bivalirudin, a 20-amino-acid polypeptide, is a synthetic analog of hirudin. Like the hirudins it is a bivalent DTI. It has an amino-terminal D-Phe-Pro-Arg-Pro domain that is linked via four Gly residues to a dodecapeptide analog of the carboxy-terminal of hirudin. The amino-terminal domain binds to the active site and the carboxy-terminal domain binds to exosite 1 on thrombin. Unlike with the hirudins, once bound, thrombin cleaves the Arg-Pro bond at the amino-terminal of bivalirudin, which restores function to the active site of the enzyme. Even though the carboxy-terminal domain of bivalirudin is still bound to exosite 1 on thrombin, the affinity of the bond is decreased after the amino-terminal is released. This allows substrates to compete with cleaved bivalirudin for access to exosite 1 on thrombin. The use of bivalirudin has mostly been studied in the setting of acute coronary syndrome. A few studies indicate that bivalirudin is non-inferior compared to heparin and that bivalirudin is associated with a lower rate of bleeding. Unlike the hirudins, bivalirudin is only partially (about 20%) excreted by the kidneys; other routes such as hepatic metabolism and proteolysis also contribute to its elimination, making it safer to use in patients with renal impairment, although dose adjustments are needed in severe renal impairment. Drug development: Small molecular direct thrombin inhibitors. Small molecular direct thrombin inhibitors (smDTIs) are non-peptide small molecules that specifically and reversibly inhibit both free and clot-bound thrombin by binding to the active site of the thrombin molecule. They prevent VTE in patients undergoing hip- and knee-replacement surgery. The advantages of this type of DTI are that they do not need monitoring, have a wide therapeutic index, and offer the possibility of an oral administration route. They are theoretically more convenient than both vitamin K antagonists and LMWHs. Research will, however, have to establish their indications and safety. The smDTIs were derived using a peptidomimetic design, with the P1 residue being either arginine itself (e.g. argatroban) or an arginine-like group such as benzamidine (e.g. NAPAP). Drug development: Argatroban. Argatroban is a small univalent DTI built around an arginine P1 residue. It binds to the active site on thrombin.
The X-ray crystal structure shows that the piperidine ring binds in the S2 pocket and the guanidine group binds via hydrogen bonds to Asp189 in the S1 pocket. It is given as an intravenous bolus because the highly basic guanidine, with a pKa of 13, prevents it from being absorbed from the gastrointestinal tract. The plasma half-life is approximately 45 minutes. As argatroban is metabolized via the hepatic pathway and mainly excreted through the biliary system, dose adjustments are necessary in patients with hepatic impairment but not in those with renal damage. Argatroban has been approved in the USA since 2000 for the treatment of thrombosis in patients with HIT, and since 2002 for anticoagulation in patients who have a history of HIT or are at risk of HIT while undergoing percutaneous coronary interventions (PCI). It was first introduced in Japan in 1990 for treatment of peripheral vascular disorders. Drug development: Ximelagatran. The publication of the NAPAP-fIIa crystal structure triggered much research on thrombin inhibitors. NAPAP is an active site thrombin inhibitor; it fills the S3 and S2 pockets with its naphthalene and piperidine groups. AstraZeneca used this information to develop melagatran. The compound was poorly orally available, but after further modification a double prodrug was obtained, ximelagatran, which became the first oral DTI in clinical trials. Ximelagatran was on the European market for approximately 20 months before it was suspended: studies showed that treatment for over 35 days was linked with a risk of hepatic toxicity. It was never approved by the FDA. Drug development: Dabigatran etexilate. Researchers at Boehringer Ingelheim also used the published NAPAP-fIIa crystal structure, starting with the NAPAP structure, and this led to the discovery of dabigatran, which is a very polar compound and therefore not orally active. By masking the amidinium moiety as a carbamate ester and turning the carboxylate into an ester, they were able to make a prodrug called dabigatran etexilate, a highly lipophilic, gastrointestinally absorbed and orally bioavailable double prodrug like ximelagatran, with a plasma half-life of approximately 12 hours. Dabigatran etexilate is rapidly absorbed, lacks interactions with cytochrome P450 enzymes and with food and other drugs, requires no routine monitoring, and has a broad therapeutic index and fixed-dose administration, giving it an excellent safety profile compared with warfarin. Unlike ximelagatran, long-term treatment with dabigatran etexilate has not been linked with hepatic toxicity; the drug is predominantly eliminated (>80%) by the kidneys. Dabigatran etexilate was approved in Canada and Europe in 2008 for the prevention of VTE in patients undergoing hip- and knee surgery. In October 2010 the US FDA approved dabigatran etexilate for the prevention of stroke in patients with atrial fibrillation (AF). Many pharmaceutical companies have attempted to develop orally bioavailable DTI drugs, but dabigatran etexilate is the only one to reach the market. In a 2012 meta-analysis, dabigatran was associated with an increased risk of myocardial infarction (MI) or ACS when tested against different controls in a broad spectrum of patients.
Drug development: Table 1: Summary of characteristics of DTIs (table not reproduced). Abbreviations used in the table: iv: intravenous; sc: subcutaneous; HIT: heparin-induced thrombocytopenia; VTE: venous thromboembolism; DVT: deep vein thrombosis; PTCA: percutaneous transluminal coronary angioplasty; PCI: percutaneous coronary intervention; FDA: Food and Drug Administration; AF: atrial fibrillation; TI: therapeutic index. Status 2014: In 2014 dabigatran remains the only approved oral DTI and is therefore the only DTI alternative to the vitamin K antagonists. There are, however, some novel oral anticoagulant drugs currently in early and advanced stages of clinical development. Most of those drugs are in the class of direct factor Xa inhibitors, but there is one DTI called AZD0837, a follow-up compound of ximelagatran developed by AstraZeneca. It is the prodrug of a potent, competitive, reversible inhibitor of free and fibrin-bound thrombin called ARH0637. The development of AZD0837 has been discontinued. Due to a limitation identified in the long-term stability of the extended-release AZD0837 drug product, a follow-up study from ASSURE on stroke prevention in patients with non-valvular atrial fibrillation was prematurely closed in 2010 after 2 years. There was also a numerically higher mortality compared with warfarin. In a phase 2 trial for AF, the mean serum creatinine concentration increased by about 10% from baseline in patients treated with AZD0837, and returned to baseline after cessation of therapy. Development of other oral DTIs, such as sofigatran from Mitsubishi Tanabe Pharma, has been discontinued. Status 2014: Another strategy for developing oral anticoagulant drugs is that of dual thrombin and fXa inhibitors, which some pharmaceutical companies, including Boehringer Ingelheim, have reported on. These compounds show favorable anticoagulant activity in vitro.
**ORYX** ORYX: ORYX is an encryption algorithm used in cellular communications to protect data traffic. It is a stream cipher designed to have a very strong 96-bit key strength, with a way to reduce the strength to 32 bits for export. However, due to design mistakes the actual strength is a trivial 16 bits, and any signal can be cracked after the first 25–27 bytes. It is one of the four cryptographic primitives standardized by the TIA for use in its digital cellular communications standards TDMA and CDMA. Algorithm description: ORYX is a simple stream cipher based on binary linear-feedback shift registers (LFSRs), used to protect cellular data transmissions (for wireless data services). The cipher has four components: three 32-bit LFSRs, labeled LFSRA, LFSRB and LFSRK, and an S-box containing a known permutation P of the integer values 0 to 255. Algorithm description: The feedback function for LFSRK is defined as: L(t+32) = L(t+28) ⊕ L(t+19) ⊕ L(t+18) ⊕ L(t+16) ⊕ L(t+14) ⊕ L(t+11) ⊕ L(t+10) ⊕ L(t+9) ⊕ L(t+6) ⊕ L(t+5) ⊕ L(t+1) ⊕ L(t). The two feedback functions for LFSRA are defined as: L(t+32) = L(t+26) ⊕ L(t+23) ⊕ L(t+22) ⊕ L(t+16) ⊕ L(t+12) ⊕ L(t+11) ⊕ L(t+10) ⊕ L(t+8) ⊕ L(t+7) ⊕ L(t+5) ⊕ L(t+4) ⊕ L(t+2) ⊕ L(t+1) ⊕ L(t) and L(t+32) = L(t+27) ⊕ L(t+26) ⊕ L(t+25) ⊕ L(t+24) ⊕ L(t+23) ⊕ L(t+22) ⊕ L(t+17) ⊕ L(t+13) ⊕ L(t+11) ⊕ L(t+10) ⊕ L(t+9) ⊕ L(t+8) ⊕ L(t+7) ⊕ L(t+2) ⊕ L(t+1) ⊕ L(t). The feedback function for LFSRB is: L(t+32) = L(t+31) ⊕ L(t+21) ⊕ L(t+20) ⊕ L(t+16) ⊕ L(t+15) ⊕ L(t+6) ⊕ L(t+3) ⊕ L(t+1) ⊕ L(t).
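To make the recurrences concrete, the following minimal Python sketch clocks a 32-bit register according to the LFSRK feedback function above, with the tap positions taken directly from the equation. It illustrates the register update only; the full ORYX keystream generation, which also combines register bytes through the S-box permutation P, is not reproduced here, and the seed value is an arbitrary example.

```python
# A minimal sketch of the LFSRK recurrence above, clocked as a Fibonacci-style
# LFSR. Taps come from L(t+32) = L(t+28) xor L(t+19) xor ... xor L(t).

K_TAPS = (28, 19, 18, 16, 14, 11, 10, 9, 6, 5, 1, 0)

def lfsr_k_step(state: int) -> int:
    """Clock the 32-bit register once; bit i of `state` holds L(t+i)."""
    feedback = 0
    for tap in K_TAPS:
        feedback ^= (state >> tap) & 1            # XOR the tapped bits
    # Shift the register down one position and feed the new bit in at the top.
    return ((state >> 1) | (feedback << 31)) & 0xFFFFFFFF

state = 0x2545F4A1                                # arbitrary nonzero seed
for _ in range(5):
    state = lfsr_k_step(state)
    print(f"{state:08x}")
```

The other three recurrences differ only in their tap tuples, so the same step function covers all of them once the taps are swapped in.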
**Unfolded protein response** Unfolded protein response: The unfolded protein response (UPR) is a cellular stress response related to endoplasmic reticulum (ER) stress. It has been found to be conserved across mammalian species, as well as in yeast and worms. Unfolded protein response: The UPR is activated in response to an accumulation of unfolded or misfolded proteins in the lumen of the endoplasmic reticulum. In this scenario, the UPR has three aims: initially to restore normal function of the cell by halting protein translation, degrading misfolded proteins, and activating the signalling pathways that lead to increased production of the molecular chaperones involved in protein folding. If these objectives are not achieved within a certain time span or the disruption is prolonged, the UPR directs the cell towards apoptosis. Unfolded protein response: Sustained overactivation of the UPR has been implicated in prion diseases as well as several other neurodegenerative diseases, and inhibiting the UPR could become a treatment for those diseases. Diseases amenable to UPR inhibition include Creutzfeldt–Jakob disease, Alzheimer's disease, Parkinson's disease, and Huntington's disease. Protein folding in the endoplasmic reticulum: Protein synthesis. The term protein folding incorporates all the processes involved in the production of a protein after the nascent polypeptide has been synthesized by the ribosome. Proteins destined to be secreted or sorted to other cell organelles carry an N-terminal signal sequence that interacts with a signal recognition particle (SRP). The SRP leads the whole complex (ribosome, RNA, polypeptide) to the ER membrane. Once the sequence has "docked", translation of the protein continues, with the resultant strand being fed through the polypeptide translocator directly into the ER. Protein folding commences as soon as the polypeptide enters the luminal environment, even as translation of the remaining polypeptide continues. Protein folding in the endoplasmic reticulum: Protein folding and quality control. Protein folding steps involve a range of enzymes and molecular chaperones to coordinate and regulate reactions, in addition to a range of substrates required for the reactions to take place. The most important of these to note are N-linked glycosylation and disulfide bond formation. N-linked glycosylation occurs as soon as the protein sequence passes into the ER through the translocon, where it is glycosylated with a sugar molecule that forms the key ligand for the lectin molecules calreticulin (CRT; soluble in the ER lumen) and calnexin (CNX; membrane-bound). Favoured by the highly oxidizing environment of the ER, protein disulfide isomerases facilitate the formation of disulfide bonds, which confer structural stability on the protein so that it can withstand adverse conditions such as extremes of pH and degradative enzymes. Protein folding in the endoplasmic reticulum: The ER is capable of recognizing misfolded proteins without causing disruption to the functioning of the ER. The aforementioned sugar molecule remains the means by which the cell monitors protein folding, as a misfolded protein becomes characteristically devoid of glucose residues, targeting it for identification and re-glycosylation by the enzyme UGGT (UDP-glucose:glycoprotein glucosyltransferase).
If this fails to restore the normal folding process, exposed hydrophobic residues of the misfolded protein are bound by glucose-regulated protein 78 (Grp78), a heat shock protein 70 kDa family member that prevents the protein from further transit and secretion. Where circumstances continue to cause a particular protein to misfold, the protein is recognized as posing a threat to the proper functioning of the ER, since misfolded proteins can aggregate with one another and accumulate. In such circumstances the protein is guided through endoplasmic reticulum-associated degradation (ERAD). The chaperone EDEM guides the retrotranslocation of the misfolded protein back into the cytosol in transient complexes with PDI and Grp78. There it enters the ubiquitin-proteasome pathway: it is tagged by multiple ubiquitin molecules, targeting it for degradation by cytosolic proteasomes. Successful protein folding requires a tightly controlled environment of substrates that include glucose, to meet the metabolic energy requirements of the functioning molecular chaperones; calcium, which is stored bound to resident molecular chaperones; and redox buffers that maintain the oxidizing environment required for disulfide bond formation. Unsuccessful protein folding can be caused by HLA-B27, disturbing the balance of important signaling proteins (IL-10 and TNF); at least some of these disturbances depend on correct HLA-B27 folding. However, where circumstances cause a more global disruption to protein folding that overwhelms the ER's coping mechanisms, the UPR is activated. Molecular mechanism: Initiation. The molecular chaperone BiP/Grp78 has a range of functions within the ER. It maintains specific transmembrane receptor proteins involved in initiating the downstream signalling of the UPR in an inactive state by binding to their luminal domains. An overwhelming load of misfolded proteins, or simply the over-expression of proteins (e.g. IgG), requires more of the available BiP/Grp78 to bind to the exposed hydrophobic regions of these proteins, and consequently BiP/Grp78 dissociates from these receptor sites to meet this requirement. Dissociation from the intracellular receptor domains allows them to become active. PERK dimerizes with BiP in resting cells and oligomerizes in ER-stressed cells. Although this is traditionally the accepted model, doubts have been raised over its validity. It has been argued that the genetic and structural evidence supporting the model simply shows BiP dissociation to be merely correlated with Ire1 activation, rather than specifically causing it. An alternative model has been proposed, whereby unfolded proteins interact directly with the ER-lumenal domain of Ire1, causing oligomerization and trans-autophosphorylation. These models are not mutually exclusive, however: it is also possible that both direct interaction of Ire1 with unfolded proteins and dissociation of BiP from Ire1 contribute to activation of the Ire1 pathway. Molecular mechanism: Functions. The initial phases of UPR activation have two key roles. Translation attenuation and cell cycle arrest by the PERK receptor: this occurs within minutes to hours of UPR activation to prevent further translational loading of the ER. PERK (protein kinase RNA-like endoplasmic reticulum kinase) activates itself by oligomerization and autophosphorylation of the free luminal domain. The activated cytosolic domain causes translational attenuation by directly phosphorylating the α subunit of eIF2, the regulating initiator of the mRNA translation machinery.
This also produces translational attenuation of the protein machinery involved in running the cell cycle, producing cell cycle arrest in the G1 phase. PERK deficiency may have a significant impact on physiological states associated with ER stress. Molecular mechanism: Increased production of proteins involved in the functions of the UPR. UPR activation also results in upregulation of proteins involved in chaperoning misfolding proteins, protein folding and ERAD, including further production of Grp78. Ultimately this increases the cell's molecular capacity to deal with the misfolded protein load. These receptor proteins have been identified as: Inositol-requiring kinase 1, whose free luminal domain activates itself by homodimerisation and trans-autophosphorylation. The activated domain is able to activate the transcription factor XBP1 (X-box binding protein) mRNA (the mammalian equivalent of the yeast Hac1 mRNA) by cleavage and removal of a 26 bp intron. The activated transcription factor upregulates UPR 'stress genes' by binding directly to stress element promoters in the nucleus. Molecular mechanism: ATF6 (activating transcription factor 6) is a basic leucine zipper transcription factor. Upon Grp78 dissociation, the entire 90 kDa protein translocates to the Golgi, where it is cleaved by proteases to form an active 50 kDa transcription factor that translocates to the nucleus. It binds to stress element promoters upstream of genes that are upregulated in the UPR. The aim of these responses is to remove the accumulated protein load whilst preventing any further addition to the stress, so that normal function of the ER can be restored as soon as possible. Molecular mechanism: If the UPR pathway is activated in an abnormal fashion, such as when obesity triggers chronic ER stress and the pathway is constitutively active, it can lead to insensitivity to insulin signaling and thus insulin resistance. Individuals suffering from obesity have an elevated demand placed on the secretory and synthetic systems of their cells. This activates cellular stress signaling and inflammatory pathways because the abnormal conditions disrupt ER homeostasis. Molecular mechanism: A downstream effect of the ER stress is a significant decrease in insulin-stimulated phosphorylation of tyrosine residues of insulin receptor substrate 1 (IRS-1), which is the substrate for the insulin tyrosine kinase (the insulin receptor). c-Jun N-terminal kinase (JNK) is also activated at high levels by IRE-1α, which is itself phosphorylated, and thereby activated, in the presence of ER stress. Subsequently, JNK phosphorylates serine residues of IRS-1 and thus inhibits insulin receptor signaling. IRE-1α also recruits tumor necrosis factor receptor-associated factor 2 (TRAF2). This kinase cascade, dependent on IRE-1α and JNK, mediates ER stress-induced inhibition of insulin action. Obesity provides chronic cellular stimuli for the UPR pathway as a result of the stresses and strains placed upon the ER, and without restoration of normal cellular responsiveness to insulin hormone signaling, an individual becomes very likely to develop type 2 diabetes. Molecular mechanism: Skeletal muscles are sensitive to physiological stress, as exercise can impair ER homeostasis. This causes the expression of ER chaperones to be induced by the UPR in response to exercise-induced ER stress.
Muscular contraction during exercise causes calcium to be released from the sarcoplasmic reticulum (SR), a specialized ER network in skeletal muscles. This calcium then interacts with calcineurin and calcium/calmodulin-dependent kinases that in turn activate transcription factors. These transcription factors then proceed to alter the expression of exercise-regulated muscle genes. PGC-1α, a transcriptional coactivator, is a key transcription factor involved in mediating the UPR in a tissue-specific manner in skeletal muscles by coactivating ATF6α. PGC-1α is accordingly expressed in muscles after acute and long-term exercise training. The function of this transcription factor is to increase the number and function of mitochondria, as well as to induce a switch of skeletal fibers to slow oxidative muscle fibers, as these are fatigue-resistant. This UPR pathway therefore mediates changes in muscles that have undergone endurance training by making them more resistant to fatigue and protecting them from future stress. Molecular mechanism: Initiating apoptosis. In conditions of prolonged stress, the goal of the UPR changes from one that promotes cellular survival to one that commits the cell to a pathway of apoptosis. Proteins downstream of all 3 UPR receptor pathways have been identified as having pro-apoptotic roles. The point at which the 'apoptotic switch' is activated has not yet been determined, but it is logical that this should be beyond a certain time period in which resolution of the stress has not been achieved. The two principal UPR receptors involved are Ire1 and PERK. Molecular mechanism: By binding the protein TRAF2, Ire1 activates a JNK signaling pathway, at which point human procaspase 4 is believed to cause apoptosis by activating downstream caspases. Molecular mechanism: Although PERK is recognised to produce a translational block, certain genes can bypass this block. An important example is the proapoptotic protein CHOP (CCAAT/enhancer-binding protein homologous protein), which is upregulated downstream of the bZIP transcription factor ATF4 (activating transcription factor 4) and is uniquely responsive to ER stress. CHOP causes downregulation of the anti-apoptotic mitochondrial protein Bcl-2, favouring a pro-apoptotic drive at the mitochondria by proteins that cause mitochondrial damage, cytochrome c release and caspase 3 activation. Molecular mechanism: Diseases. Diseases amenable to UPR inhibition include Creutzfeldt–Jakob disease, Alzheimer's disease, Parkinson's disease, and Huntington's disease. Endoplasmic reticulum stress has been reported to play a major role in the induction and progression of non-alcoholic fatty liver disease (NAFLD). Rats fed a high-fat diet showed increased levels of the ER stress markers CHOP, XBP1, and GRP78. ER stress is known to activate hepatic de novo lipogenesis, inhibit VLDL secretion, promote insulin resistance and inflammation, and promote cell apoptosis. It thereby increases fat accumulation and worsens NAFLD towards a more serious hepatic state. Zingiber officinale (ginger) extract and omega-3 fatty acids have been reported to ameliorate endoplasmic reticulum stress in a nonalcoholic fatty liver rat model. As stated above, the UPR can also be activated as a compensatory mechanism in disease states. For instance, the UPR is up-regulated in an inherited form of dilated cardiomyopathy due to a mutation in the gene encoding the phospholamban protein.
Further activation proved therapeutic in a human induced pluripotent stem cell model of PLN-mutant dilated cardiomyopathy. Chemical inducers: Brefeldin A is a very common inducer of the unfolded protein response or endoplasmic reticulum stress response (ER stress). Thapsigargin leads to ER Ca2+ depletion due to inhibition of the sarco/endoplasmic reticulum Ca2+-ATPase (SERCA). A23187 upregulates expression of ER stress proteins. 2-Deoxyglucose is another chemical inducer. Dithiothreitol reduces the disulfide bridges of proteins, so that denatured proteins accumulate inside the ER. Fenretinide and bortezomib (Velcade), each acting via a different cellular mechanism, induce ER stress, leading to apoptosis in melanoma cells. Tunicamycin inhibits N-linked glycosylation. Biological inducers: Dengue virus induces PERK-dependent ER stress as part of the virus-induced response in infected cells to favor replication. Influenza virus requires the endoplasmic reticulum protein 57-kD (ERp57) for replication and apoptosis induction in infected cells.
**Skannerz** Skannerz: Skannerz is a series of electronic toys made by Radica Games that use barcode technology to create an interactive battle game. Radica-brand barcodes have the additional feature of being able to act as healing codes in the first 2 iterations of the game. Models: Skannerz. The original Skannerz came in three versions representing the three tribes: Zendra (blue), Pataak (green) and Ujalu (red). If a player scans a barcode containing a rival tribe's monster, a battle is initiated. Items can also be gathered from Universal Product Code barcodes. Another feature is the two-player battle system, in which two units can be linked together to battle. There are 126 monsters to collect and control across the 3 controllers. Additionally, each monster possesses a special 'type' or 'class'; these types are Magic, Tech and Power. The original Skannerz was released in 2000. Models: Skannerz Commander. The Skannerz Commander is the second in the series. There are no "tribes" in this version, nor are there different models. With this model, the scanning player can win the opponent's monster in a battle, but can likewise lose a monster after a defeat. The Skannerz Commander game is incompatible with the original Skannerz models. It was released in 2001. Models: Skannerz Racerz. The Skannerz Racerz is the third in the Skannerz series; players scan barcodes and race cars from three classes: "Off-Road", "Drag", and "Street". There are 120 cars in total, 40 cars per class. The game also has 64 optional parts that can be installed onto any of the player's vehicles to upgrade them. Players start the game with a basic vehicle known as the "Gizmo" and must win a certain number of races to progress in rank, from Rookie to Amateur to Pro. Cars and parts can be obtained by scanning barcodes and either receiving a car from the dealership or beating an opponent in a pink-slip-style race. Parts are obtained the same way as dealership cars, but are received from the Garage. Only one of each car and car part can be obtained. Hindrances can pop up any time the player scans a barcode, such as losing a car to the police, having a car impounded, or a modified car crashing into a tree and losing a part. Races involve beating the opponent and also involve 5 speed-gear changes. Each track has its own obstacles: off-road has bumps that the player has to avoid, drag involves the car swaying from side to side along the track, and street involves the car slowing and sliding around turns. Models: Skannerz Orbz. The fourth Skannerz series did not involve barcode scanning. The Skannerz Orbz are balls that act as arenas for monsters. Not all previous monsters are included, but a selection were put onto disks ("dizks") which plug into the orb. The orb has two halves: the bottom half is the storage compartment for the dizks, and the top half is where the dizks are inserted and battled. The Orbz have a larger focus on strategy: the player must choose the monsters and strategy before the battle starts; once the battle commences, the player has no control over the battle except to cheer the team on. The top half of the orb can be connected to the top half of a separate player's orb. This battle is set up the same way, but at the end of a player-versus-player battle the winner is given special items. Monsters: There are 126 monsters to collect, divided evenly between the first three consoles, plus 12 secret monsters (commonly known as the "Exiles") that any console can get.
The Skannerz Commander console has another 126 monsters and 12 secret monsters that are unobtainable on the other models. Monsters can be obtained by scanning a product barcode, and made to fight other monsters through a battle with another controller, or a battle with an NPC when a barcode belonging to another console is scanned. Each monster has up to three attacks that can be learned as it gains enough HP. Generally, each attack is more powerful than the last, and the player may choose which one to use in a fight.
**Coulomb damping** Coulomb damping: Coulomb damping is a type of constant mechanical damping in which the system's kinetic energy is absorbed via sliding friction (the friction generated by the relative motion of two surfaces that press against each other). Coulomb damping is a common damping mechanism that occurs in machinery. History: Coulomb damping was so named because Charles-Augustin de Coulomb carried out extensive research in mechanics. He published a work on friction in 1781, entitled "Theory of Simple Machines", for an Academy of Sciences contest. Coulomb later gained much fame for his work on electricity and magnetism. Modes of Coulombian friction: Coulomb damping absorbs energy with friction, which converts the kinetic energy into thermal energy, i.e. heat. Coulomb friction considers this under two distinct modes: static and kinetic. Modes of Coulombian friction: Static friction occurs when two objects are not in relative motion, e.g. if both are stationary. The force Fs exerted between the objects does not exceed, in magnitude, the product of the normal force N and the coefficient of static friction μs: |Fs| < μsN. Kinetic friction, on the other hand, occurs when two objects are undergoing relative motion, as they slide against each other. The force Fk exerted between the moving objects is equal in magnitude to the product of the normal force N and the coefficient of kinetic friction μk: |Fk| = μkN. Regardless of the mode, friction always acts to oppose the objects' relative motion. The normal force is taken perpendicularly to the direction of relative motion; under the influence of gravity, and in the common case of an object supported by a horizontal surface, the normal force is just the weight of the object itself. Modes of Coulombian friction: As there is no relative motion under static friction, no work is done, and hence no energy can be dissipated. An oscillating system is (by definition) only dampened via kinetic friction. Illustration: Consider a block of mass m that slides over a rough horizontal surface under the restraint of a spring with a spring constant k. The spring is attached to the block and mounted to an immobile object on the other end, allowing the block to be moved by the spring force F = kx, where x is the horizontal displacement of the block from the position where the spring is unstretched. On a horizontal surface, the normal force is constant and equal to the weight of the block, i.e. Illustration: N = mg. As stated earlier, Fk acts to oppose the motion of the block. Once in motion, the block will oscillate horizontally back and forth around the equilibrium. Newton's second law gives the equation of motion of the block as mx¨ = −kx − sgn(x˙)μkmg. Above, x˙ and x¨ respectively denote the velocity and acceleration of the block. Note that the sign of the kinetic friction term depends on sgn(x˙), the direction the block is travelling in, but not on the speed. Illustration: A real-life example of Coulomb damping occurs in large structures with non-welded joints such as airplane wings. Theory: Coulomb damping dissipates energy constantly because of sliding friction. The magnitude of sliding friction is a constant value, independent of surface area, displacement or position, and velocity. The system undergoing Coulomb damping is periodic or oscillating and restrained by the sliding friction. Essentially, the object in the system is vibrating back and forth around an equilibrium point.
A system acted upon by Coulomb damping is nonlinear because the frictional force always opposes the direction of motion of the system, as stated earlier. Because friction is present, the amplitude of the motion decreases or decays with time. Under the influence of Coulomb damping, the amplitude decays linearly with a slope of ±2μkmgωn/(πk), where ωn is the natural frequency. The natural frequency is the number of oscillations per unit time of the undamped system. Note that the frequency and the period of vibration do not change when the damping is constant, as in the case of Coulomb damping. The period τ is the amount of time between the repetition of phases during vibration. As time progresses, the sliding object slows and the distance it travels during these oscillations becomes smaller, until it reaches zero, the equilibrium point. The position where the object finally stops could be quite different from its initial equilibrium position, because the system is nonlinear: linear systems have only a single equilibrium point.
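The linear amplitude decay described above can be reproduced numerically. The following Python sketch integrates the equation of motion mx¨ = −kx − sgn(x˙)μkmg from the Illustration section and stops the block at a turning point where the spring force can no longer overcome friction; the values of m, k, μk and g are illustrative assumptions, not taken from the text.

```python
# A numerical sketch of the block-on-spring oscillator with Coulomb friction.
# m, k, mu and g are invented example values.
import math

m, k, mu, g = 1.0, 100.0, 0.1, 9.81
dt = 1e-4
x, v = 0.5, 0.0          # released from rest at 0.5 m displacement

t, peaks, prev_v = 0.0, [], 0.0
while t < 10.0:
    if abs(v) < 1e-3 and abs(k * x) <= mu * m * g:
        break            # friction holds the block: motion ends here
    a = (-k * x - math.copysign(mu * m * g, v)) / m
    v += a * dt          # semi-implicit Euler step
    x += v * dt
    if prev_v > 0.0 >= v:
        peaks.append(x)  # record positive-side turning points
    prev_v = v
    t += dt

print(f"block at rest at x = {x:.4f} m after {t:.2f} s")
print("successive peaks:", [f"{p:.3f}" for p in peaks[:4]])
# Same-side peaks shrink by 4*mu*m*g/k (about 0.039 m) per cycle: linear decay.
# The final resting position is generally offset from x = 0, as noted above.
```

Plotting the recorded peaks against time gives the straight decay envelope with slope 2μkmgωn/(πk) stated above, in contrast to the exponential envelope of viscous damping.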
**Launch Pad (card game)** Launch Pad (card game): Launch Pad is a family strategy card game for 2 to 4 players, ages 10 and up. It was designed by Melanie James and published by Stratus Games. In the game, players construct various types of rockets and ready them for launch by advancing them through 3 phases of production (construction, quality control, and launch preparation). Each rocket requires a certain amount of metal and fuel in order to be considered complete. An expert with the necessary skill set is required in order to advance a completed rocket to the next phase. Rockets can also be manned with astronauts and oxygen, be stamped with a quality certificate, and be placed under maximum security in order to be worth additional points or be protected from action cards that could affect them. Launch Pad (card game): The launch pad, consisting of four game cards, serves as a variable game clock to determine the end of the game. The cards constituting the launch pad advance into the center of the table; when all four cards have advanced, forming a completed launch pad, there is one final round of play and the game ends. At the end of the game, players score points based on the rockets they have built. The player with the highest point total wins. Components: 140 game cards, an instruction booklet, and 4 reference cards. Cards are made from a durable 350 GSM card stock with a linen finish and are 63 by 88 millimeters in size. The instruction booklet is printed in full color and is 12 pages in length. The reference cards include a brief summary of game layout, setup, turns, cards and scoring. Game Layout: Each player controls an area containing 3 zones, each representing a different phase of production. The lowest zone (closest to the player) is called the Construction Zone; the middle zone is the Quality Control Zone; the highest zone is the Launch Zone. Each card contains a diagram that represents the zone in which the card is played. Card Types: Cards are divided into 7 distinct categories. Card Types: Rocket. Rockets are the main scoring mechanism of the game. There are 4 types of Rockets, each worth a different point value: Observer - 6 points; Explorer - 8 points; Intrepid - 10 points; Galactic - 12 points. Rockets are played initially in the Construction Zone, where they gather metal and fuel until they are ready for advancement to the Quality Control Zone and the Launch Zone. Card Types: Component. Components consist of Metal and Fuel cards. A certain number of each is required by each Rocket in order to be eligible for advancement to the next zone: Observer - 1 Metal, 1 Fuel; Explorer - 1 Metal, 2 Fuel; Intrepid - 2 Metal, 2 Fuel; Galactic - 3 Metal, 2 Fuel. Card Types: Expert. Experts are required to be in place in a zone prior to Rocket advancement from that zone, and to score full points at the end of the game. There are 4 different Experts: Engineer - played in the Construction Zone. Card Types: Inspector - played in the Quality Control Zone. Mission Controller - played in the Launch Zone. Jack of All Trades - takes the place of any of the others; played in any zone. Card Types: Action. Action cards allow a player to carry out a specific action, as indicated on each card. There are 13 distinct Action cards, which include actions such as sabotaging a Rocket, stealing cards from other players' hands, drawing extra cards, salvaging any card from the discard pile, etc. Action cards are played by discarding them and carrying out the action specified on the card.
Card Types: Specialty. Specialty cards grant a player a specific ability or protection for as long as they are in play. They are played to the right of a player's zones. Card Types: Bonus. Bonus cards offer additional points or protection for a specific Rocket. There are 4 different Bonus cards: Astronaut - played with a Rocket in the Launch Zone; coupled with Oxygen, scores 4 bonus points at the end of the game; without Oxygen, incurs a 4-point penalty. Oxygen - played with a Rocket in the Launch Zone; allows an Astronaut to score points, but worth no points on its own. Quality Certificate - played with a Rocket in the Quality Control Zone and advances with it to the Launch Zone; scores 3 bonus points at the end of the game and protects the Rocket from the "Quality Check" Action card. Maximum Security - played with a Rocket in the Launch Zone; protects it from the "Sabotage", "Abort Mission", and "Vacuum" Action cards. Card Types: Launch Pad. There are 4 individual Launch Pad cards that form a larger depiction of a rocket on a launch pad when they are placed together. Launch Pad cards advance through each of a player's zones and subsequently are placed at the center of the table. When all Launch Pad cards have been placed together, each player takes a final turn and the game ends. Setup: Game setup consists of shuffling the game cards, dealing 6 cards to each player, and shuffling the Launch Pad cards into the bottom half of the deck. Turns: Each turn consists of the following steps: advance any Launch Pad cards in play in the current player's zones; advance completed Rockets (only 1 per zone), with an Expert in place in the zone the Rocket is leaving; draw cards until reaching the hand limit of 6 cards (1 card may be drawn from the discard pile); play as many playable cards as desired; draw again up to the hand limit if all cards have been played; discard as many cards as desired. Scoring: At the end of the game, each player scores points based on the Rockets in his or her individual game zones, as illustrated in the sketch below. Launch Zone: Rockets score positive points as indicated on each Rocket card. If a Rocket has an Astronaut and Oxygen, add 4 bonus points to its value; if a Rocket has an Astronaut but no Oxygen, subtract 4 points from its value. If a Rocket has a Quality Certificate, add 3 points to its value. If no Mission Controller (Expert) is in play, subtract 10 points from the total score. Quality Control Zone: No points are scored for any Rockets or Bonus cards in this zone. Construction Zone: Rockets score negative points; subtract each Rocket's value from the total score. The player with the highest score wins the game. Optional Rules: Several rule variations are included in the Launch Pad instructions (Game Length Variations, Negotiations, and Super Powers), with other official and community-submitted rules collected on the publisher's website.
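As an illustration of the scoring rules above, here is a minimal Python sketch. The point values come directly from the rules; the dictionary-based card layout is an assumption made for the example, not the publisher's notation.

```python
# A minimal sketch of the end-of-game scoring described above.

ROCKET_POINTS = {"Observer": 6, "Explorer": 8, "Intrepid": 10, "Galactic": 12}

def score_player(launch_zone, construction_zone, has_mission_controller):
    """launch_zone: list of dicts such as {"type": "Explorer",
    "astronaut": True, "oxygen": False, "certificate": True};
    construction_zone: list of rocket type names."""
    total = 0
    for rocket in launch_zone:
        total += ROCKET_POINTS[rocket["type"]]
        if rocket.get("astronaut"):
            total += 4 if rocket.get("oxygen") else -4  # Astronaut needs Oxygen
        if rocket.get("certificate"):
            total += 3                                  # Quality Certificate
    if not has_mission_controller:
        total -= 10                                     # no Mission Controller
    for rocket_type in construction_zone:
        total -= ROCKET_POINTS[rocket_type]             # unfinished rockets
    return total

# Example: a launched, certified Explorer and an unfinished Intrepid.
print(score_player([{"type": "Explorer", "certificate": True}],
                   ["Intrepid"], True))                 # 8 + 3 - 10 = 1
```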
**GCSE Science** GCSE Science: In the education system in England and Wales, science at GCSE level is studied through Biology, Chemistry and Physics. Double Award: Combined Science results in two GCSEs. Those with GCSEs in Combined Science can progress to A Levels in all three natural science subjects. Prior to this, from around 1996, Combined Science GCSEs were available as an alternative to three separate sciences for many exam boards. Combined Science consists of either Higher Tier (HT) or Foundation Tier (FT) papers. AQA offers two different specifications, entitled Synergy and Trilogy. Triple Award: Triple Award Science, commonly referred to as Triple Science, results in three separate GCSEs in Biology, Chemistry and Physics, providing the broadest coverage of the main three science subjects. The qualifications are offered by the main awarding bodies: AQA, Edexcel, OCR, CIE and Eduqas. History: In August 2018, Ofqual announced that it had intervened to adjust the GCSE Science grade boundaries for students who had taken the "higher tier" paper in its new double award science exams and performed poorly, due to an excessive number of students in danger of receiving a grade of "U" or "unclassified". Criticisms: In 2020, Teach First published a report stating that only two female scientists, chemist and crystallographer Rosalind Franklin and paleoanthropologist Mary Leakey, were included in the GCSE Science curriculum, versus 40 male scientists who were named. The report argued that the lack of female role models in the science curriculum was perpetuating gender biases in the profession.
**Chart datum** Chart datum: A chart datum is the water level surface serving as origin of depths displayed on a nautical chart. A chart datum is generally derived from some tidal phase, in which case it is also known as a tidal datum. Common chart datums are lowest astronomical tide (LAT) and mean lower low water (MLLW). In non-tidal areas, e.g. the Baltic Sea, mean sea level (MSL) is used. Chart datum: A chart datum is a type of vertical datum and must not be confused with the horizontal datum for the chart. Definitions: The following tidal phases are commonly used in the definition of chart datums. Lowest and highest astronomical tide Lowest astronomical tide (LAT) is defined as the lowest tide level which can be predicted to occur under average meteorological conditions and under any combination of astronomical conditions. Definitions: Many national charting agencies, including the United Kingdom Hydrographic Office and the Australian Hydrographic Service, use the LAT to define chart datums. One advantage of using LAT for chart datums is that all predicted tidal heights must then be positive (or zero), avoiding possible ambiguity and the need to explicitly state sign. Calculation of the LAT only allows for gravitational effects, so lower tides may occur in practice due to meteorological effects, such as high pressure systems. The highest astronomical tide (HAT) can be defined similarly. Definitions: Mean high water Mean high water (MHW) is the average of all the daily tidal high water levels observed over a period of several years. It is not the same as the normal tidal limit. In the United States this period spans 19 years and is referred to as the National Tidal Datum Epoch. In Australia, the definition of the MHW is '...the line of the medium high tide between the highest tide of each lunar month (the springs) and the lowest of each lunar month (the neaps) averaged over the year.' Mean water Mean lower low water Mean lower low water (MLLW) is the average height of the lowest tide recorded at a tide station each day during a 19-year recording period, known as the National Tidal Datum Epoch as used by the United States' National Oceanic and Atmospheric Administration. MLLW is only a mean, so some tidal levels may be negative relative to MLLW; see also Mean low water springs below. The 19-year recording period is the nearest full-year count to the 18.6-year cycle of the lunar node regression, which has an effect on tides. Definitions: Lower low water large tide This is an average of lowest low waters taken over a fixed period of tidal predictions, as opposed to actual observations. This is the datum used for coastal charts published by the Canadian Hydrographic Service, with the average taken from 19 years of tidal predictions. Mean higher high water Similarly, the mean higher high water (MHHW) is the average height of the highest tide recorded at a tide station each day during the recording period. It is used, among other things, as a datum from which to measure the navigational clearance, or air draft, under bridges. Definitions: Mean water springs Spring tides are those when the moon is in direct alignment with the sun (thus new or full) and, in many extratropical places, when its declination is at its 23.5° maximum. In equatorial and tropical seas, such as the Banda Sea, such tides (bulges) occur when there is such an alignment and the declination of the moon is closer to its 0° average, thus more nearly overhead or antipodal.
Definitions: Mean low water springs Mean low water springs (MLWS) is the average of the water levels of each pair of successive low waters during that period of about 24 hours in each semi-lunation (approximately every 14 days) when the tidal range is greatest (spring range). Definitions: Mean high water springs Mean high water springs (MHWS) is the averaged highest level that spring tides reach over many years (often the last 19 years). Within this, to ensure anomalous levels are tempered, at least two successive high waters during the highest-tide 24 hours are taken. Such a local level is generally close to the "high water mark" where debris accumulates on a tidal shore on about two days six months apart (and nearby days) annually. Definitions: The levels are local, as some places are near to or form places of almost no tides in and around each ocean (amphidromic points). Usage: Charts and tables Charted depths and drying heights on nautical charts are given relative to chart datum. Some height values on charts, such as vertical clearances under bridges or overhead wires, may be referenced to a different vertical datum, such as mean high water springs or highest astronomical tide (HAT) (for "HAT" see tidal range). Usage: Tide tables give the height of the tide above a chart datum, making it feasible to calculate the depth of water at a given point and at a given time by adding the charted depth to the height of the tide. One may calculate whether an area that dries is under water by subtracting the drying height from the tide height given in the tide table (a worked sketch follows this article). Usage: Using charts and tables not based on the same geodetic datum can result in incorrect calculation of water depths. Usage: Satellite navigation In recent years national hydrographic agencies have spearheaded developments to establish chart datum with respect to the Geodetic Reference System 1980 (GRS 80) reference ellipsoid, thus enabling direct compatibility with satellite navigation (GNSS) positioning. Examples of this include Vertical Offshore Reference Frames (VORF) for the United Kingdom Hydrographic Office (UKHO) and Bathyelli for the Naval Hydrographic and Oceanographic Service (SHOM).
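A minimal sketch of the tide-table arithmetic described under Usage; the numbers are illustrative, not taken from any real chart or tide table:

```python
# Tide tables give the height of the tide above chart datum, so the depth of
# water over a charted sounding is charted depth + tide height, and a drying
# patch is covered only when the tide height exceeds its drying height.

def water_depth(charted_depth_m, tide_height_m):
    """Actual depth of water over a charted sounding at a given time."""
    return charted_depth_m + tide_height_m

def depth_over_drying_area(drying_height_m, tide_height_m):
    """Depth over a drying area; a negative result means it is exposed."""
    return tide_height_m - drying_height_m

print(water_depth(2.5, 1.8))             # 4.3 m over a 2.5 m sounding
print(depth_over_drying_area(1.2, 1.8))  # 0.6 m covering a 1.2 m drying height
print(depth_over_drying_area(2.0, 1.8))  # -0.2 m: the bank is still dry
```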
**SymbOS** SymbOS: SYmbiosis Multitasking Based Operating System (SymbOS) is a multitasking operating system for Zilog Z80-based 8-bit computer systems. SymbOS: In contrast to early 8-bit operating systems, it is based on a microkernel, which provides preemptive and priority-oriented multitasking and manages random-access memory (RAM) with a size of up to 1024 KB. SymbOS contains a Microsoft Windows-like graphical user interface (GUI), supports hard disks with a capacity of up to 128 GB and can be booted even on an unexpanded Amstrad CPC-6128, a 128K MSX2 and an Amstrad PCW. SymbOS: As of August 30, 2017, it is available for the Amstrad CPC series of computers, all MSX models starting from the MSX2 standard, MSX with V9990 graphics chip, all Amstrad PCW models, CPC-TREX, C-ONE and the Enterprise 64/128 computers. Motivation and rationale: SymbOS was originally started as an experiment to find out to what extent it is possible to implement a multitasking operating system with a windowed GUI on an 8-bit computer from 1985. GEOS contributed to the motivation, but the structure and features of SymbOS are not similar to that system. The release in 2006 proved that such a "mini Windows" system is possible on a then 20-year-old home computer with only quantitative limitations. SymbOS is one of the largest retrocomputing software projects of recent years. One of the goals of the project is to allow these old machines to be used like a modern PC, using hardware extensions. Motivation and rationale: Although only an 8-bit CPU, the Z80 can run a preemptive multitasking operating system. Features such as memory protection, which the Z80 lacks, are not essential in such an OS; AmigaOS, for example, also lacks memory protection. The MP/M OS proved that multitasking on the Z80 CPU was possible, yet it was generally unavailable for home computers. Motivation and rationale: While the MOS Technology 6502 cannot move the stack pointer, the Z80 can freely relocate it to any position in memory, which makes it easier to implement preemptive multitasking. The existence of an alternative register set accelerates context switching between tasks dramatically. The restriction of a Z80 system to a 64 KB address space can be solved with bank switching. In this way, computers like the Amstrad CPC and PCW, MSX, Enterprise or SAM Coupé can access hundreds or thousands of kilobytes of memory. Design: SymbOS includes a microkernel, which performs task management, memory management and inter-process communication. Design: Task management For task management, a combination of preemptive and cooperative multitasking was chosen, which makes different task priorities possible. Preemptive means that tasks are interrupted after a certain amount of time by the operating system, in order to share the CPU time with other tasks. Cooperative means that a task stops using CPU time by itself; it does so when it has finished its current job or is waiting for a certain event. Because of this combination it is possible to assign priorities: tasks with low priority get CPU time only while all higher-priority tasks are idle. Design: Memory and banking management Memory management divides the entire RAM into small 256-byte blocks, which can be assigned dynamically (see the sketch below). Applications always run in a secondary 64 KB RAM bank, where no memory space is occupied by the operating system or the video memory. That makes it possible to reserve up to 63 KB in one piece.
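A minimal sketch, in Python rather than Z80 assembly, of the block-based allocation idea just described; the block and bank sizes match the figures in the text, while the free-map bookkeeping is an illustrative assumption, not SymbOS's actual data structure:

```python
BLOCK = 256                      # allocation granularity in bytes
BANK = 64 * 1024                 # one Z80-addressable 64 KB bank
NBLOCKS = BANK // BLOCK          # 256 blocks per bank

free = [True] * NBLOCKS          # per-block free/used map for a single bank

def alloc(nbytes):
    """Reserve a contiguous run of 256-byte blocks; return a start address or None."""
    need = -(-nbytes // BLOCK)   # ceiling division: blocks required
    run = 0
    for i in range(NBLOCKS):
        run = run + 1 if free[i] else 0
        if run == need:
            start = i - need + 1
            for j in range(start, i + 1):
                free[j] = False
            return start * BLOCK
    return None                  # no contiguous run large enough

def release(addr, nbytes):
    """Return the blocks backing an allocation to the free map."""
    start = addr // BLOCK
    for j in range(start, start + -(-nbytes // BLOCK)):
        free[j] = True

addr = alloc(1000)   # rounds up to four 256-byte blocks (1024 bytes)
release(addr, 1000)
```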
Banking management ensures that the system can administer memory with a size of up to one megabyte, even though the Z80 CPU has only a 16-bit address bus. It makes transparent access to memory and functions placed in other 64 KB banks possible. Interprocess communication Communication between different tasks and the operating system usually does not take place via calls, but is done via messages. This is necessary inside a multitasking environment to avoid organizational problems with the stack, global variables and shared system resources. The SymbOS kernel supports synchronous and asynchronous IPC (a sketch follows at the end of this article). Design: File system management SymbOS supports the CP/M, AMSDOS, and File Allocation Table (FAT) 12/16/32 file systems on all platforms. With the last of these, SymbOS can address mass storage devices with a capacity of up to 128 GB. The ability to administer files with a size of up to 2 GB is also uncommon for an 8-bit system. Because of the FAT support, data exchange with other computers is quite easy, as most 32- and 64-bit operating systems support the three FAT file systems. Interface: The graphical user interface (GUI) of SymbOS works in a fully object-oriented manner. The look and feel mimics that of Microsoft Windows. It contains the well-known task bar with the clock and the "start" menu and can open up to 32 windows that can be moved, resized and scrolled. The whole system is written in optimized assembly language, meaning that the GUI runs as fast as the host machine supports. Interface: Content of a window is defined with "controls", primitive GUI elements such as sliders, check boxes, text lines, buttons or graphics. The background or invisible areas of a window do not need to be saved in a separate bitmap buffer; if an area needs to be restored on the display, its contents are redrawn instead. This makes the SymbOS GUI much more memory-friendly than most other 8-bit GUIs. Applications: There are several standard applications available for SymbOS, which are designed to resemble similar software available on other operating systems. Examples include Notepad, SymCommander (similar to Norton Commander), SymShell (cmd.exe), SymZilla (Mozilla Firefox), SymPlay (QuickTime), SymAmp (Winamp) and Minesweeper. Commands SymShell supports a set of built-in commands. Development and release: SymbOS was originally developed for the Amstrad CPC. Its modular structure, with strict separation of general and hardware components, makes porting to other Z80-based systems comparatively easy. The MSX computers starting with the MSX2 standard have been supported since summer 2006. The Amstrad PCW port has been available since August 2007. Versions for the Enterprise 128, the SAM Coupé and such ZX Spectrum clones as the ATM-turbo 2+ and ZX-Evolution/BaseConf are possible, too, as they fulfill the requirements for SymbOS. By keeping to a basic condition for an operating system, the strict separation of hardware and application software by an intermediate layer, SymbOS applications run platform-independently on each computer and do not need to be adapted for different systems, with the obvious exception of applications that directly access particular hardware.
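A minimal sketch, again in Python rather than Z80 assembly, of the message-based inter-process communication described above: tasks exchange messages through kernel-managed mailboxes instead of calling each other, either asynchronously (send and continue) or synchronously (send and wait for the reply). All names are illustrative assumptions, not the SymbOS API:

```python
from queue import Queue

mailboxes = {}                        # one kernel-managed message queue per task

def register(task_id):
    mailboxes[task_id] = Queue()

def send(dest, msg):                  # asynchronous: returns immediately
    mailboxes[dest].put(msg)

def receive(task_id):                 # blocks until a message arrives
    return mailboxes[task_id].get()

def send_sync(src, dest, msg):        # synchronous: deliver, then wait for the reply
    send(dest, (src, msg))
    return receive(src)

register("app"); register("fs")
send("fs", ("app", "open file"))      # an application asks the filesystem task
sender, request = receive("fs")       # the filesystem task picks the message up
send(sender, "file handle 1")         # ...and replies to the requesting task
print(receive("app"))
```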
**iManager** iManager: iManager is a web-based configuration manager for Unix-based servers. It comes with Novell Open Enterprise Server software, and it can be downloaded for different operating systems (Linux, Windows). It can be used to monitor and configure software and hardware in servers, over the network.
**Nanoracks** Nanoracks: Nanoracks LLC is a private in-space services company which builds space hardware and in-space repurposing tools. The company also facilitates experiments and launches of CubeSats to low Earth orbit. Nanoracks' main office is in Houston, Texas. The business development office is in Washington, D.C., and additional offices are located in Abu Dhabi, United Arab Emirates (UAE) and Turin, Italy.[6][7] Nanoracks provides tools, hardware and services that allow other companies, organizations and governments to conduct research and other projects in space. Some of Nanoracks' customers include the Student Spaceflight Experiments Program (SSEP), the European Space Agency (ESA), the German Space Agency (DLR), NASA, Planet Labs, Space Florida, Virgin Galactic, Adidas, the Aerospace Corporation, the National Reconnaissance Office (NRO), the UAE Space Agency, the Mohammed bin Rashid Space Centre (MBRSC), and the Beijing Institute of Technology. Nanoracks currently helps facilitate science on the International Space Station in multiple ways, and built the Bishop Airlock to launch payloads from the International Space Station. History: Nanoracks was founded in 2009 by Jeffrey Manber and Charles Miller to provide commercial hardware and services for the U.S. National Laboratory on board the International Space Station via a Space Act Agreement with NASA. Nanoracks signed its first contract with NASA in September 2009 and had its first laboratory on the Space Station in April 2010. In August 2012, Nanoracks partnered with Space Florida to host the Space Florida International Space Station (ISS) Research Competition. As part of this program, Nanoracks and DreamUp provide research NanoLab box units to fly payloads to the ISS, with scientific research to be conducted on board the U.S. National Laboratory. In October 2013, Nanoracks became the first company to coordinate the deployment of small satellites from the ISS via the airlock in the Japanese Kibō module. This deployment was done using the Japanese Experiment Module (JEM) Small Satellite Orbital Deployer (J-SSOD). By 2015, Nanoracks had deployed 64 satellites into low Earth orbit, and had 16 satellites on the ISS awaiting deployment, with an order backlog of 99. The company also announced an agreement to fly a Chinese DNA experiment from the Beijing Institute of Technology on the International Space Station. The agreement includes Nanoracks delivering the experiment to the American side of the ISS in a SpaceX Dragon spacecraft, berthing the experiment to Nanoracks' orbiting laboratory facilities, then sending data back to the Chinese researchers. In 2022, Nanoracks became the first company to cut a piece of metal in space. Facilities and labs: Nanoracks Bishop Airlock The Nanoracks Bishop Airlock is a commercially funded airlock module launched to the International Space Station on SpaceX CRS-21 on 6 December 2020. The module was built by Nanoracks, Thales Alenia Space, and Boeing. It is used to deploy CubeSats, small satellites, and other external payloads for NASA, the Center for the Advancement of Science in Space (CASIS), and other commercial and governmental customers. Facilities and labs: Internal ISS Services Nanoracks facilities on the International Space Station (ISS) include the Plate Reader-2, a Molecular Devices SpectraMax M5e modified for space flight and the microgravity environment.
This spectrophotometer analyzes samples by shining light (200–1000 nm) either on or through the top or bottom of each sample in the well of a microplate. The Nanoracks Plate Reader-2 can accommodate cuvettes in special microplate holders as well as 6-, 12-, 24-, 48-, 96-, and 384-well microplates. It can operate in absorbance, fluorescence intensity, or fluorescence polarization modes. Laboratory space on the ISS is provided to Nanoracks by NASA under a contractual lease arrangement. Facilities and labs: External ISS Services Nanoracks deploys small CubeSats into orbit from the ISS through the Nanoracks CubeSat Deployer via the airlock in the Japanese Kibō module, after the satellites are transported to the ISS on a cargo spacecraft. When released, the small satellites are given a push of about 1 m/s (3.3 ft/s) that begins a slow process of separation from the ISS. The Nanoracks CubeSat Deployer (NRCSD) is a self-contained deployment system that mechanically and electrically isolates CubeSats from the ISS, the ISS crew, and cargo resupply vehicles. The design of the NRCSD is compliant with ISS flight safety requirements and is space qualified. The deployer is composed of anodized aluminum plates, access panels, deployer doors, and a base plate assembly. The inside of the NRCSD is designed to minimize or preclude the jamming of CubeSat appendages during deployment. Facilities and labs: External Platform (NREP) The Nanoracks External Platform (NREP), installed in August 2016, is a commercial gateway to, and return from, the extreme environment of space. Following the CubeSat form factor, payloads experience the microgravity, radiation and other harsh elements native to the space environment, observe Earth, test sensors, materials, and electronics, and can be returned to Earth. The Nanoracks Kaber Microsat Deployer is a reusable system that allows the International Space Station to control and command satellite deployments. It can deploy microsatellites of up to 82 kg into space. Microsatellites that are compatible with the Kaber Deployer have additional power, volume, and communication resources, which allows for deployments of higher scope and sophistication. Facilities and labs: External Cygnus Deployer (E-NRCSD) This satellite deployment service enables satellites to be deployed at an altitude higher than the ISS via a commercial resupply vehicle. These satellites are deployed after the completion of the primary cargo delivery mission and can fly at about 500 kilometers above Earth, roughly 100 kilometers above the ISS, which extends their lifespan compared with CubeSats deployed from the station in low Earth orbit. The Cygnus Deployer holds a total volume of 36U and adds approximately two years to the lifespan of these satellites. E-NRCSD missions: The Cygnus CRS OA-6 mission was launched 23 March 2016 at 03:05:52 UTC. Inside the Cygnus was the Saffire scientific payload. Mounted outside of the Cygnus was a CubeSat deployer by Nanoracks. Both of these systems remained inactive during the Cygnus docking at the ISS. After the CRS OA-6 resupply mission was completed, the Cygnus was unberthed from the station and performed scientific experiments. The Saffire's purpose was to study combustion in microgravity, which was done once Cygnus left the ISS. Likewise, between the CRS OA-6's initiation and its reentry into Earth's atmosphere, numerous CubeSats were deployed into orbit for the commercial entities that built and operate them.
Facilities and labs: The Cygnus CRS OA-5 mission was launched 17 October 2016 at 23:45 UTC. On 25 November 2016, during the CRS OA-5 resupply mission, Nanoracks deployed four Spire LEMUR-2 CubeSats from the Cygnus cargo vehicle from a 500-kilometer orbit. The Cygnus CRS OA-7 mission was launched 18 April 2017 at 15:11:26 UTC. On Cygnus' eighth resupply mission, Nanoracks deployed four Spire LEMUR-2 CubeSats at a nearly 500-kilometer orbit. The Cygnus CRS OA-8E mission was launched 12 November 2017, 12:19:51 UTC. The Cygnus CRS OA-9E mission was launched 21 May 2018, 08:44:06 UTC. Mars Demo-1 Mars Demo-1 (OMD-1) is a self-contained hosted payload platform to demonstrate the robotic cutting of second-stage representative tank material on orbit.
**S-procedure** S-procedure: The S-procedure or S-lemma is a mathematical result that gives conditions under which a particular quadratic inequality is a consequence of another quadratic inequality. The S-procedure was developed independently in a number of different contexts and has applications in control theory, linear algebra and mathematical optimization. Statement of the S-procedure: Let $F_1$ and $F_2$ be symmetric matrices, $g_1$ and $g_2$ be vectors and $h_1$ and $h_2$ be real numbers. Assume that there is some $x_0$ such that the strict inequality $x_0^T F_1 x_0 + 2 g_1^T x_0 + h_1 < 0$ holds. Then the implication $$x^T F_1 x + 2 g_1^T x + h_1 \leq 0 \;\Longrightarrow\; x^T F_2 x + 2 g_2^T x + h_2 \leq 0$$ holds if and only if there exists some nonnegative number $\lambda$ such that $$\lambda \begin{bmatrix} F_1 & g_1 \\ g_1^T & h_1 \end{bmatrix} - \begin{bmatrix} F_2 & g_2 \\ g_2^T & h_2 \end{bmatrix}$$ is positive semidefinite.
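A numerical sketch of the matrix test above: for a candidate λ ≥ 0, assemble the block matrix and check positive semidefiniteness via its smallest eigenvalue. The example data (unit balls of radius 1 and √2) are illustrative assumptions:

```python
import numpy as np

def s_procedure_certificate(F1, g1, h1, F2, g2, h2, lam, tol=1e-9):
    """True if lam >= 0 makes lam*[[F1,g1],[g1^T,h1]] - [[F2,g2],[g2^T,h2]] PSD."""
    def block(F, g, h):
        g = np.asarray(g, dtype=float).reshape(-1, 1)
        return np.block([[F, g], [g.T, np.array([[h]])]])
    M = lam * block(F1, g1, h1) - block(F2, g2, h2)
    return lam >= 0 and np.linalg.eigvalsh(M).min() >= -tol

# Example: ||x||^2 <= 1 implies ||x||^2 <= 2; any lam in [1, 2] certifies it.
I, z = np.eye(2), np.zeros(2)
print(s_procedure_certificate(I, z, -1.0, I, z, -2.0, lam=1.5))  # True
print(s_procedure_certificate(I, z, -1.0, I, z, -2.0, lam=0.5))  # False
```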
**Xylopropamine** Xylopropamine: Xylopropamine (Perhedrin, Esanin), also known as 3,4-dimethylamphetamine, is a stimulant drug of the phenethylamine and amphetamine classes which was developed and marketed as an appetite suppressant in the 1950s. Xylopropamine was briefly sold as the sulfate salt, but it was not widely marketed. Other related amphetamine derivatives, such as 2,4-dimethylamphetamine, were also investigated for the same purpose; however, these drugs had negative side effects such as high blood pressure and were not very successful, mainly due to the introduction of alternative drugs like phentermine, which had similar efficacy but fewer side effects. Xylopropamine: Xylopropamine was also reported to have analgesic and anti-inflammatory effects, but its side effect profile resulted in it never being further developed for these applications.
**Modified triadan system** Modified triadan system: The modified Triadan system is a scheme of dental nomenclature that can be used widely across different animal species. It is used worldwide among veterinary surgeons. Each tooth is given a three-digit number. Modified triadan system: The first number indicates the quadrant of the mouth in which the tooth lies: 1 for the upper right, 2 for the upper left, 3 for the lower left, and 4 for the lower right. If a deciduous tooth is being referred to, a different set of quadrant numbers is used: 5 for the upper right, 6 for the upper left, 7 for the lower left, and 8 for the lower right. The second and third numbers refer to the location of the tooth from front to back (or rostral to caudal). This starts at 01 and goes up to 11 for many species, depending on the total number of teeth.
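A minimal sketch of decoding a modified Triadan number, following the quadrant numbering above; the function and example codes are illustrative, and species-specific tooth names are beyond its scope:

```python
QUADRANTS = {1: "upper right", 2: "upper left", 3: "lower left", 4: "lower right"}

def decode_triadan(code):
    """Split a three-digit Triadan code into quadrant, position and dentition."""
    quadrant, position = divmod(code, 100)     # e.g. 208 -> quadrant 2, tooth 08
    deciduous = quadrant >= 5                  # quadrants 5-8 mark deciduous teeth
    side = QUADRANTS[quadrant - 4 if deciduous else quadrant]
    return side, position, deciduous

print(decode_triadan(208))  # ('upper left', 8, False): permanent tooth 08
print(decode_triadan(604))  # ('upper left', 4, True): deciduous tooth 04
```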
**Participatory information service** Participatory information service: A participatory information service consists of an information provider which helps receivers find answers to their questions. It is related to collaborative services, which "ask for the direct and active involvement of all the interested actors, final users included". The development of participatory information services is due to global societal and economic changes of this century: the Internet era, the information age and the learning organization. The availability of resources, such as personal data and general information, together with ICT (information and communication technology) tools, has created a perfect match for sharing knowledge through the web involving user and/or consumer participation. The difference between an information service and a participatory information service is that the former provides information to a user, so the information flow is one-way, whereas the latter is based on the culture of participation and the information flow is two-way. Wikipedia is a good example of a participatory information service. On the one hand, the Wikipedia team manages the platform by providing instructions about writing or modifying articles, approving the contents and publishing them on the website. On the other hand, every user can create a new article and/or edit an existing one in a climate of a high degree of mutual trust and relational qualities.
**MAP3K19** MAP3K19: Mitogen-activated protein kinase kinase kinase 19 is a protein that in humans is encoded by the MAP3K19 gene.
**Thermal mass** Thermal mass: In building design, thermal mass is a property of the mass of a building that enables it to store heat and provide inertia against temperature fluctuations. It is sometimes known as the thermal flywheel effect. The thermal mass of heavy structural elements can be designed to work alongside a construction's lighter thermal resistance components to create energy efficient buildings. Thermal mass: For example, when outside temperatures are fluctuating throughout the day, a large thermal mass within the insulated portion of a house can serve to "flatten out" the daily temperature fluctuations, since the thermal mass will absorb thermal energy when the surroundings are higher in temperature than the mass, and give thermal energy back when the surroundings are cooler, without reaching thermal equilibrium. This is distinct from a material's insulative value, which reduces a building's thermal conductivity, allowing it to be heated or cooled relatively separately from the outside, or even just retain the occupants' thermal energy longer. Thermal mass: Scientifically, thermal mass is equivalent to thermal capacitance or heat capacity, the ability of a body to store thermal energy. It is typically referred to by the symbol $C_{th}$, and its SI unit is J/°C or J/K (which are equivalent). Thermal mass may also be used for bodies of water, machines or machine parts, living things, or any other structure or body in engineering or biology. In those contexts, the term "heat capacity" is typically used instead. Background: The equation relating thermal energy to thermal mass is $Q = C_{th}\,\Delta T$, where Q is the thermal energy transferred, $C_{th}$ is the thermal mass of the body, and $\Delta T$ is the change in temperature. For example, if 250 J of heat energy is added to a copper gear with a thermal mass of 38.46 J/°C, its temperature will rise by 6.50 °C. Background: If the body consists of a homogeneous material with sufficiently known physical properties, the thermal mass is simply the mass of material present times the specific heat capacity of that material. For bodies made of many materials, the sum of heat capacities for their pure components may be used in the calculation, or in some cases (as for a whole animal, for example) the number may simply be measured for the entire body in question, directly. Background: As an extensive property, heat capacity is characteristic of an object; its corresponding intensive property is specific heat capacity, expressed in terms of a measure of the amount of material such as mass or number of moles, which must be multiplied by similar units to give the heat capacity of the entire body of material. Thus the heat capacity can be equivalently calculated as the product of the mass m of the body and the specific heat capacity c for the material, or the product of the number of moles of molecules present n and the molar specific heat capacity $\bar{c}$. For discussion of why the thermal energy storage abilities of pure substances vary, see factors that affect specific heat capacity. Background: For a body of uniform composition, $C_{th}$ can be approximated by $C_{th} = m c_p$, where m is the mass of the body and $c_p$ is the isobaric specific heat capacity of the material averaged over the temperature range in question. For bodies composed of numerous different materials, the thermal masses for the different components can just be added together.
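A minimal sketch of the two formulas above, reproducing the copper-gear example from the text; the copper specific heat in the second call is an assumed round value of about 385 J/(kg·°C):

```python
def thermal_mass(mass_kg, cp_j_per_kg_c):
    """Heat capacity of a homogeneous body: Cth = m * cp, in J/degC."""
    return mass_kg * cp_j_per_kg_c

def temperature_rise(q_joules, cth_j_per_c):
    """Temperature change from added heat: dT = Q / Cth."""
    return q_joules / cth_j_per_c

print(round(temperature_rise(250, 38.46), 2))  # 6.5 degC, as in the gear example
print(thermal_mass(0.1, 385))                  # ~38.5 J/degC for 0.1 kg of copper
```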
Thermal mass in buildings: Thermal mass is effective in improving building comfort in any place that experiences these types of daily temperature fluctuations, both in winter and in summer. When used well and combined with passive solar design, thermal mass can play an important role in major reductions to energy use in active heating and cooling systems. Thermal mass in buildings: The use of materials with thermal mass is most advantageous where there is a big difference in outdoor temperatures from day to night (or where nighttime temperatures are at least 10 degrees cooler than the thermostat set point). The terms heavy-weight and light-weight are often used to describe buildings with different thermal mass strategies, and this affects the choice of numerical factors used in subsequent calculations to describe their thermal response to heating and cooling. Thermal mass in buildings: In building services engineering, the use of dynamic simulation computational modelling software has allowed for the accurate calculation of the environmental performance within buildings with different constructions and for different annual climate data sets. This allows the architect or engineer to explore in detail the relationship between heavy-weight and light-weight constructions, as well as insulation levels, in reducing energy consumption for mechanical heating or cooling systems, or even removing the need for such systems altogether. Thermal mass in buildings: Properties required for good thermal mass Ideal materials for thermal mass are those that have high specific heat capacity and high density. Any solid, liquid, or gas that has mass will have some thermal mass. A common misconception is that only concrete or earth soil has thermal mass; even air has thermal mass (although very little). A table of volumetric heat capacity for building materials is available, but note that its definition of thermal mass is slightly different. Use of thermal mass in different climates The correct use and application of thermal mass is dependent on the prevailing climate in a district. Temperate and cold temperate climates Solar-exposed thermal mass Thermal mass is ideally placed within the building and situated where it can still be exposed to low-angle winter sunlight (via windows) but insulated from heat loss. In summer the same thermal mass should be obscured from higher-angle summer sunlight in order to prevent overheating of the structure. The thermal mass is warmed passively by the sun or additionally by internal heating systems during the day. Thermal energy stored in the mass is then released back into the interior during the night. It is essential that it be used in conjunction with the standard principles of passive solar design. Thermal mass in buildings: Any form of thermal mass can be used. A concrete slab foundation, either left exposed or covered with conductive materials such as tiles, is one easy solution. Another novel method is to place the masonry facade of a timber-framed house on the inside ('reverse brick veneer'). Thermal mass in this situation is best applied over a large area rather than in large volumes or thicknesses; 7.5–10 cm (3″–4″) is often adequate. Thermal mass in buildings: Since the most important source of thermal energy is the Sun, the ratio of glazing to thermal mass is an important factor to consider. Various formulas have been devised to determine this.
As a general rule, additional solar-exposed thermal mass needs to be applied in a ratio from 6:1 to 8:1 for any area of sun-facing (north-facing in the Southern Hemisphere or south-facing in the Northern Hemisphere) glazing above 7% of the total floor area. For example, a 200 m2 house with 20 m2 of sun-facing glazing has 10% of glazing by total floor area; 6 m2 of that glazing will require additional thermal mass. Therefore, using the 6:1 to 8:1 ratio above, an additional 36–48 m2 of solar-exposed thermal mass is required (a worked sketch of this rule appears at the end of this article). The exact requirements vary from climate to climate. Thermal mass in buildings: Thermal mass for limiting summertime overheating Thermal mass is ideally placed within a building where it is shielded from direct solar gain but exposed to the building occupants. It is therefore most commonly associated with solid concrete floor slabs in naturally ventilated or low-energy mechanically ventilated buildings where the concrete soffit is left exposed to the occupied space. Thermal mass in buildings: During the day heat is gained from the sun, the occupants of the building, and any electrical lighting and equipment, causing the air temperatures within the space to increase, but this heat is absorbed by the exposed concrete slab above, thus limiting the temperature rise within the space to acceptable levels for human thermal comfort. In addition, the lower surface temperature of the concrete slab absorbs radiant heat directly from the occupants, further benefiting their thermal comfort. Thermal mass in buildings: By the end of the day the slab has in turn warmed up, and now, as external temperatures decrease, the heat can be released and the slab cooled down, ready for the start of the next day. However, this "regeneration" process is only effective if the building ventilation system is operated at night to carry away the heat from the slab. In naturally ventilated buildings it is normal to provide automated window openings to facilitate this process automatically. Thermal mass in buildings: Hot, arid climates (e.g. desert) This is a classical use of thermal mass. Examples include adobe, rammed earth, or limestone block houses. Its function is highly dependent on marked diurnal temperature variations. The wall predominantly acts to retard heat transfer from the exterior to the interior during the day. The high volumetric heat capacity and thickness prevents thermal energy from reaching the inner surface. When temperatures fall at night, the walls re-radiate the thermal energy back into the night sky. In this application it is important for such walls to be massive to prevent heat transfer into the interior. Thermal mass in buildings: Hot humid climates (e.g. sub-tropical and tropical) The use of thermal mass is the most challenging in this environment, where night temperatures remain elevated. Its use is primarily as a temporary heat sink. However, it needs to be strategically located to prevent overheating. It should be placed in an area that is not directly exposed to solar gain and that also allows adequate ventilation at night to carry away stored energy without increasing internal temperatures any further. If used at all, it should be used in judicious amounts and again not in large thicknesses. Thermal mass in buildings: Materials commonly used for thermal mass Water: water has the highest volumetric heat capacity of all commonly used materials. Typically, it is placed in large containers, acrylic tubes for example, in an area with direct sunlight.
It may also be used to saturate other types of material, such as soil, to increase heat capacity. Concrete, clay bricks and other forms of masonry: the thermal conductivity of concrete depends on its composition and curing technique. Concretes with stones are more thermally conductive than concretes with ash, perlite, fibers, and other insulating aggregates. Concrete's thermal mass properties save 5–8% in annual energy costs compared to softwood lumber. Insulated concrete panels consist of an inner layer of concrete to provide the thermal mass factor. This is insulated from the outside by a conventional foam insulation and then covered again with an outer layer of concrete. The effect is a highly efficient building insulation envelope. Insulating concrete forms are commonly used to provide both thermal mass and insulation to building structures. The concrete mass provides the specific heat capacity required for good thermal inertia. Insulating layers created on the side or interior surfaces of the form provide good thermal resistance. Clay brick, adobe brick or mudbrick: see brick and adobe. Thermal mass in buildings: Earth, mud and sod: dirt's heat capacity depends on its density, moisture content, particle shape, temperature, and composition. Early settlers in Nebraska built houses with thick walls made of dirt and sod because wood, stone, and other building materials were scarce. The extreme thickness of the walls provided some insulation, but mainly served as thermal mass, absorbing thermal energy during the day and releasing it during the night. Nowadays, people sometimes use earth sheltering around their homes for the same effect. In earth sheltering, the thermal mass comes not only from the walls of the building, but from the surrounding earth that is in physical contact with the building. This provides a fairly constant, moderating temperature that reduces heat flow through the adjacent wall. Thermal mass in buildings: Rammed earth: rammed earth provides excellent thermal mass because of its high density and the high specific heat capacity of the soil used in its construction. Natural rock and stone: see stonemasonry. Thermal mass in buildings: Logs are used as a building material to create the exterior, and perhaps also the interior, walls of homes. Log homes differ from some other construction materials listed above because solid wood has both a moderate R-value (insulation) and significant thermal mass. In contrast, water, earth, rocks, and concrete all have low R-values. This thermal mass allows a log home to hold heat better in colder weather, and to better retain its cooler temperature in hotter weather. Thermal mass in buildings: Phase-change materials Seasonal energy storage If enough mass is used it can create a seasonal advantage. That is, it can heat in the winter and cool in the summer. This is sometimes called passive annual heat storage or PAHS. The PAHS system has been successfully used at 7000 ft in Colorado and in a number of homes in Montana. The Earthships of New Mexico utilize passive heating and cooling as well as using recycled tires for foundation walls, yielding a maximum PAHS/STES. It has also been used successfully in the UK at the Hockerton Housing Project.
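Returning to the glazing-ratio rule of thumb given earlier in this article, a minimal sketch of the calculation; the function and parameter names are illustrative assumptions:

```python
def extra_mass_area(floor_m2, glazing_m2, ratios=(6, 8), threshold=0.07):
    """Range of additional solar-exposed thermal mass for glazing above 7% of floor area."""
    excess = max(0.0, glazing_m2 - threshold * floor_m2)  # glazing above the 7% allowance
    return excess * ratios[0], excess * ratios[1]         # the 6:1 to 8:1 rule

# The example from the text: a 200 m2 house with 20 m2 of sun-facing glazing has
# 6 m2 of excess glazing, so 36-48 m2 of additional thermal mass is required.
print(extra_mass_area(200, 20))   # (36.0, 48.0)
```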
**KIRREL3** KIRREL3: Kin of IRRE-like protein 3 (KIRREL3), also known as kin of irregular chiasm-like protein 3 or NEPH2, is a protein that in humans is encoded by the KIRREL3 gene. NEPH2 is a member of the NEPH protein family of transmembrane proteins, which includes NEPH1 (KIRREL) and NEPH3 (KIRREL2). The NEPH proteins can interact with nephrin and CASK. Function: NEPH2 has been implicated in synapse formation. Disruption of KIRREL3 gene function has been associated with abnormal brain function. NEPH1 and NEPH2 are involved in the blood filtration function of the kidney and are located in the slit diaphragm.
**Nilcurve** Nilcurve: In mathematics, a nilcurve is a pointed stable curve over a finite field with an indigenous bundle whose p-curvature is square nilpotent. Nilcurves were introduced by Mochizuki (1996) as a central concept in his theory of p-adic Teichmüller theory. The nilcurves form a stack over the moduli stack of stable genus g curves with r marked points in characteristic p, of degree $p^{3g-3+r}$.
**PD 360** PD 360: PD 360 is an online library of educational professional development video programs broken into segments. The product was developed by the School Improvement Network, and it has since been renamed Edivate. The segments are topic-based, with classroom examples featuring various educational experts. PD 360 is reported to have increased student achievement in schools in North America. Basic Operation: PD 360 is an on-demand professional learning system for teachers and educators, with a series of tools built around a library of video segments. The video segments are streamed to the end user in the Flash Video format (FLV). The videos focus on pedagogical topics such as differentiated instruction, English language learners (ELL), instructional strategies, and classroom management. PD 360 is built in Flex, as a Flash application, so Flash is required for its operation. Basic Operation: Observation 360 is a sister product to PD 360 that allows principals and other instructional leaders to do an observation or walkthrough of a teacher using an iPad or iPod Touch. Observation 360 is linked to PD 360. History: PD 360 version 1 was released in June 2007, following a beta that had been released in November 2006. Version 1 contained basic viewing and searching capability. Version 2 was released in November 2007 and added the capability to customize the content a viewer sees based on the specific needs of a district. Version 3 was released in November 2008 and added reporting capability so that customers could track and see usage. Version 4 was released in the summer of 2009 and added a learning community, with forums and file sharing between users. Version 3.5 was released in November 2009 and added the ability to add colleagues and create membership-based groups, and launched a beta of the achievements program. Awards: 2010 Bronze Telly Award for online video production: High School 2009 Adobe Max Award winner in education: PD 360 2009 Codie awards finalist in the category of Best Professional Development Solution: PD 360 2009 Scholastic Best in Tech Award: PD 360
**Antarctic field camps** Antarctic field camps: Many Antarctic research stations support satellite field camps which are, in general, seasonal camps. The type of field camp can vary – some are permanent structures used during the annual Antarctic summer, whereas others are little more than tents used to support short term activities. Field camps are used for many things, from logistics (Sky Blu) to dedicated scientific research (WAIS Divide Field Camp).
**Heilgenwalder Schichten Formation** Heilgenwalder Schichten Formation: The Heilgenwalder Schichten Formation is a geologic formation in Germany. It preserves fossils dating back to the Carboniferous period.
**Fabio Paternò** Fabio Paternò: Fabio Paternò is Research Director and Head of the Laboratory on Human Interfaces in Information Systems at the Istituto di Scienza e Tecnologie dell'Informazione, Consiglio Nazionale delle Ricerche, in Pisa, Italy. Career: He received his PhD in Computer Science from the University of York (UK). He wrote a book on model-based design and evaluation of interactive applications, and has long worked on user interface modeling languages and on tools for the design, development and evaluation of interactive systems. In the field of task analysis he developed the ConcurTaskTrees notation for specifying task models, which has inspired the W3C document on task models. He then worked on the TERESA and MARIA XML languages for the logical description of multi-device user interfaces. Career: He has also investigated novel solutions for end-user development in various contexts. He coordinated a European Network of Excellence (EUD-net), co-edited (together with Henry Lieberman from MIT and Volker Wulf from the University of Siegen) one of the best-known books on end-user development (widely cited), and carried out various research studies in the area. He has actively worked on the design of various authoring environments and tools, such as Puzzle for intuitively editing interactive applications from a smartphone, a mashup tool for creating new Web applications by composing existing components using the familiar copy-paste interaction across them, and an environment for specifying trigger-action rules able to personalize Internet of Things applications. Career: He has addressed research issues related to multi-device environments by proposing original solutions for migratory and cross-device user interfaces, which allow seamless access through a variety of devices ranging from wearables to large public displays, and dynamic allocation of interactive components across them. He also edited and wrote part of a book on migratory interactive applications for ubiquitous environments. Career: He has been co-chair of the World Wide Web Consortium (W3C) Group on Model-based User Interfaces. He has been the chair of the International Federation for Information Processing's WG 2.7/13.4 group on user interface engineering. He has been a member of the Programme Committee of the main international HCI conferences, including Papers Co-Chair of the ACM CHI 2000 conference, IFIP INTERACT 2003 and IFIP INTERACT 2005, and chair of MobileHCI 2002, EICS 2011, Ambient Intelligence 2012, ACM EICS 2014, Mobile HCI 2016, MUM 2019, and ACM IUI 2020. Awards: In 2009 he was appointed a Distinguished Scientist of the Association for Computing Machinery (ACM). He was awarded the IFIP Silver Core in 2013, the TC13 Pioneer in HCI award in 2014, and, in 2020, was named an IFIP Fellow in recognition of substantial and enduring contributions in the ICT field; he was elected to the ACM SIGCHI Academy in 2021.
**Sensory maps and brain development** Sensory maps and brain development: Sensory maps and brain development is a concept in neuroethology that links the development of the brain over an animal’s lifetime with the fact that there is spatial organization and pattern to an animal’s sensory processing. Sensory maps are the representations of sense organs as organized maps in the brain, and this mapping is the fundamental organization of processing. Sensory maps are not always close to an exact topographic projection of the senses. The fact that the brain is organized into sensory maps has wide implications for processing, such as that lateral inhibition and coding for space are byproducts of mapping. The developmental process of an organism guides sensory map formation; the details are as yet unknown. The development of sensory maps requires learning, long-term potentiation, experience-dependent plasticity, and innate characteristics. There is significant evidence for experience-dependent development and maintenance of sensory maps, and there is growing evidence on the molecular basis, synaptic basis and computational basis of experience-dependent development. Sensory maps: List of known sensory maps: Somatotopic maps: homunculus, rat barrel cortex, star-nosed mole nose. Retinotopic maps: visual field position, orientation, direction, spatial frequency. Tonotopic maps: interaural time difference, frequency tonotopic maps of the cochlea. Computational maps: The computational map is the “key building block in the infrastructure of information processing by the nervous system.” Computation, defined as the transformation in the representation of information, is the essence of brain function. Computational maps are involved in processing sensory information and motor programming, and they contain derived information that is accessible to higher-order processing regions. The first computational map to be proposed was the Jeffress model (1948), which stated that the computation of sound localization was dependent upon timing differences of sensory input (a sketch of this model appears at the end of this article). Since the introduction of the Jeffress model, more general guiding principles for relating brain maps to the properties of the computations they perform have been proposed. One of the proposed models is that computations are distributed across parallel processors like computers; with this model, computer processing is a model for computations performed by the brain. More recently, the “elastic net” model has been proposed after studying how the primary visual cortex overlaps multiple visual maps, such as visual field position, orientation, direction, ocular dominance, and spatial frequency. The elastic net uses parallel algorithms to analyze the visual field and allows for an optimized trade-off between coverage and continuity. Role of plasticity in map development: Maps are highly plastic and can be greatly altered depending on sensory experience. Long-term potentiation is the primary mechanism by which plasticity occurs. Sequential firing induces a pattern of LTP that shifts the coded location, and behaviorally generated modifications of synaptic strengths subsequently affect behavior. Experience is crucial in maintaining maps. Experiments with the rat barrel cortex have shown that changes in the pattern of sensory activity can alter the configuration of cortical receptive fields; if a particular whisker gets a directed stimulus, the cortex will reflect the directed stimulus.
Disruptions in sensory maps reflect actual discontinuities in the receptor sheet, and evoked and spontaneous neural activity instruct variable features of sensory maps. Theory of map formation: Sensory maps are formed largely by experience. Basic wiring of the brain is established in vivo by a variety of molecular guidance cues, and the wiring is then refined by patterns of neural activity based in sensory experience. For synchronization of multiple maps, replay of sensory input in circuits allows neurons to be organized into vertical topographic functional units before horizontal integration. Neurons become specialized: in the big brown bat, delay-tuned neurons encode a target range and act as probability encoders, and this comes from experience. In the owl, auditory units responded to specific locations in space, and units were arranged systematically according to the relative locations of their receptive fields, thereby creating a physiological map of auditory space. The receptive fields of the neurons found in the midbrain auditory nucleus were independent of the nature and intensity of the sound. Molecular basis: Roger Sperry proposed a chemical gradient model, based on eye-rotation experiments, for the neuronal wiring diagram. Retinal neurons and target cells had identification tags in the form of chemical gradients so that the projection of neurons would be orderly. For a topographic map of the visual world, the map first forms during neural development via molecular signals, such as chemospecific matching between molecular gradients. The molecular basis of sensory maps and brain development is a field that is being actively explored. The most recent work has shown that gamma oscillations of neurons synchronize the development of the thalamus and cortex in young rats.
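A minimal sketch of the Jeffress-style computation referenced above: a bank of coincidence detectors, each pairing a different internal delay with the two ears' inputs, responds most strongly where the internal delay matches the interaural time difference (ITD). The signals, delay range and sample rate are illustrative assumptions:

```python
import numpy as np

fs = 100_000                                  # sample rate in Hz
t = np.arange(0, 0.02, 1 / fs)                # 20 ms of a 500 Hz tone
itd = 300e-6                                  # sound reaches the right ear 300 us later
left = np.sin(2 * np.pi * 500 * t)
right = np.sin(2 * np.pi * 500 * (t - itd))

delays = np.arange(-500e-6, 501e-6, 50e-6)    # internal delay line, -500..500 us
responses = []
for d in delays:
    shift = int(round(d * fs))
    delayed_left = np.roll(left, shift)       # delay the left input by d
    responses.append(np.dot(delayed_left, right))  # coincidence (correlation) detector

best = delays[int(np.argmax(responses))]
print(f"estimated ITD: {best * 1e6:.0f} us")  # ~300 us, the detector matching the ITD
```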
**Zero4 Champ RR** Zero4 Champ RR: Zero4 Champ RR (ゼロヨンチャンプ ダブルアール) is a 1994 racing life simulation video game developed and published by Media Rings for the Super Famicom. The game is about an 18-year-old race car driver who must chase his dreams of drag racing after being turned down for university and becoming a rōnin due to his jobless status. The game received a sequel, Zero4 Champ RR-Z. Gameplay: In between races, the player must maintain a part-time job in order to earn money for the next race. There are elements of the role-playing and simulation game genres present, as the player must build up the character through his job as a security guard and through social interactions with the racing community. While working as a security guard, the player can run into random encounters with enemies such as wandering rats. Attacking, using an item, or running away from battle are the only options. Losing all of the player's experience points means getting fired from the security guard job. Gameplay: Vehicle makes included in the game are Toyota, Nissan, Mitsubishi, Mazda, and Honda. Automatic transmission is not available in the game, but players can choose between 4-speed, 5-speed, or 6-speed manual transmission for their chosen Japanese car. All measurements are done using metric units (i.e., kilometres per hour as opposed to miles per hour). Besides the story mode, the player can challenge either computer opponents or the best time on the battery backup to a drag racing showdown with all the rules of drag racing (including flying starts). Players must go to driving school and pass a test based on their proficiency with gear shifting and speed control before beginning their racing careers. Gameplay: The versus mode that allows the single contestant to either play by himself or against a computer opponent also allows multiple human opponents. In order for the three different multiplayer modes to be selected, additional controller(s) must be inserted. The "four drag strip system" is used when three to four competitors are involved, as opposed to the "two drag strip system." All the important drag race rules apply even in three-player and four-player matches (with violating the rules more than twice leading to automatic disqualification and game over).
**Tribo** Tribo: Tribo (Ukrainian: Трібо) is a Ukrainian company, a manufacturer of brake pads, braking systems and friction materials. General information: Production facilities are located in Ukraine, the Czech Republic, Kazakhstan and Belarus. The company manufactures about 300 products (foremost brake pads and friction materials) for railway vehicles, agricultural machinery and the automotive industry. International sales cover more than 20 countries worldwide (2018). The organization's quality management system is certified to the following standards: ISO/IEC 17025, ISO/TS 16949, ISO 9001. History: On July 1, 1979, the first production line of the plant was launched in Bila Tserkva. In 1996 the company was reorganized into PJSC Tribo, later into Tribo Ltd. In 2005, Tribo founded a representative office in Kazakhstan, and in 2009 production of brake pads for freight and passenger cars was established there too. Tribo is the only manufacturer of these products in Kazakhstan. In 2008, a subsidiary company, Tribo Rail UK Ltd, was established in Buxton, UK. TriboRail consists of a UK-based commercial office and distribution facility. In 2010, Tribo expanded its production capacities and built a new production hall in Bila Tserkva to cover the needs of Tribo Rail. In 2013, new testing laboratories were opened at the Tribo plant in Bila Tserkva: "Tribo R&D" for the development and testing of new friction materials and "Eurotest" for testing of brake pads for vehicles. In the same year Tribo opened a new manufacturing facility, "Tribo Tools", for the production of molds, forming dies and tooling. In 2017, Tribo and BelAZ opened a new factory for the production of brake pads and discs in Staryya Darohi, Belarus. In 2018, BelTribo JSC officially started supplying brake pads to the BelAZ plant. In 2018, the company launched the production of steel wire and fiber, and also passed the certification audit in accordance with the international quality standard of the automotive industry, IATF 16949:2016. During 2018–2019, Tribo began exporting products to African countries.
**Cimetropium bromide** Cimetropium bromide: Cimetropium bromide is a belladonna derivative. Evidence does not support its use in infantile colic.
**Serdev suture** Serdev suture: Scarless Serdev suture suspension lifts [1], known in Brazil as fio elastico, use percutaneous skeletal fixation of movable fasciae without incisions. They are used to correct early ptosis and flabbiness in areas of the face and body. The suture suspension techniques are described as lifting, forming volume where necessary, and correcting the position of soft tissue without traditional incisions. Serdev suture: The techniques consist of passing closed sutures, by needle perforations only, to lift movable fasciae and fix them to non-movable skeletal structures [2] in several facial and body areas. In face areas: total ambulatory SMAS lift, temporal and supra-temporal suture SMAS lift, scarless brow suture lift, lateral canthus lifting, mid-face suture lift, cheekbone lift and augmentation, lower SMAS-platysma face and neck lift using skin perforations only or using hidden retro-lobular incisions, chin enhancement with form and position correction by suture, Serdev sutures for nasal tip refinement, nasal tip rotation and nasal alar base narrowing, the scarless Serdev suture method for prominent ears, chin dimples and smile dimples by suture, permanent block of the glabella muscles, etc. Serdev suture: In body areas: scarless breast lift by suture and needle perforations only, scarless buttock lift by suture, abdominal flaccidity tightening by sutures, scarless inner thigh lift.
**Hiking equipment** Hiking equipment: Hiking equipment is the equipment taken on outdoor walking trips. Hiking is usually divided into day-hikes and multiple-day hikes, called backpacking, trekking, and walking tours. Hiking equipment: The equipment selected varies according to the duration, distance, planned activities, and the environment. Additional factors include weight and preparedness for unplanned events. The level of preparedness can relate to remoteness and potential hazards; for example, a short day hike across farmland or trekking in the Himalayas. The length and duration of a walk can influence the amount of weight carried. Hiking equipment: The nature of a hike is determined both by the natural environment and by the applicable government regulations, and hikers plan accordingly when considering equipment. To minimize the impact on the natural environment, many hikers follow the principles of "Leave No Trace". Planning and checklists: According to Tom Brown, the basic plan for survival is in the order of shelter (including clothing), water, fire, and food. Cody Lundin writes about the "Rule of 3s"; this relates to human survival without basics: three minutes without air, three hours without shelter, three days without water, or three weeks without food. Planning and checklists: Hikers may take equipment ranging from a lone stout knife, to an ultralight backpacking kit (10–25 pounds), to the heaviest, most durable gear a hiker can carry. Checklists help to minimize the chance of forgetting something important. Considerations for choice of hiking equipment may include: Length and remoteness of trip Optimal weight and capacity Special medical considerations Weather: temperature range, sun/shade, rain, snow, ice Terrain: trail conditions, cliffs, sand, swamp, river crossings Shelter and clothes Water plan Food Overnight shelter Protection from animals: insect repellent, anaphylactic medication, snakebite first-aid, antivenom, mace, bear spray, bear-resistant food storage container Equipment for special activities While Henry David Thoreau and several other early outdoor authors published lists of items to carry while hiking, it was The Mountaineers of Seattle who developed the “10 Essentials” list while teaching climbing courses in the 1930s. The list, now known as “The Classic List,” first appeared in print in 1974 with the publication of the third edition of Mountaineering: The Freedom of the Hills. The list was developed so that outdoor recreationists could respond to an accident, or spend an unforeseen night in the wild. Over time The Mountaineers have tweaked the list to reflect the availability of modern gear. Now known as the “Ten Essential Systems,” the club recommends that outdoor recreationists carry the following, as needed according to the situation: Navigation (map & compass) Sun protection (sunglasses & sunscreen) Insulation (extra clothing) Illumination (headlamp/flashlight) First-aid supplies Fire (waterproof matches/lighter/candle) Repair kit and tools Nutrition (extra food) Hydration (extra water) Emergency shelter (tent/plastic tube tent/garbage bag) Carrying methods and capacity: A pack's capacity to carry items is determined by: Carrying methods on the body Bag volume Construction strength, design, materials, and construction quality Commonly-used carrying methods include: A wristband, belt loop, a thin neck lanyard, and clothing pockets are among the smaller, lighter methods.
A small belt pouch (60 cu.in., 1 liter) that can attach to a belt A bodypack or tactical vest (100–200 cu.in.) is a load-bearing vest, and may be as simple as a fishing vest. A single-shoulder pack (500–800 cu.in., 8-14L) uses one shoulder strap, such as a haversack, messenger bag, or sling bag. A waistpack can range in size from a belt pouch to a haversack (1-14L); in the larger sizes, shoulder straps may be provided. Waistpacks may be carried over a shoulder. Day packs (1,000–2,000 cu.in., 17-34L) are small to mid-sized backpacks that have two shoulder straps; smaller ones may not include a waist belt. A harness system may include a small backpack, a waistpack, a vest, and several belt pouches. Carrying methods and capacity: Larger cargo backpacks (6,000 cu.in., 100+L) have substantial, well-padded shoulder straps and a waist belt; some of these are designed to carry a couple of hundred pounds. Some hikers divide their backpack into sections associated with specific needs, i.e. kitchen, bedroom, bathroom, etc., or by clothes, shelter, water, fire, and food. Military and law-enforcement personnel use a variety of modular and attachment systems, like duty belts, tactical vests, All-purpose Lightweight Individual Carrying Equipment, MOLLE, Improved Load Bearing Equipment, FILBE, and PLCE. Military surplus outlets are another source of backpacking equipment. Carrying methods and capacity: Construction quality may be judged by design, manufacturer reputation, advertised purpose, and field testing. Customer reviews are often posted online. Heavy pack fabrics are made from 800–1000 denier nylon material. Carrying methods and capacity: A large, heavy pack of 100 liters (6,100 cu in) can weigh 100 pounds (45 kg) when loaded, and 1 liter (0.26 U.S. gal) of water weighs 1 kilogram (2.2 lb). The best-made packs may carry up to twice their weight in water; less well-made packs may only carry half their weight in water. The British army bergen backpack, which has a capacity of 120 liters (7,300 cu in) and can carry up to 90 kilograms (200 lb), is made from 1000 denier nylon. Backpacks carrying more than 30 pounds (14 kg) usually have waist-belts to help with posture by transferring the weight to the hips. Some experts recommend keeping the equipment's total weight to less than 25% of the hiker's weight (a worked sketch of this guideline appears before the example checklists below). Apparel: Apparel, including clothing, jackets, shoes, hats, etc., provides insulation from heat, cold, water or fire. It shades the body and protects it from injury from thorns, insect bites, blisters and UV. Apparel: Basic outdoor clothing materials are goose down, wool, polyester, and polyolefin, which provide similar degrees of insulation when dry. Wool and polyesters perform reasonably well for most weather conditions and provide some insulation while wet. Cotton and linen wick moisture, which is good for hot, humid weather. Cotton, linen and down lose insulation when wet unless they are treated to be water-resistant. Natural fabrics, such as cotton, linen and wool, have higher burn temperatures, and they char instead of melting when exposed to flame. When a fabric melts onto skin it is difficult to remove, unlike a material that chars. Nomex is used for fire-resistant clothing. Wool is a good all-around fabric. Cotton and linen are best for hot weather and worst for cold, wet weather. Synthetics can be about the same as wool in the winter; many of them are fire hazards. Fabrics can be treated to help reduce their disadvantages. Apparel: Down is the lightest thermal-insulating material and compresses the most.
Synthetics are next best. Wool is heavier than down and synthetics, and does not compress well. Stuff sacks and compression sacks are used for carrying insulated clothing and sleeping bags. Layered clothing allows for fine-tuning of body temperature. The inner, base layer should wick away moisture. The mid-layer provides the appropriate insulation, and the outer-shell layer provides wind and rain protection. Apparel: For long trips, having enough clothes to change into is helpful while washing or drying others. An extra pair of socks can be used as mittens. Shorts for swimming and fording streams are also useful. Wet clothes do not insulate as well and can freeze while a hiker is wearing them. If a hiker falls into ice water, an immediate dry change of clothes from a dry bag can be lifesaving. Layered clothing helps regulate body temperature in varying weather conditions. Apparel: Gloves provide protection from blisters, abrasions, cold and hot objects, and insects. A general-purpose combination is a pair of thin glove liners (wool may be preferred around campfires) worn with a pair of leather gloves. Glove liners often provide enough dexterity while not fully exposing the hands to freezing conditions. Shoes with traction reduce the chance of a slip causing injury or death. Shoes that support the ankle may also prevent injury. Well-constructed, breathable, waterproof hiking boots are general-purpose hiking shoes. Mountaineering boots provide more specialized protection. Trainers, sandals, or moccasins are useful for easy walks and may be taken on extended hikes as backup, and to wear when fording streams and in the evening around camp. Waterproof gaiters are used in cold or wet conditions to protect the lower pants and upper part of the shoes, and reduce the amount of water, snow, and debris entering boots and soaking into other fabrics. Brush chaps or pants help in thick brush or thorns, and snake chaps or gaiters help protect the legs from snake bites. Apparel: Hot-wet-weather clothing A long-sleeved shirt and long pants provide sun and insect protection, and help to reduce abrasions when plowing through brush and when slipping and falling on rocks. Sun hat Bug jackets and head-nets provide insect protection and are especially useful if the insect repellent is either not effective or runs out. These are a couple of items that hikers may use to minimize the use of insect repellent on their skin. Apparel: Rain clothing made from waterproof or water-resistant fabrics, and preferably breathable, like Gore-Tex: Raincoat Rain pants and/or rain skirt (useful in hotter climates) Rain poncho, uses: tarpaulin, ground cloth, backpack cover, hammock, stretcher Plastic bags made into a poncho and a rain skirt Gloves and socks: latex (petroleum degrades it) or nitrile gloves (medical grade is the top grade) Plastic bags put on over socks then inserted into shoes Jungle footwear: vented boots to drain water, the best wicking socks possible, sandals with good walking straps or jungle moccasins Snow-ice-cold clothing High-altitude hikers encounter freezing conditions, even in summer, and there are also permanent glaciers and snowfields. Apparel: Parkas, insulated coats extending below the waist, are often hooded. Snow pants: insulated, water- and wind-resistant Long underwear Balaclavas are versatile, because they can protect the head, neck and face from the cold. An insulated face mask provides a solid wind barrier for extreme cold beyond what a balaclava can handle.
Scarves are equally versatile and may be combined with knit hats for the same effect as a balaclava. Gloves: insulated, breathable, and water-resistant. Mittens for the more extreme cold temperatures, though they offer less dexterity. Glove liners used with mittens provide more dexterity without fully exposing the hands to the elements. Snow boots, mukluks, bunny boots Shelter: Overnight shelter An overnight shelter may be as simple as a wool blanket and tarp, or a complete sleep-system inside a double-walled, four-season tent. Sleeping layers may be layered the same way as clothing layers: inner, mid, and outer shell. Bedding options range from a pillow made from clothes to a sleep system comprising sleeping pad, sleeping bag, bivouac shelter, bag liner, and compression sack. Shelter structures can be constructed from a tarpaulin, ground sheet, rope, poles, or trees, with a mosquito net. A rain poncho may be used as a ground sheet, used as an overhead tarp, or rigged as a hammock. Tent hammocks come with a bug net and overhead tarp. A cave, bivouac shelter, or debris shelter can also be used. Jungle shelters are used in jungles and tropical rainforest, where rainfall, mud, invertebrates, and snakes are common. A Venezuelan or jungle hammock is well ventilated, with a bug net and a covering tarpaulin. A platform can be built off the ground or tied into a tree. Trekking poles can be used to construct a shelter; they can serve as poles for a tarpaulin. Some tents are designed to use trekking poles in place of carrying additional poles, a technique common in ultralight backpacking. Shelter: Continuous clothing-sleeping layers The line can blur or shift between clothing, bedding, and structural shelter. A rain poncho and its thermal liner (or a regular poncho) is an example of equipment that can be clothing, bedding, and structural shelter. Ultralight backpackers use typical cold-weather clothes to extend the temperature ranges of their sleeping bags. This reasoning can extend to packing a winter coat and snow pants with a bivy bag before adding a two-pound sleeping bag. Adding an insulated pair of socks or down booties would complete the insulation layer. Shelter: Given an unexpected turn toward cold weather, bedding can be converted to mobile insulation, but if the pack already contains those items, then time and energy are saved. Basic equipment and abilities: The most basic hiking equipment is a stout knife, a pot, cordage, a few ways to make a fire, and equipment to carry water and other gear. Basic equipment and abilities: Bandana, uses: a hat, dust mask, face scarf, water filter, first-aid, signal, etc.; larger versions include a shawl or sarong Cutting, chopping, and sawing: knife, multi-tool, tomahawk, hatchet, axe, bucksaw, snow knife or snow saw Container (see below) Cordage (see below) Digging: sharp stick, stout knife, trowel, ice axe, entrenching tool (folding shovel), compact shovel, snow shovel Fire (see below) Light: flashlight (UK torch) or two, preferably hands-free (headband or headlamp), spare batteries and bulb. Basic equipment and abilities: Candle made from wax or tallow, or an oil lamp Fire and a wood torch Medical: first-aid kit, medicines, medicinal plants, cloth, cordage, superglue, nitrile gloves. Avoiding the need for medical treatment is preferable when possible, by learning about nature, water treatment, food poisoning, poisonous plants and animals, and survival skills to avoid things like frostbite.
Basic equipment and abilities: Sun protection: Clothing: long-sleeved shirt and pants, hat with a full brim or used with a bandana, thin gloves Sunglasses: year-round protection from blowing sand/snow, sharp objects, glare, and snow blindness. A band of cloth (bandana) or bark can be used to fashion a pair of emergency sunshades by cutting narrow slits in it. They are critical at high altitude. Basic equipment and abilities: Sunscreen protects from some rays Lip balm Information: Having information includes being aware of the surroundings and events that may be relevant to the hiker. This starts with being able to navigate. Another part is the weather: being able to read it, gathering the latest short- and long-range forecasts before a hike, and possibly carrying a weather radio for updates. Equipment for seeing farther (binoculars) and for recording what is seen may also fall in this area. Basic equipment and abilities: Navigate by reference, terrain, global positioning system (GPS), and by map and compass. Swimming goes with the first Rule of 3: air. If a hiker is swept off his or her feet into deep water, or falls into a lake, then swimming moves to the top of the list. Trekking poles or hiking sticks are important for stability and balance. The key features of trekking poles are weight, adjustability, shock absorption, and the locking mechanism. Basic equipment and abilities: Water kit Water needs to be drinkable. Hikers usually carry some, but do not carry all that they need, because water weighs one kilogram (2.2 lb) per liter and hikers can consume 2-4+ liters per day (4–9 lb); a back-of-envelope planning sketch follows below. Additional water usually can be located, collected, filtered, and purified. All water in the wild is potentially unclean. The details of locating water are beyond the scope of this article. The basics are using a map, knowing how water flows through and collects in certain geographical formations (natural cisterns), and identifying which plants indicate shallow-underground water and contain easily accessed water. Heading downhill to streams, and looking for rich, green vegetation that may indicate a spring, are ways to start. Following bees and tracking animals to cisterns, knowing where to dig in apparently dry stream beds, and possibly waiting for night when vegetation releases water, are slightly more advanced techniques. Water can be collected in a clean container. Clear plastic bags can be used to make vegetation and solar stills. Dehydrated, chemical-free sponges can be used to wipe dew from vegetation, tied to ankles before one walks through damp vegetation in the morning, or used to soak up water from wet rocks or sand. A flexible drinking straw can access water in a rock crack, or drink from a water still without opening it. Tarpaulins can also be used to collect rain water or dew. Basic equipment and abilities: To remove larger impurities, water can be filtered through grass, sand, charcoal or cloth, such as a bandana or pantyhose. Pantyhose can also be used as an emergency fishing net. Filtering water of larger impurities is a useful way to extend the life of commercial filters, preventing them from clogging quickly. Basic equipment and abilities: Water must be purified of harmful living organisms and chemicals. Some commercial filters can remove most organisms and some harmful chemicals, but they are not 100% effective. Distillation both purifies water and removes some harmful chemicals, but chemicals with a boiling point at or below that of water are not eliminated by distilling.
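As promised above, here is the back-of-envelope water-planning arithmetic as a short sketch. It is only illustrative: the function name, the 3-liter-per-day default, and the assumption that half the water can be collected and purified along the route are assumptions for the example, not standard figures.

```python
# Illustrative water-planning arithmetic; the 3 L/day default and the
# 50% resupply assumption are examples, not recommendations.
def water_to_carry(days, liters_per_day=3.0, resupply_fraction=0.5):
    """Estimate the water weight to carry, assuming some fraction can be
    collected and purified along the route."""
    total_liters = days * liters_per_day
    carried_liters = total_liters * (1.0 - resupply_fraction)
    kg = carried_liters * 1.0   # 1 liter of water weighs ~1 kg
    lb = carried_liters * 2.2   # ~2.2 lb per liter
    return carried_liters, kg, lb

liters, kg, lb = water_to_carry(days=2)
print(f"Carry about {liters:.1f} L = {kg:.1f} kg ({lb:.1f} lb)")
```

For a two-day hike under these assumptions, a hiker would carry about 3 liters (roughly 6.6 lb) and plan to collect and purify the rest en route.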
Iodine or chlorine dioxide solutions or tablets can be used to purify water. Water can be boiled in a fire-resistant pot or water bottle. Water can even be boiled in some flammable materials, like bark, because the water absorbs the heat. Pasteurization takes place at temperatures lower than the boiling point, but knowing the temperature of the water and calculating the duration of treatment can be difficult. This technique is useful when only non-durable containers are available. Sunlight can be used with a clear container. Filters made from heat-treated diatomaceous earth can also be used. Basic equipment and abilities: Transporting water A wide-mouth, metal water bottle or a metal pot or cup can also be used to boil and transport water, and can be used fairly easily for cooking. A lid for the pot helps water to boil sooner, helps with cooking by requiring less fuel, and reduces water loss. Other containers for transporting water include appropriate plastic water bottles in materials like Nalgene. There are hard plastic bottles and soft, collapsible bottles. A hydration pack tube freezes easily. A non-lubricated condom can hold up to two liters, but is very vulnerable to puncture. Placing a stick in the knot will allow it to be re-used. Breast-milk bags are plastic bags with a double Ziploc-style seal, so they are easier to reseal than a condom and they do not puncture as easily. They are transparent, allowing solar purification, and can be used as a magnifying lens to start a fire. Containers that may freeze with water in them should allow for about 10% expansion; filling them to 90% leaves room for the ice. Oral rehydration therapy packets can be added to water to help replace electrolytes. Basic equipment and abilities: Fire kit Fire needs ignition, oxygen, and fuel, and the ability to be extinguished. Ignition can come from a spark, a chemical reaction, electricity, or concentrated solar energy. The more oxygen involved, the easier the fire starts and the hotter it burns. Organic material must either be dry or the fire must be hot enough to dry it and burn it. Frayed organic material is more combustible as a tinder. Grain dust and granulated sugar can ignite when oxygenated over a flame. Basic equipment and abilities: Sources of ignition include flint, carbon steel, firesteel and a sharp edge, matches, butane, Zippo, and peanut lighters, and magnifying glasses. Basic equipment and abilities: Fuels include natural substances like dry wood, peat and coal. Pitch, petroleum jelly, charred cotton, shaved rubber, and frayed synthetic cloth can be used as kindling. Candles provide illumination and can help start a fire. Alcohol: DIY and commercial alcohol stoves are made and carried by hikers. Oils, including petroleum, vegetable, and tallow, can help start and feed a fire. Propane bottles are made for backpacking. Charcoal or briquettes can also be packed in for a fire. Basic equipment and abilities: Sure fire is a way to start a fire in bad conditions or when a hiker has no man-made equipment, such as when the fuel is wet or the lighter has run out of fuel. Some hikers will carry tinder in a few forms, such as a few cotton balls soaked in pure petroleum jelly, or fatwood (pitch). Alcohol wipes and alcohol hand-sanitizers are other options. Vegetable oils, and some fruit oils, like olive oil and coconut oil, and oily foods like corn chips or nuts, are edible and can be used to help start a fire because of the oil in them.
"Bad" conditions also includes high altitude because of less oxygen, high winds blowing out a fire, high humidity that soaks either the fuel source or the igniter.To extinguish a campfire, see extinguishing a campfire. Knowing ways to survive a wildfire may also be helpful. Basic equipment and abilities: Cordage Cordage provides the abilities to bind, build, repair, and prevent damage. It comes in many sizes and materials, and can be used for building shelters and traps, flossing teeth, fishing, repairing and making clothes, replacing shoelaces, gluing or taping things together. Many cordages are made from natural materials. Some types of cordage are: Parachute cord is flexible; its inner threads can be easily pulled out to make longer cordage, or used as threads for sewing or fishing. Basic equipment and abilities: Sewing and suturing thread, dental floss, fishing line, bank line, string, twine, clothes line. Wire, such as tripwire or snare wire, has many uses. Basic equipment and abilities: Lanyards, straps, belts, bungee cords Tape: medical tape, duct tape, gaffer tape Climbing rope for shelters, cliffs, and scrambling Superglue Containers There are a variety of containers for organizing and keeping equipment dry: Clear, plastic Ziploc freezer bags in quart and gallon sizes can be used for emergency water purification and transportation. When filled with equipment and clothes, they become inflated and may help with emergency flotation. Basic equipment and abilities: Dry bags are heavier, more durable, and provide the same benefits. A dry-bag can be used as an emergency container for boiling water using the hot-rock method. Stuff sacks and compression sacks help reduce the volume of clothes and sleeping bags. Hard-sided, plastic containers that seal using an O-ring may be used to carry critical or expensive equipment, such as electronics, and for the kit that holds the main pocket items. Food: Military ready meals provide a wide range of food options for hikers; they tend to be higher in fat and salt than is appropriate for sedentary people. The meals are not dehydrated, and are not intended for more than a few days of hiking. Most of them are not designed to minimize the space they take in a pack. Food: In addition to a food's expiration date, the main considerations for hiking food are water content, caloric density (more energy per pound for a given space), and nutritional density (more nutrition per pound for a given space). Water weighs 1 kilogram per litre (8.3 lb/US gal), so a 4 litres (1.1 US gal) food container can weigh up to 4 kilograms (8.8 lb) less when it contains dehydrated food. Dehydrating foods can reduce weight and may reduce nutrition while increasing the need for water to reconstitute the food. More weight also expends more energy, so packing lighter foods reduces the need for more calories. Calories equate to energy. Nutrition becomes more important as the number of hiking days increases; for example, MREs are not intended to be used beyond ten days. Multi-vitamins may help offset some nutrition deficiencies. Food: The three macronutrients are fats (lipids), carbohydrates (sugars and starches), and protein. Fats are calorie dense and nutritionally important, nuts are a good example. Carbohydrates (starches and sugars) that release energy slowly (as measure by glycemic index and glycemic load or the insulin index) give sustained energy, such as legumes and whole grains. 
Some sources of protein are meats, dairy products, fish, eggs, whole grains, pulses, legumes, and nuts. These are the reasons that "trail" mix usually has dried fruit and a variety of nuts. Nuts and dried fruit can last a long time, judging by their expiration dates. The USDA's page on expiration dates is available online. Not all food needs to be dehydrated or dried. When a hiker plans to carry a couple of liters of water anyway, that 4.4 pounds of water weight can instead be carried as fresh fruits and vegetables for the first day. The same is true for other foods, based on their water weight. Depending on which ones are chosen, this can eliminate or reduce the need for cooking and the reconstitution time. One of the first meals on a hike could be a Greek salad with a fruit cocktail, followed later with refried beans and tortillas. Nut-butter and honey sandwiches on flatbread can last longer depending on the preservatives in the bread and nut-butter. The same is true for canned goods: most of the weight is in the water, and selecting a canned food follows the same rule of choosing calorie- and nutrient-dense options. Using this approach can put a hiker a day or two down the trail with little difference from the foods normally eaten. Food: Taking foods that do not require cooking provides for higher mobility (not stopping to cook), and allows for the contingencies of not having a fire, the cook stove breaking, or running out of fuel. In general, the foods in a grocery store that are not in the refrigerated-freezer sections are available for the trail. Food: No-bake home-made "energy" protein bars may contain oatmeal, ground flaxseed, arrowroot powder (medicinal uses), peanut butter, powdered nuts, chopped nuts, coconut oil (multi-use), coconut flakes, dried fruit, cinnamon (medicinal), cooked beans, and natural sweeteners, like honey; they may also be baked. Baked versions may contain natural sweeteners like bananas, applesauce, grated carrots and zucchini. Either way, they and the no-bake ingredients may be used for the trail. Flavor enhancers: salt, spices, salt substitute, powdered peppers, dried herbs, powdered bouillon or cubes, and hot sauce. If food supplies run out, a field guide to edible, medicinal, and poisonous plants may be useful, or a hiker could study them ahead of time. As the movie Into the Wild brought out, some poisonous plants look like edible plants; the protagonist had a field guide with him but did not notice the details well enough. Refrigeration: Water and food can be cooled in snow. Evaporation causes cooling and is used in the pot-in-pot refrigerator. Placing green grass or a sponge into a hat and soaking it with water is another technique. Bottled water can be cooled in a stream. Refrigeration: Cooking Ultralight backpackers rely only on food that does not need cooking, and reconstitute dehydrated, pre-cooked food without cooking it. A hot drink or meal may help someone with a lower body temperature or help boost morale. In an emergency, most locations would supply material for a fire, as long as there is a container that can withstand the heat. Some options and tradeoffs for choosing equipment follow. Refrigeration: Cooking options Cooking options may range from a candle to a bonfire, and may include a solar oven, a Fresnel lens, or more typical tools and other options: Common utensils: knife, fork, spoon, and spork. A butter knife may not be worth its weight considering that other knives may be available; at least it may be sharpened. Utensils may be carved from wood.
A fork spears food, but so can a sharp stick or a knife. Sporks trade off the volume of the spoon against the ability to spear food. A mid-sized, sturdy metal spoon may be used for cooking, eating, and digging. Even when not cooking, utensils can be useful as tools. Refrigeration: A mess kit or cookset is a nested set, usually containing a pot with a lid (sometimes the lid doubles as a frying pan or plate), a bowl, and possibly a cup. Towel, bandana, or cotton T-shirt Biodegradable soap, or natural cleansers like baking soda, vinegar, pure lemon crystals Personal hygiene: Equipment not already in the kitchen. Dental hygiene: toothbrush (the handle may be sharpened into a marlinspike for rope work), etc. Feminine hygiene that doubles as first-aid and tinder: tampons, pads Toilet paper (tinder), wet wipes (sure fire) Tweezers, in a kit, in a multi-tool, on a keyring Electronics: Handheld, waterproof electronics (or ones stored in waterproof bags) with spare batteries for critical gear. Some devices come with different power options: solar, hand-crank, and/or USB. There are also portable solar-charging systems. Depending on electronics to work in life-and-death situations means that the hiker's life is at risk when the equipment fails. AM-news-weather radio Camera, drone with extra film/memory card Cell phone Personal-locator beacon or other emergency locator beacon, especially important in possible avalanche areas Emergency-channel scanner Flashlight; a red filter saves night vision, but reduces sight distance and signalling; carry a spare bulb. Electronics: Global Positioning System (GPS), a lightweight yet rugged and waterproof model with a long battery life, memory for topographical maps, base map ability (so a hiker can drive to the trail), plus the ability to store notes. These are not to be used as a primary navigation tool (as some of their instructions read), but when a hiker can only see a few feet, a GPS can help. If conditions are that bad, then recreational hikers may use it to head toward shelter, versus using it to get into worse conditions, farther from help, and risk having it fail. Electronics: Laser pointer, for signaling, though it can cause eye damage Strobe light VHF radio: emergency airband-aircraft communication, amateur/ham radio, FM radio (news), marine-band radio for talking to ships Walkie-talkie or citizens band radio UV water purifier: purifies water using UV light, may double as a flashlight Additional equipment: Binoculars, monocular Deep snow: trekking poles with baskets or ski poles, snowshoes, cross-country skis, snow shovel, snow saw Ice: traction cleats with anti-slip soles, crampons, caulk (cork) boots Jungle: machete, hammock, extra tinder and insect repellent Notepad and writing implements for leaving notes, making notes, drawing, journaling Rain-proof cover for backpack Sewing kit: scissors can be in the multi-tool, a place to store the threaded needles, dental floss and fishing line may double as thread, Kevlar thread, safety pins for repair and fishing hooks, replacements for critical buttons or fasteners. Additional equipment: Umbrella: useful for hiking in the rain or sunshine; it may be used to help build a small structure Walking stability and uphill effort: a walking stick or two, trekking poles, ski poles Waterproofing supplies Water bottle parka, to either delay freezing or, when wet, provide cooling Wild food when legal or appropriate: field guide to plants, trapping-hunting kit: traps, scent lures, hunting weapon, slingshot.
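Checklists such as those in the next section lend themselves to simple structured data, and the pack-weight guideline mentioned earlier (under roughly 25% of body weight) is a one-line calculation. The sketch below is purely illustrative: the data structure, the function names, and the chosen items are assumptions for demonstration, with the items drawn from the Ten Essential Systems listed above.

```python
# Illustrative only: a checklist as structured data, plus the ~25% rule.
ten_essential_systems = {
    "navigation": ["map", "compass"],
    "sun protection": ["sunglasses", "sunscreen"],
    "insulation": ["extra clothing"],
    "illumination": ["headlamp"],
    "first aid": ["first-aid kit"],
    "fire": ["waterproof matches"],
    "repair": ["repair kit", "knife"],
    "nutrition": ["extra food"],
    "hydration": ["extra water"],
    "emergency shelter": ["tube tent"],
}

def missing_systems(packed):
    """Return the systems for which nothing has been packed yet."""
    return [system for system, items in ten_essential_systems.items()
            if not any(item in packed for item in items)]

def max_pack_weight_kg(body_weight_kg, fraction=0.25):
    """The guideline from the text: keep total pack weight under ~25% of body weight."""
    return body_weight_kg * fraction

print(missing_systems({"map", "compass", "headlamp", "extra food"}))
print(f"Max pack for a 70 kg hiker: ~{max_pack_weight_kg(70):.1f} kg")
```

A checklist kept this way can be compared against the packed items before each trip, which automates the gap-checking that paper checklists do by hand.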
Example checklists: Checklists may be compiled from multiple sources, then tailored to fit a hiker's preferences and adjusted for specific hikes. Example checklists: Wrist, optional: watch, parachute cord, fishing line, compass, altimeter, mini-versions of survival items Neck-lanyard, optional: neck knife, mini-flashlight, firesteel, lighter Keyring kit: pocket compass, whistle, P-38 can opener (backup blade), optional: keyring knife or multi-tool, mini-flashlight, small firesteel Pockets: keyring kit, lighters and firesteel, folding knife with sharpener or multi-tool with a metal file, bandana, map, cordage, optional electronics Cargo-pocket kits or belt-pouch kits in waterproof bags: pocket items, fire kit, two large clear-plastic bags: Water: water purification, non-lubed condoms, large oven bag Cordage: parachute cord, thin-wire spool, large-threaded sewing needles, dental floss, duct tape Navigation & signalling: fire, second compass, signal mirror (heliograph), small flashlight with headband or headlamp with spare batteries Other: lip balm, nitrile gloves, earplugs (can be used as fishing bobbers), mini-first-aid kit, superglue, toilet paper Food: compact high-energy food, healthy sweetener, salts and baking soda (rehydration etc.), mini-emergency-fishing kit Optional: small containers of sunscreen and insect repellent, binocular/monocular, electronics. Example checklists: Belt: belt-pouch kits, optional: larger cutting tools, water container, sunglass case with glasses, earplugs, etc.; electronics. A belt-knife sheath may include a sharpener, a firesteel, etc. Either the belt items are worn, or they are included in the waistpack. Example checklists: Waistpack (or haversack) in waterproof containers: previous kits, large clear-plastic bags, wide-mouth metal water bottle, space blanket or bag, bandanas, hats, gloves, scarf, socks, lightweight wind-rain layer, thin long base layer, swim-hiking shorts, high-energy ready-to-eat food, emergency trapping kit, optional electronics Small-to-mid-sized backpack: previous kits, larger cutting-chopping-sawing tools, more water containers (most collapsible for flexibility), mid-weight clothing layer, bivy bag, cooking pot with food kit, personal-hygiene kit, optional: hydration bag, cold-weather coat and pants. The lightweight rain layer may be replaced with a heavier outer layer. Example checklists: Mid-to-large backpack: previous kits, sleep system, regular overnight shelter, snow clothing and equipment, additional food and water, optional: large bucksaw or camp axe Possible hazards: The possible hazards of hiking may affect equipment choices and decisions about which skills are important to learn. Hazards encountered by hikers include:
**Left of the Dial** Left of the Dial: Left of the dial refers to the college and other non-commercial radio stations in the United States that broadcast from the reserved band of the FM spectrum. It can also refer to: "Left of the Dial" (song), a song from the 1985 album Tim by the Replacements that popularized the term Left of the Dial: Dispatches from the '80s Underground, a 2004 American compilation album of 1980s music "Left of the Dial", programme six of the 2007 BBC Two television series Seven Ages of Rock Left of the Dial (film), a 2005 HBO documentary about the founding of Air America
**Isabelle Boutron** Isabelle Boutron: Isabelle Boutron is a professor of epidemiology at the Université Paris Cité and head of the INSERM METHODS team within the Centre of Research in Epidemiology and Statistics (CRESS). She was originally trained in rheumatology and later switched to a career in epidemiology and public health. She is also deputy director of the French EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Centre, a member of the SPIRIT-CONSORT executive committee, director of Cochrane France and co-convenor of the Bias Methods Group of the Cochrane Collaboration. Biography and education: Boutron graduated from the Pierre and Marie Curie University in rheumatology in 2002 and obtained her PhD in epidemiology in 2006. She was a postdoctoral fellow in the Centre for Statistics in Medicine, University of Oxford, from 2008 to 2009, working under Douglas Altman. Biography and education: After being trained in rheumatology, Boutron was awarded a fellowship from the French Ministry of Health and a two-year fellowship from the Assistance Publique - Hôpitaux de Paris. With these fellowships, she switched her career focus to epidemiology and public health. She was awarded a PhD in epidemiology in 2006 and became an assistant professor of epidemiology at Paris Diderot University in the Department of Epidemiology and Biostatistics directed by Dr. Philippe Ravaud. After a postdoctoral position at the University of Oxford, she joined Paris Descartes University as an associate professor (2009-2012) and has been a professor at Université Paris Cité since 2012. Scientific activities: Boutron's research activities mainly focus on meta-research, interventional research on research, transparency, the peer-review process, methodological issues of assessing interventions (blinding, external validity, complex interventions) and research synthesis. She has worked on the internal and external validity of non-pharmacological trials, and co-led the extension of the CONSORT statement on reporting treatment trials for nonpharmacologic treatments. Along with her colleagues she edited a book entitled “Randomized Clinical Trials of Nonpharmacological Treatments.” She also investigates the distorted dissemination of research findings through publication bias, selective reporting of outcomes, and spin, defined as a distorted interpretation of research findings. Boutron has demonstrated the high prevalence of such distortion in the published scientific literature and shown how the biased presentation and interpretation of research results may bias the interpretation of readers, which is a critically important aspect of the knowledge translation process. Scientific activities: She led an innovative and ambitious joint doctoral training programme funded by the Marie Skłodowska-Curie Actions, dedicated to Methods in Research on Research (MIROR) in the field of clinical research. She is currently leading the COVID-NMA initiative, a living mapping and living evidence synthesis of preventive interventions and treatments for COVID-19. Scientific activities: Boutron has published more than 200 peer-reviewed articles. She is an academic editor for the academic journals BMC Medicine, PLOS Biology, and Cochrane Reviews. She is responsible for the teaching programs in Clinical Epidemiology and Public Health for medical students at Université Paris Cité, and is co-leader (with Pr. Ravaud) of the international Master 2 program, Comparative Effectiveness Research.
She is also deputy director of the Doctoral School of Public Health at Université Paris Cité. Academic awards and honors: Award Louis-Daniel Beauperthuy, Académie des sciences (2014) Personal: Boutron is married to Emmanuel Boutron. She has two children: Antoine, born in 1999, and Augustin, born in 2003.
**TUBGCP3** TUBGCP3: Gamma-tubulin complex component 3 is a protein that in humans is encoded by the TUBGCP3 gene. It is part of the gamma-tubulin complex, which is required for microtubule nucleation at the centrosome.
**Fire OS** Fire OS: Fire OS is a mobile operating system based on the Android Open Source Project (AOSP). It is developed by Amazon for its devices. Fire OS includes proprietary software, a customized user interface primarily centered on content consumption, and heavy ties to content available from Amazon's storefronts and services. History: Amazon began referring to the Android derivative as Fire OS with its third iteration of Fire tablets. Unlike previous Fire models, whose operating system was described as "based on" Android, Fire OS 3.0 was described as "compatible with" Android. History: Fire OS 5 Based on Android 5.1 "Lollipop", it added an updated interface. The home screen has a traditional application grid and pages for content types, as opposed to the previous carousel interface. It also introduced On Deck, a function that automatically moves content out of offline storage to maintain storage space for new content; the Word Runner speed-reading tool; and screen color filters. Parental controls were enhanced with a new web browser for FreeTime mode featuring a curated selection of content appropriate for children, and an Activity Center for monitoring children's usage. It removed support for device encryption, which an Amazon spokesperson stated was an enterprise-oriented feature that was underused. In March 2016, after the removal was publicized and criticized in the wake of the FBI–Apple encryption dispute, Amazon announced it would restore the feature in a future patch. History: Fire OS 6 Based on Android 7.1.2 "Nougat", its main changes and additions include: Adoptable storage, allowing users to format and use their SD card as internal storage Doze/App Standby, aiming to improve battery life by forcing devices to sleep when not actively used and adding restrictions to apps that would normally continue to run background processes MediaTek exploits (2019) In early 2019, security exploits for six Fire tablet models and one Fire TV model were discovered that could allow temporary root access, permanent root access, and bootloader unlocking, due to security vulnerabilities in multiple MediaTek chipsets. History: Fire OS 7 Based on Android 9.0 "Pie", it was released in 2019 for all 8th-11th generation Fire tablets. In February 2022, Amazon announced that the Docs app would be replaced (in August 2022) by document creation functionality in the Files app, and introduced an improved home editing system. Fire OS 8 Fire OS 8 is the latest release of Fire OS, for 12th-generation Fire tablets, based on Android 11; information about the release became available via Amazon developer documentation around May 2022. Fire OS 8 incorporates changes from Android 10 and Android 11 such as TLS 1.3 support enabled by default, High Efficiency Image File Format (HEIF) support, dark mode, one-time permissions, sharing improvements, and device auto backups (the user needs to opt in to device backups). The Amazon developer documentation notes that some Android 11 features, such as File-Based Encryption (FBE), are not supported yet. Features: Fire OS does not come with Google mobile services pre-installed; therefore, Amazon cannot use the Android trademarks to market the devices.
Users are able to sideload the Google Play Store; however, full compatibility is not guaranteed if an app depends on Google services. Because Google services are not pre-installed, Amazon develops and uses its own apps in their place, some of which include Amazon Appstore, Amazon Alexa, Prime Video, Amazon Music, Audible, Kindle Store, Silk Browser, Goodreads and Here WeGo. Fire OS uses a customized home screen (launcher). As of Fire OS 7.3.2.3, the launcher features three sections: "For You" shows the weather, recently used apps, and Alexa integration, then shows recommended content such as apps, books, movies, etc. Features: "Home" is the section for the icons of all of the apps currently installed on the device; apps on the Home section can be moved around or put into folders, and a search bar is also available at the top of the launcher to search through local content on the device or search online using the Bing search engine. "Library" shows purchased items from Amazon services, such as apps, books, and movies and TV shows from Prime Video. The OS features a multi-user system, which allows multiple people to set up and use separate user profiles. Along with Amazon Kids and Amazon Kids+, a suite of parental controls is included, which allows parents to create managed child profiles and set limits and restrictions for minors. Devices: Current Amazon devices running Fire OS: Fire Tablets - manufactured by Quanta Computer Fire TV - manufactured by Foxconn Amazon Echo / Amazon Echo Show - manufactured by Amazon Discontinued devices running Fire OS: Fire Phone - manufactured by Foxconn List of Fire OS versions: The releases are categorized by major Fire OS version based upon a certain Android codebase first and then sorted chronologically. List of Fire OS versions: Fire OS 1 – based on Android 2.3 Gingerbread system version = 6.3.1 system version = 6.3.2 – longer movie rentals, Amazon cloud synchronization system version = 6.3.4 – latest version for Kindle Fire (1st Generation) (2011) Fire OS 2 – based on Android 4.0 Ice Cream Sandwich system version = 7.5.1 – latest version for Kindle Fire HD (2nd Generation) (7" 2012) system version = 8.5.1 – latest version for Kindle Fire HD 8.9" (2nd Generation) (2012) system version = 10.5.1 – latest version for Kindle Fire (2nd Generation) (2012) Fire OS 2.4 – based on Android 4.0.3(?) Fire OS 3 Mojito – based on Android 4.2 Jelly Bean 3.1 3.2.8 – rollback point for Kindle Fire HDX (2013) 3.5.0 – introduces support for Fire Phone; Android 4.2.2 codebase 3.5.1 – Fire Phone maintenance version Fire OS 4 Sangria – based on Android 4.4 KitKat 4.1.1 4.5.5.1 4.5.5.2 4.5.5.3 – latest version for some tablets released in 2013, Kindle Fire HDX (3rd Generation), Kindle Fire HDX 8.9" (3rd Generation), Kindle Fire HD (3rd Generation) 4.5.5.5 – latest version for some tablets released in 2013 (e.g.
some Kindle Fire tablets of 3rd Generation) 4.6.6.0 – Fire Phone 4.6.6.1 – latest version for the Fire Phone Fire OS 5 Bellini – based on Android 5.1.1 Lollipop 5.0 5.0.5.1 – introduction of Fire TV 5.0.1 5.1.1 5.1.2 5.1.2.1 5.1.4 5.2.1.0 – Fire TV devices 5.2.1.1 5.2.1.2 5.2.4.0 5.2.6.0 5.2.6.1 5.2.6.2 5.2.8.4 5.3.1.0 5.3.1.1 – August 2016 5.3.2.0 – November 2016 5.3.2.1 – December 2016 5.3.3.0 – March 2017 5.3.6.4 – version for Fire HD 8 (6th Generation) 5.3.6.8 5.3.7.0 5.3.7.1 5.3.7.2 – for Fire HD 8 & Fire HD 10 (7th Generation) 5.4.0.0 – June 2017 5.4.0.1 – August 2017 5.5.0.0 – November 2017: only for Fire HD 10 (2017) with hands-free Alexa 5.6.0.0 – November 2017 5.6.0.1 – January 2018 5.6.1.0 – March 2018: version for tablets released in 2014 (e.g. some Fire tablets of 4th Generation) 5.6.2.0 – July 2018: hands-free Alexa for Fire 7 & HD 8 (2017) only 5.6.2.3 – April 2018: latest version for first and second generation Fire TV devices 5.6.3.0 – November 2018: for Fire 7 (5th to 7th Generation); due to a mistake, this version was accidentally released as 5.3.6.4 on some Fire tablets instead of 5.6.3.0, but includes the same update features. List of Fire OS versions: 5.6.3.8 – April 2019 5.6.4.0 – May 2019, September 2019: for Fire HD 8 5.6.6.0 – May 2020 5.6.7.0 – August 2020 5.6.8.0 – November 2020: latest version for Fire (5th Generation), Fire HD 6 (4th Generation), Fire HD 7 (4th Generation), Fire HD 8 (5th and 6th Generation), and Fire HD 10 (5th Generation) 5.6.9.0 – December 2020 5.7.0.0 – February 2022: latest version for Fire HDX 8.9 (4th Generation), Fire (7th Generation), Fire HD 8 (7th Generation), and Fire HD 10 (7th Generation) 5.8.6.8 – July 2019 5.8.7.9 – August 2019 5.7.8.2 – September 2019 Fire OS 6 – based on Android 7.1.2 Nougat 6.2.1.0 – October 2017, released on third generation Fire TV 6.2.1.2 – December 2017 6.2.1.3 – May 2018 6.3.0.1 – November 2018 6.3.1.2 – July 2019: version for Fire 7 (9th Generation) 6.3.1.3 (information needed) 6.3.1.4 (information needed) 6.3.1.5 – September 2019: last version of Fire OS 6 for Fire HD 8 (8th Generation) 6.5.3.4 – September 2019: last version for Fire 7 (7th Generation) 6.5.3.5 – November 2019 Fire OS 7 – based on Android 9.0 Pie 7.3.1.0 – October 2019: first version for Fire HD 10 (9th Generation) 7.3.1.1 – October 2019: second version for Fire HD 10 (9th Generation) 7.3.1.2 – February 2020: third version for Fire HD 10 (9th Generation) 7.3.1.3 – April 2020: fourth version for Fire HD 10 (9th Generation) 7.3.1.4 – June 2020: fifth version for Fire HD 10 (9th Generation) 7.3.1.5 – August 2020: first version of Fire OS 7 for Fire HD 8 (8th Generation) 7.3.1.6 – October 2020 7.3.1.7 – November 2020 7.3.1.8 – February 2021 7.3.1.9 – May 2021 7.3.2.1 – September 2021 7.3.2.2 – November 2021 7.3.2.3 – May 2022 7.3.2.4 – August 2022 7.3.2.6 – November 2022 7.3.2.7 – February 2023: deletion of skippable ads 7.3.2.8 – April 19, 2023: latest version for 8th, 9th, 10th and 11th Generation Fire tablets Fire OS 8 – based on Android 11 (Red Velvet Cake) 8.3.1.0 – June? 2022: included on some 12th Gen Fire 7 units out of the box 8.3.1.1 – June 28, 2022: known first version for the 12th Generation Fire 7 8.3.1.2 – September? 2022 8.3.1.3 – November 26, 2022 8.3.1.4 – March 7, 2023 8.3.1.9 – April 28, 2023: latest version for 12th Generation Fire tablets
**Sound speed profile** Sound speed profile: A sound speed profile shows the speed of sound in water at different vertical levels. It has two general representations: tabular form, with pairs of columns corresponding to ocean depth and the speed of sound at that depth, respectively. Sound speed profile: a plot of the speed of sound in the ocean as a function of depth, where the vertical axis corresponds to the depth and the horizontal axis corresponds to the sound speed. By convention, the horizontal axis is placed at the top of the plot, and the vertical axis is labeled with values that increase from top to bottom, thus reproducing visually the ocean from its surface downward. Table 1 shows an example of the first representation; figure 1 shows the same information using the second representation. Sound speed profile: Although given as a function of depth, the speed of sound in the ocean does not depend solely on depth. Rather, for a given depth, the speed of sound depends on the temperature at that depth, the depth itself, and the salinity at that depth, in that order of importance. The speed of sound in the ocean at different depths can be measured directly, e.g., by using a velocimeter, or, using measurements of temperature and salinity at different depths, it can be calculated using one of a number of sound speed formulae that have been developed. Examples of such formulae include those by Wilson, by Chen and Millero, and by Mackenzie. Each such formulation applies within specific limits of the independent variables. From the shape of the sound speed profile in figure 1, one can see the effect of the order of importance of temperature and depth on sound speed. Near the surface, where temperatures are generally highest, the sound speed is often highest because the effect of temperature on sound speed dominates. Further down the water column, temperature decreases through the ocean thermocline, and sound speed decreases with it. At a certain point, however, the effect of depth, i.e., pressure, begins to dominate, and the sound speed increases down to the ocean floor. Also visible in figure 1 is a common feature of sound speed profiles: the SOFAR channel. The axis of this channel is found at the depth of minimum sound speed. Sounds emitted at or near the axis of this channel propagate for very long horizontal distances, owing to the refraction of the sound back toward the channel's center. Sound speed profile data are necessary for underwater acoustic propagation models, especially those based on ray tracing theory.
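As an illustration of calculating sound speed from temperature, salinity, and depth, the sketch below uses Mackenzie's (1981) nine-term equation, one of the formulae mentioned above. The profile data (depths, temperatures, constant salinity) are invented for the example; real profiles would come from CTD or velocimeter measurements.

```python
import numpy as np

def mackenzie_sound_speed(T, S, D):
    """Speed of sound in seawater (m/s) from Mackenzie's (1981) nine-term
    equation. T: temperature (degrees C), S: salinity (parts per thousand),
    D: depth (m). Valid roughly for 2-30 C, 25-40 ppt, and 0-8000 m."""
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35) - 7.139e-13 * T * D**3)

# Invented example profile; real data would come from measurements.
depths = np.array([0, 100, 500, 1000, 2000, 4000])   # m
temps = np.array([20.0, 15.0, 8.0, 5.0, 3.0, 2.0])   # degrees C
salinity = 35.0                                      # ppt (held constant here)

# Print the tabular representation: depth vs. sound speed.
for d, t in zip(depths, temps):
    print(f"{d:5d} m  {mackenzie_sound_speed(t, salinity, d):7.1f} m/s")
```

Printed this way, the output is the tabular representation described above; plotting speed against depth with the depth axis increasing downward gives the second representation, and the sound speed minimum in such a profile marks the axis of the SOFAR channel.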
**Quantum Darwinism** Quantum Darwinism: Quantum Darwinism is a theory meant to explain the emergence of the classical world from the quantum world as due to a process of Darwinian natural selection induced by the environment interacting with the quantum system, where the many possible quantum states are selected against in favor of a stable pointer state. It was proposed in 2003 by Wojciech Zurek and a group of collaborators including Ollivier, Poulin, Paz and Blume-Kohout. The development of the theory is due to the integration of a number of Zurek's research topics pursued over the course of 25 years, including pointer states, einselection and decoherence. Quantum Darwinism: A 2010 study claimed to provide preliminary supporting evidence of quantum Darwinism, with scars of a quantum dot "becoming a family of mother-daughter states" indicating they could "stabilize into multiple pointer states"; additionally, a similar kind of scene has been suggested with perturbation-induced scarring in disordered quantum dots (see scars). However, the claimed evidence is also subject to the circularity criticism by Ruth Kastner (see Implications below). Basically, the de facto phenomenon of decoherence that underlies the claims of quantum Darwinism may not really arise in unitary-only dynamics. Thus, even if there is decoherence, this does not show that macroscopic pointer states naturally emerge without some form of collapse. Implications: Along with Zurek's related theory of envariance (invariance due to quantum entanglement), quantum Darwinism seeks to explain how the classical world emerges from the quantum world and proposes to answer the quantum measurement problem, the main interpretational challenge for quantum theory. The measurement problem arises because the quantum state vector, the source of all knowledge concerning quantum systems, evolves according to the Schrödinger equation into a linear superposition of different states, predicting paradoxical situations such as "Schrödinger's cat", situations never experienced in our classical world. Quantum theory has traditionally treated this problem as being resolved by a non-unitary transformation of the state vector at the time of measurement into a definite state. It provides an extremely accurate means of predicting the value of the definite state that will be measured, in the form of a probability for each possible measurement value. The physical nature of the transition from the quantum superposition of states to the definite classical state measured is not explained by the traditional theory but is usually assumed as an axiom, and was at the basis of the debate between Niels Bohr and Albert Einstein concerning the completeness of quantum theory. Implications: Quantum Darwinism seeks to explain the transition of quantum systems from the vast potentiality of superposed states to the greatly reduced set of pointer states as a selection process, einselection, imposed on the quantum system through its continuous interactions with the environment. All quantum interactions, including measurements, but much more typically interactions with the environment, such as with the sea of photons in which all quantum systems are immersed, lead to decoherence, or the manifestation of the quantum system in a particular basis dictated by the nature of the interaction in which the quantum system is involved.
In the case of interactions with its environment, Zurek and his collaborators have shown that a preferred basis into which a quantum system will decohere is the pointer basis underlying predictable classical states. It is in this sense that the pointer states of classical reality are selected from quantum reality and exist in the macroscopic realm in a state able to undergo further evolution. However, the einselection program depends on assuming a particular division of the universal quantum state into 'system' + 'environment', with the different degrees of freedom of the environment posited as having mutually random phases. This phase randomness does not arise from within the quantum state of the universe on its own, and Ruth Kastner has pointed out that this limits the explanatory power of the quantum Darwinism program. Zurek replies to Kastner's criticism in Classical selection and quantum Darwinism. As a quantum system's interactions with its environment result in the recording of many redundant copies of information regarding its pointer states, this information is available to numerous observers, who are able to achieve consensual agreement concerning the quantum state. This aspect of einselection, called by Zurek "environment as a witness", results in the potential for objective knowledge; a minimal numerical sketch of this redundant-record picture follows at the end of this article. Darwinian significance: Perhaps of equal significance to the light this theory shines on quantum explanations is its identification of a Darwinian process operating as the selective mechanism establishing our classical reality. As numerous researchers have made clear, any system employing a Darwinian process will evolve. As argued by the thesis of Universal Darwinism, Darwinian processes are not confined to biology but all follow the simple Darwinian algorithm: Reproduction/Heredity: the ability to make copies and thereby produce descendants. Darwinian significance: Selection: a process that preferentially selects one trait over another, leading to one trait being more numerous after sufficient generations. Darwinian significance: Variation: differences in heritable traits that affect "fitness", or the ability to survive and reproduce, leading to differential survival. Quantum Darwinism appears to conform to this algorithm and thus is aptly named: numerous copies are made of pointer states, and successive interactions between pointer states and their environment reveal them to evolve, with those states surviving which conform to the predictions of classical physics within the macroscopic world. This happens in a continuous, predictable manner; that is, descendants inherit many of their traits from ancestor states. Darwinian significance: The analogy to the variation principle of "simple Darwinism" does not hold, since the pointer states do not mutate and the selection by the environment is among the pointer states preferred by the environment (e.g. location states). From this view, quantum Darwinism provides a Darwinian explanation at the basis of our reality, explaining the unfolding or evolution of our classical macroscopic world.
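The redundant-record picture mentioned above can be made concrete with a toy model. The sketch below is not Zurek's formalism, only a minimal illustration under assumed parameters: a system qubit in superposition interacts with a few environment qubits (e.g. via CNOT gates), the system's reduced state decoheres into the pointer basis, and each environment fragment carries the same record.

```python
import numpy as np

# Toy model, not Zurek's formalism: after N CNOTs from the system onto a
# |0...0> environment, the joint state is the branching state
# a|0>|00..0> + b|1>|11..1>.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)   # assumed system amplitudes
N = 3                                   # assumed environment size
dim_env = 2 ** N

# Rows index the system qubit, columns the environment register.
psi = np.zeros((2, dim_env), dtype=complex)
psi[0, 0] = a              # system |0>, environment |00..0>
psi[1, dim_env - 1] = b    # system |1>, environment |11..1>

# Tracing out the environment leaves a diagonal system state: the
# superposition has decohered into the pointer basis {|0>, |1>}.
rho_system = psi @ psi.conj().T
print(np.round(rho_system.real, 3))    # [[0.5 0.] [0. 0.5]]

# Any single environment qubit holds the same diagonal record, so the
# pointer-state information is stored redundantly ("environment as witness").
psi3 = psi.reshape(2, 2, dim_env // 2)  # (system, first env qubit, rest)
rho_fragment = np.einsum('ser,sur->eu', psi3, psi3.conj())
print(np.round(rho_fragment.real, 3))  # [[0.5 0.] [0. 0.5]]
```

The off-diagonal terms of the system's reduced density matrix vanish because the two environment branches are orthogonal, and every environment fragment independently reports the same pointer outcome, which is the redundancy that lets multiple observers agree.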
**Zemra** Zemra: Zemra is a DDoS bot that was first discovered in underground forums in May 2012. Zemra is capable of HTTP flooding and SYN flooding, and it has a simple command-and-control panel, protected with 256-bit DES encryption, for communicating with its command-and-control (C&C) server. Zemra also collects information such as the computer name, language settings, and Windows version, and sends this data to a remote location on a specific date and time. It also opens a backdoor on TCP port 7710 to receive commands from a remote command-and-control server, and it is able to monitor devices, collect system information, execute files, and even update or uninstall itself if necessary.
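The one concrete network indicator given above is the TCP port 7710 backdoor. A defender could check whether that port is reachable on a suspect host; a minimal sketch follows. The host address is a placeholder, and an open port is at most a weak indicator, not proof of a Zemra infection.

```python
import socket

def port_open(host: str, port: int = 7710, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Illustrative triage only: many services could listen on 7710, so a
    positive result merely flags the host for deeper inspection.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 192.0.2.10 is a reserved documentation address (RFC 5737).
    print(port_open("192.0.2.10"))
```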
**Monacolin J** Monacolin J: Monacolin J is a statin produced by red yeast rice. Monacolin J is a precursor to simvastatin and has potential neuroprotective activities. It can be produced by total mycosynthesis.
**Automatic grenade launcher** Automatic grenade launcher: An automatic grenade launcher (AGL) or grenade machine gun is a grenade launcher that is capable of fully automatic fire and is typically loaded with either an ammunition belt or a magazine. These weapons are often mounted on vehicles or helicopters because, when they are moved by infantry, the weapon, its tripod, and its ammunition make a heavy load that requires a small team to carry. Other types of grenade launchers are typically much lighter and can easily be carried by a single soldier.

Automatic grenade launcher: The Mark 19 Automatic Grenade Launcher, first fielded by the United States in 1966 and still widely used today, weighs 62.5 kg (137.58 lb) when attached to its tripod and loaded with a box of ammunition.

Automatic grenade launcher: For comparison, the single-shot M79 grenade launcher weighs 2.93 kg (6.45 lb). Regardless of their weight, AGLs are highly effective, and the Mark 19 is capable of indirect fire up to 2,200 metres, a role traditionally reserved for mortars. Even though the round carries less explosive than a 60mm mortar shell, this is thought to be counterbalanced by its much higher volume of fire.

Automatic grenade launcher: The most popular caliber for automatic grenade launchers in Western nations has been 40mm. The Soviet Union successfully fielded a 30mm grenade launcher, the AGS-17, during its war in Afghanistan. In 2002, Russia introduced a successor weapon, the AGS-30, and in 2017, the AGS-40 Balkan. Traditional munitions for automatic grenade launchers include high explosive, fragmentation, and shaped charge for attacking light armored vehicles. Less-lethal rounds, like tear gas and sponge grenades for crowd control, have also been made. In the 21st century, AGLs have been made with integrated sight/range systems which can set a fused round to detonate precisely on, above, or behind a designated target. Different weapons use different methods of operation, with blowback and long recoil being two common choices. In all these weapons, the energy released by firing a round loads the next round into the weapon's breech. The Mark 19 is automatically reloaded through the blowback method, in which expanding gases blow back the bolt. In the long-recoil method, the bolt is locked to the barrel and the two recoil together. These weapons are slightly less accurate but weigh less than blowback weapons. General Dynamics manufactures a long-recoil weapon, the Mark 47 Automatic Grenade Launcher, as does the Spanish firm Santa Bárbara. The LAG-40 manufactured by Santa Bárbara has a relatively low rate of fire of 215 rounds per minute.
**Emergent evolution** Emergent evolution: Emergent evolution is the hypothesis that, in the course of evolution, some entirely new properties, such as mind and consciousness, appear at certain critical points, usually because of an unpredictable rearrangement of the already existing entities. The term was originated by the psychologist C. Lloyd Morgan in 1922 in his Gifford Lectures at St. Andrews, which would later be published as the 1923 book Emergent Evolution. The hypothesis has been widely criticized for providing no mechanism for how entirely new properties emerge, and for its historical roots in teleology. Historically, emergent evolution has been described as an alternative to materialism and vitalism. Emergent evolution is distinct from the hypothesis of Emergent Evolutionary Potential (EEP), which was introduced in 2019 by Gene Levinson. In EEP, the scientific mechanism of Darwinian natural selection tends to preserve new, more complex entities that arise from interactions between previously existing entities, when those interactions prove useful, by trial and error, in the struggle for existence. Biological organization arising via EEP is complementary to organization arising via gradual accumulation of incremental variation.

Historical context: The term emergent was first used to describe the concept by George Lewes in volume two of his 1875 book Problems of Life and Mind (p. 412). Henri Bergson covered similar themes in his popular 1907 book Creative Evolution on the élan vital. Emergence was further developed by Samuel Alexander in his Gifford Lectures at Glasgow during 1916-18 and published as Space, Time, and Deity (1920). The related term emergent evolution was coined by C. Lloyd Morgan in his own Gifford lectures of 1921-22 at St. Andrews and published as Emergent Evolution (1923). In an appendix to a lecture in his book, Morgan acknowledged the contributions of Roy Wood Sellars's Evolutionary Naturalism (1922).

Origins: Response to Darwin's Origin of Species Charles Darwin and Alfred Russel Wallace's presentation of natural selection, coupled to the idea of evolution in Western thought, had gained acceptance due to the wealth of observational data provided and the seeming replacement of divine law with natural law in the affairs of men. However, the mechanism of natural selection described at the time only explained how organisms adapted through variation. The cause of genetic variation was then unknown.

Origins: Darwin knew that nature had to produce variations before natural selection could act ... The problem had been caught by other evolutionists almost as soon as The Origin of Species was first published. Sir Charles Lyell saw it clearly in 1860 before he even became an evolutionist ... (Reid, p. 3) St. George Jackson Mivart's On the Genesis of Species (1872) and Edward Cope's Origin of the Fittest (1887) raised the need to address the origin of variation between members of a species. William Bateson in 1894 distinguished between the origin of novel variations and the action of natural selection (Materials for the Study of Variation Treated with Especial Regard to Discontinuity in the Origin of Species).

Origins: Wallace's further thoughts Wallace throughout his life continued to support and extend the scope of Darwin's theory of evolution via the mechanism of natural selection. One of his works, Darwinism, was often cited in support of Darwin's theory. He also worked to elaborate and extend Darwin's ideas on natural selection.
However, Wallace also realized that the scope and claims of the theory were limited. Darwin himself had limited it.

Origins: the most prominent feature is that I enter into popular yet critical examination of those underlying fundamental problems which Darwin purposely excluded from his works as being beyond the scope of his enquiry. Such are the nature and cause of Life itself, and more especially of its most fundamental and mysterious powers - growth and reproduction ... Darwin always ... adduced the "laws of Growth with Reproduction," and of "Inheritance with Variability," as being fundamental facts of nature, without which Natural Selection would be powerless or even non-existent ...

Origins: ... even if it were proved to be an exact representation of the facts, it would not be an explanation ... because it would not account for the forces, the directive agency, and the organising power which are essential features of growth ... In examining this aspect, excluded ab initio by Darwin, Wallace came to the conclusion that Life itself cannot be understood except by means of a theory that includes "an organising and directive Life-Principle." These necessarily involve a "Creative Power", a "directive Mind" and finally "an ultimate Purpose" (the development of Man). It supports the view of John Hunter that "life is the cause, not the consequence" of the organisation of matter. Thus, life precedes matter and, when it infuses matter, forms living matter (protoplasm).

Origins: a very well-founded doctrine, and one which was often advocated by John Hunter, that life is the cause and not the consequence of organisation ... if so, life must be antecedent to organisation, and can only be conceived as indissolubly connected with spirit and with thought, and with the cause of the directive energy everywhere manifested in the growth of living things ... endowed with the mysterious organising power we term life ...

Origins: Wallace then refers to the operation of another power called "mind" that utilizes the power of life and is connected with a higher realm than life or matter: evidence of a foreseeing mind which ... so directed and organised that life, in all its myriad forms, as, in the far-off future, to provide all that was most essential for the growth and development of man's spiritual nature ...

Origins: Proceeding from Hunter's view that Life is the directive power above and behind living matter, Wallace argues that logically, Mind is the cause of consciousness, which exists in different degrees and kinds in living matter. If, as John Hunter, T.H. Huxley, and other eminent thinkers have declared, "life is the cause, not the consequence, of organisation," so we may believe that mind is the cause, not the consequence, of brain development. ... So there are undoubtedly different degrees and probably also different kinds of mind in various grades of animal life ... And ... so the mind-giver ... enables each class or order of animals to obtain the amount of mind requisite for its place in nature ...

Emergent evolution: Early roots The issue of how change in nature 'emerged' can be found in classical Greek thought - order coming out of chaos, whether by chance or necessity. Aristotle spoke of wholes that were greater than the sum of their parts because of emergent properties. The second-century anatomist and physiologist Galen also distinguished between the resultant and emergent qualities of wholes. (Reid, p. 72)
Hegel spoke of the revolutionary progression of life from non-living to conscious and then to the spiritual, and Kant perceived that simple parts of an organism interact to produce a progressively complex series of emergences of functional forms, a distinction that carried over to John Stuart Mill (1843), who stated that even chemical compounds have novel features that cannot be predicted from their elements. (Reid, p. 72) The idea of an emergent quality that was something new in nature was further taken up by George Henry Lewes (1874-1875), who again noted, as with Galen earlier, that these evolutionary "emergent" qualities are distinguishable from adaptive, additive "resultants." Henry Drummond in The Ascent of Man (1894) stated that emergence can be seen in the fact that the laws of nature are different for the organic or vital realm compared to the inert inorganic realm.

Emergent evolution: When we pass from the inorganic to the organic we come upon a new set of laws - but the reason why the lower set do not seem to operate in the higher sphere is not that they are annihilated, but that they are overruled. (Drummond 1883, p. 405, quoted in Reid) As Reid points out, Drummond also realized that greater complexity brought greater adaptability. (Reid, p. 73) Samuel Alexander took up the idea that emergences had properties that overruled the demands of the lower levels of organization. More recently, this theme was taken up by John Holland (1998): If we turn reductionism on its head we add levels. More carefully, we add new laws that satisfy the constraints imposed by laws already in place. Moreover these new laws apply to complex phenomena that are consequences of the original laws; they are at a new level.

Emergent evolution: C. Lloyd Morgan and emergent evolution Another major scientist to question natural selection as the motive force of evolution was C. Lloyd Morgan, a zoologist and student of T.H. Huxley, who had a strong influence on Samuel Alexander. His Emergent Evolution (1923) established the central idea that an emergence might have the appearance of saltation but was best regarded as "a qualitative change of direction or critical turning point." (quoted in Reid, pp. 73-74) Morgan, due to his work in animal psychology, had earlier (1894) questioned the continuity view of mental evolution, and held that there were various discontinuities in cross-species mental abilities. To offset any attempt to read anthropomorphism into his view, he created the famous, but often misunderstood, methodological canon: In no case may we interpret an action as the outcome of the exercise of a higher psychical faculty, if it can be interpreted as the outcome of the exercise of one which stands lower in the psychological scale.

Emergent evolution: However, Morgan, realizing that this was being misused to advocate reductionism (rather than taken as a general methodological caution), introduced a qualification into the second edition of his An Introduction to Comparative Psychology (1903): To this, however, it should be added, lest the range of the principle be misunderstood, that the canon by no means excludes the interpretation of a particular activity in terms of the higher processes, if we already have independent evidence of the occurrence of these higher processes in the animal under observation.
Emergent evolution: As Reid observes, While the so-called historiographical "rehabilitation of the canon" has been underway for some time now, Morgan's emergent evolutionist position (which was the highest expression of his attempt to place the study of mind back into such a "wider" natural history) is seldom mentioned in more than passing terms even within contemporary history of psychology textbooks. Morgan also fought against the behaviorist school and clarified even further his emergent views on evolution: An influential school of 'behaviorists' roundly deny that mental relations, if such there be, are in any sense or in any manner effective... My message is that one may speak of mental relations as effective no less 'scientifically' than... physical relations...

Emergent evolution: His Animal Conduct (1930) explicitly distinguishes between three "grades" or "levels of mentality", which he labeled 'percipient, perceptive, and reflective' (p. 42). Alexander and the emergence of mind Morgan's idea of a polaric relationship between lower and higher was taken up by Samuel Alexander, who argued that the mental process is not reducible to the neural processes on which it depends at the physical-material level. Instead, they are two poles of a unity of function. Further, the neural process that expresses mental process itself possesses a quality (mind) that the other neural processes do not. At the same time, the mental process, because it is functionally identical to this particular neural process, is also a vital one. And mental process is also "something new", "a fresh creation", which precludes a psycho-physiological parallelism. Reductionism is also contrary to empirical fact.

Emergent evolution: All the available evidence of fact leads to the conclusion that the mental element is essential to the neural process which it is said to accompany...and is not accidental to it, nor is it in turn indifferent to the mental feature. Epiphenomenalism is a mere fallacy of observation. At the same time Alexander stated that his view was not one of animism or vitalism, where the mind is an independent entity acting on the brain or, conversely, acted upon by the brain. Mental activity is an emergent, a new "thing" not reducible to its initial neural parts.

Emergent evolution: For Alexander, the world unfolds in space-time, which has the inherent quality of motion. This motion through space-time results in new “complexities of motion” in the form of a new quality or emergent. The emergent retains the qualities of the prior “complexities of motion” but also has something new that was not there before. This something new comes with its own laws of behavior. Time is the quality that creates motion through Space, and matter is simply motion expressed in forms in Space, or as Alexander says a little later, “complexes of motion.” Matter arises out of the basic ground of Space-Time continuity and has an element of “body” (lower order) and an element of “mind” (higher order), or “the conception that a secondary quality is the mind of its primary substrate.” Mind is an emergent from life and life itself is an emergent from matter.
Each level contains and is interconnected with the levels and qualities below it, and to the extent that it contains lower levels, these aspects are subject to the laws of those levels. All mental functions are living, but not all living functions are mental; all living functions are physico-chemical, but not all physico-chemical processes are living - just as we could say that all people living in Ohio are Americans, but not all Americans live in Ohio. Thus, there are levels of existence, or natural jurisdictions, within a given higher level, such that the higher level contains elements of each of the previous levels of existence. The physical level contains the pure dimensionality of Space-Time in addition to the emergent of physico-chemical processes; the next emergent level, life, also contains Space-Time as well as the physico-chemical, in addition to the quality of life; the level of mind contains all of the previous three levels, plus consciousness. As a result of this nesting and interaction of emergents, like fluid Russian dolls, higher emergents cannot be reduced to lower ones, and different laws and methods of inquiry are required for each level.

Emergent evolution: Life is not an epiphenomenon of matter but an emergent from it ... The new character or quality which the vital physico-chemical complex possesses stands to it as soul or mind to the neural basis. For Alexander, the "directing agency" or entelechy is found "in the principle or plan": a given stage of material complexity is characterised by such and such special features…By accepting this we at any rate confine ourselves to noting the facts…and do not invent entities for which there seems to be no other justification than that something is done in life which is not done in matter.

Emergent evolution: While an emergent is a higher complexity, it also results in a new simplicity, as it brings a higher order into what was previously less ordered (a new simplex out of a complex). This new simplicity does not carry any of the qualities or aspects of the emergent levels prior to it, but, as noted, it does still carry within it such lower levels, so it can be understood to that extent through the sciences of those levels, yet it cannot itself be understood except by a science that is able to reveal the new laws and principles applicable to it.

Emergent evolution: Ascent takes place, it would seem, through complexity [increasing order]. But at each change of quality the complexity as it were gathers itself together and is expressed in a new simplicity. Within a given level of emergence, there are degrees of development. ... There are on one level degrees of perfection or development; and at the same time there is affinity by descent between the existents belonging to the level. This difference of perfection is not the same thing as difference of order or rank such as subsists between matter and life or life and mind ...

Emergent evolution: The concept or idea of mind, the highest emergent known to us, being at our level, extends all the way down to pure dimensionality or Space-Time. In other words, time is the “mind” of motion, materialising is the “mind” of matter, living the “mind” of life. Motion through pure time (or life astronomical, mind ideational) emerges as matter “materialising” (geological time, life geological, mind existential), and this emerges as life “living” (biological time, life biological, mind experiential), which in turn gives us mind “minding” (historical time, life historical, mind cognitional).
But there is also an extension of mind possible upwards, to what we call Deity.

Emergent evolution: let us describe the empirical quality of any kind of finite which performs to it the office of consciousness or mind as its 'mind.' Yet at the same time let us remember that the 'mind' of a living thing is not conscious mind but is life, and has not the empirical character of consciousness at all, and that life is not merely a lower degree of mind or consciousness, but something different. We are using 'mind' metaphorically by transference from real minds and applying it to the finites on each level in virtue of their distinctive quality; down to Space-Time itself, whose existent complexes of bare space-time have for their mind bare time in its empirical variations.

Emergent evolution: Alexander goes back to the Greek idea of knowledge being "out there" in the object being contemplated. In that sense, there is no mental object (concept) "distinct" (that is, different in state of being) from the physical object, but only an apparent split between the two, which can then be brought together by proper compresence or participation of the consciousness in the object itself.

Emergent evolution: There is no consciousness lodged, as I have supposed, in the organism as a quality of the neural response; consciousness belongs to the totality of objects, of what are commonly called the objects of consciousness or the field of consciousness ... Consciousness is therefore "out there" where the objects are, by a new version of Berkleyanism ... Obviously for this doctrine as for mine there is no mental object as distinct from a physical object: the image of a tree is a tree in an appropriate form...

Emergent evolution: Because of the interconnectedness of the universe by virtue of Space-Time, and because the mind apprehends space, time and motion through a unity of sense and mind experience, there is a form of knowing that is intuitive (participative); sense and reason are outgrowths from it. In being conscious of its own space and time, the mind is conscious of the space and time of external things, and vice versa. This is a direct consequence of the continuity of Space-Time, in virtue of which any point-instant is connected sooner or later, directly or indirectly, with every other...

Emergent evolution: The mind therefore does not apprehend the space of its objects, that is their shape, size and locality, by sensation, for it depends for its character on mere spatio-temporal conditions, though it is not to be had as consciousness in the absence of sensation (or else of course ideation). It is clear without repeating these considerations that the same proposition is true of Time; and of motion ... I shall call this mode of apprehension, in its distinction from sensation, intuition. ... Intuition is different from reason, but reason and sense alike are outgrowths from it, empirical determinations of it...

Emergent evolution: In a sense, the universe is a participative one, open to participation by mind as well, so that mind can intuitively know an object, contrary to what Kant asserted. Participation (togetherness) is something that is "enjoyed" (experienced), not contemplated, though at the higher level of consciousness it would be contemplated.

Emergent evolution: The universe for Alexander is essentially in process, with Time as its ongoing aspect, and the ongoing process consists in the formation of changing complexes of motions.
These complexes become ordered in repeatable ways displaying what he calls "qualities." There is a hierarchy of kinds of organized patterns of motions, in which each level depends on the subvening level but also displays qualities not shown at the subvening level nor predictable from it… On this there sometimes supervenes a further level with the quality called "life"; and certain subtle syntheses which carry life are the foundation for a further level with a new quality, "mind." This is the highest level known to us, but not necessarily the highest possible level. The universe has a forward thrust, called its "nisus" (broadly to be identified with the Time aspect), in virtue of which further levels are to be expected...

Emergent evolution: Robert G. B. Reid Emergent evolution was revived by Robert G. B. Reid (March 20, 1939 - May 28, 2016), a biology professor at the University of Victoria (in British Columbia, Canada). In his book Evolutionary Theory: The Unfinished Synthesis (1985), he stated that the modern evolutionary synthesis, with its emphasis on natural selection, is an incomplete picture of evolution, and that emergent evolution can explain the origin of genetic variation. Biologist Ernst Mayr heavily criticized the book, claiming it was a misinformed attack on natural selection. Mayr commented that Reid was working from an "obsolete conceptual framework", provided no solid evidence, and was arguing for a teleological process of evolution. In 2004, biologist Samuel Scheiner stated that Reid's "presentation is both a caricature of evolutionary theory and severely out of date." Reid later published the book Biological Emergences (2007) with a theory of how emergent novelties are generated in evolution. According to Massimo Pigliucci, "Biological Emergences by Robert Reid is an interesting contribution to the ongoing debate on the status of evolutionary theory, but it is hard to separate the good stuff from the more dubious claims." One dubious claim Pigliucci noted in the book is that natural selection has no role in evolution. The book was positively reviewed by biologist Alexander Badyaev, who commented that "the book succeeds in drawing attention to an under appreciated aspect of the evolutionary process". Others have criticized Reid's unorthodox views on emergence and evolution.
**Voiced epiglottal affricate** Voiced epiglottal affricate: The voiced epiglottal affricate ([ʡ͡ʢ] in IPA) is a rare affricate consonant that is initiated as an epiglottal stop [ʡ] and released as a voiced epiglottal fricative [ʢ]. It has not been reported to occur phonemically in any language.

Features: Features of the voiced epiglottal affricate: Its manner of articulation is affricate, which means it is produced by first stopping the airflow entirely, then allowing airflow through a constricted channel at the place of articulation, causing turbulence. Its place of articulation is epiglottal, which means it is articulated with the aryepiglottic folds against the epiglottis. Its phonation is voiced, which means the vocal cords vibrate during articulation. It is an oral consonant, which means air is allowed to escape through the mouth only. It is a central consonant, which means it is produced by directing the airstream along the center of the tongue, rather than to the sides. The airstream mechanism is pulmonic, which means it is articulated by pushing air solely with the intercostal muscles and diaphragm, as in most sounds.
**Analog device** Analog device: Analog devices are a combination of an analog machine and analog media that can together measure, record, reproduce, receive or broadcast continuous information, for example, the almost infinite number of grades of transparency, voltage, resistance, rotation, or pressure. In theory, the continuous information in an analog signal has an infinite number of possible values, with the only limitation on resolution being the accuracy of the analog device.

Analog device: Analog media are materials with analog properties, such as photographic film, which are used in analog devices, such as cameras.

Example devices: Non-electrical There are notable non-electrical analog devices, such as some clocks (sundials, water clocks), the astrolabe, slide rules, the governor of a steam engine, the planimeter (a simple device that measures the surface area of a closed shape), Kelvin's mechanical tide predictor, acoustic rangefinders, servomechanisms (e.g. the thermostat), the simple mercury thermometer, the weighing scale, and the speedometer.

Example devices: Electrical The telautograph is an analog precursor to the modern fax machine. It transmits electrical impulses recorded by potentiometers to stepping motors attached to a pen, thus being able to reproduce a drawing or signature made by the sender at the receiver's station. It was the first such device to transmit drawings to a stationary sheet of paper; previous inventions in Europe used rotating drums to make such transmissions.

Example devices: An analog synthesizer is a synthesizer that uses analog circuits and analog computer techniques to generate sound electronically. Analog television encodes and transports the picture and sound information as an analog signal, that is, by varying the amplitude and/or frequencies of the broadcast signal. All systems preceding digital television, such as NTSC, PAL, and SECAM, are analog television systems. An analog computer is a form of computer that uses electrical, mechanical, or hydraulic phenomena to model the problem being solved. More generally, an analog computer uses one kind of physical quantity to represent the behavior of another physical system or mathematical function. Modeling a real physical system in a computer is called simulation.

Example processes: Media The chemical reactions in photographic film and film stock involve analog processes, with the camera as the machine.

Interfacing the digital and analog worlds: In electronics, a digital-to-analog converter is a circuit for converting a digital signal (usually binary) to an analog signal (current, voltage or electric charge). Digital-to-analog converters are interfaces between the digital and analog worlds. An analog-to-digital converter is an electronic circuit that converts continuous signals to discrete digital numbers.
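The mapping these converters perform can be made concrete with a small numeric sketch. The functions below model an idealized N-bit converter pair; the 8-bit width and 5 V reference are arbitrary illustrative choices, not parameters of any particular chip, and real converters add quantization noise, nonlinearity, and offset errors on top of this ideal transfer function.

```python
def ideal_dac(code: int, n_bits: int = 8, v_ref: float = 5.0) -> float:
    """Map an n-bit digital code to a voltage on an idealized DAC."""
    if not 0 <= code < 2 ** n_bits:
        raise ValueError("code out of range for the given bit width")
    # Full-scale code (2**n - 1) maps to the reference voltage.
    return v_ref * code / (2 ** n_bits - 1)

def ideal_adc(voltage: float, n_bits: int = 8, v_ref: float = 5.0) -> int:
    """Quantize a voltage to the nearest n-bit code on an idealized ADC."""
    code = round(voltage / v_ref * (2 ** n_bits - 1))
    # Clamp to the representable range, as a real converter saturates.
    return max(0, min(code, 2 ** n_bits - 1))

print(ideal_dac(128))   # ~2.51 V at mid-scale
print(ideal_adc(2.51))  # 128: the round trip recovers the code
```

The round trip illustrates the resolution limit mentioned above: the ADC collapses a continuum of input voltages onto 2^N discrete codes, so information between adjacent steps is unavoidably lost.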
**BAPP** BAPP: The acronyms BAPP (or B.A.P.P.) and BAMP (or B.A.M.P.) refer to a set of open-source software programs commonly used together to run dynamic websites or servers. This set is a solution stack and an open-source web platform.

BAPP: BAPP refers to: BSD, a family of operating systems; Apache, the web server; PostgreSQL, the database management system (or database server); and Perl, PHP, and/or Python, scripting/programming languages. BAMP refers to: BSD, a family of operating systems; Apache, the web server; MySQL, the database management system (or database server); and Perl, PHP, and/or Python, scripting/programming languages. The two acronyms have three major uses: to define a web server infrastructure; to define a programming paradigm of developing software; and to define a software distribution package.

Underlying BSD family of operating systems: As an operating system, FreeBSD (a BSD descendant) is generally regarded as reliable and robust, and of the operating systems that accurately report uptime remotely, FreeBSD (and other BSD descendants) are the most common free operating systems listed in Netcraft's list of the 50 web servers with the longest uptime (uptime on some operating systems, such as some versions of Linux, cannot be determined remotely), making it a top choice among ISPs and hosting providers. A long uptime also indicates that no kernel updates have been deemed necessary, as installing a new kernel requires a reboot and resets the uptime counter of the system.

Solution stack: Though the originators of these open-source programs did not design them all to work specifically with each other, the combination has become popular because of its low acquisition cost and because of the ubiquity of its components (which come bundled with most current BSD distributions, particularly as deployed by ISPs). When used in combination they represent a solution stack of technologies that support application servers. Other such stacks include unified application development environments such as Apple's WebObjects, Java/Jakarta EE, Grails, and Microsoft's .NET architecture.

Interface: The scripting component of the BAPP stack has its origins in the CGI web interfaces that became popular in the early 1990s. This technology allows the user of a web browser to execute a program on the web server, and thereby receive dynamic as well as static content. Programmers used scripting languages with these programs because of their ability to manipulate text streams easily and efficiently, even when they originate from disparate sources. For this reason system designers often referred to such scripting systems as glue languages.

Variants: Other variants of the term include: Instead of BSD: LAPP, using Linux; MAPP, using Macintosh; WAPP, using Windows. Instead of PostgreSQL: BAMP, using MySQL; FBAP, using Firebird; BAIP, using Informix; BAPS, using servlets. Others, or some combination of the above: BAPPS, with the S for SSL; BCHS, for BSD, C, httpd (OpenBSD's web server), and SQLite; LAMP, using Linux, Apache and MySQL; WAMP, using Windows, Apache and MySQL; WIPP, for Microsoft Windows, Microsoft IIS, PostgreSQL, and PHP; WISP, for Microsoft Windows, Microsoft IIS, Microsoft SQL Server, and PHP; WISA, for Microsoft Windows, Microsoft IIS, Microsoft SQL Server, and ASP.NET; MARS, for MySQL, Apache, Ruby, and Solaris; FWIP, for Firebird, Windows, IIS, and PHP; FWAP, for Firebird, Windows, Apache, and PHP.
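As an illustration of the glue-language role described under Interface, here is a minimal sketch of a CGI script on a BAPP-style stack: Apache invokes the script per request, and the script queries PostgreSQL and emits HTML. Python stands in for the "P" layer; the database name, user, and the greetings table are hypothetical, and the third-party psycopg2 driver is assumed to be installed.

```python
#!/usr/bin/env python3
# Minimal CGI sketch for a BAPP-style stack (BSD + Apache + PostgreSQL +
# a scripting language). Apache runs this script; it queries the database
# and writes an HTML response to stdout, in classic CGI fashion.
import psycopg2  # third-party PostgreSQL driver, assumed installed

# A CGI response starts with headers, then a blank line, then the body.
print("Content-Type: text/html")
print()

# Hypothetical connection parameters and table, for illustration only.
conn = psycopg2.connect(dbname="demo", user="www", host="localhost")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT message FROM greetings LIMIT 1;")
        row = cur.fetchone()
finally:
    conn.close()

print("<html><body><p>{}</p></body></html>".format(
    row[0] if row else "no rows"))
```

Swapping MySQL in for PostgreSQL (the BAMP variant) would change only the driver and connection call; the Apache-to-script-to-database shape of the stack stays the same, which is why these acronyms are interchangeable descriptions of one architecture.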
**Histolysis** Histolysis: Histolysis is the decay and dissolution of organic tissues or of blood. It is sometimes referred to as histodialysis. In cells, histolysis may be caused by uracil-DNA degradation. Origin: New Latin, from Greek ἱστός (histos) tissue + λύσις (lusis) dissolution, from λύειν (luein) to loosen, dissolve. Histolysis is associated with metamorphosis as well as other morphological changes. The loss of organs or blood begins with cell death, which can be caused by a number of factors. In frogs, the histolysis of the tail associated with metamorphosis is also associated with a lowering of the pH of the blood. Increases in histolysis have been found to correspond with the pupal phase of insect metamorphosis, wherein larval organs break down before the histogenesis of the adult tissues occurs. This histolysis is associated with an increase in the production of ATP and a decrease in metabolism.
**Conservation and restoration of historic firearms** Conservation and restoration of historic firearms: The conservation and restoration of historic firearms is preventative care, damage repair, stabilization, replacement of missing components, and potentially the return of the firearm to firing capabilities. It requires an understanding of the different types of historic firearms and knowledge of the care and treatment of organic and inorganic materials, as firearms are composed of many types of materials, from wood to metal, that are fitted together.

History: The term historic firearm refers to armaments used prior to the 20th century. Firearms vary greatly in type, function, firing mechanisms, and decorative elements. Firearms are composite objects, meaning they are made of different materials. Generally, the core components of historic firearms are metal (iron, steel, brass) and wood. Decorative elements may include other metals like gold or silver, and organic materials such as bone, antler, and ivory. Animal hide and sinew may also have been used to repair damaged firearms or to build a new weapon from multiple parts. Historic firearms are identified by barrel style, by how they are loaded, and by lock, the firing mechanism. Firearms can first be classified by barrel style, described as either smoothbore or rifled: the barrel of a smoothbore has a smooth interior, whereas a rifled barrel has a helical groove cut into it. Historic firearms may also be grouped by how they are loaded, in which case they are referred to as muzzle-loaded or breech-loaded. Muzzle-loaded firearms are loaded through the front end of the barrel and were typically left loaded and ready for use; a muzzle-loaded firearm uses a lock to hold and ignite the gunpowder which fires the gun. Breech-loaded firearms, first introduced in the 16th century, are loaded with cartridges or shells through the rear of the barrel and have a variety of ignition mechanisms. A third way historic firearms can be categorized is by their firing mechanism, known as a lock. Several types of locks have been developed over the centuries; however, the most common types found in the United States are the matchlock, wheel-lock, flintlock, and percussion cap.

History: Types of Historic Firearms The earliest lock was the matchlock, which used a match to ignite the powder. These were smoothbore and muzzle-loaded. The harquebus (arquebus) and muskets prior to the 17th century are two examples of matchlocks. The wheellock, developed around 1500, used a spring-loaded wheel to create ignition. Like the matchlock, wheellocks were smoothbore and muzzle-loaded. Muskets and pistols were made with the wheellock. Developed in the 17th century, the flintlock used a flint strike to ignite the gunpowder and fire the weapon. Flintlocks were used for a variety of firearms, ranging from pistols to muskets and rifles. Their barrels could be smoothbore or rifled, and they were muzzle-loaded or breech-loaded. The percussion cap was introduced in the early 1800s and eventually replaced the flintlock. The percussion cap lock was very similar to the flintlock, and many flintlocks were converted into percussion caps. Percussion cap firearms were muzzle-loaded, but as with flintlocks, they could have a smooth or rifled barrel.
Agents of Deterioration and Preventative Conservation: Agents of deterioration are the forces that cause physical, chemical, and biological damage and lead to irreversible losses to museum collections. Preventative conservators work to maintain the health of museum collections by taking steps to prevent or reduce the effects of agents of deterioration on objects.

Agents of Deterioration and Preventative Conservation: Physical Forces Physical forces may damage objects directly or cause indirect damage through collision between the object and parts or other objects. Damage to objects is the result of five possible physical forces: impact, shock, vibration, abrasion, and pressure. For historic firearms the highest risk of damage is from impact. If not held properly, there is a risk that the firearm may be dropped or bumped into surrounding objects or fixtures. Impact damage can also happen if the firearm is not mounted correctly when it is on display and it falls. Damage from impact may cause cracking, breaking, and chipping of wood and other organic materials, and may dent, scratch, or break off metal components.

Agents of Deterioration and Preventative Conservation: To reduce the risk of damage by physical forces, historic firearms should be supported with both hands and held in front of the body, have proper display mounts, and be stored in appropriate cases padded with microfoam. The person handling the firearm should exercise caution and be aware of his or her surroundings when moving the object, to avoid bumping it into other firearms, objects, or fixtures.

Agents of Deterioration and Preventative Conservation: Incorrect Temperature Incorrect temperature can damage the wood components. A combination of long-term or excessive light exposure with high temperatures will cause the wood to crack and lose its shape. High temperatures will have a similar effect on rawhide and semi-tanned leather, ivory, bone, and antler. To slow the process of deterioration, historic firearms should be kept at temperatures below 72°F. Incorrect Relative Humidity Relative humidity affects both the wood and metal components of firearms. Humidity that is too low can cause the wood and other organic material to dry and crack. Humidity that is too high, above 65%, can cause the wood to swell, can corrode the metal, and is conducive to mold growth. The preferred range for relative humidity is between 45% and 50%; however, accounting for seasonal changes, an acceptable range is 35% to 60%. The preferred range of relative humidity for antler, bone, and ivory is 45% to 55%. Rawhide or semi-tanned leather, if used on the firearm, is also affected by humidity. Animal hide will absorb moisture, creating an ideal habitat for mold growth, and if the humidity is too low, the hide will split or potentially damage the firearm. As with antler, bone, and ivory, animal hide is most stable at a relative humidity of 45% to 55%. Water Water is damaging to most historic objects. For historic firearms, water will corrode metal components, swell wood, and encourage mold growth. Water will also cause antler, bone, and ivory to swell. Animal rawhide and semi-tanned leather will absorb moisture and will be at risk for mold growth; it could also become discolored. Historic firearms should be stored away or protected from potential water sources, such as exposed water pipes, or kept off the ground if there is a risk of flooding.
Agents of Deterioration and Preventative Conservation: Fire Historic firearms are at risk of damage if they are exposed to fire, which can result in the loss of the object. Wood and other organic materials will burn, and metal may melt or become disfigured. If the firearm has not been checked for residual gunpowder or lodged ammunition and is still loaded, the fire may ignite the gunpowder and result in an accidental discharge. Firearms should be kept away from combustible chemicals and objects.

Agents of Deterioration and Preventative Conservation: Light, UV, Infrared High light levels are harmful to the organic materials found on historic firearms. Long-term or high-intensity exposure to light will cause darkening or fading of wood, ivory, bone, and antler, depending on the material. Exposure to UV light will fade and bleach wood, and infrared light will dry and fade the wood components. To prevent damage from light exposure, UV filters can be used on lights and on window and display glass. Light sources for display cases should be positioned outside the case. After reaching the maximum exposure of 100 lux for eight hours per day, six days a week, for a year, the historic firearm should be removed from display and stored in a box or cabinet.

Agents of Deterioration and Preventative Conservation: Pests Pests are insects and animals that disfigure, damage, and destroy museum collections. The organic materials on historic firearms are susceptible to pest infestation. Woodboring beetles and rodents are a threat to the wood materials. Bone, antler, ivory, and rawhide or semi-tanned leather attract Dermestidae, such as the black carpet beetle and black larder beetle, which feed on protein. Firearm collections should be monitored for evidence of borings, larva castings, and gnaw marks. Regular cleaning of the firearm and display area will help to prevent a pest infestation.

Agents of Deterioration and Preventative Conservation: Pollutants Pollutants are environmental agents, chemical or physical, that can alter the aesthetic appearance of objects or damage them. Dust, composed of dirt, fibers, skin cells, and pollen, is a common pollutant that, if not removed, can lead to damage. On historic firearms, dust will absorb moisture, attract pests, and abrade the surface. Regular cleaning will reduce the build-up of dust and other particulate matter. Other pollutants will react chemically with the object, potentially causing permanent damage. Improper storage or display materials can react with the metal components and cause corrosion. One material that should not be stored with firearms is leather, which will corrode the metal as it off-gases. Storage and display materials can be tested using the Oddy test to determine whether they will react with the firearm's components.

Agents of Deterioration and Preventative Conservation: Disassociation Disassociation is the loss of data, of objects, or of the association between objects. It can occur when documentation is lost, or when objects or their parts are damaged, removed, or lost. With historic firearms, disassociation can happen when the weapon is disassembled for cleaning or the firing pins are removed to deter theft or accidental firing. Without proper labeling and tracking, the parts may be lost. Serial numbers and maker's marks can be removed if the firearm is cleaned with abrasive chemicals. To prevent disassociation, all components should be labeled, serial numbers noted, and, if a part is removed, its location recorded.
When cleaning the firearm, appropriate cleaning agents and waxes should be used to prevent the loss of historical data.

Agents of Deterioration and Preventative Conservation: Thieves and Vandals In the United States, historic firearms do not need to be registered and are a popular collector's item, making them potential targets for theft. To deter theft, museums may decide to remove the firing pin, thus making the gun inoperable and less desirable to thieves. When on display, firearms should be kept in cases made of acrylic, polycarbonate, or safety glass, and the cases should be locked or screwed shut. In storage areas, firearms should also be locked in a secure area or in metal drawer units, and there should be a minimal number of keys available to unlock display or storage cases. The keys should also not be easy to replicate. If the firearms are used for research, researchers should be monitored by staff and should not be allowed coats or backpacks in the research area, because some historic firearms are quite small and could easily fit in a coat pocket.

Handling: Proper handling is important to preventing damage to historic firearms. Gloves, either nitrile or white cotton, should be worn every time the firearm is handled, as the natural oils, salts, and acids from skin can cause corrosion on the metal surfaces. Long arms should not be carried by the wrist of the stock; they should instead be kept in front of the body and supported by both hands. As a safety precaution, firearms should be checked to make sure they are not loaded. For breech-loading firearms, after opening the action, a light should be shined down the barrel, or a small mirror used to look at the breech, to see if the barrel is blocked. A conservator should be contacted if the barrel is blocked and the cartridge cannot be easily removed. To check muzzle-loading firearms, a cleaning rod is inserted down the barrel and marked; the rod is then removed and aligned with the muzzle. If the difference between where the rod is marked and the touch hole is greater than 1.5 inches, the barrel is most likely loaded, and a conservator should be consulted (a minimal sketch of this arithmetic appears at the end of this article).

Storage: Historic firearms should be stored away from pollutants such as wool or silk, which contain sulfur, and leather, each of which will corrode metal as it off-gases. Small firearms and long arms can be stored in acid-free boxes and should be supported or cushioned with polyethylene foam or acid-free tissue. If, however, long arms cannot be stored in boxes, they should be stored vertically with barrels down and padded with polyethylene foam. The firearms should be stored with the locks upright.

Environment: The environment in which historic objects are held greatly impacts their overall long-term health. For historic firearms it is important to recognize that, because of their composite nature, each material will respond differently to environmental stresses, which can in turn affect adjacent surfaces. Generally, the acceptable environmental controls for historic firearms are a relative humidity between 35% and 50% and a temperature below 72°F. Although their tolerances may vary, UV and infrared light will damage the materials of historic firearms, so light should be monitored.

Treatment: The conservation of historic firearms requires knowledge of the care and treatment of organic and inorganic materials. Treatment can range from simple cleaning to conservation-restoration and return to firing capabilities.
Prior to treatment, three factors should be considered: whether the source of the damage has been eliminated, the extent of the damage, and the long-term effects of the treatment. The general guidelines for conservation treatment of any object are that the treatment should be reversible, that treatments and materials should be chemically and physically compatible, and that the treatment must use stable chemicals that will not off-gas or react with the object, creating further deterioration. Other than general cleaning as part of a museum's housekeeping routine, treatment should be performed by a conservator.

Treatment: Cleaning Cleaning a historic firearm begins with an examination of its condition to ensure it can withstand the cleaning procedure. A partial disassembly may be necessary; however, it is not recommended for matchlocks and wheellocks, as their screws and pins are not easily removed. Loose dust and dirt can be removed with a soft brush, such as a hake brush, with the debris directed toward a vacuum nozzle. Commercial cleaning products, varnishes, and brass and silver polishes should be avoided, as these items may damage the firearm. Sometimes conservation-grade waxes, like Renaissance wax, are used to improve the firearm's appearance and guard against dust.

Treatment: Conservation-Restoration Conservation-restoration work on historic firearms is a series of procedures designed to stabilize, repair or restore parts, and stop deterioration. Stabilizing a firearm means establishing the ideal environmental conditions, removing corrosion, replacing missing components, and repairing broken parts. For example, if the wrist of a long arm is cracked, a conservator would repair the breakage, which would stabilize the firearm, enabling its use for display or research. The goal of restoration work may be returning the firearm to its original form, returning all the mechanical components to working order, or returning it to firing capabilities. Conservation-restoration of historic firearms requires the skilled work of a conservator.
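The muzzle-loader check described under Handling reduces to a single comparison. The helper below is a hypothetical illustration of that arithmetic: the function name and inputs are ours, the 1.5 inch threshold comes from the text above, and the result is a prompt to consult a conservator, not a substitute for one.

```python
def muzzleloader_likely_loaded(rod_depth_in: float,
                               muzzle_to_touchhole_in: float,
                               threshold_in: float = 1.5) -> bool:
    """Apply the marked-rod check for a muzzle-loading firearm.

    rod_depth_in: distance the cleaning rod travels down the bore before
        stopping, measured from the muzzle to the mark on the rod.
    muzzle_to_touchhole_in: distance along the outside of the barrel from
        the muzzle to the touch hole.
    If the rod stops more than threshold_in inches short of the touch
    hole, something (most likely a charge) is occupying the breech.
    """
    shortfall = muzzle_to_touchhole_in - rod_depth_in
    return shortfall > threshold_in

# Example: the rod stops 2.4 inches short of the touch hole.
print(muzzleloader_likely_loaded(rod_depth_in=33.6,
                                 muzzle_to_touchhole_in=36.0))  # True
```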
**Butanol fuel** Butanol fuel: Butanol may be used as a fuel in an internal combustion engine. It is more similar to gasoline than ethanol is. A four-carbon alcohol, butanol is a drop-in fuel and thus works in vehicles designed for use with gasoline without modification. Both n-butanol and isobutanol have been studied as possible fuels. Both can be produced from biomass (as "biobutanol") as well as from fossil fuels (as "petrobutanol"). The chemical properties depend on the isomer (n-butanol or isobutanol), not on the production method. Although intriguing in many ways, butanol fuel is rarely economically competitive.

Genetically modified organisms: Obtaining higher yields of butanol involves manipulation of metabolic networks using metabolic engineering and genetic engineering. While significant progress has been made, fermentation pathways for producing butanol remain inefficient: titers and yields are low, and separation is very expensive. As such, microbial production of butanol is not cost-competitive with petroleum-derived butanol. Although unproven commercially, combining electrochemical and microbial production methods may offer a way to produce butanol from sustainable sources.

Genetically modified organisms: Escherichia coli Escherichia coli, or E. coli, is a Gram-negative, rod-shaped bacterium. E. coli is the microorganism most likely to move on to commercial production of isobutanol. In its engineered form, E. coli produces the highest yields of isobutanol of any microorganism. Methods such as elementary mode analysis have been used to improve the metabolic efficiency of E. coli so that larger quantities of isobutanol may be produced. E. coli is an ideal isobutanol biosynthesizer for several reasons: E. coli is an organism for which several tools of genetic manipulation exist and for which an extensive body of scientific literature exists. This wealth of knowledge allows E. coli to be easily modified by scientists.

Genetically modified organisms: E. coli has the capacity to use lignocellulose (waste plant matter left over from agriculture) in the synthesis of isobutanol. The use of lignocellulose keeps E. coli from consuming plant matter meant for human consumption and prevents any food-fuel price relationship which would occur from the biosynthesis of isobutanol by E. coli.

Genetically modified organisms: Genetic modification has been used to broaden the scope of the lignocellulose that can be used by E. coli. This has made E. coli a useful and versatile isobutanol biosynthesizer. The primary drawback of E. coli is that it is susceptible to bacteriophages when being grown, a susceptibility that could potentially shut down entire bioreactors. Furthermore, the native reaction pathway for isobutanol in E. coli functions optimally only at a limited concentration of isobutanol in the cell. To minimize the sensitivity of E. coli at high concentrations, mutants of the enzymes involved in synthesis can be generated by random mutagenesis. By chance, some mutants may prove to be more tolerant of isobutanol, which will enhance the overall yield of the synthesis.

Genetically modified organisms: Clostridia n-Butanol can be produced by fermentation of biomass via the A.B.E. process using Clostridium acetobutylicum or Clostridium beijerinckii. C. acetobutylicum was once used for the production of acetone from starch, with butanol as a by-product of the fermentation (twice as much butanol as acetone was produced).
The feedstocks for biobutanol are the same as those for ethanol: energy crops such as sugar beets, sugar cane, corn grain, wheat, and cassava; prospective non-food energy crops such as switchgrass and even guayule in North America; and agricultural byproducts such as bagasse, straw, and corn stalks. According to DuPont, existing bioethanol plants can cost-effectively be retrofitted to biobutanol production. Additionally, butanol production from biomass and agricultural byproducts could be more efficient (i.e. deliver more unit engine motive power per unit of solar energy consumed) than ethanol or methanol production. A strain of Clostridium can convert nearly any form of cellulose into butanol, even in the presence of oxygen. A strain of Clostridium cellulolyticum, a native cellulose-degrading microbe, affords isobutanol directly from cellulose. A combination of succinate and ethanol can be fermented to produce butyrate (a precursor to butanol fuel) by utilizing the metabolic pathways present in Clostridium kluyveri. Succinate is an intermediate of the TCA cycle, which metabolizes glucose. Anaerobic bacteria such as Clostridium acetobutylicum and Clostridium saccharobutylicum also contain these pathways. Succinate is first activated and then reduced by a two-step reaction to give 4-hydroxybutyrate, which is then metabolized further to crotonyl-coenzyme A (CoA). Crotonyl-CoA is then converted to butyrate. The genes corresponding to these butanol production pathways from Clostridium were cloned into E. coli.

Genetically modified organisms: Cyanobacteria Cyanobacteria are a phylum of photosynthetic bacteria. They are suited to isobutanol biosynthesis when genetically engineered to produce isobutanol and its corresponding aldehydes. Isobutanol-producing species of cyanobacteria offer several advantages as biofuel synthesizers: Cyanobacteria grow faster than plants and also absorb sunlight more efficiently than plants. This means they can be replenished at a faster rate than the plant matter used for other biofuel biosynthesizers.

Genetically modified organisms: Cyanobacteria can be grown on non-arable land (land not used for farming). This prevents competition between food sources and fuel sources.

Genetically modified organisms: The supplements necessary for the growth of cyanobacteria are CO2, H2O, and sunlight. This presents two advantages: Because the CO2 is derived from the atmosphere, cyanobacteria do not need plant matter to synthesize isobutanol (in other organisms that synthesize isobutanol, plant matter is the source of the carbon needed to assemble isobutanol). Since plant matter is not used by this method of isobutanol production, the need to source plant matter from food sources, and thereby create a food-fuel price relationship, is avoided.

Genetically modified organisms: Because CO2 is absorbed from the atmosphere by cyanobacteria, there is the possibility of bioremediation (in the form of cyanobacteria removing excess CO2 from the atmosphere). The primary drawbacks of cyanobacteria are: They are sensitive to environmental conditions when being grown. Cyanobacteria suffer greatly from sunlight of inappropriate wavelength and intensity, CO2 of inappropriate concentration, or H2O of inappropriate salinity, though a wealth of cyanobacteria are able to grow in brackish and marine waters. These factors are generally hard to control and present a major obstacle to cyanobacterial production of isobutanol.
Genetically modified organisms: Cyanobacteria bioreactors require high energy to operate. Cultures require constant mixing, and the harvesting of biosynthetic products is energy-intensive. This reduces the efficiency of isobutanol production via cyanobacteria. Cyanobacteria can be re-engineered to increase their butanol production, showing the importance of ATP and cofactor driving forces as a design principle in pathway engineering. Many organisms have the capacity to produce butanol utilizing an acetyl-CoA dependent pathway. The main problem with this pathway is the first reaction, the condensation of two acetyl-CoA molecules to acetoacetyl-CoA. This reaction is thermodynamically unfavorable due to its positive Gibbs free energy (ΔG = +6.8 kcal/mol), meaning the equilibrium favors the acetyl-CoA reactants unless the step is driven forward, for instance by such ATP and cofactor driving forces or by continual removal of the product.

Genetically modified organisms: Bacillus subtilis Bacillus subtilis is a Gram-positive, rod-shaped bacterium. Bacillus subtilis offers many of the same advantages and disadvantages as E. coli, but it is less prominently used and does not produce isobutanol in quantities as large as E. coli does. Similar to E. coli, B. subtilis is capable of producing isobutanol from lignocellulose, and is easily manipulated by common genetic techniques. Elementary mode analysis has also been used to improve the isobutanol-synthesis metabolic pathway used by B. subtilis, leading to higher yields of isobutanol.

Genetically modified organisms: Saccharomyces cerevisiae Saccharomyces cerevisiae, or S. cerevisiae, is a species of yeast. It naturally produces isobutanol in small quantities via its valine biosynthetic pathway. S. cerevisiae is an ideal candidate for isobutanol biofuel production for several reasons: S. cerevisiae can be grown at low pH levels, helping prevent contamination during growth in industrial bioreactors. S. cerevisiae cannot be affected by bacteriophages because it is a eukaryote.

Genetically modified organisms: Extensive scientific knowledge about S. cerevisiae and its biology already exists. Overexpression of the enzymes in the valine biosynthetic pathway of S. cerevisiae has been used to improve isobutanol yields. S. cerevisiae, however, has proved difficult to work with because of its inherent biology: As a eukaryote, S. cerevisiae is genetically more complex than E. coli or B. subtilis, and is harder to genetically manipulate as a result.

Genetically modified organisms: S. cerevisiae has the natural ability to produce ethanol. This natural ability can "overpower" and consequently inhibit isobutanol production by S. cerevisiae. S. cerevisiae also cannot use five-carbon sugars to produce isobutanol. The inability to use five-carbon sugars restricts S. cerevisiae from using lignocellulose, and means S. cerevisiae must use plant matter intended for human consumption to produce isobutanol. This results in an unfavorable food/fuel price relationship when isobutanol is produced by S. cerevisiae.

Ralstonia eutropha Cupriavidus necator (=Ralstonia eutropha) is a Gram-negative soil bacterium of the class Betaproteobacteria. It is capable of indirectly converting electrical energy into isobutanol. This conversion is completed in several steps: Anodes are placed in a mixture of H2O and CO2. An electric current is run through the anodes, and through an electrochemical process H2O and CO2 are combined to synthesize formic acid. A culture of C. necator (a strain tolerant of the applied current) is kept within the H2O and CO2 mixture. The culture of C. necator then converts formic acid from the mixture into isobutanol.
The biosynthesized isobutanol is then separated from the mixture and can be used as a biofuel.

Feedstocks: The high cost of raw material is considered one of the main obstacles to commercial production of butanol. Using inexpensive and abundant feedstocks, e.g., corn stover, could enhance the economic viability of the process. Metabolic engineering can be used to allow an organism to use a cheaper substrate such as glycerol instead of glucose. Because fermentation processes require glucose derived from foods, butanol production can negatively impact food supply (see food vs fuel debate). Glycerol is a good alternative source for butanol production. While glucose sources are valuable and limited, glycerol is abundant and has a low market price because it is a waste product of biodiesel production. Butanol production from glycerol is economically viable using metabolic pathways that exist in the bacterium Clostridium pasteurianum.

Feedstocks: Improving efficiency A process called cloud point separation could allow the recovery of butanol with high efficiency.

Producers and distribution: DuPont and BP plan to make biobutanol the first product of their joint effort to develop, produce, and market next-generation biofuels. In Europe the Swiss company Butalco is developing genetically modified yeasts for the production of biobutanol from cellulosic materials. Gourmet Butanol, a United States-based company, is developing a process that utilizes fungi to convert organic waste into biobutanol. Celtic Renewables makes biobutanol from whisky-production waste and low-grade potatoes.

Properties of common fuels: Isobutanol Isobutanol is a second-generation biofuel with several qualities that resolve issues presented by ethanol. Isobutanol's properties make it an attractive biofuel: it has a relatively high energy density, 98% of that of gasoline; it does not readily absorb water from air, preventing the corrosion of engines and pipelines; it can be mixed at any proportion with gasoline, meaning the fuel can "drop into" the existing petroleum infrastructure as a replacement fuel or major additive; it can be produced from plant matter not connected to food supplies, preventing a fuel-price/food-price relationship; and, assuming it is produced from residual lignocellulosic feedstocks, blending isobutanol with gasoline may reduce GHG emissions considerably.

Properties of common fuels: n-Butanol Butanol better tolerates water contamination and is less corrosive than ethanol, and is more suitable for distribution through existing pipelines for gasoline. In blends with diesel or gasoline, butanol is less likely than ethanol to separate from the fuel if the fuel is contaminated with water. There is also a vapor pressure co-blend synergy with butanol and gasoline containing ethanol, which facilitates ethanol blending. This facilitates storage and distribution of blended fuels.

Properties of common fuels: The octane rating of n-butanol is similar to that of gasoline but lower than that of ethanol and methanol. n-Butanol has a RON (Research Octane Number) of 96 and a MON (Motor Octane Number) of 78 (with a resulting "(R+M)/2 pump octane number" of 87, as used in North America), while t-butanol has octane ratings of 105 RON and 89 MON. t-Butanol is used as an additive in gasoline but cannot be used as a fuel in its pure form because its relatively high melting point of 25.5 °C (79 °F) causes it to gel and solidify near room temperature.
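As a quick check, the North American pump octane figures quoted above are simply the average of RON and MON:

$$\text{AKI} = \frac{R+M}{2}; \qquad n\text{-butanol: } \frac{96+78}{2} = 87, \qquad t\text{-butanol: } \frac{105+89}{2} = 97.$$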
On the other hand, isobutanol has a lower melting point than n-butanol and favorable RON of 113 and MON of 94, and is thus much better suited to high fraction gasoline blends, blends with n-butanol, or as a standalone fuel.A fuel with a higher octane rating is less prone to knocking (extremely rapid and spontaneous combustion by compression) and the control system of any modern car engine can take advantage of this by adjusting the ignition timing. This will improve energy efficiency, leading to a better fuel economy than the comparisons of energy content different fuels indicate. By increasing the compression ratio, further gains in fuel economy, power and torque can be achieved. Conversely, a fuel with lower octane rating is more prone to knocking and will lower efficiency. Knocking can also cause engine damage. Engines designed to run on 87 octane will not have any additional power/fuel economy from being operated with higher octane fuel. Properties of common fuels: Butanol characteristics: air-fuel ratio, specific energy, viscosity, specific heat Alcohol fuels, including butanol and ethanol, are partially oxidized and therefore need to run at richer mixtures than gasoline. Standard gasoline engines in cars can adjust the air-fuel ratio to accommodate variations in the fuel, but only within certain limits depending on model. If the limit is exceeded by running the engine on pure ethanol or a gasoline blend with a high percentage of ethanol, the engine will run lean, something which can critically damage components. Compared to ethanol, butanol can be mixed in higher ratios with gasoline for use in existing cars without the need for retrofit as the air-fuel ratio and energy content are closer to that of gasoline.Alcohol fuels have less energy per unit weight and unit volume than gasoline. To make it possible to compare the net energy released per cycle a measure called the fuels specific energy is sometimes used. It is defined as the energy released per air fuel ratio. The net energy released per cycle is higher for butanol than ethanol or methanol and about 10% higher than for gasoline. Properties of common fuels: The viscosity of alcohols increase with longer carbon chains. For this reason, butanol is used as an alternative to shorter alcohols when a more viscous solvent is desired. The kinematic viscosity of butanol is several times higher than that of gasoline and about as viscous as high quality diesel fuel.The fuel in an engine has to be vaporized before it will burn. Insufficient vaporization is a known problem with alcohol fuels during cold starts in cold weather. As the heat of vaporization of butanol is less than half of that of ethanol, an engine running on butanol should be easier to start in cold weather than one running on ethanol or methanol. Properties of common fuels: Butanol fuel mixtures Standards for the blending of ethanol and methanol in gasoline exist in many countries, including the EU, the US, and Brazil. Approximate equivalent butanol blends can be calculated from the relations between the stoichiometric fuel-air ratio of butanol, ethanol and gasoline. Common ethanol fuel mixtures for fuel sold as gasoline currently range from 5% to 10%. 
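The stoichiometric air-fuel ratios behind that conversion can be worked out from the combustion equation; a sketch for n-butanol, assuming the usual 23.2% oxygen mass fraction of air:

$$\mathrm{C_4H_9OH + 6\,O_2 \longrightarrow 4\,CO_2 + 5\,H_2O}$$
$$\text{AFR}_{\text{stoich}} = \frac{6 \times 32.00 / 0.232}{74.12} \approx 11.2$$

The same calculation gives about 9.0 for ethanol and roughly 14.7 for typical gasoline, which is why butanol sits closer to gasoline and tolerates higher blend fractions without pushing an unmodified engine lean.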
It is estimated that around 9.5 gigaliters (Gl) of gasoline could be saved and about 64.6 Gl of a 16% butanol-gasoline blend (Bu16) could potentially be produced from corn residues in the US, equivalent to 11.8% of total domestic gasoline consumption. Consumer acceptance may be limited by the potentially offensive banana-like smell of n-butanol. Plans are underway to market a fuel that is 85% ethanol and 15% butanol (E85B), so existing E85 internal combustion engines can run on a 100% renewable fuel that could be made without using any fossil fuels. Because its longer hydrocarbon chain causes it to be fairly non-polar, butanol is more similar to gasoline than it is to ethanol. Butanol has been demonstrated to work in vehicles designed for use with gasoline without modification.

Properties of common fuels: Butanol in vehicles Currently no production vehicle is known to be approved by its manufacturer for use with 100% butanol. As of early 2009, only a few vehicles were approved even for using E85 fuel (i.e. 85% ethanol + 15% gasoline) in the USA. However, in Brazil all vehicle manufacturers (Fiat, Ford, VW, GM, Toyota, Honda, Peugeot, Citroen and others) produce "flex-fuel" vehicles that can run on 100% gasoline or any mix of ethanol and gasoline up to 85% ethanol (E85). These flex-fuel cars represented 90% of personal-vehicle sales in Brazil in 2009. BP and DuPont, engaged in a joint venture to produce and promote butanol fuel, claim that "biobutanol can be blended up to 10%v/v in European gasoline and 11.5%v/v in US gasoline". In the 2009 Petit Le Mans race, the No. 16 Lola B09/86 - Mazda MZR-R of Dyson Racing ran on a mixture of biobutanol and ethanol developed by team technology partner BP.
**API Sanity Checker** API Sanity Checker: API Sanity Checker is an automatic unit test generator for C/C++ shared libraries. The main feature of the tool is its ability to generate, fully automatically, reasonable input arguments (in most, though unfortunately not all, cases) for every API function straight from the library header files. The tool can be used as a smoke test or fuzzer for a library API to catch serious problems such as crashes or hangs.
**Solute carrier family 16 member 12** Solute carrier family 16 member 12: Solute carrier family 16 member 12 is a protein that in humans is encoded by the SLC16A12 gene. Function: This gene encodes a transmembrane transporter that likely plays a role in monocarboxylic acid transport. A mutation in this gene has been associated with juvenile cataracts with microcornea and renal glucosuria. [provided by RefSeq, Mar 2010].
**Deadweight tester** Deadweight tester: A dead weight tester apparatus uses known, traceable weights to apply pressure to a fluid for checking the accuracy of readings from a pressure gauge. A dead weight tester (DWT) is a calibration standard method that uses a piston-cylinder on which a load is placed to make an equilibrium with an applied pressure underneath the piston. Deadweight testers are so-called primary standards, which means that the pressure measured by a deadweight tester is defined through other base quantities: length, mass and time.

Deadweight tester: Typically deadweight testers are used in calibration laboratories to calibrate pressure transfer standards such as electronic pressure measuring devices.

Formula: The formula on which the design of a DWT is based is, in its basic form: p = (m · g) / Ae, where p is the pressure, m is the combined mass of the piston and the applied weights, g is the local acceleration due to gravity, and Ae is the effective area of the piston-cylinder unit. To be able to do accurate measurements, this formula has to be refined with corrections, for example for air buoyancy of the weights, thermal expansion of the piston-cylinder, and the fluid head between the reference level and the device under test. Depending on the reference used, the result is an absolute pressure (vacuum reference) or a gauge pressure (atmospheric reference).

Piston cylinder design: In general there are three different kinds of DWT, divided by the medium which is measured and the lubricant which is used for the measuring element: gas-operated, gas-lubricated PCUs; gas-operated, oil-lubricated PCUs; and oil-operated, oil-lubricated PCUs. All three systems have their own specific operational demands. Some points of attention:

Gas-gas: Make sure that the PCU is clean. This is a very important issue, as the PCU's operation is sensitive to contamination. Also, when connecting a DUT (device under test), make sure that the DUT does not introduce contamination into the measuring system.

Piston cylinder design: Gas-oil: Lubricant of the PCU 'leaks' into the gas circuit of the DWT. For this reason a small reservoir is incorporated in the system. Before commencing a calibration it is good practice to purge this reservoir. If the reservoir is full, oil will be introduced into critical tubing and will cause an uncontrollable oil head.

Oil-oil: When connecting an oil-filled DUT to an oil DWT, make sure that the DUT oil will not contaminate the DWT oil. If in doubt, incorporate a small volume between DUT and DWT and manipulate the pressure in such a manner that the oil flow is directed toward the DUT. For high-accuracy measurement, friction can be lowered by rotation of the piston.
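To make the force balance concrete, here is a minimal Python sketch of the basic calculation with the common air-buoyancy correction applied to the weights; the default values are illustrative, not calibration data.

```python
# Deadweight tester: p = m * g * (1 - rho_air / rho_mass) / A_e
# The (1 - rho_air / rho_mass) factor corrects the weight force for air buoyancy.

def dwt_pressure(mass_kg, g_local=9.80665, area_m2=1.0e-4,
                 rho_air=1.2, rho_mass=7850.0):
    """Return the pressure (Pa) generated by a load on a piston-cylinder unit.

    mass_kg  -- combined mass of piston and weights (kg)
    g_local  -- local gravitational acceleration (m/s^2)
    area_m2  -- effective area of the piston-cylinder unit (m^2)
    rho_air  -- ambient air density (kg/m^3), for the buoyancy correction
    rho_mass -- density of the weight material (kg/m^3), e.g. steel
    """
    force_n = mass_kg * g_local * (1.0 - rho_air / rho_mass)
    return force_n / area_m2

# A 10 kg load on a 1 cm^2 effective area gives roughly 0.98 MPa:
print(f"{dwt_pressure(10.0):.0f} Pa")
```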
**2,3-dihydroxybenzoate 3,4-dioxygenase** 2,3-dihydroxybenzoate 3,4-dioxygenase: In enzymology, a 2,3-dihydroxybenzoate 3,4-dioxygenase (EC 1.13.11.14) is an enzyme that catalyzes the chemical reaction: 2,3-dihydroxybenzoate + O2 ⇌ 3-carboxy-2-hydroxymuconate semialdehyde. Thus, the two substrates of this enzyme are 2,3-dihydroxybenzoate and O2, whereas its product is 3-carboxy-2-hydroxymuconate semialdehyde.

2,3-dihydroxybenzoate 3,4-dioxygenase: This enzyme belongs to the family of oxidoreductases, specifically those acting on single donors with O2 as oxidant and incorporation of two atoms of oxygen into the substrate (oxygenases). The oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is 2,3-dihydroxybenzoate:oxygen 3,4-oxidoreductase (decyclizing). Other names in common use include o-pyrocatechuate oxygenase, 2,3-dihydroxybenzoate 1,2-dioxygenase, 2,3-dihydroxybenzoic oxygenase, and 2,3-dihydroxybenzoate oxygenase. This enzyme participates in benzoate degradation via hydroxylation.
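As an arithmetic check on the incorporation of both oxygen atoms (not a statement from the EC entry itself), the reaction balances when written with the neutral acids:

$$\mathrm{C_7H_6O_4 + O_2 \longrightarrow C_7H_6O_6}$$

Carbon (7), hydrogen (6) and oxygen (4 + 2 = 6) all balance, consistent with a ring-cleaving dioxygenase that inserts both atoms of O2 into the product.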
**Precomposition** Precomposition: In music, precompositional decisions are those decisions which a composer settles upon before or while beginning to create a composition. These limits may be given to the composer, such as the length or style needed, or may be entirely decided by the composer. Precompositional decisions may include which key, scale, musical form, style, genre, or idiom to write in, and whether to use techniques such as the twelve-tone technique or serialism, or not to (consciously) use a system at all. Other examples may include isorhythm, ostinato, passacaglia, chaconne, rhythms, or chord progressions.

Precomposition: Precompositional decisions do not necessarily, and almost never do, preclude compositional decisions; indeed, they may be what first allows the compositional choices to be considered. One might say that, "thus, while it liberates imagination as to what the world may be, it refuses to legislate as to what the world is" (Bertrand Russell, Our Knowledge of the External World). Thus precompositional decisions do not necessarily ease the compositional choices.

Precomposition: On the other hand, the concept of precompositional decisions is somewhat unclear, as it is often impossible to determine whether a given decision occurred before or during a composition.
**Hexapropymate** Hexapropymate: Hexapropymate is a hypnotic/sedative. It has effects similar to those of barbiturates and was used in the 1970s-1980s in the treatment of insomnia before being replaced with newer drugs with improved safety profiles.
**Smoke screen** Smoke screen: A smoke screen is smoke released to mask the movement or location of military units such as infantry, tanks, aircraft, or ships. Smoke screens are commonly deployed either by a canister (such as a grenade) or generated by a vehicle (such as a tank or a warship). Smoke screen: Whereas smoke screens were originally used to hide movement from enemies' line of sight, modern technology means that they are now also available in new forms; they can screen in the infrared as well as visible spectrum of light to prevent detection by infrared sensors or viewers, and they are also available for vehicles in a super-dense form used to block laser beams of enemy laser designators or rangefinders. Technology: Smoke grenades These are canister-type grenades used as a ground-to-ground or ground-to-air signalling device. The body consists of a steel sheet metal cylinder with a few emission holes on the top and/or bottom to allow smoke release when the smoke composition inside the grenade is ignited. In those that produce colored smoke, the filler consists of 250 to 350 grams of colored (red, green, yellow or violet) smoke mixture (mostly potassium chlorate, sodium bicarbonate, lactose and a dye). In those that produce screening smoke, the filler usually consists of HC smoke mixture (hexachloroethane/zinc) or TA smoke mixture (terephthalic acid). Another type of smoke grenade is filled with white phosphorus (WP), which is spread by explosive action. The phosphorus catches fire in the presence of air, and burns with a brilliant yellow flame, while producing copious amounts of white smoke (phosphorus pentoxide). WP grenades double as incendiary grenades. Technology: Smoke shell Artillery and mortars can also fire smoke generating munitions, and are the main means of generating tactical smokescreens on land. As with grenades, artillery shells are available as both emission type smoke shell, and bursting smoke shell. Mortars nearly always use bursting smoke rounds because of the smaller size of mortar bombs and the greater efficiency of bursting rounds. Technology: Smoke generators Very large or sustained smoke screens are produced by a smoke generator. This machine heats a volatile material (typically oil or an oil based mixture) to evaporate it, then mixes the vapor with cool external air at a controlled rate so it condenses to a mist with a controlled droplet size. Cruder designs simply boiled waste oil over a heater, while more sophisticated ones sprayed a specially formulated oily composition ("fog oil") through nozzles onto a heated plate. Choice of a suitable oil, and careful control of cooling rate, can produce droplet sizes close to the ideal size for Mie scattering of visible light. This produces a very effective obscuration per weight of material used. This screen can then be sustained as long as the generator is supplied with oil, and—especially if a number of generators are used—the screen can build up to a considerable size. One 50 gallon drum of fog oil can obscure 60 miles (97 km) of land in 15 minutes. Technology: Whilst producing very large amounts of smoke relatively cheaply, these generators have a number of disadvantages. They are much slower to respond than pyrotechnic sources, and require a valuable piece of equipment to be sited at the point of emission of the smoke. They are also relatively heavy and not readily portable, which is a significant problem if the wind shifts. 
To overcome this latter problem, they may be used in fixed posts widely dispersed over the battlefield, or else mounted on specially adapted vehicles. An example of the latter is the M56 Coyote generator.

Technology: Many armoured fighting vehicles can create smoke screens in a similar way, generally by injecting diesel fuel onto the hot exhaust.

Technology: Naval methods Warships have sometimes used a simple variation of the smoke generator, by injecting fuel oil directly into the funnel, where it evaporates into a white cloud. An even simpler method that was used in the days of steam-propelled warships was to restrict the supply of air to the boiler. This resulted in incomplete combustion of the coal or oil, which produced a thick black smoke. Because the smoke was black, it absorbed heat from the sun and tended to rise above the water. Therefore, navies turned to various chemicals, such as titanium tetrachloride, that produce a white, low-lying cloud.

Infrared smokes: The proliferation of thermal imaging FLIR systems on the battlefield necessitates the use of obscurant smokes that are effectively opaque in the infrared part of the electromagnetic spectrum. This kind of obscurant smoke is sometimes referred to as "Visual and Infrared Screening Smoke" (VIRSS). To achieve this, the particle size and composition of the smokes have to be adjusted. One of the approaches is using an aerosol of burning red phosphorus particles and aluminium-coated glass fibers; the infrared emissions of such smoke curtains hide the weaker emissions of colder objects behind them, but the effect is only short-lived. Carbon (most often graphite) particles present in the smokes can also serve to absorb the beams of laser designators. Yet another possibility is a water fog sprayed around the vehicle; the large droplets absorb in the infrared band and additionally serve as a countermeasure against radars in the 94 GHz band. Other materials used as visible/infrared obscurants are micro-pulverized flakes of brass or graphite, particles of titanium dioxide, or terephthalic acid.

Infrared smokes: Older systems for production of infrared smoke work as generators of dust aerosols with controlled particle size. Most contemporary vehicle-mounted systems use this approach. However, the aerosol stays airborne only for a short time.

Infrared smokes: The brass particles used in some infrared smoke grenades are typically composed of 70% copper and 30% zinc. They are shaped as irregular flakes with a diameter of about 1.7 µm and a thickness of 80–320 nm. Some experimental obscurants work in both the infrared and millimeter-wave regions. They include carbon fibers, metal-coated fibers or glass particles, metal microwires, and particles of iron and of suitable polymers.

Chemicals used: Zinc chloride Zinc chloride smoke is grey-white and consists of tiny particles of zinc chloride. The most common mixture for generating these is a zinc chloride smoke mixture (HC), consisting of hexachloroethane, grained aluminium and zinc oxide. The smoke consists of zinc chloride, zinc oxychlorides, and hydrochloric acid, which absorb the moisture in the air. The smoke also contains traces of organic chlorinated compounds, phosgene, carbon monoxide, and chlorine.

Chemicals used: Its toxicity is caused mainly by the content of strongly acidic hydrochloric acid, but also by the thermal effects of the reaction of zinc chloride with water. These effects cause lesions of the mucous membranes of the upper airways.
Damage to the lower airways can manifest itself later as well, due to fine particles of zinc chloride and traces of phosgene. In high concentrations the smoke can be very dangerous when inhaled. Symptoms include dyspnea, retrosternal pain, hoarseness, stridor, lachrymation, cough, expectoration, and in some cases haemoptysis. Delayed pulmonary edema, cyanosis or bronchopneumonia may develop. The smoke and the spent canisters contain suspected carcinogens.

Chemicals used: The prognosis for casualties depends on the degree of the pulmonary damage. All exposed individuals should be kept under observation for 8 hours. Most affected individuals recover within several days, with some symptoms persisting for up to 1–2 weeks. Severe cases can suffer from reduced pulmonary function for some months, the worst cases developing marked dyspnoea and cyanosis leading to death.

Chemicals used: Respirators are required for people coming into contact with the zinc chloride smoke.

Chlorosulfuric acid Chlorosulfuric acid (CSA) is a heavy, strongly acidic liquid. When dispensed in air, it readily absorbs moisture and forms a dense white fog of hydrochloric acid and sulfuric acid. In moderate concentrations it is highly irritating to the eyes, nose, and skin. When chlorosulfuric acid comes in contact with water, a strong exothermic reaction scatters the corrosive mixture in all directions. CSA is highly corrosive, so careful handling is required. Low concentrations cause prickling sensations on the skin, but high concentrations or prolonged exposure to field concentrations can cause severe irritation of the eyes, skin, and respiratory tract, and mild cough and moderate contact dermatitis can result. Liquid CSA causes acid burns of the skin, and exposure of the eyes can lead to severe eye damage. Affected body parts should be washed with water and then with sodium bicarbonate solution. The burns are then treated like thermal burns. The skin burns heal readily, while cornea burns can result in residual scarring. Respirators are required for any concentrations sufficient to cause coughing, irritation of the eyes or prickling of the skin.

Titanium tetrachloride Titanium tetrachloride (FM) is a colorless, non-flammable, corrosive liquid. In contact with damp air it hydrolyzes readily, resulting in a dense white smoke consisting of droplets of hydrochloric acid and particles of titanium oxychloride. The titanium tetrachloride smoke is irritating and unpleasant to breathe. It is dispensed from aircraft to create vertical smoke curtains, and during World War II it was a favorite smoke-generation agent on warships. Goggles and a respirator should be worn when in contact with the smoke; full protective clothing should be worn when handling liquid FM. In direct contact with skin or eyes, liquid FM causes acid burns.

Chemicals used: Phosphorus Red phosphorus and white phosphorus (WP) are, respectively, red and waxy yellow-to-white substances. White phosphorus is pyrophoric: it can be handled safely under water, but in contact with air it spontaneously ignites. It is used as an incendiary. Both types of phosphorus are used for smoke generation, mostly in artillery shells, bombs, and grenades. White phosphorus smoke is typically very hot and may cause burns on contact. Red phosphorus is less reactive, does not ignite spontaneously, and its smoke does not cause thermal burns; for this reason it is safer to handle, but cannot be used as easily as an incendiary.
Chemicals used: An aerosol of burning phosphorus particles is an effective obscurant against thermal imaging systems. However, this effect is short-lived. After the phosphorus particles fully burn, the smoke reverts from emission to absorption. While very effective in the visible spectrum, cool phosphorus smoke has only low absorption and scattering at infrared wavelengths, so objects screened only by it may remain visible to thermal imagers or IR viewers unless additives covering this part of the spectrum are included.

Chemicals used: Dyes Various signalling purposes require the use of colored smoke. The smoke created is a fine mist of dye particles, generated by burning a mixture of one or more dyes with a low-temperature pyrotechnic composition, usually based on potassium chlorate and lactose (also known as milk sugar).

Chemicals used: A colored smoke screen is also possible by adding a colored dye into the fog oil mixture. A typical white smoke screen uses titanium dioxide (or another white pigment), but other colors are possible by replacing titanium dioxide with another pigment. When the hot fog oil condenses on contact with air, the pigment particles are suspended along with the oil vapor. Early smoke screen experiments attempted the use of colored pigments, but found that titanium dioxide was the most light-scattering particle known and therefore best for obscuring troops and naval vessels. Colored smoke became used primarily for signaling rather than obscuring. Today's military smoke grenades are formulated to be non-carcinogenic, unlike the 1950s-era AN-M8 model.

Chemicals used: Sulfonic acid The smoke generator on the Medium Mark B tank used sulfonic acid.

Tactics: History The first documented use of a smoke screen was circa 2000 B.C. in the wars of ancient India, which employed incendiary devices and toxic fumes that caused people to fall asleep. Smoke was later recorded by the Greek historian Thucydides, who described how smoke created by the burning of sulphur, wood and pitch was carried by the wind into Plataea (428 B.C.) and later at Delium (423 B.C.), where the defenders were driven from the city walls. In 1622, a smoke screen was used at the Battle of Macau by the Dutch: a barrel of damp gunpowder was fired into the wind so that the Dutch could land under the cover of smoke. Later, between 1790 and 1810, Thomas Cochrane, 10th Earl of Dundonald (1775-1860), a Scottish naval commander and officer in the Royal Navy who fought during the French Revolutionary and Napoleonic Wars, after learning of the methods used at Delium and Plataea, devised a smoke screen created through the burning of sulphur for use in warfare. Thomas Cochrane's grandson, Douglas Cochrane, 12th Earl of Dundonald, described in his autobiography how he spoke to Winston Churchill (who once galloped for him when he had a brigade at manœuvres in England) of the importance of using smoke screens on the battleground; smoke screens would in turn be used in both World War I and World War II.

Tactics: Land warfare Smoke screens are usually used by infantry to conceal their movement in areas of enemy fire. They can also be used by armoured fighting vehicles, such as tanks, to conceal a withdrawal. They have regularly been used since earliest times to disorient or drive off attackers. A toxic variant of the smokescreen was devised by Frank Arthur Brock, who used it during the Zeebrugge Raid on 23 April 1918, the British Royal Navy's attempt to neutralize the key Belgian port of Bruges-Zeebrugge.
Tactics: For the crossing of the Dnieper river in October 1943, the Red Army laid a smoke screen 30 kilometres (19 mi) long. At the Anzio beachhead in 1944, US Chemical Corps troops maintained a 25 km (16 mi) "light haze" smokescreen around the harbour throughout daylight hours, for two months. The density of this screen was adjusted to be sufficient to prevent observation by German forward observers in the surrounding hills, yet not inhibit port operations.

Tactics: In the Vietnam War, "Smoke Ships" were introduced as part of a new Air Mobile Concept to protect crews and men on the ground from small-arms fire. In 1964 and 1965, the "Smoke Ship" was first employed by the 145th Combat Aviation Battalion using the UH-1B.

Tactics: Naval warfare There are a number of early examples of using incendiary weapons at sea, such as Greek fire, stinkpots, fire ships, and incendiaries on the decks of turtle ships, which also had the effect of creating smoke. The naval smoke screen is often said to have been proposed by Sir Thomas Cochrane in 1812, although Cochrane's proposal was as much an asphyxiant as an obscurant. It is not until the early twentieth century that there is clear evidence of the deliberate use of large-scale naval smokescreens as a major tactic.

Tactics: During the American Civil War, the first smoke screen was used by the R.E. Lee while running the blockade and escaping the USS Iroquois. The use of smoke screens was common in the naval battles of World War I and World War II.
**Antiperistasis** Antiperistasis: Antiperistasis, in philosophy, is a general term for various processes, real or contrived, in which one quality heightens the force of another, opposing, quality.

Overview: Historically, antiperistasis as a type of explanation was applied to numerous phenomena, from the interaction of quicklime with cold water to the origin of thunder and lightning.

Overview: In his Timaeus, Plato introduces the concept of periosis ("pushing around") in order to explain various phenomena. Plato, for instance, appeals to it to explain how respiration functions in human beings. His 'theory' was most famously adopted by Aristotle, who popularized the term antiperistasis. In a nutshell, it was "the doctrine that a moving object, which is no longer in touch with the mover, is moved by the medium through which it moves." Logically, it is connected to the idea that a void does not exist.

Overview: It was using this explanation that academic philosophers claimed that cold, on many occasions, increases a body's temperature, and dryness increases its moisture. Thus, it was said, quicklime (CaO) was apparently set ablaze when doused with cold water (an effect later explained as an exothermic reaction). It was also the understood reason why water, such as that in wells, appeared warmer in winter than in summer (later explained as an example of sensory adaptation). It was also suggested that thunder and lightning were the results of antiperistasis caused by the coldness of the sky.

Overview: Peripatetic philosophers, who were followers of Aristotle, made extensive use of the principle of antiperistasis. According to such authors, "'Tis necessary that Cold and Heat be both of them endued with a self-invigorating Power, which each may exert when surrounded by its contrary; and thereby prevent their mutual Destruction. Thus it is supposed that in Summer, the Cold expelled from the Earth and Water by the Sun's scorching Beams, retires to the middle Region of the Air, and there defends itself against the Heat of the superior and inferior. And thus, also, in Summer, when the Air about us is sultry hot, we find that Cellars and Vaults have the opposite Quality: so in Winter, when the external Air freezes the Lakes and Rivers, the internal Air, in the same Vaults and Cellars, becomes the Sanctuary of Heat; and Water, fresh drawn out of deeper Wells and Springs, in a cold Season, not only feels warm, but manifestly smokes."

Overview: Other examples used by the proponents of antiperistasis included the aphoristic saying of Hippocrates, "the viscera are hottest in the winter", and the production of hail in the upper atmosphere, believed to occur only in the summer due to the increased heat of the sun. Robert Boyle examined the doctrine in his work "New Experiments and Observations upon Cold."
**Subsequence** Subsequence: In mathematics, a subsequence of a given sequence is a sequence that can be derived from the given sequence by deleting some or no elements without changing the order of the remaining elements. For example, the sequence ⟨A,B,D⟩ is a subsequence of ⟨A,B,C,D,E,F⟩ obtained after removal of elements C, E, and F. The relation of one sequence being the subsequence of another is a preorder. Subsequences can contain consecutive elements which were not consecutive in the original sequence. A subsequence which consists of a consecutive run of elements from the original sequence, such as ⟨B,C,D⟩ from ⟨A,B,C,D,E,F⟩, is a substring. The substring is a refinement of the subsequence. The list of all subsequences for the word "apple" would be "a", "ap", "al", "ae", "app", "apl", "ape", "ale", "appl", "appe", "aple", "apple", "p", "pp", "pl", "pe", "ppl", "ppe", "ple", "pple", "l", "le", "e", "" (empty string).

Common subsequence: Given two sequences X and Y, a sequence Z is said to be a common subsequence of X and Y if Z is a subsequence of both X and Y. For example, if X = ⟨A,C,B,D,E,G,C,E,D,B,G⟩ and Y = ⟨B,E,G,C,F,E,U,B,K⟩, then Z = ⟨B,E,E⟩ is a common subsequence of X and Y. This would not be the longest common subsequence, since Z only has length 3, and the common subsequence ⟨B,E,E,B⟩ has length 4. The longest common subsequence of X and Y is ⟨B,E,G,C,E,B⟩.

Applications: Subsequences have applications to computer science, especially in the discipline of bioinformatics, where computers are used to compare, analyze, and store DNA, RNA, and protein sequences.

Applications: Take two sequences of DNA containing 37 elements, say:

SEQ1 = ACGGTGTCGTGCTATGCTGATGCTGACTTATATGCTA
SEQ2 = CGTTCGGCTATCGTACGTTCTATTCTATGATTTCTAA

The longest common subsequence of sequences 1 and 2 is:

LCS(SEQ1,SEQ2) = CGTTCGGCTATGCTTCTACTTATTCTA

This can be illustrated by highlighting the 27 elements of the longest common subsequence in the initial sequences:

SEQ1 = ACGGTGTCGTGCTATGCTGATGCTGACTTATATGCTA
SEQ2 = CGTTCGGCTATCGTACGTTCTATTCTATGATTTCTAA

Another way to show this is to align the two sequences, that is, to position elements of the longest common subsequence in the same column (indicated by the vertical bar) and to introduce a special character (here, a dash) as padding where an element is absent:

SEQ1 = ACGGTGTCGTGCTAT-G--C-TGATGCTGA--CT-T-ATATG-CTA-
       | || ||| ||||| | | | | || | || | || | |||
SEQ2 = -C-GT-TCG-GCTATCGTACGT--T-CT-ATTCTATGAT-T-TCTAA

Subsequences are used to determine how similar the two strands of DNA are, using the DNA bases: adenine, guanine, cytosine and thymine.

Theorems: Every infinite sequence of real numbers has an infinite monotone subsequence (this is a lemma used in the proof of the Bolzano–Weierstrass theorem). Every infinite bounded sequence in Rn has a convergent subsequence (this is the Bolzano–Weierstrass theorem). For all integers r and s, every finite sequence of length at least (r−1)(s−1)+1 contains a monotonically increasing subsequence of length r or a monotonically decreasing subsequence of length s (this is the Erdős–Szekeres theorem). A metric space (X,d) is compact if every sequence in X has a convergent subsequence whose limit is in X.

Notes: This article incorporates material from subsequence on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
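The longest common subsequence in the DNA example above can be computed with the standard dynamic-programming algorithm; a minimal sketch in Python (note that an LCS need not be unique, so a correct implementation may return a different string of the same length as the one shown above):

```python
def longest_common_subsequence(x: str, y: str) -> str:
    """Return one longest common subsequence of x and y."""
    m, n = len(x), len(y)
    # dp[i][j] holds the LCS length of the prefixes x[:i] and y[:j].
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Walk back through the table to recover one LCS.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

seq1 = "ACGGTGTCGTGCTATGCTGATGCTGACTTATATGCTA"
seq2 = "CGTTCGGCTATCGTACGTTCTATTCTATGATTTCTAA"
lcs = longest_common_subsequence(seq1, seq2)
print(len(lcs), lcs)  # expected length 27, per the example above
```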
**SUPER BASIC** SUPER BASIC: SUPER BASIC, sometimes SBASIC for short, is an advanced dialect of the BASIC programming language offered on Tymshare's SDS 940 systems starting in 1968 and available well into the 1970s.

SUPER BASIC: Like the Dartmouth BASIC it was based on, SUPER BASIC was a compile and go language, as opposed to an interpreter. In addition to offering most of the commands and functions from Dartmouth BASIC Version 4, including matrix math commands, SUPER BASIC also included a number of features from the seminal JOSS language developed at Rand Corporation, via Tymshare's version, CAL, and added a variety of new functions, complex numbers as a built-in type, and double precision support.

SUPER BASIC: SUPER BASIC also greatly improved string handling over the rudimentary system in Dartmouth, introducing the LEFT, MID and RIGHT string functions, simple string concatenation and other features. These were later used in DEC's BASIC-PLUS, which was in turn used as the basis for the original Microsoft BASIC that saw widespread use in the 1980s.

History: The original Dartmouth BASIC was released in 1964 but was largely experimental at the time. It went through several revisions before becoming truly useful with the Fourth Edition, which was ported to the GE 635 machine and published in 1968. Dartmouth specifically placed the underlying design in the public domain, so that anyone could port it to their platform and call it BASIC. Its spread was further helped by the tireless efforts of its authors to promote it. However, as the code was designed to run on the DTSS operating system, some porting was required to run it on production systems. This led to a proliferation of versions with minor differences.

Tymshare was formed within the University of California, Berkeley, initially renting out time on the University's computers during off-hours. Tymshare's original BASIC, simply Tymshare BASIC, was based on source code "from elsewhere" in the University, which Dan Lewis began enhancing. Frank Bracher added the routines for file input/output (I/O), which made it far more practical than the original Dartmouth code that relied purely on DATA statements embedded in the program. Dartmouth's workflow was tape-based, so loading and saving individual files was not practical, and direct I/O was not addressed until later versions. Bracher's I/O code had originally been developed for Tymshare's SUPER FORTRAN offering.

One oft-noted feature of the system was the documentation, written by Caroline Diehl. The manuals were written in a conversational style.

Tymshare maintained SUPER BASIC through the 1970s, but as the market for rented timeshare programming services dwindled, the system went into maintenance, and Lewis and Bracher left to do SUPER BASIC consulting for those companies still using it. Maintenance within Tymshare passed primarily to Walter Main. Tymshare filed for a trademark on SUPER BASIC on 7 January 1970, and refreshed it on 17 October 1977; the trademark became the property of McDonnell Douglas in 1984 when that company purchased Tymshare.

Language: Direct and indirect mode Like most BASIC systems of the era, SUPER BASIC had a single command-line editor that worked both as an interactive language and as a program editor. Commands typed without a line number were executed immediately, in what was referred to as "direct mode". If the same line was prefixed with a line number, it was instead copied into the program code storage area, known as "indirect mode".
New lines were added to the program if the line number was unique, replaced existing lines with the same number, or were removed from the program if an existing line number was typed in without any code following it.

Language: Program statements Line numbers ran from 0 to 999999. The DELETE command (or its short form DEL) could be used to delete a range of lines using typical LIST notation, for instance DELETE 5,10-50. The ENTER command started an automatic line-numbering system. It took two optional parameters, a starting line number and a step, separated with BY. The starting number was assumed to be zero if not provided, and the step was assumed to be 10. For instance, ENTER would produce 0,10,20,..., ENTER BY 5 would produce 0,5,10,..., and ENTER 10 BY 10 would produce 10,20,30... RENUMBER took three parameters: a new starting line number, a range of lines to renumber (like 20-100), and the step. Although the built-in editor loaded and saved only the lines in the program itself, the user could edit the resulting text file to add additional commands that would run in direct mode. A common example was to edit a program and add RUN on its own line at the end of the file. When the file was loaded, the system would see the RUN and immediately compile and start the program. This is unusual for BASIC systems, although it was commonly used in JOSS.

Language: Statements In keeping with the overall Dartmouth BASIC concept, SUPER BASIC was a compile and go system that compiled the source code when the program was run. SUPER BASIC had two commands for this, the typical RUN seen in most BASICs, as well as START, which did the same thing. Remarks could be placed anywhere using !.

SUPER BASIC expanded the FOR statement in several ways. A minor change was to allow BY in place of STEP, and to allow the step to be placed at the end as in most BASICs or in the middle as in JOSS and other languages. Thus FOR I=1 TO 10 BY 2 and FOR I=1 BY 2 TO 10 were both valid. Additionally, SUPER BASIC provided alternate forms of the range definition using WHILE and UNTIL, whereas most other languages used completely separate loop structures for these. For instance, FOR X=1 WHILE X<Y will continue as long as X<Y, while FOR X=1 UNTIL X<Y stops when the condition is met. As in Microsoft BASIC, multiple loops could end with a single NEXT I,J, although SUPER BASIC did not include the feature of later versions of MS BASIC where the index variable could be left off entirely. Finally, in JOSS fashion, one could replace the typical range specifier 1 TO 10 with an explicit list of values, FOR I=1,4,5,6,10.

A more major change, following the JOSS model, was the concept of "statement modifiers" that allowed an IF or FOR to be placed after the statement it controlled. For instance, PRINT "IT IS" IF X=5 is equivalent to IF X=5 THEN PRINT "IT IS". This can make some commonly found use-cases easier to understand. It also included the syntactic sugar UNLESS, which was an IF with the opposite sense; for instance, PRINT "IT IS NOT FIVE" UNLESS X=5. One could also use a loop in these cases, which made single one-statement loops easy to implement, for instance PRINT X FOR X=1 TO 10. One could also use a "bare" WHILE or UNTIL without the FOR, as in X=X+2 UNTIL X>10. The modifiers could also be ganged: PRINT "YES" IF A=B UNLESS N=0.

Language: Expressions Variables Variable names could consist of one or two letters, or one letter and a digit. SUPER BASIC did not require variables to be typed: a variable could hold a number at one point and a string at another, a side effect of the way they were stored.
This required the system to test the variable type at runtime, during INPUT and PRINT for instance, which reduced performance. This could be addressed by explicitly declaring the variable type using a variety of commands.

In most dialects of BASIC, variables are created on the fly as they are encountered in the code, and generally set to zero (or the empty string) when created. This can lead to problems where variables are supposed to be set up by previous code that is not being properly called, but at run time it can be difficult to know whether 0 is an uninitialized value or a perfectly legal zero value. SUPER BASIC addressed this with the VAR command. There were two primary forms: VAR=ZERO, which made all undefined variables automatically get the value zero when accessed, the normal pattern for BASIC, and VAR=UNDEF, which would instead cause a "VARIABLE HAS NO VALUE" error to occur when a previously unseen variable was used in a way that attempted to access its value. The latter is very useful in debugging scenarios, where the normal behavior can hide the fact that a variable being used in a calculation has not been correctly initialized.

Language: Numeric Unless otherwise specified, variables were stored in a 48-bit single precision floating point format with eleven digits of precision. One could also explicitly define a variable as REAL A, which was the single-precision format. This was not a consideration in other BASICs, where some sort of suffix, like $, indicated the type wherever it was encountered. When required, a double precision format with seventeen digits, stored in three 24-bit words instead of two, could be used by declaring a variable with DOUBLE A. An existing single precision value or expression could be converted to double using the DBL(X) function. For instance, one could force an expression to evaluate using double precision using DBL(10+20). Likewise, one could declare INTEGER A to produce a one-word 24-bit integer value.

A more unusual addition was direct support for complex numbers. These were set up in a fashion similar to other variables, using COMPLEX I,J to set aside two single precision slots. When encountered in programs, other statements like INPUT would trigger alternative modes that asked for two numbers instead of one, with similar modifications to READ (used with DATA statements), PRINT and others. A single complex number could be created from two singles using the CMPLX(X,Y) function, while REAL(I) and IMAG(I) extracted the real and imaginary parts, respectively, into singles. A small number of additional utility functions were also offered.

Language: Operators and functions There were seven basic math operators: ↑ for exponents (the exponent is converted to a 12-bit integer); * for multiplication; / for division; MOD for modulo, the remainder of an integer division; DIV for integer division; + for addition; and - for subtraction. SUPER BASIC's list of mathematical functions was longer than in most BASICs, including a series of inverse trigonometric functions and logarithms for base 2 and 10.

Language: SUPER BASIC included a number of functions from JOSS as well.

Arrays and matrix math In addition to basic math, SUPER BASIC included array functionality like many other BASIC implementations.
One could DIM A(5,5) to make a two-dimensional array, and as a consequence of the way they were stored, all otherwise-undeclared variables were actually DIMed to have ten indexes, so one could LET B(5)=20 without previously DIMing B. In contrast with other BASICs, SUPER BASIC allowed one to define the range of one or both of the dimensions, assuming 1 if not defined. So A in the example above has indexes 1..5, but one might also DIM A(-5:5,0:5) to produce an array that has 11 indexes from -5 to +5 for X, and 0 to +5 for Y. One could also use the BASE command to change the default, so BASE 0, for example, makes all dimensions start at 0.

In addition to these traditional BASIC concepts, SUPER BASIC also included most of the matrix math features found in later versions of Dartmouth BASIC. These were invoked by adding the keyword MAT to the front of other commands. For instance, MAT A=B*C multiplies all the items in array B by their corresponding item in C, whereas MAT A=B*5 multiplies all the elements in B by 5. Functions for common matrix operations like inversion and identity were included.

Language: Binary math and logical values As in most versions of BASIC, SUPER BASIC included the standard set of comparison operators, =, <>, >=, <=, > and <, as well as the boolean operators OR, AND and NOT. In addition, # could be used as an alternate form of <>, a form that was found on a number of BASIC implementations of that era. SUPER BASIC also added XOR, EQV for "equivalence" (equals) and IMP for "implication".

To this basic set, SUPER BASIC added three new operators for comparing small differences between numbers: >>, << and =#. The much-greater-than and much-less-than operators compared the values of the two operands, for instance A and B in the expression A >> B. If adding B to A results in A being unchanged after the inherent rounding, >> returned true. Internally this was performed by IF A=A-B. =#, the close-to-equals, simply compared both values against an internal meta-variable, EPS, performing ABS(A/B-1)<EPS.

Most dialects of BASIC allow the result of such logical comparisons to be stored in variables, using some internal format to represent the logical value, often 0 for false and 1 or -1 for true. SUPER BASIC also allowed this, which resulted in the somewhat confusing behavior of LET A=B=5, which, following operator precedence, assigns 5 to B and then returns true or false depending on whether A=B. SUPER BASIC also added true logical variables, declared in a fashion similar to doubles or complexes, using LOGICAL A, and other variables could be converted to logical using L().

In contrast to logical comparisons and operators, SUPER BASIC also added a number of bitwise logical operators. These applied a basic logical operation to the individual bits in a word. They included BAN, BOR and BEX, for and, or and exclusive-or. Additional functions included LSH(X) and RSH(X) for bit-shifting left and right, respectively. To ease the entry of binary values, constants could be entered in octal format by prefixing a number with an "O", like LET A=O41.

Language: Strings SUPER BASIC allowed string constants (literals) to be enclosed with single or double quotes, so PRINT "HELLO, WORLD!" and PRINT 'HELLO, WIKIPEDIA!' were both valid statements. In contrast to later dialects of BASIC, one could assign a string to any variable, and the $ signifier was not used, so A="HELLO, WORLD!" was valid.
This could lead to some confusion when a user provided a value combining digits and letters, as SUPER BASIC assumed anything starting with a digit was a number. To guide the system when this might result in confusing input, one could explicitly declare string variables using STRING A. As with all variables in SUPER BASIC, these could be arrays, STRING A(5). SUPER BASIC additionally offered the TEXT statement, which took a second parameter to define the length of the string elements, so TEXT A(12):10 makes an array of 12 elements of 10 characters each, while TEXT B(5:10):15 is an array of six elements, 5..10, each 15 characters long.

Language: String operators and functions SUPER BASIC included the operators = for comparison and + for concatenation. It included the following functions: ASC(S), which returns the ASCII number for the first character in the string; CHAR(N), which returns a string with a single ASCII character, the same as MS's CHR(); COMP(A,B), which compares two strings, returning -1, 0 or 1 depending on which is "bigger"; INDEX(A,B), which returns the index of B within A, with an optional third parameter giving an offset starting point; LENGTH(A), the length of the string; SPACE(X), which returns a string consisting of X spaces; VAL(A), which looks through the string for a number and returns it; STR(N), which converts a number into a string; LEFT, as in MS; RIGHT; and SUBSTR, like MS's MID.

Utility functions Typical utility functions were also included. SUPER BASIC also included pseudo-variables for PI and DPI, the latter being double-precision, as well as the previously mentioned EPS representing the smallest possible value.

Language: Print formatting SUPER BASIC included two forms of print formatting that could be used with the PRINT statement. PRINT IN IMAGE X: used a format string, in this case stored in X, in a fashion similar to what other BASICs implemented using PRINT USING, or the more common examples found in C and its follow-ons. Field types included integers, specified decimal formats, and exponents, as well as strings and text. % signs indicated a single digit in either an integer or real field, and # indicated a digit in an E field. * and $ could be used to prefix any value.

PRINT IN FORMAT worked generally the same way, the difference being that spaces had to be explicitly defined using B. Thus the format string "%% BBB %%.%%" would print two numerical values with three spaces between them, whereas if this was an image, the "BBB" would be printed out with a space on either side. The FORMAT version supported a wider variety of format strings and included items like inline carriage returns, but the examples given in the manuals do not make it clear why there are two such systems when they accomplish the same thing in the end. Interestingly, the same format commands could be used for INPUT, not just PRINT. In this case the user input would be formatted based on the string, so 1.2345 might be truncated to 1.2 if the format is %.%.

Language: File I/O SUPER BASIC included a file input/output system based on INPUT ON X and PRINT ON X, where X is a file handle (a number). The number was assigned using OPEN filename FOR [INPUT|OUTPUT] AS FILE X. WRITE ON X was provided as an alternative to PRINT ON X, but they are identical internally. When complete, the file could be released with CLOSE X or CLOSE filename. When working with files, one could read the next-read location using LOC(X) and change it using LOCATE 100 ON 2. POS(X) returned the position within a form if IN FORM was being used. SIZE(N) returned the file size.
The ENDFILE(X) function could be used in loops to test whether the end of the file had been reached during reads. Language: The system also included a function, TEL, that returned whether or not there was input waiting in the terminal. SUPER BASIC programs often included loops that waited for user input, testing for it once a second before continuing. Additionally, it included a pseudo-filename "TEL" that could be opened for reading and writing using OPEN "TEL" FOR OUTPUT AS 2 and then WRITE ON 2 "HELLO WORLD". In addition to "TEL", both "T" and "TELETYPE" also referred to the controlling teletype.
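The fuzzy-comparison semantics described above map directly onto floating-point behavior in modern languages. The following is a minimal sketch in Python (not SUPER BASIC) of how >> and =# behave; the EPS value here is a hypothetical stand-in for the interpreter's internal meta-variable.

```python
EPS = 1e-9  # hypothetical stand-in for SUPER BASIC's internal EPS

def much_greater(a, b):
    # A >> B: true when adding B to A leaves A unchanged after rounding.
    return a + b == a

def close_to_equals(a, b):
    # A =# B: true when the relative difference is below EPS,
    # i.e. the equivalent of ABS(A/B-1) < EPS.
    return abs(a / b - 1) < EPS

print(much_greater(1e20, 1.0))            # True: 1.0 is lost to rounding
print(close_to_equals(1.0, 1.0 + 1e-12))  # True: well within EPS
```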
**ACM Transactions on Algorithms** ACM Transactions on Algorithms: ACM Transactions on Algorithms (TALG) is a quarterly peer-reviewed scientific journal covering the field of algorithms. It was established in 2005 and is published by the Association for Computing Machinery. The editor-in-chief is Edith Cohen. The journal was created when the editorial board of the Journal of Algorithms resigned in protest over the pricing policies of its publisher, Elsevier. Apart from regular submissions, the journal also invites selected papers from the ACM-SIAM Symposium on Discrete Algorithms (SODA). Abstracting and indexing: The journal is abstracted and indexed in the Science Citation Index Expanded, Current Contents/Engineering, Computing & Technology, and Scopus. According to the Journal Citation Reports, the journal has a 2022 impact factor of 1.3. Past editors: The following persons have been editors-in-chief of the journal: Harold N. Gabow (2005-2008), Susanne Albers (2008-2014), Aravind Srinivasan (2014-2021).
**VIMCAS** VIMCAS: VIMCAS, standing for Vertical Interval Multiple Channel Audio System, is a dual-channel Sound-in-Syncs mechanism for transmitting digitally encoded audio in a composite video analogue television signal. Invented by the Australian company IRT in the 1980s, the basic concept of VIMCAS is to transmit two channels of PCM-encoded (i.e. digital) audio during the vertical blanking interval of a composite video signal. VIMCAS: The encoded audio was transmitted over 6 horizontal scan lines during that interval, the digitally encoded signal being placed onto a series of mid-grey pedestals, in much the same way that the colour subcarrier is placed on top of the monochrome signal. Each line carried 4.7 kHz of audio bandwidth, so six lines would provide 28 kHz of bandwidth (actually slightly less, there being deliberate redundancy between the final packet of encoded audio on one line and the first packet of encoded audio on the next, in order to avoid signal corruption). VIMCAS: This could be used as a pair of 14 kHz channels for stereo audio, or as separate channels to carry dual-language transmissions. VIMCAS: In outside broadcast (OB) work, where VIMCAS was used from the OB site back to the studio, it could be used for separate audio channels where one would be effects (i.e. the ambient sound of a sports match) and the other would be the main audio (e.g. the voice of the commentator), or alternatively with the effects audio carried by VIMCAS and the main audio carried as NICAM 728. To fit into the available bandwidth, the audio signal would first be companded and limited before being sampled for PCM encoding. VIMCAS: The encoded signal would be transmitted in the six scan lines in time-compressed form, i.e. much faster than its actual speed. VIMCAS: Decoding was simply the reverse process: 100 ms of audio at a time was stored in its transmitted digital form in a digital memory and played out from that memory at the original speed through a digital-to-analogue converter, with appropriate timing circuits to synchronize this playout with the accompanying video. A reduced version, using just one scan line instead of six and thus providing narrower bandwidth, was called VISCAS (Vertical Interval Single Channel Audio System), which was good enough for talkback between the studio and the OB, or for foldback.
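As a quick check of the bandwidth arithmetic quoted above (the line count and per-line figure come from the text; treating the redundancy overhead as negligible is a simplification):

```python
lines = 6
per_line_khz = 4.7                  # audio bandwidth contributed by each scan line
gross_khz = lines * per_line_khz    # about 28 kHz before redundancy
stereo_channel_khz = gross_khz / 2  # two channels of roughly 14 kHz each

print(f"gross: {gross_khz:.1f} kHz, per stereo channel: {stereo_channel_khz:.1f} kHz")
```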
**DLD/NP1** DLD/NP1: DLD/NP1 is a tumor suppressor gene discovered by a team of medical students from an American university in Lebanon. The new medical achievement was described by two students, Maroun Hajjar and Mozes Chalovski, and tested for in samples taken from a population of 984 individuals (p= 0.0056*), including their mentor, Dr. Davis Davis. In 2010, work began in the only classroom of the "outstanding" medical school to describe why the doctor in charge had such weird symptoms attributed to his unusual tumor. Almost one year later, the DLD/NP1 gene was identified and found repeatedly in Davis' DNA. DLD/NP1: DNA methylation of a CpG island near the DLD/NP1 promoter, as well as histone acetylation, may represent possible epigenetic mechanisms leading to decreased gene expression in human tumors. Nomenclature: The naming was completely the choice of the two students, after an arbitrary nickname for the doctor. The second D in the name is attributed to Dr. Davis. Signs and symptoms: The most common symptoms in patients with DLD/NP1 inactivation tumors are rectal prolapse, tenesmus, small bowel obstruction, lingual striated muscle hypertrophy, and priapism. Colonoscopic examination may also reveal a "polypy" appearance. Location and structure: Human DLD/NP1 is located on the long arm of chromosome 8 (q811.2) and covers 2.62 kb on the reverse strand. Psychosocial Burden: This type of tumor is most common in poor, black, African women, especially if they have AIDS. Treatment: Although no randomized clinical trials have been completed, the best treatment methodology as of today is the use of COX-3 inhibitors.
**ChapStick** ChapStick: ChapStick is a brand name of lip balm manufactured by Haleon and used in many countries worldwide. It is intended to help treat and prevent chapped lips, hence the name. Many varieties also include sunscreen in order to prevent sunburn. Due to its popularity, the term has become a genericized trademark. It popularly refers to any lip balm contained in a lipstick-style tube and applied in the same manner as lipstick. However, the term is still a registered trademark, with rights exclusively owned by Haleon. History: In the early 1880s, Charles Browne Fleet, a physician and pharmacological thinker from Lynchburg, Virginia, invented ChapStick as a lip balm product. The handmade product, which resembled a wickless candle wrapped in tin foil, was sold locally and did not have much success. In 1912, John Morton, also a Lynchburg resident, bought the rights to the product for five dollars. In their kitchen, Mrs. Morton melted the pink ChapStick mixture, cooled it, and cut it into sticks. Their lucrative sales were used to found the Morton Manufacturing Corporation. In 1935, Frank Wright, Jr., a commercial artist from Lynchburg, Virginia, was commissioned to design the ChapStick logo that is still used today. He was paid a one-time fee of $15. In 1963, The A.H. Robins Company acquired ChapStick from Morton Manufacturing Corporation. At that time, only the ChapStick Lip Balm regular stick was being marketed to consumers; subsequently, many more varieties have been introduced. These include four flavored ChapStick sticks in 1971, ChapStick Sunblock 15 in 1981, ChapStick Petroleum Jelly Plus in 1985, and ChapStick Medicated in 1992. History: Robins was purchased by American Home Products (AHP) in 1988. AHP later changed its name to Wyeth. ChapStick was a Wyeth product until 2009, when Wyeth was acquired by Pfizer. Pfizer sold the manufacturing facility in Richmond, Virginia, on October 3, 2011, to Fareva Richmond, which now manufactures and packages ChapStick for Pfizer. In 2019, GlaxoSmithKline Consumer Healthcare acquired ChapStick from Pfizer. Composition: Ingredients commonly include camphor, beeswax, menthol, petrolatum, phenol, vitamin E, aloe and oxybenzone. However, there are many variants of ChapStick, each with its own composition. Due to safety concerns, phenol is banned from use in cosmetic products in the European Union and Canada. Composition: The full list of ingredients in a regular-flavored ChapStick is: arachidyl propionate, camphor, carnauba wax, cetyl alcohol, D&C red no. 6 barium lake, FD&C yellow no. 5 aluminum lake, fragrance, isopropyl lanolate, isopropyl myristate, lanolin, light mineral oil, methylparaben, octyldodecanol, oleyl alcohol, paraffin, phenyl trimethicone, propylparaben, titanium dioxide, white wax, propanol. Its net weight is usually 4 grams (0.14 oz). Composition: When manufactured by Wyeth, ChapStick contained no parabens. Uses: ChapStick functions as both a sunscreen, available with SPFs as high as 50, and a skin lubricant to help prevent and protect chafed, chapped, sunburned, cracked, and windburned lips. "Medicated" varieties also contain analgesics to relieve sore lips. In addition to medical uses, ChapStick has had other uses; the lubricating properties have been useful on precision instruments such as slide rules. Other lubricants, while appropriate to the instruments, might have been harmful to the skin, while ChapStick is not.
Marketing: ChapStick is sometimes available in special flavors developed in connection with marketing partners such as Disney (as in cross-promotions with Winnie the Pooh or the movie Cars) or with charitable causes such as breast cancer awareness, in which 30¢ is donated for each stick sold (as in the Susan G. Komen Pink Pack). The Flava-Craze line is marketed to preteens and young teens, with colorful applicators and "fun" flavors such as Grape Craze and Blue Crazeberry. Marketing: US Olympic skier Suzy Chaffee starred in ChapStick television commercials in which she dubbed herself "Suzy ChapStick". Another famous ChapStick advertisement featured basketball legend Julius Erving (commonly known as Dr. J) naming himself Dr. ChapStick and telling young children about the great things that ChapStick can do. Diana Golden, a U.S. Olympic gold medal-winning skier and the 1988 Ski Racing Magazine and United States Olympic Committee female skier of the year, was also a spokesperson for ChapStick. Former ski racer Picabo Street was, for a time, seen on television commercials as one of the company's endorsers. Its main competitors in the US, Carmex and Blistex, also use the popular lipstick-style tube for their lip balm products. In Iceland and in the United Kingdom, the product's main competitor is Lypsyl, made by Novartis Consumer Health and distributed in similar packaging to ChapStick.
**Uranium acid mine drainage** Uranium acid mine drainage: Uranium acid mine drainage refers to acidic water released from a uranium mining site using processes like underground mining and in-situ leaching. Underground, the ores are not as reactive due to isolation from atmospheric oxygen and water. When uranium ores are mined, the ores are crushed into a powdery substance, thus increasing surface area to easily extract uranium. The ores, along with nearby rocks, may also contain sulfides. Once exposed to the atmosphere, the powdered tailings react with atmospheric oxygen and water. After uranium extraction, sulfide minerals in uranium tailings facilitate the release of uranium radionuclides into the environment, which can undergo further radioactive decay while lowering the pH of a solution. Uranium chemistry: Uranium may exist naturally as U6+ in ores but also forms the water-soluble uranyl ion UO22+ when uranium tailings are oxidized by atmospheric oxygen in the following reaction. U6+ + O2 → UO22+ The solubility of uranium increases under similar oxidizing conditions when it forms uranyl carbonate complexes in the following reaction. Uranium chemistry: U6+ + O2 + 2CO32− → [UO2(CO3)2]2− Extraction of uranium from the ore may occur under acid or alkaline leaching processes using sulfuric acid and sodium carbonate respectively. If leached with sulfuric acid, uranyl forms a soluble uranyl sulfate complex in the following reaction. Hydrogen ions in solution react with water to produce hydronium ions, which lowers a solution's pH, making it more acidic. Uranium chemistry: UO2 + 3H2SO4 + 1/2 O2 → [UO2(SO4)3]4− + H2O + 4H+ H+(aq) + H2O(l) → H3O+(aq) During in-situ leaching, uranyl reacts with iron, a common natural oxidant, to produce uranyl trioxide, which is further oxidized and then leached using alkaline sodium carbonate in the following reactions. Uranium chemistry: UO2 + 2Fe3+ → UO22+ + 2Fe2+ UO2 + 1/2 O2 → UO3 UO3 + 3Na2CO3 + H2O → [UO2(CO3)3]4− + 4Na+ + 2NaOH When considering the formation of secondary uranium minerals, as discussed in the case study section below, the pH of the solution that contains uranophane is one of the determining factors of how much of the uranophane is in mineral form and how much is in the form of its ions. A study performed by Tatiana Shvareva et al. in 2011 examined the dissolution of uranophane at pH 3 and pH 4 (the study's figures 3b and 3a, respectively). The results demonstrate that in a more acidic environment, dissolved Ca, U, and Si are more abundant, while in more basic environments they are more likely to form minerals. This is more likely to happen when the acidic mine drainage is released into rivers or large water deposits and becomes diluted to a pH closer to that of water. The enthalpies of formation (from elements and from oxide species) and Gibbs free energies of formation (from elements) of the uranium minerals boltwoodite, Na-boltwoodite, and uranophane are shown in Table 1. Solubility constants (for dissociation of the minerals to ions) of the same minerals, determined using a bomb calorimeter in a study by Shvareva, Tatiana et al. in 2011, are shown in Table 2. The Gibbs free energies of formation show that the process, when the reactions from the individual elements to the oxides are taken into account, is spontaneous. The enthalpies of formation, when considering only the reaction from the oxides to the mineral, suggest that the corresponding Gibbs free energies of formation are likely also negative, i.e. spontaneous.
Uranium chemistry: Table 1. The enthalpy of formation (from oxide to mineral), enthalpy of formation (from individual elements to mineral), and Gibbs free energy of formation (from individual elements to mineral) of boltwoodite, Na-boltwoodite, and uranophane. Table 2. Solubility constants and mass action equations for boltwoodite, Na-boltwoodite, and uranophane. Uranium acid mine drainage case study: Two uranium mines in northern Portugal, Quinta do Bispo and Cunha Baixa, have been inactive since 1991. Acidic water is pumped out of the mines for neutralization and precipitation of radionuclides using calcium hydroxide. Studies in 2002 found that there were high concentrations of soluble and suspended uranium radionuclides in river water samples near the mines. The Castelo river reached suspended uranium isotope concentrations of 72 kBq/kg, roughly 170 times higher than normal concentrations in the Mondego River, but returned to normal after 7 km. The mine waters of Quinta do Bispo and Cunha Baixa had low pH values, at 2.67 and 3.48, with U-238 concentrations of 92,000 mBq/L and 2,200 mBq/L, respectively. Results from the 2002 studies showed that both dissolved uranium radionuclides and hydrogen ion concentrations correlated negatively with the pH of the mine waters. Sorption of dissolved uranium radionuclides in rivers, combined with nearby rock sediments, can form minerals like uranophane. The chemistry and findings in this case are essentially representative of other uranium mines around the world. Uranium radionuclides in the environment: A uranium radionuclide is a radioactive isotope of uranium. Radioactivity occurs naturally in the environment, but uranium radionuclides add to it through their radioactive decay. In the case of uranium mines, these radionuclides can leach into the water, carrying radioactivity elsewhere, and can form precipitates that are harmful to the environment. The uranium radionuclides can eventually be carried to fruits and vegetables via contaminated waters. Sulfuric acid leaching, oxidation, and alkaline leaching are processes by which radionuclides make their way into the environment. When uranium decays it also produces the isotopes 226Ra and 222Rn; radon may be environmentally harmful because, as an inert gas, it can enter the soil or atmosphere, where it emits alpha particles and gamma radiation. The three radioactive isotopes of uranium are uranium-238, uranium-235, and uranium-234. Each has a different half-life, which determines the isotope's decay rate. When uranium-235 combines with other molecules, the resulting chemical reactions can have detrimental effects on water. Even though isotope formation occurs naturally, when uranium combines with other elements it can make the pH of water more acidic, as discussed previously.
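The solubility constants in Table 2 and the thermodynamic quantities in Table 1 are linked by the standard relation between the Gibbs free energy of a dissolution reaction and its equilibrium constant; this formula is supplied here for clarity and is not quoted from the study:

\Delta G^{\circ}_{\mathrm{dissolution}} = -RT \ln K_{sp}

A more negative Gibbs free energy of dissolution therefore corresponds to a larger solubility constant, i.e. a mineral that dissolves more readily at the temperature T, with R the gas constant.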
**Hemifacial microsomia** Hemifacial microsomia: Hemifacial microsomia (HFM) is a congenital disorder that affects the development of the lower half of the face, most commonly the ears, the mouth and the mandible. It usually occurs on one side of the face, but both sides are sometimes affected. If severe, it may result in difficulties in breathing due to obstruction of the trachea—sometimes even requiring a tracheotomy. With an incidence in the range of 1:3500 to 1:4500, it is the second most common birth defect of the face, after cleft lip and cleft palate.[1] HFM shares many similarities with Treacher Collins syndrome. Presentation: The clinical presentation of HFM is quite variable. The severity may depend on the extent of the area with an insufficient blood supply in utero, and the gestational age of the fetus at which this occurs. In some people, the only physical manifestation may be a small and underdeveloped external ear. In more severe cases, multiple parts of the face may be affected. Some people with HFM may have sensorineural hearing loss and decreased visual acuity or even blindness. Goldenhar syndrome can be thought of as a particularly severe form of HFM, in which extracranial anomalies are present to some extent. Some of the internal organs (especially the heart, kidneys, and lungs) may be underdeveloped, or in some cases even absent altogether. The affected organs are typically on the same side as the affected facial features, but bilateral involvement occurs in approximately 10% of cases. Deformities of the vertebral column such as scoliosis may also be observed. While there is no universally accepted grading scale, the OMENS scale (standing for Orbital, Mandible, Ear, Nerves and Soft tissue) was developed to help describe the heterogeneous phenotype that makes up this sequence or syndrome. Intellectual disability is not typically seen in people with HFM. Hemifacial microsomia sometimes results in temporomandibular joint disorders. Cause: The condition develops in the fetus at approximately 4 weeks gestational age, when some form of vascular problem such as blood clotting leads to insufficient blood supply to the face. This can be caused by physical trauma, though there is some evidence of it being hereditary [2]. This restricts the developmental ability of that area of the face. Currently there are no definitive reasons for the development of the condition. Diagnosis: Classification: Figueroa and Pruzansky classified HFM patients into three different types: Type I: Mild hypoplasia of the ramus, with the body of the mandible slightly affected. Type II: The condyle and ramus are small, the head of the condyle is flattened, the glenoid fossa is absent, the condyle is hinged on a flat, often convex, infratemporal surface, and the coronoid may be absent. Type III: The ramus is reduced to a thin lamina of bone or is completely absent. There is no evidence of a TMJ. Treatment: Depending upon the treatment required, it is sometimes most appropriate to wait until later in life for a surgical remedy, as the childhood growth of the face may highlight or increase the symptoms. When surgery is required, particularly when there is a severe disfiguration of the jaw, it is common to use a rib graft to help correct the shape. According to the literature, HFM patients can be treated with various treatment options such as functional therapy with an appliance, distraction osteogenesis, or a costochondral graft. The treatment is based on the type of severity for these patients.
According to Pruzansky's classification, if the patient has moderate to severe symptoms, then surgery is preferred. If the patient has mild symptoms, then a functional appliance is generally used. According to Dr. Harry Pepe, a pediatrician from Hollywood, Florida, the goal of treatment in hemifacial microsomia is to elongate the deficient jaw bone to restore facial symmetry and correct the slanting bite (occlusion). Patients can also benefit from a bone-anchored hearing aid (BAHA). Terminology: The condition is also known by various other names: lateral facial dysplasia, first and second branchial arch syndrome, oral-mandibular-auricular syndrome, otomandibular dysostosis, and craniofacial microsomia.
**Basal sliding** Basal sliding: Basal sliding is the act of a glacier sliding over its bed due to meltwater under the ice acting as a lubricant. This movement depends strongly on the temperature of the area, the slope of the glacier, the bed roughness, the amount of meltwater from the glacier, and the glacier's size. Basal sliding: Glaciers sliding in this way move with a jerky motion, and seismic events, especially at the base of the glacier, can trigger movement. Most movement is found to be caused by pressurized meltwater or very small water-saturated sediments underneath the glacier. This gives the glacier a much smoother surface on which to move, as opposed to a harsh surface that tends to slow the sliding. Although meltwater is the most common source of basal sliding, water-saturated sediment has been shown to account for up to 90% of the basal movement of some glaciers. Basal sliding: Basal sliding is most commonly observed in thin glaciers resting on steep slopes, and it peaks during the summer seasons when surface meltwater runoff is highest. Factors that can slow or stop basal sliding relate to the glacier's composition and to the surrounding environment. Glacier movement is resisted by debris, whether inside the glacier or under it. This can reduce the glacier's movement considerably, especially if the slope on which it lies is low. The traction caused by this sediment can halt a steadily moving glacier if it interferes with the underlying sediment or water that was helping to carry it. Basal sliding: The Great Lakes were created by basal erosion as glaciers slid over relatively weak bedrock.
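The qualitative controls listed above are commonly formalized in glaciology with a Weertman-type sliding law. The standard form below is supplied for orientation and is not taken from the text:

u_b = C \tau_b^{\,p} / N^{\,q}

where u_b is the basal sliding velocity, \tau_b the basal shear stress (which increases with surface slope and ice thickness), N the effective pressure (ice overburden pressure minus basal water pressure, so abundant pressurized meltwater lowers N and speeds sliding), and C a parameter reflecting bed roughness; typical exponents are p ≈ 3 and q ≈ 1.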
**Estrobin** Estrobin: Estrobin, also known as α,α-di(p-ethoxyphenyl)-β-phenylbromoethylene and commonly abbreviated as DBE, is a synthetic, nonsteroidal estrogen of the triphenylethylene group that was never marketed. Chlorotrianisene, and subsequently clomifene and tamoxifen, were derived from it. Estrobin, similarly to other triphenylethylenes, is very lipophilic and hence very long-lasting in its duration of action. Similarly to chlorotrianisene, estrobin behaves as a prodrug of a much more potent estrogen in the body.
**Cephalops signatus** Cephalops signatus: Cephalops signatus is a species of fly in the family Pipunculidae. Distribution: Austria, Belgium, Great Britain, Czech Republic, France, Germany, Hungary, Italy, Latvia, Slovakia, Netherlands, Yugoslavia.
**Parkes process** Parkes process: The Parkes process is a pyrometallurgical industrial process for removing silver from lead during the production of bullion. It is an example of liquid–liquid extraction. Parkes process: The process takes advantage of two liquid-state properties of zinc. The first is that zinc is immiscible with lead, and the other is that silver is 3000 times more soluble in zinc than it is in lead. When zinc is added to liquid lead that contains silver as a contaminant, the silver preferentially migrates into the zinc. Because the zinc is immiscible with the lead, it remains in a separate layer and is easily removed. The zinc-silver solution is then heated until the zinc vaporizes, leaving nearly pure silver. If gold is present in the liquid lead, it can also be removed and isolated by the same process. The process was patented by Alexander Parkes in 1850. Parkes received two additional patents in 1852. The Parkes process was not adopted in the United States at first, due to the low native production of lead. The problems were overcome during the 1880s, and by 1923 only the Parkes process was used.
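The quoted 3000:1 partition of silver between zinc and lead is what makes the process efficient, as standard liquid–liquid extraction arithmetic illustrates. The sketch below is idealized: the 1% zinc-to-lead mass ratio and the assumption of complete equilibrium at each treatment are illustrative choices, not figures from the source.

```python
K = 3000.0    # distribution coefficient: silver concentration in zinc / in lead
ratio = 0.01  # assumed zinc-to-lead mass ratio per treatment (hypothetical)

remaining = 1.0
for step in range(1, 4):
    # At ideal equilibrium, the lead retains 1 / (1 + K * ratio) of its silver.
    remaining *= 1.0 / (1.0 + K * ratio)
    print(f"after treatment {step}: {remaining:.4%} of the silver remains in the lead")
```

Even with only 1% zinc, each treatment removes roughly 97% of the remaining silver, which is why a few successive zinc additions desilver the lead almost completely.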
**Competent Crew** Competent Crew: Competent Crew is the entry-level course of the Royal Yachting Association for those who wish to be active crew members of a sailing yacht. It is a hands-on course, and by the end of the course participants should be able to steer, handle sails, keep a lookout, row a dinghy and assist in all the day-to-day duties on board. No pre-course knowledge or experience is assumed. The minimum duration of the course is 5 days. It may be run over 5 days, over 3 weekends, or over 3 days plus a weekend. For those who have done the Start Yachting course, this course can be completed in 3 or 4 days. There is no minimum age. Course content: Knowledge of sea terms and parts of a boat, sail handling, ropework, fire precautions and fighting, personal safety equipment, man overboard, emergency equipment, meteorology, seasickness, helmsmanship, general duties, manners and customs, rules of the road, dinghies. Progression: The Competent Crew course is one of a structured series of courses run by the RYA. Additional courses in the series include Day Skipper Shorebased and Practical, Coastal Skipper and Yachtmaster Shorebased, and Coastal Skipper Practical.
**Carathéodory's theorem (convex hull)** Carathéodory's theorem (convex hull): Carathéodory's theorem is a theorem in convex geometry. It states that if a point x lies in the convex hull Conv(P) of a set P⊂Rd, then x can be written as the convex combination of at most d+1 points in P. More sharply, x can be written as the convex combination of at most d+1 extremal points in P, as non-extremal points can be removed from P without changing the membership of x in the convex hull. Carathéodory's theorem (convex hull): Its equivalent theorem for conical combinations states that if a point x lies in the conical hull Cone(P) of a set P⊂Rd, then x can be written as the conical combination of at most d points in P (p. 257). The similar theorems of Helly and Radon are closely related to Carathéodory's theorem: the latter theorem can be used to prove the former theorems and vice versa. The result is named for Constantin Carathéodory, who proved the theorem in 1911 for the case when P is compact. In 1914 Ernst Steinitz expanded Carathéodory's theorem to arbitrary sets. Example: Carathéodory's theorem in 2 dimensions states that we can construct a triangle consisting of points from P that encloses any point in the convex hull of P. For example, let P = {(0,0), (0,1), (1,0), (1,1)}. The convex hull of this set is a square. Let x = (1/4, 1/4) in the convex hull of P. We can then construct a set {(0,0),(0,1),(1,0)} = P′, the convex hull of which is a triangle and encloses x. Proof: Note: we will only use the fact that R is an ordered field, so the theorem and proof work even when R is replaced by any field F together with a total order. We first formally state Carathéodory's theorem: if x∈Conv(P) for a set P⊂Rd, then x can be written as x = λ1x1 + ... + λrxr with x1, ..., xr∈P, λi ≥ 0, λ1 + ... + λr = 1 and r ≤ d+1. The essence of Carathéodory's theorem is in the finite case, stated as a lemma. This reduction to the finite case is possible because Conv(S) is the set of finite convex combinations of elements of S (see the convex hull page for details). With the lemma, Carathéodory's theorem is a simple extension. Alternative proofs use Helly's theorem or the Perron–Frobenius theorem. Variants: Carathéodory's number: For any nonempty P⊂Rd, define its Carathéodory number to be the smallest integer r such that for any x∈Conv(P), there exists a representation of x as a convex sum of up to r elements in P. Carathéodory's theorem simply states that any nonempty subset of Rd has Carathéodory number ≤ d+1. This upper bound is not necessarily reached. For example, the unit sphere in Rd has Carathéodory number equal to 2, since any point inside the sphere is the convex sum of two points on the sphere. Variants: With additional assumptions on P⊂Rd, upper bounds strictly lower than d+1 can be obtained. Dimensionless variant: Recently, Adiprasito, Barany, Mustafa and Terpai proved a variant of Carathéodory's theorem that does not depend on the dimension of the space. Colorful Carathéodory theorem: Let X1, ..., Xd+1 be sets in Rd and let x be a point contained in the intersection of the convex hulls of all these d+1 sets. Variants: Then there is a set T = {x1, ..., xd+1}, where x1 ∈ X1, ..., xd+1 ∈ Xd+1, such that the convex hull of T contains the point x. By viewing the sets X1, ..., Xd+1 as different colors, the set T is made up of points of all colors, hence the "colorful" in the theorem's name.
The set T is also called a rainbow simplex, since it is a d-dimensional simplex in which each corner has a different color. This theorem has a variant in which the convex hull is replaced by the conical hull (Thm. 2.2 of the cited source): let X1, ..., Xd be sets in Rd and let x be a point contained in the intersection of the conical hulls of all these d sets. Then there is a set T = {x1, ..., xd}, where x1 ∈ X1, ..., xd ∈ Xd, such that the conical hull of T contains the point x. Mustafa and Ray extended this colorful theorem from points to convex bodies. The computational problem of finding the colorful set lies in the intersection of the complexity classes PPAD and PLS.
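The square example above can be reproduced computationally. A Carathéodory representation can be found by solving the convex-combination feasibility problem as a linear program: the program has d+1 equality constraints (d coordinates plus the weights summing to 1), so any basic feasible solution automatically has at most d+1 nonzero weights. Below is a minimal Python sketch using numpy and scipy; it assumes the LP solver returns a basic (vertex) solution, as simplex-type methods do.

```python
import numpy as np
from scipy.optimize import linprog

def caratheodory(points, x, tol=1e-12):
    """Write x as a convex combination of at most d+1 of the given points."""
    P = np.asarray(points, dtype=float)        # shape (n, d)
    n, d = P.shape
    # Equality constraints: sum_i w_i p_i = x (d rows), sum_i w_i = 1 (1 row).
    A_eq = np.vstack([P.T, np.ones((1, n))])
    b_eq = np.append(np.asarray(x, dtype=float), 1.0)
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    if not res.success:
        raise ValueError("x is not in the convex hull of the points")
    support = np.flatnonzero(res.x > tol)      # at most d+1 indices
    return support, res.x[support]

# The example from the text: x = (1/4, 1/4) inside the unit square.
P = [(0, 0), (0, 1), (1, 0), (1, 1)]
idx, w = caratheodory(P, (0.25, 0.25))
print(idx, w)  # at most three points are used, matching d+1 = 3 in the plane
```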
**Iconv** Iconv: In Unix and Unix-like operating systems, iconv (an abbreviation of internationalization conversion) is a command-line program and a standardized application programming interface (API) used to convert between different character encodings. "It can convert from any of these encodings to any other, through Unicode conversion." History: Initially appearing on the HP-UX operating system, iconv() as well as the utility was standardized within XPG4 and is part of the Single UNIX Specification (SUS). Implementations: Most Linux distributions provide an implementation, either from the GNU Standard C Library (included since version 2.1, February 1999) or the more traditional GNU libiconv for systems based on other Standard C Libraries. The iconv function in both is licensed as LGPL, so it is linkable with closed source applications. Unlike the libraries, the iconv utility is licensed under the GPL in both implementations. The GNU libiconv implementation is portable and can be used on various UNIX-like and non-UNIX systems. Version 0.3 dates from December 1999. The uconv utility from International Components for Unicode provides an iconv-compatible command-line syntax for transcoding. Most BSD systems use NetBSD's implementation, which first appeared in December 2004. Support: Currently, over a hundred different character encodings are supported. Ports: Under Microsoft Windows, the iconv library and the utility are provided by GNU's libiconv, found in the Cygwin and GnuWin32 environments; there is also a "purely Win32" implementation called "win-iconv" that uses Windows' built-in routines for conversion. The iconv function is also available for many programming languages. The iconv command has also been ported to the IBM i operating system. Usage: stdin can be converted from ISO-8859-1 to the current locale's encoding and written to stdout using: iconv -f ISO-8859-1. An input file infile can be converted from ISO-8859-1 to UTF-8 and written to an output file outfile using: iconv -f ISO-8859-1 -t UTF-8 infile > outfile.
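For comparison, the same file conversion can be written in Python using its built-in codec machinery; this is an illustration only and does not go through the iconv() API:

```python
# Re-encode infile from ISO-8859-1 to UTF-8, mirroring the iconv
# invocation above with Python's built-in codecs.
with open("infile", "r", encoding="iso-8859-1") as src, \
     open("outfile", "w", encoding="utf-8") as dst:
    for line in src:
        dst.write(line)
```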
**Enamel-dentine fracture** Enamel-dentine fracture: An enamel-dentine fracture is a complete fracture of the tooth enamel and dentine without exposure of the pulp. Pulp sensibility testing is recommended to confirm pulpal health. Treatment depends on how close the fracture is to the pulp. If a tooth fragment is available, it can be bonded to the tooth. Otherwise, provisional treatment can be done, in which the exposed dentine is covered using glass ionomer cement, or a more permanent restoration can be placed using dental composite resin or other accepted restorative dental materials. If the exposed dentine is within 0.5 mm of the pulp, a pink appearance can be seen clinically, indicating close proximity to the pulp. In this case, calcium hydroxide is placed as a base and then covered with a material such as glass ionomer.
**Punter (football)** Punter (football): A punter (P) in gridiron football is a special teams player who receives the snapped ball directly from the line of scrimmage and then punts (kicks) the football to the opposing team so as to limit any field position advantage. This generally happens on a fourth down in American football and a third down in Canadian football. Punters may also occasionally take part in fake punts in those same situations, when they throw or run the football instead of punting. Skills and usage: The purpose of the punt is to force the team that is receiving the kick to start as far as possible from the punting team's end zone. Accordingly, the most effective punts land just outside the receiving team's end zone and either go out of bounds (making it impossible to advance the ball until the next play) or are kicked exceptionally high (allowing the kicking team time to run down the field and prevent the punt returner from advancing the ball). Punters therefore must be able to kick the ball high, far, and precisely. One standard is that a punt should be in the air for at least 1 second for every 10 yards it travels, though the linear relationship drops off once the punt exceeds 50 yards. Skills and usage: Punters may also impart a spin to the ball that makes it harder to catch, increasing the odds of a muff that may lead to the punter's team regaining possession. Skills and usage: The punter frequently serves as the holder on field goal attempts. The punter has typically developed chemistry with the long snapper and is thus accustomed to catching a long-snapped ball. Additionally, punters are also kickers and understand kicking mechanics, such as how far back to lean the ball as the kicker makes an attempt, and when a field goal attempt should be aborted. Punters may pass or run the ball on fake field goal attempts and fake punts. Skills and usage: Many punters also do double duty as kickoff specialists, as most punters have at one point been field goal kickers as well, and some, such as Craig Hentrich, have filled in as worthy backup field goal kickers. Punters seldom receive much attention or fan support, in part because they are called upon when a team's offense has just failed. Career lengths: Certain punters can have exceptionally long careers compared to other NFL position players (there is a similar tendency with kickers). One reason for this is that their limited time on the field and heavy protection by penalties against defensive players for late hits make them far less likely to be injured than players at other positions. Sean Landeta, for instance, played 19 NFL seasons and three USFL seasons for eight different teams. Jeff Feagles played 22 seasons as a punter, on five different teams. Career lengths: Conversely, placekickers and punters can also have very short careers, mainly because of a lack of opportunity. Because the risk of injury is remote, NFL teams typically only carry one punter on their roster at any given time. Thus, the only opportunity a punter has of breaking into the league is if the incumbent punter leaves the team or is injured. Some NFL teams will carry two punters during the preseason, but the second punter is typically "camp fodder" and seldom makes the opening day roster.
Unlike backups at other positions, backup placekickers and punters are not employed by any given team until they are needed; most indoor American football teams, because of smaller rosters and fields along with rules that either ban or discourage punting, do not employ punting specialists. Notable records: Bob Cameron of the Winnipeg Blue Bombers (CFL), in a 23-year career, has the most career punting yards, with 134,301 yards. Jeff Feagles holds the NFL record for career punting yards with 71,211 yards. He played from 1988 to 2009 for five different teams in the NFL. Notable records: Two CFL punters share the record for the longest punt in professional football history, at 108 yards. Such a punt is theoretically possible in American football, but would likely result in a touchback; moreover, it would require the line of scrimmage to be on the punting team's two-yard line, increasing the difficulty of achieving an exceptionally long punt. Notable records: Steve O'Neal set the record for the longest punt in a National Football League game in 1969, with a punt measuring 98 yards. It is the longest punt recorded in a game that did not end in a touchback. Draft status: Former Oakland Raiders player Ray Guy is the only pure punter to be inducted into the Pro Football Hall of Fame, as well as the only pure punter to be picked in the first round of the NFL Draft. Russell Erxleben was selected as the 11th pick in the first round of the 1979 draft by the New Orleans Saints as a punter, but performed other kicking duties as well. Guy is credited with raising the status of punters in the NFL because he proved to be a major ingredient in the Raiders' success during the 1970s by preventing opponents from gaining field position advantage. Evolution: Before Guy's arrival in Oakland, many teams trained a position player to double as a punter (the placekicker was likewise expected to "double up" at another position), even after the one-platoon system (which effectively required a punter to play offensive and defensive positions on top of their kicking duties) was abolished in the 1940s. The Green Bay Packers won Super Bowl I and Super Bowl II using running back Donny Anderson as their punter. The Packers' regular placekicker, Don Chandler, was an All-Pro punter with the New York Giants, but Vince Lombardi brought Chandler in from his old team to serve exclusively as a kicker after Paul Hornung, who set the NFL single-season scoring record with 176 points in 12 games in 1960, was suspended for gambling in 1963 and suffered a sharp decline in accuracy in 1964. Linebacker Paul Maguire served as a punter for the AFL-champion San Diego Chargers and Buffalo Bills in the 1960s. Evolution: The Kansas City Chiefs, who played in Super Bowl I and won Super Bowl IV, bucked the trend at the time by signing Jerrel Wilson as a punting specialist in 1966. Wilson punted for the Chiefs for 13 seasons, and combined with placekicker Jan Stenerud to give the team one of the best kicking combinations in the league. Evolution: Backup quarterbacks were commonly used to punt well into the 1970s. Steve Spurrier, who was stuck behind John Brodie at quarterback for the San Francisco 49ers, served as the team's primary punter for the first four years of his career. Bob Lee took on the same role for the Minnesota Vikings in the late 1960s and early 1970s, punting for the club in Super Bowl IV.
Evolution: Danny White played little as a backup quarterback to Roger Staubach with the Dallas Cowboys from 1976 through 1979, but was the team's primary punter from 1975 through 1984, when he gave up the kicking duties to Mike Saxon. One of the last examples of a punting quarterback was Tom Tupa. A quarterback and punter in college, Tupa started his career in the NFL as a quarterback but eventually settled into a role as a full-time punter and emergency quarterback. Evolution: Starting in the 1990s, some NFL teams turned to retired Australian rules football players to punt for them, as punting is a basic skill in that game. Darren Bennett, who played for the San Diego Chargers and Minnesota Vikings in his career, was one of the first successful Australian rules football players to make the jump from that sport's top professional competition, the Australian Football League (AFL), to the NFL, doing so in 1994. Ben Graham, who entered the league with the New York Jets, became the first AFL player to play in a Super Bowl when he played in Super Bowl XLIII with the Arizona Cardinals. Graham is now a free agent. Other former AFL players who made the transition to NFL punters include former NFL punter Mat McBriar and Sav Rocca, formerly of the Washington Redskins. In recent years, an increasing number of Australians have been making the transition to gridiron football at earlier ages, with a significant number now playing for U.S. college teams. Between 2013 and 2017, all five Ray Guy Awards, presented to the top punter in NCAA Division I football, were won by Australians: Tom Hornsey (Memphis, 2013), Tom Hackett (Utah, 2014 and 2015), Mitch Wishnowsky (Utah, 2016) and Michael Dickson (Texas, 2017). All three finalists for the 2016 award were Australians. In the 2018 season, nearly one-fourth of the schools in college football's top level, Division I FBS, had at least one Australian punter on their roster. Sam Koch revolutionized punting by developing many kick variations, enabled by his flexible hips, in an effort to increase net punting average by giving the ball variable trajectories and bounces, making it more difficult for returners to catch and return. The New England Patriots were noted for almost exclusively employing left-footed punters during the coaching tenure of Bill Belichick, who claimed it was unintentional. Left-footed punters have been increasingly used at the NFL level; at the start of the 2001 NFL season, there were 26 right-footed punters, four left-footed ones and one (Chris Hanson) who was dual-footed. By the 2017 NFL season, there were 22 right-footed punters and 10 left-footed ones. By the late 2010s and early 2020s, punters were highly specialized players on an NFL roster. Louis Bien of SB Nation wrote: "A punter's job is no longer simply to kick the ball high and far while fans hold their collective breath that this time isn’t the time when the ball flies sideways into the stands. No, punters are now neutralizing and terrorizing the most electric return men in the NFL with kicks that spin and move and bounce and flip in all sorts of unpredictable, terror-inducing ways."
**Frey's procedure** Frey's procedure: Frey's procedure is a surgical technique used in the treatment of chronic pancreatitis, in which the diseased portions of the pancreas head are cored out. A lateral pancreaticojejunostomy (LRLPJ) is then performed, in which a loop of the jejunum is mobilized and attached over the exposed pancreatic duct to allow better drainage of the pancreas, including its head. Indication: Frey's operation is indicated in patients with chronic pancreatitis who have "head dominant" disease. Comparison to Puestow procedure: Compared with a Puestow procedure, a Frey's procedure allows for better drainage of the pancreatic head. Complications: Postoperative complications after LRLPJ are usually septic in nature and are likely to occur more often in patients in whom endoscopic pancreatic stenting has been performed before surgical intervention. Pancreatic endocrine insufficiency occurs in 60% of patients. Eponym: It is named for the American surgeon Charles Frederick Frey (b. 1929) of Michigan, who first described it in 1987.
**Digital prototyping** Digital prototyping: Digital Prototyping gives conceptual design, engineering, manufacturing, and sales and marketing departments the ability to virtually explore a complete product before it's built. Industrial designers, manufacturers, and engineers use Digital Prototyping to design, iterate, optimize, validate, and visualize their products digitally throughout the product development process. Innovative digital prototypes can be created via CAutoD through intelligent and near-optimal iterations, meeting multiple design objectives (such as maximised output, energy efficiency, highest speed and cost-effectiveness), identifying multiple figures of merit, and reducing development gearing and time-to-market. Marketers also use Digital Prototyping to create photorealistic renderings and animations of products prior to manufacturing. Companies often adopt Digital Prototyping with the goal of improving communication between product development stakeholders, getting products to market faster, and facilitating product innovation. Digital prototyping: Digital Prototyping goes beyond simply creating product designs in 3D. It gives product development teams a way to assess the operation of moving parts, to determine whether or not the product will fail, and see how the various product components interact with subsystems—either pneumatic or electric. By simulating and validating the real-world performance of a product design digitally, manufacturers often can reduce the number of physical prototypes they need to create before a product can be manufactured, reducing the cost and time needed for physical prototyping. Many companies use Digital Prototyping in place of, or as a complement to, physical prototyping.Digital Prototyping changes the traditional product development cycle from design>build>test>fix to design>analyze>test>build. Instead of needing to build multiple physical prototypes and then testing them to see if they'll work, companies can conduct testing digitally throughout the process by using Digital Prototyping, reducing the number of physical prototypes needed to validate the design. Studies show that by using Digital Prototyping to catch design problems up front, manufacturers experience fewer change orders downstream. Because the geometry in digital prototypes is highly accurate, companies can check interferences to avoid assembly issues that generate change orders in the testing and manufacturing phases of development. Companies can also perform simulations in early stages of the product development cycle, so they avoid failure modes during testing or manufacturing phases. With a Digital Prototyping approach, companies can digitally test a broader range of their product's performance. They can also test design iterations quickly to assess whether they're over- or under-designing components. Digital prototyping: Research from the Aberdeen Group shows that manufacturers that use Digital Prototyping build half the number of physical prototypes as the average manufacturer, get to market 58 days faster than average, and experience 48 percent lower prototyping costs. History of Digital Prototyping: The concept of Digital Prototyping has been around for over a decade, particularly since software companies such as Autodesk, PTC, Siemens PLM (formerly UGS), and Dassault began offering computer-aided design (CAD) software capable of creating accurate 3D models. 
History of Digital Prototyping: It may even be argued that the product lifecycle management (PLM) approach was the harbinger of Digital Prototyping. PLM is an integrated, information-driven approach to a product's lifecycle, from development to disposal. A major aspect of PLM is coordinating and managing product data among all software, suppliers, and team members involved in the product's lifecycle. Companies use a collection of software tools and methods to integrate people, data, and processes to support singular steps in the product's lifecycle or to manage the product's lifecycle from beginning to end. PLM often includes product visualization to facilitate collaboration and understanding among the internal and external teams that participate in some aspect of a product's lifecycle. History of Digital Prototyping: While the concept of Digital Prototyping has been a longstanding goal for manufacturing companies for some time, it's only recently that Digital Prototyping has become a reality for small-to-midsize manufacturers that cannot afford to implement complex and expensive PLM solutions. Digital Prototyping and PLM: Large manufacturing companies rely on PLM to link otherwise unconnected, siloed activities, such as concept development, design, engineering, manufacturing, sales, and marketing. PLM is a fully integrated approach to product development that requires investments in application software, implementation, and integration with enterprise resource planning (ERP) systems, as well as end-user training and a sophisticated IT staff to manage the technology. PLM solutions are highly customized and complex to implement, often requiring a complete replacement of existing technology. Because of the high expense and IT expertise required to purchase, deploy, and run a PLM solution, many small-to-midsized manufacturers cannot implement PLM. Digital Prototyping and PLM: Digital Prototyping is a viable alternative to PLM for these small-to-midsized manufacturers. Like PLM, Digital Prototyping seeks to link otherwise unconnected, siloed activities, such as concept development, design, engineering, manufacturing, sales, and marketing. However, unlike PLM, Digital Prototyping does not support the entire product development process from conception to disposal, but rather focuses on the design-to-manufacture portion of the process. The realm of Digital Prototyping ends when the digital product and the engineering bill of materials are complete. Digital Prototyping aims to resolve many of the same issues as PLM without involving a highly customized, all-encompassing software deployment. With Digital Prototyping, a company may choose to address one need at a time, making the approach more pervasive as its business grows. Other differences between Digital Prototyping and PLM include: Digital Prototyping involves fewer participants than PLM. Digital Prototyping and PLM: Digital Prototyping has a less complex process for collecting, managing, and sharing data. Manufacturers can keep product development activities separate from operations management with Digital Prototyping. Digital Prototyping solutions don't need to be integrated with ERP (but can be), customer relationship management (CRM), and project and portfolio management (PPM) software. 
Digital Prototyping Workflow: A Digital Prototyping workflow involves using a single digital model throughout the design process to bridge the gaps that typically exist between workgroups such as industrial design, engineering, manufacturing, sales, and marketing. Product development can be broken into the following general phases at most manufacturing companies: conceptual design, engineering, manufacturing, customer involvement, and marketing communications. Conceptual Design: The conceptual design phase involves taking customer input or market requirements and data to create a product design. In a Digital Prototyping workflow, designers work digitally, from the very first sketch, throughout the conceptual design phase. They capture their designs digitally, and then share that data with the engineering team using a common file format. The industrial design data is then incorporated into the digital prototype to ensure technical feasibility. Digital Prototyping Workflow: In a Digital Prototyping workflow, designers and their teams review digital design data via high-quality digital imagery or renderings to make informed product design decisions. Designers may create and visualize several iterations of a design, changing things like materials or color schemes, before a concept is finalized. Digital Prototyping Workflow: Engineering: During the engineering phase of the Digital Prototyping workflow, engineers create the product's 3D model (the digital prototype), integrating design data developed during the conceptual design phase. Teams also add electrical systems design data to the digital prototype while it's being developed, and evaluate how different systems interact. At this stage of the workflow, all data related to the product's development is fully integrated into the digital prototype. Working with mechanical, electrical, and industrial design data, companies engineer every last product detail in the engineering phase of the workflow. At this point, the digital prototype is a fully realistic digital model of the complete product. Digital Prototyping Workflow: Engineers test and validate the digital prototype throughout their design process to make the best possible design decisions and avoid costly mistakes. Using the digital prototype, engineers can perform integrated calculations and stress, deflection, and motion simulations to validate designs; test how moving parts will work and interact; evaluate different solutions to motion problems; test how the design functions under real-world constraints; conduct stress analysis to analyze material selection and displacement; and verify the strength of a part. By incorporating integrated calculations, stress, deflection, and motion simulations into the Digital Prototyping workflow, companies can speed development cycles by minimizing physical prototyping phases. By implementing a digital prototype of a partially or fully automated vehicle and its sensor suite in a dynamic co-simulation of traffic flow and vehicle dynamics, a novel toolchain methodology comprising virtual testing is available to the automotive industry for the development of automated driving functions. Also during the engineering phase of the Digital Prototyping workflow, engineers create documentation required by the production team. Digital Prototyping Workflow: Manufacturing: In a Digital Prototyping workflow, manufacturing teams are involved early in the design process.
This input helps engineers and manufacturing experts work together on the digital prototype throughout the design process to ensure that the product can be produced cost-effectively. Manufacturing teams can see the product exactly as it's intended, and provide input on manufacturability. Companies can perform molding simulations on digital prototypes for plastic parts and injection molds to test the manufacturability of their designs, identifying potential manufacturing defects before they cut mold tooling. Digital Prototyping Workflow: Digital Prototyping also enables product teams to share detailed assembly instructions digitally with manufacturing teams. While paper assembly drawings can be confusing, 3D visualizations of digital prototypes are unambiguous. This early and clear collaboration between manufacturing and engineering teams helps minimize manufacturing problems on the shop floor. Finally, manufacturers can use Digital Prototyping to visualize and simulate factory-floor layouts and production lines. They can check for interferences to detect potential issues such as space constraints and equipment collisions. Digital Prototyping Workflow: Customer Involvement: Customers are involved throughout the Digital Prototyping workflow. Rather than waiting for a physical prototype to be complete, companies that use Digital Prototyping bring customers into the product development process early. They show customers realistic renderings and animations of the product's digital prototype so they'll know what the product looks like and how it will function. This early customer involvement helps companies get sign-off up front, so they don't waste time designing, engineering, and manufacturing a product that doesn't fulfill the customer's expectations. Digital Prototyping Workflow: Marketing: Using 3D CAD data from the digital prototype, companies can create realistic visualizations, renderings, and animations to market products in print, on the web, in catalogues, or in television commercials. Without needing to produce expensive physical prototypes and conduct photo shoots, companies can create virtual photography and cinematography nearly indistinguishable from reality. One aspect of this is creating the illumination environment for the subject, an area of new development. Digital Prototyping Workflow: Realistic visualizations not only help marketing communications, but the sales process as well. Companies can respond to requests for proposals and bid on projects without building physical prototypes, using visualizations to show the potential customer what the end product will be like. In addition, visualizations can help companies bid more accurately by making it more likely that everyone has the same expectations about the end product. Companies can also use visualizations to facilitate the review process once they've secured the business. Reviewers can interact with digital prototypes in realistic environments, allowing for the validation of design decisions early in the product development process. Digital Prototyping Workflow: Connecting Data and Teams: To support a Digital Prototyping workflow, companies use data management tools to coordinate all teams at every stage in the workflow, streamline design revisions and automate release processes for digital prototypes, and manage engineering bills of materials. These data management tools connect all workgroups to critical Digital Prototyping data.
Digital Prototyping and Sustainability: Companies increasingly use Digital Prototyping to understand the sustainability factors of new product designs and to help meet customer requirements for sustainable products and processes. They minimize material use by assessing multiple design scenarios to determine the optimal amount and type of material required to meet product specifications. In addition, by reducing the number of physical prototypes required, manufacturers can reduce their material waste. Digital Prototyping can also help companies reduce the carbon footprint of their products. For example, WinWinD, a maker of wind turbines, uses Digital Prototyping to optimize the energy production of its wind-power turbines for varying wind conditions. Furthermore, the rich product data supplied by Digital Prototyping can help companies demonstrate conformance with the growing number of product-related environmental regulations and voluntary sustainability standards.
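The "assess multiple design scenarios to minimize material" idea can be reduced to a toy example. The sketch below, a hedged illustration with entirely hypothetical load, geometry, and stress limits, sweeps candidate section heights for a simple cantilevered bracket and picks the smallest one whose peak bending stress, computed from the closed-form formula σ = FLc/I, stays within an allowable limit.

```python
# Hedged sketch: a toy material-minimization sweep of the kind the
# sustainability section describes. Numbers are hypothetical; real
# tools use finite-element analysis, not closed-form beam formulas.

def max_stress_pa(force_n, length_m, width_m, height_m):
    """Peak bending stress of an end-loaded rectangular cantilever."""
    inertia = width_m * height_m ** 3 / 12.0   # second moment of area
    return force_n * length_m * (height_m / 2.0) / inertia

def lightest_height(force_n, length_m, width_m, allowable_pa, candidates_mm):
    """Smallest candidate section height whose stress is allowable."""
    for h_mm in sorted(candidates_mm):
        if max_stress_pa(force_n, length_m, width_m, h_mm / 1000.0) <= allowable_pa:
            return h_mm
    return None  # no candidate passes; the design space must widen

if __name__ == "__main__":
    # Hypothetical scenario: 100 N load, 0.2 m arm, 20 mm wide section,
    # 95 MPa allowable stress (roughly a soft aluminum alloy).
    best = lightest_height(100.0, 0.2, 0.02, allowable_pa=95e6,
                           candidates_mm=[4, 5, 6, 8, 10, 12])
    print(f"lightest passing section height: {best} mm")  # -> 8 mm
```

Thinner sections use less material but overstress; the sweep settles on the 8 mm candidate, the same trade-off a digital prototype lets engineers explore without cutting any metal.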
**International Graphoanalysis Society** IGAS is the abbreviation for the International Graphoanalysis Society; the organization is far more commonly referred to by its initials than by its full name. IGAS traces its beginnings back to 1929, when Milton N. Bunker formed The American Grapho Analysis Society, and ownership of the organization has changed many times since it was founded. Around 1957, that organization was replaced by The International Graphoanalysis Society, which was run by V. Peter Ferrara. Upon V. Peter Ferrara's death, ownership of the company fell to his daughter, Kathleen Kusta. In June 2003, Kathleen Kusta sold most of the assets of IGAS by private auction to Greg Greco. From the early seventies through the early eighties, the organization put energy into graphological research, the most important studies being Crumbaugh & Stockholm (1977) and Stockholm (1980), (1983). Members: IGAS is a privately held corporation. As such, information about its finances, membership numbers, actual number of graduates, and related items cannot be independently verified. The masthead of its publication, The Journal of Graphoanalysis, lists the organization's current claimed statistical data. IGAS has roughly 30 chapters, covering the United States, parts of Canada, the UK, and South Africa. The highest reported membership number, which was issued to a student of Graphoanalysis, was just over 50,000. There are no reliable figures on the number of students at any specific time. Both the number of dues-paying members and the number of students are believed to have peaked during the late seventies, just before IGAS headquarters started the wholesale purging of members and chapters. The Courses: Eight Basic Steps in Graphoanalysis is the beginning course that many members teach to people interested in handwriting analysis. The General Course of Graphoanalysis is the course taught by IGAS; graduates of that course are awarded the designation Certified Graphoanalyst, more commonly referred to as CGA. Graduates of The Advanced Course of Graphoanalysis are awarded the designation of Master Graphoanalyst, or MGA. Attendees of the annual Congress are awarded Three Year Study Certificates or Six Year Study Certificates upon attending the Congress the appropriate number of times. The monthly study packet is a four-page lesson that challenges members to improve their ability to analyze handwriting; beginning in the early 1990s, it was incorporated into IGAS's monthly journal publication. The Dissenters: Because of the tight control that IGAS had over its members, the field of handwriting analysis is functionally divided into two groups: Graphoanalysts and Graphologists. A clause responsible for the expulsion of hundreds of IGAS members between 1970 and 1990 was the last clause of the 1980 Code of Ethics of IGAS: "Further, I will not affiliate with any group of handwriting analysts not sanctioned by the International Graphoanalysis Society, Inc." In 1957, Charlie Cole set up a series of graphology lectures, which evolved into The American Handwriting Analysis Foundation. The lectures were intended for graduates of the MGA program only. Klara G. Roman gave the first series of lectures; later lectures were given by other holistic graphologists.
As a result of those lectures, Charlie Cole and most of the people who attended the lecture series were expelled from IGAS. Handwriting Analysts of Minnesota was another group that formed as a direct result of an expulsion: the entire chapter was expelled for the "unethical conduct" of having a holistic graphologist lecture at its quarterly meeting. The list of people who were thus expelled goes on and on. The net result is that the majority of currently active organizations of handwriting analysts in the United States were formed because of this wall of separation that IGAS required its members to keep. Reference Texts: Crumbaugh, James C. & Stockholm, Emilie (1977). Validation of Graphoanalysis by "Global" or "Holistic" Method. Perceptual and Motor Skills, April 1977, 44(2), 403–410. Stockholm, Emilie (1980). Statistical Data for Basic Traits of Graphoanalysis: IGAS Trait Norm Project. Perceptual and Motor Skills, 1980, 51, 220–222. Stockholm, Emilie (1983). Research Department Releases Findings of New Reliability Study. Journal of Graphoanalysis, December 1983, 3–4. Graphology Journals: The Canadian Analyst, published by Alex Sjoberg, documented most of the history of both The American Grapho Analysis Society and IGAS; prior to its demise, it was the longest-running graphological periodical in North America. The Grapho Analyst was published by IGAS from 1940 to 1962, and The Journal of Graphoanalysis from 1962 to 2003. The IGAS Journal has been published from 2004 to the present. For Further Research: Handwriting Analysis Research Library, Greenfield, MA. Appointments are required to visit the library.
**Protamine** Protamines are small, arginine-rich nuclear proteins that replace histones late in the haploid phase of spermatogenesis and are believed to be essential for sperm head condensation and DNA stabilization. They may allow for denser packaging of DNA in the spermatozoon than histones, but the DNA must be decondensed before the genetic information can be used for protein synthesis. However, part of the sperm's genome is packaged by histones (10–15% in humans and other primates), thought to bind genes that are essential for early embryonic development. Protamines and protamine-like (PL) proteins are collectively known as the sperm-specific nuclear basic proteins (SNBPs). The PL proteins are intermediate in structure between protamine and histone H1, and the C-terminal domain of PL could be the precursor of vertebrate protamine. Spermatogenesis: During the formation of sperm, protamine binds to the phosphate backbone of DNA using the arginine-rich domain as an anchor. DNA is then folded into a toroid, an O-shaped structure, although the mechanism is not known. A sperm cell can contain up to 50,000 toroid-shaped structures in its nucleus, with each toroid containing about 50 kilobases. Before the toroid is formed, histones are removed from the DNA by transition nuclear proteins so that protamine can condense it. The effects of this change are (1) an increase in sperm hydrodynamics, for better flow through liquids, by reducing the head size; (2) a decrease in the occurrence of DNA damage; and (3) removal of the epigenetic markers that occur with histone modifications. The structure of the sperm head is also related to protamine levels. The ratio of protamine 2 to protamine 1 and transition nuclear proteins has been found to change the sperm head shape in various species of mice, by altering the expression of protamine 2 via mutations in its promoter region. A decrease in the ratio has been found to increase the competitive ability of sperm in Mus species. However, further testing is required to determine how this ratio influences the shape of the head and whether monogamy influences this selection. In humans, studies show that men who have an unbalanced Prm1/Prm2 ratio are subfertile or infertile. Protamine 2 is encoded as a longer protein whose N-terminus must be cleaved before it becomes functional. Human and chimpanzee protamines have undergone rapid evolution. Medical uses: When mixed with insulin, protamines slow the onset and increase the duration of insulin action (see NPH insulin). Protamine is used in cardiac surgery, vascular surgery, and interventional radiology procedures to neutralize the anti-clotting effects of heparin. Adverse effects include increased pulmonary artery pressure and decreases in peripheral blood pressure, myocardial oxygen consumption, cardiac output, and heart rate. Protamine sulfate is an antidote for heparin overdose, but severe allergy may occur. A chain-shortened version of protamine also acts as a potent heparin antagonist, but with markedly reduced antigenicity; it was initially produced as a mixture made by thermolysin digestion of protamine, but the actual effective peptide portion, VSRRRRRRGGRRRR, has since been isolated, and an analogue of this peptide has also been produced. In gene therapy, protamine sulfate's ability to condense plasmid DNA, along with its approval by the U.S. Food and Drug Administration (FDA), has made it an appealing candidate to increase transduction rates by both viral and nonviral (e.g., cationic liposome) delivery mechanisms.
Protamine may also have potential as a drug to prevent obesity: it has been shown to deter increases in body weight and low-density lipoprotein in rats on a high-fat diet. This effect occurs through the inhibition of lipase activity, the enzyme responsible for triacylglycerol digestion and absorption, resulting in decreased absorption of dietary fat. No liver damage was found when the rats were treated with protamine. However, the emulsification of long-chain fatty acids for digestion and absorption in the small intestine is less consistent in humans than in rats, which may vary the effectiveness of protamine as a drug. Furthermore, human peptidases may degrade protamine at different rates, so further tests are required to determine protamine's ability to prevent obesity in humans. Species distribution and isoforms: Mice, humans, and certain fish have two or more different protamines, whereas the sperm of bulls and boars have one form of protamine, due to a mutation in the PRM2 gene. In the rat, although the gene for PRM2 is present, expression of the protein is extremely low because of limited transcription from an inefficient promoter, in addition to altered processing of the mRNA transcript. Mammals: The two human protamines are denoted PRM1 and PRM2. In mice and humans, PRM1, PRM2, and TNP2 are co-located in a conserved gene cluster. Eutherian mammals generally have both PRM1 and PRM2; metatherians, on the other hand, have only a homolog of P1. Fish: Examples of protamines from fish are salmine and protamine sulfate from salmon; clupeine from herring sperm (Clupea); iridine from rainbow trout; thinnine from tunafish (Thunnus); stelline from starry sturgeon (Acipenser stellatus); and scylliorhinine from dogfish (Scylliorhinus). Fish protamines are generally shorter than those of mammals, with a higher proportion of arginine. Sequence: The primary structure of protamine P1, the protamine used for packaging DNA in sperm cells, is usually 49 or 50 amino acids long in placental mammals. This sequence is divided into three separate domains: an arginine-rich domain for DNA binding, flanked by shorter peptide sequences containing mostly cysteine residues. The arginine-rich domain consists of 3–11 arginine residues and is conserved between fish protamine and mammalian protamine 1 sequences at about 60–80% sequence identity. Structure: After translation, the protamine P1 structure is immediately phosphorylated at all three of the above-mentioned domains. Another round of phosphorylation occurs when the sperm enters the egg, but the function of these phosphorylations is uncertain. The exact secondary and tertiary structure of protamine is not known with certainty, but several proposals have been published since the 1970s. The broad consensus is that protamine forms beta-strand structures that then crosslink through disulfide bonds (and potentially dityrosine and cysteine-tyrosine bonds). When protamine P1 binds to DNA, a cysteine from the amino terminus of one protamine P1 forms a disulfide bond with a cysteine from the carboxy terminus of another protamine P1. By neutralizing the backbone charge, protamine enables the DNA to coil more tightly. The disulfide bonds prevent the dissociation of protamine P1 from DNA until the bonds are reduced when the sperm enters the egg. These long protamine polymers may then wrap around the DNA within the major groove.
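As a rough consistency check on the packing figures quoted in the spermatogenesis section above (up to 50,000 toroids of about 50 kilobases each, with 10–15% of the genome remaining histone-bound in humans), the short Python sketch below works through the arithmetic. The haploid genome size used is the commonly cited approximate value of 3.1 gigabases, an assumption rather than a figure from this article.

```python
# Back-of-the-envelope check of the protamine toroid packing figures.
# Assumption (not from the article): human haploid genome ~ 3.1 Gb.

TOROIDS_PER_SPERM = 50_000      # "up to 50,000 toroid-shaped structures"
KILOBASES_PER_TOROID = 50       # "about 50 kilobases" per toroid
HAPLOID_GENOME_BP = 3.1e9       # assumed approximate haploid genome size

protamine_packaged_bp = TOROIDS_PER_SPERM * KILOBASES_PER_TOROID * 1_000
fraction_in_toroids = protamine_packaged_bp / HAPLOID_GENOME_BP

print(f"DNA in toroids: {protamine_packaged_bp / 1e9:.2f} Gb "
      f"({fraction_in_toroids:.0%} of an assumed 3.1 Gb haploid genome)")
# -> ~2.50 Gb, roughly 81%; broadly consistent with 10-15% of the
#    genome staying histone-bound, given that both inputs are "up to"
#    upper-bound figures rather than averages.
```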
**The Manhole** The Manhole is an adventure video game in which the player opens a manhole and reveals a gigantic beanstalk leading to fantastic worlds. Summary: The game was first released on floppy disks in 1988 by Cyan, Inc. (now Cyan Worlds) and distributed through mail order. In 1989, it was produced for Activision as a CD-ROM version based on the floppy disk game. This version was the first computer game distributed on CD-ROM (although two games had already been released in late 1988 in Japan for NEC's PC Engine game console on its CD-ROM² format). It runs in black-and-white on the Macintosh line of computers. It was created using the HyperTalk programming language by brothers Rand and Robyn Miller, who founded the company Cyan and would go on to produce the best-selling adventure game Myst. The Manhole was later also released for the PC Engine and FM Towns. The game was re-released for MS-DOS twice, once in 1992 by Activision as The Manhole: New and Enhanced (including a Windows 3.1 version) and again in 1995 as The Manhole: CD-ROM Masterpiece Edition by Broderbund, which featured color, music, voice, sound effects, and some new characters. Cyan artist Chuck Carter designed all of the color graphics in about three months using StrataVision 3D. In 2007, the game was released on GameTap. As of February 2011, the game is available from GOG.com, iTunes, and as part of the "Cyan Complete Pack" on Steam. The Manhole is a notable computer game because, like Cosmic Osmo and Spelunx, it has no goal and no end; as a software toy, the object is simply to explore and have fun. Reception: Describing The Manhole as "the first children's software to require a hard disk", Macworld in March 1989 stated that its "realistic sounds, the fantasy-filled graphics, and the stack construction are truly impressive". The magazine "highly recommended [the game] for young children[, and] it's hard to imagine a playful soul of any age who wouldn't enjoy exploring the mind-tickling world inside The Manhole". The Manhole won a Software Publishers Association Excellence in Software Award in 1989 for Best New Use of a Computer.
**Autosuggestion** Autosuggestion is a psychological technique related to the placebo effect, developed by pharmacist Émile Coué at the beginning of the 20th century. It is a form of self-induced suggestion in which individuals guide their own thoughts, feelings, or behavior. The technique is often used in self-hypnosis. Typological distinctions: Émile Coué identified two very different types of self-suggestion: intentional, "reflective autosuggestion", made by deliberate and conscious effort, and unintentional, "spontaneous auto-suggestion", which is a "natural phenomenon of our mental life … which takes place without conscious effort [and has its effect] with an intensity proportional to the keenness of [our] attention". In relation to Coué's group of "spontaneous auto-suggestions", his student Charles Baudouin (1920, p. 41) made three further useful distinctions, based upon the sources from which they came: "Instances belonging to the representative domain (sensations, mental images, dreams, visions, memories, opinions, and all intellectual phenomena)." "Instances belonging to the affective domain (joy or sorrow, emotions, sentiments, tendencies, passions)." "Instances belonging to the active or motor domain (actions, volitions, desires, gestures, movements at the periphery or in the interior of the body, functional or organic modifications)." Émile Coué: Émile Coué, who had both B.A. and B.Sc. degrees before he was 21, graduated top of his class (with First Class Honours) with a degree in pharmacology from the prestigious Collège Sainte-Barbe in Paris in 1882. Having spent an additional six months as an intern at the Necker-Enfants Malades Hospital in Paris, he returned to Troyes, where he worked as an apothecary from 1882 to 1910. Émile Coué: "Hypnosis" à la Ambroise-Auguste Liébeault and Hippolyte Bernheim In 1885, his investigations of hypnotism and the power of the imagination began with Ambroise-Auguste Liébeault and Hippolyte Bernheim, two leading exponents of "hypnosis" based in Nancy, with whom he studied in 1885 and 1886 (having taken leave from his business in Troyes). Following this training, "he dabbled with 'hypnosis' in Troyes in 1886, but soon discovered that Liébeault's techniques were hopeless, and abandoned 'hypnosis' altogether". Émile Coué: Hypnotism à la James Braid and Xenophon LaMotte Sage In 1901, Coué sent to the United States for a free book, Hypnotism as It is (i.e., Sage, 1900a), which purported to disclose "secrets [of the] science that brings business and social success" and "the hidden mysteries of personal magnetism, hypnotism, magnetic healing, etc.". Deeply impressed by its contents, he purchased the French language version of the associated correspondence course (i.e., Sage, 1900b, and 1900c), created by stage hypnotist extraordinaire, "Professor Xenophon LaMotte Sage, A.M., Ph.D., LL.D., of Rochester, New York" (who had been admitted into the prestigious Medico-Legal Society of New York in 1899). Émile Coué: In real life, Xenophon LaMotte Sage was none other than Ewing Virgil Neal (1868-1949), the multi-millionaire, calligrapher, hypnotist, publisher, advertising/marketing pioneer (he launched the career of Carl R.
Byoir), pharmaceutical manufacturer, parfumier, international businessman, confidant of Mussolini, Commandatore of the Order of the Crown of Italy, Officer of the Legion of Honour, and fugitive from justice, who moved to France in the 1920s. Sage's course supplied the missing piece of the puzzle, namely Braid-style hypnotic inductions, the solution for which had, up to that time, eluded Coué: "Coué immediately recognised that the course's Braid-style of hypnotism was ideal for mental therapeutics. He undertook an intense study, and was soon skilled enough to offer hypnotism alongside his pharmaceutical enterprise. In the context of Liébeault's 'hypnosis', Braid's hypnotism, and Coué's (later) discoveries about autosuggestion, one must recognise the substantially different orientations of Liébeault's 'suggestive therapeutics', which concentrated on imposing the coercive power of the operator's suggestion, and Braid's 'psycho-physiology', which concentrated on activating the transformative power of the subject's mind." Although he had abandoned Liébeault's "hypnosis" in 1886, he adopted Braid's hypnotism in 1901; and, in fact, in addition to, and (often) separate from, his auto-suggestive practices, Coué actively used Braid's hypnotism for the rest of his professional life. Émile Coué: Suggestion and Auto-suggestion Coué was so deeply impressed by Bernheim's concept of "suggestive therapeutics" (in effect, "an imperfect re-branding of the 'dominant idea' theory that Braid had appropriated from Thomas Brown") that, on his return to Troyes from his (1885–1886) interlude with Liébeault and Bernheim, he made a practice of reassuring his clients by praising each remedy's efficacy. He noticed that, in specific cases, he could increase a medicine's efficacy by praising its effectiveness. He realized that, when compared with those to whom he said nothing, those to whom he praised the medicine had a noticeable improvement (this is suggestive of what would later be identified as a "placebo response"). Émile Coué: "Around 1903, Coué recommended a new patent medicine, based on its promotional material, which effected an unexpected and immediate cure (Baudouin, 1920, p.90; Shrout, 1985, p.36). Coué (the chemist) found "[by subsequent] chemical analysis in his laboratory [that there was] nothing in the medicine which by the remotest stretch of the imagination accounted for the results" (Shrout, ibid.). Coué (the hypnotist) concluded that it was cure by suggestion; but, rather than Coué having cured him, the man had cured himself by continuously telling himself the same thing that Coué had told him." The birth of "Conscious Autosuggestion": Coué discovered that subjects could not be hypnotized against their will and, more importantly, that the effects of hypnotic suggestion waned when the subjects regained consciousness. He thus eventually developed the Coué method, and released his first book, Self-Mastery Through Conscious Autosuggestion (published in 1920 in England and two years later in the United States). He described autosuggestion itself as: ... an instrument that we possess at birth, and with which we play unconsciously all our life, as a baby plays with its rattle. It is however a dangerous instrument; it can wound or even kill you if you handle it imprudently and unconsciously. It can on the contrary save your life when you know how to employ it consciously.
The birth of "Conscious Autosuggestion": Although Coué never doubted pharmaceutical medicine, and still advocated its application, he also came to believe that one's mental state could positively affect, and even amplify, the pharmaceutical action of medication. He observed that those patients who used his mantra-like conscious suggestion, "Every day, in every way, I'm getting better and better", (French: Tous les jours, à tous points de vue, je vais de mieux en mieux; lit. 'Every day, from all points of view, I'm getting better and better') — in his view, replacing their "thought of illness" with a new "thought of cure", could augment their pharmaceutical regimen in an efficacious way. The birth of "Conscious Autosuggestion": Conceptual difference from Autogenic Training By contrast with the conceptualization driving Coué's auto-suggestive self-administration procedure — namely, that constant repetition creates a situation in which "a particular idea saturates the microcognitive environment of 'the mind'…", which, then, in its turn, "is converted into a corresponding ideomotor, ideosensory, or ideoaffective action, by the ideodynamic principle of action", "which then, in its turn, generates the response" — the primary target of the entirely different self-administration procedure developed by Johannes Heinrich Schultz, known as Autogenic Training, was to affect the autonomic nervous system, rather than (as Coué's did) to affect 'the mind'. The Coué method: The Coué method centers on a routine repetition of this particular expression according to a specified ritual, in a given physical state, and in the absence of any sort of allied mental imagery, at the beginning and at the end of each day. Coué maintained that curing some of our troubles requires a change in our subconscious/unconscious thought, which can only be achieved by using our imagination. Although stressing that he was not primarily a healer but one who taught others to heal themselves, Coué claimed to have affected organic changes through autosuggestion. The Coué method: Underlying principles Coué thus developed a method which relied on the belief that any idea exclusively occupying the mind turns into reality, although only to the extent that the idea is within the realm of possibility. For instance, a person without hands will not be able to make them grow back. However, if a person firmly believes that his or her asthma is disappearing, then this may actually happen, as far as the body is actually able to physically overcome or control the illness. On the other hand, thinking negatively about the illness (e.g. "I am not feeling well") will encourage both mind and body to accept this thought. The Coué method: Willpower Coué observed that the main obstacle to autosuggestion was willpower. For the method to work, the patient must refrain from making any independent judgment, meaning that he must not let his will impose its own views on positive ideas. Everything must thus be done to ensure that the positive "autosuggestive" idea is consciously accepted by the patient, otherwise one may end up getting the opposite effect of what is desired.Coué noted that young children always applied his method perfectly, as they lacked the willpower that remained present among adults. When he instructed a child by saying "clasp your hands" and then "you can't pull them apart" the child would thus immediately follow his instructions and be unable to unclasp their hands. 
The Coué method: Self-conflict Coué believed a patient's problems were likely to increase if his willpower and imagination opposed each other, something Coué referred to as "self-conflict". As the conflict intensifies, so does the problem; i.e., the more the patient consciously wants to sleep, the more awake he becomes. The patient must thus abandon his willpower and instead put more focus on his imaginative power in order to fully succeed with his cure. The Coué method: Effectiveness With his method, which Coué called "un truc", patients of all sorts would come to visit him. The list of ailments included kidney problems, diabetes, memory loss, stammering, weakness, atrophy, and all sorts of physical and mental illnesses. According to one of his journal entries (1916), he apparently cured a patient of a uterine prolapse as well as of "violent pains in the head" (migraine). The Coué method: Evidence Advocates of autosuggestion appeal to brief case histories published by Émile Coué describing his use of autohypnosis to cure, for example, enteritis and paralysis from spinal cord injury. The Coué method: Autogenic training Autogenic training is an autosuggestion-centered relaxation technique influenced by the Coué method. In 1932, German psychiatrist Johannes Schultz developed and published on autogenic training. Unlike autosuggestion, autogenic training has been proven in clinical trials and, along with other relaxation techniques such as progressive relaxation and meditation, has replaced autosuggestion in therapy. The co-author of Schultz's multi-volume tome on autogenic training, Wolfgang Luthe, was a firm believer that autogenic training was a powerful approach that should only be offered to patients by qualified professionals. Its effectiveness has been confirmed in several studies.
**Jordan operator algebra** In mathematics, Jordan operator algebras are real or complex Jordan algebras with the compatible structure of a Banach space. When the coefficients are real numbers, the algebras are called Jordan Banach algebras. The theory has been extensively developed only for the subclass of JB algebras. The axioms for these algebras were devised by Alfsen, Schultz & Størmer (1978). Those that can be realised concretely as subalgebras of self-adjoint operators on a real or complex Hilbert space with the operator Jordan product and the operator norm are called JC algebras. The axioms for complex Jordan operator algebras, first suggested by Irving Kaplansky in 1976, require an involution and are called JB* algebras or Jordan C* algebras. By analogy with the abstract characterisation of von Neumann algebras as C* algebras for which the underlying Banach space is the dual of another, there is a corresponding definition of JBW algebras. Those that can be realised using ultraweakly closed Jordan algebras of self-adjoint operators with the operator Jordan product are called JW algebras. The JBW algebras with trivial center, so-called JBW factors, are classified in terms of von Neumann factors: apart from the exceptional 27-dimensional Albert algebra and the spin factors, all other JBW factors are isomorphic either to the self-adjoint part of a von Neumann factor or to its fixed point algebra under a period two *-anti-automorphism. Jordan operator algebras have been applied in quantum mechanics and in complex geometry, where Koecher's description of bounded symmetric domains using Jordan algebras has been extended to infinite dimensions. Definitions: JC algebra: A JC algebra is a real subspace of the space of self-adjoint operators on a real or complex Hilbert space, closed under the operator Jordan product a ∘ b = 1/2(ab + ba) and closed in the operator norm. Jordan operator algebra: A Jordan operator algebra is a norm-closed subspace of the space of operators on a complex Hilbert space, closed under the Jordan product a ∘ b = 1/2(ab + ba) and closed in the operator norm. Jordan Banach algebra: A Jordan Banach algebra is a real Jordan algebra with a norm making it a Banach space and satisfying ||a ∘ b|| ≤ ||a||⋅||b||. JB algebra: A JB algebra is a Jordan Banach algebra satisfying ||a²|| = ||a||² and ||a²|| ≤ ||a² + b²||. JB* algebra: A JB* algebra or Jordan C* algebra is a complex Jordan algebra with an involution a ↦ a* and a norm making it a Banach space and satisfying ||a ∘ b|| ≤ ||a||⋅||b||, ||a*|| = ||a||, and ||{a,a*,a}|| = ||a||³, where the Jordan triple product is defined by {a,b,c} = (a ∘ b) ∘ c + (c ∘ b) ∘ a − (a ∘ c) ∘ b. JW algebra: A JW algebra is a Jordan subalgebra of the Jordan algebra of self-adjoint operators on a complex Hilbert space that is closed in the weak operator topology. JBW algebra: A JBW algebra is a JB algebra that, as a real Banach space, is the dual of a Banach space called its predual. There is an equivalent, more technical definition in terms of the continuity properties of the linear functionals in the predual, called normal functionals; this is usually taken as the definition, with the abstract characterization as a dual Banach space derived as a consequence.
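For reference, the norm axioms above can be collected in LaTeX as follows; this is a restatement of the definitions just given, not additional assumptions.

```latex
% Norm axioms for the Jordan operator algebras defined above.
\begin{align*}
\text{Jordan Banach:}\quad & \|a \circ b\| \le \|a\|\,\|b\| ,\\
\text{JB:}\quad & \|a^2\| = \|a\|^2 , \qquad \|a^2\| \le \|a^2 + b^2\| ,\\
\text{JB*:}\quad & \|a \circ b\| \le \|a\|\,\|b\| , \quad \|a^*\| = \|a\| ,
  \quad \|\{a, a^*, a\}\| = \|a\|^3 ,\\
\text{where}\quad & \{a,b,c\} = (a \circ b) \circ c + (c \circ b) \circ a
  - (a \circ c) \circ b .
\end{align*}
```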
Definitions: For the order structure on a JB algebra (defined below), any increasing net of operators bounded in norm should have a least upper bound. Normal functionals are those that are continuous on increasing bounded nets of operators; positive normal functionals are those that are non-negative on positive operators. For every non-zero operator, there is a positive normal functional that does not vanish on that operator. Properties of JB algebras: If a unital JB algebra is associative, then its complexification with its natural involution is a commutative C* algebra. It is therefore isomorphic to C(X) for a compact Hausdorff space X, the space of characters of the algebra. Spectral theorem: if a is a single operator in a JB algebra, the closed subalgebra generated by 1 and a is associative. It can be identified with the continuous real-valued functions on the spectrum of a, the set of real λ for which a − λ1 is not invertible. The positive elements in a unital JB algebra are those with spectrum contained in [0,∞). By the spectral theorem, they coincide with the space of squares and form a closed convex cone. If b ≥ 0, then {a,b,a} ≥ 0. A JB algebra is a formally real Jordan algebra: if a sum of squares of terms is zero, then each term is zero. In finite dimensions, a JB algebra is isomorphic to a Euclidean Jordan algebra. The spectral radius on a JB algebra defines an equivalent norm also satisfying the axioms for a JB algebra. A state on a unital JB algebra is a bounded linear functional f such that f(1) = 1 and f is non-negative on the positive cone. The state space is a convex set closed in the weak* topology; its extreme points are called pure states. Given a, there is a pure state f such that |f(a)| = ||a||. Gelfand–Naimark–Segal construction: If a JB algebra is isomorphic to the self-adjoint n by n matrices with coefficients in some associative unital *-algebra, then it is isometrically isomorphic to a JC algebra. The JC algebra satisfies the additional condition that (T + T*)/2 lies in the algebra whenever T is a product of operators from the algebra. A JB algebra is purely exceptional if it has no non-zero Jordan homomorphism onto a JC algebra. The only simple algebra that can arise as the homomorphic image of a purely exceptional JB algebra is the Albert algebra, the 3 by 3 self-adjoint matrices over the octonions. Every JB algebra has a uniquely determined closed ideal that is purely exceptional, and such that the quotient by the ideal is a JC algebra. Shirshov–Cohn theorem: a JB algebra generated by 2 elements is a JC algebra. Properties of JB* algebras: The definition of JB* algebras was suggested in 1976 by Irving Kaplansky at a lecture in Edinburgh. The real part of a JB* algebra is always a JB algebra. Wright (1977) proved that, conversely, the complexification of every JB algebra is a JB* algebra. JB* algebras have been used extensively as a framework for studying bounded symmetric domains in infinite dimensions; this generalizes the theory in finite dimensions developed by Max Koecher using the complexification of a Euclidean Jordan algebra. Properties of JBW algebras: Elementary properties. The Kaplansky density theorem holds for real unital Jordan algebras of self-adjoint operators on a Hilbert space with the operator Jordan product. In particular, a Jordan algebra is closed in the weak operator topology if and only if it is closed in the ultraweak operator topology; the two topologies coincide on the Jordan algebra.
For a JBW algebra, the space of positive normal functionals is invariant under the quadratic representation Q(a)b = {a,b,a}: if f is positive, so is f ∘ Q(a). The weak topology on a JW algebra M is defined by the seminorms |f(a)|, where f is a normal state; the strong topology is defined by the seminorms |f(a²)|^(1/2). The quadratic representation and the Jordan product operators L(a)b = a ∘ b are continuous operators on M for both the weak and strong topologies. An idempotent p in a JBW algebra M is called a projection. If p is a projection, then Q(p)M is a JBW algebra with identity p. If a is any element of a JBW algebra, the smallest weakly closed unital subalgebra it generates is associative and hence the self-adjoint part of an Abelian von Neumann algebra. In particular, a can be approximated in norm by linear combinations of orthogonal projections. The projections in a JBW algebra are closed under lattice operations: for a family pα there is a smallest projection p such that p ≥ pα and a largest projection q such that q ≤ pα. The center of a JBW algebra M consists of all z such that L(z) commutes with L(a) for all a in M; it is an associative algebra, and the real part of an Abelian von Neumann algebra. A JBW algebra is called a factor if its center consists of scalar operators. If A is a JB algebra, its second dual A** is a JBW algebra. The normal states are states in A* and can be identified with states on A. Moreover, A** is the JBW algebra generated by A. A JB algebra is a JBW algebra if and only if, as a real Banach space, it is the dual of a Banach space. This Banach space, its predual, is the space of normal functionals, defined as differences of positive normal functionals; these are the functionals continuous for the weak or strong topologies. As a consequence, the weak and strong topologies coincide on a JBW algebra. In a JBW algebra, the JBW algebra generated by a Jordan subalgebra coincides with its weak closure. Moreover, an extension of the Kaplansky density theorem holds: the unit ball of the subalgebra is weakly dense in the unit ball of the JBW algebra it generates. Tomita–Takesaki theory has been extended by Haagerup & Hanche-Olsen (1984) to normal states of a JBW algebra that are faithful, i.e. do not vanish on any non-zero positive operator; the theory can be deduced from the original theory for von Neumann algebras. Comparison of projections: Let M be a JBW factor. The inner automorphisms of M are those generated by the period two automorphisms Q(1 − 2p), where p is a projection. Two projections are equivalent if there is an inner automorphism carrying one onto the other. Given two projections in a factor, one of them is always equivalent to a sub-projection of the other; if each is equivalent to a sub-projection of the other, they are equivalent. A JBW factor can be classified into three mutually exclusive types as follows. It is Type I if there is a minimal projection; it is Type In if 1 can be written as a sum of n orthogonal minimal projections for 1 ≤ n ≤ ∞. It is Type II if there are no minimal projections but the subprojections of some fixed projection e form a modular lattice, i.e. p ≤ q implies (p ∨ r) ∧ q = p ∨ (r ∧ q) for any projection r ≤ e; if e can be taken to be 1, it is Type II1, and otherwise it is Type II∞. It is Type III if the projections do not form a modular lattice.
All non-zero projections are then equivalent. Tomita–Takesaki theory permits a further classification of the Type III case into types IIIλ (0 ≤ λ ≤ 1), with the additional invariant of an ergodic flow on a Lebesgue space (the "flow of weights") when λ = 0. Classification of JBW factors of Type I: The JBW factor of Type I1 is the real numbers. The JBW factors of Type I2 are the spin factors. Let H be a real Hilbert space of dimension greater than 1. Set M = H ⊕ R with inner product (u⊕λ, v⊕μ) = (u,v) + λμ and product (u⊕λ) ∘ (v⊕μ) = (μu + λv) ⊕ [(u,v) + λμ]. With the operator norm ||L(a)||, M is a JBW factor and also a JW factor. The JBW factors of Type I3 are the self-adjoint 3 by 3 matrices with entries in the real numbers, the complex numbers, the quaternions, or the octonions. The JBW factors of Type In with 4 ≤ n < ∞ are the self-adjoint n by n matrices with entries in the real numbers, the complex numbers, or the quaternions. The JBW factors of Type I∞ are the self-adjoint operators on an infinite-dimensional real, complex, or quaternionic Hilbert space. The quaternionic space is defined as all sequences x = (xi) with xi in H and Σ |xi|² < ∞. The H-valued inner product is given by (x,y) = Σ (yi)*xi, and there is an underlying real inner product given by (x,y)R = Re (x,y). The quaternionic JBW factor of Type I∞ is thus the Jordan algebra of all self-adjoint operators on this real inner product space that commute with the action of right multiplication by H. Classification of JBW factors of Types II and III: The JBW factors not of Type I2 and I3 are all JW factors, i.e. they can be realized as Jordan algebras of self-adjoint operators on a Hilbert space, closed in the weak operator topology. Every JBW factor not of Type I2 or Type I3 is isomorphic to the self-adjoint part of the fixed point algebra of a period 2 *-anti-automorphism of a von Neumann algebra. In particular, each JBW factor is either isomorphic to the self-adjoint part of a von Neumann factor of the same type or to the self-adjoint part of the fixed point algebra of a period 2 *-anti-automorphism of a von Neumann factor of the same type. For hyperfinite factors, the class of von Neumann factors completely classified by Connes and Haagerup, the period 2 *-anti-automorphisms have been classified up to conjugacy in the automorphism group of the factor.
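Since the spin factor product above is stated tersely, a short worked computation (a routine check, not taken from the source) shows why every element of a spin factor satisfies a quadratic equation, which is what makes these Jordan algebras so tractable.

```latex
% For a = u \oplus \lambda in the spin factor M = H \oplus \mathbb{R},
% with the product (u \oplus \lambda) \circ (v \oplus \mu)
%   = (\mu u + \lambda v) \oplus [(u,v) + \lambda\mu]:
\begin{align*}
a \circ a &= (\lambda u + \lambda u) \oplus \bigl[(u,u) + \lambda^2\bigr]
           = 2\lambda u \oplus \bigl[\|u\|^2 + \lambda^2\bigr],\\
\intertext{so, writing $1 = 0 \oplus 1$ for the identity element,}
a^2 - 2\lambda\, a + \bigl(\lambda^2 - \|u\|^2\bigr) 1 &= 0 .
\end{align*}
% Each element thus generates an associative subalgebra of dimension at
% most two, with spectrum \{\lambda - \|u\|,\; \lambda + \|u\|\}.
```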
**Pyrogallol** Pyrogallol is an organic compound with the formula C6H3(OH)3. It is a water-soluble, white solid, although samples are typically brownish because of its sensitivity toward oxygen. It is one of three isomers of benzenetriol. Production and reactions: It is produced in the manner first reported by Scheele in 1786: heating gallic acid to induce decarboxylation. Gallic acid, in turn, is obtained from tannin. Many alternative routes have been devised; one preparation involves treating para-chlorophenoldisulfonic acid with potassium hydroxide, a variant on the time-honored route to phenols from sulfonic acids. Polyhydroxybenzenes are relatively electron-rich; one manifestation is the easy C-acetylation of pyrogallol. Uses: It was once used in hair dyeing and in the dyeing of suturing materials, and it has antiseptic properties. In alkaline solution, pyrogallol undergoes deprotonation, and such solutions absorb oxygen from the air, turning brown. This conversion can be used to determine the amount of oxygen in a gas sample, notably by the use of the Orsat apparatus; alkaline solutions of pyrogallol have long been used for oxygen absorption in gas analysis. Use in photography: Pyrogallol was also used as a developing agent in black-and-white developers in the 19th and early 20th centuries; hydroquinone is more commonly used today, and pyrogallol's use is largely historical except for special-purpose applications. It was still used by a few notable photographers, including Edward Weston. In those days it had a reputation for erratic and unreliable behavior, possibly due to its propensity for oxidation. It experienced a revival starting in the 1980s, due largely to the efforts of experimenters Gordon Hutchings and John Wimberley. Hutchings spent over a decade working on pyrogallol formulas, eventually producing one he named PMK for its main ingredients: pyrogallol, Metol, and Kodalk (Kodak's trade name for sodium metaborate). This formulation resolved the consistency issues, and Hutchings found that an interaction between the greenish stain given to film by pyro developers and the color sensitivity of modern variable-contrast photographic papers gave the effect of an extreme compensating developer. From 1969 to 1977, Wimberley experimented with the pyrogallol developing agent, and he published his formula for WD2D in Petersen's Photographic in 1977. PMK and other modern pyro formulations are now used by many black-and-white photographers; The Film Developing Cookbook has examples. Another developer based mainly on pyrogallol was formulated by Jay DeFehr. The 510-Pyro is a concentrate that uses triethanolamine as the alkali, and pyrogallol, ascorbic acid, and phenidone as combined developing agents in a single concentrated stock solution with a long shelf life. This developer has both staining and tanning properties, and negatives developed with it are immune to the Callier effect. It can be used for small and large negative formats; The Darkroom Cookbook (Alternative Process Photography) has examples. Safety: Pyrogallol use, e.g. in hair dye formulations, is declining because of concerns about its toxicity. Its LD50 (oral, rat) is 300 mg/kg.
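Returning to the gas-analysis use mentioned above: the arithmetic behind Orsat-style oxygen determination with alkaline pyrogallol is simply a volume comparison, since the absorbed oxygen shrinks the sample. The sketch below is a minimal, hedged illustration; the readings are hypothetical, and a real Orsat procedure also corrects for temperature, pressure, and prior CO2 absorption.

```python
# Hedged sketch of the volume arithmetic in Orsat-style gas analysis:
# alkaline pyrogallol absorbs O2, so the volume decrease after contact
# with the solution gives the oxygen fraction. Readings are hypothetical.

def oxygen_fraction(volume_before_ml: float, volume_after_ml: float) -> float:
    """Oxygen fraction of a gas sample, from volumes measured at the
    same temperature and pressure before and after O2 absorption."""
    absorbed = volume_before_ml - volume_after_ml
    return absorbed / volume_before_ml

if __name__ == "__main__":
    # Hypothetical reading: 100.0 mL of air shrinks to 79.1 mL.
    frac = oxygen_fraction(100.0, 79.1)
    print(f"O2 content: {frac:.1%}")  # -> 20.9%, close to ambient air
```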
**TREML1** Trem-like transcript 1 protein is a protein that in humans is encoded by the TREML1 gene. TREML1 is located in a gene cluster on chromosome 6 with the single Ig variable (IgV) domain activating receptors TREM1 (MIM 605085) and TREM2 (MIM 605086), but it has distinct structural and functional properties. TREML1 enhances calcium signaling in an SHP2 (PTPN11; MIM 176876)-dependent manner (Allcock et al., 2003; Barrow et al., 2004). [supplied by OMIM]
**EF-4** Elongation factor 4 (EF-4) is an elongation factor that is thought to back-translocate on the ribosome during the translation of RNA to proteins. It is found near-universally in bacteria and in eukaryotic endosymbiotic organelles, including the mitochondria and the plastid. Responsible for proofreading during protein synthesis, EF-4 is a recent addition to the nomenclature of bacterial elongation factors. Prior to its recognition as an elongation factor, EF-4 was known as leader peptidase A (LepA), as it is the first cistron on the operon carrying the bacterial leader peptidase. In eukaryotes it is traditionally called GUF1 (GTPase of Unknown Function 1). It has the preliminary EC number 3.6.5.n1. Evolutionary background: LepA has a highly conserved sequence, and LepA orthologs have been found in bacteria and almost all eukaryotes. The conservation in LepA has been shown to cover the entire protein; more specifically, the amino acid identity of LepA among bacterial orthologs ranges from 55% to 68%. Two forms of LepA have been observed: one form branches with mitochondrial LepA sequences, while the second branches with cyanobacterial orthologs. These findings demonstrate that LepA is significant for bacteria, mitochondria, and plastids. LepA is absent from archaea. Structure: The gene encoding LepA is the first cistron of a bicistronic operon. LepA is a polypeptide of 599 amino acids with a molecular weight of 67 kDa. The amino acid sequence of LepA indicates that it is a G protein, consisting of five known domains. The first four domains are strongly related to domains I, II, III, and V of the primary elongation factor EF-G. The last domain of LepA, however, is unique; it resides on the C-terminal end of the protein structure. This arrangement of LepA has been observed in mitochondria from yeast cells to human cells. Function: LepA is suspected to improve the fidelity of translation by recognizing a ribosome with a mistranslocated tRNA and consequently inducing a back-translocation, giving EF-G the chance to carry out the translocation correctly a second time. Back-translocation by LepA occurs at a rate similar to that of an EF-G-dependent translocation. As mentioned above, EF-G's structure is highly analogous to LepA's, and LepA's function is similarly analogous to EF-G's. However, domain IV of EF-G has been shown through several studies to occupy the decoding sequence of the A site after the tRNAs have been translocated from the A and P sites to the P and E sites; domain IV of EF-G thus prevents back-movement of the tRNA. Despite the structural similarities between LepA and EF-G, LepA lacks this domain IV. LepA therefore reduces the activation barrier between the PRE and POST states in a similar way to EF-G but is, at the same time, able to catalyze a back-translocation rather than a canonical translocation. Activity: LepA exhibits uncoupled GTPase activity. This activity is stimulated by the ribosome to the same extent as the activity of EF-G, which is known to have the strongest ribosome-dependent GTPase activity among all characterized G proteins involved in translation. Uncoupled GTPase activity occurs when the ribosome's stimulation of GTP cleavage is not directly dependent on protein synthesis. In the presence of GTP, LepA works catalytically.
On the other hand, in the presence of the nonhydrolysable GTP analogue GDPNP, the LepA action becomes stoichiometric, saturating at about one molecule per 70S ribosome. These data demonstrate that GTP cleavage is required for the dissociation of LepA from the ribosome, behavior typical of a G protein. At low concentrations of LepA (less than or equal to 3 molecules per 70S ribosome), LepA specifically recognizes incorrectly translocated ribosomes, back-translocates them, and thus allows EF-G a second chance to catalyze the correct translocation reaction. At high concentrations (about 1 molecule per 70S ribosome), LepA loses this specificity and back-translocates every POST ribosome, placing the translational machinery in a nonproductive mode; this explains the toxicity of LepA when it is present in a cell at high concentrations. Hence, at low concentrations LepA significantly improves the yield and activity of synthesized proteins, whereas at high concentrations LepA is toxic to cells. Additionally, LepA has an effect on peptide bond formation. Through various studies in which functional derivatives of ribosomes were mixed with puromycin (an analog of the 3' end of an aa-tRNA), it was determined that adding LepA to a POST-state ribosome prevents dipeptide formation, as it inhibits the binding of aa-tRNA to the A site. Experimental data: There have been various experiments elucidating the structure and function of LepA. One notable study is termed the "toeprinting experiment"; this experiment helped to establish LepA's ability to back-translocate. In this case, a primer was extended via reverse transcription along ribosome-bound mRNA. The primers from modified mRNA strands from various ribosomes were extended with and without LepA. An assay was then conducted with both PRE and POST states, and cleavage studies revealed enhanced positional cleavage in the POST state as opposed to the PRE state. Since the POST state had been in the presence of LepA (plus GTP), it was determined that the strong signal characteristic of the POST state was brought down by LepA to the level of the PRE state. The study thus demonstrated that the ribosome, upon binding the LepA-GTP complex, assumes the PRE-state configuration.
**Chrysler Comprehensive Compensation System** The Chrysler Comprehensive Compensation System (commonly referred to as "C3") was a project in the Chrysler Corporation to replace several payroll applications with a single system. The new system was built using Smalltalk and GemStone. The software development techniques invented and employed on this project are of interest in the history of software engineering, and C3 has been referenced in several books on the extreme programming (XP) methodology. The software went live in 1997, paying around ten thousand people. The project continued, intending to take on a larger proportion of the payroll, but new development was stopped in 1999. Project history: The C3 project was initiated in 1993 by Tom Hadfield, the Director of Payroll Systems, under the direction of CIO Susan Unger. Hadfield had developed a small object-oriented prototype which inspired the project. Smalltalk development began in 1994, with the aim of creating a new system to support all payroll processing for 87,000 employees by 1999. In 1996, software engineer Kent Beck was brought on to oversee development, as the system had not yet printed a single paycheck; Beck in turn brought in Ron Jeffries. In March of that year, the development team estimated that the system would be ready to go into production in around one year. In 1997 the development team adopted a way of working now formalized as Extreme Programming. Although there was a slight delay due to unclear business requirements, the system was launched just a couple of months later than the one-year delivery target. The plan was to roll out the system to different payroll "populations" in stages, but C3 never managed to make another release despite two more years of development. The C3 system paid 9,000 people, representing the "vast majority of monthly Chrysler salaries." Performance was initially a problem, with early estimates indicating it would take 1,000 hours to run the payroll. Profiling activities reduced this to approximately 40 hours, and another month's effort further reduced it to 18 hours. By the time the system was launched, the figure was down to 12 hours, and during the first year of production, performance was improved to 9 hours. A few months after the initial launch, the project's customer representative, a key role in the Extreme Programming methodology, resigned due to burnout and stress, and could not be replaced. Chrysler was bought out by Daimler-Benz in 1998; after the merger, the company was known as DaimlerChrysler. DaimlerChrysler stopped the C3 project on 1 February 2000. Frank Gerhardt, a manager at the company, announced to the XP conference in 2000 that DaimlerChrysler had de facto banned XP after shutting down C3; however, some time later DaimlerChrysler resumed the use of XP.
**Golf ball** A golf ball is a ball designed to be used in the game of golf. Under the rules of golf, a golf ball has a mass of no more than 1.620 oz (45.93 g), has a diameter of not less than 1.680 inches (42.67 mm), and performs within specified velocity, distance, and symmetry limits. Like golf clubs, golf balls are subject to testing and approval by The R&A (formerly part of the Royal and Ancient Golf Club of St Andrews) and the United States Golf Association, and those that do not conform with regulations may not be used in competitions (Rule 5-1). History: Early balls: It is commonly believed that hard, round balls made from hardwoods such as beech and box were used for golf from the 14th through the 17th centuries. Though wooden balls were no doubt used for other similar contemporary stick-and-ball games, there is no definite evidence that they were actually used in golf in Scotland. It is equally, if not more, likely that leather balls filled with cows' hair were used, imported from the Netherlands from at least 1486 onward. Featherie: Then or later, the featherie ball was developed and introduced. A featherie, or feathery, is a hand-sewn round leather pouch stuffed with chicken or goose feathers and coated with paint, usually white in color. A standard featherie used a gentleman's top hat full of feathers, which were boiled and softened before they were stuffed into the leather pouch. Making a featherie was a tedious and time-consuming process; an experienced ball maker could make only a few balls in one day, so they were expensive. A single ball would cost 2 to 5 shillings, equivalent to US$10 to 20 today. There were a few drawbacks to the featherie. First, it was difficult to make a perfectly spherical ball, so the featherie often flew irregularly. Second, when the featherie became too wet, its distance would be reduced, and there was a possibility of its splitting open upon impact, whether when hit or when contacting the ground or another hard surface. Despite these drawbacks, the featherie was a dramatic improvement over the wooden ball and remained the standard golf ball well into the 19th century. Guttie: In 1848, the Rev. Dr. Robert Adams Paterson (sometimes spelled Patterson) invented the gutta-percha ball, or guttie (gutty). The guttie was made from the dried sap of the Malaysian sapodilla tree, which had a rubber-like feel and could be made spherical by heating and shaping it in a mold. Because gutties were cheaper to produce, could be re-formed if they became out-of-round or damaged, and had improved aerodynamic qualities, they soon became the preferred ball for use. It was discovered by accident that nicks in the guttie from normal use actually gave the ball a more consistent flight than a guttie with a perfectly smooth surface. Thus, makers began intentionally making indentations in the surface of new balls using either a knife or a hammer and chisel, giving the guttie a textured surface. Many patterns were tried and used. These new gutties, with protruding nubs left by carving patterned paths across the ball's surface, became known as "brambles" due to their resemblance to bramble fruit (blackberries). Wound golf ball: The next major breakthrough in golf ball development came in 1898, when Coburn Haskell of Cleveland, Ohio, drove to nearby Akron, Ohio, for a golf date with Bertram Work, the superintendent of the B.F. Goodrich Company.
While he waited in the plant for Work, Haskell picked up some rubber thread and wound it into a ball; when he bounced the ball, it flew almost to the ceiling. Work suggested Haskell put a cover on the creation, and that was the birth of the 20th-century wound golf ball that would soon replace the guttie bramble ball. The new design became known as the rubber Haskell golf ball. For decades, the wound rubber ball consisted of a liquid-filled or solid round core that was wound with a layer of rubber thread into a larger round inner core and then covered with a thin outer shell made of balatá sap. The balatá is a tree native to Central and South America and the Caribbean; the tree is tapped, and the soft, viscous fluid released is a rubber-like material similar to gutta-percha, which was found to make an ideal cover for a golf ball. Balatá, however, is relatively soft: if the leading edge of a highly lofted short iron contacts a balatá-covered ball in a location other than the bottom of the ball, a cut or "smile" will often be the result, rendering the ball unfit for play. Addition of dimples: In the early 1900s, it was found that dimpling the ball provided even more control of the ball's trajectory, flight, and spin. David Stanley Froy, James McHardy, and Peter G. Fernie received a patent in 1897 for a ball with indentations; Froy played in the Open in 1900 at the Old Course at St. Andrews with the first prototype. Players were able to put additional backspin on the new wound, dimpled balls when using more lofted clubs, thus inducing the ball to stop more quickly on the green. Manufacturers soon began selling various types of golf balls with various dimple patterns to improve the length, trajectory, spin, and overall "feel" characteristics of the new wound golf balls. Wound, balatá-covered golf balls were used into the late 20th century. Modern resin and polyurethane covered balls: In the mid-1960s, a new synthetic resin, an ionomer of ethylene acid named Surlyn, was introduced by DuPont, as were new urethane blends for golf ball covers; these new materials soon displaced balatá, as they proved more durable and more resistant to cutting. Along with various other materials that came into use to replace the rubber-wound internal sphere, golf balls came to be classified as two-piece, three-piece, or four-piece balls, according to the number of layered components. These basic materials continue to be used in modern balls, with further advances in technology creating balls that can be customized to a player's strengths and weaknesses, and even allowing for the combination of characteristics that were formerly mutually exclusive. Titleist's Pro V1, TaylorMade's TP5, and Callaway's Supersoft exemplify modern advancements in golf ball aerodynamics: the Titleist Pro V1 has a tightly packed 388-dimple design, minimizing gaps between dimples for better aerodynamics; the TaylorMade TP5 features a combination of circular and hexagonal dimples to reduce drag; and Callaway balls use a completely hexagonal design for straighter ball flights. Liquid cores were commonly used in golf balls as early as 1917. The liquid cores in many of the early balls contained a caustic liquid, typically an alkali, causing eye injuries to children who happened to dissect a golf ball out of curiosity.
By the 1920s, golf ball manufacturers had stopped using caustic liquids, but into the 1970s and 1980s golf balls were still at times exploding when dissected and causing injuries, due to crushed crystalline material in the liquid cores. In 1967, Spalding purchased a patent for a solid golf ball from Jim Bartsch. His original patent defined a ball devoid of the layers in earlier designs, but Bartsch's patent lacked the chemical process needed for practical manufacturing. Spalding's chemical engineering team developed a chemical resin that eliminated the need for the layered components entirely. Since then, the majority of non-professional golfers have transitioned to using solid core (or "2-piece") golf balls. The specifications for the golf ball continue to be governed by the ruling bodies of the game, namely The R&A and the United States Golf Association (USGA). Regulations: The Rules of Golf, jointly governed by the R&A and the USGA, state in Appendix III that the diameter of a "conforming" golf ball cannot be any smaller than 1.680 inches (42.67 mm), and the weight of the ball may not exceed 1.620 ounces (45.93 g). The ball must also have the basic properties of a spherically symmetrical ball, generally meaning that the ball itself must be spherical and must have a symmetrical arrangement of dimples on its surface. While the ball's dimples must be symmetrical, there is no limit to the number of dimples allowed on a golf ball. Additional rules direct players and manufacturers to other technical documents published by the R&A and USGA with additional restrictions, such as radius and depth of dimples, maximum launch speed from test apparatus (generally defining the coefficient of restitution) and maximum total distance when launched from the test equipment. Regulations: In general, the governing bodies and their regulations seek to provide a relatively level playing field and maintain the traditional form of the game and its equipment, while not completely halting the use of new technology in equipment design. Regulations: Until 1990, it was permissible to use balls of less than 1.68 inches in diameter in tournaments under the jurisdiction of the R&A, which differed in its ball specification rules from those of the USGA. This ball was commonly called a "British" ball, while the golf ball approved by the USGA was simply the "American ball". The smaller diameter gave the player a distance advantage, especially in high winds, as the smaller ball created a similarly smaller "wake" behind it. Aerodynamics: When a golf ball is hit, the impact, which lasts less than a millisecond, determines the ball's velocity, launch angle and spin rate, all of which influence its trajectory and its behavior when it hits the ground. A ball moving through air experiences two major aerodynamic forces, lift and drag. Dimpled balls fly farther than non-dimpled balls due to the combination of these two effects. Aerodynamics: First, the dimples on the surface of a golf ball cause the boundary layer on the upstream side of the ball to transition from laminar to turbulent. The turbulent boundary layer is able to remain attached to the surface of the ball much longer than a laminar boundary layer would, and so creates a narrower low-pressure wake and hence less pressure drag. The reduction in pressure drag causes the ball to travel further. Second, backspin generates lift by deforming the airflow around the ball, in a similar manner to an airplane wing. This is called the Magnus effect.
The dimples on a golf ball deform the air around the ball quickly, causing turbulent airflow that results in more Magnus lift than a smooth ball would experience. Backspin is imparted in almost every shot due to the golf club's loft (i.e., the angle between the clubface and a vertical plane). A backspinning ball experiences an upward lift force which makes it fly higher and longer than a ball without spin. Curvature of the ball flight occurs when the clubface is not aligned perpendicularly to the club direction at impact, leading to an angled spin axis that causes the ball to curve to one side or the other, based on the difference between the face angle and swing path at impact. Because the ball's spin during flight is angled, and because of the Magnus effect, the ball will take on a curved path during its flight. Some dimple designs claim to reduce the sidespin effects to provide a straighter ball flight. Aerodynamics: Other factors can also change the flight behaviour of the ball, such as dynamic lie (the angle of the shaft at impact relative to the ground, compared with its manufactured neutral angle), strike location when using a wood (because of its curved face), and external influences such as wind and debris. To keep the aerodynamics optimal, the golf ball needs to be clean, including all dimples; thus, it is advisable that golfers wash their balls whenever the rules of golf permit. Golfers can wash their balls manually using a wet towel or using a ball washer of some type. Design: Dimples first became a feature of golf balls when English engineer and manufacturer William Taylor, co-founder of the Taylor-Hobson company, registered a patent for a dimple design in 1905. William Taylor had realized that golf players were deliberately making irregularities on their balls, having noticed that used balls went further than new ones. Hence he decided to make systematic tests to determine what surface formation would give the best flight. He then developed a pattern consisting of regularly spaced indentations over the entire surface, and later tools to help produce such balls in series. Design: Other types of patterned covers were in use at about the same time, including one called a "mesh" and another named the "bramble", but the dimple became the dominant design due to "the superiority of the dimpled cover in flight". Most modern golf balls have about 300–500 dimples, though there have been balls with more than 1000 dimples. The record holder was a ball with 1,070 dimples—414 larger ones (in four different sizes) and 656 pinhead-sized ones. Officially sanctioned balls are designed to be as symmetrical as possible. This symmetry is the result of a dispute that stemmed from the Polara, a ball sold in the late 1970s that had six rows of normal dimples on its equator but very shallow dimples elsewhere. This asymmetrical design helped the ball self-adjust its spin axis during flight. The USGA refused to sanction it for tournament play and, in 1981, changed the rules to ban aerodynamically asymmetrical balls. Polara's producer sued the USGA and the association paid US$1.375 million in a 1985 out-of-court settlement. Golf balls are traditionally white, but are commonly available in other colors, some of which may assist with finding the ball when lost or when playing in low-light or frosty conditions. As well as bearing the maker's name or logo, balls are usually printed with numbers or other symbols to help players identify their ball.
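To make the aerodynamic relationships above concrete, the following is a minimal point-mass flight sketch, not a validated ball-flight model. It uses the regulation mass and diameter quoted in this article, while the drag and lift coefficients and the launch conditions are illustrative assumptions chosen only to contrast a dimpled, backspinning ball with a hypothetical smooth one.

```python
# Minimal 2-D flight sketch: quadratic drag plus Magnus lift perpendicular to
# the velocity. Coefficients are illustrative assumptions, not measured data.
import math

RHO = 1.225                       # air density at sea level, kg/m^3
MASS = 0.04593                    # maximum regulation mass, kg
DIAMETER = 0.04267                # minimum regulation diameter, m
AREA = math.pi * (DIAMETER / 2) ** 2
G = 9.81

def carry(speed_mps, launch_deg, cd, cl, dt=0.001):
    """Integrate a trajectory until the ball returns to ground height."""
    vx = speed_mps * math.cos(math.radians(launch_deg))
    vy = speed_mps * math.sin(math.radians(launch_deg))
    x = y = 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        q = 0.5 * RHO * v * AREA                      # shared 0.5*rho*|v|*A factor
        ax = (-cd * q * vx - cl * q * vy) / MASS      # drag opposes motion;
        ay = (-cd * q * vy + cl * q * vx) / MASS - G  # lift acts perpendicular to it
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

# Assumed coefficients: dimpled ball ~0.25 drag with backspin lift; smooth ball
# ~0.5 drag with no useful lift.
print(f"dimpled: {carry(70, 12, cd=0.25, cl=0.15):.0f} m carry")
print(f"smooth:  {carry(70, 12, cd=0.50, cl=0.00):.0f} m carry")
```

Under these assumed coefficients the dimpled, backspinning ball carries far further from the same launch, which is the qualitative point of the dimple discussion above.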
Behavior: Today, golf balls are manufactured using a variety of different materials, offering a range of playing characteristics to suit the player's abilities and desired flight and landing behaviours. Behavior: A key consideration is "compression", typically determined by the hardness of the ball's core layers. A harder "high-compression" ball will fly further because of the more efficient transfer of energy into the ball, but will also transmit more of a shock through the club to the player's hands (a "hard feel"). A softer "low-compression" ball will do just the opposite. Golfers typically prefer a softer feel, especially in the "short game", as the softer ball typically also has greater backspin with lofted irons. However, a softer ball reduces drive distance, as it wastes more energy in compression. This makes it more difficult for players to get a birdie or eagle, as it can take more strokes to get on the green. Behavior: Another consideration is "spin", affected by compression and by the cover material – a "high-spin" ball allows more of the ball's surface to contact the clubface at impact, allowing the grooves of the clubface to "grip" the ball and induce more backspin at launch. Backspin creates lift that can increase carry distance, and also provides "bite", which allows a ball to arrest its forward motion at the initial point of impact, bouncing straight up or even backwards, allowing for precision placement of the ball on the green with an approach shot. However, high-spin cover materials, typically being softer, are less durable, which shortens the useful life of the ball, and backspin is not desirable on most long-distance shots, such as with the driver, as it causes the shot to "balloon" and then to bite on the fairway, where additional rolling distance is usually desired. Behavior: Lastly, the pattern of dimples plays a role. By regulation, the arrangement of the dimples on the ball must be as symmetrical as possible. However, the dimples do not all have to be the same size, nor be in a uniform distribution. This allows designers to arrange the dimple patterns in such a way that the resistance to spinning is lower along certain axes of rotation and higher along others. This causes the ball to "settle" into one of these low-resistance axes that (golfers hope) is close to parallel with the ground and perpendicular to the direction of travel, thereby eliminating "sidespin" induced by a slight mishit, which would otherwise cause the ball to curve off its intended flight path. A badly mishit ball will still curve, as the ball will settle into a spin axis that is not parallel with the ground, which, much like an aircraft's wings, will cause the shot to bank either to the left or to the right. Selection: There are many types of golf balls on the market, and customers often face a difficult decision. Golf balls are divided into two categories: recreational and advanced balls. Recreational balls are oriented toward ordinary golfers, who generally have low swing speeds (80 miles per hour (130 km/h) or lower) and lose golf balls on the course easily. These balls are made of two layers, with the cover firmer than the core. Their low compression and side-spin reduction characteristics suit the lower swing speeds of average golfers quite well. Furthermore, they generally have lower prices than the advanced balls, lessening the financial impact of losing a ball to a hazard or out of bounds. Selection: Advanced balls are made of multiple layers (three or more), with a soft cover and firm core.
They induce a greater amount of spin from lofted shots (wedges especially), as well as a sensation of softness in the hands in short-range shots. However, these balls require a much greater swing speed, and thus greater physical strength, to properly compress at impact. If the compression of a golf ball does not match a golfer's swing speed, either a lack of compression or over-compression will occur, resulting in loss of distance. Other choices consumers must make include brand and color, with colored balls and better brands generally being more expensive. Selection: Practice/range balls A practice ball or range ball is similar to a recreational golf ball, but is designed to be inexpensive, durable and have a shorter flight distance, while still retaining the principal behaviors of a "real" golf ball and so providing useful feedback to players. All of these are desirable qualities for use in an environment like a driving range, which may be limited in maximum distance, and which must have many thousands of balls on hand at any time that are each hit and mis-hit hundreds of times during their useful life. Selection: To accomplish these ends, practice balls are typically harder-cored than even recreational balls, have a firmer, more durable cover to withstand the normal abrasion caused by a club's hitting surface, and are made as cheaply as possible while maintaining a durable, quality product. Practice balls are typically labelled with "PRACTICE" in bold lettering, and often also have one or more bars or lines printed on them, which allow players (and high-speed imaging aids) to see the ball's spin more easily as it leaves the tee or hitting turf. Selection: Practice balls conform to all applicable requirements of the Rules of Golf, and as such are legal for use on the course, but as the hitting characteristics are not ideal, players usually opt for a better-quality ball for actual play. Selection: Recycled balls Players, especially novice and casual players, lose a large number of balls during the play of a round. Balls hit into water hazards or penalty areas, buried deep in sand, or otherwise lost or abandoned during play are a constant source of litter that groundskeepers must contend with, and they can confuse players, who may mistakenly hit an abandoned ball during a round (incurring a penalty under a strict reading of the rules). An estimated 1.2 billion balls are manufactured every year and an estimated 300 million are lost in the US alone. A variety of devices, such as nets, harrows, and sand rakes, have been developed to aid the groundskeeping staff in efficiently collecting these balls from the course as they accumulate. Once collected, they may be discarded, kept by the groundskeeping staff for their own use, repurposed on the club's driving range, or sold in bulk to a recycling firm. These firms clean and resurface the balls to remove abrasions and stains, grade them according to their resulting quality, and sell the various grades of playable balls back to golfers through retailers at a discount. Selection: Used or recycled balls with obvious surface deformation, abrasion or other degradation are known informally as "shags", and while they remain useful for various forms of practice drills such as chipping, putting and driving, and can be used for casual play, players usually opt for used balls of higher quality, or for new balls, when playing in serious competition.
Other grades are typically assigned letters or proprietary terms, and are typically differentiated by the cost and quality of the ball when new and the ability of the firm to restore the ball to "like-new" condition. The "top grade" balls are typically balls that are considered the current state of the art and, after cleaning and surfacing, are indistinguishable externally from a new ball sold by the manufacturer. Selection: Markouts/X-outs In addition to recycled balls, casual golfers wishing to procure quality balls at a discount price can often purchase "X-outs". These are "factory seconds" – balls which have failed the manufacturer's quality control testing standards and which the manufacturer therefore does not wish to sell under its brand name. To avoid a loss of money on materials and labor, however, the balls, which still generally conform to the Rules, are marked to obscure the brand name (usually with a series of "X"s, hence the most common term "X-out"), packaged in generic boxes and sold at a deep discount. Selection: Typically, the flaw that caused the ball to fail QC does not have a significant effect on its flight characteristics (balls with serious flaws are usually discarded outright at the manufacturing plant), and so these "X-outs" will often perform identically to their counterparts that have passed the company's QC. They are thus a good choice for casual play. However, because the balls have been effectively "disowned" for practical and legal purposes by their manufacturer, they are not considered to be the same as the brand-name balls on the USGA's published Conforming Golf Ball List. Therefore, when playing in a tournament or other event that requires the ball used by the player to appear on this list as a "condition of competition", X-outs of any kind are illegal. Marking and personalization: Golfers need to distinguish their ball from other players' to ensure that they do not play the wrong ball. This is often done by making a mark on the ball using a permanent marker pen such as a Sharpie. A wide number of markings are used; a majority of players either simply write their initials in a particular color, or color in a particular arrangement of the dimples on the ball. Many players make multiple markings so that at least one can be seen without having to lift the ball. Marking tools such as stamps and stencils are available to speed the marking process. Marking and personalization: Alternatively, balls are usually sold pre-marked with the brand and model of golf ball, and also with a letter, number or symbol. This combination can usually (but not always) be used to distinguish a player's ball from other balls in play and from lost or abandoned balls on the course. Companies, country clubs and event organizers commonly have balls printed with their logo as a promotional tool, and some professional players are supplied with balls by their sponsors which have been custom-printed with something unique to that player (their name, signature, or a personal symbol). Radio location: Golf balls with embedded radio transmitters to allow lost balls to be located were first introduced in 1973, only to be rapidly banned for use in competition. More recently, RFID transponders have been used for this purpose, though these are also illegal in tournaments. This technology can, however, be found in some computerized driving ranges. In this format, each ball used at the range has an RFID tag with its own unique transponder code.
When a ball is dispensed, the range registers it to the player, who then hits it towards targets in the range. When the player hits a ball into a target, they receive distance and accuracy information calculated by the computer. The use of this technology was first commercialized by World Golf Systems Group to create TopGolf, a brand and chain of computerized ranges now owned by Callaway Golf. World records: Canadian long drive champion Jason Zuback broke the world ball speed record on an episode of Sport Science with a golf ball speed of 328 km/h (204 mph). The previous record of 302 km/h (188 mph) was held by José Ramón Areitio, a Jai Alai player.
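The dispense-register-detect flow of the computerized ranges described above can be pictured with a small data model. The sketch below is purely hypothetical: the function and tag names are invented for illustration and do not describe TopGolf's or World Golf Systems' actual software.

```python
# Hypothetical sketch of ball tracking at a computerized range: every ball's
# RFID code is registered to a player at dispense time, and a hit target
# reports the code back so the shot can be credited with distance and accuracy.
import math

ball_owner = {}        # RFID transponder code -> player name

def dispense(ball_id: str, player: str) -> None:
    """Register a dispensed ball to the player at the tee."""
    ball_owner[ball_id] = player

def target_hit(ball_id: str, target_xy: tuple, aim_xy: tuple) -> str:
    """A target read the ball's RFID; credit the shot to its owner."""
    player = ball_owner.pop(ball_id)          # each ball counts once
    distance = math.hypot(*target_xy)         # carry from the tee at (0, 0)
    miss = math.dist(target_xy, aim_xy)       # accuracy versus the aimed target
    return f"{player}: {distance:.0f} m carry, {miss:.0f} m from target"

dispense("TAG-0042", "Ana")
print(target_hit("TAG-0042", target_xy=(90.0, 4.0), aim_xy=(90.0, 0.0)))
```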
**Facial feedback hypothesis** Facial feedback hypothesis: The facial feedback hypothesis, rooted in the conjectures of Charles Darwin and William James, is that one's facial expression directly affects one's emotional experience. Specifically, physiological activation of the facial regions associated with certain emotions has a direct effect on the elicitation of such emotional states, and the lack or inhibition of facial activation results in the suppression (or absence altogether) of corresponding emotional states. Variations of the facial feedback hypothesis differ with regard to the role that engaging in a given facial expression plays in the modulation of affective experience: in particular, a "strong" version (facial feedback is the decisive factor in whether emotional perception occurs or not) and a "weak" version (facial expression plays a limited role in influencing affect). While a plethora of research exists on the facial feedback hypothesis and its variations, only the weak version has received substantial support, so it is widely suggested that facial expression likely has a minor facilitative impact on emotional experience. However, a 2019 meta-analysis, which generally confirmed small but significant effects, found larger effect sizes in the absence of emotional stimuli, suggesting that facial feedback has a stronger initiating effect than a modulating one. Further evidence showed that facial feedback is not essential to the onset of affective states. This is reflected in studies investigating emotional experience in facial paralysis patients when compared to participants without the condition. These studies commonly found that emotional experiences did not significantly differ despite the unavoidable absence of facial expression in facial paralysis patients. Background: Charles Darwin was among the first to suggest that the physiological changes caused by an emotion had a direct impact on that emotion, rather than being just its consequence. He wrote: "The free expression by outward signs of an emotion intensifies it. On the other hand, the repression, as far as this is possible, of all outward signs softens our emotions... Even the simulation of an emotion tends to arouse it in our minds." (p. 366) Following this postulation, William James (who was also a principal contributor to the related James-Lange theory) proposed that, contrary to the common belief that an emotional state results in muscular expression, the proprioception activated by a stimulus "is the emotion" (p. 449), and that should one "refuse to express a passion... it dies" (p. 463). In other words, in the absence of awareness of bodily movement there is only intellectual thought, with the mind consequently devoid of emotional warmth. Background: During this period, the propositions culminating in the facial feedback hypothesis lacked evidence, apart from limited research in animal behavior and studies of people with severely impaired emotional functioning. Formalized research on Darwin's and James' proposals was not commonly conducted until the latter half of the 1970s and the 1980s, almost a century after Darwin's first proposal on the topic. Furthermore, the term "facial feedback hypothesis" was not popularized in research until around 1980, with one early definition of the hypothesis being that "skeletal muscle feedback from facial expressions plays a causal role in regulating emotional experience and behaviour."
Development of the theory: While James included the influence of all bodily changes on the creation of an emotion, "including among them visceral, muscular, and cutaneous effects" (p. 252), modern research mainly focuses on the effects of facial muscular activity. One of the first to do so, Silvan Tomkins wrote in 1962 that "the face expresses affect, both to others and the self, via feedback, which is more rapid and more complex than any stimulation of which the slower moving visceral organs are capable" (p. 255). Two versions of the facial feedback hypothesis came to be commonly referenced, albeit with a sometimes unclear distinction between them. Development of the theory: The weak version, rooted in Darwin's writings, proposes that facial expression modulates emotional states in a minor and limited manner. Thomas McCanne and Judith Anderson (1987) instructed participants to imagine pleasant or unpleasant imagery while they increased or suppressed activity in the facial muscle regions responsible for smiling or frowning: the zygomatic and corrugator regions, respectively. A subsequent change in participants' emotional response was implied to have occurred as a result of the intentional manipulation of these facial muscle regions. Development of the theory: The strong variation—coinciding with James' postulations—implies that facial feedback is independently and chiefly responsible for the onset and perception of an emotional state. Since the writings of Darwin and James, extensive research on the facial feedback hypothesis has been conducted, with multiple studies being largely formative to how the facial feedback hypothesis is defined, tested, and accepted; some of the most notable studies were conducted in the 1970s and 1980s—a period that was critical to the contemporary development of the facial feedback hypothesis. For example, arguably one of the most—if not the most—influential studies on the facial feedback hypothesis was conducted by Fritz Strack, Leonard L. Martin, and Sabine Stepper in 1988. Strack, Martin, and Stepper pioneered a technique in which researchers were able to measure the effect of the actions of smiling and frowning on affect by inducing such expressions in a manner undetectable to the participant, offering a level of control not previously achieved in similar studies. This was done by asking each participant to hold a pen between their teeth (inducing a smile) or between their lips (inducing a frown) while viewing comedic cartoons. The study concluded that participants who engaged in a smiling expression (pen between teeth) reported a higher humor response to the cartoons than participants who held a frowning expression (pen between lips). This study proved highly influential, not only contributing to widespread acceptance of the facial feedback hypothesis (e.g., being commonly cited in introductory psychology classes) but also leading numerous ensuing studies to utilize elements of the 1988 procedure. In 2016, a large-scale Registered Replication Report was conducted with the purpose of meticulously replicating Strack, Martin, and Stepper's study and testing the facial feedback hypothesis in 17 different labs across varying countries and cultures. However, this replication failed to reproduce the 1988 study's results, consequently failing to support the facial feedback hypothesis and casting doubt on the validity of Strack, Martin, and Stepper's study.
Development of the theory: Furthermore, Lanzetta et al. (1976) conducted an influential study in support of the facial feedback hypothesis, finding that participants who inhibited the display of pain-related expression had lower skin conductance responses (a measure commonly used to gauge activation of the sympathetic nervous system, or stress response) and lower subjective ratings of pain, compared with participants who openly expressed intense pain. Development of the theory: In general, however, research on the facial feedback hypothesis is characterized by the difficulty of measuring the effect of facial expressions on affect without alerting the participant to the nature of the study, and of ensuring that the connection between facial activity and the corresponding emotion is not implicit in the procedure. Methodological issues: Originally, the facial feedback hypothesis was studied through the enhancing or suppressing effect of facial efference on emotion in the context of spontaneous, "real" emotions elicited by stimuli. This resulted in "the inability of research using spontaneous efference to separate correlation from causality" (p. 264). Laird (1974) used a cover story (measuring muscular facial activity with electrodes) to induce particular facial muscle contractions in his participants without mentioning any emotional state. However, the higher funniness ratings of the cartoons obtained from those participants "tricked" into smiling may have been caused by their recognizing the muscular contraction and its corresponding emotion: the "self-perception mechanism", which Laird (1974) thought was at the root of the facial feedback phenomenon. Perceiving physiological changes, people "fill in the blank" by feeling the corresponding emotion. In the original studies, Laird had to exclude 16% (Study 1) and 19% (Study 2) of the participants because they had become aware of the physical and emotional connection during the study. Methodological issues: Another difficulty is whether the manipulation of the facial muscles caused so much exertion and fatigue that these, partially or wholly, produced the physiological changes and subsequently the emotion. Finally, the presence of physiological change may have been induced or modified by cognitive processes. Experimental confirmation: In an attempt to provide a clear assessment of the theory that a purely physical facial change, involving only certain facial muscles, can result in an emotion, Strack, Martin, and Stepper (1988) devised a cover story that would ensure the participants adopted the desired facial pose without being able to perceive either the corresponding emotion or the researchers' real motive. Told they were taking part in a study to determine the difficulty for people without the use of their hands or arms of accomplishing certain tasks, participants held a pen in their mouth in one of two ways. The Lip position would contract the orbicularis oris muscle, resulting in a frown. The Teeth position would engage the zygomaticus major or risorius muscle, resulting in a smile. The control group would hold the pen in their nondominant hand. All had to fill in a questionnaire in that position and rate the difficulty involved. The last task, which was the real objective of the test, was the subjective rating of the funniness of a cartoon. The test differed from previous methods in that there were no emotional states to emulate, dissimulate or exaggerate.
Experimental confirmation: As predicted, participants in the Teeth condition reported significantly higher amusement ratings than those in the Lips condition. The cover story and the procedure were found to be very successful at initiating the required contraction of the muscles without arousing suspicion or a cognitive interpretation of the facial action, and at avoiding significant demand and order effects. It has been suggested that more effort may be involved in holding a pen with the lips than with the teeth. To avoid this possible effort problem, Zajonc, Murphy and Inglehart (1989) had subjects repeat different vowels, provoking smiles with "ah" sounds and frowns with "ooh" sounds, for example, and again found a measurable effect of facial feedback. Ritual chanting of smile vowels has been found to be more pleasant than chanting of frown vowels, which may explain their comparative prevalence in religious mantra traditions. However, doubts about the robustness of these findings were voiced in 2016, when a replication series of the original 1988 experiment, coordinated by Eric-Jan Wagenmakers and conducted in 17 labs, did not find systematic effects of facial feedback. A subsequent analysis by Noah et al. identified a discrepancy in method from the original 1988 experiment as a possible reason for the lack of systematic effect in the replication series. Experimental confirmation: Together, a number of methodological issues associated with the facial feedback hypothesis seem to be resolved in favor of Darwin's hypothesis. The moderate yet significant effect of facial feedback on emotions opens the door to new research on the "multiple and nonmutually exclusive plausible mechanisms" of the effects of bodily activity on emotions. A 2019 meta-analysis of 138 studies confirmed small but robust effects. Studies using botulinum toxin (botox): Because facial expressions involve both motor (efferent) and sensory (afferent) mechanisms, it is possible that effects attributed to facial feedback are due solely to feedback mechanisms, or to feed-forward mechanisms, or to some combination of both. More recently, strong experimental support for a facial feedback mechanism has been provided through the use of botulinum toxin (commonly known as Botox) to temporarily paralyze facial muscles. Botox selectively blocks muscle feedback by blocking presynaptic acetylcholine release at the neuromuscular junction. Thus, while motor efference commands to the facial muscles remain intact, sensory afference from extrafusal muscle fibers, and possibly intrafusal muscle fibers, is diminished. Studies using botulinum toxin (botox): Several studies have examined the correlation between botox injections and emotion, and these suggest that the toxin could be used as a treatment for depression. Further studies have used experimental control to test the hypothesis that botox affects aspects of emotional processing. It has been suggested that treatment of the nasal muscles would reduce a person's ability to form a disgust response, which could offer a reduction of symptoms associated with obsessive-compulsive disorder. In a functional neuroimaging study, Andreas Hennenlotter and colleagues asked participants to perform a facial expression imitation task in an fMRI scanner before and two weeks after receiving botox injections in the corrugator supercilii muscle used in frowning.
During imitation of angry facial expressions, botox decreased activation of brain regions implicated in emotional processing and emotional experience (namely, the amygdala and the brainstem), relative to activations before botox injection. These findings show that facial feedback modulates neural processing of emotional content, and that botox changes how the human brain responds to emotional situations. Studies using botulinum toxin (botox): In a study of cognitive processing of emotional content, David Havas and colleagues asked participants to read emotional (angry, sad, happy) sentences before and two weeks after botox injections in the corrugator supercilii muscle used in frowning. Reading times for angry and sad sentences were longer after botox injection than before injection, while reading times for happy sentences were unchanged. This finding shows that facial muscle paralysis has a selective effect on processing of emotional content. It also demonstrates that cosmetic use of botox affects aspects of human cognition – namely, the understanding of language. Autism spectrum disorders: A study by Mariëlle Stel, Claudia van den Heuvel, and Raymond C. Smeets has shown that the facial feedback hypothesis does not hold for people with autism spectrum disorders (ASD); that is, "individuals with ASD do not experience feedback from activated facial expressions as controls do".
**R-spondin 1** R-spondin 1: R-spondin-1 is a secreted protein that in humans is encoded by the RSPO1 gene, found on chromosome 1. In humans, it interacts with WNT4 in the process of female sex development. Loss of function can cause female-to-male sex reversal. Furthermore, it promotes canonical WNT/β-catenin signaling. Structure: The protein has two cysteine-rich, furin-like domains and one thrombospondin type 1 domain. Function: Sex development (early gonads) RSPO1 is required for the early development of gonads, regardless of sex. It has been found in mice only eleven days after fertilization. To induce cell proliferation, it acts synergistically with WNT4. Together they help stabilize β-catenin, which activates downstream targets. If both are deficient in XY mice, there is less expression of SRY and a reduction in the amount of SOX9. Moreover, defects in vascularization are found. These occurrences result in testicular hypoplasia. Male-to-female sex reversal, however, does not occur because the Leydig cells remain normal; they are maintained by steroidogenic cells, now unrepressed. Function: Ovaries RSPO1 is necessary in female sex development. It augments the WNT/β-catenin pathway to oppose male sex development. In the critical gonadal stages, between six and nine weeks after fertilization, the ovaries upregulate it while the testes downregulate it. Function: Mucositis Oral mucosa has been identified as a target tissue for RSPO1. When administered to normal mice, it causes nuclear translocation of β-catenin in this region. Modulation of the WNT/β-catenin pathway occurs through the relief of Dkk1 inhibition. This results in increased basal cellularity, thickened mucosa, and elevated epithelial cell proliferation in the tongue. RSPO1 can therefore potentially aid in the treatment of mucositis, which is characterized by inflammation of the oral cavity and often accompanies chemotherapy and radiation in cancer patients with head and neck tumors. RSPO1 has also been shown to promote gastrointestinal epithelial cell proliferation in mice.
**Timeline of plastic development** Timeline of plastic development: This is a timeline of the development of plastics, comprising key discoveries and developments in the production of plastics.
**Seven basic tools of quality** Seven basic tools of quality: The seven basic tools of quality are a fixed set of visual exercises identified as being most helpful in troubleshooting issues related to quality. They are called basic because they are suitable for people with little formal training in statistics and because they can be used to solve the vast majority of quality-related issues. Overview: The seven tools are: the cause-and-effect diagram (also known as the "fishbone diagram" or Ishikawa diagram), the check sheet, the control chart, the histogram, the Pareto chart, the scatter diagram, and stratification (alternatively, the flow chart or run chart). The designation arose in postwar Japan, inspired by the seven famous weapons of Benkei. It was possibly introduced by Kaoru Ishikawa, who in turn was influenced by a series of lectures W. Edwards Deming had given to Japanese engineers and scientists in 1950. At that time, companies that had set about training their workforces in statistical quality control found that the complexity of the subject intimidated most of their workers, and scaled back training to focus primarily on simpler methods which suffice for most quality-related issues. The Project Management Institute references the seven basic tools in A Guide to the Project Management Body of Knowledge as an example of a set of general tools useful for planning or controlling project quality. The seven basic tools stand in contrast to more advanced statistical methods such as survey sampling, acceptance sampling, statistical hypothesis testing, design of experiments, multivariate analysis, and various methods developed in the field of operations research.
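As an illustration of how lightweight these tools are, the sketch below computes the limits for one of them, an individuals control chart, using the standard moving-range method (limits at the mean plus or minus 2.66 times the mean moving range, where 2.66 = 3/d2 with d2 = 1.128 for subgroups of two). The measurement data are made up for illustration.

```python
# Control limits for an individuals (X) chart via the moving-range method.
measurements = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.2]  # illustrative

mean = sum(measurements) / len(measurements)
moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

ucl = mean + 2.66 * mr_bar      # upper control limit
lcl = mean - 2.66 * mr_bar      # lower control limit

print(f"center line = {mean:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
for i, x in enumerate(measurements):
    if not lcl <= x <= ucl:
        print(f"point {i} ({x}) is out of control -> look for a special cause")
```

A point outside the computed limits signals a "special cause" worth investigating, which is exactly the kind of troubleshooting the seven tools were assembled for.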
**High Com** High Com: The High Com (also as HIGH COM, both written with a thin space) noise reduction system was developed by Telefunken, Germany, in the 1970s as a high-quality, high-compression analogue compander for audio recordings. High Com: The idea of a compander for consumer devices was based on studies of a fixed two-band compander by Jürgen Wermuth of AEG-Telefunken ELA, Wolfenbüttel, developer of the Telefunken telcom c4 (formally abbreviated as "TEL" in professional broadcasting) four-band audio compander for professional use. In April 1974 the resulting "RUSW-200" prototype appeared; from July 1974 it led to the development of a sliding two-band compander by Ernst F. Schröder of the Telefunken Grundlagenlaboratorium, Hannover. High Com: However, the finally released High Com system, which was marketed by Telefunken from 1978, worked as a broadband 2:1:2 compander, achieving almost 15 dB of noise reduction at low frequencies and up to 20 dB RMS A-weighted at higher frequencies, reducing the noise power to 1% while avoiding most of the acoustic problems observed with other high-compression broadband companders such as EMT/NoiseBX, Burwen or dbx. In order to facilitate cost-effective mass production in consumer devices such as cassette decks, the compander system was integrated into an analogue IC, the TFK U401B / U401BG / U401BR, developed by Dietrich Höppner and Kurt Hintzmann of AEG-Telefunken Halbleiterwerk, Heilbronn. The chip contained more than 500 transistors. With minimal changes in the external circuitry, the IC could also be used to emulate a mostly Dolby B-compatible compander, as in the DNR (Dynamic Noise Reduction) system, for backward compatibility. Consequently, second-generation tape decks with High Com incorporated a DNR expander as well, whereas in some late-generation Telefunken, ASC and Universum tape decks this even worked during recording, but was left undocumented for legal reasons. High-Com II and III: Nakamichi, one of the more than 25 licensees of the High Com system, supported the development of a noise reduction system that could exceed the capabilities of the then-prevalent Dolby B-type system. However, it became apparent that a single-band compander without sliding-band technology, which was protected by Dolby patents, suffered too many audible artifacts. So High Com was further developed into the two-band High Com II and three-band High Com III 2:1:2 systems by Werner Scholz and Ernst F. Schröder of Telefunken, assisted by Harron K. Appleman of Nakamichi, in 1978/1979. The two-band variant was eventually released exclusively as the Nakamichi High-Com II Noise Reduction System later in 1979, increasing the amount of noise reduction on analogue recordings and transmissions by as much as 25 dB A-weighted. High-Com II for records: While originally designed for tape recordings, Nakamichi demonstrated the use of High Com II on LP records as well in 1979. In 1982, the same AEG-Telefunken team that designed the High Com noise reduction system also developed the IC U2141B for the CBS Laboratories CX noise reduction system for LP records, a system also incorporated into FMX, a noise reduction system for FM broadcasting developed by CBS. High Com FM: Similar to the earlier Dolby FM system in the US, a High Com FM system was evaluated in Germany between July 1979 and December 1981 by the IRT. It was also considered for adoption in AM broadcasting.
It was based on the High Com broadband compander, but was eventually changed to achieve only 10 dB(A) of noise reduction in order to improve compatibility with the existing base of receivers without a built-in expander. The system was field-trialed in public German FM broadcasting between 1981 and 1984 and was also discussed as an option for introduction in Austria and France. However, despite the improvements, it was eventually not introduced commercially because of the listening artifacts it created on receivers without an expander. Impact: Besides Telefunken's own CN 750 High Com compander box, other companies also offered external High Com compander boxes, such as the Aiwa HR-7 and HR-50, the Rotel RN-500 and RN-1000, the Diemme Sonic-distributed Aster Dawn SC 505, and the Starsonic DL 506 distributed by D.A.A.F. A low-cost implementation of the Telefunken High Com system as an external compander box became available as Hobby-Com, developed by Telefunken product development and Thomsen-Elektronik for WDR, distributed by vgs, and promoted for do-it-yourself assembly on the popular TV series Hobbythek by Jean Pütz on 7 February 1980. In 1981 and 1982, do-it-yourself High Com kits were introduced by elektor (elektor compander/Hi-Fi-Kompander) and G.B.C. Amtron (micro line High-Com System UK 512 (W)). The only compander available for High-Com II was Nakamichi's own High-Com II unit. More than one million High Com systems were sold between 1978 and 1982. While implemented in dozens of European and Japanese consumer device models and acoustically much superior to other systems such as Dolby B, C, dbx, adres or Super D, the High Com family of systems never gained a similar market penetration. This was caused by several factors, including the existing predominance of the Dolby system, with Dolby Laboratories introducing the "good enough" Dolby C update (with up to 15 dB A-weighted improvement) in 1980 as well, and also by the fact that High Com required higher-quality tape decks and tapes in order to give satisfactory results. Impact: High-Com II even required calibration of the playback level using a 400 Hz, 0 dB, 200 nWb/m calibration tone for optimum results, and at several hundred dollars the external Nakamichi compander box was much too expensive for most people outside the small group of audiophiles using high-end tape recorders or open-reel decks. Impact: When AEG-Telefunken struggled financially in 1981/1982 and the Hannover development site was partially disbanded and refocused on digital technologies in 1983, this brought High Com development to an end. The latest tape decks to come with High Com were produced in 1986.
Several software decoders have been developed for telcom c4 and High Com, and implementations for High-Com II are under consideration. Tape decks with High Com: These tape decks are known to provide built-in support for High Com: Akai GX-F37; ASC AS 3000; Rosita Audion D 700; Blaupunkt XC-240, XC-1400; ELIN Professional Micro Component Cassette Deck - Modell TC-97; Eumig FL-1000µP High Com; Filtronic FSK-200; Grundig MCF 200, MCF 600, CF 5100, SCF 6200; hgs ELECTRONIC Mini Altus HiFi-System Micro Component Stereo Cassette Deck; Hitachi D-E75 DB/SL; Imperial TD 6100; Intel Professional Micro Component Cassette Deck - Modell TC-97; Körting C 102, C 220; Revox B710 High Com; Neckermann Palladium Mico Line 2000C; Nikko ND-500H; nippon TD-3003; Saba CD278, CD 362, CD 363; Schneider SL 7270 C; Sencor SD-6650; Siemens RC 333, RC 300; Studer A710 High Com; Telefunken TC 750, TC 450, TC 450M, TC 650, TC 650M, STC 1 / CC 20, MC 1, MC 2, HC 700, HC 800, HC 1500, HC 3000, HC 750M, HC 730T, RC 100, RC 200, RC 300, Hifi Studio 1, Hifi Studio 1M, Studio Center 5004, Studio Center 5005, Studio Center 7004; Tensai TFL-812; Uher CG 321, CG 325, CG 344, CG 356, CG 365, mini-hit; Quelle Universum Senator CT 2307, Senator CT 2307A, CT 2318 (for SYSTEM HIFI 7500 SL), Senator CT 2337, Senator VTCF 407; Wangine K-3M, WSK-120, WSK-220. Other devices can be used with an external High Com compander box.
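To illustrate the broadband 2:1:2 principle described above, here is a minimal static sketch of compression and expansion in the level (decibel) domain. It is a didactic simplification, not a model of the actual High Com circuit: the real system applies its gain changes dynamically and with frequency weighting, and the reference level and noise-floor figure used here are assumptions.

```python
# Static 2:1:2 companding illustration: levels (dB relative to a reference) are
# halved before tape and doubled on playback, so tape hiss added in between is
# pushed down by the expander in quiet passages.
REF_DB = 0.0          # reference level: signals at 0 dB pass unchanged (assumed)
HISS_DB = -50.0       # assumed tape-noise floor

def compress(level_db: float) -> float:
    """Encode: halve the distance from the reference level (2:1)."""
    return REF_DB + (level_db - REF_DB) / 2

def expand(level_db: float) -> float:
    """Decode: double the distance from the reference level (1:2)."""
    return REF_DB + (level_db - REF_DB) * 2

for signal in (0.0, -20.0, -40.0):
    on_tape = compress(signal)      # quiet passages are recorded hotter
    restored = expand(on_tape)      # ... and restored to their original level
    print(f"in {signal:6.1f} dB -> tape {on_tape:6.1f} dB -> out {restored:6.1f} dB")

# With no signal present, the expander pushes the bare hiss far below its
# recorded level, which is the source of the quoted noise-reduction figures.
print(f"idle hiss: {HISS_DB:.1f} dB on tape -> {expand(HISS_DB):.1f} dB at the output")
```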
**Liouvillian function** Liouvillian function: In mathematics, the Liouvillian functions comprise a set of functions including the elementary functions and their repeated integrals. Liouvillian functions can be recursively defined as integrals of other Liouvillian functions. Liouvillian function: More explicitly, a Liouvillian function is a function of one variable which is the composition of a finite number of arithmetic operations (+, −, ×, ÷), exponentials, constants, solutions of algebraic equations (a generalization of nth roots), and antiderivatives. The logarithm function does not need to be explicitly included since it is the integral of 1/x. It follows directly from the definition that the set of Liouvillian functions is closed under arithmetic operations, composition, and integration. It is also closed under differentiation. It is not closed under limits and infinite sums. Liouvillian functions were introduced by Joseph Liouville in a series of papers from 1833 to 1841. Examples: Examples of well-known functions which are Liouvillian but not elementary are the nonelementary antiderivatives, for example: the error function, $\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,dt$; the exponential (Ei), logarithmic (Li or li) and Fresnel (S and C) integrals. All Liouvillian functions are solutions of algebraic differential equations, but not conversely. Examples of functions which are solutions of algebraic differential equations but not Liouvillian include: the Bessel functions (except special cases); the hypergeometric functions (except special cases). Examples of functions which are not solutions of algebraic differential equations and thus not Liouvillian include all transcendentally transcendental functions, such as: the gamma function; the zeta function.
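The recursive definition can be made concrete by building the error function mentioned above in stages, where each stage applies one operation permitted by the definition:

```latex
% Building erf step by step: each line applies one operation allowed by the
% recursive definition of a Liouvillian function.
\begin{align*}
  f_1(t) &= t^2                              && \text{arithmetic: } t \times t\\
  f_2(t) &= e^{-f_1(t)} = e^{-t^2}           && \text{exponential of a Liouvillian function}\\
  f_3(x) &= \int_0^x e^{-t^2}\,dt            && \text{antiderivative: closure under integration}\\
  \operatorname{erf}(x) &= \tfrac{2}{\sqrt{\pi}}\, f_3(x) && \text{multiplication by a constant}
\end{align*}
```

The first two stages alone yield an elementary function; it is the final integration step that leaves the elementary functions while staying inside the Liouvillian class.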
**Sewing gauge** Sewing gauge: A sewing gauge is a ruler, typically 6 inches long, used for measuring short spaces. It is typically a metal scale, marked in both inches and centimeters, with a sliding pointer, similar in use to a caliper. It is used to mark hems for alterations as well as intervals between pleats and buttonholes and buttonhole lengths. It can also be used as a compass to draw arcs and circles by anchoring the slider with a pin and placing the tip of a marking pencil in the hole located at the end of the scale. Some models also incorporate a button shank and a blunt point for turning corners right side out.
**Essoin** Essoin: In old English law, an essoin (Old French essoignier, "to excuse") is an excuse for nonappearance in court. Essoining is the seeking of such an excuse. The person sent to deliver the excuse to the court is an essoiner or essoineur. There were several kinds of essoins in common law in the Middle Ages: An essoin de malo lecti, the "excuse of the bed of sickness", was an excuse that the person was too ill to get out of bed, and was generally only invoked in civil actions involving real property. This required that the invoker be observed in bed by a commission of four knights. Essoin: An essoin de ultra mare, the "excuse of being overseas" (literally "beyond the sea"), was an excuse that the person was abroad. The only delay to litigation permissible for this excuse was enough time for word to be sent to the person and for them to return to England ("forty days and one ebb and one flood" being a conventional formula), and the excuse could only be invoked once, at the start of litigation. Essoin: An essoin de servicio (or per servitium) regis, the "excuse of the King's service", was the excuse that the person concerned was in the King's service at the time and thus unavailable. It required the production of the King's writ of service for proof. By the Statute of Essoins 1318 (12 Edw. II. St. 2), women (with a few exceptions) could not make this excuse. Essoin: An essoin de malo veniendi, the "excuse of becoming ill en route", was the excuse that the person had fallen ill on the way to court. It originally required either some form of proof from the messenger who carried word that the person had fallen ill, or the sworn testimony of the person concerned that he had been ill, given once he finally arrived at court. However, during the 13th century these requirements gradually came to be waived, and even to be considered oppressive. Essoins were originally received at court on essoin day, the first day of the term of the court. However, by 11 Geo. IV and 1 Wil. IV, essoin days were abolished. Essoins, and the day to which proceedings had as a result been adjourned, would be entered on an essoin roll.
**Carbon fibers** Carbon fibers: Carbon fibers or carbon fibres (alternatively CF, graphite fiber or graphite fibre) are fibers about 5 to 10 micrometers (0.00020–0.00039 in) in diameter and composed mostly of carbon atoms. Carbon fibers have several advantages: high stiffness, high tensile strength, a high strength-to-weight ratio, high chemical resistance, high-temperature tolerance, and low thermal expansion. These properties have made carbon fiber very popular in aerospace, civil engineering, the military, motorsports, and other competition sports. However, they are relatively expensive compared to similar fibers, such as glass fiber, basalt fibers, or plastic fibers. To produce a carbon fiber, the carbon atoms are bonded together in crystals that are more or less aligned parallel to the fiber's long axis, as the crystal alignment gives the fiber a high strength-to-volume ratio (in other words, it is strong for its size). Several thousand carbon fibers are bundled together to form a tow, which may be used by itself or woven into a fabric. Carbon fibers are usually combined with other materials to form a composite. For example, when permeated with a plastic resin and baked, the result is carbon-fiber-reinforced polymer (often referred to simply as carbon fiber), which has a very high strength-to-weight ratio and is extremely rigid although somewhat brittle. Carbon fibers are also composited with other materials, such as graphite, to form reinforced carbon-carbon composites, which have a very high heat tolerance. Carbon fibers: Carbon fiber-reinforced composite materials are used to make aircraft and spacecraft parts, racing car bodies, golf club shafts, bicycle frames, fishing rods, automobile springs, sailboat masts, and many other components where light weight and high strength are needed. History: In 1860, Joseph Swan produced carbon fibers for the first time, for use in light bulbs. In 1879, Thomas Edison baked cotton threads or bamboo slivers at high temperatures, carbonizing them into an all-carbon fiber filament used in one of the first incandescent light bulbs to be heated by electricity. In 1880, Lewis Latimer developed a reliable carbon wire filament for the incandescent light bulb, heated by electricity. In 1958, Roger Bacon created high-performance carbon fibers at the Union Carbide Parma Technical Center located outside of Cleveland, Ohio. Those fibers were manufactured by heating strands of rayon until they carbonized. This process proved to be inefficient, as the resulting fibers contained only about 20% carbon. In the early 1960s, a process was developed by Dr. Akio Shindo at the Agency of Industrial Science and Technology of Japan, using polyacrylonitrile (PAN) as a raw material. This produced a carbon fiber that contained about 55% carbon. In 1960, Richard Millington of H.I. Thompson Fiberglas Co. developed a process (US Patent No. 3,294,489) for producing a high carbon content (99%) fiber using rayon as a precursor. These carbon fibers had sufficient strength (modulus of elasticity and tensile strength) to be used as a reinforcement for composites having high strength-to-weight properties and for high-temperature-resistant applications. History: The high potential strength of carbon fiber was realized in 1963 in a process developed by W. Watt, L. N. Phillips, and W. Johnson at the Royal Aircraft Establishment at Farnborough, Hampshire.
The process was patented by the UK Ministry of Defence, then licensed by the British National Research Development Corporation to three companies: Rolls-Royce, who were already making carbon fiber; Morganite; and Courtaulds. Within a few years, after the successful use in 1968 of a Hyfil carbon-fiber fan assembly in the Rolls-Royce Conway jet engines of the Vickers VC10, Rolls-Royce took advantage of the new material's properties to break into the American market with its RB-211 aero-engine with carbon-fiber compressor blades. Unfortunately, the blades proved vulnerable to damage from bird impact. This problem and others caused Rolls-Royce such setbacks that the company was nationalized in 1971. The carbon-fiber production plant was sold off to form Bristol Composite Materials Engineering Ltd (often referred to as Bristol Composites). History: In the late 1960s, the Japanese took the lead in manufacturing PAN-based carbon fibers. A 1970 joint technology agreement allowed Union Carbide to manufacture Japan's Toray Industries product. Morganite decided that carbon-fiber production was peripheral to its core business, leaving Courtaulds as the only big UK manufacturer. Courtaulds's water-based inorganic process made the product susceptible to impurities that did not affect the organic process used by other carbon-fiber manufacturers, leading Courtaulds to cease carbon-fiber production in 1991. History: During the 1960s, experimental work to find alternative raw materials led to the introduction of carbon fibers made from a petroleum pitch derived from oil processing. These fibers contained about 85% carbon and had excellent flexural strength. Also during this period, the Japanese Government heavily supported carbon fiber development at home, and several Japanese companies such as Toray, Nippon Carbon, Toho Rayon and Mitsubishi started their own development and production. Since the late 1970s, further types of carbon fiber yarn have entered the global market, offering higher tensile strength and higher elastic modulus: for example, T400 from Toray, with a tensile strength of 4,000 MPa, and M40, with a modulus of 400 GPa. Intermediate carbon fibers, such as IM 600 from Toho Rayon with up to 6,000 MPa, were developed. Carbon fibers from Toray, Celanese and Akzo found their way into aerospace applications, from secondary to primary parts, first in military and later in civil aircraft, as in McDonnell Douglas, Boeing, Airbus, and United Aircraft Corporation planes. In 1988, Dr. Jacob Lahijani invented a pitch carbon fiber with balanced ultra-high Young's modulus (greater than 100 Mpsi) and high tensile strength (greater than 500 kpsi), used extensively in automotive and aerospace applications. In March 2006, the patent was assigned to the University of Tennessee Research Foundation. Structure and properties: Carbon fiber is frequently supplied in the form of a continuous tow wound onto a reel. The tow is a bundle of thousands of continuous individual carbon filaments held together and protected by an organic coating, or size, such as polyethylene oxide (PEO) or polyvinyl alcohol (PVA). The tow can be conveniently unwound from the reel for use. Each carbon filament in the tow is a continuous cylinder with a diameter of 5–10 micrometers and consists almost exclusively of carbon. The earliest generation (e.g. T300, HTA and AS4) had diameters of 16–22 micrometers. Later fibers (e.g.
IM6 or IM600) have diameters that are approximately 5 micrometers. The atomic structure of carbon fiber is similar to that of graphite, consisting of sheets of carbon atoms arranged in a regular hexagonal pattern (graphene sheets), the difference being in the way these sheets interlock. Graphite is a crystalline material in which the sheets are stacked parallel to one another in regular fashion. The intermolecular forces between the sheets are relatively weak van der Waals forces, giving graphite its soft and brittle characteristics. Structure and properties: Depending upon the precursor used to make the fiber, carbon fiber may be turbostratic or graphitic, or have a hybrid structure with both graphitic and turbostratic parts present. In turbostratic carbon fiber the sheets of carbon atoms are haphazardly folded, or crumpled, together. Carbon fibers derived from polyacrylonitrile (PAN) are turbostratic, whereas carbon fibers derived from mesophase pitch are graphitic after heat treatment at temperatures exceeding 2200 °C. Turbostratic carbon fibers tend to have high ultimate tensile strength, whereas heat-treated mesophase-pitch-derived carbon fibers have high Young's modulus (i.e., high stiffness or resistance to extension under load) and high thermal conductivity. Applications: Carbon fiber can cost more than other materials, which has been one of the limiting factors of adoption. In a comparison between steel and carbon fiber materials for automotive use, carbon fiber may be 10–12 times more expensive. However, this cost premium has come down over the past decade from estimates of 35 times more expensive than steel in the early 2000s. Applications: Composite materials Carbon fiber is most notably used to reinforce composite materials, particularly the class of materials known as carbon fiber or graphite reinforced polymers. Non-polymer materials can also be used as the matrix for carbon fibers. Due to the formation of metal carbides and corrosion considerations, carbon has seen limited success in metal matrix composite applications. Reinforced carbon-carbon (RCC) consists of carbon fiber-reinforced graphite, and is used structurally in high-temperature applications. The fiber also finds use in the filtration of high-temperature gases, as an electrode with high surface area and excellent corrosion resistance, and as an anti-static component. Molding a thin layer of carbon fibers significantly improves the fire resistance of polymers or thermoset composites because a dense, compact layer of carbon fibers efficiently reflects heat. The increasing use of carbon fiber composites is displacing aluminum from aerospace applications in favor of other metals because of galvanic corrosion issues. Note, however, that carbon fiber does not eliminate the risk of galvanic corrosion. In contact with metal, it forms "a perfect galvanic corrosion cell ..., and the metal will be subjected to galvanic corrosion attack" unless a sealant is applied between the metal and the carbon fiber. Carbon fiber can be used as an additive to asphalt to make electrically conductive asphalt concrete. Using this composite material in transportation infrastructure, especially for airport pavement, decreases some winter maintenance problems that lead to flight cancellation or delay due to the presence of ice and snow. Passing current through the composite material's 3D network of carbon fibers dissipates thermal energy that increases the surface temperature of the asphalt, which is able to melt ice and snow above it.
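A back-of-the-envelope sketch of the resistive (Joule) heating idea behind conductive asphalt follows. Every figure in it is an assumed round number chosen for illustration; real slabs are characterized experimentally.

```python
# Joule-heating arithmetic for a hypothetical conductive-asphalt slab section.
voltage = 48.0          # applied voltage across embedded electrodes, V (assumed)
resistance = 2.0        # resistance of the slab section, ohms (assumed)
slab_area_m2 = 1.5      # heated surface of that section, m^2 (assumed)

power_w = voltage ** 2 / resistance     # P = V^2 / R
density = power_w / slab_area_m2        # delivered heating power per square meter

print(f"{power_w:.0f} W total, {density:.0f} W/m^2 of pavement")
# Published snow-melting designs typically target a few hundred W/m^2, so a
# section with these assumed figures would plausibly keep its surface ice-free.
```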
Applications: Textiles
Precursors for carbon fibers are polyacrylonitrile (PAN), rayon and pitch. Carbon fiber filament yarns are used in several processing techniques: the direct uses are for prepregging, filament winding, pultrusion, weaving, braiding, etc. Carbon fiber yarn is rated by its linear density (weight per unit length, where 1 g per 1,000 m = 1 tex) or by its number of filaments, in thousands. For example, a 200 tex yarn of 3,000 carbon filaments is three times as strong as a 1,000-filament yarn, but also three times as heavy (see the worked sketch below). This thread can then be used to weave a carbon fiber filament fabric or cloth. The appearance of this fabric generally depends on the linear density of the yarn and the weave chosen. Some commonly used types of weave are twill, satin and plain. Carbon filament yarns can also be knitted or braided.

Applications: Microelectrodes
Carbon fibers are used for the fabrication of carbon-fiber microelectrodes. In this application, typically a single carbon fiber with a diameter of 5–7 μm is sealed in a glass capillary. At the tip, the capillary is either sealed with epoxy and polished to make a carbon-fiber disk microelectrode, or the fiber is cut to a length of 75–150 μm to make a carbon-fiber cylinder electrode. Carbon-fiber microelectrodes are used in amperometry or fast-scan cyclic voltammetry for the detection of biochemical signaling.

Applications: Flexible heating
Despite being known for their electrical conductivity, carbon fibers can carry only very low currents on their own. When woven into larger fabrics, they can be used to reliably provide (infrared) heating in applications requiring flexible electrical heating elements, and they can easily sustain temperatures past 100 °C. Many examples of this type of application can be seen in DIY heated articles of clothing and blankets. Owing to its chemical inertness, carbon fiber fabric can be used relatively safely amongst most fabrics and materials; however, shorts caused by the material folding back on itself produce extra heat and can lead to a fire.

Synthesis: Each carbon filament is produced from a polymer such as polyacrylonitrile (PAN), rayon, or petroleum pitch, known as the precursor. For synthetic polymers such as PAN or rayon, the precursor is first spun into filament yarns, using chemical and mechanical processes to initially align the polymer molecules in a way that enhances the final physical properties of the completed carbon fiber. Precursor compositions and the mechanical processes used during spinning may vary among manufacturers. After drawing or spinning, the polymer filament yarns are heated to drive off non-carbon atoms (carbonization), producing the final carbon fiber. The carbon fiber filament yarns may be further treated to improve handling qualities, then wound onto bobbins.

Synthesis: A common method of manufacture involves heating the spun PAN filaments to approximately 300 °C in air, which breaks many of the hydrogen bonds and oxidizes the material. The oxidized PAN is then placed into a furnace with an inert atmosphere of a gas such as argon and heated to approximately 2,000 °C, which induces graphitization of the material, changing the molecular bond structure. When heated under the correct conditions, the polymer chains bond side-to-side (forming ladder polymers), creating narrow graphene sheets which eventually merge to form a single, columnar filament. The result is usually 93–95% carbon.
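The yarn-rating arithmetic from the Textiles paragraph above can be made concrete. The sketch below derives a yarn's tex value from assumed filament properties (a 7 μm diameter and a density of 1,800 kg/m³ are typical textbook values, not figures from this article) and reproduces the 3,000-filament example.

```python
import math

# Linear density (tex) of a carbon fiber yarn from filament geometry.
# Assumed filament properties (typical values, not from the article):
FILAMENT_DIAMETER_M = 7e-6      # 7 micrometers
FIBER_DENSITY_KG_M3 = 1800.0    # ~1.8 g/cm^3

def filament_tex(diameter_m, density_kg_m3):
    """Tex (grams per 1,000 m) of a single filament."""
    area = math.pi * (diameter_m / 2) ** 2   # cross-section, m^2
    kg_per_m = density_kg_m3 * area          # linear density, kg/m
    return kg_per_m * 1000.0 * 1000.0        # g per 1,000 m

tex_1 = filament_tex(FILAMENT_DIAMETER_M, FIBER_DENSITY_KG_M3)
for n in (1_000, 3_000):
    print(f"{n:>5}-filament yarn: ~{n * tex_1:.0f} tex")
# ~69 tex for 1k filaments and ~208 tex for 3k: the 3k yarn carries
# three times the load-bearing cross-section and three times the
# weight, consistent with the 200 tex figure quoted in the text.
```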
Lower-quality fiber can be manufactured using pitch or rayon as the precursor instead of PAN. The carbon can be further enhanced, into high-modulus or high-strength carbon, by heat treatment processes. Carbon heated in the range of 1,500–2,000 °C (carbonization) exhibits the highest tensile strength (5,650 MPa, or 820,000 psi), while carbon fiber heated from 2,500 to 3,000 °C (graphitization) exhibits a higher modulus of elasticity (531 GPa, or 77,000,000 psi).
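Since the article mixes SI and US customary units (MPa/psi, GPa/Mpsi), a small conversion helper makes the quoted figures easy to cross-check. The two data points below are the ones from the paragraph above; running the helper reproduces the psi figures quoted in the text.

```python
PA_PER_PSI = 6894.757  # 1 psi = 6,894.757 Pa (accurate to the digits shown)

def mpa_to_psi(mpa):
    """Convert megapascals to pounds per square inch."""
    return mpa * 1e6 / PA_PER_PSI

# Heat-treatment trade-off quoted in the text:
strength_mpa = 5650   # carbonized at 1,500-2,000 degC
modulus_gpa = 531     # graphitized at 2,500-3,000 degC

print(f"tensile strength: {mpa_to_psi(strength_mpa):,.0f} psi")        # ~819,000
print(f"elastic modulus:  {mpa_to_psi(modulus_gpa * 1000):,.0f} psi")  # ~77,000,000
```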
**IRiver Clix**

IRiver Clix: The iRiver Clix (stylised iriver clix) is a portable media player that was developed and sold by iriver through two generations. The Clix was originally known as the U10, released in 2005. The next year it was revised and essentially rebranded as the Clix. A second-generation player, often called the Clix 2, was released in 2007, followed later by a minor revision called the Clix+. The players are navigated by four buttons embedded in the device's sides, a system referred to as D-Click.

U10: iRiver introduced the U10 in June 2005. It was available in capacities of 512 MB and 1 GB. The player has a 2.2-inch (55 mm) 18-bit (262,144 colors) QVGA (320 x 240) TFT LCD screen covering most of its faceplate. The screen sits above the buttons of the D-Click System, which lets the device be operated in a touch-like fashion despite the screen not being touch sensitive. There are also minimal-sized buttons on the sides for power, button lock, and volume, plus a Pivot key that instantly changes the screen orientation.

The U10 supports the audio formats MP3, WMA (including protected WMA), and Ogg Vorbis. As with some previous iRiver players, it includes SRS WOW 3D sound technology. Additionally, it plays content in the MPEG-4 SP video format (other formats are converted with included software), the Unicode text format, and Flash Lite games and animations. There is also a built-in FM tuner and recorder, a microphone, and an alarm clock.

An optional docking cradle was also sold for the U10, alongside a remote control. The cradle has stereo speakers, an additional line-in input, and a snooze button on the top so that it can be used like an alarm clock.

Clix: In May 2006, the iRiver Clix was introduced. While physically identical to the U10, the Clix had an overhauled user interface with improved performance. It was initially provided in 1 GB and 2 GB capacities and retailed for a lower price than the U10 had. In November 2006, a 4 GB version was released, retailing for $200 in the United States. The Pivot key was replaced by the "Smart Key", a customisable button that the user can assign to various functions.

Clix: iRiver also worked with Microsoft and MTV, offering immediate compatibility with Windows Media Player 11 (then in beta) and MTV's Urge online music service. The Clix is also PlaysForSure certified.

Clix 2: iRiver previewed several new players at the 2007 Consumer Electronics Show, including a smaller version of the Clix (the S10), a screenless one (the S7), and a new version of the Clix. In April 2007, the second-generation Clix (stylised clix2) was released worldwide in 2 GB, 4 GB and later 8 GB versions. This version is much thinner (12.8 millimetres (0.50 in) instead of 16.4 millimetres (0.65 in)), and its screen is AMOLED (active-matrix organic light-emitting diode), which enables unlimited viewing angles compared to LCDs. It was the world's first multimedia device with an AMOLED display. In addition, the second-generation Clix improved MPEG-4 video support to 30 frames per second, and added WMV support. The free, Java-based iriverter program can convert most video formats into playable files using the firmware's unofficial support of the XviD 1.1.0 codec (see the conversion sketch below).

Clix 2: The 8 GB version of the player was released on 11 July 2007 in South Korea and by September elsewhere. A Red Line version, which has a red stripe on its edges, was later released, initially in 8 GB, though a 4 GB version was also sold.
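The iriverter tool mentioned above essentially wraps a standard transcode to Clix-friendly parameters. As an illustration only, the sketch below drives ffmpeg (not iriverter itself) from Python to produce a QVGA, 30 fps XviD file; the bitrates, the container choice, and the claim that this matches iriverter's output profile are assumptions, and the input filename is hypothetical.

```python
import subprocess

def convert_for_clix(src: str, dst: str = "out.avi") -> None:
    """Transcode a video to XviD MPEG-4 at QVGA/30 fps, roughly the
    profile the second-generation Clix is reported to play.
    Illustrative only: bitrates and container are assumptions, not
    the documented iriverter settings. Requires an ffmpeg build with
    libxvid and libmp3lame enabled."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,                 # input file
            "-c:v", "libxvid",         # XviD MPEG-4 encoder
            "-vf", "scale=320:240",    # QVGA resolution
            "-r", "30",                # 30 frames per second
            "-b:v", "500k",            # assumed video bitrate
            "-c:a", "libmp3lame",      # MP3 audio track
            "-b:a", "128k",            # assumed audio bitrate
            dst,
        ],
        check=True,
    )

convert_for_clix("holiday_clip.mkv")  # hypothetical input file
```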
In July 2007, a version called the Clix Rhapsody debuted in the United States, supporting the DRM-based subscription service Rhapsody. The second-generation Clix was a key product in the attempt to overturn the company's fortunes. This new Clix is also highly customisable, with support for interface themes, backgrounds and custom TrueType fonts. It provides MTP or direct access to its UMS filesystem through mini USB, in place of proprietary connectors.

Clix 2: Clix+
An update to the second-generation Clix was released in South Korea in December 2007, adding a DMB receiver. It was also previewed at the 2008 Consumer Electronics Show, with a western release announced.

Lplayer: The Lplayer is essentially a smaller version of the Clix and U10. It has a 2-inch display and was released in 2008.

Reception: Trusted Reviews called the iRiver U10's interface "innovative" and the player generally "feature laden", but criticised its high price and the difficulty of getting music onto it. CNET, with a score of 8.3 out of 10, called it "sleek and stylish" and praised the battery life, but disliked its price, the maximum 1 GB capacity, and the lack of album art support.

The original Clix was well received by most reviewers, and became the highest-scored MP3 player on CNET with a score of 8.4. CNET called the user interface "excellent" and praised its features. PC Mag UK gave it 4 out of 5, praising the design, sound quality and extras, but criticising the lack of pack-in video conversion software and noting that the D-Click "can be annoying". AnythingButiPod.com commented that the previous U10 was overpriced, whereas the Clix was more reasonably priced while still offering improvements. It noted some of its market rivals as being the Sansa e200 and the Samsung YP-Z5.

The second-generation Clix was also well received by most reviewers. CNET's editorial review, which gave the player an Editor's Choice award, praised its "unique and intuitive interface and stellar audio quality". Calling it the "Nano killer", CNET scored it 8.7 out of 10, dethroning its predecessor to become CNET's highest-rated MP3 player. PC Magazine stated that the player had "very good audio and photo quality, long battery life, and a host of extras". Trusted Reviews, with a score of 4.5 out of 5, called it "possibly the most desirable portable media player", praising the style, screen and sound quality. Computerworld said that the Clix line had evolved into the "ideal media player". A commonly mentioned disadvantage of the Clix 2 was the lack of included video conversion software, although such software later became available for download via iRiver America's site.

Reception: Sales
The second-generation Clix sold about 180,000 units in South Korea from its launch in February 2007 to December 2007.