**NAS Award for Chemistry in Service to Society** The NAS Award for Chemistry in Service to Society is awarded by the U.S. National Academy of Sciences "for contributions to chemistry, either in fundamental science or its application, that clearly satisfy a societal need." It has been awarded every two years since its inception in 1991. List of NAS Award for Chemistry in Service to Society winners (source: NAS):
- John C. Martin (2019): For his contributions to the development of antiviral medications used to treat even the most refractory of the deadly diseases, including HIV/AIDS, HCV, HBV, CMV, and flu, impacting hundreds of millions of individuals around the world, and for his tireless efforts to ensure all of humanity, rich and poor alike, benefit.
- Leroy E. Hood (2017): For his invention, commercialization and development of multiple chemical tools that address biological complexity, including the automated DNA sequencer that spearheaded the human genome project.
- Bruce D. Roth (2015): For his discovery, synthesis and commercial development of atorvastatin (Lipitor), the most successful cholesterol-lowering medicine in history.
- Edward C. Taylor (2013): For his contributions to heterocyclic chemistry, in particular the discovery of the new-generation antifolate pemetrexed, approved for the treatment of mesothelioma and non-small cell lung cancer and under clinical investigation for treatment of a variety of other solid tumors.
- Paul J. Reider (2011): For his contributions to the discovery and development of numerous approved drugs, including those for treating asthma and for treating AIDS.
- John D. Roberts (2009): For seminal contributions in physical organic chemistry, in particular the introduction of NMR spectroscopy to the chemistry community.
- Arthur A. Patchett (2007): For innovative contributions in discoveries of Mevacor, the first statin that lowers cholesterol levels, and of Vasotec and Prinivil for treating hypertension and congestive heart failure.
- Marvin H. Caruthers (2005): For his invention and development of chemical reagents and methods currently used for the automated synthesis of DNA oligonucleotides (i.e., the "gene machine").
- Paul S. Anderson (2003): For his scientific leadership in two drugs approved for the treatment of AIDS and for his widely cited basic research related to the glutamate receptor.
- Paul C. Lauterbur (2001): For his research on nuclear magnetic resonance and its applications in chemistry and medicine, and his contributions to the development of magnetic resonance imaging in medicine.
- Grant Willson (1999): For his fundamental contribution to the chemistry of materials that produce micropatterns in semiconductors, and for its widespread application in the microelectronics industry for the benefit of society.
- Ernest L. Eliel (1997): For his seminal and far-reaching contributions in organic stereochemistry and for his wise and energetic leadership in professional societies that represent the interests of chemists and of society, both in the United States and abroad.
- P. Roy Vagelos (1995): For his fundamental contributions to the understanding of fatty acid biosynthesis, cholesterol metabolism, and phospholipid metabolism, and for his leadership at Merck that led to the discovery of a number of important therapeutic and preventive agents.
- Harold S. Johnston (1993): For his pioneering efforts to point out that man-made emissions could affect the chemistry of the stratosphere, in particular the danger of the depletion by nitrogen oxide of the earth's critical and fragile ozone layer.
- Vladimir Haensel (1991): For his outstanding research in the catalytic reforming of hydrocarbons, which has greatly enhanced the economic value of our petroleum natural resources.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Staircase paradox** In mathematical analysis, the staircase paradox is a pathological example showing that limits of curves do not necessarily preserve their length. It consists of a sequence of "staircase" polygonal chains in a unit square, formed from horizontal and vertical line segments of decreasing length, so that these staircases converge uniformly to the diagonal of the square. However, each staircase has length two, while the length of the diagonal is the square root of 2, so the sequence of staircase lengths does not converge to the length of the diagonal. Martin Gardner calls this "an ancient geometrical paradox". It shows that, under uniform convergence, the length of a curve is not a continuous function of the curve. For any smooth curve, polygonal chains with segment lengths decreasing to zero, connecting consecutive vertices along the curve, always converge to the arc length. The failure of the staircase curves to converge to the correct length can be explained by the fact that some of their vertices do not lie on the diagonal. In higher dimensions, the Schwarz lantern provides an analogous example showing that polyhedral surfaces converging pointwise to a curved surface do not necessarily converge to its area, even when the vertices all lie on the surface. As well as highlighting the need for careful definitions of arc length in mathematics education, the paradox has applications in digital geometry, where it motivates methods of estimating the perimeter of pixelated shapes that do not merely sum the lengths of boundaries between pixels.
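The lengths involved are easy to verify numerically. A minimal Python sketch (function names are illustrative, not from any library) computes the length of the n-step staircase and its maximum distance from the diagonal; the length is exactly 2 for every n, while the distance tends to zero:

```python
import math

def staircase_length(n):
    """Length of the n-step staircase in the unit square:
    n horizontal runs of 1/n plus n vertical rises of 1/n."""
    return n * (1 / n) + n * (1 / n)

def max_distance_to_diagonal(n):
    """Largest distance from the n-step staircase to the diagonal y = x,
    attained at the staircase corners, each (1/n)/sqrt(2) away."""
    return (1 / n) / math.sqrt(2)

for n in (1, 10, 100, 1000):
    print(n, staircase_length(n), round(max_distance_to_diagonal(n), 6))
```

The staircases therefore converge uniformly to the diagonal (the distance goes to 0), yet their lengths stay at 2 rather than approaching sqrt(2) ≈ 1.414.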
**MicroStation** MicroStation is a CAD software platform for two- and three-dimensional design and drafting, developed and sold by Bentley Systems and used in the architectural and engineering industries. It generates 2D/3D vector graphics objects and elements and includes building information modeling (BIM) features. The current version is MicroStation CONNECT Edition. History: MicroStation was initially developed by three individual developers and sold and supported by Intergraph in the 1980s. The latest versions of the software are released solely for Microsoft Windows operating systems, but historically MicroStation was available for Macintosh platforms and a number of Unix-like operating systems. From its inception MicroStation was designed as an IGDS (Interactive Graphics Design System) file editor for the PC. Its initial development grew out of the developers' experience creating PseudoStation, a program released in 1984 and designed to replace proprietary Intergraph graphics workstations for editing DGN files with much less expensive Tektronix-compatible graphics terminals. PseudoStation, like Intergraph's IGDS program, ran on a modified version of Digital Equipment Corporation's VAX superminicomputer. In 1985, MicroStation 1.0 was released as a DGN file read-only and plot program designed to run exclusively on the IBM PC-AT personal computer. In 1987, MicroStation 2.0 was released, the first version of MicroStation to read and write DGN files. Almost two years later, MicroStation 3.0 was released, which took advantage of the increasing processing power of the PC, particularly with respect to dynamics. Intergraph MicroStation 4.0 was released in late 1990 and added many features: reference file clipping and masking, a DWG translator, fence modes, and the ability to name levels, as well as GUI enhancements.
The 1992 release of version 4 introduced the ability to write applications using the MicroStation Development Language (MDL). In 1993, MicroStation 5.0 was released. New capabilities included binary raster support, custom line styles, a settings manager, and dimension-driven design. One review said the "V5 for Power Macintosh provided a comprehensive tool set for both 2-D and 3-D CAD ... with added several truly useful features ... the high-end PowerPC-native CAD package runs on steroids." This was the last version supported on Unix and the last to run on Intergraph CLIX; it was branded both Intergraph (on CLIX) and Bentley MicroStation (on the PC), and later versions were all branded Bentley. All platforms other than the PC used 32-bit processors. In 1995, Windows 95 was released, and Bentley soon followed with a release of MicroStation for that operating system. Aside from being the first version of MicroStation not to include the version number in its name (MicroStation 95 was actually MicroStation v5.5), MicroStation 95 could be driven largely by graphic icon buttons. This version introduced a host of new features: AccuDraw, dockable dialogs, SmartLine, revised view controls, movie generation, and the ability to use two application windows (similar to previous Unix-driven Intergraph terminals). Many of these features are among the most popular used today. MicroStation 95 was also the first version of MicroStation on the PC platform to use 32-bit hardware. The last multi-platform release, MicroStation SE (SE standing for Special Edition; it was actually MicroStation 5.7), was released late in 1997 and was the first MicroStation release to include color button icons. These icons could also be made borderless, just as in Office 97. This version of MicroStation also included several features to enable more work over the internet.
This version also introduced enhanced precision and one of MicroStation's most commonly used tools, PowerSelector. MicroStation/J (a.k.a. MicroStation 7.0 or MicroStation V7) was released almost a year after SE. The J in the title stood for Java, as this version introduced a Java-enhanced version of MDL, called JMDL. Other features included QuickVision GL and a revised help system. MicroStation/J was the last version to be based upon the IGDS file format; since MicroStation/J was actually version 7, the file format became known as "V7 DGN". That file format had been used for about 20 years. With the advent of MicroStation V8 in 2001 came a new IEEE 754-based 64-bit file format, referred to as V8 DGN. Along with the new file format came many enhancements, including unlimited levels, a nearly limitless design plane, and no limits on file size. Other added features were AccuSnap, Design History, models, unlimited undo, VBA programming, .NET interoperability, True Scale, and standard definitions for working units (the new file format stores everything internally in meters but recognizes rational unit conversions, so it can determine the size of geometry); some of these features were also available in MicroStation 95 through MicroStation/J. It also included the ability to work natively with DWG files. MicroStation V8 2004 Edition (V8.5) followed nearly three years later with support for newer DWG releases, multi-snaps, PDF creation, the Standards Checker, and feature modeling. MicroStation V8 XM Edition (V8.9) was released in May 2006 and builds upon the changes made by V8. The XM edition includes a completely revised Direct3D-based graphics subsystem, PDF references, task navigation, element templates, color books, support for the PANTONE and RAL color systems, and keyboard mapping. In MicroStation V8i (V8.11) the task navigation was overhauled and the then-newest DWG format was supported.
MicroStation now contains a module for GPS data. The current version is MicroStation CONNECT Edition (V10). This version updated the application architecture to 64-bit and changed to a ribbon interface. Future versions are being delivered as (roughly) quarterly updates. File format support: Its native format is the DGN format, though it can also read and write a variety of standard CAD formats including DWG, DXF, SKP and OBJ, and produce media output in such forms as rendered images (JPEG and BMP), animations (AVI), 3D web pages in Virtual Reality Modeling Language (VRML), and Adobe Systems PDF. At its inception, MicroStation was used in the engineering and architecture fields primarily for creating construction drawings; however, it has evolved through its various versions to include advanced parametric modeling and rendering features, including Boolean solids, VUE rendering, ray tracing, path tracing, PBR materials, and keyframe animation. It can provide specialized environments for architecture, civil engineering, mapping, or plant design, among others. In 2000, Bentley made revisions to the DGN file format in V8 to add features like Digital Rights and Design History (a revision-control ability that allows reinstating previous revisions either globally or by selection) and to better support import/export of Autodesk's DWG format. Additionally, the V8 DGN file format removed many data restrictions of earlier releases, such as limited design levels and drawing area. CONNECT Edition versions continue to use the V8 DGN file format.
**Virescence** Virescence is the abnormal development of green pigmentation in plant parts that are not normally green, such as shoots or flowers (in which case it is known as floral virescence). Virescence is closely associated with phyllody (the abnormal development of flower parts into leaves) and witches' broom (the abnormal growth of a dense mass of shoots from a single point). They are often symptoms of the same disease affecting the plant, typically diseases caused by phytoplasmas. The term chloranthy is also sometimes used for floral virescence, though it is more commonly used for phyllody. The term was coined around 1825, from Latin virescere, "to become green". In English the term virescent may also refer to greenness (cf. verdant).
**Oseledets theorem** In mathematics, the multiplicative ergodic theorem, or Oseledets theorem, provides the theoretical background for computation of Lyapunov exponents of a nonlinear dynamical system. It was proved by Valery Oseledets (also spelled "Oseledec") in 1965 and reported at the International Mathematical Congress in Moscow in 1966. A conceptually different proof of the multiplicative ergodic theorem was found by M. S. Raghunathan. The theorem has been extended to semisimple Lie groups by V. A. Kaimanovich and further generalized in the works of David Ruelle, Grigory Margulis, Anders Karlsson, and François Ledrappier. Cocycles: The multiplicative ergodic theorem is stated in terms of matrix cocycles of a dynamical system. The theorem states conditions for the existence of the defining limits and describes the Lyapunov exponents. It does not address the rate of convergence. A cocycle of an autonomous dynamical system X is a map C : X × T → R^(n×n) satisfying

C(x, 0) = I_n for all x ∈ X
C(x, t + s) = C(x(t), s) C(x, t) for all x ∈ X and t, s ∈ T

where X and T (with T = Z⁺ or T = R⁺) are the phase space and the time range, respectively, of the dynamical system, and I_n is the n-dimensional unit matrix. The dimension n of the matrices C is not related to the phase space X. Examples: A prominent example of a cocycle is given by the matrix J_t in the theory of Lyapunov exponents. In this special case, the dimension n of the matrices is the same as the dimension of the manifold X. For any cocycle C, the determinant det C(x, t) is a one-dimensional cocycle. Statement of the theorem: Let μ be an ergodic invariant measure on X and C a cocycle of the dynamical system such that for each t ∈ T, the maps x ↦ log ‖C(x, t)‖ and x ↦ log ‖C(x, t)⁻¹‖ are L¹-integrable with respect to μ. Then for μ-almost all x and each non-zero vector u ∈ Rⁿ the limit

lim_(t→∞) (1/t) log( ‖C(x, t) u‖ / ‖u‖ )

exists and assumes, depending on u but not on x, up to n different values.
Statement of the theorem: These are the Lyapunov exponents. Further, if λ_1 > ... > λ_m are the different limits, then there are subspaces Rⁿ = R_1 ⊃ ... ⊃ R_m ⊃ R_(m+1) = {0}, depending on x, such that the limit is λ_i for u ∈ R_i \ R_(i+1) and i = 1, ..., m. The values of the Lyapunov exponents are invariant with respect to a wide range of coordinate transformations. Suppose that g : X → X is a one-to-one map such that ∂g/∂x and its inverse exist; then the values of the Lyapunov exponents do not change. Additive versus multiplicative ergodic theorems: Verbally, ergodicity means that time and space averages are equal, formally:

lim_(t→∞) (1/t) ∫_0^t f(x(s)) ds = (1/μ(X)) ∫_X f(x) μ(dx)

where the integrals and the limit exist. The space average (right-hand side; μ is an ergodic measure on X) is the accumulation of f(x) values weighted by μ(dx). Since addition is commutative, the accumulation of the f(x) μ(dx) values may be done in arbitrary order. In contrast, the time average (left-hand side) suggests a specific ordering of the f(x(s)) values along the trajectory. Since matrix multiplication is, in general, not commutative, accumulation of multiplied cocycle values (and limits thereof) according to C(x(t_0), t_k) = C(x(t_(k−1)), t_k − t_(k−1)) ··· C(x(t_0), t_1 − t_0), for t_k large and the steps t_i − t_(i−1) small, makes sense only for a prescribed ordering. Thus, the time average may exist (and the theorem states that it actually exists), but there is no space average counterpart. In other words, the Oseledets theorem differs from additive ergodic theorems (such as G. D. Birkhoff's and J. von Neumann's) in that it guarantees the existence of the time average, but makes no claim about the space average.
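As an illustration of the kind of limit the theorem governs, the following Python sketch (the function names and the example cocycle are invented for illustration, not taken from the literature) estimates the largest Lyapunov exponent of a discrete-time matrix cocycle by accumulating (1/k) log ‖C(x, k) u‖, renormalizing u at every step to avoid overflow:

```python
# Estimate the top Lyapunov exponent of the cocycle
#   C(x, k) = A(x(k-1)) ... A(x(1)) A(x(0))
# for a generic starting vector u. The Oseledets theorem guarantees the
# limit exists for mu-almost every x and takes at most n distinct values.
import math
import random

def largest_lyapunov(step_matrix, x0, step_state, n_steps):
    """step_matrix(x): 2x2 one-step matrix (a, b, c, d) at base state x;
    step_state(x): advances the base dynamics one step."""
    x, u = x0, [1.0, 0.0]               # generic starting vector
    log_sum = 0.0
    for _ in range(n_steps):
        a, b, c, d = step_matrix(x)
        u = [a * u[0] + b * u[1], c * u[0] + d * u[1]]
        norm = math.hypot(u[0], u[1])
        log_sum += math.log(norm)       # accumulate log growth of ||u||
        u = [u[0] / norm, u[1] / norm]  # renormalize to unit length
        x = step_state(x)
    return log_sum / n_steps

# Example cocycle: i.i.d. random choice between an expanding matrix and a
# uniform contraction (Bernoulli base dynamics).
random.seed(0)
A_EXPAND = (2.0, 1.0, 1.0, 1.0)   # top eigenvalue (3 + sqrt(5)) / 2
A_SHRINK = (0.5, 0.0, 0.0, 0.5)   # uniform contraction by 1/2
lam = largest_lyapunov(lambda x: A_EXPAND if x else A_SHRINK,
                       x0=False,
                       step_state=lambda x: random.random() < 0.5,
                       n_steps=50_000)
print(f"estimated top Lyapunov exponent: {lam:.3f}")
```

Because A_SHRINK is a scalar matrix, the expected exponent for this example is (log((3 + sqrt(5))/2) + log(1/2)) / 2 ≈ 0.135, and the estimate approaches it as n_steps grows.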
**Time-Slip** Time-Slip is a novel by Graham Dunstan Martin published in 1986. Plot summary: Time-Slip is a novel in which a new Messiah appears in a post-holocaust Scotland. Reception: Dave Langford reviewed Time-Slip for White Dwarf #78, and stated that "Martin makes it blackly clear that his protagonist's religious cure-all leads to an upswing in the evil it explains away." Reviews: Review by Brian Stableford (1986) in Fantasy Review, May 1986; review by Mark Greener (1986) in Vector 133.
**Lithium metasilicate** Lithium metasilicate is an ionic compound with the formula Li2SiO3. Preparation: Lithium metasilicate is prepared by the reaction of lithium carbonate and silicon dioxide at temperatures between 515 and 565 °C, releasing carbon dioxide: Li2CO3 + SiO2 → Li2SiO3 + CO2. Applications: The melting point of lithium metasilicate is used for the calibration of thermocouples.
**Problem-Oriented Medical Information System** The Problem-Oriented Medical Information System, or PROMIS, was a hypertext system specially designed for maintaining health care records. PROMIS was developed at the University of Vermont in 1976, primarily by Jan Schultz and Lawrence Weed, M.D. The developers of Carnegie Mellon University's ZOG system were reportedly so impressed with PROMIS that it inspired them to return to their own work. PROMIS was an interactive, touchscreen system that allowed users to access a medical record within a large body of medical knowledge. At its peak, the PROMIS system had over 60,000 frames of knowledge. PROMIS was also known for its fast responsiveness, especially for its time.
**Circular analysis** In statistics, circular analysis is the selection of the details of a data analysis using the data that are being analysed. It is often referred to as double dipping, since one uses the same data twice. Circular analysis unjustifiably inflates the apparent statistical strength of any results reported and, at the most extreme, can lead to an apparently significant result being found in data that consist only of noise. In particular, where an experiment is implemented to study a postulated effect, it is a misuse of statistics to first reduce the complete dataset by selecting a subset of the data in ways that are aligned with the effects being studied. A second misuse occurs where the performance of a fitted model or classification rule is reported as a raw result, without allowing for the effects of model selection and the tuning of parameters based on the data being analysed. Examples: At its simplest, circular analysis can include the decision to remove outliers after noticing that this might improve the analysis of an experiment. The effect can be more subtle. In functional magnetic resonance imaging (fMRI) data, for example, considerable amounts of pre-processing are often needed, and these might be applied incrementally until the analysis 'works'. Similarly, the classifiers used in a multivoxel pattern analysis of fMRI data require parameters, which could be tuned to maximise the classification accuracy. In geology, the potential for circular analysis has been noted in the case of maps of geological faults: such maps may be drawn on the basis of an assumption that faults develop and propagate in a particular way, and later be used as evidence that faults do actually develop in that way. Solutions: Careful design of the analysis one plans to perform, prior to collecting the data, means the analysis choice is not affected by the data collected.
Alternatively, one might decide to perfect the classification procedure on one or two participants, and then apply the analysis to the remaining participants' data. Regarding the selection of classification parameters, a common method is to divide the data into two sets, find the optimum parameter using one set, and then test using this parameter value on the second set. This is a standard technique used, for example, by the Princeton MVPA classification library.
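The split-and-tune procedure described above can be sketched in a few lines of Python (the toy dataset and the threshold parameter are invented for illustration): the parameter is chosen using only the first half of the data, and performance is reported only on the untouched second half. Double dipping would instead tune and report on the same set.

```python
import random

random.seed(42)
# Toy dataset: each item is (score, label); the score distribution shifts
# by 1.0 when the label is True.
labels = [random.random() < 0.5 for _ in range(2000)]
data = [(random.gauss(1.0 if label else 0.0, 1.0), label) for label in labels]

# Split once, before any tuning: the test half is never used for selection.
train, test = data[:1000], data[1000:]

def accuracy(threshold, subset):
    """Fraction of items whose (score > threshold) matches the label."""
    return sum((score > threshold) == label
               for score, label in subset) / len(subset)

# Tune the classification threshold on the training half only...
best_threshold = max((t / 10 for t in range(-10, 21)),
                     key=lambda t: accuracy(t, train))

# ...and report accuracy on the held-out half.
print("tuned threshold:", best_threshold)
print("held-out accuracy:", accuracy(best_threshold, test))
```

Reporting `accuracy(best_threshold, train)` instead would be the circular version: the same data would pick the parameter and grade it.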
**Sennichite** Sennichite (千日手, lit. "moves (for) a thousand days") or repetition draw is a rule in shogi stating that the game ends in a draw if the same position is repeated four times during a game, as long as the repetitions do not involve checks. Explanation: If the same game position occurs four times with the same player to move and the same pieces in hand for each player, then the game ends in sennichite, provided the repetitions are not caused by perpetual check. (Perpetual check is an illegal move, which ends the game in a loss in tournament play.) In professional shogi, a sennichite outcome is not the final result of a match, as draws essentially do not count: a match can only be decided through wins. This is a significant difference from Western chess, in which a player can play specifically for draws to gain half points. In the case of sennichite, professional shogi players must immediately play a subsequent game (or as many games as necessary) with sides reversed in order to obtain a true win outcome. (That is, the player who was White becomes Black, and vice versa.) Also, depending on the tournament, professional players play the subsequent game in the remainder of the allowed game time. Usually there is a break period between the replay games. Sennichite is rare in professional shogi, occurring in about 1–2% of games, and even rarer in amateur games. In professional shogi, sennichite usually occurs in the opening as certain positions are reached that are theoretically disadvantageous for both sides (zugzwang). In amateur shogi, sennichite tends to occur in the middle game or endgame as a result of poor positions. Strategy: Aiming for sennichite may be a possible professional strategy for the White player in order to play the second game as Black, which has a slight statistical advantage and/or greater initiative.
For instance, Bishop Exchange Fourth File Rook is a passive strategy for White with the goal of a sennichite (as it requires two tempo losses: swinging the rook and trading the bishops), while it is a very aggressive strategy if played by Black. History: Pre-1983 sennichite: The sennichite rule was previously defined by a sequence of moves (not a position) repeated three times. The rule was changed to its current form in May 1983. Historical sennichite: There was yet another repetition rule used historically by rule codifier Sōko Ōhashi, who was the second Meijin from 1635 until his death in 1654: the player who started a repetition lost the game. Example: Repetition draws have historically been associated with the traditional Double Fortress opening (especially the Complete Fortress formation). Watanabe vs Habu, 2012: A surprising repetition draw occurred in the endgame of a game between Akira Watanabe (Black) and Yoshiharu Habu on October 3, 2012. The opening was Third File Rook. After the 121st move (= 61st move in Western notation), White (Habu) found himself in a threatmate situation where Black (Watanabe) had a possible 9-move mate sequence of 62.R*83 Gx83 63.Sx83+ Kx83 64.R*82 Kx74 65.N*66 K-63 (or K-64) 66.G*54 [mate]. In order to prevent Black's future knight drop (N*66), White dropped a silver on the 66 square (61...S*66), forcing Black to capture it with his pawn (62.Px66) and leaving the 66 square occupied and unable to accept a knight drop. After this, White began setting up the repetition sequence starting with 62...G*89. Dropping the gold on the 89 square puts Black in his own threatmate situation, as White is threatening the mate-in-one 63...Bx88+ [mate] on his next move. Therefore, Black defends the 88 square in the only way he can, by dropping a rook on the seventh file (63.R*78). He cannot remove White's gold with his own gold on 88 (63.Gx89), since that gold is pinned by White's bishop on 79.
White then trades his golds via the 88 square (63...Gx88). This move is actually forced, as Black is threatening to create a 3-move brinkmate sequence via 64.S*82 Gx82 65.Nx82+; and since White does not have any checkmate sequence available to him, after this Black would have a mate-in-one with +N83 (or G*83 or R*83). Thus, White must defend against the brinkmate by creating another threatmate against Black with 63...Gx88. This threatens the 3-move mate sequence 64...Gx78 65.K-98 B-88+ [mate]. Black, of course, must defend against the threatmate by capturing White's gold with his rook (64.Rx88). After White's gold is removed, the board position is very similar to the position after the 123rd move (the first diagram shown above). The only difference is that instead of a gold on the 88 square, Black has a rook on 88. However, this is sufficiently similar to force Black into a sennichite sequence, in that Black's rook, like the previous gold, cannot capture White's bishop on 79 and is also pinned by that same bishop. And since White still has a gold available to drop, he drops a gold again on the 89 square (64...G*89). This creates another threatmate (threatening again the same mate-in-one ...Bx88+). Black must again stop the threatmate by defending the 88 square, this time with a gold (65.G*78). Similarly, White captures the rook on 88 with his gold, creating the same threatmate as above (65...Gx88). It is here, on the 130th move, that the sennichite sequence technically starts. Black must again remove the threatmate by capturing White's gold (66.Gx88). After these eight moves, we have a nearly identical position to the position after the 122nd move (62.Px66). However, there is a small difference: White now has a rook in hand instead of the two golds, and Black has a gold in hand instead of two rooks.
Thus, although very similar (and functionally the same in terms of game play), this is not a repetition of the board position at move 122, which is why the actual repetition sequence starts at move 130. After 66...G*89 67.G*78 Gx88, there is a second repeat of the position at move 130. After 68.Gx88 G*89 69.G*78 Gx88, there is a third repetition. And after 70.Gx88 G*89 71.G*78 Gx88, White makes the fourth repetition, leading to sennichite. After this, according to professional shogi rules, a new game was started with Habu playing Black and Watanabe playing White.
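The bookkeeping the rule requires can be sketched as follows (a toy Python illustration, not an actual shogi engine; the board encoding is a placeholder): a position counts as repeated only if the board, the side to move, and both players' pieces in hand all match, and sennichite is declared on the fourth occurrence.

```python
from collections import Counter

class SennichiteTracker:
    """Count occurrences of full game positions and flag fourfold repetition."""

    def __init__(self):
        self.counts = Counter()

    def record(self, board, side_to_move, black_hand, white_hand):
        """board: any hashable board encoding; hands: iterables of piece names.
        Returns True once this exact position has occurred four times."""
        key = (board, side_to_move,
               tuple(sorted(black_hand)), tuple(sorted(white_hand)))
        self.counts[key] += 1
        return self.counts[key] >= 4

tracker = SennichiteTracker()
for _ in range(4):
    drawn = tracker.record("some-board-encoding", "black", ["gold"], ["rook"])
print("sennichite:", drawn)   # True on the fourth occurrence
```

This also reflects why the Watanabe-Habu position at move 130 (rook in hand vs. golds in hand) did not repeat the position at move 122: a change in the pieces in hand produces a different key, so the count starts over.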
**Anti-tank mine** In anti-tank warfare, an anti-tank mine (abbreviated "AT mine") is a type of land mine designed to damage or destroy vehicles, including tanks and armored fighting vehicles. Compared to anti-personnel mines, anti-tank mines typically have a much larger explosive charge and a fuze designed to be triggered by vehicles or, in some cases, remotely or by tampering with the mine. History: First World War: The first anti-tank mines were improvised during the First World War as a countermeasure against the first tanks, introduced by the British towards the end of the war. Initially they were nothing more than a buried high-explosive shell or mortar bomb with its fuze upright. Later, purpose-built mines were developed, including the Flachmine 17, which was simply a wooden box packed with explosives and triggered either remotely or by a pressure fuze. By the end of the war, the Germans had developed row mining techniques, and mines accounted for 15% of U.S. tank casualties during the Battle of Saint-Mihiel, the Third Battle of the Aisne, the Battle of the Selle, and the Meuse-Argonne Offensive. Inter-war period: The Soviet Union began developing mines in the early 1920s, and in 1924 produced its first anti-tank mine, the EZ mine. The mine, developed by Yegorov and Zelinskiy, had a 1 kg charge, enough to break the tracks of contemporary tanks. Meanwhile, in Germany, defeat spurred the development of anti-tank mines, with the first truly modern mine, the Tellermine 29, entering service in 1929. It was a disc-shaped device approximately 30 cm across filled with about 5 kg of high explosives. A second mine, the Tellermine 35, was developed in 1935. Anti-tank mines were used by both sides during the Spanish Civil War. Notably, Republican forces lifted mines placed by Nationalist forces and used them against the Nationalists, which spurred the development of anti-handling devices for anti-tank mines.
The Winter War between the Soviet Union and Finland also saw widespread use of anti-tank mines. Finnish forces, facing a general shortage of anti-tank weapons, could exploit the predictable movements of motorized units imposed by difficult terrain and weather conditions. Second World War: The German Tellermine was a purpose-built anti-tank mine developed during the period between the two world wars, the first model being introduced in 1929. Some variants were rectangular, but in all cases the outer casing served only as a container for the explosives and fuze, without being used to destructive effect (e.g. shrapnel). The Tellermine was the prototypical anti-tank mine, with many elements of its design emulated in the Pignone P-1, NR 25, and M6 mine (among others). Because of its rather high operating pressure, a vehicle would need to pass directly over the mine to set it off; and since the tracks represent only about 20% of a tank's width, the pressure fuze had a limited area of effect. As one source has it: "Since they were pressure-detonated, these early anti-tank mines typically did most of their damage to a tank's treads, leaving its crew unharmed and its guns still operational but immobilised and vulnerable to aircraft and enemy anti-tank weapons ... During World War II they (the Wehrmacht) began using a mine with a tilt-rod fuze, a thin rod standing approximately two feet up from the center of the charge and nearly impossible to see after the mine had been buried. As a tank passed over the mine, the rod was pushed forward, causing the charge to detonate directly beneath it. The blast often killed the crew and sometimes exploded onboard ammunition.
Now that tank crews were directly at risk, they were less likely to plow through a minefield." Although other measures such as satchel charges, sticky bombs, and bombs designed to adhere magnetically to tanks were developed, these do not fall within the category of land mines, as they are not buried and detonated remotely or by pressure. The Hawkins mine was a British anti-tank device that could be employed as a mine laid on the road surface for a tank to run over, setting off a crush fuze, or thrown at the tank, in which case a timer fuze was used. Shaped-charge devices like the Hohl-Sprung mine 4672 were also developed by Germany later in the war, although these did not see widespread use. The most advanced German anti-tank mine of the war was the minimal-metal Topfmine. In contrast to dinner-plate mines such as the German Tellermine were bar mines such as the German Riegel mine 43 and the Italian B-2 mine. These were long mines designed to increase the probability of a vehicle triggering them; the B-2 consisted of multiple small shaped charges along its length, designed to ensure a mobility kill against enemy vehicles by destroying their tracks. This form of mine was the inspiration for the British L9 bar mine. Modern: Several advances have been made in the development of modern anti-tank mines, including: more effective explosive payloads (different explosive compounds and shaped-charge effects); use of non-ferrous materials, making them harder to detect; new methods of deployment (from aircraft or with artillery); more sophisticated fuzes (e.g. triggered by magnetic or seismic effects, which make a mine blast-resistant, or which ignore the first target vehicle to drive over it and therefore can be used against convoys or mine rollers); and sophisticated anti-handling devices to prevent or discourage tampering or removal.
Design: More modern anti-tank mines are usually more advanced than simple containers of explosives detonated remotely or by the vehicle's pressure. The biggest advances were made in the following areas: power of the explosives (explosives such as RDX); shaped charges to increase the armour-piercing effect; advanced dispersal systems; and more advanced or specific detonation triggers. Most modern mine bodies or casings are made of plastic to avoid easy detection. They feature combinations of pressure-activated or magnetically activated detonators to ensure that they are only triggered by vehicles. Dispersal systems: There are several systems for dispersing mines to quickly cover wide areas, as opposed to a soldier laying each one individually. These systems can take the form of cluster bombs or be artillery-fired. Cluster bombs contain several mines each, which may include a mixture of anti-tank and anti-personnel mines. When the cluster bomb reaches a preset altitude it disperses the mines over a wide area. Some anti-tank mines are designed to be fired by artillery, arming themselves once they impact the target area. Off-route mines: Off-route mines are designed to be effective when detonated next to a vehicle rather than underneath it. They are useful in cases where the ground or surface is not suitable for burying or concealing a mine. They normally employ a Misznay–Schardin charge to fire a penetrating slug through the target's armour. This self-forging projectile principle has been used for some French and Soviet off-route mines and has earned infamy as an improvised explosive device (IED) technique in Israel and especially Iraq. Off-route mines: Due to the critical standoff necessary for penetration and the development of standoff neutralization technologies, shaped-charge off-route mines using the Munroe effect are more rarely encountered, though the British/French/German ARGES mine with a tandem warhead is an example of one of the more successful. 
The term "off-route mine" refers to purpose-designed and manufactured anti-tank mines. Explosively formed projectiles (EFPs) are one type of IED that was used in Iraq, but most "home-made" IEDs are not employed in this manner. Countermeasures: The most effective countermeasure deployed against minefields is mine clearing, using either explosive or mechanical methods. Explosive methods, such as the Giant Viper and the SADF Plofadder 160 AT, involve laying explosives across a minefield, either by propelling the charges across the field with rockets or by dropping them from aircraft, and then detonating the explosive, clearing a path. Mechanical methods include plowing and pressure-forced detonation. In plowing, a specially designed plow attached to the front end of a heavily armored tank is used to push aside the earth and any mines embedded in it, clearing a path as wide as the pushing tank. In pressure-forced detonation, a heavily armored tank pushes a heavy spherical or cylindrical solid metal roller ahead of it, causing mines to detonate. Countermeasures: There are also several ways of making vehicles resistant to the effects of a mine detonation to reduce the chance of crew injury. In the case of a mine's blast effect, this can be done by absorbing the blast energy, deflecting it away from the vehicle hull, or increasing the distance between the crew and the points where the wheels touch the ground, where any detonations are likely to centre. Countermeasures: Another way to protect a vehicle from mines was to attach wooden planks to the sides of armored vehicles to prevent enemy soldiers from attaching magnetic mines. In the close combat on Iwo Jima, for example, some tanks were protected in this manner. A Japanese soldier running up from a concealed foxhole would not be able to stick a magnetic mine on the side of a tank encased in wood. 
Countermeasures: A simple and highly effective technique to protect the occupants of a wheeled vehicle is to fill the tires with water, which absorbs and deflects the mine's blast energy. Steel plates between the cabin and the wheels can absorb the energy, and their effectiveness is enhanced if they can be angled to deflect it away from the cabin. Increasing the distance between the wheels and the passenger cabin, as is done on the South African Casspir personnel carrier, is an effective technique, although such a vehicle has problems with mobility and ease of driving. A V-hull vehicle uses a wedge-shaped passenger cabin, with the thin edge of the wedge downwards, to divert blast energy away from the occupants. Improvised measures such as sandbags on the vehicle floor or bulletproof vests placed on the floor may offer a small measure of protection against small mines. Countermeasures: Steel plates on the floor and sides and armoured glass will protect the occupants from fragments. Mounting seats from the sides or roof of the vehicle, rather than the floor, will help protect occupants from shocks transmitted through the structure of the vehicle, and a four-point seat harness will minimise the chance of injury if the vehicle is flung onto its side or roof: a mine may throw a vehicle 5–10 m from the detonation point. Countermeasures: Police and military forces can use a robot to remove mines from an area. Combat use: Anti-tank mines have played an important role in most wars fought since they were first used. Combat use: Second World War Anti-tank mines played a major role on the Eastern Front, where they were used in huge quantities by Soviet troops. The most common included the TM-41, TM-44, TMSB, YAM-5, and AKS. In the Battle of Kursk, combat engineers laid a staggering 503,663 anti-tank mines, for a density of 1,500 mines per kilometer, four times greater than in the Battle of Moscow. 
Combat use: Furthermore, mobile detachments were tasked with laying more mines directly in the path of advancing enemy tanks. According to one source: "... Each artillery battalion and, in some cases, each artillery battery, had a mobile reserve of 5 to 8 combat engineers equipped with 4 to 5 mines each. Their function was to mine unguarded tank approaches after the direction of the enemy attack had been definitely ascertained. These mines proved highly effective in stopping and even in destroying many enemy tanks." The Wehrmacht also relied heavily on anti-tank mines to defend the Atlantic Wall, having planted six million mines of all types in northern France alone. Mines were usually laid in staggered rows about 500 yards (460 meters) deep. Along with the anti-personnel types, there were various models of Tellermines, Topfmines, and Riegel mines. On the Western Front, anti-tank mines were responsible for 20–22% of Allied tank losses. Since the majority of these mines were equipped with pressure fuzes (rather than tilt rods), tanks were more often crippled than destroyed outright. Combat use: Vietnam War During the Vietnam War, both regular NVA and Viet Cong forces used anti-tank mines of Soviet, Chinese or local manufacture. Anti-tank mines were also used extensively in Cambodia and along the Thai border, planted by Pol Pot's Maoist guerrillas and by the Vietnamese army, which invaded Cambodia in 1979 to topple the Khmer Rouge. Millions of these mines remain in the area despite clearing efforts, and they are estimated to cause hundreds of deaths annually. Combat use: Southern Africa Conflicts in southern Africa since the 1960s have often involved Soviet-, United States- or South African-supported irregular armies or fighters engaged in guerrilla warfare. 
What makes these conflicts significant to the study of anti-tank mines is that they featured the widespread use of these mines in situations other than conventional warfare (or static minefields) and also saw the development of effective mine-resistant vehicles. As a result, both Angola and Mozambique are littered with such devices to this day (as is Cambodia). Combat use: In the Angolan Civil War and South African Border War, which covered the vast, sparsely populated areas of southern Angola and northern Namibia, it was easy for small groups to infiltrate, lay their mines on roads, and escape again, often undetected. The anti-tank mines were most often placed on public roads used by civilian and military vehicles and had a great psychological effect. Combat use: Mines were often laid in complex arrangements. One tactic was to lay multiple mines on top of each other to increase the blast effect. Another common tactic was to link together several mines placed within a few metres of each other, so that all would detonate when any one was triggered. Combat use: It was because of this threat that some of the first successful mine-protected vehicles were developed by South African military and police forces. Chief amongst these were the Buffel and Casspir armoured personnel carriers and the Ratel armoured fighting vehicle. They employed V-shaped hulls that deflected the blast force away from the occupants. In most cases occupants survived anti-tank mine detonations with only minor injuries. The vehicles themselves could often be repaired by replacing the wheels or some drivetrain components, which were designed to be modular and replaceable for exactly this reason. Combat use: Most countries involved in Middle Eastern peacekeeping missions deploy modern developments of these vehicles, such as the RG-31 (Canada, United Arab Emirates, United States) and RG-32 (Sweden).
**Semi-invariant of a quiver** Semi-invariant of a quiver: In mathematics, given a quiver Q with set of vertices Q0 and set of arrows Q1, a representation of Q assigns a vector space Vi to each vertex and a linear map V(α): V(s(α)) → V(t(α)) to each arrow α, where s(α), t(α) are, respectively, the starting and the ending vertices of α. Given an element d ∈ N^Q0, the set of representations of Q with dim Vi = d(i) for each i has a vector space structure. Semi-invariant of a quiver: It is naturally endowed with an action of the algebraic group Π_{i∈Q0} GL(d(i)) by simultaneous base change. This action induces one on the ring of functions. The functions which are invariant up to a character of the group are called semi-invariants. They form a ring whose structure reflects representation-theoretical properties of the quiver. Definitions: Let Q = (Q0, Q1, s, t) be a quiver. Consider a dimension vector d, that is, an element of N^Q0. The set of d-dimensional representations is Rep(Q, d) := {(Vi, V(α)) : dim Vi = d(i)}. Once bases are fixed for each vector space Vi, this can be identified with the vector space ⊕_{α∈Q1} Hom_k(k^{d(s(α))}, k^{d(t(α))}). This affine variety is endowed with an action of the algebraic group GL(d) := Π_{i∈Q0} GL(d(i)) by simultaneous base change on each vertex: (g_i)_i · (Vi, V(α)) := (Vi, g_{t(α)} · V(α) · g_{s(α)}^{−1}). By definition, two modules M, N ∈ Rep(Q, d) are isomorphic if and only if their GL(d)-orbits coincide. Definitions: We have an induced action on the coordinate ring k[Rep(Q, d)], defined by (g · f)(−) := f(g^{−1} · −). Polynomial invariants An element f ∈ k[Rep(Q, d)] is called an invariant (with respect to GL(d)) if g · f = f for any g ∈ GL(d). The set of invariants I(Q, d) := k[Rep(Q, d)]^{GL(d)} is in general a subalgebra of k[Rep(Q, d)]. Definitions: Example Consider the 1-loop quiver Q (a single vertex with one arrow from the vertex to itself). For d = (n) the representation space is End(k^n) and the action of GL(n) is given by usual conjugation. 
The invariant ring is I(Q, d) = k[c1, …, cn], where the ci are defined, for any A ∈ End(k^n), as the coefficients of the characteristic polynomial det(tI − A) = t^n − c1(A) t^{n−1} + ⋯ + (−1)^n cn(A). Semi-invariants In case Q has neither loops nor oriented cycles, the variety Rep(Q, d) has a unique closed orbit, corresponding to the unique d-dimensional semi-simple representation, and therefore any invariant function is constant. Definitions: Elements which are invariant with respect to the subgroup SL(d) := Π_{i∈Q0} SL(d(i)) form a ring, SI(Q, d), with a richer structure, called the ring of semi-invariants. It decomposes as SI(Q, d) = ⊕_{σ∈Z^Q0} SI(Q, d)_σ, where SI(Q, d)_σ := {f ∈ k[Rep(Q, d)] : g · f = Π_{i∈Q0} det(g_i)^{σ(i)} f, ∀g ∈ GL(d)}. A function belonging to SI(Q, d)_σ is called a semi-invariant of weight σ. Definitions: Example Consider the quiver Q: 1 →α 2. Fix d = (n, n). In this case Rep(Q, (n, n)) can be identified with the set M(n) of square matrices of size n. The function defined, for any B ∈ Rep(Q, (n, n)), as det^u(B(α)) is a semi-invariant of weight (u, −u); in fact (g · det^u)(B(α)) = det^u(g2^{−1} · B(α) · g1) = det(g1)^u det(g2)^{−u} det^u(B(α)). The ring of semi-invariants is the polynomial ring generated by det, i.e. SI(Q, (n, n)) = k[det]. Characterization of representation type through semi-invariant theory: For quivers of finite representation type, that is to say Dynkin quivers, the variety Rep(Q, d) admits an open dense orbit. In other words, it is a prehomogeneous vector space. Sato and Kimura described the ring of semi-invariants in this case. Sato–Kimura theorem Let Q be a Dynkin quiver and d a dimension vector. Let Σ be the set of weights σ such that there exists fσ ∈ SI(Q, d)_σ non-zero and irreducible. Then the following properties hold. i) For every weight σ we have dim_k SI(Q, d)_σ ≤ 1. ii) All weights in Σ are linearly independent over the rational numbers. iii) SI(Q, d) is the polynomial ring generated by the fσ's, σ ∈ Σ. Characterization of representation type through semi-invariant theory: Furthermore, we have an interpretation for the generators of this polynomial algebra. Let O be the open orbit; then Rep(Q, d) \ O = Z1 ∪ ... 
∪ Zt, where each Zi is closed and irreducible. We can assume that the Zi are arranged in increasing order with respect to codimension, so that the first l have codimension one; each such Zi is the zero set of an irreducible polynomial fi, and then SI(Q, d) = k[f1, ..., fl]. Characterization of representation type through semi-invariant theory: Example In the example above, the action of GL((n, n)) has an open orbit on M(n) consisting of the invertible matrices, and we immediately recover SI(Q, (n, n)) = k[det]. Skowronski–Weyman provided a geometric characterization of the class of tame quivers (i.e. Dynkin and Euclidean quivers) in terms of semi-invariants. Skowronski–Weyman theorem Let Q be a finite connected quiver. The following are equivalent: i) Q is either a Dynkin quiver or a Euclidean quiver. ii) For each dimension vector d, the algebra SI(Q, d) is a complete intersection. iii) For each dimension vector d, the algebra SI(Q, d) is either a polynomial algebra or a hypersurface. Example Consider the Euclidean quiver Q of type D̃4 (four vertices each joined by an arrow to a central fifth vertex). Pick the dimension vector d = (1, 1, 1, 1, 2). An element V ∈ Rep(Q, d) can be identified with a 4-tuple (A1, A2, A3, A4) of matrices in M(1, 2). Call Di,j the function defined on each V as det(Ai, Aj), the determinant of the 2 × 2 matrix with rows Ai and Aj. Such functions generate the ring of semi-invariants, which is the hypersurface SI(Q, d) = k[D1,2, D3,4, D1,4, D2,3, D1,3, D2,4] / (D1,2 D3,4 + D1,4 D2,3 − D1,3 D2,4).
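The relation among the Di,j is the classical Plücker relation for 2 × 2 minors, and it can be checked numerically. The short sketch below (an illustration, not part of the original text; vertices are numbered 0–3 for convenience) verifies the relation on a random representation, and also checks that a Di,j transforms by a determinant character under simultaneous base change, as the definition of a semi-invariant requires:

```python
import numpy as np

rng = np.random.default_rng(0)

def D(A, i, j):
    # D_{i,j}: determinant of the 2x2 matrix whose rows are A_i and A_j
    return np.linalg.det(np.vstack([A[i], A[j]]))

# Random representation: four 1x2 matrices A_0..A_3
A = [rng.standard_normal((1, 2)) for _ in range(4)]

# The hypersurface relation D12*D34 + D14*D23 - D13*D24 = 0
rel = D(A, 0, 1) * D(A, 2, 3) + D(A, 0, 3) * D(A, 1, 2) - D(A, 0, 2) * D(A, 1, 3)
assert np.isclose(rel, 0.0)

# Semi-invariance: base change by scalars t_i in GL(1) at the outer vertices
# and g in GL(2) at the central vertex sends A_i to t_i * A_i * g^{-1},
# so D_{i,j} is rescaled by the character t_i * t_j * det(g)^{-1}.
t = rng.standard_normal(4)
g = rng.standard_normal((2, 2))
B = [t[i] * A[i] @ np.linalg.inv(g) for i in range(4)]
assert np.isclose(D(B, 0, 1), t[0] * t[1] / np.linalg.det(g) * D(A, 0, 1))
```

Any choice of the four row vectors satisfies the relation identically, which is why the quotient by this single polynomial describes the whole ring.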
**Cajun accordion** Cajun accordion: A Cajun accordion (in Cajun French: accordéon), also known as a squeezebox, is a single-row diatonic button accordion used for playing Cajun music. History: Many different accordions were developed in Europe throughout the 19th century and exported worldwide. Accordions were brought to Acadiana in the 1890s and became popular by the early 1900s, eventually becoming a staple of Cajun music. History: Many of the German factories producing diatonic accordions for the United States market were destroyed during World War II. As a result, some Cajuns, such as Sidney Brown, began producing their own instruments, based on the popular one-row German accordions but with modifications to suit the nuances of the Cajun playing style. Since the end of World War II, there has been a surge in the number of Cajun accordion makers in Louisiana, as well as several in Texas. Construction: The Cajun accordion is generally defined as a single-row diatonic accordion, as compared to the multiple-row instruments commonly used in Irish, Italian, polka, and other styles of music. The Cajun accordion has multiple reeds for every button, and the number of reeds that sound is controlled by four stops or knobs. The standard number of melody buttons is ten, with two buttons on the left-hand side: one for the bass note and one for the chord. The tonic note and major chord of the key sound when the bellows are pushed, and the dominant note and major chord when pulled (for instance, C major and G major respectively in the key of C). Louisiana-constructed accordions are usually built in small backyard shops, like Marc Savoy's Acadian brand and Larry Miller's Bon Cajun brand. Clarence "Junior" Martin of Lafayette, Louisiana is a master craftsman who also builds accordions in his shop. Characteristics: The most common tuning is the key of C, although the key of D is also relatively common. Some rarer accordions are constructed in the key of B flat. 
Cajun accordions are traditionally tuned in just intonation. Notable players: Although the instrument is called a Cajun accordion, zydeco and Creole musicians also play it, with a zydeco or Creole sound respectively. Each musician below is considered important in influencing accordion technique and image. Nathan Abshire Alphonse "Bois Sec" Ardoin Amédé Ardoin Lee Benoit Jackie Caillier Boozoo Chavis Geno Delafose John Delafose Dewey Balfa Joe Falcon Iry LeJeune Steve Riley Aldus Roger Marc Savoy Jo-El Sonnier Wayne Toups Lawrence Walker William LaBouve Terrance Simien and the Zydeco Experience Manufacturers and builders: Hohner (Germany) Andre Michot (Louisiana, United States) Larry Miller (Louisiana, United States) Marc Savoy (Louisiana, United States) Greg Mouton (Louisiana, United States)
**Input hypothesis** Input hypothesis: The input hypothesis, also known as the monitor model, is a group of five hypotheses of second-language acquisition developed by the linguist Stephen Krashen in the 1970s and 1980s. Krashen originally formulated the input hypothesis as just one of the five hypotheses, but over time the term has come to refer to the five hypotheses as a group. The hypotheses are the input hypothesis, the acquisition–learning hypothesis, the monitor hypothesis, the natural order hypothesis and the affective filter hypothesis. The input hypothesis was first published in 1977. The hypotheses put primary importance on the comprehensible input (CI) that language learners are exposed to. Understanding spoken and written language input is seen as the only mechanism that results in the increase of underlying linguistic competence, and language output is not seen as having any effect on learners' ability. Furthermore, Krashen claimed that linguistic competence is only advanced when language is subconsciously acquired, and that conscious learning cannot be used as a source of spontaneous language production. Finally, learning is seen to be heavily dependent on the mood of the learner, with learning being impaired if the learner is under stress or does not want to learn the language. Input hypothesis: Krashen's hypotheses have been influential in language education, particularly in the United States, but have received criticism from some academics. Two of the main criticisms state that the hypotheses are untestable, and that they assume a degree of separation between acquisition and learning that has not been proven to exist. Overview: The five hypotheses that Krashen proposed are as follows: The input hypothesis. This states that learners progress in their knowledge of the language when they comprehend language input that is slightly more advanced than their current level. 
Krashen called this level of input "i+1", where "i" is the learner's interlanguage and "+1" is the next stage of language acquisition. The acquisition–learning hypothesis claims that there is a strict separation between acquisition and learning; Krashen saw acquisition as a purely subconscious process and learning as a conscious process, and claimed that improvement in language ability was only dependent upon acquisition and never on learning. The monitor hypothesis states that consciously learned language can only be used to monitor language output; it can never be the source of spontaneous speech. The natural order hypothesis states that language is acquired in a particular order, and that this order does not change between learners, and is not affected by explicit instruction. The affective filter hypothesis. This states that learners' ability to acquire language is constrained if they are experiencing negative emotions such as fear or embarrassment. At such times the affective filter is said to be "up." Input hypothesis: If i represents previously acquired linguistic competence and extra-linguistic knowledge, the hypothesis claims that we move from i to i+1 by understanding input that contains i+1. Extra-linguistic knowledge includes our knowledge of the world and of the situation, that is, the context. The +1 represents 'the next increment' of new knowledge or language structure that will be within the learner's capacity to acquire. 'Comprehensible input' is the crucial and necessary ingredient for the acquisition of language. The comprehensible input hypothesis can be restated in terms of the natural order hypothesis. For example, if we acquire the rules of language in a linear order (1, 2, 3...), then i represents the last rule or language form learned, and i+1 is the next structure that should be learned. It must be stressed, however, that not just any input is sufficient; the input received must be comprehensible. 
According to Krashen, there are three corollaries to his theory. Input hypothesis: Corollaries of the input hypothesis Talking (output) is not practicing. Krashen stresses yet again that speaking in the target language does not result in language acquisition. Although speaking can indirectly assist in language acquisition, the ability to speak is not the cause of language learning or acquisition. Instead, comprehensible output is the effect of language acquisition. When enough comprehensible input is provided, i+1 is present. If language models and teachers provide enough comprehensible input, then the structures that acquirers are ready to learn will be present in that input. According to Krashen, this is a better method of developing grammatical accuracy than direct grammar teaching. The teaching order is not based on the natural order. Instead, students will acquire the language in a natural order by receiving comprehensible input. Acquisition-learning hypothesis: In modern linguistics, there are many theories as to how humans are able to develop language ability. According to Stephen Krashen's acquisition-learning hypothesis, there are two independent ways in which we develop our linguistic skills: acquisition and learning. This theory is at the core of modern language acquisition theory, and is perhaps the most fundamental of Krashen's theories. Acquisition-learning hypothesis: Acquisition of language is a natural, intuitive, and subconscious process of which individuals need not be aware. One is unaware of the process as it is happening and, when the new knowledge is acquired, the acquirer generally does not realize that they possess any new knowledge. According to Krashen, both adults and children can subconsciously acquire language, and either written or oral language can be acquired. This process is similar to the process that children undergo when learning their native language. 
Acquisition requires meaningful interaction in the target language, during which the acquirer is focused on meaning rather than form. Learning a language, on the other hand, is a conscious process, much like what one experiences in school. New knowledge or language forms are represented consciously in the learner's mind, frequently in the form of language "rules" and "grammar", and the process often involves error correction. Language learning involves formal instruction and, according to Krashen, is less effective than acquisition. Learning in this sense is conception or conceptualisation: instead of learning a language itself, students learn an abstract, conceptual model of a language, a "theory" about a language (a grammar). Monitor hypothesis: The monitor hypothesis asserts that a learner's learned system acts as a monitor to what they are producing. In other words, while only the acquired system is able to produce spontaneous speech, the learned system is used to check what is being spoken. Monitor hypothesis: Before the learner produces an utterance, he or she internally scans it for errors, and uses the learned system to make corrections. Self-correction occurs when the learner uses the Monitor to correct a sentence after it is uttered. According to the hypothesis, such self-monitoring and self-correction are the only functions of conscious language learning. The Monitor model then predicts faster initial progress by adults than children, as adults use this 'monitor' when producing L2 (target language) utterances before having acquired the ability for natural performance, and adult learners will contribute more to conversations earlier than children. 
Monitor hypothesis: Three conditions for use of the monitor According to Krashen, for the Monitor to be successfully used, three conditions must be met: The acquirer/learner must know the rule. This is a very difficult condition to meet because it means that the speaker must have had explicit instruction on the language form that he or she is trying to produce. The acquirer must be focused on correctness. He or she must be thinking about form, and it is difficult to focus on meaning and form at the same time. The acquirer/learner must have time to use the monitor. Using the monitor requires the speaker to slow down and focus on form. Difficulties using the monitor There are many difficulties with the use of the monitor, making the monitor rather weak as a language tool. Knowing the rule: this is a difficult condition to meet, because even the best students do not learn every rule that is taught, cannot remember every rule they have learned, and can't always correctly apply the rules they do remember. Furthermore, not every rule of a language is always included in a text or taught by the teacher. Monitor hypothesis: Having time to use the monitor: there is a price that is paid for the use of the monitor: the speaker is then focused on form rather than meaning, resulting in the production and exchange of less information, thus slowing the flow of conversation. Some speakers over-monitor to the point that the conversation is painfully slow and sometimes difficult to listen to. Monitor hypothesis: The rules of language make up only a small portion of our language competence: Acquisition does not provide 100% language competence. There is often a small portion of grammar, punctuation, and spelling that even the most proficient native speakers may not acquire. 
While it is important to learn these aspects of language, since writing is the only form that requires 100% competence, these aspects of language make up only a small portion of our language competence. Due to these difficulties, Krashen recommends using the monitor at times when it does not interfere with communication, such as while writing. Natural order hypothesis: The natural order hypothesis states that all learners acquire a language in roughly the same order. This order is not dependent on the ease with which a particular language feature can be taught; some features, such as third-person "-s" ("he runs") are easy to teach in a classroom setting, but are not typically acquired until the later stages of language acquisition. This hypothesis was based on the morpheme studies by Dulay and Burt, which found that certain morphemes were predictably learned before others during the course of second-language acquisition. Affective filter hypothesis: The affective filter is an impediment to learning or acquisition caused by negative emotional ("affective") responses to one's environment. It is a hypothesis of second-language acquisition theory, and a field of interest in educational psychology and general education. Affective filter hypothesis: According to the affective filter hypothesis, certain emotions, such as anxiety, self-doubt, and mere boredom interfere with the process of acquiring a second language. They function as a filter between the speaker and the listener that reduces the amount of language input the listener is able to understand. These negative emotions prevent efficient processing of the language input. The hypothesis further states that the blockage can be reduced by sparking interest, providing low-anxiety environments, and bolstering the learner's self-esteem. Affective filter hypothesis: According to Krashen (1982), there are ways to lower the affective filter. 
One is allowing for a silent period (not expecting the student to speak before they have received an adequate amount of comprehensible input according to their individual needs). A teacher needs to be aware of the student's home life, as this domain is the biggest contributor to the affective filter. It is also worth noting that those who are learning English for the first time in the USA have many hurdles to get over, and to lower the affective filter a teacher should take care not to add further hurdles. Reception and influence: According to Wolfgang Butzkamm & John A. W. Caldwell (2009), comprehensible input, defined by Krashen as understanding messages, is indeed the necessary condition for acquisition, but it is not sufficient. Learners will crack the speech code only if they receive input that is comprehended at two levels. They must not only understand what is meant but also how things are quite literally expressed, i.e. how the different meaning components are put together to produce the message. This is the principle of dual comprehension. In many cases, both types of understanding can be conflated into one process, in others not. The German phrase "Wie spät ist es?" is perfectly understood as "What time is it?" However, learners need to know more: the literal rendering is *How late is it?, which is what the Germans actually say, giving us the anatomy of the phrase and the logic behind it. Only now is understanding complete, and we come into full possession of the phrase, which can become a recipe for many more sentences, such as "Wie alt ist es?" / "How old is it?" etc. According to Butzkamm & Caldwell (2009:64) "dually comprehended language input is the fuel for our language learning capacities". It is both necessary and sufficient. Reception and influence: The theory underlies Krashen and Terrell's comprehension-based language learning methodology known as the natural approach (1983). 
The Focal Skills approach, first developed in 1988, is also based on the theory. English as a Second Language Podcast was also inspired by Krashen's ideas on providing comprehensible input to language acquirers. The most popular competitors are the skill-building hypothesis and the comprehensible output hypothesis. The input hypothesis is related to instructional scaffolding. Applications in language teaching: The input hypothesis is often applied in practice with TPR Storytelling. Applications in language teaching: Levels Krashen divides learners into beginning and intermediate levels. At the intermediate level, teaching uses comprehensible input drawn from academic texts, but modified so that the subject matter is sheltered, or limited. (Note that sheltered subject-matter teaching is not for beginners or native speakers of the target language.) In sheltered instruction classes, the focus is on meaning, not form. As a practical matter, comprehensible input works with the following teaching techniques: The teacher should slow down and speak clearly and slowly, using short sentences and clauses. 
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Used car** Used car: A used car, also known as a pre-owned vehicle or a secondhand car, is a vehicle that has previously had one or more retail owners. Used cars are sold through a variety of outlets, including franchise and independent car dealers, rental car companies, buy here pay here dealerships, leasing offices, auctions, and private party sales. Some car retailers offer "no-haggle prices," "certified" used cars, and extended service plans or warranties. Used car industry: Used car export industry Depreciation levels of vehicles differ considerably between exporting and importing countries due to differences in income levels. The price of a vehicle depreciates faster in high-income countries than in low-income countries. Used vehicle sellers in high-income countries can thus sell their used vehicles for a higher price in low-income countries. This is the incentive to export used vehicles. The major car-exporting countries (counting both new and used vehicles) are Japan, the EU, the USA, and Canada. In the EU, 60% of used cars are marketed in other EU countries. Used car exports from the EU are focused on Eastern Europe, the Caucasus, Central Asia, and Africa. In the US, used vehicle exports are focused on Mexico, Nigeria, and Benin. The African continent gets 90% of its imports from Europe, and many of these cars would not meet European emissions standards. Used car industry: Used car industry in the USA Established in 1898, the Empire State Motor Wagon Company in Catskill, New York, was one of the first American used car lots. The used vehicle market is substantially larger than other large retail sectors, such as the school and office products market (US$206 billion in estimated annual sales) and the home improvement market (US$291 billion in estimated annual sales). With annual sales of over US$350 billion, the used vehicle industry represents almost half of the U.S. auto retail market and is the largest retail segment of the economy.
In 2016, about 17.6 million used cars and trucks were sold in the United States, and 38.5 million were sold worldwide. Used vehicle retailer: The Federal Trade Commission recommends that consumers consider a car retailer's reputation when deciding where to purchase a used car. Used vehicle retailer: Vehicle history reports In 2006, an estimated 34% of American used-vehicle buyers bought a vehicle history report. Vehicle history reports are one way to check the track record of any used vehicle. They provide customers with a record based on the vehicle's vehicle identification number (VIN). These reports will indicate items of public record, such as vehicle title branding, lemon law buybacks, odometer fraud, and product recalls. The report may indicate minor or moderate collision damage or improper vehicle maintenance. The services also attempt to identify vehicles previously owned by car rental agencies, police and emergency services, or taxi fleets. Consumers should research vehicles carefully, as these reporting services only report the information to which they have access. Used vehicle retailer: In some countries, the government is a provider of vehicle history, but this is usually a limited service providing information on just one aspect of the history, such as the United Kingdom's Ministry of Transport history. The U.S. Department of Justice's National Motor Vehicle Title Information System has only about a dozen approved data providers, about half of which sell car history data to consumers; the rest work only with car dealers. None of them are currently free of charge to consumers, and many are not free even to the car dealers. The Better Business Bureau recommends using one of these approved data providers when researching a used car.
The history reports use several sources to gather the data for each vehicle, including the police, the Driver and Vehicle Licensing Agency (DVLA), finance houses, the national mileage register, insurance companies, and industry bodies. Several of the services, most notably those in the United Kingdom and the United States, sell reports to dealers and then encourage the dealers to display the reports on their Internet sites. These reports are paid for by the dealer and then offered for free to potential buyers of the vehicle. Used vehicle retailer: In the UK, the DVLA provides information on the registration of vehicles to certain companies for consumer protection and anti-fraud purposes. Companies may add to the reports additional information gathered from police, finance, and insurance companies. Car history check services are available online for the public and motor trade customers. In India, the Ministry of Road Transport and Highways is responsible for providing information related to vehicle registration and service history. Used car pricing: Used car pricing reports typically produce three forms of pricing information. Dealer or retail price is the price a buyer should expect to pay when buying from a licensed new-car or used-car dealer. Dealer trade-in price or wholesale price is the price a shopper should expect to receive from a dealer when trading in a car. This is also the price that a dealer will typically pay for a car at a dealer wholesale auction. Used car pricing: Private-party price is the price a buyer should expect to pay when buying from an individual. A private-party seller is hoping to get more money than they would with a trade-in to a dealer. A private-party buyer is hoping to pay less than the dealer retail price. The growth of the Internet has fueled the availability of information on the prices of used cars. This information was once available only in trade publications that dealers had access to.
There are now numerous sources, such as online appraisal tools and internet classified ads, for used car pricing. Because there are multiple sources of used car pricing, listed values from different sources may differ. Each pricing guide receives data from different sources and makes different judgments about that data. Used car pricing: The pricing of used cars can be affected by geography. For example, convertibles have a higher demand in warmer climates than in cooler areas. Similarly, pickup trucks may be more in demand in rural than urban settings. The overall condition of the vehicle has a major impact on pricing. Condition is based on appearance, vehicle history, mechanical condition, and mileage. There is much subjectivity in how the condition of a car is evaluated. There are various theories as to how the market determines the prices of used cars sold by private parties, especially relative to new cars. One theory suggests that new car dealers are able to put more effort into selling a car, and can therefore stimulate stronger demand. Another theory suggests that owners of problematic cars ("lemons") are more likely to want to sell their cars than owners of perfectly functioning vehicles. Therefore, someone buying a used car bears a higher risk of buying a lemon, and the market price tends to adjust downwards to reflect that. Laws and regulations by region: Africa Of Africa's 54 countries, some set import age restrictions on used vehicle imports, while 27 place no restrictions on used vehicle imports, and just 5 (including Egypt, South Africa, Sudan, and Morocco) ban all used vehicle imports.
Laws and regulations by region: Mauritius, Seychelles, Algeria, and Chad set an age restriction of 3 years; Gabon and Senegal set an age restriction of 4 years; Libya, Mozambique, Niger, and Tunisia set an age restriction of 5 years; Côte d'Ivoire sets an age restriction of 7 years; Kenya, Mauritania, and Namibia set an age restriction of 8 years; Eritrea, Benin, and the Democratic Republic of Congo set an age restriction of 10 years; Liberia, Nigeria, and Eswatini set an age restriction of 12 years. Gambia, Ghana, Mali, Côte d'Ivoire, and Cape Verde have also implemented punitive taxation for vehicles beyond a certain age. Algeria also has an internal consumption tax, and Uganda has an environmental tax. Zambia and South Africa also have an inspection test requirement as a precondition to vehicle registration for vehicle imports. Asia China Japan Japan has inspection tests as a precondition to vehicle registration for vehicle imports. Europe European Union Under European Union law, used cars carry a statutory warranty, the so-called "liability for defects", which lasts for 12 months. Laws and regulations by region: North America Canada In Ontario, Canada, new and used vehicle sales are regulated by the Ontario Motor Vehicle Industry Council (OMVIC). In Alberta, Canada, new and used vehicle sales are regulated by the Alberta Motor Vehicle Industry Council (AMVIC). Transport Canada mandates that vehicles not made to comply with U.S. Federal Motor Vehicle Safety Standards are eligible for importation only if they are 15 years old or older. Laws and regulations by region: United States Used vehicles usually must be 25 years or older to be imported, but that requirement can be waived if a Show or Display exemption is granted. The Show or Display exemption limits mileage to 2,500 miles (4,023 km) a year, and only select cars are eligible. Canadian-market vehicles can also be federalized under separate regulations.
There are no age limits for used car exporting; used cars can be exported at any time regardless of age or condition. Laws and regulations by region: Central America Panama has a used vehicle import age restriction of 10 years, while Mexico has an age restriction of 5 years. In the Caribbean, most countries have age restrictions on used vehicle imports. South America Bolivia, Paraguay, and Peru are the only countries in South America that allow used vehicle imports. Paraguay has a used vehicle age limit of 10 years, while Peru has it set at 5 years. Laws and regulations by region: Australia In the Australian state of Queensland, when the odometer reading is fewer than 160,000 kilometres (99,000 mi) and the car was manufactured less than 10 years before the sale date, the warranty is three months or 5,000 kilometres (3,100 mi), whichever comes first. If the odometer reading is 160,000 kilometres (99,000 mi) or more, or the car was manufactured 10 years or more before the sale date, there is no warranty. Motorcycles, caravans, and commercial vehicles do not have a warranty at all. A commercial vehicle is a car with nine seats or more, or a vehicle able to carry one ton of goods, unless it is a utility: a car designed to carry goods, such as a Holden Commodore Utility or a Ford Falcon Utility, in which the back section is part of the car's body. Other vehicles that have an interchangeable back section are regarded as cab chassis; the back of the vehicle can be ordered from the factory or custom-built to suit the needs of the buyer, just as a light truck can be ordered as a tip truck or a body truck.
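The Queensland warranty rule described above is a simple two-condition decision. The following is a minimal sketch of that rule; the function name and return strings are illustrative, not taken from any statute.

```python
# Hypothetical sketch of the Queensland statutory used-car warranty rule:
# under 160,000 km AND manufactured less than 10 years before sale =>
# warranty of 3 months or 5,000 km (whichever comes first); otherwise none.
def qld_used_car_warranty(odometer_km: int, age_years: float) -> str:
    """Return the statutory warranty, per the rule described in the text."""
    if odometer_km < 160_000 and age_years < 10:
        return "3 months or 5,000 km"
    # High-mileage or older cars carry no statutory warranty.
    return "none"
```

Note that either condition alone (160,000 km on the odometer, or 10 years since manufacture) is enough to remove the warranty.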
**Répertoire de vedettes-matière de l'Université Laval** Répertoire de vedettes-matière de l'Université Laval: The Répertoire de vedettes-matière de l'Université Laval (RVM) is a controlled vocabulary made up of four mostly bilingual thesauruses. It is designed for document indexers, organizations that want to describe the content of their documents or of their products and services, as well as anyone who wants to clarify vocabulary in English and French as part of their work or research. Répertoire de vedettes-matière de l'Université Laval: RVM was created and is updated by the Répertoire de vedettes-matière section of the Université Laval Library in Québec City, and it contains over 300,000 authority records. It is used by over 200 public and private libraries and documentation centres in Québec, across Canada and some other countries, mostly in Europe. History: The Répertoire de vedettes-matière was created in 1946, when Université Laval librarians took the initiative to reuse the catalogue records from the Library of Congress's National Union Catalog to describe the documents in their collection. Before the copy cataloguing movement, the practice of reusing these records provided considerable savings despite the time needed to translate them. A list of subject headings was gradually compiled in French to provide descriptions of and access to local documentary resources. History: After the first official edition of the Répertoire was published in 1962, RVM attracted the attention of major documentary institutions in Québec and across Canada that recognized it as an essential tool for systemizing and standardizing methods for representing the content of their collections. The National Library of Canada adopted RVM in the French version of Canadiana, the National Bibliography of Canada, and gave it national standard status in 1974. 
A formal collaboration agreement between the two institutions was signed, and the first team of librarians dedicated solely to the intellectual management of RVM was established at Université Laval. History: It was also in 1974 that the Bibliothèque publique d'information in Paris started using RVM, which was gaining recognition both in French-speaking countries and around the world. In 1980 the Université Laval Library signed a collaboration agreement with France's national library, the Bibliothèque nationale de France (BnF), allowing it to use RVM as the basic corpus for creating its subject headings authority index, known as the Répertoire d'autorité matière encyclopédique et alphabétique unifié (RAMEAU), the only French-language equivalent of RVM still in use today. RVM's content and structure reflect its close relationship to the Library of Congress Subject Headings (LCSH), but RVM has nonetheless maintained its independence. In addition to developing original headings, RVM has expanded by adding subject headings whose English equivalents come from other source files more specialized than the LCSH. This expansion began in 1978 with the Canadian Subject Headings, followed in 1994 by the Medical Subject Headings from the United States National Library of Medicine and the Art & Architecture Thesaurus (AAT) from the J. Paul Getty Trust. History: The 9th and final paper edition of RVM was published in 1983, along with the first user manual. In 2008 Université Laval and ASTED (Association pour l'avancement des sciences et des techniques de la documentation) jointly published a practical guide on the Répertoire de vedettes-matière. In 2010 the first RVM website was launched.
In 2017 it launched its new Web platform, which provides access to more powerful search and display features, new indexing support tools, and translations and adaptations of three new thesauruses developed by the Library of Congress: the Library of Congress Genre/Form Terms (LCGFT), the Library of Congress Medium of Performance Thesaurus for Music (LCMPT), and the Library of Congress Demographic Group Terms (LCDGT). The thesauruses: RVM (Topics) RVM (Topics) is an encyclopedic thesaurus whose terms are classified according to a specific syntax. It is a translation and adaptation of the following thesauruses: the Library of Congress Subject Headings (LCSH); the Canadian Subject Headings (CSH) of Library and Archives Canada; the Medical Subject Headings (MeSH) of the National Library of Medicine; and the Art & Architecture Thesaurus (AAT) of the J. Paul Getty Trust. It also contains original authorities exclusive to RVM, as well as equivalents in RAMEAU of the Bibliothèque nationale de France. The thesauruses: RVM (Topics) includes: Subject headings (topical and geographic names, buildings, events, and certain other categories of proper names: gods, dukes, last names, programming languages, fictional and legendary characters, sects, etc.). A subject heading is an indexing term (consisting of a heading by itself or a heading followed by one or more subdivisions) used to accurately represent the topic of a document. The thesauruses: Subdivisions (topical, form, chronological, and geographic). A subdivision is a component of a subject heading that supplements the main heading to clarify the concept or topic it represents. Here are a few examples of subject headings used in the Université Laval Library catalogue.
These headings appear as is in the topic field of the bibliographic records: Réseaux sociaux--Histoire--21e siècle (Social networks—History—21st century) Agriculture--Aspect économique--Modèles mathématiques (Agriculture—Economic aspects—Mathematical models) Canada--Accords commerciaux--Mexique--Congrès (Canada—Commercial treaties—Mexico—Congresses) Écriture--Philosophie--Ouvrages avant 1800 (Writing—Philosophy—Early works to 1800) Personnes âgées--Services communautaires de santé (Community health services for older people) Génie nucléaire--Sécurité--Mesures (Nuclear engineering—Safety measures) RVMGF RVMGF is a translation and adaptation of the Library of Congress Genre/Form Terms (LCGFT). It includes genre/form terms that make it possible to identify resources by their genre or form, thereby providing more options for indexing and identifying various kinds of documents. The thesauruses: Some examples: Romans historiques (Historical fiction) Films d'horreur (Horror films) Bandes dessinées de science-fiction (Science fiction comics) Jeux de plateaux (Board games) Cartes topographiques (Topographic maps) Données géospatiales (Geospatial data) Catalogues d'exposition (Exhibition catalogs) Rapports annuels (Annual reports) RVMMEM RVMMEM is a translation and adaptation of the Library of Congress Medium of Performance Thesaurus for Music (LCMPT). It includes medium of performance terms for music designed to represent the instruments, performers, and ensembles involved in a musical score or sound recording. These terms make it possible to describe and identify musical pieces based on specific criteria regarding the number of performance media involved in the performance of the piece being searched.
The thesauruses: Some examples: cithare (zither) djembé (djembe) clarinette sopranino (sopranino clarinet) harpe celtique (Irish harp) ensemble de flûtes (flute choir) chœur d'enfants (children's chorus) baryton (baritone voice) RVMGD RVMGD is a translation and adaptation of the Library of Congress Demographic Group Terms (LCDGT). It includes demographic group terms that make it possible to define the characteristics of the intended audiences of resources, as well as the creators of, and contributors to, those resources. The thesauruses: Some examples: Adolescents (Teenagers) Guadeloupéens (Guadeloupians) Doctorants (Doctoral students) Écologistes (Ecologists) Mères (Mothers) Ingénieurs (Engineers) Locuteurs du catalan (Catalan speakers) Archéologues (Archaeologists) Bibliothécaires de référence (Reference librarians) Sources: Bélair, Jo-Anne, Bélanger, Sylvie, Dolbec, Denise, and Hudon, Michèle. Guide pratique du Répertoire de vedettes-matière de l'Université Laval. Montréal: Éditions ASTED; Québec: Université Laval, 2008. Dolbec, Denise. « Le répertoire de vedettes-matière : outil du XXIe siècle », Documentation et bibliothèques, 2006, vol. 52, no. 2, p. 99–108. Gascon, Pierre. « Le Répertoire de vedettes-matière de la Bibliothèque de l'Université Laval : sa genèse et son évolution », Documentation et bibliothèques, 1993, vol. 39, no. 3, p. 129–139; 1994, vol. 40, no. 1, p. 25–32.
**Dysprosium titanate** Dysprosium titanate: Dysprosium titanate (Dy2Ti2O7) is an inorganic compound, a ceramic of the titanate family, with a pyrochlore structure. Dysprosium titanate, like holmium titanate and holmium stannate, is a spin ice material. In 2009, quasiparticles resembling magnetic monopoles were observed in it at low temperature and high magnetic field. Dysprosium titanate (Dy2TiO5) has been used since 1995 as a material for control rods in commercial nuclear reactors.
**Raw device** Raw device: In computing, specifically in Unix and Unix-like operating systems, a raw device is a special kind of logical device associated with a character device file that allows a storage device such as a hard disk drive to be accessed directly, bypassing the operating system's caches and buffers (although the hardware caches might still be used). Applications like a database management system can use raw devices directly, enabling them to manage how data is cached, rather than deferring this task to the operating system. Raw device: In FreeBSD, all device files are in fact raw devices. Support for non-raw devices was removed in FreeBSD 4.0 in order to simplify buffer management and increase scalability and performance. In the Linux kernel, raw devices were deprecated and scheduled for removal at one point, because the O_DIRECT flag can be used instead. However, the decision was later made to keep raw device support, since some software cannot use the O_DIRECT flag. Raw devices simply open block devices as if the O_DIRECT flag had been specified. Raw devices are character devices (major number 162). The first minor number (i.e. 0) is reserved as a control interface and is usually found at /dev/rawctl. A command-line utility called raw can be used to bind a raw device to an existing block device. These "existing block devices" may be disks or CD-ROMs/DVDs whose underlying interface can be anything supported by the Linux kernel (for example, IDE/ATA or SCSI).
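The major/minor numbering convention described above (major 162, minor 0 for the control interface) can be illustrated with the standard device-number packing functions. This sketch only packs and unpacks device numbers; it creates no device nodes, and an actual binding would be done as root with the raw utility (e.g. `raw /dev/raw/raw1 /dev/sda`).

```python
# Illustration of the Linux raw-device numbering convention: raw devices
# are character devices with major number 162, and minor 0 is reserved
# for the control interface (usually /dev/rawctl).
import os

RAW_MAJOR = 162  # major number reserved for raw devices on Linux

rawctl = os.makedev(RAW_MAJOR, 0)  # device number of /dev/rawctl
raw1 = os.makedev(RAW_MAJOR, 1)    # device number of /dev/raw/raw1

# os.major/os.minor recover the components from the packed dev_t value.
assert os.major(raw1) == RAW_MAJOR and os.minor(raw1) == 1
assert os.minor(rawctl) == 0
```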
**IOE engine** IOE engine: The intake/inlet over exhaust, or "IOE" engine, known in the US as the F-head, is a four-stroke internal combustion engine whose valvetrain comprises OHV inlet valves within the cylinder head and exhaust side-valves within the engine block. IOE engines were widely used in early motorcycles, initially with the inlet valve being operated by engine suction instead of a cam-activated valvetrain. When the suction-operated inlet valves reached their limits as engine speeds increased, the manufacturers modified the designs by adding a mechanical valvetrain for the inlet valve. A few automobile manufacturers, including Willys, Rolls-Royce and Humber, also made IOE engines for both cars and military vehicles. Rover manufactured inline four- and six-cylinder engines with a particularly efficient version of the IOE induction system. IOE engine: A few designs with the reverse system, exhaust over inlet (EOI), have been manufactured, such as the Ford Quadricycle of 1896. Description: In an F-head/IOE engine, the intake manifold and its valves are located in the cylinder head above the cylinders, and are operated by rocker arms which reverse the motion of the pushrods so that the intake valves open downward into the combustion chamber. The exhaust manifold and its valves are located beside or as part of the cylinders, in the block. The exhaust valves are either roughly or exactly parallel with the pistons; their faces point upwards, and they are not operated by separate pushrods but by contact with a camshaft through the tappet or valve lifter and an integrated valve stem/pushrod. The valves were offset to one side, forming what seemed to be a pocket, leading to the term "pocket valve" being used for IOE engines.
An F-head engine combines features from both overhead-valve and flathead type engines, the inlet valve operating via pushrod and rocker arm and opening downward like an overhead valve engine, while the exhaust valve is offset from the cylinder and opens upward via an integrated pushrod/valve stem directly actuated by the camshaft, much like the valves in a flathead engine. Origin: The earliest IOE layouts used atmospheric inlet valves which were held closed with a weak spring and were opened by the pressure differential created when the piston went down on the inlet stroke. This worked well with low-speed early engines and had the benefit of being very simple and cheap, but the weak spring was unable to close the valve fast enough as engine speed increased. Faster closing required stronger springs, which in turn required direct mechanical action to open: the atmospheric pressure of 15 PSI limits the total force available from creating a pressure differential, meaning that a 15-pound (6.8 kg) spring is the strongest that can be used (for practical purposes, it would have to be lighter still). When the limits of this system were reached, the design was improved without substantial changes to the head casting by adding a mechanical system to open the inlet valves and stronger springs to close them. In both cases, the exhaust valves were in the block and were opened by contact with a camshaft through a tappet or valve lifter and closed by springs. Advantages and disadvantages: The IOE design allows the use of larger valves than a sidevalve (or L-head) or overhead valve engine. Its advantages over the sidevalve/flathead also include a compact combustion chamber, a well-located spark plug, and a cooling effect from the mixture swirl, along with better intake mixture flow.
Disadvantages include a combustion chamber of more complex shape than that of an overhead valve engine, which affects combustion rates and can create hot spots in the piston head, and inferior valve location, which hinders efficient scavenging. Due to the added complications of rocker arms and pushrods, it is also more complex and expensive to make than a sidevalve engine, as well as being physically larger due to the rocker arms being placed over the cylinder head, and it requires an inlet valve and ports in the cylinder head, while the cylinder of a sidevalve engine is simply a closed-end cylinder. Rover IOE engines: Rover used a more advanced form of IOE engine. It was designed by Jack Swaine in the mid-to-late 1940s and was in production from 1948 to the early 1990s. Unlike the conventional F-head IOE, this had an efficient combustion chamber designed for good combustion rather than simple manufacture. The top surface of the block was machined at an angle, with the piston crowns angled in a "pitched roof" to match. At TDC, the piston almost touched the angled inlet valve and provided good 'squish' to the combustion chamber itself, offset to the side by half a cylinder diameter. The resultant combustion chamber shape was a near-ideal hemisphere, although inverted and tilted from the usual "hemi-head" design. The spark plug was centrally mounted and this, together with the turbulence generated by the squish, provided a short flame path. The gas layer between the piston and inlet valve was so thin and confined that it reduced the risk of detonation on poor fuel, one factor that kept the engine in service with Land Rover for so long. During the late 1940s and early 1950s, when the only petrol available was low-octane 'pool' petrol, this also allowed Rover to run higher compression ratios than many competitors with the more usual side- or overhead-valve designs. The unusual combustion chamber arrangement with its angled valves also led to an unusual valve train.
The block-mounted camshaft operates small wedge-shaped rockers, one for each valve. In early models the camshaft acts on a simple pad on the rocker, but in later models this pad was replaced by a roller follower. The exhaust rockers act directly on the valves, whilst the inlet rockers act on pushrods running up to a second set of longer flat rockers operating the inlet valves. The Rover engine, like many 1940s and earlier British designs, was a small-bore, long-stroke (undersquare) engine to keep the RAC tax horsepower rating, and thus the road tax, as low as possible. The IOE layout enabled Rover to use larger valves than would normally be possible in a small-bore engine, allowing better breathing and better performance. The Rover IOE engine family encompassed straight-4 (1.6- and 2.0-litre) and straight-6 (2.1-, 2.2-, 2.3-, 2.4-, 2.6- and 3.0-litre) engines and powered much of the company's post-war range in the form of the P3, P4 and P5 models. Adapted versions of the 1.6 and 2.0 IOE engines were used in early versions of the Land Rover as well. Power outputs ranged from 50 bhp (Land Rover 1.6) to 134 bhp (P5 3-litre MkII & III). The 2.6 6-cylinder IOE engine had a particularly long career. After being used in Rover P4 saloon cars, it was added to long-wheelbase Land Rover models from 1963 in the 2A Forward Control models, then in 1967 in the bonneted 109", and remained an optional fitment until 1980, when it was replaced by the Rover V8. Rover IOE engines: Similar Packard cylinder head The shape of the combustion chamber as an "inverted hemi-head", along with the angled cylinder head joint and pitched-roof piston crowns, had earlier been used in the 1930 Van Ranst-designed Packard V12 engine, although in this case the valves were both in the block as side valves and the spark plug was poorly placed at the extremity of the combustion chamber.
Other users: Motorcycles The IOE valvetrain layout was used extensively in early American motorcycles, mainly based on a French design by De Dion-Bouton. Harley-Davidson used IOE engines with atmospheric inlet valves until 1912, and with mechanically driven inlet valves from 1911 to 1929. Indian used IOE valvetrains on all of their four-cylinder bikes except those built in 1936 and 1937. Other American motorcycle manufacturers that used IOE engines included Excelsior, Henderson, and Ace. Other users: Automobiles Hudson used an IOE inline-four engine in its Essex line of cars from 1919 to 1923 and an IOE straight-six engine in its Hudson line of cars from 1927 to 1929. In Europe in the same period, Humber Limited of Coventry, England, produced a full range of cars using IOE engines; these were, however, phased out at the end of the 1920s in favour of models using cheaper L-head engines shared with Hillman. Post-WW2, Willys and its successor Kaiser-Jeep used variants of the Willys Hurricane engine from 1950 to 1971. Rolls-Royce used an IOE straight-six engine, originally designed immediately prior to WW2, in their post-war Silver Wraith. From this engine Rolls-Royce derived the B series engines for British Army combat vehicles, produced by Rolls-Royce in four-, six- and eight-cylinder versions (the B40, B60 and B80) for military vehicles, fire appliances and even buses; in the case of the B40, it was used in the Austin Champ built by Morris Motors. A more advanced, shorter-stroke passenger car development, the FB60 engine, a straight-six IOE engine displacing 3909cc and producing a claimed 175 bhp, was used by BMC in the Vanden Plas Princess 4-litre R saloon car. Over 6000 of these cars were made. Exhaust over intake (EOI): Some engines have been made with the reverse configuration, having the exhaust valve located in the cylinder head and the intake valve in the block.
The ABC Skootamota began production with an engine of this configuration, but this was changed to an overhead valve engine before production ended. In 1936 and 1937, the Indian Four had the valve positions reversed, with the exhaust valve in the head and the inlet valve in the block. In theory, this would improve fuel vaporization, and the engine was indeed more powerful. However, the new arrangement made the cylinder head very hot, and the exhaust valve linkage required frequent adjustment. The design returned to the original IOE configuration in 1938.
**Penbutolol** Penbutolol: Penbutolol (brand names Levatol, Levatolol, Lobeta, Paginol, Hostabloc, Betapressin) is a medication in the class of beta blockers, used in the treatment of high blood pressure. Penbutolol binds both beta-1 and beta-2 adrenergic receptors, making it a non-selective β blocker, and has sympathomimetic properties that allow it to act as a partial agonist at β adrenergic receptors. It was approved by the FDA in 1987 and was withdrawn from the US market by January 2015. Medical uses: Penbutolol is used to treat mild to moderate high blood pressure. Like other beta blockers, it is not a first-line treatment for this indication. It should not be used, or used only with caution, in people with heart failure or asthma. It may mask signs of low blood sugar in people with diabetes, and it may mask signs of hyperthyroidism. Animal studies showed some signs of potential harm during pregnancy, and the drug has not been tested in pregnant women. It is not known if penbutolol is secreted in breast milk. Side effects: Penbutolol has a low frequency of side effects, which include dizziness, light-headedness, and nausea. Pharmacology: Pharmacodynamics Penbutolol is able to bind to both beta-1 adrenergic receptors and beta-2 adrenergic receptors (the two subtypes), thus making it a non-selective β blocker.: Table 10–2, p 252  Penbutolol is a sympathomimetic drug with properties allowing it to act as a partial agonist at β adrenergic receptors. Blocking β adrenergic receptors decreases the heart rate and cardiac output to lower arterial blood pressure. β blockers also decrease renin levels, which ultimately results in less water being reabsorbed by the kidneys and therefore a lower blood volume and blood pressure. Penbutolol acts on the β1 adrenergic receptors in both the heart and the kidney.
When β1 receptors are activated by a catecholamine, they stimulate a coupled G protein which activates adenylyl cyclase to convert adenosine triphosphate (ATP) to cyclic adenosine monophosphate (cAMP). The increase in cAMP ultimately alters the movement of calcium ions in heart muscle and increases heart rate. Penbutolol blocks this signaling and decreases heart rate, which lowers blood pressure. Its partial-agonist activity is useful in preventing the bradycardia that can result from decreasing the heart rate excessively. Penbutolol's binding to β1 adrenergic receptors also alters kidney function. Under normal physiological conditions, the enzyme renin converts angiotensinogen to angiotensin I, which is then converted to angiotensin II. Angiotensin II stimulates the release of aldosterone from the adrenal gland, promoting electrolyte and water retention; by suppressing renin release, penbutolol decreases aldosterone levels, ultimately increasing water excretion and decreasing blood volume and pressure. Like propranolol and pindolol, it is a serotonin 5-HT1A and 5-HT1B receptor antagonist; this discovery by several groups in the 1980s generated excitement among those doing research on the serotonin system, as such antagonists were rare at that time. Pharmacokinetics Penbutolol is rapidly absorbed from the gastrointestinal tract, has a bioavailability over 90%, and has a rapid onset of effect. Its half-life is five hours. Society and culture: Availability Penbutolol was approved by the FDA in 1987. In January 2015 the FDA acknowledged that penbutolol was no longer marketed in the US, and determined that the drug was not withdrawn for safety reasons.
**Amur virus** Amur virus: Amur virus (AMRV) is a zoonotic negative-sense single-stranded RNA virus. It may be a member of the genus Orthohantavirus, but it has not been definitively classified as a species and may only be a strain. It has been identified as a causative agent of hemorrhagic fever with renal syndrome. Genome: The complete genome sequence of Amur virus has been determined from a sample obtained from Apodemus peninsulae in Northeastern China. AMRV strains from China and the Russian Far East and Soochong virus (SOOV) strains (especially SOO-1/2 from Northeastern Korea) were found to share high nucleotide sequence identity and to form a monophyletic group distinct from Apodemus agrarius-borne HTNV. Two genetic sublineages of SOOV exist, but findings suggest that AMRV and SOOV are different strains of the same hantavirus. Reservoir: The virus is reported to be carried by Korean field mice (Apodemus peninsulae) in the Far East of Russia, China, and Korea.
**Cross-phase modulation** Cross-phase modulation: Cross-phase modulation (XPM) is a nonlinear optical effect in which one wavelength of light affects the phase of another wavelength of light through the optical Kerr effect. When the optical power at one wavelength changes the refractive index of the medium, the resulting phase shift imposed on light at another wavelength is known as XPM. Applications of XPM: Cross-phase modulation can be used as a technique for adding information to a light stream by modifying the phase of a coherent optical beam with another beam through interactions in an appropriate nonlinear medium. This technique is applied to fiber optic communications. If both beams have the same wavelength, this type of cross-phase modulation is called degenerate. XPM is among the most commonly used techniques for quantum nondemolition measurements. Applications of XPM: Other advantageous applications of XPM include: nonlinear optical pulse compression of ultrashort pulses; passive mode-locking; ultrafast optical switching; demultiplexing of OTDM channels; wavelength conversion of WDM channels; and measurement of nonlinear optical properties of the medium (the nonlinear index n2, i.e. the Kerr nonlinearity, and the nonlinear response relaxation time). Disadvantages of XPM: XPM in DWDM applications In dense wavelength-division multiplexing (DWDM) applications with intensity modulation and direct detection (IM-DD), the effect of XPM is a two-step process: first, the signal is phase-modulated by the copropagating second signal; second, dispersion transforms the phase modulation into a power variation. Additionally, the dispersion results in a walk-off between the channels and thereby reduces the effect of XPM. Disadvantages of XPM: XPM leads to interchannel crosstalk in WDM systems and can produce amplitude and timing jitter.
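As a rough numerical illustration, the size of the XPM phase shift can be sketched in Python. This is a sketch under assumptions: the fiber parameters below are typical textbook values for standard single-mode fiber, and the formula φ_XPM = 2γPL (twice the self-phase-modulation shift for equal powers) is the standard lowest-order Kerr-nonlinearity estimate, not anything specific to this article.

```python
import math

def nonlinear_coefficient(n2, wavelength, a_eff):
    """Fiber nonlinearity gamma = 2*pi*n2 / (lambda * A_eff), in 1/(W*m)."""
    return 2 * math.pi * n2 / (wavelength * a_eff)

def xpm_phase_shift(gamma, pump_power, length):
    """XPM phase shift (radians) on a probe from a copropagating pump.

    For equal powers, XPM is twice as effective as SPM, hence the factor 2."""
    return 2 * gamma * pump_power * length

# Illustrative (assumed) values for standard single-mode fiber:
n2 = 2.6e-20          # m^2/W, Kerr nonlinear index of silica
wavelength = 1550e-9  # m, telecom C-band
a_eff = 80e-12        # m^2, effective mode area
gamma = nonlinear_coefficient(n2, wavelength, a_eff)  # ~1.3e-3 per W per m

# 10 mW pump over 10 km of fiber:
phi = xpm_phase_shift(gamma, pump_power=0.010, length=10_000)
```

Even at milliwatt powers, the kilometre-scale lengths of fiber links accumulate a phase shift of a significant fraction of a radian, which is why XPM matters in DWDM systems.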
**Peak kilovoltage** Peak kilovoltage: Peak kilovoltage (kVp) refers to the maximum high voltage applied across an X-ray tube to produce the X-rays. During X-ray generation, surface electrons are released from a heated cathode by thermionic emission. The applied voltage (kV) accelerates these electrons toward an anode target, ultimately producing X-rays when the electrons are stopped in the anode. Thus, the kVp corresponds to the highest kinetic energy of the electrons striking the target, and is proportional to the maximum photon energy of the resulting X-ray emission spectrum. In early and basic X-ray equipment, the applied voltage varies cyclically, with one, two, or more pulses per mains AC power cycle. One standard way to measure pulsating DC is its peak amplitude, hence kVp. Most modern X-ray generators apply a constant potential across the X-ray tube; in such systems, the kVp and the steady-state kV are identical. Peak kilovoltage: kVp controls the property called "radiographic contrast" of an X-ray image (the ratio of transmitted radiation through regions of different thickness or density). Each body part contains a certain type of cellular composition which requires an X-ray beam with a certain kVp to penetrate it. The body part is said to have "subject contrast" (that is, different cellular make up: some dense, some not so dense tissues all within a specific body part). For example: bone to muscle to air ratios in the abdomen differ from those of the chest area. So the subject contrast is said to be higher in the chest than in the abdomen. In order to image the body so that the maximum information will result, higher subject contrast areas require a higher kVp so as to result in a low radiographic contrast image, and vice versa. Peak kilovoltage: Although the product of tube current and exposure time, measured in milliampere-seconds (mA·s), is the primary controlling factor of radiographic density, kVp also affects the radiographic density indirectly. 
As the energy (which is proportional to the peak voltage) of the stream of electrons in the X-ray tube increases, the X-ray photons created from those electrons are more likely to penetrate the tissues of the body and reach the image receptor (film or plate), resulting in increased film density (compared to lower-energy beams that may be absorbed in the body on their way to the image receptor). However, scattered X-rays also contribute to increased film density: the higher the kVp of the beam, the more scatter is produced. Scatter adds unwanted density (that is, density that does not bring pertinent information to the image receptor). This is why kVp is not primarily used to control film density: once the density resulting from increasing kVp exceeds what is needed to penetrate a body part, it only adds useless photons to the image. Peak kilovoltage: Increasing mAs causes more photons (radiation) of the particular kVp energy to be produced. This is helpful when larger body parts are imaged, because they require more photons. The more photons that pass through a particular tissue type, the more photons reach the image receptor. The more photons that pass through a part and reach the image receptor with pertinent information, the more useful the film density on the resulting image. Conversely, lower mAs creates fewer photons, which decreases film density, but is helpful when imaging smaller parts. kVp is measured with a kV meter. The quality of the X-ray beam depends on the kV applied across the tube, and a slight change in kV affects the image significantly; it is therefore necessary to measure the kV applied to the tube accurately.
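The proportionality between kVp and maximum photon energy can be made concrete with the Duane–Hunt relation: an electron accelerated through V kilovolts gains V keV of kinetic energy, which caps the photon energy and sets a minimum wavelength in the emission spectrum. A small illustrative sketch (the function names are ours, not standard terminology):

```python
# Duane-Hunt relation: a tube voltage of V kilovolts gives electrons a maximum
# kinetic energy of V keV, bounding the X-ray photon energy from above.
HC_OVER_E_EV_NM = 1239.84  # hc/e in eV*nm (photon energy <-> wavelength)

def max_photon_energy_kev(kvp):
    """Maximum photon energy (keV) in the emission spectrum for a given kVp.

    Numerically equal to kVp: an electron accelerated through V kV gains V keV."""
    return kvp

def min_wavelength_nm(kvp):
    """Shortest wavelength (nm) in the spectrum, via the Duane-Hunt law."""
    return HC_OVER_E_EV_NM / (kvp * 1000.0)

# e.g. a 100 kVp exposure: 100 keV maximum photon energy,
# minimum wavelength of roughly 0.0124 nm (12.4 pm)
```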
**Read-mostly memory** Read-mostly memory: Read-mostly memory (RMM) is a type of memory that can be read quickly, but written to only slowly. Read-mostly memory: Historically, the term has referred to different types of memory over time. In 1970, it was used by Intel and Energy Conversion Devices to refer to a new type of amorphous and crystalline nonvolatile and reprogrammable semiconductor memory (phase-change memory, aka PCM/PRAM). However, it was also used to refer to reprogrammable memory (REPROM) and magnetic-core memory. The term has mostly fallen into disuse, but is sometimes used today to refer to electrically erasable programmable read-only memory (EEPROM) or flash memory.
**ICE Cubes Service** ICE Cubes Service: The International Commercial Experiment Cubes (ICE Cubes) service is a commercial service that offers access to space for research, technology and education. It allows public or private entities to run their experiments on board the International Space Station, giving them access to microgravity. Examples of potential fields of research include pharmaceutical development, microbiology, stem cells, radiation, materials science, 3D printing, fluid sciences, and art. The service also allows users to demonstrate and validate technologies in microgravity. Overview: The ICE Cubes Service stems from a commercial agreement between Space Applications Services and the European Space Agency (ESA). The ICE Cubes Facility (ICF) is the first European commercial research facility on the ISS. It was installed in June 2018 and houses modular experiments in different disciplines. Researchers have continuous live remote access to their payloads via the Internet to directly read data and send commands. History: From June 2015 to January 2018, the ICE Cubes Facility (ICF) was developed, manufactured, assembled and tested by Space Applications Services in Belgium, partially under a European Union Horizon 2020 grant. On 21 May 2018, the ICF was launched on a Cygnus spacecraft (CRS OA-9E mission). On 6 June 2018, NASA astronaut Ricky Arnold installed the facility into the EPM rack inside the European Columbus module. The first experiments (so-called "Experiment Cubes") were launched on the SpaceX Dragon supply vessel and then installed inside the ICF by ESA astronaut Alexander Gerst. They included science, technology and art projects provided by the International Space University (ISU) and international collaborators. In parallel, a partnership agreement for the launch and exploitation in orbit of the ICF was signed with ESA on 20 June 2017, and the first European commercial access service to the ISS officially opened its doors for business. 
History: One of the ISU Experiment Cubes, Hydra-2, was a research experiment on methane-producing microorganisms to investigate their activity in microgravity conditions. Researchers hope these microorganisms could be used for biomining of asteroids to produce methane to fuel future space missions. Hydra-2 returned to Earth in January 2019. History: The second International Space University cube was an interactive art installation. The artistic cube contained a kaleidoscope linked to an installation on the ground that is activated by the heart pulse of participants. The images produced by the device are then sent down to the ground installation on Earth and displayed in real time. The other experiment is RUSH, a technology demonstration payload for radiation-tolerant electronics. Hydra-3 is still hosted inside the ICF. History: Successive experiments studied aspects of plant germination, demonstration of spectroscopic diagnostics, and recovery of cybersecurity functions on commercial electronics in space. In March 2019, ICE Cubes became an implementation partner of the Center for the Advancement of Science in Space (CASIS), a non-profit organization that manages the ISS United States National Laboratory. History: In October 2019, UK Science Minister Chris Skidmore announced the launch by the UK Space Agency of a contest to identify and match-fund business ideas taking advantage of space. ICE Cubes was chosen as one of the hosts for the launch of the selected projects. In December 2020, Cube#6 - Kirara was launched on SpaceX CRS-21 and installed in the ICF. Kirara is the first experiment for COVID-19 drug discovery research on the International Space Station. It tests a COVID-19 medicine in microgravity in order to better understand how remdesivir interacts with its delivery substance, cyclodextrin, so that the drug's efficiency can be improved. 
The ICF and the Experiment Cubes: The ICE Cubes Facility is composed mostly of the ICE Cubes Container, which is permanently housed in the EPM rack, and of the ICE Cubes Framework, which hosts the Experiment Cubes. The Framework can slide in and out of the Container, and the Experiment Cubes are simply plugged onto the Framework, which minimizes the crew time needed for installation and removal. The ICF provides the Experiment Cubes with two power lines, real-time and deferred communications (via the Internet), and temperature regulation (forced air flow). The Experiment Cubes can be built with commercial off-the-shelf products and be integrated together to permit larger experiments. The ICF can accommodate up to twenty "plug-and-play" 1U (10×10×10 cm) Experiment Cubes or a smaller number of larger ones within the ICF Containers, plus some additional Experiment Cubes inside the Columbus cabin, either wired or via Wi-Fi.
**Ethinylestradiol/desogestrel** Ethinylestradiol/desogestrel: Desogestrel/ethinylestradiol (EE/DSG), sold under the brand name Marvelon among others, is a fixed-dose combination of desogestrel (DSG), a progestin, and ethinylestradiol (EE), an estrogen, which is used as a birth control pill to prevent pregnancy in women. It is taken by mouth.It was approved for medical use in the United Kingdom in 1981, and in the United States in 1992. In 2020, it was the 120th most commonly prescribed medication in the United States, with more than 5 million prescriptions.
**Bundle metric** Bundle metric: In differential geometry, the notion of a metric tensor can be extended to an arbitrary vector bundle, and to some principal fiber bundles. This metric is often called a bundle metric, or fibre metric. Definition: If M is a topological manifold and π : E → M a vector bundle on M, then a metric on E is a bundle map k : E ×M E → M × R from the fiber product of E with itself to the trivial bundle with fiber R such that the restriction of k to each fibre over M is a nondegenerate bilinear map of vector spaces. Roughly speaking, k gives a kind of dot product (not necessarily symmetric or positive definite) on the vector space above each point of M, and these products vary smoothly over M. Properties: Every vector bundle with paracompact base space can be equipped with a bundle metric. For a vector bundle of rank n, this follows from the bundle charts ϕ:π−1(U)→U×Rn : the bundle metric can be taken as the pullback of an inner product on Rn, for example the standard Euclidean inner product. The structure group of such a metric is the orthogonal group O(n). Example: Riemann metric: If M is a Riemannian manifold, and E is its tangent bundle TM, then the Riemannian metric gives a bundle metric, and vice versa. Example: on vertical bundles: If the bundle π:P → M is a principal fiber bundle with group G, and G is a compact Lie group, then there exists an Ad(G)-invariant inner product k on the fibers, taken from the inner product on the corresponding compact Lie algebra. More precisely, there is a metric tensor k defined on the vertical bundle E = VP such that k is invariant under left-multiplication: k(Lg∗X,Lg∗Y)=k(X,Y) for vertical vectors X, Y, where Lg is left-multiplication by g along the fiber and Lg* is the pushforward. That is, E is the vector bundle that consists of the vertical subspace of the tangent of the principal bundle. 
Example: on vertical bundles: More generally, whenever one has a compact group with Haar measure μ, and an arbitrary inner product h(X,Y) defined at the tangent space of some point in G, one can define an invariant metric simply by averaging over the entire group, i.e. by defining k(X,Y)=∫Gh(Lg∗X,Lg∗Y)dμg as the average. The above notion can be extended to the associated bundle P×GV where V is a vector space transforming covariantly under some representation of G. In relation to Kaluza–Klein theory: If the base space M is also a metric space, with metric g, and the principal bundle is endowed with a connection form ω, then π*g+kω is a metric defined on the entire tangent bundle E = TP. More precisely, one writes π*g(X,Y) = g(π*X, π*Y) where π* is the pushforward of the projection π, and g is the metric tensor on the base space M. The expression kω should be understood as (kω)(X,Y) = k(ω(X),ω(Y)), with k the metric tensor on each fiber. Here, X and Y are elements of the tangent space TP. In relation to Kaluza–Klein theory: Observe that the lift π*g vanishes on the vertical subspace TV (since π* vanishes on vertical vectors), while kω vanishes on the horizontal subspace TH (since the horizontal subspace is defined as that part of the tangent space TP on which the connection ω vanishes). Since the total tangent space of the bundle is a direct sum of the vertical and horizontal subspaces (that is, TP = TV ⊕ TH), this metric is well-defined on the entire bundle. In relation to Kaluza–Klein theory: This bundle metric underpins the generalized form of Kaluza–Klein theory due to several interesting properties that it possesses. The scalar curvature derived from this metric is constant on each fiber; this follows from the Ad(G) invariance of the fiber metric k. 
The scalar curvature on the bundle can be decomposed into three distinct pieces: RE = RM(g) + L(g, ω) + RG(k), where RE is the scalar curvature on the bundle as a whole (obtained from the metric π*g+kω above), RM(g) is the scalar curvature on the base manifold M (the Lagrangian density of the Einstein–Hilbert action), L(g, ω) is the Lagrangian density for the Yang–Mills action, and RG(k) is the scalar curvature on each fibre (obtained from the fiber metric k, and constant, due to the Ad(G)-invariance of the metric k). The arguments indicate that RM(g) depends only on the metric g on the base manifold, not on ω or k; likewise, RG(k) depends only on k, and not on g or ω, and so on.
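Collecting the formulas above in display form (the tilde notation for the total metric is ours; everything else restates the text):

```latex
% Kaluza–Klein metric on the total space P and its scalar curvature split
\[
  \tilde{g} \;=\; \pi^{*}g + k_{\omega},
  \qquad
  (\pi^{*}g)(X,Y) = g(\pi_{*}X,\,\pi_{*}Y),
  \qquad
  (k_{\omega})(X,Y) = k\bigl(\omega(X),\,\omega(Y)\bigr),
\]
\[
  R_{E} \;=\; R_{M}(g) \;+\; L(g,\omega) \;+\; R_{G}(k).
\]
```

Since π*g vanishes on vertical vectors and kω vanishes on horizontal vectors, the sum is nondegenerate on the whole of TP = TV ⊕ TH.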
**Continuous q-Hermite polynomials** Continuous q-Hermite polynomials: In mathematics, the continuous q-Hermite polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. Roelof Koekoek, Peter A. Lesky, and René F. Swarttouw (2010, Chapter 14) give a detailed list of their properties. Definition: The polynomials are given in terms of basic hypergeometric functions by Hn(x∣q) = e^{inθ} 2φ0(q^{−n}, 0; —; q, q^n e^{−2iθ}), where x = cos θ. Recurrence and difference relations: They satisfy the recurrence relation 2xHn(x∣q)=Hn+1(x∣q)+(1−q^n)Hn−1(x∣q) with the initial conditions H0(x∣q)=1, H−1(x∣q)=0. From the above, one can easily calculate, for example, H4(x∣q) = 16x^4 − 4x^2(3−q−q^2−q^3) + (1−q−q^3+q^4).
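The recurrence determines all of the polynomials from H0 and H−1. A minimal Python sketch (our own helper, evaluating numerically rather than symbolically) cross-checks the expanded H4 quoted above:

```python
def continuous_q_hermite(n, x, q):
    """Evaluate H_n(x|q) numerically via the three-term recurrence
    2x H_n = H_{n+1} + (1 - q^n) H_{n-1},  with H_0 = 1, H_{-1} = 0."""
    h_prev, h = 0.0, 1.0  # H_{-1}, H_0
    for k in range(n):
        # One step of the recurrence: H_{k+1} = 2x H_k - (1 - q^k) H_{k-1}
        h_prev, h = h, 2 * x * h - (1 - q**k) * h_prev
    return h

# Cross-check H_4 against the expanded form quoted above at a sample point:
x, q = 0.3, 0.5
h4_closed = 16*x**4 - 4*x**2 * (3 - q - q**2 - q**3) + (1 - q - q**3 + q**4)
assert abs(continuous_q_hermite(4, x, q) - h4_closed) < 1e-12
```

Note that the k = 0 step produces H_1 = 2x automatically, since 1 − q⁰ = 0.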
**Aber Mawr Formation** Aber Mawr Formation: The Aber Mawr Formation is a geological formation in Wales. It preserves fossils dating back to the Arenig Series of the Ordovician period.
**Prepainted metal** Prepainted metal: According to EN 13523-0, a prepainted metal (or coil-coated metal) is a ‘metal on which a coating material (e.g. paint, film…) has been applied by coil coating’. When applied onto the metallic substrate, the coating material (in liquid, paste or powder form) forms a film possessing protective, decorative and/or other specific properties. In 40 years, European prepainted metal production has increased eighteen-fold. Metal: The choice of metallic substrate is determined by the dimensional, mechanical and corrosion-resistance properties required of the coated product in use. The most common metallic substrates that are organically coated are: hot dip galvanised steel (HDG), which consists of a cold reduced steel substrate onto which a layer of zinc is coated via a hot dip process to impart enhanced corrosion properties onto the base steel. Metal: Galvanized mild steel (GMS) can be used for balustrades, staircase handrails, pipes, etc. Other zinc-based alloys are coated onto steel and used as a substrate for coil coating, giving different properties; they give improved corrosion resistance in particular conditions. Electro-galvanised (EG) coated steel consists of a cold reduced substrate onto which a layer of zinc is coated by an electrolytic process. Other common substrates are cold reduced steel (CR) without any zinc coating, and wrought aluminium alloys. Many other substrates are organically coated: zinc/iron, stainless steel, tinplate, brass, zinc and copper. Coil coating: Coil coating is the continuous and highly automated industrial process for efficiently coating metal coils. Because the metal is treated before it is cut and formed, the entire surface is cleaned and treated, providing tightly-bonded finishes. (Formed parts can have many holes, recessed areas, valleys, and hidden areas that make it difficult to clean and uniformly paint.) 
Coil-coated metal (often called prepainted metal) is often considered more durable and more corrosion-resistant than most post-painted metal. Annually, 4.5 million tons of coil-coated steel and aluminum are produced and shipped in North America, and 5 million tons in Europe. In almost every five-year period since the early 1980s, the growth rate of coil-coated metal has exceeded the growth rates of steel and aluminum production. Coil coating: Process The definition of a coil coating process according to EN 10169:2010 is a ‘process in which an (organic) coating material is applied on rolled metal strip in a continuous process which includes cleaning, if necessary, and chemical pre-treatment of the metal surface and either one-side or two-side, single or multiple application of (liquid) paints or coating powders which are subsequently cured or/and laminating with permanent plastic films’. The metal substrate (steel or aluminum) is delivered in coil form from the rolling mills. Coil weights vary from 5-6 tons for aluminum up to about 25 tons for steel. The coil is positioned at the beginning of the line, then unwound at a constant speed, passing through the various pre-treatment and coating processes before being recoiled. Two strip accumulators at the beginning and the end of the line enable the work to be continuous, allowing new coils to be added (and finished coils removed) by a metal stitching process without slowing down or stopping the line. Coil coating: The coil coating line The continuous process of applying up to three separate coating layers onto one or both sides of a metal strip substrate occurs on a coil coating line. 
These lines vary greatly in size, with widths from 18 to 60 inches (46 to 152 cm) and speeds from 100 to 700 feet per minute (0.5 to 3.6 m/s); however, all coil-coating lines share the same basic process steps. A typical organic coil coating line consists of decoilers, entry strip accumulator, cleaning, chemical pretreatment, primer coat application, curing, final coat application, curing, exit accumulator and recoilers. Coil coating: The following steps take place on a modern coating line: mechanical stitching of the strip to its predecessor; cleaning the strip; power brushing; surface treatment by chemical conversion; drying the strip; application of primer on one or both sides; passage through the first curing oven (between 15 and 60 seconds); cooling the strip; coating the finish on one or both sides; passage through the second curing oven (between 15 and 60 seconds); cooling down to room temperature; and rewinding of the coated coil. Coatings Available coatings include polyesters, plastisols, polyurethanes, polyvinylidene fluorides (PVDF), epoxies, primers, backing coats and laminate films. For each product, the coating is built up in a number of layers. Coil coating: Primer coatings form the essential link between the pretreatment and the finish coating. Essentially, a primer is required to provide inter-coat adhesion between the pretreatment and the finish coat, and is also required to promote corrosion resistance in the total system. The composition of the primer will vary depending on the type of finish coat used. Primers require compatibility with various pretreatments and top coat paint systems; therefore, they usually comprise a mixture of resin systems to achieve this end. Coil coating: Backing coats are applied to the underside of the strip with or without a primer. The coating is generally not as thick as the finish coating used for exterior applications. Backing coats are generally not exposed to corrosive environments and not visible in the end application. 
Applications: Prepainted metal is used in a variety of products. It can be formed for many different applications, including those with T-bends, without loss of coating quality. Major industries use prepainted metal in products such as building panels, metal roofs, wall panels, garage doors, office furniture (desks, cubicle divider panels, file cabinets, and modular cabinets), home appliances (refrigerators, dishwashers, freezers, range hoods, microwave ovens, and washers and dryers), heating and air-conditioning outer panels and ductwork, commercial appliances, vending machines, foodservice equipment and cooking tins, beverage cans, and automotive panels and parts (fuel tanks, body panels, bumpers). The list continues to grow, with new industries making the switch from post-painted to prepainted processes each year. Some high-tech, complex coatings are applied with the coil coating process. Coatings for cool metal roofing materials, smog-eating building panels, antimicrobial products, anti-corrosive metal parts, and solar panels use this process. Pretreatments and coatings can be applied with the coil coating process in very precise, thin, uniform layers, which makes some complex coatings feasible and more cost-effective. Applications: The largest market for prepainted metal is in both commercial and residential construction. It is chosen for its quality, low cost, design flexibility, and environmentally beneficial properties. Using prepainted metal can contribute credit toward LEED certification for sustainable design. A wide array of color options is available with prepainted metal, including vibrant colors for modern designs and natural weathered finishes for rustic expressions. Prepainted metal can also be formed, almost like plastic, into fluid shapes. This flexibility allows architects to achieve unique, expressive designs using metal. The output of the coil coating industry is a prepainted metal strip. 
This has numerous applications in various industries, including in: The construction industry for both indoor and outdoor applications; The automotive and transport industries; The production of white goods including washing machines; Cabinets for electronic goods; Office furniture; Lighting envelopes; Bakeware. History: In the old days of traditional manufacturing, steel and other metals arrived at factories in an untreated and unpainted state. Companies would fabricate and paint or treat the metal components of their product before assembly. This was costly, time-consuming, and environmentally harmful. The coil coating process was pioneered in the 1930s for painting, coating and pre-treating large coils of metals before they arrived at a manufacturing facility. The venetian blind industry was the first to utilize pre-painted metal.
**Lofting (bowling)** Lofting (bowling): Lofting (by a bowler) in bowling is throwing a bowling ball a short or long distance down the lane. This is usually done with the bounce-pass technique, but can also be done with a straight ball. Lofting is sometimes discouraged by the bowling community and bowling alley employees because it can supposedly damage the ball and lanes. However, this concern is largely unfounded: lofting will almost never cause major damage to a ball, nor will it damage synthetic lanes. Many bowling alleys that use wooden lanes will either post signs telling bowlers not to loft, or have an employee ask them not to, because wooden lanes can be dented by a lofted ball. In some bowling alleys, lofting the ball before the arrows is not against the rules. Some professional bowlers do loft a considerable amount under certain lane conditions. Crankers and other high-rev players may be forced to loft under dry conditions in order to delay the ball's reaction and prevent it from over-hooking. Lofting over the gutter is known as "lofting the gutter cap," and is sometimes done when a bowler has to hook the whole lane on a very broken-down oil pattern. It is common for this to happen in qualifying rounds at the US Open. Lofting (bowling): In the sport of candlepin bowling, "lofting" a ball beyond a lob line situated ten feet (3.05 m) down the lane from the main foul line, without it touching the lane anywhere on the bowler's side of it, is called a lob and is considered a ball foul, resulting in no counted pinfall from a ball delivered in such a manner; the ball must first touch the lanebed on the bowler's side of the lob line to be considered a legal delivery.
**Volition (psychology)** Volition (psychology): Volition, also known as will or conation, is the cognitive process by which an individual decides on and commits to a particular course of action. It is defined as purposive striving and is one of the primary human psychological functions. Others include affect (feeling or emotion), motivation (goals and expectations), and cognition (thinking). Volitional processes can be applied consciously, or they can be automatized as habits over time. Volition (psychology): Most modern conceptions of volition address it as a process of conscious action control which becomes automatized (e.g. see Heckhausen and Kuhl; Gollwitzer; Boekaerts and Corno). Overview: Many researchers treat volition and willpower as scientific and colloquial terms (respectively) for the same process. When a person makes up their mind to do a thing, that state is termed 'immanent volition'. When we put forth any particular act of choice, that act is called an emanant, executive, or imperative volition. When an immanent or settled state of choice controls or governs a series of actions, that state is termed predominant volition. Subordinate volitions are particular acts of choice which carry into effect the object sought by the governing or predominant volition. Overview: According to Gary Kielhofner's "Model of Human Occupation", volition is one of the three sub-systems that act on human behavior. Within this model, volition refers to a person's values, interests and self-efficacy (personal causation) regarding personal performance. Kurt Lewin argued that motivation and volition are one and the same, in contrast to the nineteenth-century psychologist Narziß Ach. Ach proposed that there is a certain threshold of desire that distinguishes motivation from volition: when desire lies below this threshold, it is motivation, and when it crosses over, it becomes volition. 
In the book A Bias for Action, Heike Bruch and Sumantra Ghoshal also differentiate volition (willpower) from motivation. Using this model, they propose assessing individuals' differing levels of commitment to tasks by measuring them on a scale of intent running from motivation (an emotion) to volition (a decision). Discussions of impulse control (e.g., Kuhl and Heckhausen) and education (e.g., Corno) also make the motivation-volition distinction. Corno's model ties volition to the processes of self-regulated learning.
**DNS Long-Lived Queries** DNS Long-Lived Queries: DNS Long-Lived Queries (DNS LLQ) is a mechanism that allows DNS clients to learn about changes to DNS data without polling. DNS LLQ is currently used by Apple Inc.'s Back To My Mac (BTMM) service to track changes in the IP addresses of BTMM servers and clients. DNS LLQ has also been proposed as a solution for doing DNS-Based Service Discovery (DNS-SD) on routed networks, using long-lived TCP/IP connections. DNS Long-Lived Queries: DNS-SD is a mechanism for identifying services on the local network. DNS-SD is typically used to present names of services (for example, printers or file servers) in user interfaces. DNS LLQ can be used with DNS-SD to allow new services to appear automatically in an active user interface without requiring frequent polling. DNS LLQ is being proposed in the IETF DNSSD working group as one option for providing service discovery on these routed networks. Although DNS LLQ over TCP/IP has not been standardized, it is in use in Apple Inc.'s current mDNS implementation. DNS LLQ is initiated by a client resolver that wishes to track one or more names in the DNS. The client resolver sends a registration message to a caching server, or to the authoritative server for the zone containing the name or names to be tracked. The query includes a lease; the tracking persists for the duration of the lease. If tracking is desired after the lease expires, the client resolver sends a new registration. The registration message includes a list of one or more queries. The server immediately returns the answers it has for these queries. For the duration of the lease, whenever the information covered by any of the queries changes, the server sends a "gratuitous response" containing new answers. Before the queries are answered and the lease recorded, the server and client perform a challenge/response exchange to validate the registration.
Gratuitous answers are acknowledged by the client, and retransmitted if not acknowledged. After several tries, the server holding the registration will assume that the client resolver is no longer available, and will delete the registration.
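The lease and retransmission bookkeeping described above can be sketched as follows. This is an illustrative model only: the names (`LLQClient`, `LEASE_SECONDS`, `MAX_RETRANSMITS`) are hypothetical and not part of any real DNS library, and the challenge/response exchange and wire format are elided.

```python
import time

# Hypothetical sketch of DNS-LLQ lease bookkeeping; all names and
# values are illustrative assumptions, not from any specification.

LEASE_SECONDS = 7200     # lease duration requested at registration
MAX_RETRANSMITS = 3      # server gives up on a client after this many tries

class LLQClient:
    def __init__(self, queries, now=time.time):
        self.queries = queries       # DNS names/types to track
        self.now = now               # injectable clock for testing
        self.lease_expiry = None

    def register(self):
        """Send a registration (challenge/response elided); record the lease."""
        self.lease_expiry = self.now() + LEASE_SECONDS
        return {"queries": self.queries, "lease": LEASE_SECONDS}

    def needs_renewal(self):
        """True once the lease has expired and a new registration is needed."""
        return self.lease_expiry is None or self.now() >= self.lease_expiry

def server_should_drop(unacked_sends):
    """Server deletes the registration after several unacknowledged answers."""
    return unacked_sends > MAX_RETRANSMITS
```

A client would call `register()` again whenever `needs_renewal()` becomes true, mirroring the re-registration behavior described above.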
**Passion fruit mousse** Passion fruit mousse: Passion fruit mousse (Portuguese: mousse de maracujá, sometimes spelled musse) is a passion fruit-flavored variation of mousse from Brazilian cuisine. It is usually less aerated than traditional mousses. Recipes vary, but it is usually prepared using gelatin, egg whites, condensed milk and concentrated passion fruit juice. Ingredients often also include cream, either during preparation or served alongside the prepared mousse; sugar is sometimes used as well. History: In the early 1960s, the Brazilian branch of Nestlé was concerned with declining sales of Leite Moça, its condensed milk brand. Starting in 1961, Nestlé organized culinary courses and worked with cooking schools to promote Leite Moça. The effort was very successful: Nestlé saw an increase in usage, from less than 10% of recipes in courses incorporating Leite Moça in 1961 to over 70% of courses using it in some way in 1964. This affected Brazilian cuisine, especially desserts: recipes that had once been made very similarly to their European counterparts were adapted and modified to incorporate condensed milk. In the case of the mousse, it became thicker and less aerated, with reduced cooking time. Not long after, passion fruit mousse recipes started to appear; examples can be found as early as 1962.
**Anterior meniscofemoral ligament** Anterior meniscofemoral ligament: The anterior meniscofemoral ligament (ligament of Humphry) is a small fibrous band of the knee joint. It arises from the posterior horn of the lateral meniscus and passes superiorly and medially in front of the posterior cruciate ligament to attach to the lateral surface of the medial condyle of the femur. Anterior meniscofemoral ligament: The anterior meniscofemoral ligament is found in 11.8% of subjects on MRI scans of the knee. It may be confused with the posterior cruciate ligament during arthroscopy. In this situation, a tug on the ligament while observing for motion of the lateral meniscus can be used to tell the two apart. The anterior meniscofemoral ligament, together with the posterior meniscofemoral ligament, the meniscotibial ligament, and the popliteomeniscal fascicles, stabilises the posterolateral part of the lateral meniscus.
**Simple Addition** Simple Addition: Simple Addition or Totals is a family of patience or card solitaire games that share certain aims and procedures. Composition: Moyse counts the games of Elevens, Fifteens, Tens and Thirteens as part of the Simple Addition family. Parlett adds Baroness, Block Eleven and Block Ten, Decade, Haden, Nines, Seven Up or Seventh Wonder, Pyramid or Pile of Twenty-Eight, Fourteens, and Eighteens or Ferris Wheel, Grand Round or Wheel. Simple Addition sometimes also refers specifically to the game of Thirteens.
**Transistor** Transistor: A transistor is a semiconductor device used to amplify or switch electrical signals and power. It is one of the basic building blocks of modern electronics. It is composed of semiconductor material, usually with at least three terminals for connection to an electronic circuit. A voltage or current applied to one pair of the transistor's terminals controls the current through another pair of terminals. Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal. Some transistors are packaged individually, but many more in miniature form are found embedded in integrated circuits. Because transistors are the key active components in practically all modern electronics, many people consider them one of the 20th century's greatest inventions. Physicist Julius Edgar Lilienfeld proposed the concept of a field-effect transistor in 1926, but it was not possible to construct a working device at that time. The first working device was a point-contact transistor invented in 1947 by physicists John Bardeen, Walter Brattain, and William Shockley at Bell Labs; the three shared the 1956 Nobel Prize in Physics for their achievement. The most widely used type of transistor is the metal–oxide–semiconductor field-effect transistor (MOSFET), invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. Transistors revolutionized the field of electronics and paved the way for smaller and cheaper radios, calculators, computers, and other electronic devices. Transistor: Most transistors are made from very pure silicon, and some from germanium, but certain other semiconductor materials are sometimes used. A transistor may have only one kind of charge carrier, as in a field-effect transistor, or two kinds of charge carriers, as in bipolar junction transistor devices. Compared with the vacuum tube, transistors are generally smaller and require less power to operate.
Certain vacuum tubes have advantages over transistors at very high operating frequencies or high operating voltages. Many types of transistors are made to standardized specifications by multiple manufacturers. History: The thermionic triode, a vacuum tube invented in 1907, enabled amplified radio technology and long-distance telephony. The triode, however, was a fragile device that consumed a substantial amount of power. In 1909, physicist William Eccles discovered the crystal diode oscillator. Physicist Julius Edgar Lilienfeld filed a patent for a field-effect transistor (FET) in Canada in 1925, intended as a solid-state replacement for the triode. He filed identical patents in the United States in 1926 and 1928. However, he did not publish any research articles about his devices nor did his patents cite any specific examples of a working prototype. Because the production of high-quality semiconductor materials was still decades away, Lilienfeld's solid-state amplifier ideas would not have found practical use in the 1920s and 1930s, even if such a device had been built. In 1934, inventor Oskar Heil patented a similar device in Europe. History: Bipolar transistors From November 17 to December 23, 1947, John Bardeen and Walter Brattain at AT&T's Bell Labs in Murray Hill, New Jersey performed experiments and observed that when two gold point contacts were applied to a crystal of germanium, a signal was produced with the output power greater than the input. Solid State Physics Group leader William Shockley saw the potential in this, and over the next few months worked to greatly expand the knowledge of semiconductors. The term transistor was coined by John R. Pierce as a contraction of the term transresistance. According to Lillian Hoddeson and Vicki Daitch, Shockley proposed that Bell Labs' first patent for a transistor should be based on the field-effect and that he be named as the inventor. 
Having unearthed Lilienfeld's patents, which had gone into obscurity years earlier, lawyers at Bell Labs advised against Shockley's proposal because the idea of a field-effect transistor that used an electric field as a "grid" was not new. Instead, what Bardeen, Brattain, and Shockley invented in 1947 was the first point-contact transistor. To acknowledge this accomplishment, Shockley, Bardeen and Brattain jointly received the 1956 Nobel Prize in Physics "for their researches on semiconductors and their discovery of the transistor effect". Shockley's team initially attempted to build a field-effect transistor (FET) by trying to modulate the conductivity of a semiconductor, but was unsuccessful, mainly because of problems with surface states, the dangling bond, and the germanium and copper compound materials. Trying to understand the mysterious reasons behind this failure led them instead to invent the bipolar point-contact and junction transistors. In 1948, the point-contact transistor was independently invented by physicists Herbert Mataré and Heinrich Welker while working at the Compagnie des Freins et Signaux Westinghouse, a Westinghouse subsidiary in Paris. Mataré had previous experience developing crystal rectifiers from silicon and germanium in the German radar effort during World War II. With this knowledge, he began researching the phenomenon of "interference" in 1947. By June 1948, witnessing currents flowing through point-contacts, he produced consistent results using samples of germanium produced by Welker, similar to what Bardeen and Brattain had accomplished earlier in December 1947.
Realizing that Bell Labs' scientists had already invented the transistor, the company rushed to get its "transistron" into production for amplified use in France's telephone network, filing its first transistor patent application on August 13, 1948. The first bipolar junction transistors were invented by Bell Labs' William Shockley, who applied for a patent (2,569,347) on June 26, 1948. On April 12, 1950, Bell Labs chemists Gordon Teal and Morgan Sparks successfully produced a working bipolar NPN junction amplifying germanium transistor. Bell announced the discovery of this new "sandwich" transistor in a press release on July 4, 1951. The first high-frequency transistor was the surface-barrier germanium transistor developed by Philco in 1953, capable of operating at frequencies up to 60 MHz. They were made by etching depressions into an n-type germanium base from both sides with jets of indium(III) sulfate until it was a few ten-thousandths of an inch thick. Indium electroplated into the depressions formed the collector and emitter. AT&T first used transistors in telecommunications equipment in the No. 4A Toll Crossbar Switching System in 1953, for selecting trunk circuits from routing information encoded on translator cards. Its predecessor, the Western Electric No. 3A phototransistor, read the mechanical encoding from punched metal cards. History: The first prototype pocket transistor radio was shown by INTERMETALL, a company founded by Herbert Mataré in 1952, at the Internationale Funkausstellung Düsseldorf from August 29 to September 6, 1953. The first production-model pocket transistor radio was the Regency TR-1, released in October 1954. Produced as a joint venture between the Regency Division of Industrial Development Engineering Associates (I.D.E.A.) and Texas Instruments of Dallas, Texas, the TR-1 was manufactured in Indianapolis, Indiana. It was a near pocket-sized radio with four transistors and one germanium diode.
The industrial design was outsourced to the Chicago firm of Painter, Teague and Petertil. It was initially released in one of six colours: black, ivory, mandarin red, cloud grey, mahogany and olive green. Other colours shortly followed. The first production all-transistor car radio was developed by the Chrysler and Philco corporations and was announced in the April 28, 1955 edition of the Wall Street Journal. Chrysler made the Mopar model 914HR available as an option starting in fall 1955 for its new line of 1956 Chrysler and Imperial cars, which reached dealership showrooms on October 21, 1955. The Sony TR-63, released in 1957, was the first mass-produced transistor radio, leading to the widespread adoption of transistor radios. Seven million TR-63s were sold worldwide by the mid-1960s. Sony's success with transistor radios led to transistors replacing vacuum tubes as the dominant electronic technology in the late 1950s. The first working silicon transistor was developed at Bell Labs on January 26, 1954, by Morris Tanenbaum. The first commercial production silicon transistor was announced by Texas Instruments in May 1954. This was the work of Gordon Teal, an expert in growing crystals of high purity, who had previously worked at Bell Labs. History: Field-effect transistors The basic principle of the field-effect transistor (FET) was first proposed by physicist Julius Edgar Lilienfeld when he filed a patent for a device similar to the MESFET in 1926, and for an insulated-gate field-effect transistor in 1928. The FET concept was later also theorized by engineer Oskar Heil in the 1930s and by William Shockley in the 1940s. History: In 1945, the JFET was patented by Heinrich Welker. Following Shockley's theoretical treatment of the JFET in 1952, a working practical JFET was made in 1953 by George C. Dacey and Ian M. Ross. In 1948, Bardeen patented the progenitor of the MOSFET, an insulated-gate FET (IGFET) with an inversion layer.
Bardeen's patent, and the concept of an inversion layer, form the basis of CMOS technology today. History: MOSFET (MOS transistor) In the early years of the semiconductor industry, companies focused on the junction transistor, a relatively bulky device that was difficult to mass-produce, limiting it to several specialized applications. Field-effect transistors (FETs) were theorized as potential alternatives, but researchers could not get them to work properly, largely because of the surface state barrier that prevented the external electric field from penetrating the material. In 1957, Bell Labs engineer Mohamed Atalla proposed a new method of semiconductor device fabrication: coating a silicon wafer with an insulating layer of silicon oxide so electricity could overcome the surface state and reliably penetrate to the semiconducting silicon below. The process, known as surface passivation, became critical to the semiconductor industry, as it enabled the mass production of silicon integrated circuits. Building on this method, he developed the metal–oxide–semiconductor (MOS) process and proposed that it could be used to build the first working silicon FET. Atalla and his Korean colleague Dawon Kahng developed the metal–oxide–semiconductor field-effect transistor (MOSFET), or MOS transistor, in 1959, the first transistor that could be miniaturized and mass-produced for a wide range of uses. In a self-aligned CMOS process, a transistor is formed wherever the gate layer (polysilicon or metal) crosses a diffusion layer. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits, allowing the integration of more than 10,000 transistors in a single IC. CMOS (complementary MOS) was invented by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963.
The first report of a floating-gate MOSFET was made by Dawon Kahng and Simon Sze in 1967. A double-gate MOSFET was first demonstrated in 1984 by Electrotechnical Laboratory researchers Toshihiro Sekigawa and Yutaka Hayashi. The FinFET (fin field-effect transistor), a type of 3D non-planar multi-gate MOSFET, originated from the research of Digh Hisamoto and his team at Hitachi Central Research Laboratory in 1989. Importance: Because transistors are the key active components in practically all modern electronics, many people consider them one of the 20th century's greatest inventions. The invention of the first transistor at Bell Labs was named an IEEE Milestone in 2009. Other Milestones include the inventions of the junction transistor in 1948 and the MOSFET in 1959. The MOSFET is by far the most widely used transistor, in applications ranging from computers and electronics to communications technology such as smartphones. It has been considered the most important transistor, possibly the most important invention in electronics, and the device that enabled modern electronics. It has been the basis of modern digital electronics since the late 20th century, paving the way for the digital age. The US Patent and Trademark Office calls it a "groundbreaking invention that transformed life and culture around the world". Its ability to be mass-produced by a highly automated process (semiconductor device fabrication), from relatively basic materials, allows astonishingly low per-transistor costs. MOSFETs are the most numerously produced artificial objects in history, with more than 13 sextillion manufactured by 2018. Although several companies each produce over a billion individually packaged (known as discrete) MOS transistors every year, the vast majority are produced in integrated circuits (also known as ICs, microchips, or simply chips), along with diodes, resistors, capacitors and other electronic components, to produce complete electronic circuits.
A logic gate consists of up to about 20 transistors, whereas an advanced microprocessor, as of 2022, may contain as many as 57 billion MOSFETs. The transistor's low cost, flexibility and reliability have made it ubiquitous. Transistorized mechatronic circuits have replaced electromechanical devices in controlling appliances and machinery. It is often easier and cheaper to use a standard microcontroller and write a computer program to carry out a control function than to design an equivalent mechanical system. Simplified operation: A transistor can use a small signal applied between one pair of its terminals to control a much larger signal at another pair of terminals, a property called gain. It can produce a stronger output signal, a voltage or current, proportional to a weaker input signal, acting as an amplifier. It can also be used as an electrically controlled switch, where the amount of current is determined by other circuit elements. There are two types of transistors, with slight differences in how they are used: A bipolar junction transistor (BJT) has terminals labeled base, collector and emitter. A small current at the base terminal, flowing between the base and the emitter, can control or switch a much larger current between the collector and emitter. A field-effect transistor (FET) has terminals labeled gate, source and drain. A voltage at the gate can control a current between source and drain. The image represents a typical bipolar transistor in a circuit. A charge flows between the emitter and collector terminals depending on the current in the base. Because the base and emitter connections behave like a semiconductor diode, a voltage drop develops between them. The amount of this drop, determined by the transistor's material, is referred to as VBE.
Simplified operation: Transistor as a switch Transistors are commonly used in digital circuits as electronic switches which can be either in an on or off state, both for high-power applications such as switched-mode power supplies and for low-power applications such as logic gates. Important parameters for this application include the current switched, the voltage handled, and the switching speed, characterized by the rise and fall times. In a switching circuit, the goal is to simulate, as nearly as possible, the ideal switch, having the properties of an open circuit when off, a short circuit when on, and an instantaneous transition between the two states. Parameters are chosen such that the "off" output is limited to leakage currents too small to affect connected circuitry, the resistance of the transistor in the "on" state is too small to affect circuitry, and the transition between the two states is fast enough not to have a detrimental effect. In a grounded-emitter transistor circuit, such as the light-switch circuit shown, as the base voltage rises, the emitter and collector currents rise exponentially. The collector voltage drops because of reduced resistance from the collector to the emitter. If the voltage difference between the collector and emitter were zero (or near zero), the collector current would be limited only by the load resistance (light bulb) and the supply voltage. This is called saturation because current is flowing from collector to emitter freely. When saturated, the switch is said to be on. The use of bipolar transistors for switching applications requires biasing the transistor so that it operates between its cut-off region in the off state and the saturation region (on). This requires sufficient base drive current. As the transistor provides current gain, it facilitates the switching of a relatively large current in the collector by a much smaller current into the base terminal.
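As a rough numerical illustration of the base-drive sizing just described, the sketch below computes a base resistor from an assumed supply voltage, base-emitter drop, target collector current, and beta. All component values and the overdrive factor are illustrative assumptions, not figures from the text.

```python
# Minimal sketch of sizing a base resistor for a saturated BJT switch.
# Every numeric value here is an illustrative assumption.

V_SUPPLY = 5.0      # voltage driving the base resistor (V)
V_BE = 0.7          # typical silicon base-emitter drop (V)
I_C = 0.1           # collector current needed by the load, e.g. a lamp (A)
BETA = 100          # current gain (hFE) of the transistor
OVERDRIVE = 5       # extra base drive to guarantee hard saturation

def base_resistor(v_supply, v_be, i_c, beta, overdrive=OVERDRIVE):
    """R_B = (V_supply - V_BE) / I_B, with I_B = overdrive * I_C / beta."""
    i_b = overdrive * i_c / beta        # base current with saturation margin
    return (v_supply - v_be) / i_b

r_b = base_resistor(V_SUPPLY, V_BE, I_C, BETA)
# (5.0 - 0.7) / (5 * 0.1 / 100) ≈ 860 ohms
```

In practice a nearby standard resistor value would be chosen; the overdrive factor reflects that beta varies with collector current and temperature.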
The ratio of these currents varies depending on the type of transistor, and even for a particular type, varies depending on the collector current. In the example of a light-switch circuit, as shown, the resistor is chosen to provide enough base current to ensure the transistor is saturated. The base resistor value is calculated from the supply voltage, the transistor's C-E junction voltage drop, the collector current, and the amplification factor beta. Simplified operation: Transistor as an amplifier The common-emitter amplifier is designed so that a small change in voltage (Vin) changes the small current through the base of the transistor, whose current amplification, combined with the properties of the circuit, means that small swings in Vin produce large changes in Vout. Various configurations of single-transistor amplifiers are possible, with some providing current gain, some voltage gain, and some both. Simplified operation: From mobile phones to televisions, vast numbers of products include amplifiers for sound reproduction, radio transmission, and signal processing. The first discrete-transistor audio amplifiers barely supplied a few hundred milliwatts, but power and audio fidelity gradually increased as better transistors became available and amplifier architecture evolved. Modern transistor audio amplifiers of up to a few hundred watts are common and relatively inexpensive. Comparison with vacuum tubes: Before transistors were developed, vacuum (electron) tubes (or in the UK "thermionic valves" or just "valves") were the main active components in electronic equipment. Advantages The key advantages that have allowed transistors to replace vacuum tubes in most applications are: no cathode heater (which produces the characteristic orange glow of tubes), reducing power consumption, eliminating delay as tube heaters warm up, and immunity from cathode poisoning and depletion; very small size and weight, reducing equipment size.
Large numbers of extremely small transistors can be manufactured as a single integrated circuit. Low operating voltages are compatible with batteries of only a few cells. Circuits with greater energy efficiency are usually possible: for low-power applications (for example, voltage amplification) in particular, energy consumption can be very much less than for tubes. Complementary devices are available, providing design flexibility, including complementary-symmetry circuits, not possible with vacuum tubes. Very low sensitivity to mechanical shock and vibration provides physical ruggedness and virtually eliminates shock-induced spurious signals (for example, microphonics in audio applications). Transistors are not susceptible to breakage of a glass envelope, leakage, outgassing, and other physical damage. Comparison with vacuum tubes: Limitations Transistors may have the following limitations: they lack the higher electron mobility afforded by the vacuum of vacuum tubes, which is desirable for high-power, high-frequency operation, such as that used in some over-the-air television transmitters and in travelling-wave tubes used as amplifiers in some satellites. Transistors and other solid-state devices are susceptible to damage from very brief electrical and thermal events, including electrostatic discharge in handling; vacuum tubes are electrically much more rugged. Comparison with vacuum tubes: They are sensitive to radiation and cosmic rays (special radiation-hardened chips are used for spacecraft devices). In audio applications, transistors lack the lower-harmonic distortion, the so-called tube sound, which is characteristic of vacuum tubes and preferred by some. Types: Classification Transistors are categorized by Structure: MOSFET (IGFET), BJT, JFET, insulated-gate bipolar transistor (IGBT), other types. Semiconductor material (dopants): the metalloids germanium (first used in 1947) and silicon (first used in 1954), in amorphous, polycrystalline and monocrystalline form.
The compounds gallium arsenide (1966) and silicon carbide (1997), the alloy silicon-germanium (1989), the allotrope of carbon graphene (research ongoing since 2004), etc. (see Semiconductor material). Electrical polarity (positive and negative): NPN, PNP (BJTs); N-channel, P-channel (FETs). Maximum power rating: low, medium, high. Maximum operating frequency: low, medium, high, radio (RF), microwave frequency (the maximum effective frequency of a transistor in a common-emitter or common-source circuit is denoted by the term fT, an abbreviation for transition frequency, the frequency at which the transistor yields unity voltage gain). Application: switch, general purpose, audio, high voltage, super-beta, matched pair. Physical packaging: through-hole metal, through-hole plastic, surface mount, ball grid array, power modules (see Packaging). Amplification factor hFE, βF (transistor beta) or gm (transconductance). Types: Working temperature: extreme-temperature transistors and traditional-temperature transistors (−55 to 150 °C (−67 to 302 °F)). Extreme-temperature transistors include high-temperature transistors (above 150 °C (302 °F)) and low-temperature transistors (below −55 °C (−67 °F)). High-temperature transistors that remain thermally stable up to 250 °C (482 °F) can be developed by a general strategy of blending interpenetrating semi-crystalline conjugated polymers and high glass-transition-temperature insulating polymers. Hence, a particular transistor may be described as silicon, surface-mount, BJT, NPN, low-power, high-frequency switch. Types: Mnemonics A convenient mnemonic for remembering the type of transistor (represented by an electrical symbol) involves the direction of the arrow. For the BJT, on an n-p-n transistor symbol, the arrow will "Not Point iN". On a p-n-p transistor symbol, the arrow "Points iN Proudly". This, however, does not apply to MOSFET-based transistor symbols, as the arrow is typically reversed (i.e.
the arrow for the n-p-n points inside). Types: Field-effect transistor (FET) The field-effect transistor, sometimes called a unipolar transistor, uses either electrons (in an n-channel FET) or holes (in a p-channel FET) for conduction. The four terminals of the FET are named source, gate, drain, and body (substrate). On most FETs, the body is connected to the source inside the package, and this will be assumed for the following description. Types: In a FET, the drain-to-source current flows via a conducting channel that connects the source region to the drain region. The conductivity is varied by the electric field that is produced when a voltage is applied between the gate and source terminals; hence the current flowing between the drain and source is controlled by the voltage applied between the gate and source. As the gate–source voltage (VGS) is increased, the drain–source current (IDS) increases exponentially for VGS below threshold, and then at a roughly quadratic rate (IDS ∝ (VGS − VT)², where VT is the threshold voltage at which drain current begins) in the "space-charge-limited" region above threshold. A quadratic behavior is not observed in modern devices, for example, at the 65 nm technology node. For low noise at narrow bandwidth, the higher input resistance of the FET is advantageous. Types: FETs are divided into two families: junction FET (JFET) and insulated-gate FET (IGFET). The IGFET is more commonly known as a metal–oxide–semiconductor FET (MOSFET), reflecting its original construction from layers of metal (the gate), oxide (the insulation), and semiconductor. Unlike IGFETs, the JFET gate forms a p–n diode with the channel, which lies between the source and drain. Functionally, this makes the n-channel JFET the solid-state equivalent of the vacuum-tube triode which, similarly, forms a diode between its grid and cathode.
Also, both devices operate in the depletion-mode, they both have a high input impedance, and they both conduct current under the control of an input voltage. Types: Metal–semiconductor FETs (MESFETs) are JFETs in which the reverse biased p–n junction is replaced by a metal–semiconductor junction. These, and the HEMTs (high-electron-mobility transistors, or HFETs), in which a two-dimensional electron gas with very high carrier mobility is used for charge transport, are especially suitable for use at very high frequencies (several GHz). Types: FETs are further divided into depletion-mode and enhancement-mode types, depending on whether the channel is turned on or off with zero gate-to-source voltage. For enhancement mode, the channel is off at zero bias, and a gate potential can "enhance" the conduction. For the depletion mode, the channel is on at zero bias, and a gate potential (of the opposite polarity) can "deplete" the channel, reducing conduction. For either mode, a more positive gate voltage corresponds to a higher current for n-channel devices and a lower current for p-channel devices. Nearly all JFETs are depletion-mode because the diode junctions would forward bias and conduct if they were enhancement-mode devices, while most IGFETs are enhancement-mode types. Types: Metal–oxide–semiconductor FET (MOSFET) The metal–oxide–semiconductor field-effect transistor (MOSFET, MOS-FET, or MOS FET), also known as the metal–oxide–silicon transistor (MOS transistor, or MOS), is a type of field-effect transistor that is fabricated by the controlled oxidation of a semiconductor, typically silicon. It has an insulated gate, whose voltage determines the conductivity of the device. This ability to change conductivity with the amount of applied voltage can be used for amplifying or switching electronic signals. The MOSFET is by far the most common transistor, and the basic building block of most modern electronics. 
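The above-threshold square-law relation mentioned earlier (IDS ∝ (VGS − VT)²) can be sketched numerically. This is the simplified long-channel model only, with subthreshold conduction ignored; the threshold voltage and device constant below are illustrative assumptions.

```python
# Simplified long-channel ("square-law") MOSFET model; vt and k are
# illustrative values, not parameters of any particular device.

def mosfet_ids(vgs, vt=0.7, k=2e-3):
    """IDS = k * (VGS - VT)^2 above threshold, zero below
    (subthreshold conduction ignored)."""
    if vgs <= vt:
        return 0.0
    return k * (vgs - vt) ** 2

# Doubling the overdrive voltage (VGS - VT) quadruples the drain current:
i1 = mosfet_ids(1.7)   # overdrive 1.0 V
i2 = mosfet_ids(2.7)   # overdrive 2.0 V
```

As the text notes, modern short-channel devices deviate from this quadratic behavior, so the model is only a first-order teaching approximation.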
The MOSFET accounts for 99.9% of all transistors in the world. Types: Bipolar junction transistor (BJT): Bipolar transistors are so named because they conduct by using both majority and minority carriers. The bipolar junction transistor, the first type of transistor to be mass-produced, is a combination of two junction diodes and is formed of either a thin layer of p-type semiconductor sandwiched between two n-type semiconductors (an n–p–n transistor), or a thin layer of n-type semiconductor sandwiched between two p-type semiconductors (a p–n–p transistor). This construction produces two p–n junctions: a base-emitter junction and a base-collector junction, separated by a thin region of semiconductor known as the base region. (Two junction diodes wired together without sharing an intervening semiconducting region will not make a transistor.) Types: BJTs have three terminals, corresponding to the three layers of semiconductor: an emitter, a base, and a collector. They are useful in amplifiers because the currents at the emitter and collector are controllable by a relatively small base current. In an n–p–n transistor operating in the active region, the emitter-base junction is forward biased (electrons and holes recombine at the junction), the base-collector junction is reverse biased (electrons and holes are formed at, and move away from, the junction), and electrons are injected into the base region. Because the base is narrow, most of these electrons will diffuse into the reverse-biased base-collector junction and be swept into the collector; perhaps one-hundredth of the electrons will recombine in the base, which is the dominant mechanism in the base current. Also, because the base is lightly doped (in comparison to the emitter and collector regions), recombination rates are low, permitting more carriers to diffuse across the base region.
By controlling the number of electrons that can leave the base, the number of electrons entering the collector can be controlled. Collector current is approximately β (common-emitter current gain) times the base current. It is typically greater than 100 for small-signal transistors but can be smaller in transistors designed for high-power applications. Types: Unlike the field-effect transistor (see above), the BJT is a low-input-impedance device. Also, as the base-emitter voltage (VBE) is increased, the base-emitter current and hence the collector-emitter current (ICE) increase exponentially according to the Shockley diode model and the Ebers–Moll model. Because of this exponential relationship, the BJT has a higher transconductance than the FET. Bipolar transistors can be made to conduct by exposure to light because the absorption of photons in the base region generates a photocurrent that acts as a base current; the collector current is approximately β times the photocurrent. Devices designed for this purpose have a transparent window in the package and are called phototransistors. Types: Usage of MOSFETs and BJTs: The MOSFET is by far the most widely used transistor for both digital and analog circuits, accounting for 99.9% of all transistors in the world. The bipolar junction transistor (BJT) was previously the most commonly used transistor during the 1950s and 1960s. Even after MOSFETs became widely available in the 1970s, the BJT remained the transistor of choice for many analog circuits such as amplifiers because of their greater linearity, until MOSFET devices (such as power MOSFETs, LDMOS and RF CMOS) replaced them for most power electronic applications in the 1980s. In integrated circuits, the desirable properties of MOSFETs allowed them to capture nearly all market share for digital circuits in the 1970s.
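The two relationships described above (the current gain in the active region, and the exponential dependence of collector current on VBE) can be sketched numerically. The gain, saturation current, and thermal voltage below are illustrative values, not taken from any datasheet:

```python
import math

def collector_current_from_base(i_b, beta=150.0):
    """Active-region approximation: IC = beta * IB."""
    return beta * i_b

def collector_current_from_vbe(v_be, i_sat=1e-14, v_t=0.02585):
    """Shockley-style exponential: IC ~ I_sat * exp(VBE / VT).

    i_sat (saturation current) and v_t (thermal voltage near room
    temperature) are hypothetical illustrative values.
    """
    return i_sat * math.exp(v_be / v_t)

# A 10 uA base current gives about 1.5 mA of collector current at beta = 150,
# and raising VBE by ~60 mV multiplies the exponential-model current ~10x.
print(collector_current_from_base(10e-6))
print(collector_current_from_vbe(0.66) / collector_current_from_vbe(0.60))
```

The steep exponential slope of the second function is what gives the BJT its higher transconductance compared with the FET's square-law characteristic.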
Discrete MOSFETs (typically power MOSFETs) can be applied in transistor applications, including analog circuits, voltage regulators, amplifiers, power transmitters, and motor drivers. Types: Other transistor types. Field-effect transistor (FET):
- Metal–oxide–semiconductor field-effect transistor (MOSFET), where the gate is insulated by a shallow layer of insulator
  - p-type MOS (PMOS)
  - n-type MOS (NMOS)
  - complementary MOS (CMOS)
    - RF CMOS, for power electronics
- Multi-gate field-effect transistor (MuGFET)
  - Fin field-effect transistor (FinFET), where the source/drain region forms fins on the silicon surface
  - GAAFET, similar to FinFET, but nanowires are used instead of fins; the nanowires are stacked vertically and are surrounded on four sides by the gate
  - MBCFET, a variant of GAAFET that uses horizontal nanosheets instead of nanowires, made by Samsung; also known as RibbonFET (made by Intel)
- Thin-film transistor, used in LCD and OLED displays
- Floating-gate MOSFET (FGMOS), for non-volatile storage
- Power MOSFET, for power electronics
  - lateral diffused MOS (LDMOS)
- Carbon nanotube field-effect transistor (CNFET, CNTFET), where the channel material is replaced by a carbon nanotube
- Ferroelectric field-effect transistor (FeFET), which uses ferroelectric materials
- Junction gate field-effect transistor (JFET), where the gate is insulated by a reverse-biased p–n junction
- Metal–semiconductor field-effect transistor (MESFET), similar to the JFET but with a Schottky junction instead of a p–n junction
- High-electron-mobility transistor (HEMT)
- Negative-capacitance FET (NC-FET)
- Inverted-T field-effect transistor (ITFET)
- Fast-reverse epitaxial diode field-effect transistor (FREDFET)
- Organic field-effect transistor (OFET), in which the semiconductor is an organic compound
- Ballistic transistor
- FETs used to sense the environment:
  - Ion-sensitive field-effect transistor (ISFET), to measure ion concentrations in solution
  - Electrolyte–oxide–semiconductor field-effect transistor (EOSFET), neurochip
  - Deoxyribonucleic acid field-effect transistor (DNAFET)

Types: Bipolar junction transistor (BJT):
- Heterojunction bipolar transistor, up to several hundred GHz, common in modern ultrafast and RF circuits
- Schottky transistor
- Avalanche transistor
- Darlington transistors, two BJTs connected together to provide a high current gain equal to the product of the current gains of the two transistors
- Insulated-gate bipolar transistors (IGBTs), which use a medium-power IGFET, similarly connected to a power BJT, to give a high input impedance. Power diodes are often connected between certain terminals depending on specific use. IGBTs are particularly suitable for heavy-duty industrial applications. The ASEA Brown Boveri (ABB) 5SNA2400E170100, intended for three-phase power supplies, houses three n–p–n IGBTs in a case measuring 38 by 140 by 190 mm and weighing 1.5 kg. Each IGBT is rated at 1,700 volts and can handle 2,400 amperes.
- Phototransistor

Types:
- Emitter-switched bipolar transistor (ESBT), a monolithic configuration of a high-voltage bipolar transistor and a low-voltage power MOSFET in cascode topology. It was introduced by STMicroelectronics in the 2000s and abandoned a few years later, around 2012.
- Multiple-emitter transistor, used in transistor–transistor logic and integrated current mirrors
- Multiple-base transistor, used to amplify very-low-level signals in noisy environments such as the pickup of a record player or radio front ends. Effectively, it is a very large number of transistors in parallel where, at the output, the signal is added constructively, but random noise is added only stochastically.
- Tunnel field-effect transistor, which switches by modulating quantum tunneling through a barrier
- Diffusion transistor, formed by diffusing dopants into a semiconductor substrate; can be either a BJT or a FET
- Unijunction transistor, which can be used as a simple pulse generator. It comprises a main body of either p-type or n-type semiconductor with ohmic contacts at each end (terminals Base1 and Base2). A junction with the opposite semiconductor type is formed at a point along the length of the body for the third terminal (Emitter).
- Single-electron transistors (SET), consisting of a gate island between two tunneling junctions. The tunneling current is controlled by a voltage applied to the gate through a capacitor.
- Nanofluidic transistor, which controls the movement of ions through sub-microscopic, water-filled channels
- Multigate devices:
  - Tetrode transistor
  - Pentode transistor
  - Trigate transistor (prototype by Intel)
  - Dual-gate field-effect transistors, which have a single channel with two gates in cascode, a configuration optimized for high-frequency amplifiers, mixers, and oscillators
- Junctionless nanowire transistor (JNT), which uses a simple nanowire of silicon surrounded by an electrically isolated "wedding ring" that acts to gate the flow of electrons through the wire
- Nanoscale vacuum-channel transistor: in 2012, NASA and the National Nanofab Center in South Korea were reported to have built a prototype vacuum-channel transistor only 150 nanometers in size that can be manufactured cheaply using standard silicon semiconductor processing, can operate at high speeds even in hostile environments, and could consume just as much power as a standard transistor
- Organic electrochemical transistor
- Solaristor (from solar cell transistor), a two-terminal gate-less self-powered phototransistor

Device identification: Three major identification standards are used for designating transistor devices. In each, the alphanumeric prefix provides clues to the type of the device. Device identification: Joint Electron Device Engineering Council (JEDEC): The JEDEC part numbering scheme evolved in the 1960s in the United States. The JEDEC EIA-370 transistor device numbers usually start with 2N, indicating a three-terminal device.
Dual-gate field-effect transistors are four-terminal devices, and begin with 3N. The prefix is followed by a two-, three- or four-digit number with no significance as to device properties, although early devices with low numbers tend to be germanium devices. For example, 2N3055 is a silicon n–p–n power transistor, and 2N1301 is a p–n–p germanium switching transistor. A letter suffix, such as "A", is sometimes used to indicate a newer variant, but rarely gain groupings. Device identification: Japanese Industrial Standard (JIS): In Japan, the JIS semiconductor designation (JIS-C-7012) labels transistor devices starting with 2S, e.g., 2SD965, but sometimes the "2S" prefix is not marked on the package – a 2SD965 might only be marked "D965", and a 2SC1815 might be listed by a supplier as simply "C1815". This series sometimes has suffixes, such as R, O, BL, standing for red, orange, blue, etc., to denote variants, such as tighter hFE (gain) groupings. Device identification: European Electronic Component Manufacturers Association (EECA): The European Electronic Component Manufacturers Association (EECA) uses a numbering scheme that was inherited from Pro Electron when it merged with EECA in 1983. This scheme begins with two letters: the first gives the semiconductor type (A for germanium, B for silicon, and C for materials like GaAs); the second letter denotes the intended use (A for diode, C for general-purpose transistor, etc.). A three-digit sequence number (or one letter and two digits, for industrial types) follows. With early devices this indicated the case type. Suffixes may be used: a letter (e.g. "C" often means high hFE, such as in BC549C) or other codes may follow to show gain (e.g. BC327-25) or voltage rating (e.g. BUK854-800A). Device identification: Proprietary: Manufacturers of devices may have their own proprietary numbering system, for example CK722.
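The prefix conventions above can be illustrated with a short classifier. This is only a sketch of the rules described in the text: it covers the JEDEC 2N/3N, JIS 2S, and Pro Electron two-letter prefixes (and only the A and C use-letters the text names); abbreviated markings (e.g. "D965" for 2SD965) and proprietary schemes are deliberately not handled:

```python
def classify_part(number):
    """Identify the likely numbering scheme from a transistor part number.

    Illustrative sketch only: real catalogs have many exceptions, and
    abbreviated or proprietary markings fall through to "unknown".
    """
    n = number.upper()
    if n.startswith("2N"):
        return "JEDEC three-terminal device"
    if n.startswith("3N"):
        return "JEDEC four-terminal device"
    if n.startswith("2S"):
        return "JIS transistor"
    # Pro Electron / EECA: first letter = material, second = intended use.
    materials = {"A": "germanium", "B": "silicon", "C": "GaAs and similar"}
    uses = {"A": "diode", "C": "general-purpose transistor"}
    if len(n) >= 2 and n[0] in materials and n[1] in uses:
        return f"Pro Electron {materials[n[0]]} {uses[n[1]]}"
    return "unknown or proprietary"

print(classify_part("2N3055"))   # JEDEC three-terminal device
print(classify_part("BC549C"))   # Pro Electron silicon general-purpose transistor
print(classify_part("CK722"))    # unknown or proprietary
```

Note that, as the text explains, a prefix such as "MPF" or "PN" identifies a naming scheme rather than a guaranteed manufacturer, which is why anything outside the three registered schemes is reported as unknown here.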
Since devices are second-sourced, a manufacturer's prefix (like "MPF" in MPF102, which originally would denote a Motorola FET) is now an unreliable indicator of who made the device. Some proprietary naming schemes adopt parts of other naming schemes; for example, a PN2222A is a (possibly Fairchild Semiconductor) 2N2222A in a plastic case (but a PN108 is a plastic version of a BC108, not a 2N108, while the PN100 is unrelated to other xx100 devices). Device identification: Military part numbers sometimes are assigned their own codes, such as the British Military CV Naming System. Device identification: Manufacturers buying large numbers of similar parts may have them supplied with "house numbers", identifying a particular purchasing specification and not necessarily a device with a standardized registered number. For example, an HP part 1854,0053 is a (JEDEC) 2N2218 transistor which is also assigned the CV number CV7763. Naming problems: With so many independent naming schemes, and the abbreviation of part numbers when printed on the devices, ambiguity sometimes occurs. For example, two different devices may be marked "J176" (one the J176 low-power JFET, the other the higher-powered MOSFET 2SJ176). Device identification: As older "through-hole" transistors are given surface-mount packaged counterparts, they tend to be assigned many different part numbers because manufacturers have their own systems to cope with the variety in pinout arrangements and options for dual or matched n–p–n + p–n–p devices in one pack. So even when the original device (such as a 2N3904) may have been assigned by a standards authority, and well known by engineers over the years, the new versions are far from standardized in their naming. Construction: Semiconductor material: The first BJTs were made from germanium (Ge).
Silicon (Si) types currently predominate but certain advanced microwave and high-performance versions now employ the compound semiconductor material gallium arsenide (GaAs) and the semiconductor alloy silicon-germanium (SiGe). Single element semiconductor material (Ge and Si) is described as elemental. Rough parameters for the most common semiconductor materials used to make transistors are given in the adjacent table. These parameters will vary with an increase in temperature, electric field, impurity level, strain, and sundry other factors. Construction: The junction forward voltage is the voltage applied to the emitter-base junction of a BJT to make the base conduct a specified current. The current increases exponentially as the junction forward voltage is increased. The values given in the table are typical for a current of 1 mA (the same values apply to semiconductor diodes). The lower the junction forward voltage the better, as this means that less power is required to "drive" the transistor. The junction forward voltage for a given current decreases with an increase in temperature. For a typical silicon junction, the change is −2.1 mV/°C. In some circuits special compensating elements (sensistors) must be used to compensate for such changes. Construction: The density of mobile carriers in the channel of a MOSFET is a function of the electric field forming the channel and of various other phenomena such as the impurity level in the channel. Some impurities, called dopants, are introduced deliberately in making a MOSFET, to control the MOSFET electrical behavior. Construction: The electron mobility and hole mobility columns show the average speed that electrons and holes diffuse through the semiconductor material with an electric field of 1 volt per meter applied across the material. In general, the higher the electron mobility the faster the transistor can operate. The table indicates that Ge is a better material than Si in this respect. 
However, Ge has four major shortcomings compared to silicon and gallium arsenide: its maximum temperature is limited; it has relatively high leakage current; it cannot withstand high voltages; and it is less suitable for fabricating integrated circuits. Because the electron mobility is higher than the hole mobility for all semiconductor materials, a given bipolar n–p–n transistor tends to be swifter than an equivalent p–n–p transistor. GaAs has the highest electron mobility of the three semiconductors. It is for this reason that GaAs is used in high-frequency applications. A relatively recent FET development, the high-electron-mobility transistor (HEMT), has a heterostructure (junction between different semiconductor materials) of aluminium gallium arsenide (AlGaAs)–gallium arsenide (GaAs) which has twice the electron mobility of a GaAs–metal barrier junction. Because of their high speed and low noise, HEMTs are used in satellite receivers working at frequencies around 12 GHz. HEMTs based on gallium nitride and aluminum gallium nitride (AlGaN/GaN HEMTs) provide still higher electron mobility and are being developed for various applications. Construction: Maximum junction temperature values represent a cross-section taken from various manufacturers' datasheets. This temperature should not be exceeded or the transistor may be damaged. Al–Si junction refers to the high-speed (aluminum-silicon) metal–semiconductor barrier diode, commonly known as a Schottky diode. This is included in the table because some silicon power IGFETs have a parasitic reverse Schottky diode formed between the source and drain as part of the fabrication process. This diode can be a nuisance, but sometimes it is used in the circuit. Packaging: Discrete transistors can be individually packaged transistors or unpackaged transistor chips. Construction: Transistors come in many different semiconductor packages (see image).
The two main categories are through-hole (or leaded) and surface-mount, also known as surface-mount device (SMD). The ball grid array (BGA) is the latest surface-mount package. It has solder "balls" on the underside in place of leads. Because they are smaller and have shorter interconnections, SMDs have better high-frequency characteristics but lower power ratings. Construction: Transistor packages are made of glass, metal, ceramic, or plastic. The package often dictates the power rating and frequency characteristics. Power transistors have larger packages that can be clamped to heat sinks for enhanced cooling. Additionally, most power transistors have the collector or drain physically connected to the metal enclosure. At the other extreme, some surface-mount microwave transistors are as small as grains of sand. Construction: Often a given transistor type is available in several packages. Transistor packages are mainly standardized, but the assignment of a transistor's functions to the terminals is not: other transistor types can assign other functions to the package's terminals. Even for the same transistor type the terminal assignment can vary (normally indicated by a suffix letter to the part number, e.g. BC212L and BC212K). Construction: Nowadays most transistors come in a wide range of SMT packages; in comparison, the list of available through-hole packages is relatively small. A shortlist of the most common through-hole transistor packages, in alphabetical order: ATV, E-line, MRT, HRT, SC-43, SC-72, TO-3, TO-18, TO-39, TO-92, TO-126, TO-220, TO-247, TO-251, TO-262, ZTX851. Unpackaged transistor chips (die) may be assembled into hybrid devices. The IBM SLT module of the 1960s is one example of such a hybrid circuit module using glass-passivated transistor (and diode) die. Other packaging techniques for discrete transistors as chips include direct chip attach (DCA) and chip-on-board (COB).
Flexible transistors: Researchers have made several kinds of flexible transistors, including organic field-effect transistors. Flexible transistors are useful in some kinds of flexible displays and other flexible electronics.
**Four causes** Four causes: The four causes or four explanations are, in Aristotelian thought, four fundamental types of answer to the question "why?", in analysis of change or movement in nature: the material, the formal, the efficient, and the final. Aristotle wrote that "we do not have knowledge of a thing until we have grasped its why, that is to say, its cause." While there are cases in which classifying a "cause" is difficult, or in which "causes" might merge, Aristotle held that his four "causes" provided an analytical scheme of general applicability. Aristotle's word aitia (Greek: αἰτία) has, in philosophical scholarly tradition, been translated as 'cause'. This peculiar, specialized, technical usage of the word 'cause' is not that of everyday English language. Rather, the translation of Aristotle's αἰτία that is nearest to current ordinary language is "explanation." In Physics II.3 and Metaphysics V.2, Aristotle holds that there are four kinds of answers to "why" questions: Matter: The material cause of a change or movement. This is the aspect of the change or movement that is determined by the material that composes the moving or changing things. For a table, this might be wood; for a statue, it might be bronze or marble. Four causes: Form: The formal cause of a change or movement. This is a change or movement caused by the arrangement, shape, or appearance of the thing changing or moving. Aristotle says, for example, that the ratio 2:1, and number in general, is the formal cause of the octave. Four causes: Efficient, or agent: The efficient or moving cause of a change or movement. This consists of things apart from the thing being changed or moved, which interact so as to be an agency of the change or movement. For example, the efficient cause of a table is a carpenter, or a person working as one, and according to Aristotle the efficient cause of a child is a parent. Four causes: Final, end, or purpose: The final cause of a change or movement.
This is a change or movement for the sake of a thing to be what it is. For a seed, it might be an adult plant; for a sailboat, it might be sailing; for a ball at the top of a ramp, it might be coming to rest at the bottom. The four "causes" are not mutually exclusive. For Aristotle, several, preferably four, answers to the question "why" have to be given to explain a phenomenon and especially the actual configuration of an object. For example, if asking why a table is such and such, an explanation in terms of the four causes would sound like this: This table is solid and brown because it is made of wood (matter); it does not collapse because it has four legs of equal length (form); it is as it is because a carpenter made it, starting from a tree (agent); it has these dimensions because it is to be used by humans (end). Four causes: Aristotle distinguished between intrinsic and extrinsic causes. Matter and form are intrinsic causes because they deal directly with the object, whereas the efficient and final causes are said to be extrinsic because they are external. Thomas Aquinas argued that only those four types of causes can exist and no others. He also introduced a priority order according to which "matter is made perfect by the form, form is made perfect by the agent, and agent is made perfect by the finality." Hence, the finality is the cause of causes or, equivalently, the queen of causes. Definition of "cause": In his philosophical writings, Aristotle used the Greek word αἴτιον (aition), a neuter singular form of an adjective. The Greek word had meant, perhaps originally in a "legal" context, what or who is "responsible," mostly but not always in a bad sense of "guilt" or "blame." Alternatively, it could mean "to the credit of" someone or something.
The appropriation of this word by Aristotle and other philosophers reflects how the Greek experience of legal practice influenced the concern in Greek thought to determine what is responsible.: 100, 106–107  The word developed other meanings, including its use in philosophy in a more abstract sense. Definition of "cause": About a century before Aristotle, the anonymous author of the Hippocratic text On Ancient Medicine had described the essential characteristics of a cause as it is considered in medicine: We must, therefore, consider the causes of each [medical] condition to be those things which are such that, when they are present, the condition necessarily occurs, but when they change to another combination, it ceases. Aristotle's "four causes": Aristotle used the four causes to provide different answers to the question, "because of what?" The four answers to this question illuminate different aspects of how a thing comes into being or of how an event takes place.: 96–98  Material: Aristotle considers the material "cause" (ὕλη, hū́lē) of an object as equivalent to the nature of the raw material out of which the object is composed. (The word "nature" for Aristotle applies to both its potential in the raw material and its ultimate finished form. In a sense this form already existed in the material: see potentiality and actuality.) Whereas modern physics looks to simple bodies, Aristotle's physics took a more general viewpoint, and treated living things as exemplary. Nevertheless, he felt that simple natural bodies such as earth, fire, air, and water also showed signs of having their own innate sources of motion, change, and rest. Fire, for example, carries things upwards, unless stopped from doing so. Things formed by human artifice, such as beds and cloaks, have no innate tendency to become beds or cloaks. In traditional Aristotelian philosophical terminology, material is not the same as substance.
Matter has parallels with substance in so far as primary matter serves as the substratum for simple bodies which are not substance: sand and rock (mostly earth), rivers and seas (mostly water), atmosphere and wind (mostly air and then mostly fire below the moon). In this traditional terminology, 'substance' is a term of ontology, referring to really existing things; only individuals are said to be substances (subjects) in the primary sense. Secondary substance, in a different sense, also applies to man-made artifacts. Aristotle's "four causes": Formal: Aristotle considers the formal "cause" (εἶδος, eîdos) as describing the pattern or form which, when present, makes matter into a particular type of thing, which we recognize as being of that particular type. By Aristotle's own account, this is a difficult and controversial concept. It links with theories of forms such as those of Aristotle's teacher, Plato, but in Aristotle's own account (see his Metaphysics), he takes into account many previous writers who had expressed opinions about forms and ideas, but he shows how his own views differ from them. Aristotle's "four causes": Efficient: Aristotle defines the agent or efficient "cause" (κινοῦν, kinoûn) of an object as that which causes change and drives transient motion (such as a painter painting a house) (see Aristotle, Physics II 3, 194b29). In many cases, this is simply the thing that brings something about. For example, in the case of a statue, it is the person chiseling away who transforms a block of marble into a statue. According to Lloyd, of the four causes, only this one is what is meant by the modern English word "cause" in ordinary speech. Aristotle's "four causes": Final: Aristotle defines the end, purpose, or final "cause" (τέλος, télos) as that for the sake of which a thing is done.
Like the form, this is a controversial type of explanation in science; some have argued for its survival in evolutionary biology, while Ernst Mayr denied that it continued to play a role. It is commonly recognised that Aristotle's conception of nature is teleological in the sense that Nature exhibits functionality in a more general sense than is exemplified in the purposes that humans have. Aristotle observed that a telos does not necessarily involve deliberation, intention, consciousness, or intelligence: This is most obvious in the animals other than man: they make things neither by art nor after inquiry or deliberation. That is why people wonder whether it is by intelligence or by some other faculty that these creatures work, – spiders, ants, and the like... It is absurd to suppose that purpose is not present because we do not observe the agent deliberating. Art does not deliberate. If the ship-building art were in the wood, it would produce the same results by nature. If, therefore, purpose is present in art, it is present also in nature. Aristotle's "four causes": According to Aristotle, a seed has the eventual adult plant as its end (i.e., as its telos) if and only if the seed would become the adult plant under normal circumstances. In Physics II.9, Aristotle hazards a few arguments that a determination of the end (i.e., final cause) of a phenomenon is more important than the others. He argues that the end is that which brings it about, so for example "if one defines the operation of sawing as being a certain kind of dividing, then this cannot come about unless the saw has teeth of a certain kind; and these cannot be unless it is of iron." According to Aristotle, once a final "cause" is in place, the material, efficient and formal "causes" follow by necessity. 
However, he recommends that the student of nature determine the other "causes" as well, and notes that not all phenomena have an end, e.g., chance events. Aristotle saw that his biological investigations provided insights into the causes of things, especially into the final cause: We should approach the investigation of every kind of animal without being ashamed, since in each one of them there is something natural and something beautiful. The absence of chance and the serving of ends are found in the works of nature especially. And the end, for the sake of which a thing has been constructed or has come to be, belongs to what is beautiful. George Holmes Howison highlights "final causation" in presenting his theory of metaphysics, which he terms "personal idealism", and to which he invites not only man, but all (ideal) life: Here, in seeing that Final Cause – causation at the call of self-posited aim or end – is the only full and genuine cause, we further see that Nature, the cosmic aggregate of phenomena and the cosmic bond of their law which in the mood of vague and inaccurate abstraction we call Force, is after all only an effect... Thus teleology, or the Reign of Final Cause, the reign of ideality, is not only an element in the notion of Evolution, but is the very vital cord in the notion. The conception of evolution is founded at last and essentially in the conception of Progress: but this conception has no meaning at all except in the light of a goal; there can be no goal unless there is a Beyond for everything actual; and there is no such Beyond except through a spontaneous ideal. The presupposition of Nature, as a system undergoing evolution, is therefore the causal activity of our Pure Ideals. These are our three organic and organizing conceptions called the True, the Beautiful, and the Good. Aristotle's "four causes": However, Edward Feser argues, in line with the Aristotelian and Thomistic tradition, that finality has been greatly misunderstood.
Indeed, without finality, efficient causality becomes inexplicable. Finality thus understood is not purpose but that end towards which a thing is ordered. When a match is rubbed against the side of a matchbox, the effect is not the appearance of an elephant or the sounding of a drum, but fire. The effect is not arbitrary because the match is ordered towards the end of fire, which is realized through efficient causes. In their biosemiotic study, Stuart Kauffman, Robert K. Logan et al. (2008) remark: Our language is teleological. We believe that autonomous agents constitute the minimal physical system to which teleological language rightly applies. Scholasticism: In Scholasticism, efficient causality was governed by two principles: omne agens agit simile sibi (every agent produces something similar to itself): stated frequently in the writings of St. Thomas Aquinas, the principle establishes a relationship of similarity and analogy between cause and effect; nemo dat quod non habet (no one gives what he does not possess): partially similar to the legal principle of the same name, in metaphysics it establishes that the cause cannot bestow on the effect the quantity of being (and thus of unity, truth, goodness, reality and perfection) that it does not already possess within itself. Otherwise, there would be creation out of nothingness of self and other-from-self. In other words, the cause must possess a degree of reality greater than or equal to that of the effect. If it is greater, we speak of equivocal causation, in analogy to the three types of logical predication (univocal, equivocal, analogical); if it is equal, we speak of univocal causation. Thomas in this regard distinguished between causa fiendi (cause of occurring, of only beginning to be) and causa essendi (cause of being and also of beginning to be). When the being of the agent cause is in the effect in a lesser or equal degree, this is a causa fiendi.
Scholasticism: Furthermore, the second principle also establishes a qualitative link: the cause can only transmit its own essence to the effect. For example, a dog cannot transmit the essence of a feline to its young, but only that of a dog. The principle is equivalent to that of Causa aequat effectum (cause equals effect) in both a quantitative and qualitative sense. Modern science: In his Advancement of Learning (1605), Francis Bacon wrote that natural science "doth make inquiry, and take consideration of the same natures : but how? Only as to the material and efficient causes of them, and not as to the forms." Using the terminology of Aristotle, Bacon demands that, apart from the "laws of nature" themselves, the causes relevant to natural science are only efficient causes and material causes, or, to use the formulation which became famous later, natural phenomena require scientific explanation in terms of matter and motion. In The New Organon, Bacon divides knowledge into physics and metaphysics: From the two kinds of axioms which have been spoken of arises a just division of philosophy and the sciences, taking the received terms (which come nearest to express the thing) in a sense agreeable to my own views. Thus, let the investigation of forms, which are (in the eye of reason at least, and in their essential law) eternal and immutable, constitute Metaphysics; and let the investigation of the efficient cause, and of matter, and of the latent process, and the latent configuration (all of which have reference to the common and ordinary course of nature, not to her eternal and fundamental laws) constitute Physics. And to these let there be subordinate two practical divisions: to Physics, Mechanics; to Metaphysics, what (in a purer sense of the word) I call Magic, on account of the broadness of the ways it moves in, and its greater command over nature. Modern science: Biology Explanations in terms of final causes remain common in evolutionary biology. Francisco J. 
Ayala has claimed that teleology is indispensable to biology since the concept of adaptation is inherently teleological. In an appreciation of Charles Darwin published in Nature in 1874, Asa Gray noted "Darwin's great service to Natural Science" lies in bringing back teleology "so that, instead of Morphology versus Teleology, we shall have Morphology wedded to Teleology." Darwin quickly responded, "What you say about Teleology pleases me especially and I do not think anyone else has ever noticed the point." Francis Darwin and T. H. Huxley reiterate this sentiment. The latter wrote that "the most remarkable service to the philosophy of Biology rendered by Mr. Darwin is the reconciliation of Teleology and Morphology, and the explanation of the facts of both, which his view offers." James G. Lennox states that Darwin uses the term 'Final Cause' consistently in his Species Notebook, On the Origin of Species, and after. Contrary to the position described by Francisco J. Ayala, Ernst Mayr states that "adaptedness... is a posteriori result rather than an a priori goal-seeking." Various commentators view the teleological phrases used in modern evolutionary biology as a type of shorthand. For example, S. H. P. Madrell writes that "the proper but cumbersome way of describing change by evolutionary adaptation [may be] substituted by shorter overtly teleological statements" for the sake of saving space, but that this "should not be taken to imply that evolution proceeds by anything other than from mutations arising by chance, with those that impart an advantage being retained by natural selection." However, Lennox states that in evolution as conceived by Darwin, it is true both that evolution is the result of mutations arising by chance and that evolution is teleological in nature. Statements that a species does something "in order to" achieve survival are teleological.
The validity or invalidity of such statements depends on the species and the intention of the writer as to the meaning of the phrase "in order to." Sometimes it is possible or useful to rewrite such sentences so as to avoid teleology. Some biology courses have incorporated exercises requiring students to rephrase such sentences so that they do not read teleologically. Nevertheless, biologists still frequently write in a way which can be read as implying teleology even if that is not the intention. Modern science: Animal behaviour (Tinbergen's four questions) Tinbergen's four questions, named after the ethologist Nikolaas Tinbergen and based on Aristotle's four causes, are complementary categories of explanations for animal behaviour. They are also commonly referred to as levels of analysis. The four questions are on: function, what an adaptation does that is selected for in evolution; phylogeny, the evolutionary history of an organism, revealing its relationships to other species; mechanism, namely the proximate cause of a behaviour, such as the role of testosterone in aggression; and ontogeny, the development of an organism from egg to embryo to adult. 
Technology (Heidegger's four causes): In The Question Concerning Technology, echoing Aristotle, Martin Heidegger describes the four causes as follows: causa materialis, the material or matter; causa formalis, the form or shape the material or matter enters; causa finalis, the end; and causa efficiens, that which brings about the effect, the finished result. Heidegger explains that "[w]hoever builds a house or a ship or forges a sacrificial chalice reveals what is to be brought forth, according to the terms of the four modes of occasioning." The educationist David Waddington comments that although the efficient cause, which he identifies as "the craftsman," might be thought the most significant of the four, in his view each of Heidegger's four causes is "equally co-responsible" for producing a craft item, in Heidegger's terms "bringing forth" the thing into existence. Waddington cites Lovitt's description of this bringing forth as "a unified process."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Journal of Zhejiang University Science A** Journal of Zhejiang University Science A: The Journal of Zhejiang University Science A: Applied Physics & Engineering is a monthly peer-reviewed scientific journal covering applied physics and engineering. It was established in 2000 and is published by Zhejiang University Press and Springer Science+Business Media. Abstracting and indexing: The journal is abstracted and indexed in the Science Citation Index Expanded, Scopus, and Inspec. According to the Journal Citation Reports, the journal has a 2015 impact factor of 0.941.
**Non-philosophy** Non-philosophy: Non-philosophy (French: non-philosophie) is a concept developed by French Continental philosopher François Laruelle (formerly of the Collège international de philosophie and the University of Paris X: Nanterre). Non-philosophy according to Laruelle: Laruelle argues that all forms of philosophy (from ancient philosophy to analytic philosophy to deconstruction and so on) are structured around a prior decision, and remain constitutively blind to this decision. The 'decision' that Laruelle is concerned with here is the dialectical splitting of the world in order to grasp the world philosophically. Examples from the history of philosophy include Immanuel Kant's distinction between the synthesis of manifold impressions and the faculties of the understanding; Martin Heidegger's split between the ontic and the ontological; and Jacques Derrida's notion of différance/presence. The reason Laruelle finds this decision interesting and problematic is because the decision itself cannot be grasped (philosophically grasped, that is) without introducing some further scission. Non-philosophy according to Laruelle: Laruelle further argues that the decisional structure of philosophy can only be grasped non-philosophically. In this sense, non-philosophy is a science of philosophy. Non-philosophy is not metaphilosophy because, as Laruelle scholar Ray Brassier notes, "philosophy is already metaphilosophical through its constitutive reflexivity". Brassier also defines non-philosophy as the "theoretical practice of philosophy proceeding by way of transcendental axioms and producing theorems which are philosophically uninterpretable". The reason why the axioms and theorems of non-philosophy are philosophically uninterpretable is because, as explained, philosophy cannot grasp its decisional structure in the way that non-philosophy can. 
Non-philosophy according to Laruelle: Laruelle's non-philosophy, he claims, should be considered to be to philosophy what non-Euclidean geometry is to the work of Euclid. It stands in particular opposition to philosophical heirs of Jacques Lacan such as Alain Badiou. Non-philosophy according to Laruelle: Laruelle scholar Ekin Erkan, elucidating Laruelle's system, notes that "'non-philosophy' [...] withdraws from the metaphysical precept of separating the world into binarisms, perhaps epitomized by the formative division between 'universals' and 'particulars' in Kant's Transcendental Deduction. Laruelle's method also rejects the 'evental' nature of Being described by Heidegger [...] Laruelle's 'One' is understood as generic identity - an identity/commonality that reverses the classical metaphysics found in philosophy's bastion thinkers (a lineage that runs from Plato to Badiou), where the transcendental is upheld as a necessary precondition for grounding reality." Role of the subject: The decisional structure of philosophy is grasped by the subject of non-philosophy. Laruelle's concept of "the subject" here is not the same as the subject-matter, nor does it have anything to do with the traditional philosophical notion of subjectivity. It is, instead, a function along the same lines as a mathematical function. Role of the subject: The concept of performativity (taken from speech act theory) is central to the idea of the subject of non-philosophy. Laruelle believes that both philosophy and non-philosophy are performative. However, philosophy merely performatively legitimates the decisional structure which, as already noted, it is unable to fully grasp, in contrast to non-philosophy, which collapses the distinction (present in philosophy) between theory and action. In this sense, non-philosophy is radically performative because the theorems deployed in accordance with its method constitute fully-fledged scientific actions.
Non-philosophy, then, is conceived as a rigorous and scholarly discipline.The role of the subject is a critical facet of Laruelle's non-ethics and Laruelle's political system. "By problematizing what he terms 'The Statist Ideal,' or the 'Unitary Illusion' - be it negative (Hegel) or positive (Nietzsche) - Laruelle interrogates the 'scission' of the minority subject, which he contends is a “symptom” of the Western dialectic practice. In opposition to the Kantian first principles upon which both Continental and Analytic philosophy rest, Laruelle attempts to sketch a 'real Critique of Reason' that is determined in itself and through itself; insofar as this involves Laruellean 'non-ethics,' this involves breaking from the long-situated practice of studying the State from the paralogism of the State view, itself." Radical immanence: The radically performative character of the subject of non-philosophy would be meaningless without the concept of radical immanence. The philosophical doctrine of immanence is generally defined as any philosophical belief or argument which resists transcendent separation between the world and some other principle or force (such as a creator deity). According to Laruelle, the decisional character of philosophy makes immanence impossible for it, as some ungraspable splitting is always taking place within. By contrast, non-philosophy axiomatically deploys immanence as being endlessly conceptualizable by the subject of non-philosophy. This is what Laruelle means by "radical immanence". The actual work of the subject of non-philosophy is to apply its methods to the decisional resistance to radical immanence which is found in philosophy. Sans-philosophie: In "A New Presentation of Non-Philosophy" (2004), François Laruelle states: I see non-philosophers in several different ways. I see them, inevitably, as subjects of the university, as is required by worldly life, but above all as related to three fundamental human types. 
They are related to the analyst and the political militant, obviously, since non-philosophy is close to psychoanalysis and Marxism — it transforms the subject by transforming instances of philosophy. But they are also related to what I would call the 'spiritual' type — which it is imperative not to confuse with 'spiritualist'. The spiritual are not spiritualists. They are the great destroyers of the forces of philosophy and the state, which band together in the name of order and conformity. The spiritual haunt the margins of philosophy, Gnosticism, mysticism, and even of institutional religion and politics. The spiritual are not just abstract, quietist mystics; they are for the world. This is why a quiet discipline is not sufficient, because man is implicated in the world as the presupposed that determines it. Thus, non-philosophy is also related to Gnosticism and science-fiction; it answers their fundamental question — which is not at all philosophy's primary concern — 'Should humanity be saved? And how?' And it is also close to spiritual revolutionaries such as Müntzer and certain mystics who skirted heresy. When all is said and done, is non-philosophy anything other than the chance for an effective utopia? Numbered amongst the early members or sympathizers of sans-philosophie ("without philosophy") are those included in a collection published in 2005 by L'Harmattan: François Laruelle, Jason Barker, Ray Brassier, Laurent Carraz, Hugues Choplin, Jacques Colette, Nathalie Depraz, Oliver Feltham, Gilles Grelet, Jean-Pierre Faye, Gilbert Hottois, Jean-Luc Rannou, Pierre A. Riffard, Sandrine Roux and Jordanco Sekulovski. Since then, a slew of translations and new introductions have appeared from John Ó Maoilearca (Mullarkey), Anthony Paul Smith, Rocco Gangle, Katerina Kolozova, and Alexander Galloway. Precursors: Adam Karl August von Eschenmayer also developed an approach to philosophy called non-philosophy.
Precursors: He defined it as a kind of mystical illumination by which was obtained a belief in God that could not be reached by mere intellectual effort. He carried this tendency to mysticism into his physical researches, and was led by it to take a deep interest in the phenomena of animal magnetism. He ultimately became a devout believer in demoniacal and spiritual possession; and his later writings are all strongly impregnated with supernaturalism. Precursors: Laruelle sees Eschenmayer's doctrine as a "break with philosophy and its systematic aspect in the name of passion, faith, and feeling".
**Permutohedron** Permutohedron: In mathematics, the permutohedron of order n is an (n − 1)-dimensional polytope embedded in an n-dimensional space. Its vertex coordinates (labels) are the permutations of the first n natural numbers. The edges identify the shortest possible paths (sets of transpositions) that connect two vertices (permutations). Two permutations connected by an edge differ in only two places (one transposition), and the numbers on these places are neighbors (differ in value by 1). Permutohedron: The image on the right shows the permutohedron of order 4, which is the truncated octahedron. Its vertices are the 24 permutations of (1, 2, 3, 4). Parallel edges have the same edge color. The 6 edge colors correspond to the 6 possible transpositions of 4 elements, i.e. they indicate in which two places the connected permutations differ. (E.g. red edges connect permutations that differ in the last two places.) History: According to Günter M. Ziegler (1995), permutohedra were first studied by Pieter Hendrik Schoute (1911). The name permutoèdre was coined by Georges Th. Guilbaud and Pierre Rosenstiehl (1963). They describe the word as barbaric, but easy to remember, and submit it to the criticism of their readers.The alternative spelling permutahedron is sometimes also used. Permutohedra are sometimes called permutation polytopes, but this terminology is also used for the related Birkhoff polytope, defined as the convex hull of permutation matrices. More generally, V. Joseph Bowman (1972) uses that term for any polytope whose vertices have a bijection with the permutations of some set. Vertices, edges, and facets: The permutohedron of order n has n! vertices, each of which is adjacent to n − 1 others. The number of edges is (n − 1) n!/2, and their length is √2. Two connected vertices differ by swapping two coordinates, whose values differ by 1. The pair of swapped places corresponds to the direction of the edge. 
Vertices, edges, and facets: (In the example image the vertices (3, 2, 1, 4) and (2, 3, 1, 4) are connected by a blue edge and differ by swapping 2 and 3 on the first two places. The values 2 and 3 differ by 1. All blue edges correspond to swaps of coordinates on the first two places.) The number of facets is 2n − 2, because they correspond to non-empty proper subsets S of {1 ... n}. Vertices, edges, and facets: The vertices of a facet corresponding to subset S have in common that their coordinates on places in S are smaller than the rest. More generally, the faces of dimensions 0 (vertices) to n − 1 (the permutohedron itself) correspond to the strict weak orderings of the set {1 ... n}. So the number of all faces is the n-th ordered Bell number. A face of dimension d corresponds to an ordering with k = n − d equivalence classes. The number of faces of dimension d = n − k in the permutohedron of order n is given by the triangle T (sequence A019538 in the OEIS): T(n, k) = k! · S(n, k), where S(n, k) denotes the Stirling numbers of the second kind. It is shown on the right together with its row sums, the ordered Bell numbers. Other properties: The permutohedron is vertex-transitive: the symmetric group Sn acts on the permutohedron by permutation of coordinates. Other properties: The permutohedron is a zonotope; a translated copy of the permutohedron can be generated as the Minkowski sum of the n(n − 1)/2 line segments that connect the pairs of the standard basis vectors. The vertices and edges of the permutohedron are isomorphic to one of the Cayley graphs of the symmetric group, namely the one generated by the transpositions that swap consecutive elements. The vertices of the Cayley graph are the inverse permutations of those in the permutohedron. The image on the right shows the Cayley graph of S4.
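The face-count triangle described above can be checked numerically. A small Python sketch (helper names mine), using the standard inclusion–exclusion formula for Stirling numbers of the second kind:

```python
from math import comb, factorial

def stirling2(n, k):
    # Stirling numbers of the second kind, via inclusion-exclusion
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1)) // factorial(k)

def face_counts(n):
    # T(n, k) = k! * S(n, k): number of faces of dimension d = n - k
    return {k: factorial(k) * stirling2(n, k) for k in range(1, n + 1)}

# Order 4 (truncated octahedron): 24 vertices, 36 edges, 14 facets, 1 solid
assert face_counts(4) == {4: 24, 3: 36, 2: 14, 1: 1}
# Row sums are the ordered Bell numbers: 1, 3, 13, 75, 541, ...
assert [sum(face_counts(n).values()) for n in range(1, 6)] == [1, 3, 13, 75, 541]
```

The 14 two-dimensional faces of the order-4 permutohedron are the truncated octahedron's 8 hexagons and 6 squares.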
Its edge colors represent the 3 generating transpositions: (1, 2), (2, 3), (3, 4). This Cayley graph is Hamiltonian; a Hamiltonian cycle may be found by the Steinhaus–Johnson–Trotter algorithm. Tessellation of the space: The permutohedron of order n lies entirely in the (n − 1)-dimensional hyperplane consisting of all points whose coordinates sum to the number 1 + 2 + ... + n = n(n + 1)/2. Moreover, this hyperplane can be tiled by infinitely many translated copies of the permutohedron. Each of them differs from the basic permutohedron by an element of a certain (n − 1)-dimensional lattice, which consists of the n-tuples of integers that sum to zero and whose residues (modulo n) are all equal: x1 + x2 + ... + xn = 0 and x1 ≡ x2 ≡ ... ≡ xn (mod n). This is the lattice An−1*, the dual lattice of the root lattice An−1. In other words, the permutohedron is the Voronoi cell for An−1*. Accordingly, this lattice is sometimes called the permutohedral lattice. Thus, the permutohedron of order 4 shown above tiles the 3-dimensional space by translation. Here the 3-dimensional space is the affine subspace of the 4-dimensional space R4 with coordinates x, y, z, w that consists of the 4-tuples of real numbers whose sum is 10: x + y + z + w = 10. One easily checks that for each of the following four vectors, (1, 1, 1, −3), (1, 1, −3, 1), (1, −3, 1, 1) and (−3, 1, 1, 1), the sum of the coordinates is zero and all coordinates are congruent to 1 (mod 4). Any three of these vectors generate the translation lattice. Tessellation of the space: The tessellations formed in this way from the order-2, order-3, and order-4 permutohedra, respectively, are the apeirogon, the regular hexagonal tiling, and the bitruncated cubic honeycomb. The dual tessellations contain all simplex facets, although they are not regular polytopes beyond order-3.
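The Hamiltonian cycle mentioned above can be produced explicitly. Below is a minimal Python sketch of the Steinhaus–Johnson–Trotter algorithm in its "largest mobile element" formulation (function names mine), followed by a check that consecutive permutations, including the wrap-around from last back to first, differ by one swap of adjacent positions:

```python
def sjt(n):
    """Steinhaus-Johnson-Trotter: yield all n! permutations of 1..n so that
    consecutive permutations differ by one swap of adjacent positions."""
    perm = list(range(1, n + 1))
    dirs = [-1] * n  # every element initially points left
    yield tuple(perm)
    while True:
        # find the largest "mobile" element (one pointing at a smaller neighbour)
        mobile, mi = -1, -1
        for i, x in enumerate(perm):
            j = i + dirs[i]
            if 0 <= j < n and perm[j] < x and x > mobile:
                mobile, mi = x, i
        if mobile == -1:
            return  # no mobile element left: all permutations generated
        j = mi + dirs[mi]
        perm[mi], perm[j] = perm[j], perm[mi]
        dirs[mi], dirs[j] = dirs[j], dirs[mi]
        for i, x in enumerate(perm):  # elements larger than the moved one reverse direction
            if x > mobile:
                dirs[i] = -dirs[i]
        yield tuple(perm)

def adjacent_swap(p, q):
    # True if p and q differ exactly in two neighbouring positions
    diff = [i for i in range(len(p)) if p[i] != q[i]]
    return len(diff) == 2 and diff[1] == diff[0] + 1

perms = list(sjt(4))
assert len(perms) == 24 == len(set(perms))
assert all(adjacent_swap(perms[i], perms[(i + 1) % 24]) for i in range(24))
```

The wrap-around check passes because the last permutation produced differs from the starting one by transposing the first two elements, so the walk through the Cayley graph closes into a cycle.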
**Headlamp** Headlamp: A headlamp is a lamp attached to the front of a vehicle to illuminate the road ahead. Headlamps are also often called headlights, but in the most precise usage, headlamp is the term for the device itself and headlight is the term for the beam of light produced and distributed by the device. Headlamp: Headlamp performance has steadily improved throughout the automobile age, spurred by the great disparity between daytime and nighttime traffic fatalities: the US National Highway Traffic Safety Administration states that nearly half of all traffic-related fatalities occur in the dark, despite only 25% of traffic travelling during darkness. Other vehicles, such as trains and aircraft, are required to have headlamps. Headlamps are also often used on bicycles, and are required in some jurisdictions. They can be powered by a battery or a small generator like a bottle or hub dynamo. History of automotive headlamps: Origins The first horseless carriages used carriage lamps, which proved unsuitable for travel at speed. Candles were the most common fuel for the earliest lights. History of automotive headlamps: Mechanics The earliest headlamps, fuelled by a combustible gas such as acetylene, or by oil, operated from the late 1880s. Acetylene gas lamps were popular in the 1900s because the flame is resistant to wind and rain. Thick concave mirrors combined with magnifying lenses projected the acetylene flame light. A number of car manufacturers offered the Prest-O-Lite calcium carbide acetylene gas generator cylinder with gas feed pipes for lights as standard equipment for 1904 cars. History of automotive headlamps: Electric headlamp The first electric headlamps were introduced in 1898 on the Columbia Electric Car from the Electric Vehicle Company of Hartford, Connecticut, and were optional.
Two factors limited the widespread use of electric headlamps: the short life of filaments in the harsh automotive environment, and the difficulty of producing dynamos small enough, yet powerful enough to produce sufficient current. Peerless made electric headlamps standard in 1908. A Birmingham, England firm called Pockley Automobile Electric Lighting Syndicate marketed the world's first electric car-lights as a complete set in 1908, which consisted of headlamps, sidelamps, and tail lights that were powered by an eight-volt battery. In 1912 Cadillac integrated their vehicle's Delco electrical ignition and lighting system, forming the modern vehicle electrical system. History of automotive headlamps: The Guide Lamp Company introduced "dipping" (low-beam) headlamps in 1915, but the 1917 Cadillac system allowed the light to be dipped using a lever inside the car rather than requiring the driver to stop and get out. The 1924 Bilux bulb was the first modern unit, having the light for both low (dipped) and high (main) beams of a headlamp emitting from a single bulb. A similar design was introduced in 1925 by Guide Lamp called the "Duplo". In 1927 the foot-operated dimmer switch or dip switch was introduced and became standard for much of the century. 1933–1934 Packards featured tri-beam headlamps, the bulbs having three filaments. From highest to lowest, the beams were called "country passing", "country driving" and "city driving". The 1934 Nash also used a three-beam system, although in this case with bulbs of the conventional two-filament type, and the intermediate beam combined low beam on the driver's side with high beam on the passenger's side, so as to maximise the view of the roadside while minimizing glare toward oncoming traffic. The last vehicles with a foot-operated dimmer switch were the 1991 Ford F-Series and E-Series [Econoline] vans. Fog lamps were new for 1938 Cadillacs, and their 1954 "Autronic Eye" system automated the selection of high and low beams.
History of automotive headlamps: Directional lighting, using a switch and electromagnetically shifted reflector to illuminate the curbside only, was introduced in the rare, one-year-only 1935 Tatra. Steering-linked lighting was featured on the 1947 Tucker Torpedo's center-mounted headlight and was later popularized by the Citroën DS. This made it possible to turn the light in the direction of travel when the steering wheel turned. History of automotive headlamps: The standardized 7-inch (178 mm) round sealed-beam headlamp, one per side, was required for all vehicles sold in the United States from 1940, virtually freezing usable lighting technology in place until the 1970s for Americans. In 1957 the law changed to allow smaller 5.75-inch (146 mm) round sealed beams, two per side of the vehicle, and in 1974 rectangular sealed beams were permitted as well. History of automotive headlamps: Britain, Australia, and some other Commonwealth countries, as well as Japan and Sweden, also made extensive use of 7-inch sealed beams, though they were not mandated as they were in the United States. This headlamp format was not widely accepted in continental Europe, which found replaceable bulbs and variations in the size and shape of headlamps useful in car design. History of automotive headlamps: Technology moved forward in the rest of the world. In 1962 a European consortium of bulb- and headlamp-makers introduced the first halogen lamp for vehicle headlamp use, the H1. Shortly thereafter headlamps using the new light source were introduced in Europe. These were effectively prohibited in the US, where standard-size sealed beam headlamps were mandatory and intensity regulations were low. US lawmakers faced pressure to act, due both to lighting effectiveness and to vehicle aerodynamics/fuel savings. 
High-beam peak intensity, capped at 140,000 candela per side of the car in Europe, was limited in the United States to 37,500 candela on each side of the car until 1978, when the limit was raised to 75,000. An increase in high-beam intensity to take advantage of the higher allowance could not be achieved without a move to halogen technology, and so sealed-beam headlamps with internal halogen lamps became available for use on 1979 models in the United States. History of automotive headlamps: As of 2010, halogen sealed beams dominate the sealed-beam market, which has declined steeply since replaceable-bulb headlamps were permitted in 1983. High-intensity discharge (HID) systems appeared in the early 1990s, first in the BMW 7 Series. 1996's Lincoln Mark VIII was an early American effort at HIDs, and was the only car with DC HIDs. History of automotive headlamps: Design and style Beyond the engineering, performance, and regulatory-compliance aspects of headlamps, there is the consideration of the various ways they are designed and arranged on a motor vehicle. Headlamps were round for many years because that is the native shape of a parabolic reflector. Using principles of reflection, the simple symmetric round reflective surface projects light and helps focus the beam. History of automotive headlamps: Headlamp styling outside the United States, pre-1983 There was no requirement in Europe for headlamps of standardized size or shape, and lamps could be designed in any shape and size, as long as the lamps met the engineering and performance requirements contained in the applicable European safety standards. Rectangular headlamps were first used in 1960, developed by Hella for the German Ford Taunus P3 and by Cibié for the Citroën Ami 6. They were prohibited in the United States, where round lamps were required until 1975.
Another early headlamp styling concept involved conventional round lamps faired into the car's bodywork with aerodynamic glass covers, such as those on the 1961 Jaguar E-Type, and on pre-1967 VW Beetles. History of automotive headlamps: Headlamp styling in the United States, 1940–1983 Headlight design in the U.S. changed very little from 1940 to 1983. In 1940, a consortium of state motor vehicle administrators standardized upon a system of two 7 in (178 mm) round sealed beam headlamps on all vehicles—the only system allowed for 17 years. This requirement eliminated problems of tarnished reflectors by sealing them together with the bulbs. It also made aiming the headlight beams simpler and eliminated non-standard bulbs and lamps. The Tucker 48 included a defining "cyclops-eye" feature: a third center-mounted headlight connected to the car's steering mechanism. It only illuminated if the steering was moved more than ten degrees off center and the high beams were turned on. A system of four round lamps, rather than two, one high/low and one high-beam 5+3⁄4 in (146 mm) sealed beam on each side of the vehicle, was introduced on some 1957 Cadillac, Chrysler, DeSoto, and Nash models in states that permitted the new system. Separate low and high beam lamps eliminated the need for compromise in lens design and filament positioning required in a single unit. Other cars followed suit when all states permitted the new lamps by the time the 1958 models were brought to market. The four-lamp system permitted more design flexibility and improved low and high beam performance. Auto stylists such as Virgil Exner carried out design studies with the low beams in their conventional outboard location, and the high beams vertically stacked at the centerline of the car, but no such designs reached volume production. History of automotive headlamps: An example arrangement includes the stacking of two headlamps on each side, with low beams above high beams.
The Nash Ambassador used this arrangement in the 1957 model year. Pontiac used this design starting in the 1963 model year; American Motors, Ford, Cadillac, and Chrysler followed two years later. Also in the 1965 model year, the Buick Riviera had concealable stacked headlamps. Various Mercedes models sold in America used this arrangement because their home-market replaceable-bulb headlamps were illegal in the US. History of automotive headlamps: In the late 1950s and early 1960s, some Lincoln, Buick, and Chrysler cars had the headlamps arranged diagonally with the low-beam lamps outboard and above the high-beam lamps. British cars including the Gordon-Keeble, Jensen CV8, Triumph Vitesse, and Bentley S3 Continental used such an arrangement as well.In 1968, the newly initiated Federal Motor Vehicle Safety Standard 108 required all vehicles to have either the twin or quad round sealed beam headlamp system and prohibited any decorative or protective element in front of an operating headlamp. Glass-covered headlamps like those used on the Jaguar E-Type, pre-1968 VW Beetle, 1965 Chrysler and Imperial models, Porsche 356, Citroën DS, and Ferrari Daytona were no longer permitted, and vehicles had to be equipped with uncovered headlamps for the US market. This made it difficult for vehicles with headlamp configurations designed for good aerodynamic performance to achieve it in their US-market configurations. History of automotive headlamps: The FMVSS 108 was amended in 1974 to permit rectangular sealed-beam headlamps. This allowed manufacturers flexibility to lower the hoods on new cars. These could be placed in horizontal arrays or in vertically stacked pairs. As previously with round lamps, the US permitted only two standardized sizes of rectangular sealed-beam lamp: A system of two 200 by 142 mm (7.9 by 5.6 in) high/low beam units corresponding to the existing 7-inch round format, or a system of four 165 by 100 mm (6.5 by 3.9 in) units, two high/low and two high-beam. 
These corresponded to the existing 5+3⁄4 in (146 mm) round format. The rectangular headlamp design became so prevalent in U.S.-made cars that only a few models continued using round headlamps by 1979.

History of automotive headlamps: International headlamp styling, 1983–present

In 1983, granting a 1981 petition from Ford Motor Company, the US headlamp regulations were amended to allow replaceable-bulb, nonstandard-shape, architectural headlamps with aerodynamic lenses that could, for the first time, be made of hard-coated polycarbonate. This allowed the first US-market car since 1939 with replaceable-bulb headlamps: the 1984 Lincoln Mark VII. These composite headlamps were sometimes referred to as "Euro" headlamps, since aerodynamic headlamps were already common in Europe. Though conceptually similar to European headlamps, with their non-standardized shape and replaceable-bulb construction, these headlamps conform to the headlamp design, construction, and performance specifications of US Federal Motor Vehicle Safety Standard 108 rather than the internationalized European safety standards used outside North America. Nevertheless, this change to US regulations made it possible for headlamp styling in the US market to move closer to that in Europe.

History of automotive headlamps: Hidden headlamps

Hidden headlamps were introduced in 1936, on the Cord 810/812. They were mounted in the front fenders, which were smooth until the lights were cranked out—each with its own small dash-mounted crank—by the operator. They aided aerodynamics when the headlamps were not in use and were among the Cord's signature design features.

Later hidden headlamps required one or more vacuum-operated servos and reservoirs, with associated plumbing and linkage, or electric motors, geartrains, and linkages to raise the lamps to an exact position so as to assure correct aiming despite ice, snow, and age.
Some hidden headlamp designs, such as those on the Saab Sonett III, used a lever-operated mechanical linkage to raise the headlamps into position.

During the 1960s and 1970s, many notable sports cars, such as the Chevrolet Corvette (C3), Ferrari Berlinetta Boxer, and Lamborghini Countach, used this feature: it allowed low bonnet lines while still raising the lights to the required height. Since 2004, however, no volume-produced car models have used hidden headlamps, because they present difficulties in complying with pedestrian-protection provisions added to international auto safety regulations, which restrict protuberances on car bodies so as to minimize injury to pedestrians struck by cars.

Some hidden headlamps themselves do not move, but rather are covered when not in use by panels designed to blend in with the car's styling. When the lamps are switched on, the covers are swung out of the way, usually downward or upward, as on the 1992 Jaguar XJ220. The door mechanism may be actuated by vacuum pots, as on some Ford vehicles of the late 1960s through early 1980s such as the 1967–1970 Mercury Cougar, or by an electric motor, as on various Chrysler products of the middle 1960s through late 1970s such as the 1966–1967 Dodge Charger.

Regulations and requirements: Modern headlamps are electrically operated, positioned in pairs, one or two on each side of the front of a vehicle. A headlamp system is required to produce a low and a high beam, which may be supplied by multiple pairs of single-beam lamps, by a pair of dual-beam lamps, or by a mix of single-beam and dual-beam lamps. High beams cast most of their light straight ahead, maximizing seeing distance but producing too much glare for safe use when other vehicles are present on the road. Because there is no special control of upward light, high beams also cause backdazzle from fog, rain, and snow due to the retroreflection of the water droplets.
Low beams have stricter control of upward light, and direct most of their light downward and either rightward (in right-traffic countries) or leftward (in left-traffic countries), to provide forward visibility without excessive glare or backdazzle.

Regulations and requirements: Low beam

Low beam (dipped beam, passing beam, meeting beam) headlamps provide a distribution of light designed to give forward and lateral illumination, with limits on light directed towards the eyes of other road users to control glare. This beam is intended for use whenever other vehicles are present ahead, whether oncoming or being overtaken. The international ECE Regulations for filament headlamps and for high-intensity discharge headlamps specify a beam with a sharp, asymmetric cutoff that prevents significant amounts of light from being cast into the eyes of drivers of preceding or oncoming cars. Control of glare is less strict in the North American SAE beam standard contained in FMVSS / CMVSS 108.

Regulations and requirements: High beam

High beam (main beam, driving beam, full beam) headlamps provide a bright, center-weighted distribution of light with no particular control of light directed towards other road users' eyes. As such, they are only suitable for use when alone on the road, as the glare they produce will dazzle other drivers. International ECE Regulations permit higher-intensity high-beam headlamps than are allowed under North American regulations.

Regulations and requirements: Compatibility with traffic directionality

Most low-beam headlamps are specifically designed for use on only one side of the road. Headlamps for use in left-traffic countries have low-beam headlamps that "dip to the left": the light is distributed with a downward/leftward bias to show the driver the road and signs ahead without blinding oncoming traffic. Headlamps for right-traffic countries have low beams that "dip to the right", with most of their light directed downward/rightward.
Within Europe, when driving a vehicle with right-traffic headlamps in a left-traffic country or vice versa for a limited time (for example, on vacation or in transit), it is a legal requirement to adjust the headlamps temporarily so that their wrong-side beam distribution does not dazzle oncoming drivers. This may be achieved by methods including adhering opaque decals or prismatic lenses to a designated part of the lens. Some projector-type headlamps can be made to produce a proper left- or right-traffic beam by shifting a lever or other movable element in or on the lamp assembly. Many tungsten (pre-halogen) European-code headlamps made in France by Cibié, Marchal, and Ducellier could be adjusted to produce either a left- or a right-traffic low beam by means of a two-position bulb holder.

Because wrong-side-of-road headlamps blind oncoming drivers and do not adequately light the driver's way, and because blackout strips and adhesive prismatic lenses reduce the safety performance of the headlamps, some countries require all vehicles registered or used on a permanent or semi-permanent basis within the country to be equipped with headlamps designed for the correct traffic-handedness. North American vehicle owners sometimes privately import and install Japanese-market (JDM) headlamps on their cars in the mistaken belief that the beam performance will be better; in fact, such misapplication is hazardous and illegal.

Regulations and requirements: Adequacy

Vehicle headlamps have been found unable to illuminate an assured clear distance ahead at speeds above 60 km/h (40 mph). It may be unsafe and, in a few areas, illegal to drive above this speed at night.

Regulations and requirements: Use in daytime

Some countries require automobiles to be equipped with daytime running lights (DRL) to increase the conspicuity of vehicles in motion during the daytime.
Regional regulations govern how the DRL function may be provided. In Canada, the DRL function required on vehicles made or imported since 1990 can be provided by the headlamps, the fog lamps, steady-lit operation of the front turn signals, or by special daytime running lamps. Functionally dedicated daytime running lamps not involving the headlamps have been required on all new cars first sold in the European Union since February 2011. In addition to the EU and Canada, countries requiring DRL include Albania, Argentina, Bosnia and Herzegovina, Colombia (requirement ended August 2011), Iceland, Israel, Macedonia, Norway, Moldova, Russia, Serbia, and Uruguay.

Regulations and requirements: Construction, performance, and aim

There are two different beam pattern and headlamp construction standards in use in the world: the ECE standard, which is allowed or required in virtually all industrialized countries except the United States, and the SAE standard, which is mandatory only in the US. Japan formerly had bespoke lighting regulations similar to the US standards, but for the left side of the road; Japan now adheres to the ECE standard. The differences between the SAE and ECE headlamp standards are primarily in the amount of glare permitted toward other drivers on low beam (SAE permits much more glare), the minimum amount of light required to be thrown straight down the road (SAE requires more), and the specific locations within the beam at which minimum and maximum light levels are specified.

ECE low beams are characterized by a distinct horizontal "cutoff" line at the top of the beam: below the line is bright, and above is dark. On the side of the beam facing away from oncoming traffic (the right in right-traffic countries, the left in left-traffic countries), this cutoff sweeps or steps upward to direct light to road signs and pedestrians.
SAE low beams may or may not have a cutoff, and if a cutoff is present, it may be of two different general types: VOL, which is conceptually similar to the ECE beam in that the cutoff is located at the top of the left side of the beam and aimed slightly below horizontal, or VOR, which has the cutoff at the top of the right side of the beam and aimed at the horizon.

Proponents of each headlamp system decry the other as inadequate and unsafe: US proponents of the SAE system claim that the ECE low-beam cutoff gives short seeing distances and inadequate illumination for overhead road signs, while international proponents of the ECE system claim that the SAE system produces too much glare. Comparative studies have repeatedly shown that there is little or no overall safety advantage to either SAE or ECE beams; the two systems' acceptance or rejection by various countries is based primarily on which system is already in use.

In North America, the design, performance, and installation of all motor vehicle lighting devices are regulated by Federal and Canada Motor Vehicle Safety Standard 108, which incorporates SAE technical standards. Elsewhere in the world, the internationalized ECE regulations are in force, either by reference or by incorporation in individual countries' vehicular codes.

US laws required sealed beam headlamps on all vehicles between 1940 and 1983, and other countries such as Japan, the United Kingdom, and Australia also made extensive use of sealed beams. In most other countries, and in the US since 1984, replaceable-bulb headlamps predominate.

Headlamps must be kept in proper aim. Regulations for aim vary from country to country and from beam specification to beam specification. In the US, SAE-standard headlamps are aimed without regard to headlamp mounting height. This gives vehicles with high-mounted headlamps a seeing-distance advantage, at the cost of increased glare to drivers in lower vehicles.
By contrast, ECE headlamp aim angle is linked to headlamp mounting height, to give all vehicles roughly equal seeing distance and all drivers roughly equal glare.

Regulations and requirements: Light colour

White

Headlamps are generally required to produce white light, according to both ECE and SAE standards. ECE Regulation 48 currently requires new vehicles to be equipped with headlamps emitting white light. Different headlamp technologies produce different characteristic types of white light; the white specification is quite broad and permits a wide range of apparent colour, from warm white (with a brown-orange-amber-yellow cast) to cold white (with a blue-violet cast).

Selective yellow

Previous ECE regulations also permitted selective yellow light. A research experiment done in the UK in 1968 using tungsten (non-halogen) lamps found that visual acuity is about 3% better with selective yellow headlamps than with white ones of equal intensity. Research done in the Netherlands in 1976 concluded that yellow and white headlamps are equivalent as regards traffic safety, though yellow light causes less discomfort glare than white light. Researchers note that tungsten filament lamps emit only a small amount of the blue light blocked by a selective-yellow filter, so such filtration makes only a small difference in the characteristics of the light output, and suggest that headlamps using newer kinds of sources such as metal-halide (HID) bulbs may, through filtration, give off less visually distracting light while still having greater light output than halogen ones.

Selective yellow headlamps are no longer common, but are permitted in various countries throughout Europe as well as in non-European locales such as South Korea, Japan, and New Zealand.
In Iceland, yellow headlamps are allowed, and the vehicle regulations in Monaco still officially require selective yellow light from all vehicles' low-beam and high-beam headlamps, and from fog lamps if present.

In France, a statute passed in November 1936, based on advice from the Central Commission for Automobiles and for Traffic in General, required selective yellow headlights to be fitted. The mandate for yellow headlamps was enacted to reduce driver fatigue from discomfort glare. The requirement initially applied to vehicles registered for road use after April 1937, but was intended to extend to all vehicles, through retrofitting of selective yellow lights on older vehicles, from the start of 1939. The later stages of the implementation were disrupted in September 1939 by the outbreak of war.

The French yellow-light mandate was based on observations by the French Academy of Sciences in 1934, when the academy recorded that selective yellow light was less dazzling than white light and that it diffused less in fog than green or blue light. Yellow light was obtained by means of yellow glass for the headlight bulb or lens, a yellow coating on a colourless bulb, lens, or reflector, or a yellow filter between the bulb and the lens.
Filtration losses reduced the emitted light intensity by about 18 percent, which might have contributed to the reduced glare.

The mandate was in effect until December 1992, so for many years yellow headlights visually marked French-registered cars wherever they were seen, though some French drivers are said to have switched to white headlamps despite the requirement for yellow ones. The requirement was criticised as a trade barrier in the automobile sector; French politician Jean-Claude Martinez described it as a protectionist law. Formal research found, at best, a small improvement in visual acuity with yellow rather than white headlights, and French automaker Peugeot estimated that white headlamps produce 20 to 30 percent more light—though without explaining why this estimate was larger than the 15 to 18 percent value measured in formal research—and wanted drivers of its cars to get the benefit of the extra illumination.

More generally, country-specific vehicle technical regulations in Europe were regarded as a costly nuisance. In a survey published in 1988, automakers gave a range of responses when asked what it cost to supply a car with yellow headlamps for France. General Motors and Lotus said there was no additional cost, Rover said the additional cost was marginal, and Volkswagen said yellow headlamps added 28 Deutsche Marks to the cost of vehicle production. Addressing the French requirement for yellow lights (among other country-specific lighting requirements) was undertaken as part of an effort toward common vehicle technical standards throughout the European Community.
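A note on those percentages: a filtration loss stated relative to the white baseline does not translate one-for-one into the extra light white gives over yellow, because the two figures use different denominators. The arithmetic below is purely illustrative, using the 15 and 18 percent loss figures quoted above:

```python
# A filter that loses fraction f of the white light leaves (1 - f) of it.
# White output relative to the filtered (yellow) output is then 1 / (1 - f),
# so white exceeds yellow by 1 / (1 - f) - 1.
def white_advantage(loss_fraction):
    return 1.0 / (1.0 - loss_fraction) - 1.0

for f in (0.15, 0.18):
    print(f"{f:.0%} filtration loss -> white gives {white_advantage(f):.1%} more light")
```

On this arithmetic, an 18 percent filtration loss corresponds to white producing about 22 percent more light than yellow, so the measured figures and the low end of Peugeot's 20 to 30 percent estimate are closer than the raw numbers suggest.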
A provision in EU Council Directive 91/663, issued on 10 December 1991, specified white headlamps for all new vehicle type-approvals granted by the EC after 1 January 1993 and stipulated that from that date EC (later EU) member states would not be permitted to refuse entry to a vehicle meeting the lighting standards contained in the amended document—so France would no longer be able to refuse entry to a vehicle with white headlights. The directive was adopted unanimously by the council, and hence with France's vote.

Though no longer required in France, selective yellow headlamps remain legal there; the current regulation stipulates that "every motor vehicle must be equipped, at the front, with two or four lights, creating in a forward direction selective yellow or white light permitting efficient illumination of the road at night for a distance, in clear conditions, of 100 metres".

Optical systems: Reflector lamps

Lens optics

A light source (filament or arc) is placed at or near the focus of a reflector, which may be parabolic or of non-parabolic complex shape. Fresnel and prism optics moulded into the headlamp lens refract (shift) parts of the light laterally and vertically to provide the required light distribution pattern. Most sealed-beam headlamps have lens optics.

Reflector optics

Starting in the 1980s, headlamp reflectors began to evolve beyond the simple stamped steel parabola. The 1983 Austin Maestro was the first vehicle equipped with Lucas-Carello's homofocal reflectors, which comprised parabolic sections of different focal lengths to improve the efficiency of light collection and distribution. CAD technology allowed the development of reflector headlamps with nonparabolic, complex-shape reflectors.
First commercialised by Valeo under their Cibié brand, these headlamps would revolutionise automobile design. The 1987 US-market Dodge Monaco/Eagle Premier twins and the European Citroën XM were the first cars with complex-reflector headlamps with faceted optic lenses. General Motors' Guide Lamp division in America had experimented with clear-lens complex-reflector lamps in the early 1970s and achieved promising results, but the US-market 1990 Honda Accord was the first with clear-lens multi-reflector headlamps; these were developed by Stanley in Japan.

The optics to distribute the light in the desired pattern are designed into the reflector itself, rather than into the lens. Depending on the development tools and techniques in use, the reflector may be engineered from the start as a bespoke shape, or it may start as a parabola standing in for the size and shape of the completed package. In the latter case, the entire surface area is modified so as to produce individual segments of specifically calculated, complex contours. The shape of each segment is designed such that their cumulative effect produces the required light distribution pattern.

Modern reflectors are commonly made of compression-moulded or injection-moulded plastic, though glass and metal optic reflectors also exist. The reflective surface is vapour-deposited aluminium, with a clear overcoating to prevent the extremely thin aluminium from oxidizing. Extremely tight tolerances must be maintained in the design and production of complex-reflector headlamps.

Optical systems: Dual-beam reflector headlamps

Night driving is difficult and dangerous due to the blinding glare of headlights from oncoming traffic. Headlamps that satisfactorily illuminate the road ahead without causing glare have long been sought. The first solutions involved resistance-type dimming circuits, which decreased the intensity of the headlamps. This approach yielded to tilting reflectors, and later to dual-filament bulbs with a high and a low beam.
In a two-filament headlamp, only one filament can be exactly at the focal point of the reflector. There are two primary means of producing two different beams from a two-filament bulb in a single reflector.

Optical systems: American system

One filament is located at the focal point of the reflector. The other filament is shifted axially and radially away from the focal point. In most two-filament sealed beams and in two-filament replaceable bulbs of types 9004, 9007, and H13, the high-beam filament is at the focal point and the low-beam filament is off focus. For use in right-traffic countries, the low-beam filament is positioned slightly upward, forward, and leftward of the focal point, so that when it is energized, the beam is widened and shifted slightly downward and rightward of the headlamp axis. Transverse-filament bulbs such as the 9004 can only be used with the filaments horizontal, but axial-filament bulbs can be rotated or "clocked" by the headlamp designer to optimize the beam pattern or to set the traffic-handedness of the low beam. The latter is accomplished by clocking the low-beam filament in an upward-forward-leftward position to produce a right-traffic low beam, or in an upward-forward-rightward position to produce a left-traffic low beam.

The opposite tactic has also been employed in certain two-filament sealed beams: the low-beam filament is placed at the focal point to maximize light collection by the reflector, and the high-beam filament slightly rearward, rightward, and downward of the focal point. The relative directional shift between the two beams is the same with either technique – in a right-traffic country, the low beam is slightly downward-rightward and the high beam slightly upward-leftward, relative to one another – but the lens optics must be matched to the filament placements selected.
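The inversion described above (a filament moved up and to the left shifts the beam down and to the right) follows from first-order optics of a paraboloidal reflector: a small transverse displacement of the source from the focal point tilts the collimated beam by roughly the displacement divided by the focal length, in the opposite direction. A minimal sketch of that small-angle relation; the focal length and offsets are made-up illustrative numbers, not taken from any real lamp:

```python
def beam_tilt_rad(offset_mm, focal_length_mm):
    """First-order tilt of a paraboloid's collimated beam when the source
    is displaced transversely from the focal point: opposite in direction,
    magnitude roughly offset / focal_length (small-angle approximation)."""
    return -offset_mm / focal_length_mm

# Hypothetical reflector with a 25 mm focal length; filament 1 mm upward (+)
# and 1 mm leftward (+) of focus: the beam tilts downward and rightward,
# matching the upward-forward-leftward clocking described in the text.
tilt_vertical = beam_tilt_rad(1.0, 25.0)    # negative: beam tilts downward
tilt_horizontal = beam_tilt_rad(1.0, 25.0)  # negative: beam tilts rightward
print(tilt_vertical, tilt_horizontal)
```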
Optical systems: European system

The traditional European method of achieving low and high beams from a single bulb involves two filaments along the axis of the reflector. The high-beam filament is on the focal point, while the low-beam filament is approximately 1 cm forward of the focal point and 3 mm above the axis. Below the low-beam filament is a cup-shaped shield (called a "Graves shield") spanning an arc of 165°. When the low-beam filament is illuminated, this shield casts a shadow on the corresponding lower area of the reflector, blocking downward light rays that would otherwise strike the reflector and be cast above the horizon. The bulb is rotated (or "clocked") within the headlamp to position the Graves shield so as to allow light to strike a 15° wedge of the lower half of the reflector. This is used to create the upsweep or upstep characteristic of ECE low-beam light distributions. The bulb's rotative position within the reflector depends on the type of beam pattern to be produced and the traffic directionality of the market for which the headlamp is intended.

This system was first used with the tungsten incandescent Bilux/Duplo R2 bulb of 1954, and later with the halogen H4 bulb of 1971. In 1992, US regulations were amended to permit H4 bulbs, redesignated HB2 and 9003 and manufactured to slightly different production tolerances. These are physically and electrically interchangeable with H4 bulbs. Similar optical techniques are used, but with different reflector or lens optics to create a US beam pattern rather than a European one.

Each system has its advantages and disadvantages. The American system historically permitted a greater overall amount of light within the low beam, since the entire reflector and lens area is used, but at the same time it has traditionally offered much less control over the upward light that causes glare, and for that reason has been largely rejected outside the US.
In addition, the American system makes it difficult to create markedly different low- and high-beam light distributions: the high beam is usually a rough copy of the low beam, shifted slightly upward and leftward. The European system traditionally produced low beams containing less overall light, because only 60% of the reflector's surface area is used to create the low beam; however, low-beam focus and glare control are easier to achieve. In addition, the lower 40% of the reflector and lens are reserved for high-beam formation, which facilitates the optimization of both low and high beams.

Optical systems: Developments in the 1990s and 2000s

Complex-reflector technology in combination with new bulb designs such as the H13 is enabling the creation of European-type low- and high-beam patterns without the use of a Graves shield, while the 1992 US approval of the H4 bulb has made the traditionally European 60%/40% optical area division for low and high beam common in the US. Therefore, the difference in active optical area and overall beam light content no longer necessarily exists between US and ECE beams. Dual-beam HID headlamps employing reflector technology have been made using adaptations of both techniques.

Optical systems: Projector (polyellipsoidal) lamps

In this system, a filament is located at one focus of an ellipsoidal reflector, with a condenser lens at the front of the lamp. A shade is located at the image plane, between the reflector and lens, and the projection of the top edge of this shade provides the low-beam cutoff. The shape of the shade edge and its exact position in the optical system determine the shape and sharpness of the cutoff. The shade may be lowered by a solenoid-actuated pivot to provide the low beam, and removed from the light path for the high beam; such optics are known as BiXenon or BiHalogen projectors. If the cutoff shade is fixed in the light path, separate high-beam lamps are required.
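The projector layout exploits the defining property of the ellipse: a ray leaving one focus reflects off the surface through the other focus, which is why the filament or arc sits at the first focus and the shade edge near the second. A quick numerical check of that property on a generic 2-D ellipse (illustrative geometry only, not any particular lamp):

```python
import math

def focus_deviation(a, b, t):
    """Fire a ray from focus F1 to the ellipse point P(t), mirror-reflect it
    off the surface, and return the deviation (cross product of unit vectors)
    between the reflected ray and the direction from P to the second focus F2.
    Zero deviation means the reflected ray passes exactly through F2."""
    c = math.sqrt(a * a - b * b)
    f1, f2 = (-c, 0.0), (c, 0.0)
    p = (a * math.cos(t), b * math.sin(t))
    # incoming ray direction F1 -> P, normalized
    d = (p[0] - f1[0], p[1] - f1[1])
    dn = math.hypot(*d); d = (d[0] / dn, d[1] / dn)
    # outward normal at P for x^2/a^2 + y^2/b^2 = 1, normalized
    n = (p[0] / a**2, p[1] / b**2)
    nn = math.hypot(*n); n = (n[0] / nn, n[1] / nn)
    # mirror reflection of d about the surface normal
    dot = d[0] * n[0] + d[1] * n[1]
    r = (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])
    # direction P -> F2, normalized
    g = (f2[0] - p[0], f2[1] - p[1])
    gn = math.hypot(*g); g = (g[0] / gn, g[1] / gn)
    return abs(r[0] * g[1] - r[1] * g[0])

# Deviation is numerically zero at any point on the ellipse:
print(max(focus_deviation(2.0, 1.0, t) for t in (0.3, 1.2, 2.5)))
```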
The condenser lens may have slight Fresnel rings or other surface treatments to reduce cutoff sharpness. Modern condenser lenses incorporate optical features specifically designed to direct some light upward towards the locations of retroreflective overhead road signs.

Hella introduced ellipsoidal optics for acetylene headlamps in 1911, but following the electrification of vehicle lighting, this optical technique was not used for many decades. The first modern polyellipsoidal (projector) automotive lamp was the Super-Lite, an auxiliary headlamp produced in a joint venture between Chrysler Corporation and Sylvania and optionally installed in 1969 and 1970 full-size Dodge automobiles. It used an 85-watt transverse-filament tungsten-halogen bulb and was intended as a mid-beam, to extend the reach of the low beams during turnpike travel when low beams alone were inadequate but high beams would produce excessive glare.

Projector main headlamps appeared in 1981 on the Audi Quartz, a concept car designed by Pininfarina for the Geneva Auto Salon. Developed more or less simultaneously in Germany by Hella and Bosch and in France by Cibié, the projector low beam permitted accurate beam focus and a much smaller-diameter optical package, though a much deeper one, for any given beam output. The 1986 BMW 7 Series (E32) was the first volume-production car to use polyellipsoidal low-beam headlamps. The main disadvantage of this type of headlamp is the need to accommodate the physical depth of the assembly, which may extend far back into the engine compartment.

Light sources: Tungsten

The first electric headlamp light source was the tungsten filament, operating in a vacuum or inert-gas atmosphere inside the headlamp bulb or sealed beam. Compared to newer-technology light sources, tungsten filaments give off small amounts of light relative to the power they consume.
Also, during normal operation of such lamps, tungsten boils off the surface of the filament and condenses on the bulb glass, blackening it. This reduces the light output of the filament and blocks some of the light that would pass through an unblackened bulb glass, though blackening was less of a problem in sealed beam units: their large interior surface area minimized the thickness of the tungsten accumulation. For these reasons, plain tungsten filaments are all but obsolete in automotive headlamp service.

Light sources: Tungsten-halogen

Tungsten-halogen technology (also called "quartz-halogen", "quartz-iodine", or "iodine cycle") increases the effective luminous efficacy of a tungsten filament: the filament can be operated at a higher temperature, which yields more lumens output per watt of input, while the halogen regeneration cycle gives the lamp a much longer brightness lifetime than a similar filament operating without it. At equal luminosity, halogen-cycle bulbs also have longer lifetimes.

European-designed halogen headlamp light sources are generally configured to provide more light at the same power consumption as their lower-output plain tungsten counterparts. By contrast, many US-based designs are configured to reduce or minimize the power consumption while keeping light output above the legal minimum requirements; some US tungsten-halogen headlamp light sources produce less initial light than their non-halogen counterparts. A slight theoretical fuel-economy benefit and reduced vehicle construction cost, through lower wire and switch ratings, were the benefits claimed when American industry first chose how to implement tungsten-halogen technology. There was an improvement in seeing distance with US halogen high beams, which were permitted for the first time to produce 150,000 candela (cd) per vehicle, double the non-halogen limit of 75,000 cd but still well below the international European limit of 225,000 cd.
After replaceable halogen bulbs were permitted in US headlamps in 1983, the development of US bulbs continued to favor long bulb life and low power consumption, while European designs continued to prioritise optical precision and maximum output.

The H1 lamp was the first tungsten-halogen headlamp light source. It was introduced in 1962 by a consortium of European bulb and headlamp makers. This bulb has a single axial filament that consumes 55 watts at 12.0 volts, and produces 1550 lumens ±15% when operated at 13.2 V. The H2 (55 W @ 12.0 V, 1820 lm @ 13.2 V) followed in 1964, and the transverse-filament H3 (55 W @ 12.0 V, 1450 lm ±15%) in 1966. H1 still sees wide use in low beams, high beams, and auxiliary fog and driving lamps, as does H3. The H2 is no longer a current type, since it requires an intricate bulb holder interface to the lamp, has a short life, and is difficult to handle; for those reasons, H2 was withdrawn from ECE Regulation 37 for use in new lamp designs (though H2 bulbs are still manufactured for replacement purposes in existing lamps). H1 and H3 remain current, and these two bulbs were legalised in the United States in 1993. More recent single-filament bulb designs include the H7 (55 W @ 12.0 V, 1500 lm ±10% @ 13.2 V), H8 (35 W @ 12.0 V, 800 lm ±15% @ 13.2 V), H9 (65 W @ 12.0 V, 2100 lm ±10% @ 13.2 V), and H11 (55 W @ 12.0 V, 1350 lm ±10% @ 13.2 V). 24-volt versions of many bulb types are available for use in trucks, buses, and other commercial and military vehicles.

The first dual-filament halogen bulb to produce both a low and a high beam, the H4 (60/55 W @ 12 V, 1650/1000 lm ±15% @ 13.2 V), was released in 1971 and quickly became the predominant headlamp bulb throughout the world except in the United States, where the H4 is still not legal for automotive use.
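A rough comparison of these single-filament sources can be made by dividing luminous flux by rated wattage, keeping in mind that the wattages above are specified at 12.0 V while the flux figures are measured at the 13.2 V test voltage, so the quotient is only a ballpark figure, not a true lumens-per-watt rating:

```python
# Nominal figures quoted above (rated watts at 12.0 V, nominal lumens at 13.2 V).
bulbs = {
    "H1": (55, 1550),
    "H7": (55, 1500),
    "H8": (35, 800),
    "H9": (65, 2100),
    "H11": (55, 1350),
}

# Approximate efficacy in lm/W; mixed test voltages make this indicative only.
approx_efficacy = {name: round(lm / w, 1) for name, (w, lm) in bulbs.items()}
print(approx_efficacy)  # H9 comes out highest, at about 32 lm/W
```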
In 1989, the United States created its own standard for a bulb called HB2: almost identical to the H4 except with more stringent constraints on filament geometry and positional variance, and with power consumption and light output expressed at the US test voltage of 12.8 V.

The first US halogen headlamp bulb, introduced in 1983, was the HB1/9004. It is a 12.8-volt, transverse dual-filament design that produces 700 lumens on low beam and 1200 lumens on high beam. The 9004 is rated for 65 watts (high beam) and 45 watts (low beam) at 12.8 volts. Other US-approved halogen bulbs include the HB3 (65 W, 12.8 V), HB4 (55 W, 12.8 V), and HB5 (65/55 W, 12.8 V). All of the European-designed and internationally approved bulbs except the H4 are presently approved for use in headlamps complying with US requirements.

Light sources: Halogen infrared reflective (HIR)

A further development of the tungsten-halogen bulb has a dichroic coating that passes visible light and reflects infrared radiation. The glass in such a bulb may be spherical or tubular. The reflected infrared radiation strikes the filament located at the center of the glass envelope, heating the filament to a greater degree than can be achieved through resistive heating alone. The superheated filament emits more light without an increase in power consumption.

Light sources: High-intensity discharge (HID)

High-intensity discharge (HID) lamps produce light with an electric arc rather than a glowing filament. The high intensity of the arc comes from metallic salts that are vaporized within the arc chamber. These lamps have a higher efficacy than tungsten lamps. Because of the increased amounts of light available from HID lamps relative to halogen bulbs, HID headlamps producing a given beam pattern can be made smaller than halogen headlamps producing a comparable beam pattern.
Alternatively, the larger size can be retained, in which case the HID headlamp can produce a more robust beam pattern. Automotive HID may be generically called "xenon headlamps", though they are actually metal-halide lamps that contain xenon gas. The xenon gas allows the lamps to produce minimally adequate light immediately upon start, and shortens the run-up time. Using argon instead, as is commonly done in street lights and other stationary metal-halide lamp applications, causes lamps to take several minutes to reach their full output. The light from HID headlamps can exhibit a distinct bluish tint when compared with tungsten-filament headlamps. Retrofitment: When a halogen headlamp is retrofitted with an HID bulb, light distribution and output are altered. In the United States, vehicle lighting that does not conform to FMVSS 108 is not street legal. Glare is produced, and the headlamp's type approval or certification becomes invalid with the altered light distribution, so the headlamp is no longer street-legal in some locales. In the US, suppliers, importers and vendors that offer non-compliant kits are subject to civil fines. By October 2004, the NHTSA had investigated 24 suppliers; all the investigations resulted in termination of sale or recalls. In Europe and the many non-European countries applying ECE Regulations, even headlamps designed as such for HID bulbs must be equipped with lens cleaning and automatic self-leveling systems, except on motorcycles. These systems are usually absent on vehicles not originally equipped with HID lamps. History: The first production low beam HID headlamps were manufactured by Hella and Bosch beginning in 1992 for optional availability on the BMW 7 Series. This first system uses a built-in, non-replaceable bulb without a UV-blocking glass shield or touch-sensitive electrical safety cutout, designated D1 – a designation that would be recycled years later for a wholly different type of lamp.
The AC ballast is about the size of a building brick. In 1996 the first American-made effort at HID headlamps appeared on the 1996–98 Lincoln Mark VIII, which uses reflector headlamps with an unmasked, integral-ignitor lamp made by Sylvania and designated Type 9500. This was the only system ever to operate on DC; its reliability proved inferior to that of the AC systems. The Type 9500 system was not used on any other models and was discontinued after Osram's takeover of Sylvania in 1997. All HID headlamps worldwide presently use standardized AC-operated bulbs and ballasts. In 1999 the first HID headlamps providing both low and high beam were introduced on the Mercedes-Benz CL-Class (C215). Operation: HID headlamp bulbs do not run on low-voltage DC, so they require a ballast with either an internal or external ignitor. The ignitor is integrated into the bulb in D1 and D3 systems, and is either a separate unit or part of the ballast in D2 and D4 systems. The ballast controls the current to the bulb. The ignition and ballast operation proceeds in three stages: Ignition: a high voltage pulse is used to produce an electrical arc – in a manner similar to a spark plug – which ionizes the xenon gas, creating a conducting channel between the tungsten electrodes. Electrical resistance is reduced within the channel, and current flows between the electrodes. Initial phase: the bulb is driven with controlled overload. Because the arc is operated at high power, the temperature in the capsule rises quickly. The metallic salts vaporize, and the arc is intensified and made spectrally more complete. The resistance between the electrodes also falls; the electronic ballast control gear registers this and automatically switches to continuous operation. Continuous operation: all metal salts are in the vapor phase, the arc has attained its stable shape, and the luminous efficacy has attained its nominal value.
The ballast now supplies stable electrical power so the arc will not flicker. Stable operating voltage is 85 volts AC in D1 and D2 systems, 42 volts AC in D3 and D4 systems. The frequency of the square-wave alternating current is typically 400 hertz or higher. The control is often located near the steering wheel, and a specific indicator is shown on the dashboard. Bulb types: HID headlamps produce between 2,800 and 3,500 lumens from between 35 and 38 watts of electrical power, while halogen filament headlamp bulbs produce between 700 and 2,100 lumens from between 40 and 72 watts at 12.8 V. Current-production bulb categories are D1S, D1R, D2S, D2R, D3S, D3R, D4S, and D4R. The D stands for discharge, and the number is the type designator. The final letter describes the outer shield. The arc within an HID headlamp bulb generates considerable short-wave ultraviolet (UV) light, but none of it escapes the bulb, because a UV-absorbing hard glass shield is incorporated around the bulb's arc tube. This is important to prevent degradation of UV-sensitive components and materials in headlamps, such as polycarbonate lenses and reflector hardcoats. "S" lamps – D1S, D2S, D3S, and D4S – have a plain glass shield and are primarily used in projector-type optics. "R" lamps – D1R, D2R, D3R, and D4R – are designed for use in reflector-type headlamp optics. They have an opaque mask covering specific portions of the shield, which facilitates the optical creation of the light-dark boundary (cutoff) near the top of a low-beam light distribution. Automotive HID lamps nevertheless emit considerable near-UV light, despite the shield. Color: The correlated color temperature of factory-installed automotive HID headlamps is around 4200 K, while tungsten-halogen lamps are at 3000 K to 3550 K. The spectral power distribution (SPD) of an automotive HID headlamp is discontinuous and spiky, while the SPD of a filament lamp, like that of the sun, is a continuous curve.
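The D-number-letter naming scheme described under "Bulb types", together with the operating voltages given above (85 V AC for D1/D2, 42 V AC for D3/D4) and the mercury content noted later in the article (D1/D2 contain mercury; D3/D4 do not), can be summarized in a small Python sketch. The parser itself is hypothetical, written here only to illustrate the scheme:

```python
# Hypothetical parser illustrating the HID bulb naming scheme: "D" for
# discharge, a numeric type designator, and "S" (plain shield, projector
# optics) or "R" (masked shield, reflector optics).
def parse_hid_designation(code):
    if not (code.startswith("D") and code[-1] in "SR" and code[1:-1].isdigit()):
        raise ValueError(f"not an HID bulb designation: {code!r}")
    number = int(code[1:-1])
    shield = "plain (projector optics)" if code[-1] == "S" else "masked (reflector optics)"
    # Per the article: D1/D2 run at 85 V AC and contain mercury;
    # D3/D4 run at 42 V AC and are mercury-free.
    return {
        "type": number,
        "shield": shield,
        "arc_voltage_V": 85 if number in (1, 2) else 42,
        "mercury_free": number in (3, 4),
    }
```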
Moreover, the color rendering index (CRI) of tungsten-halogen headlamps (98) is much closer than that of HID headlamps (~75) to standardized sunlight (100). Studies have shown no significant safety effect of this degree of CRI variation in headlighting. Advantages (increased safety): Automotive HID lamps offer about 3000 lumens and 90 Mcd/m2, versus 1400 lumens and 30 Mcd/m2 offered by halogen lamps. In a headlamp optic designed for use with an HID lamp, the HID lamp produces more usable light. Studies have demonstrated that drivers react faster and more accurately to roadway obstacles with good HID headlamps than with halogen ones; hence, good HID headlamps contribute to driving safety. The contrary argument is that glare from HID headlamps can reduce traffic safety by interfering with other drivers' vision. Efficacy and output: Luminous efficacy is the measure of how much light is produced versus how much energy is consumed. HID lamps give higher efficacy than halogen lamps. The highest-intensity halogen lamps, H9 and HIR1, produce 2100 to 2530 lumens from approximately 70 watts at 13.2 volts. A D2S HID bulb produces 3200 lumens from approximately 42 watts during stable operation. The reduced power consumption means less fuel consumption, and in turn less CO2 emission per vehicle fitted with HID lighting (1.3 g/km, assuming that 30% of engine running time is with the lights on). Longevity: The average service life of an HID bulb is 2000 hours, compared to between 450 and 1000 hours for a halogen lamp. Disadvantages (glare): Vehicles equipped with HID headlamps (except motorcycles) are required by ECE Regulation 48 also to be equipped with headlamp lens cleaning systems and automatic beam leveling control. Both of these measures are intended to reduce the tendency for high-output headlamps to cause high levels of glare to other road users.
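The efficacy comparison above reduces to a one-line calculation (lumens out per watt in). A Python sketch using the figures quoted in the text:

```python
# Luminous efficacy = lumens produced per watt consumed, using the
# figures quoted in the text above.
def efficacy(lumens, watts):
    return lumens / watts

halogen_h9 = efficacy(2100, 70)  # high-intensity halogen H9: about 30 lm/W
hid_d2s = efficacy(3200, 42)     # D2S HID in stable operation: about 76 lm/W
assert hid_d2s > 2 * halogen_h9  # the HID bulb is more than twice as efficacious
```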
In North America, ECE R48 does not apply; lens cleaners and beam levelers are permitted but not required. HID headlamps are markedly less prevalent in the US, where they have produced significant glare complaints. Scientific study of headlamp glare has shown that for any given intensity level, the light from HID headlamps is 40% more glaring than the light from tungsten-halogen headlamps. Mercury content: HID headlamp bulb types D1R, D1S, D2R, D2S and 9500 contain the toxic heavy metal mercury. The disposal of mercury-containing vehicle parts is increasingly regulated throughout the world, for example under US EPA regulations. Newer HID bulb designs D3R, D3S, D4R, and D4S, in production since 2004, contain no mercury, but are not electrically or physically compatible with headlamps designed for previous bulb types. Cost: HID headlamps are significantly more costly to produce, install, purchase, and repair. The extra cost of the HID lights may exceed the fuel cost savings from their reduced power consumption, though some of this cost disadvantage is offset by the longer lifespan of the HID bulb relative to halogen bulbs. LED: Timeline: Automotive headlamp applications using light-emitting diodes (LEDs) have been under development since 2004. In 2006 the first series-production LED low beams were factory-installed on the Lexus LS 600h / LS 600h L. The high beam and turn signal functions used filament bulbs. The headlamp was supplied by Koito Industries Ltd.
In 2007 the first headlamps with all functions provided by LEDs, supplied by AL-Automotive Lighting, were introduced on the V10 Audi R8 sports car (except in North America). In 2009 Hella headlamps on the 2009 Cadillac Escalade Platinum became the first all-LED headlamps for the North American market. In 2010 the first all-LED headlamps with adaptive high beam and what Mercedes called the "Intelligent Light System" were introduced on the 2011 Mercedes CLS. In 2013 the first digitally controlled full-LED glare-free "Matrix LED" adaptive headlamps were introduced by Audi on the facelifted A8, with 25 individual LED segments. The system dims the light that would shine directly onto oncoming and preceding vehicles, but continues to cast its full light on the zones between and beside them. This works because the LED high beams are split into numerous individual light-emitting diodes. High-beam LEDs in both headlights are arranged in a matrix and adapt fully electronically to the surroundings in milliseconds. They are activated and deactivated or dimmed individually by a control unit. In addition, the headlights also function as a cornering light: using predictive route data supplied by the MMI navigation plus, the focus of the beam is shifted towards the bend even before the driver turns the steering wheel. In 2014 Mercedes-Benz introduced a similar technology, called Multibeam LED, on the facelifted CLS-Class, with 24 individual segments. As of 2010, LED headlamps such as those available on the Toyota Prius were providing output between that of halogen and HID headlamps, with system power consumption slightly lower than other headlamps, longer lifespans, and more flexible design possibilities. As LED technology continued to evolve, the performance of LED headlamps was predicted to improve to approach, meet, and perhaps one day surpass that of HID headlamps.
That occurred by mid-2013, when the Mercedes S-Class came with LED headlamps giving higher performance than comparable HID setups. Cold lenses: Before LEDs, all light sources used in headlamps (tungsten, halogen, HID) emitted infrared energy that can thaw built-up snow and ice off a headlamp lens and prevent further accumulation. LEDs do not. Some LED headlamps move heat from the heat sink on the back of the LEDs to the inner face of the front lens to warm it, while on others no provision is made for lens thawing. Laser: A laser headlamp uses mirrors to direct a laser onto a phosphor, which then emits light. Laser lamps use half as much power as LED lamps. They were first developed by Audi for use as headlamps in the 24 Hours of Le Mans. In 2014, the BMW i8 became the first production car to be sold with an auxiliary high-beam lamp based on this technology. The limited-production Audi R8 LMX uses lasers for its spot lamp feature, providing illumination for high-speed driving in low-light conditions. The Rolls-Royce Phantom VIII employs laser headlights with a high beam range of over 600 meters. Automatic headlamps: Automatic systems for activating the headlamps have been available since the mid-1950s, originally only on luxury American models such as Cadillac (with its Twilight Sentinel), Lincoln, and Imperial. Basic implementations turn the headlights on at dusk and off at dawn; modern implementations use sensors to detect the amount of exterior light. UN R48 has mandated the installation of automatic headlamps since 30 July 2016: on vehicles equipped with daytime running lamps, the dipped-beam headlamps must switch on automatically when ambient light falls below 1,000 lux, such as in tunnels and other dark environments.
In such situations a daytime running lamp alone would present more glare to oncoming drivers, impairing their vision; automatically switching from the daytime running lamp to the dipped-beam headlamp removes this hazard and secures the safety benefit. Beam aim control: Headlamp leveling systems: The 1948 Citroën 2CV was launched in France with a manual headlamp leveling system, controlled by the driver with a knob through a mechanical rod linkage. This allowed the driver to adjust the vertical aim of the headlamps to compensate for the passenger and cargo load in the vehicle. In 1954, Cibié introduced an automatic headlamp leveling system linked to the vehicle's suspension system to keep the headlamps correctly aimed regardless of vehicle load, without driver intervention; the first vehicle to be so equipped was the Panhard Dyna Z. Beginning in the 1970s, Germany and some other European countries began requiring remote-control headlamp leveling systems that permit the driver to lower the lamps' aim by means of a dashboard control lever or knob if the rear of the vehicle is weighted down with passengers or cargo, which would tend to raise the lamps' aim angle and create glare. Such systems typically use stepper motors at the headlamp and a rotary switch on the dash marked "0", "1", "2", "3" for different beam heights, "0" being the "normal" (and highest) position for when the car is lightly loaded. Internationalized ECE Regulation 48, in force in most of the world outside North America, currently specifies a limited range within which the vertical aim of the headlamps must be maintained under various vehicle load conditions; if the vehicle isn't equipped with an adaptive suspension sufficient to keep the headlamps aimed correctly regardless of load, a headlamp leveling system is required.
The regulation stipulates a more stringent version of this anti-glare measure if the vehicle has headlamps with low beam light source(s) that produce more than 2,000 lumens – xenon bulbs and certain high-power halogens, for example. Such vehicles must be equipped with headlamp self-leveling systems that sense the vehicle's degree of squat due to cargo load and road inclination, and automatically adjust the headlamps' vertical aim to keep the beam correctly oriented without any action required by the driver. Leveling systems are not required by the North American regulations. A 2007 study, however, suggests that automatic levelers on all headlamps, not just those with high-power light sources, would give drivers substantial safety benefits of better seeing and less glare. Directional headlamps: These provide improved lighting for cornering. Some automobiles have their headlamps connected to the steering mechanism so the lights follow the movement of the front wheels. Czechoslovak Tatra was an early implementer of such a technique, producing in the 1930s a vehicle with a central directional headlamp. The American 1948 Tucker Sedan was likewise equipped with a third central headlamp connected mechanically to the steering system. The 1967 French Citroën DS and 1970 Citroën SM were equipped with an elaborate dynamic headlamp positioning system that adjusted the inboard headlamps' horizontal and vertical position in response to inputs from the vehicle's steering and suspension systems. At that time US regulations required this system to be removed from those models sold in the U.S. The D series cars equipped with the system used cables connecting the long-range headlamps to a lever on the steering relay, while the inner long-range headlamps on the SM used a sealed hydraulic system with a glycerin-based fluid instead of mechanical cables.
Both these systems were of the same design as their respective cars' headlamp leveling systems. The cables of the D system tended to rust in the cable sheaths, while the SM system gradually leaked fluid, causing the long-range lamps to turn inward, looking "cross-eyed". A manual adjustment was provided, but once it reached the end of its travel the system required refilling with fluid or replacement of the tubes and dashpots. Non-US-market Citroën SM vehicles were equipped with heating of the headlamp cover glasses, the heat supplied by ducts carrying warm air from the radiator exhaust to the space between the headlamp lenses and the cover glasses. This provided demisting/defogging of the entire interior of the cover glasses, keeping the glass clear of mist over the entire surface. The glasses have thin stripes on their surfaces that are heated by the headlight beams; the ducted warm air, however, provides demisting even when the headlamps are not turned on. The glasses' stripes on both D and SM cars appear similar to rear windshield electric defogger heating strips, but they are passive, not electrified. Advanced front-lighting system (AFS): Beginning in the 2000s, there was a resurgence of interest in the idea of moving or optimizing the headlight beam in response not only to vehicular steering and suspension dynamics, but also to ambient weather and visibility conditions, vehicle speed, and road curvature and contour. A task force under the EUREKA organization, composed primarily of European automakers, lighting companies and regulators, began working to develop design and performance specifications for what is known as the Adaptive Front-Lighting System, commonly AFS. Manufacturers such as BMW, Toyota, Škoda, and Vauxhall/Opel have released vehicles equipped with AFS since 2003.
Rather than the mechanical linkages employed in earlier directional-headlamp systems, AFS relies on electronic sensors, transducers, and actuators. Other AFS techniques include special auxiliary optical systems within a vehicle's headlamp housings. These auxiliary systems may be switched on and off as the vehicle and operating conditions call for light or darkness at the angles covered by the beam the auxiliary optics produce. A typical system measures steering angle and vehicle speed to swivel the headlamps. The most advanced AFS systems use GPS signals to anticipate changes in road curvature, rather than simply reacting to them. Automatic beam switching: Even when conditions would warrant the use of high-beam headlamps, drivers often do not use them. There have long been efforts, particularly in America, to devise an effective automatic beam selection system to relieve the driver of the need to select and activate the correct beam as traffic, weather, and road conditions change. General Motors introduced the first automatic headlight dimmer, called the 'Autronic Eye', in 1952 on their Cadillac, Buick, and Oldsmobile models; the feature was offered in other GM vehicles starting in 1953. The system's phototube and associated circuitry were housed in a gunsight-like tube atop the dashboard. An amplifier module located in the engine compartment controlled the headlight relay using signals from the dashboard-mounted tube unit. This pioneering setup gave way in 1958 to a system called 'GuideMatic', in reference to GM's Guide lighting division. The GuideMatic had a more compact dashtop housing and a control knob that allowed the driver to adjust the system's sensitivity threshold, determining when the headlamps would be dipped from high to low beam in response to an oncoming vehicle. By the early 1970s, this option was withdrawn from all GM models except Cadillac, on which GuideMatic was available through 1988.
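The typical AFS behaviour described above (swivelling as a function of steering angle and vehicle speed) can be caricatured in a few lines of Python. The gains, speed threshold, and clamp below are invented purely for illustration and correspond to no particular manufacturer's system:

```python
# Toy AFS controller sketch: swivel proportional to steering angle,
# with a gentler gain at higher speed and a hard mechanical clamp.
# All constants here are illustrative assumptions, not real calibrations.
def swivel_angle(steering_deg, speed_kmh, max_swivel=15.0):
    """Return headlamp swivel in degrees (sign follows steering direction)."""
    gain = 0.1 if speed_kmh > 80 else 0.2  # assumed: softer response at speed
    angle = steering_deg * gain
    return max(-max_swivel, min(max_swivel, angle))  # respect actuator limits
```

A real system would also filter the sensor inputs and rate-limit the actuator so the beam does not dart with every steering correction.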
The photosensor for this system used an amber lens, and the adoption of retro-reflective yellow road signs, such as those marking oncoming curves, caused the headlamps to dim prematurely, possibly leading to the system's discontinuation. Ford- and Chrysler-built vehicles were also available with the GM-made dimmers from the 1950s through the 1980s. A system called 'AutoDim' was offered on several Lincoln models starting in the mid-1950s, and eventually the Ford Thunderbird and some Mercury models offered it as well. Premium Chrysler and Imperial models offered a system called Automatic Beam Control throughout the 1960s and early 1970s. Rabinow dimmer: Though the systems based on photoresistors evolved, growing more compact and moving from the dashboard to a less conspicuous location behind the radiator grille, they were still unable to reliably discern headlamps from non-vehicular light sources such as streetlights. They also did not dip to low beam when the driver approached a vehicle from behind, and they would spuriously dip to low beam in response to road-sign reflections of the vehicle's own high-beam headlamps. American inventor Jacob Rabinow devised and refined a scanning automatic dimmer system impervious to streetlights and reflections, but no automaker purchased the rights, and the problematic photoresistor type remained on the market through the late 1980s. Bone-Midland lamps: In 1956, the inventor Even P. Bone developed a system in which a vane in front of each headlight moved automatically, casting a shadow toward the approaching vehicle and allowing high beam use without glare for the approaching driver. The system, called "Bone-Midland Lamps", was never taken up by any car manufacturer. Camera-based dimmer: Present systems based on imaging CMOS cameras can detect and respond appropriately to leading and oncoming vehicles while disregarding streetlights, road signs, and other spurious signals.
Camera-based beam selection was first released in 2005 on the Jeep Grand Cherokee and has since been incorporated into comprehensive driver assistance systems by automakers worldwide. Such systems may still dim the headlights when a bright reflection bounces off a street sign. Intelligent Light System: The Intelligent Light System is a headlamp beam control system introduced in 2006 on the Mercedes-Benz E-Class (W211) which offers five different bi-xenon light functions, each suited to typical driving or weather conditions: country mode, motorway mode, enhanced fog lamps, an active light function (advanced front-lighting system, AFS), and a cornering light function. Adaptive highbeam: Adaptive Highbeam Assist is Mercedes-Benz's marketing name for a headlight control strategy that continuously and automatically tailors the headlamp range so the beam just reaches other vehicles ahead, thus always ensuring the maximum possible seeing range without dazzling other road users. It was first launched in the Mercedes E-Class in 2009. It provides a continuous range of beam reach, from a low-aimed low beam to a high-aimed high beam, rather than the traditional binary choice between low and high beams. The range of the beam can vary between 65 and 300 meters, depending on traffic conditions. In traffic, the low beam cutoff position is adjusted vertically to maximise seeing range while keeping glare out of leading and oncoming drivers' eyes. When no traffic is close enough for glare to be a problem, the system provides full high beam. Headlamps are adjusted every 40 milliseconds by a camera on the inside of the front windscreen which can determine distance to other vehicles. The S-Class, CLS-Class and C-Class also offer this technology. In the CLS, the adaptive high beam is realised with LED headlamps – the first vehicle producing all adaptive light functions with LEDs.
Since 2010 some Audi models with xenon headlamps have offered a similar system: adaptive light with variable headlight range control. In Japan, the Toyota Crown, Toyota Crown Majesta, Nissan Fuga and Nissan Cima offer the technology on top-level models. Until February 2022, this technology had been illegal in the US, as FMVSS 108 specifically stated that headlamps must have dedicated high and low beams to be deemed road-legal. An infrastructure bill enacted in November 2021 included language directing the National Highway Traffic Safety Administration to amend FMVSS 108 to allow the use of this technology, and set a two-year deadline for implementing this change. In February 2022, the NHTSA amended FMVSS 108, allowing adaptive headlights for use in the US. Glare-free high beam and pixel light: A glare-free high beam is a camera-driven dynamic lighting control strategy that selectively shades spots and slices out of the high beam pattern to protect other road users from glare, while continuously providing the driver with maximum seeing range. The area surrounding other road users is constantly illuminated at high beam intensity, but without the glare that would typically result from using uncontrolled high beams in traffic. This constantly changing beam pattern requires complex sensors, microprocessors, and actuators, because the vehicles which must be shadowed out of the beam are constantly moving. The dynamic shadowing can be achieved with movable shadow masks shifted within the light path inside the headlamp, or by selectively darkening addressable LED emitters or reflector elements, a technique known as pixel light. The first mechanically controlled (non-LED) glare-free high beam was Volkswagen's "Dynamic Light Assist" package, introduced in 2010 on the Volkswagen Touareg, Phaeton, and Passat.
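A toy Python model of the pixel-light idea described above: the camera reports the angular positions of other vehicles, and only the matrix segments whose angular slice contains a vehicle are switched off. The segment count and field of view below are arbitrary illustrative assumptions (the article notes Audi's 2013 Matrix LED used 25 segments):

```python
# Toy pixel-light sketch: divide the high-beam field of view into equal
# angular segments and dim exactly those containing a detected vehicle.
# Geometry (n_segments, fov_deg) is an assumption for illustration.
def segment_states(vehicle_angles_deg, n_segments=25, fov_deg=30.0):
    """Return a list of booleans: True = segment lit, False = dimmed."""
    width = fov_deg / n_segments
    states = [True] * n_segments
    for angle in vehicle_angles_deg:
        idx = int((angle + fov_deg / 2) // width)  # map angle to segment index
        if 0 <= idx < n_segments:
            states[idx] = False
    return states
```

With no vehicles detected, every segment stays lit, which corresponds to the full high beam the text describes.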
In 2012, the facelifted Lexus LS (XF40) introduced an identical bi-xenon system: "Adaptive High-beam System". The first mechanically controlled LED glare-free headlamps were introduced in 2012 on the BMW 7 Series: "Selective Beam" (anti-dazzle high-beam assistant). In 2013 Mercedes-Benz introduced the same LED system: "Adaptive Highbeam Assist Plus". The first digitally controlled LED glare-free headlamps were introduced in 2013 on the Audi A8; see the LED section above. Care: Headlamp systems require periodic maintenance. Sealed beam headlamps are modular; when the filament burns out, the entire sealed beam is replaced. Most vehicles in North America made since the late 1980s use headlamp lens-reflector assemblies that are considered a part of the car, and just the bulb is replaced when it fails. Manufacturers vary in the means by which the bulb is accessed and replaced. Headlamp aim must be checked and adjusted frequently, for misaimed lamps are dangerous and ineffective. Over time, the headlamp lens can deteriorate: it can become pitted due to abrasion by road sand and pebbles, and can crack, admitting water into the headlamp. "Plastic" (polycarbonate) lenses can become cloudy and discoloured, due to oxidation of the painted-on lens hardcoat by ultraviolet light from the sun and the headlamp bulbs. If the clouding is minor, it can be polished out using a reputable brand of car polish intended for restoring the shine to chalked paint. In more advanced stages, the deterioration extends through the actual plastic material, rendering the headlamp useless and necessitating complete replacement. Sanding or aggressively polishing the lenses, as in "plastic headlight restoration", can buy some time, but doing so removes the protective coating from the lens, which once stripped will deteriorate faster and more severely.
Kits for a quality repair are available that allow the lens to be polished with progressively finer abrasives and then sprayed with an aerosol ultraviolet-resistant clear coating. The reflector, made of vaporized aluminum deposited in an extremely thin layer on a metal, glass, or plastic substrate, can become dirty, oxidised, or burnt, and lose its specularity. This can happen if water enters the headlamp, if bulbs of higher than specified wattage are installed, or simply with age and use. Reflectors thus degraded, if they cannot be cleaned, must be replaced. Lens cleaners: Dirt buildup on headlamp lenses increases glare to other road users, even at levels too low to reduce seeing performance significantly for the driver. Therefore, headlamp lens cleaners are required by UN Regulation 48 on vehicles equipped with low-beam headlamps using light sources that have a reference luminous flux of 2,000 lumens or more. This includes all HID headlamps and some high-power halogen units. Some cars have lens cleaners fitted even where the regulations do not require them; North America, for example, does not use UN regulations, and FMVSS 108 does not require lens cleaners on any headlamps, though they are permitted. Lens cleaning systems come in two main varieties: a small motor-driven rubber wiper or brush conceptually similar to windshield wipers, or a fixed or telescopic high-pressure sprayer which cleans the lenses with a spray of windshield washer fluid. Most recent lens cleaning systems are of the spray type, because UN regulations do not permit mechanical cleaning systems (wipers) to be used with plastic-lens headlamps, and most recent headlamps have plastic lenses. Some cars with retractable headlamps, such as the original Mazda MX-5, have a squeegee at the front of the lamp recess which automatically wipes the lenses as they are raised or lowered, although it does not provide washer fluid.
**Fresh Start (detergent)** Fresh Start (detergent): Fresh Start was the first powdered detergent to come in a plastic bottle. It was also one of the first detergents to be highly concentrated, before all detergents went concentrated. Fresh Start was a product of the Colgate-Palmolive company and was introduced in the late '70s. In 2005, Colgate-Palmolive sold the North American rights for Fresh Start to Phoenix Brands. The target audience of Fresh Start was mainly active women. Advertisements from that time also depict active women having fun without worrying about laundry.
**Saladitos** Saladitos: Saladitos are dried, salted plums or apricots, which can also be sweetened with sugar and anise or coated in chili and lime. A common misconception is that saladitos and chamoy are the same thing; saladitos are the dried salted fruit, whereas chamoy is made from the leftover brine. History: Saladitos are considered a candy in Mexico, and are said to have originated in China in the development of li hing mui (also known as cracked seed). The exact history is unknown. About: Saladitos are sometimes used in the popular Michelada drink. For Mexicans and some Asians, one method of eating saladitos is to stuff a few of them into an orange or lemon and then suck out the salted juice while the saladito rehydrates. Once all the juice is consumed, the saladitos are eaten and the pits can be cracked open to eat the seed. Another method is to eat the saladito without any fruit, cracking open the pit to eat the seed or discarding the pit. One can also first rinse the saladito with water and then eat it plain. About: On some occasions, to spice up drinks, a few saladitos are put inside drinks like Micheladas, Sprite, ginger ale or beer. Once the saladito is placed in the soda, bubbles begin to rise immediately. In Taiwan, a popular plum drink is made by soaking several saladitos in a pitcher of water until the plums rehydrate and flavor the water. There is also a mezcal cocktail that shares the name saladito. In Australia, saladitos are known as "salty plums" and come in a variety of different textures, some being more salty and some sweeter. They are very popular in the northern parts of Australia: the Northern Territory, the Western Australia border areas, and Queensland. The main ingredients are salt, sugar, food colouring and plums. About: In Trinidad and Tobago and in some English-speaking Caribbean islands, this treat is referred to as salted prunes.
Recalls: A recall notice was issued in the United States in 2009 by the Texas Department of State Health Services, when saladitos believed to have been sourced from Asia were found to have levels of lead exceeding health guidelines. In 2021–2022, the California Department of Public Health issued a recall warning on eight brands of saladitos manufactured in China and Taiwan, also found to have levels of lead exceeding health guidelines.
**Desoximetasone** Desoximetasone: Desoximetasone is a medication belonging to the family of medications known as topical corticosteroids. It is used for the relief of various skin conditions, including rashes, and helps to reduce redness, itching, and irritation. Desoximetasone is a synthetic corticosteroid, a class of primarily synthetic steroids used as anti-inflammatory and anti-pruritic agents. Three brand name products are available (availability depending on country): Topicort Emollient Cream (0.25% desoximetasone), Topicort LP Emollient Cream (0.05% desoximetasone) and Topisolone Cream (0.25% desoximetasone). Usage: When using desoximetasone, some of the medication may be absorbed through the skin and into the bloodstream, and too much absorption can lead to unwanted side effects elsewhere in the body. Applying large amounts over large areas of skin should therefore be avoided, and the medication should not be used for extended periods of time. Treated areas should not be covered with airtight dressings such as plastic wrap or adhesive bandages. Children may absorb more medication than adults do. Desoximetasone is for use only on the skin and should be kept out of the eyes. Usage: Desoximetasone can also be used to treat some types of psoriasis.
**Lattice network** Lattice network: A symmetrical lattice is a two-port electrical wave filter in which diagonally-crossed shunt elements are present – a configuration which sets it apart from ladder networks. The component arrangement of the lattice is shown in the diagram below. The filter properties of this circuit were first developed using image impedance concepts, but later the more general techniques of network analysis were applied to it. Lattice network: There is a duplication of components in the lattice network, as the "series impedances" (instances of Za) and "shunt impedances" (instances of Zb) both occur twice, an arrangement that offers increased flexibility to the circuit designer, with a variety of responses achievable. The lattice network can have the characteristics of a delay network, an amplitude or phase correcting network, a dispersive network or a linear phase filter,: 412  according to the choice of components for the lattice elements. Configuration: The basic configuration of the symmetrical lattice is shown in the left-hand diagram. A commonly used short-hand version is shown on the right, with dotted lines indicating the presence of the second pair of matching impedances. It is possible with this circuit to have the characteristic impedance specified independently of its transmission properties, a feature not available to ladder filter structures. In addition, it is possible to design the circuit to be a constant-resistance network for a range of circuit characteristics. Configuration: The lattice structure can be converted to an unbalanced form (see below), for insertion in circuits with a ground plane. Such conversions also reduce the component count and relax component tolerances. It is possible to redraw the lattice in the Wheatstone bridge configuration (as shown in the article Zobel network).
However, this is not a convenient format in which to investigate the properties of lattice filters, especially their behavior in cascade. Basic properties: Results from image theory Filter theory was initially developed from earlier studies of transmission lines. In this theory, a filter section is specified in terms of its propagation constant and image impedance (or characteristic impedance). Basic properties: Specifically for the lattice, the propagation function γ and characteristic impedance Zo are defined by: 379  γ = 2 artanh √(Za/Zb) and Zo = √(Za·Zb). Once γ and Zo have been chosen, solutions can be found for Za/Zb and Za·Zb, from which the characteristics of Za and Zb can each be determined. (In practice, the choices for γ and Zo are restricted to those which result in physically realisable impedances for Za and Zb.) Although a filter circuit may have one or more pass-bands and possibly several stop-bands (or attenuation regions), only networks with a single pass-band are considered here. Basic properties: In the pass-band of the circuit, the product Za·Zb is real (i.e. Zo is resistive) and Zo may be equated to Ro, the terminating resistance of the filter. So Za·Zb = Ro², or Za/Ro = Ro/Zb (for frequencies in the passband). That is, the impedances behave as duals of each other within this frequency range. Basic properties: In the attenuation range of the filter, the characteristic impedance of the filter is purely imaginary, and Za = Zb (for frequencies in the attenuation band). Consequently, in order to achieve a specific characteristic, the reactances within Za and Zb are chosen so that their resonant and anti-resonant frequencies are duals of each other in the passband, and match one another in the stopband. The transition region of the filter, where a change from one set of conditions to another occurs, can be made as narrow as required by increasing the complexity of Za and Zb.
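The image relations above (γ = 2 artanh √(Za/Zb), Zo = √(Za·Zb)) can be checked numerically. This is an illustrative sketch, not from the source; the function name is invented:

```python
import cmath

def lattice_image_params(Za, Zb):
    # Image parameters of the symmetrical lattice:
    #   tanh(gamma/2) = sqrt(Za/Zb)  =>  gamma = 2*artanh(sqrt(Za/Zb))
    #   Zo = sqrt(Za*Zb)
    gamma = 2 * cmath.atanh(cmath.sqrt(Za / Zb))
    Zo = cmath.sqrt(Za * Zb)
    return gamma, Zo

# Dual reactances with Za*Zb = Ro**2 (here Ro = 4): Zo comes out
# purely resistive and gamma purely imaginary -- a pass-band.
gamma, Zo = lattice_image_params(2j, -8j)
```

In the pass-band γ has no real part (no attenuation), matching the condition Za·Zb real given in the text.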
The phase response of the filter in the pass-band is governed by the locations (spacings) of the resonant and anti-resonant frequencies of Za and Zb. Basic properties: For convenience, the normalised parameters yo and zo are defined by yo = √(Zb/Za) = √(zb/za) and zo = √(za·zb) = Zo/Ro, where the normalised values za = Za/Ro and zb = Zb/Ro have been introduced. The parameter yo is termed the index function and zo is the normalised characteristic impedance of the network. The parameter yo is approximately 1 in the attenuation region; zo is approximately 1 in the transmission region.: 383  Cascaded lattices All high-order lattice networks can be replaced by a cascade of simpler lattices, provided their characteristic impedances are all equal to that of the original and the sum of their propagation functions equals the original.: 435  In the particular case of all-pass networks (networks which modify the phase characteristic only), any given network can always be replaced by a cascade of second-order lattices together with, possibly, one single first-order lattice. Whatever the filter requirements being considered, the reduction process results in simpler filter structures, with less stringent demands on component tolerances. Basic properties: The shortcomings of image theory The filter characteristics predicted by image theory require a correctly terminated network. As the necessary terminations are often impossible to achieve, resistors are commonly used as the terminations, resulting in a mismatched filter. Consequently, the predicted amplitude and phase responses of the circuit will no longer be as image theory predicts. In the case of a low-pass filter, for example, where the mismatch is most severe near the cut-off frequency, the transition from pass-band to stop-band is far less sharp than expected. Basic properties: The figure below illustrates the issue: a lattice filter, equivalent to two sections of constant k low-pass filter, has been derived by image methods.
(The network is normalised, with L = 1 and C = 1, so Ro = √(L/C) = 1 and ωc = 2/√(LC) = 2.) The left-hand figure gives the lattice circuit and the right-hand figure gives the insertion loss with the network terminated (1) resistively, and (2) in its correct characteristic impedances. To minimise the mismatch problem, various forms of image filter end terminations were proposed by Zobel and others, but the inevitable compromises led to the method falling out of favour. It was replaced by the more exact methods of network analysis and network synthesis. Basic properties: Results derived by network analysis This diagram shows the general circuit for the symmetrical lattice. Through mesh analysis or nodal analysis of the circuit, its full transfer function can be found, as can the input and output impedances (Zin and Zout) of the network. These equations are exact, for all realisable impedance values, unlike image theory, where the propagation function only predicts performance accurately when ZS and ZL are the matching characteristic impedances of the network. Basic properties: The equations can be simplified by making a number of assumptions. Firstly, networks are often sourced and terminated by resistors of the same value R0, so that ZS = ZL = R0, and the equations simplify accordingly. Secondly, if the impedances Za and Zb are duals of one another, so that Za·Zb = R0², then further simplification is possible: such networks are constant-resistance networks. Basic properties: Finally, for normalised networks, R0 = 1. If the impedances Za and Zb (or the normalised impedances za and zb) are pure reactances, then the networks become all-pass, constant-resistance, with a flat frequency response but a variable phase response. This makes them ideal as delay networks and phase equalisers. When resistors are present within Za and Zb then, provided the duality condition still applies, a circuit will be constant-resistance but have a variable amplitude response.
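The exact analysis can be illustrated through the lattice's z-parameters (a sketch under stated assumptions; `lattice_gain` is an invented name). For a normalised constant-resistance all-pass lattice (Za·Zb = 1) with matched 1-ohm source and load, the load voltage is half the source EMF at every frequency:

```python
def lattice_gain(Za, Zb, Zs, Zl):
    # Load-voltage / source-EMF ratio of the symmetrical lattice via its
    # z-parameters (equivalent to direct mesh analysis):
    #   z11 = z22 = (Za + Zb) / 2,   z12 = z21 = (Zb - Za) / 2
    z11 = (Za + Zb) / 2
    z12 = (Zb - Za) / 2
    return z12 * Zl / ((z11 + Zs) * (z11 + Zl) - z12 ** 2)

# Dual pure reactances (Za * Zb = 1) with Zs = Zl = 1: the magnitude
# is 0.5 at any frequency -- a flat, all-pass response.
g = lattice_gain(1.5j, 1 / 1.5j, 1, 1)
```

The factor of one half is the usual source/load voltage division of a matched constant-resistance network; only the phase of g varies with frequency.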
One application for such circuits is as amplitude equalisers. Conversions and equivalences: (See references.) T to lattice; Pi to lattice; common series element; common parallel element; combining two lattices into one; lattice to T (see also the next section). This lattice-to-T conversion only gives a realisable circuit when the evaluation of (Zb − Za)/2 gives positive-valued components. For other situations, the bridged-T may provide a solution, as discussed in the next section. Unbalanced equivalents: The lattice is a balanced configuration which is not suitable for some applications. In such cases it is necessary to convert the circuit to an electrically equivalent unbalanced form. This provides benefits, including reduced component count and relaxed circuit tolerances. The simple conversion procedure shown in the previous section can only be applied in a limited set of conditions – generally, some form of bridged-T circuit is necessary. Many of the conversions require the inclusion of a 1:1 ideal transformer, but there are some configurations which avoid this requirement, and one example is shown below. Unbalanced equivalents: This conversion procedure starts by using the property of a lattice where a common series element in all arms can be taken outside the lattice as two series elements (as shown above). By repeatedly applying this property, components can be extracted from within the lattice structure. Finally, by means of Bartlett's bisection theorem, an unbalanced bridged-T circuit is achieved. In the left-hand figure, the Za arm has a shunt capacitor, Ca, and the Zb arm has a series capacitor, Cb. Consequently, Za consists of Ca in parallel with Za′, and Zb consists of Cb in series with Zb′. This can be developed into the unbalanced bridged-T shown, provided Ca > Cb. (An alternative version of this circuit has the T configuration of capacitors replaced by a Pi (or Delta) arrangement.
For this T to Pi conversion, see the equations in Attenuator (electronics).) When Cb > Ca, an alternative procedure is necessary, where common inductors are first extracted from the lattice arms. As shown, an inductor La shunts Za′ and an inductor Lb is in series with Zb′. This leads to the alternative bridged-T circuit on the right. If La > Lb, then the negative-valued inductor can be achieved by means of mutually coupled coils. To achieve a negative mutual inductance, the two coupled inductors L1 and L2 are wound 'series-aiding'. So finally, the bridged-T circuit takes the form shown. Bridged-T circuits like these may be used in delay and phase-correcting networks. Another lattice configuration, containing resistors, is shown below. It has shunt resistors Ro across the Za's and series resistors Ro as part of the Zb's, as shown in the left-hand figure. It is easily converted to an unbalanced bridged-T circuit, as shown on the right. Unbalanced equivalents: When Z1·Z2 = R0², it becomes a constant-resistance network, which has an insertion loss given by T(p) = R0/(R0 + Z1(p)). When normalized to 1 ohm, the source, load and R0 are all unity, so Z1·Z2 = 1 and the insertion loss becomes T(p) = 1/(1 + Z1(p)). In the past, circuits configured in this way were very popular as amplitude equalisers. For example, they were used to correct for the high frequency losses in telephone cables and in long runs of coaxial cable for television installations. An example, showing the design procedure for a simple equaliser, is given in the section on synthesis, later. All-pass networks: (See previously quoted references to Zobel, Darlington, Bode and Guillemin. Also see Stewart and Weinberg.) All-pass networks are an important sub-class of lattice networks. They have been used as passive lumped-element delays, as phase correctors for filter networks and in dispersive networks. They are constant-resistance networks, so they can be cascaded with each other and with other circuits without introducing mismatch problems.
All-pass networks: In the case of all-pass networks, there is no attenuation region, so the impedances Za and Zb (of the lattice) are duals of each other at all frequencies and Z0 is always resistive, equal to R0; i.e. Za·Zb = R0². For normalised networks, where R0 = 1, the transfer function T(p) can be written T(p) = (1 − za)/(1 + za), and so za = (1 − T)/(1 + T). In practice, T(p) can be expressed as a ratio of polynomials in p, and the impedances za and zb are also ratios of polynomials in p. For the impedances to be realisable, they must satisfy Foster's reactance theorem. The two simplest all-pass networks are the first and second order lattices. These are important circuits because, as Bode pointed out, all high order all-pass lattice networks can be replaced by a cascade of second order networks with, possibly, one first order network, to give the identical response. These two simple, normalised lattices have transfer impedances of the first- and second-order all-pass form; the circuits are considered in more detail in the section on 'Synthesis'. Lattice synthesis: Network synthesis is the process of deriving a circuit to match a chosen transfer function. Not all transfer functions can be realized by physical networks, but for those that can, the lattice network is always a solution. In other words, if a symmetrical two-terminal pair network is realizable at all, it is realizable as a lattice network.: 39, : 339  This is because the lattice structure is the most general form of a network, with fewer constraints than, say, T, П or bridged-T networks. Lattice synthesis: Once a lattice circuit has been developed, it is often desirable to convert the result into an unbalanced form,: 268, : 168  so that the circuit can be used in systems with an earth plane.: 352  Furthermore, there are other benefits to be gained from the conversion process, such as a reduced component count and less stringent component tolerances. Where a synthesis procedure results in several possible lattice solutions, the one that is easiest to convert is usually chosen.
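For the normalised constant-resistance all-pass lattice (zb = 1/za) the standard relations are T(p) = (1 − za)/(1 + za) and, inversely, za = (1 − T)/(1 + T). A quick numerical check (illustrative code; the names are invented):

```python
def allpass_T(za):
    # Insertion function of the normalised constant-resistance
    # all-pass lattice: T = (1 - za)/(1 + za).  When za is a pure
    # reactance, numerator and denominator are complex conjugates,
    # so |T| = 1 at every frequency.
    return (1 - za) / (1 + za)

def za_from_T(T):
    # Inverse relation, as used in the synthesis examples.
    return (1 - T) / (1 + T)

za = 1.7j            # a pure reactance at some frequency
T = allpass_T(za)    # unit magnitude; only the phase carries information
```

Round-tripping through `za_from_T` recovers the original reactance, which is the step the synthesis examples rely on.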
Often, the conversion process results in mutually coupled inductors, as shown earlier, but it is sometimes possible to avoid these altogether, if a high value of insertion loss can be tolerated, or if a combination of circuits in parallel is considered. Lattice synthesis: Synthesis with z-parameters z-parameters, or impedance parameters, are one set from the family of parameters that define a two-port network, with input and output values defined by I1, I2, V1 and V2,: 254 : 29  as shown in the figure. Equations defining network behaviour in terms of z-parameters are V1 = z11·I1 + z12·I2 and V2 = z21·I1 + z22·I2, where the z-parameters are defined under open-circuit conditions (see Impedance parameters), so they are sometimes referred to as "open-circuit parameters".: 136  For the symmetrical lattice, the relationships between the z-parameters and the lattice impedances are easily found: z11 = z22 = (Zb + Za)/2 and z12 = z21 = (Zb − Za)/2, so that Za = z11 − z12 and Zb = z11 + z12. Sometimes synthesis of a lattice can be achieved by simply apportioning parts of an expression in z12, or in z11 and z12, directly to the impedances Za and Zb, as in the following example. Lattice synthesis: Example 1 Consider z12 to be given by: 229  an expression which can be expanded into partial fractions and the terms allocated to Za and Zb accordingly, giving Zb = 2p/(p² + 1) + 2p = (2p³ + 4p)/(p² + 1). The lattice network which has these solutions for Za and Zb is shown in the left-hand circuit, below. It can be converted to an unbalanced form by, firstly, extracting the common parallel inductors and, secondly, by then extracting series common capacitors. This gives the ladder network shown in the right-hand circuit. Lattice synthesis: Synthesis from the open-circuit transfer function The open-circuit voltage-ratio transfer function T can be obtained in terms of z11 and z12,: 43  since, with I2 = 0, T = z12/z11. So, from an expression for T, which gives the ratio of z12 and z11, it may be possible to obtain circuits for Za and Zb.
In practice, T may be expressed in the form T = K·N(p)/D(p), where N(p) and D(p) are polynomials in p, the complex frequency variable, and K is a constant factor less than or equal to unity. For a given expression for T, it is often possible to find expressions (and hence circuits) for Za and Zb, provided the value chosen for K is small enough. Now, for the lattice, T = z12/z11 = (Zb − Za)/(Zb + Za) = (1 − Za/Zb)/(1 + Za/Zb). Rearranging, Za/Zb = (1 − T)/(1 + T) = (D − K·N)/(D + K·N). The procedure evaluates the numerator and denominator of the expression as polynomials in p and then apportions factors to Za and Zb. A loss term K, with K < 1, may be needed to aid realization. Example 2 Derive a lattice network with voltage-ratio transfer function T2 given by: 345  with K = 1, and choose Za=1+35p and Zb=1+2p5+1p The lattice realization of T2 is shown below, on the left. The unbalanced network, on the right, is obtained by first extracting the common series resistors and then extracting capacitance. Lattice synthesis: Example 3 An L-C circuit has a transfer function T3, realizable with K = 0.05. Factorizing top and bottom and allocating factors gives Za and Zb, which can be realized as LC ladder networks, with Za having a shunt inductor as first element and Zb having a series inductor as first element, as shown in the left-hand figure. This lattice can be converted to unbalanced form, by the methods given earlier, to give the component values of the right-hand figure. Darlington synthesis The Darlington method forms the basis for synthesis of lossless two terminal-pair networks with resistive termination for prescribed transfer characteristics. Lattice synthesis: The figure shows the basic network configuration. The first step is to express the input impedance ZI of a terminated network in terms of its z-parameters z11, z22 and z12, as defined earlier.
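The rearrangement Za/Zb = (D − KN)/(D + KN) is mechanical once T = K·N(p)/D(p) is known. A sketch with plain coefficient lists, highest power first (the helper name is invented):

```python
def za_over_zb(N, D, K):
    # Za/Zb = (1 - T)/(1 + T) with T = K*N(p)/D(p):
    # returns coefficient lists (highest power first) of the
    # numerator D - K*N and denominator D + K*N.
    n = max(len(N), len(D))
    N = [0] * (n - len(N)) + list(N)
    D = [0] * (n - len(D)) + list(D)
    return ([d - K * a for d, a in zip(D, N)],
            [d + K * a for d, a in zip(D, N)])

# First-order all-pass, open-circuit ratio T = (c**2 - p**2)/(c**2 + p**2)
# with c = 2: Za/Zb comes out as 2*p**2 / (2*c**2) = (p/c)**2.
num, den = za_over_zb([-1, 0, 4], [1, 0, 4], 1)
```

This matches the inductor/capacitor arm pair of the first-order all-pass lattice, whose impedance ratio is (p/c)².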
For a normalized network, put R = 1, and rearrange the expression. In practice, ZI consists of a ratio of two polynomials in p, where m1 and n1 are the even and odd parts of the numerator polynomial, respectively, and m2 and n2 are the even and odd parts of the denominator polynomial, respectively. Rearranging, ZI = (m1/n2)·[(n1/m1) + 1]/[(m2/n2) + 1]. By comparing the two expressions for ZI, the following relationships are suggested. Example 4 Consider a network with a given ZI. Solutions for z11, z22 and z12 follow: z11 is an inductor of 1.6229 H in series with a capacitor of 1.18 F, and z22 is an inductor of 1.1246 H in series with a capacitor of 1.18 F. By extracting a series inductance of 0.4983p (= 1.6229p − 1.1246p) from z11, the remaining network becomes symmetrical. The components of a symmetrical lattice can be calculated from Za = z11 − z12 and Zb = z11 + z12; here Za is an inductor of 0.9993 H, and Zb is an inductor of 1.2499 H in series with a capacitor of 0.59 F. The circuit is shown in the left-hand figure below. It can be easily converted into the unbalanced form shown in the right-hand figure. It is a low-pass filter with pass-band ripple of 1.25 dB, with −3 dB at 0.169 Hz, a null in the stop band at 0.414 Hz, and stop-band attenuation beyond the null frequency below −40 dB. Lattice synthesis: Synthesis of constant-resistance lattice networks If the impedances Za and Zb are duals, and normalised, so that Za·Zb = 1, then the image impedance ZI becomes a pure resistance. A symmetrical lattice fulfilling this condition is a "constant resistance lattice". Such a lattice, terminated in 1 ohm, is shown below; its transfer function T is the transfer impedance with a 1-ohm load, in contrast to the open-circuit transfer impedance z21. The constant resistance lattice is thus seen to offer a possible approach to the synthesis of transfer functions.
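Darlington's split of a polynomial into its even part m and odd part n can be sketched as follows (an illustrative helper; the name is invented):

```python
def even_odd_parts(coeffs):
    # Split a polynomial in p (coefficients, highest power first)
    # into its even part m and odd part n, as used in Darlington's
    # Z_I = (m1 + n1)/(m2 + n2).
    deg = len(coeffs) - 1
    m = [c if (deg - i) % 2 == 0 else 0 for i, c in enumerate(coeffs)]
    n = [c if (deg - i) % 2 == 1 else 0 for i, c in enumerate(coeffs)]
    return m, n

# p**2 + 2p + 3  ->  m = p**2 + 3,  n = 2p
m, n = even_odd_parts([1, 2, 3])
```

Applying this to both numerator and denominator of ZI gives the four polynomials m1, n1, m2, n2 that the comparison step works with.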
Lattice synthesis: It is the case that a constant resistance lattice is no less general than any other lattice, which means that any realizable transfer impedance can be realized in the form of a constant resistance lattice.: 233 : 480  Such networks are very convenient, because there is no mismatch between sections or with resistive terminations. Consequently, the overall insertion loss of a cascade of constant resistance sections is simply the sum of the losses of the individual sections. Conversely, a given complicated transfer impedance may be decomposed into multiplicative factors, whose individual lattice realizations, when connected in cascade, represent a synthesis of that transfer impedance. So, although it is possible to synthesize a single lattice with complicated impedances Za and Zb, it is practically easier to construct and align a cascade of simpler circuits. Lattice synthesis: All-pass constant-resistance networks All-pass networks have a constant gain with frequency, but they have a phase response which varies in some chosen manner. For example, in the case of lattice delay networks, the phase response is linear with frequency over a specified frequency range, whereas in the case of lattice phase equalisers, the phase response of the network deviates so as to compensate for the non-linear phase response of a filter network. Lattice synthesis: The first and second order networks are the most important because, as Bode: 240  pointed out, these can be cascaded, as required, to give the same result as a complicated high order lattice. Example 5 The all-pass response of the first order is T5 = (c − p)/(c + p). This has a zero located at +c and a pole at −c in the complex frequency plane, and a response where the phase varies with frequency while the magnitude of T5 is unity at all frequencies. Using the expression for Za as a function of T, from earlier, gives Za = (1 − T5)/(1 + T5) = p/c. So Za is an inductance with value 1/c and, consequently, Zb is a capacitor of value 1/c.
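The element values of these two simplest all-pass sections follow directly from the pole-zero positions; for the second-order section the standard parametrisation is a = 2x, b = x² + y², with Za = ap/(p² + b). A sketch (the function name is invented):

```python
def allpass_elements(x, y=None):
    # First order (y is None): T = (c - p)/(c + p) with c = x.
    #   Za: inductor of 1/c,  Zb: capacitor of 1/c.
    # Second order: zeros at x +/- jy, poles at -x +/- jy,
    # a = 2x, b = x**2 + y**2, and Za = a*p/(p**2 + b), giving
    #   Za: capacitor 1/a in parallel with inductor a/b,
    #   Zb: inductor 1/a in series with capacitor a/b.
    if y is None:
        c = x
        return {"La": 1 / c, "Cb": 1 / c}
    a, b = 2 * x, x ** 2 + y ** 2
    return {"Ca": 1 / a, "La": a / b, "Lb": 1 / a, "Cb": a / b}

elems = allpass_elements(1.0, 2.0)   # a = 2, b = 5
```

For x = 1, y = 2 this gives Ca = Lb = 0.5 and La = Cb = 0.4, in the normalised (1-ohm) system.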
The network, normalised to 1 ohm, is shown in the left-hand figure below. Example 6 The all-pass response of the second order is T6 = (p² − ap + b)/(p² + ap + b). This has two zeroes located at x ± jy and two poles at −x ± jy, where a = 2x and b = x² + y². For such a response, the phase varies with frequency, but the magnitude of T6 is unity at all frequencies. For this characteristic, Za is found from Za = (1 − T6)/(1 + T6) = ap/(p² + b). So Za is a parallel combination of a capacitance 1/a and an inductance with value a/b. Similarly, Zb is an inductor 1/a in series with a capacitor of value a/b, and the network is shown at the right hand side below. Lattice synthesis: The lattice networks can be converted to unbalanced circuits by using the properties of lattices with common elements in both Za and Zb, shown earlier, and Bartlett's bisection theorem.: 28  In the case of the second order network, when a2 > b (i.e. L1 > L2 or C2 > C1 or y > √3x), it is necessary to use the circuit containing mutually coupled coils for the second order all-pass network. Lattice synthesis: A cascade of second order networks with, maybe, a single first order network, can be used to give a high order response. For example, the article Lattice delay network gives pole-zero locations for many all-pass transfer functions which approximate to a linear phase characteristic; that article also includes some examples. Lattice synthesis: Synthesis of amplitude equalizers A typical transmission path has increasing loss with frequency, and this can be corrected by cascading the system with an equalizing network that has a rising response with frequency. In this regard, one circuit configuration that is commonly used to provide the necessary equalization is shown in the figure labelled 'Lattice - basic equaliser circuit', given earlier (in the section on 'Unbalanced equivalents').
Lattice synthesis: As stated there, the insertion loss of the normalized circuit is given by T = 1/(1 + Z1(p)), so Z1 can be found from Z1(p) = 1/T(p) − 1. If some residual ripple on the response is permitted, then a simple correcting network may suffice for Z1 and Z2, but this ripple may be reduced as much as desired by adopting more complicated correcting networks. Choosing locations for the poles and zeros of Z1 and Z2 may be aided by the straight-line asymptotic method. Lattice synthesis: Example 7 A transfer function with a rising response over a limited frequency range, approaching unity at high frequencies, can be realized as a bridged-T or lattice in which Z1 is an R-C network. Lattice synthesis: Z1 is again found from Z1(p) = 1/T(p) − 1. The admittance Y1, where Y1 = 1/Z1, can be expressed as a continued fraction containing four terms, so Z1 can be realized as an R-C ladder network, in the Cauer manner, and is shown as part of the bridged-T circuit below. Z2 is the dual of Z1, and so is an R-L circuit, as shown. The equivalent lattice circuit is shown on the right-hand side. Lattice synthesis: Constant resistance low-pass filters High order low-pass filters can be obtained by cascading an appropriate number of simpler constant resistance low-pass sections.: 484  The first of these low-pass sections, with just a single pole, has a response that is realizable provided k1 ≤ a, where Za1 is a combination of two resistors and an inductor, as shown in the left-hand circuit below, and Zb1 is the dual of Za1. Lattice synthesis: This is easily transformed into an unbalanced form, as shown on the right. The second of the filter sections, with two poles, gives the lattice impedance Za2. Certain conditions have to be met to ensure this is a realizable network,: 486  and these conditions set limits on the value of the constant multiplier k2 in the expression for T2.
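The equaliser relation T = 1/(1 + Z1(p)) can be inverted at any frequency point; a minimal numeric sketch (invented names):

```python
def z1_from_T(T):
    # Normalised constant-resistance equaliser: T = 1/(1 + Z1),
    # so Z1 = 1/T - 1; the other arm is the dual, Z2 = 1/Z1.
    Z1 = 1 / T - 1
    return Z1, 1 / Z1

# A point where the equaliser must insert a 2:1 voltage ratio
# (about 6 dB) requires Z1 = 1 there (and hence Z2 = 1).
Z1, Z2 = z1_from_T(0.5)
```

Evaluating this at several frequencies gives the target impedance values that the pole-zero placement for Z1 must then approximate.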
The circuit for the lattice elements Za2 is shown on the left, below, and that for the dual elements Zb2 on the right, with component values as indicated. Lattice synthesis: The unbalanced version of this lattice is as shown below. By cascading a number of the first and second order circuits, of the type just developed, it is possible to derive higher order low-pass networks. The lattice networks so obtained can be converted to an unbalanced form, provided the value of k is sufficiently small. Lattice synthesis: Example 8 A maximally flat third-order normalized low-pass filter can be realized as a cascade of three such lattices. If an unbalanced circuit is required, some overall loss has to be accepted. By choosing k1 = k2 = a = 0.5, the network shown below is obtained. This circuit has an overall loss factor of four, whereas the conventional L-C ladder network: 605  has no loss (but is not a constant resistance network). Lattice synthesis: Computer-aided design methods The development of mainframe and then personal computers, in the final quarter of the twentieth century, permitted the rapid development of numerical processing techniques. Initially, computers were used as an aid to network analysis, then for optimization methods (such as the minimax method in the design of phase equalizers and filters), before being applied to network synthesis directly. Overviews of the software developments in the field of synthesis have been given in Taylor & Huang and Kuo.: 438  Only a few of the early synthesis programs have dealt with lattice networks, but S-Filsyn (a powerful synthesis and analysis program) provides some coverage of lattice and bridged-T circuits. Early history: The symmetrical lattice and the ladder networks (the constant k filter and m-derived filter) were the subject of much interest in the early part of the twentieth century.
At that time, the rapidly growing telephone industry had a significant influence on the development of filter theory, while seeking to increase the signal carrying capacity of telephone transmission lines. George Ashley Campbell was a key contributor to this new filter theory, as was Otto Julius Zobel. They and many colleagues worked at the laboratories of Western Electric and the American Telephone and Telegraph Co., and their work was reported in the early editions of the Bell System Technical Journal. Early history: Campbell discussed lattice filters in his article of 1922, while other early workers with an interest in the lattice included Johnson and Bartlett. Zobel's article on filter theory and design, published at about this time, mentioned lattices only briefly, with his main emphasis on ladder networks. It was only later, when Zobel considered the simulation and equalisation of telephone transmission lines, that he gave the lattice configuration more attention. (The telephone transmission lines of the time had a balanced-pair configuration with a nominal characteristic impedance of 600 ohms, so the lattice equaliser, with its balanced structure, was particularly appropriate for use with them). Later workers, especially Hendrik Wade Bode, gave greater prominence to lattice networks in their filter designs. Early history: In those early days, filter theory was based on image impedance concepts, or image filter theory, which was a design approach developed from the well-established studies of transmission lines. The filter was considered to be a lumped component version of a section of transmission line, and was one of many within a cascade of similar sections. As mentioned above, the weakness of the image filter approach was that the frequency response of a network was often not as predicted when the network was terminated resistively, instead of by the required image impedances. 
This was essentially a mismatch issue, and Zobel overcame it by means of matching end sections (see m-derived filter, mm'-type filter and General mn-type image filter, with later work by Payne and Bode). Although lattice filters sometimes suffer from this same problem, a range of constant-resistance networks can avoid it altogether. Early history: During the 1930s, as techniques in network analysis and synthesis became better developed, designing ladder filters by image methods became less popular. Even so, the concepts still find relevance in some modern designs. On the other hand, lattice networks and their circuit equivalents continue to be used in many applications.
**Goursat tetrahedron** Goursat tetrahedron: In geometry, a Goursat tetrahedron is a tetrahedral fundamental domain of a Wythoff construction. Each tetrahedral face represents a reflection hyperplane on 3-dimensional surfaces: the 3-sphere, Euclidean 3-space, and hyperbolic 3-space. Coxeter named them after Édouard Goursat, who first looked into these domains. It is an extension of the theory of Schwarz triangles for Wythoff constructions on the sphere. Graphical representation: A Goursat tetrahedron can be represented graphically by a tetrahedral graph, which is in a dual configuration of the fundamental domain tetrahedron. In the graph, each node represents a face (mirror) of the Goursat tetrahedron. Each edge is labeled by a rational value corresponding to the reflection order, equal to π divided by the dihedral angle. A 4-node Coxeter-Dynkin diagram represents this tetrahedral graph with order-2 edges hidden. If many edges are of order 2, the Coxeter group can be represented by a bracket notation. Existence requires that each of the 3-node subgraphs of this graph, (p q r), (p u s), (q t u), and (r s t), correspond to a Schwarz triangle. Extended symmetry: An extended symmetry of the Goursat tetrahedron is a semidirect product of the Coxeter group symmetry and the fundamental domain symmetry (the Goursat tetrahedron in these cases). Coxeter notation supports this symmetry with double brackets: [Y[X]] means the full Coxeter group symmetry [X], with Y as a symmetry of the Goursat tetrahedron. If Y is a pure reflective symmetry, the group will represent another Coxeter group of mirrors. If there is only one simple doubling symmetry, Y can be implicit, as in [[X]], with either reflectional or rotational symmetry depending on the context. Extended symmetry: The extended symmetry of each Goursat tetrahedron is also given below.
The highest possible symmetry is that of the regular tetrahedron, [3,3], and this occurs in the prismatic point group [2,2,2] or [2[3,3]] and in the paracompact hyperbolic group [3[3,3]]. See Tetrahedron#Isometries of irregular tetrahedra for 7 lower-symmetry isometries of the tetrahedron. Whole number solutions: The following sections show all of the whole-number Goursat tetrahedral solutions on the 3-sphere, in Euclidean 3-space, and in hyperbolic 3-space. The extended symmetry of each tetrahedron is also given. The colored tetrahedral diagrams below are vertex figures for omnitruncated polytopes and honeycombs from each symmetry family. The edge labels represent polygonal face orders, which are double the Coxeter graph branch order. The dihedral angle of an edge labeled 2n is π/n. Yellow edges labeled 4 come from right-angle (unconnected) mirror nodes in the Coxeter diagram. 3-sphere (finite) solutions: The density-1 solutions for the 3-sphere are: (Uniform polychora) Euclidean (affine) 3-space solutions: Density 1 solutions: Convex uniform honeycombs: Compact hyperbolic 3-space solutions: Density 1 solutions: (Convex uniform honeycombs in hyperbolic space) (Coxeter diagram#Compact (Lannér simplex groups)) Paracompact hyperbolic 3-space solutions: Density 1 solutions: (See Coxeter diagram#Paracompact (Koszul simplex groups)) Rational solutions: There are hundreds of rational solutions for the 3-sphere, including these 6 linear graphs which generate the Schläfli-Hess polychora, and 11 nonlinear ones from Coxeter: In all, there are 59 sporadic tetrahedra with rational angles, and 2 infinite families.
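The Schwarz-triangle existence test invoked above can be stated compactly (standard background, restated here for convenience, not taken from this article): a triangle with mirror angles π/p, π/q, π/r lives on the sphere, in the Euclidean plane, or in the hyperbolic plane according to its angle sum:

```latex
\frac{1}{p}+\frac{1}{q}+\frac{1}{r}
\;\begin{cases}
>1 & \text{spherical}\\
=1 & \text{Euclidean}\\
<1 & \text{hyperbolic}
\end{cases}
```

For instance, the subgraph (2 3 5) gives 1/2 + 1/3 + 1/5 = 31/30 > 1, a valid spherical Schwarz triangle, while (2 4 4), with angle sum exactly 1, is Euclidean.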
**Living shorelines** Living shorelines: Living shorelines are a relatively new approach for addressing shoreline erosion and protecting marsh areas. Unlike traditional structures such as bulkheads or seawalls, which can worsen erosion, living shorelines incorporate as many natural elements as possible, creating more effective buffers for absorbing wave energy and protecting against shoreline erosion. The process of creating a living shoreline is referred to as soft engineering, which utilizes techniques that incorporate ecological principles in shoreline stabilization. The natural materials used in the construction of living shorelines create and maintain valuable habitats. Structural and organic materials commonly used in the construction of living shorelines include sand, wetland plants, sand fill, oyster reefs, submerged aquatic vegetation, stones and coir fiber logs. Benefits and ecosystem services: Shoreline stabilization; riparian and intertidal protection; water quality improvements from upland run-off filtration; terrestrial and aquatic habitat creation; absorption of wave energy, leading to reduced erosion rates; preservation of natural shoreline exchanges; enhancement of fisheries feeding and breeding habitat; adaptability and use in a wide range of environments; potentially lower costs compared to traditional structures such as seawalls and bulkheads; creation and preservation of nursery and critical feeding habitats for aquatic life; and a more natural aesthetic than traditional structures. Design: Many factors need to be addressed when preparing a living shoreline project. Permitting requirements and appropriate restoration strategies for a particular habitat are two critical topics for consideration before construction begins. Design: Planning and implementation steps
1. Analysis of the site: The bank erosion rate, elevation, vegetation, wave energy, wind patterns, wave activity and soil type of the proposed site need to be examined to determine whether it is an appropriate area for living shoreline stabilization. Restoration plans for stabilization activities are designed upon completion of the initial site analysis.
2. Permitting: Before any implementation begins, permits should be applied for and obtained through the appropriate regulatory agencies. All project plans need to be in compliance with local, state and federal laws before any construction begins, to avoid legal issues and ensure long-term sustainability.
3. Site preparation: Once the necessary permits are obtained, preparation begins by clearing all debris, unstable trees and existing failing structures, such as bulkheads, from the site. In addition, any issues regarding stormwater runoff must also be addressed prior to the installation of a living shoreline.
4. Project installation: Generally, living shoreline structures will include planting marsh, riparian, or other types of aquatic vegetation. Bio-logs, organic fiber mats and oyster shells are also commonly used materials throughout installation.
5. Maintenance and monitoring: The restored habitat area should be regularly monitored upon completion to obtain data on project successes. The collection of such data will improve construction and implementation strategies for future projects. The site should also be maintained by replanting necessary vegetation, removing debris and adding sand fill when appropriate. The materials should also be monitored to ensure they are staying in place and achieving the desired shoreline stabilization goals.
Design: Materials, Vegetation zone: Clean dredge material and sand fill are generally used to construct a rolling slope to weaken wave energy and provide an area to plant vegetation.
Regrading, filling and replanting native vegetation can occur on sites that do not have a bulkhead or on sites where bulkheads have been removed. If removing the bulkhead is not feasible, another option is to fill sand in front of the structure and regrade and replant vegetation on the shoreline and embankment. Design: Roots from trees and grass stabilize the riparian area above high tide by gripping the soil. Such activity results in bank erosion minimization, wildlife habitat creation and upland runoff filtration. The type of plants that make up common riparian zones typically include grasses, shrubs and woody trees but the species of each are dependent on the naturally occurring vegetation of the area. Design: Wetland and beach areas Breakwaters provide erosion control and facilitate habitat development by breaking up wave activity in open-water areas. These structures, made with rock and oyster spat, should be placed in areas of medium to high wave energy and arranged parallel to the bank. Once implemented, the area around the shoreline should be calmer than before which can allow for the creation of marsh and intertidal habitat through the replanting of marsh grasses and other submerged aquatic plants. Design: Filter fabric is a key element in minimizing soil loss under rocks. This porous material made from natural elements is commonly implemented under breakwaters and rock sills or other hybrid living shoreline locations. Geotextile material tubes measure about 12 feet in diameter, are filled with sediment and aligned with the shoreline to weaken wave energy and protect against erosion. These tubes facilitate oyster reef development and create areas to dispose of new dredge material. Design: Low-crested rock sills are formed by the parallel arrangement and underwater placement of single rocks along shorelines and marshes. The rocks decrease erosion rates in these areas by dispelling oncoming wave energy. 
These sills are placed no more than 6 to 12 inches above the mean high water mark and are typically divided into sections to allow for the passage of boats, large waves and wildlife. Design: Mangroves play a critical role in shoreline stabilization through the trapping of nutrients and sediments and the dissipation of wave energy by their extensive root systems. The incorporation of mangroves into living shorelines could play a large role in decreasing erosion rates, since they naturally occur in subtropical and tropical estuarine areas. More specifically, mangroves are typically found in southern Florida, the Caribbean and some areas of southern Louisiana. Design: Marsh grasses are generally planted up to the mean high tide line and in the water of the intertidal zones to break up wave energy, provide fish and wildlife habitat and improve water quality through upland runoff filtration. Studies show that plantings may show more success when administered in the spring in areas with existing marsh, mild wind conditions and less than 3 miles of surrounding open water. Natural bio-logs/fiber logs can be used to reduce bank erosion and stabilize inclines when installed at the bottom of a slope or in the water, formed to the bank line and secured in place. The coconut fiber and netting are biodegradable and work to trap sediment, hold moisture to facilitate vegetative growth, and stabilize the bank while roots develop. Design: Natural fiber matting can be made from a combination of biodegradable, organic materials but is primarily made from jute, straw, coir fiber or wood. Placing such matting over an abruptly eroding slope minimizes sediment loss and catches sediments otherwise transported by wave dynamics. Natural fiber matting can also be implemented with riparian vegetation or marsh grass plantings to improve bank stabilization.
Design: Rock footers are small quantities of boulder or rock intended to enhance bank stabilization and add additional support to bio-logs. Rock footers can also be used to support the structure of the biodegradable fiber logs, so that they do not fall out in steeper areas of the bank. Rubble and recycled concrete can be used to form a breakwater offshore of a living shoreline site to refract wave energy before it hits the area. The addition of oyster spat to these breakwaters can simultaneously enhance water quality and facilitate habitat growth. Design: Submerged aquatic zone: Oyster shell reefs are another option when creating living shorelines. Oysters are critical in enhancing water quality and providing habitat to fish species, so the creation of oyster reefs to decrease shoreline erosion rates has many added benefits. In addition, the establishment of oyster reefs plays a role in protecting valuable aquatic vegetation of the marine ecosystem. To ensure a healthy reef, only clean oyster shells that have been sitting in the sun for adequate time should be used in the construction process. Design: Reef balls of oysters can achieve similar outcomes to oyster shell reefs but have a different implementation process. This type of artificial reef is made up of small, hollow concrete balls that facilitate the build-up of oyster shells as oyster spat take hold on the outside of the structure. An advantage of this implementation strategy is that it decreases poaching of oysters, which can be a common obstacle in living shoreline construction that uses oyster shells. Design: Seagrass beds create natural buffer zones against shoreline erosion when implemented in association with living shorelines. In addition, seagrass beds enhance water quality, improve sediment stabilization, supply habitat and food for aquatic organisms and dissipate high-energy waves.
Project Examples: VIMS teaching marsh, Gloucester Point, Virginia Jamestown 4-H Camp, James River, James City County, Virginia The Hermitage Museum and Gardens, Norfolk, Virginia Longwood University’s Hull Springs Farm, Westmoreland County, Virginia Jefferson Patterson Park & Museum, Calvert County, Maryland South River Federation, MD St John’s College, Annapolis, Maryland Magothy Beach Road, Pasadena, Maryland San Francisco Bay Piscataway Park, Potomac River, Maryland Delaware
**Creative Diagnostics** Creative Diagnostics: Creative Diagnostics is an American biotechnology company that specializes in the research and manufacturing of antibodies, viral antigens, diagnostic components, and critical assay reagents. Founding: Creative Diagnostics was founded in Shirley, New York, USA in 2005. Originally, the business was focused on monoclonal and polyclonal antibodies. Later, various kinds of antibodies, viral antigens, reagents, medical kits, and biological services were launched to broaden the company's activities. Partnerships: Since 2010, Creative Diagnostics has maintained a commercial partnership agreement with CD Genomics, Inc. The two companies also began a platform license agreement in 2012. Operations: Creative Diagnostics provides contract research and manufacturing services. Additionally, the company conducts ELISA testing. Other products include: Matched antibody pairs Anti-idiotypic Antibodies HBV Core Antigen Antibody Isotyping Kits Protein Antigen Expression Service Fluorescent Dye Labeling DNA Immunization Antibody Production
**Reconstructor** Reconstructor: Reconstructor is a commercial point cloud processing software package. Developed and marketed by the Italian software house Gexcel, Reconstructor was first released in September 2007 and has been continuously updated since then. It is a complete point cloud processing package that includes many tools for 3D reconstruction, post-processing, measurement, 3D modeling and content creation. The manufacturers Geomax, Stonex and Teledyne Optech have chosen Reconstructor technology for their customers. History: The technology behind Reconstructor originated in a 2007 technology transfer agreement between the Joint Research Centre and Gexcel, intended to bring lidar technology for international nuclear plant monitoring to market. Nowadays the software is built entirely in-house by Gexcel. Initially called JRC3DReconstructor, it is constantly evolving to adapt its structure, functionality and tools to surveyors' needs. With the 2019 release it became Reconstructor and introduced a new add-on structure. History: The core software, available with a perpetual licence or a monthly temporary licence, allows users to register, analyze, inspect, measure and share data. Sets of commands targeting specific industries can then be added: Land and Quarry (Mining add-on), Cultural Heritage (Color add-on), and mobile mapping datasets (HERON add-on). The technology behind Reconstructor is also academically traceable, and it is available with a special educational licence.
**Dishcloth** Dishcloth: A dishcloth or dishrag is used in the kitchen to clean or dry dishes and surfaces. Dishcloths are typically made of cotton or other fibres, such as microfiber, and measure 11 to 13 inches square. Dishcloths used for drying dishes are also known as tea towels. Microwave disinfection: Dishcloths are often left damp and provide a breeding ground for bacteria. Since the kitchen sink is used to clean food, dishcloths are routinely contaminated with E. coli and salmonella. In 2007, a study from the Journal of Environmental Health found that putting a damp dishcloth (or sponge) in the microwave for 2 minutes killed 99% of living pathogens. However, fire departments have subsequently warned people not to do this, as it can be a fire hazard, especially if the dishcloth or sponge is not sufficiently wet. Several small fires have been started as a result of people following the advice from the study.
**Macintosh Processor Upgrade Card** Macintosh Processor Upgrade Card: The generically named Macintosh Processor Upgrade Card (code-named STP) is a central processing unit upgrade card sold by Apple Computer, designed for many Motorola 68040-powered Macintosh LC, Quadra and Performa models. The card contains a PowerPC 601 CPU and plugs into the 68040 CPU socket of the upgraded machine. The upgrade card required the original CPU to be plugged back into the card itself, and gave the machine the ability to run in its original 68040 configuration or, through a software configuration utility, to boot as a PowerPC 601 computer running at twice the original clock speed (50 MHz or 66 MHz), with 32 KB of L1 cache, 256 KB of L2 cache and a PowerPC floating point unit available to software. The Macintosh Processor Upgrade Card requires, and shipped with, System 7.5. Development of the card started in July 1993. The upgrade card was announced in January 1994 at the MacWorld Expo in San Francisco. Apple described the Macintosh Processor Upgrade Card as giving a performance increase of "two to four times" for general purposes, or "up to 10 times" for floating-point-intensive programs. Macintosh Processor Upgrade Card: While the Macintosh Processor Upgrade Card did not plug into the LC Processor Direct Slot, due to the power used and the space taken by the upgrade, LC PDS cards could not be fitted while the card was installed. This limited the usefulness of the Processor Upgrade Card, as internal Ethernet, Apple IIe compatibility, video cards and other LC PDS expansion options had to be removed. Macintosh Processor Upgrade Card: The Macintosh Processor Upgrade Card can bring a 68k Mac that can normally only go up to Mac OS 8.1 to Mac OS 8.6 or newer, as long as the card is always in use. If the user turns off or disconnects the card, the machine will display a Sad Mac, as newer versions of Mac OS aren't compatible with 68k processors.
The Macintosh Processor Upgrade Card can only run up to Mac OS 9.1, as 9.2 onwards requires a G3 processor as a minimum. Macintosh Processor Upgrade Card: DayStar Digital manufactured the Macintosh Processor Upgrade Card for Apple, sold the same card as their DayStar PowerCard 601-50/66, and also manufactured a DayStar PowerCard 601/100, which reached 100 MHz. After DayStar went out of business, the 100 MHz model was manufactured and sold by Sonnet Technologies as their Sonnet Presto PPC 605.
**Nyko Wand** Nyko Wand: The Wand is a line of game controllers released by Nyko as third-party alternatives to the official Nintendo Wii Remote. The original Wand duplicated the functionality of the Wii Remote, while the updated Wand+ added internal replication of the Wii MotionPlus for more advanced motion sensing, similar to Nintendo's later Wii Remote Plus. The Wand series also adds additional functionality through a proprietary extension of the standard Wii Remote expansion port. Overview: The design of the Wand is largely similar to Nintendo's Wii Remote. Like the official controller, it has been made available in multiple colors, and features seven digital buttons and a D-pad on the face; a trigger button on the reverse; an infrared sensor for pointer controls; and motion sensing hardware. The latter was improved in the updated Wand+, which internally replicates the functionality of the Wii MotionPlus. The Wand expansion port also includes additional pins, dubbed "Trans-Port Technology" by their creator, which allow Wand-specific accessories to digitally activate the controller's buttons and receive haptic feedback information, features not available on the Wii Remote. History: Development on the Wand was first revealed publicly at the January 2009 Consumer Electronics Show (CES), where it was given a CNET Best of CES award in the Gaming category for "[improving] on the original Nintendo Wii remote". Coverage of the device's unveiling noted that the Wand was the first third-party Wii Remote alternative to be developed, and the controller was released to retail on May 21, 2009. At CES 2010, Nyko displayed an updated version of the Wand with integrated MotionPlus support, dubbed the Wand+.
At the time of its January unveiling, it was the only Wii controller with internal MotionPlus capability (Nintendo's Wii Remote Plus would not be announced until late in the year), and the Wand+ was nominated by CNET for Best of CES, in addition to being voted among the Best of CES 2010 by the editors of CrunchGear. The Wand+ became available on September 2, 2010. Accessories: The Wand's design makes it compatible with most standard Wii Remote accessories, including controller shells such as the Wii Wheel and Zapper, and expansion port devices like the Nunchuk and Classic Controller. Released prior to Nintendo's MotionPlus, the Wand was not initially compatible with the accessory; however, a firmware update was made available to Wand owners, and future shipments supported the device. Accessories: Kama: The Kama is Nyko's alternative to the official Nunchuk, and can be used with Wands as well as Wii Remotes and other alternatives. It is produced in both wireless and wired models. The wireless Kama uses two AAA batteries for power, while the wired version (which is powered via the expansion port) uses this space to include a rumble motor for haptic feedback when used with Wands, via Trans-Port. Accessories: Pistol Grip: The Pistol Grip is a gun shell for Wands, similar in design to Nyko's (mechanical) Perfect Shot, with digital inputs using Trans-Port. It is intended for use with light gun games, and features digital hammer and trigger buttons mapped to the Wand's A and B buttons, with a switch allowing their functions to be inverted depending on the game's controls. It also includes a rumble motor for haptic feedback, and a pass-through port for attaching other accessories to the Pistol Grip. Due to its reliance on features of the Wand, it is not compatible with other devices. Accessories: Type Pad Pro: The Type Pad Pro is a QWERTY keyboard shell for the Wand and other Wii Remote-compatible devices.
It connects wirelessly to the Wii via a USB dongle, and is powered by the remote's expansion port. When used with a Wand, buttons on the Type Pad are able to activate the A and B buttons digitally via Trans-Port. Reception: Reviews of Wand controllers have been moderate to positive. In their review of the original version, the Wand was described by Destructoid as being "as good as and in some ways superior to the original", with "better" buttons, improved grip and otherwise "identical" functionality, concluding with a "Buy it!" rating. IGN preferred the Wand's buttons to the Wii Remote's, noting that "the 1 and 2 buttons are much easier to get a firm grip and mash mercilessly on," the B trigger had "improved tactile response," and their responsiveness was "exceptional", while the Wand's motion controls were comparable with Nintendo's controller. UGO felt the larger buttons were "much more inline with a classic NES controller than the tiny Wii buttons", describing the controller as being superior to the Wii Remote for "more hardcore gamers". Other reviewers had more mixed feelings on the Wand. Nintendojo said the third-party controller was "a good alternative" but had "a few issues" when used with extension controllers, ultimately rating it a 7.5/10. CNET, while speaking positively about the motion controls, internal speaker and buttons, felt the redesigned D-pad was "clunky and cumbersome", and was disappointed by the lack of Trans-Port accessories at launch, giving it 3/5. Much criticism has been directed at the Wand's aesthetics. Reporting on the device's unveiling at CES, Destructoid called it a "twisted brainwrong" which "looks a bit like it was designed by half-crazed, starving Oompa Loompas on the back of a thirty day coke binge." Joystiq noted that "the defining characteristic of the Nyko Wand is its ugliness." Reception for the Wand+ in particular has been positive.
Engadget felt that including MotionPlus support in a stock-sized controller was "as it should have been in the first place", preferring it over the dongle solution, ending by "shockingly [having] to conclude it is equal to or superior to the stock Wiimote." GeekDad appreciated the use of "a newer, more muted color palette" over the "garish" original Wand, adding that it featured "functionality that the Nintendo product should have rightly delivered from the get-go" and was overall "as responsive as [a] first-party product".
**Lagrange polynomial** Lagrange polynomial: In numerical analysis, the Lagrange interpolating polynomial is the unique polynomial of lowest degree that interpolates a given set of data. Given a data set of coordinate pairs (xj, yj) with 0 ≤ j ≤ k, the xj are called nodes and the yj are called values. The Lagrange polynomial L(x) has degree ≤ k and assumes each value at the corresponding node, L(xj) = yj. Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring. It is also an easy consequence of a formula published in 1783 by Leonhard Euler. Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration, Shamir's secret sharing scheme in cryptography, and Reed–Solomon error correction in coding theory. For equispaced nodes, Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation.

Definition: Given a set of k + 1 nodes {x0, x1, …, xk}, which must all be distinct (xj ≠ xm for indices j ≠ m), the Lagrange basis for polynomials of degree ≤ k for those nodes is the set of polynomials {ℓ0(x), ℓ1(x), …, ℓk(x)}, each of degree k, which take the values ℓj(xm) = 0 if m ≠ j and ℓj(xj) = 1. Using the Kronecker delta this can be written ℓj(xm) = δjm. Each basis polynomial can be explicitly described by the product ℓj(x) = ∏_{m≠j} (x − xm)/(xj − xm). Notice that the numerator ∏_{m≠j}(x − xm) has k roots at the nodes {xm}_{m≠j}, while the denominator ∏_{m≠j}(xj − xm) scales the resulting polynomial so that ℓj(xj) = 1. The Lagrange interpolating polynomial for those nodes through the corresponding values {y0, y1, …, yk} is the linear combination L(x) = ∑_{j=0}^{k} yj ℓj(x). Each basis polynomial has degree k, so the sum L(x) has degree ≤ k, and it interpolates the data because L(xm) = ∑_{j=0}^{k} yj ℓj(xm) = ∑_{j=0}^{k} yj δjm = ym. The interpolating polynomial is unique. Proof: assume the polynomial M(x) of degree ≤ k interpolates the data. Then the difference M(x) − L(x) is zero at the k + 1 distinct nodes {x0, x1, …, xk}. But the only polynomial of degree ≤ k with more than k roots is the constant zero function, so M(x) − L(x) = 0, i.e. M(x) = L(x).

Barycentric form: Each Lagrange basis polynomial ℓj(x) can be rewritten as the product of three parts: a function ℓ(x) = ∏_m (x − xm) common to every basis polynomial, a node-specific constant wj = ∏_{m≠j}(xj − xm)⁻¹ (called the barycentric weight), and a factor 1/(x − xj) representing the displacement from xj to x. By factoring ℓ(x) out from the sum, we can write the Lagrange polynomial in the so-called first barycentric form: L(x) = ℓ(x) ∑_{j=0}^{k} (wj/(x − xj)) yj. If the weights wj have been pre-computed, this requires only O(k) operations, compared with O(k²) for evaluating each Lagrange basis polynomial ℓj(x) individually. The barycentric interpolation formula can also easily be updated to incorporate a new node x_{k+1}, by dividing each of the wj, j = 0, …, k, by (xj − x_{k+1}) and constructing the new w_{k+1} as above.
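As a concrete sketch of these definitions in Python (the nodes and values below are illustrative choices, taken to be f(x) = x² at 1, 2, 3), the basis polynomials, barycentric weights, and first barycentric form can be checked numerically:

```python
import math

def lagrange_basis(nodes, j, x):
    """Evaluate the j-th Lagrange basis polynomial:
    l_j(x) = prod_{m != j} (x - x_m) / (x_j - x_m)."""
    return math.prod(
        (x - xm) / (nodes[j] - xm) for m, xm in enumerate(nodes) if m != j
    )

def barycentric_weights(nodes):
    """w_j = prod_{m != j} (x_j - x_m)^{-1}."""
    return [
        1.0 / math.prod(xj - xm for m, xm in enumerate(nodes) if m != j)
        for j, xj in enumerate(nodes)
    ]

def first_barycentric(nodes, values, weights, x):
    """First barycentric form: L(x) = l(x) * sum_j (w_j/(x - x_j)) * y_j.
    Valid for x not equal to any node."""
    ell = math.prod(x - xm for xm in nodes)   # node polynomial l(x)
    return ell * sum(
        w / (x - xj) * y for w, xj, y in zip(weights, nodes, values)
    )

nodes = [1.0, 2.0, 3.0]        # illustrative nodes
values = [1.0, 4.0, 9.0]       # y_j = f(x_j) for f(x) = x^2
w = barycentric_weights(nodes)                      # [0.5, -1.0, 0.5]
L_half = first_barycentric(nodes, values, w, 2.5)   # L(2.5) = 2.5^2 = 6.25
```

Here the weights come out as w0 = 1/2, w1 = −1, w2 = 1/2, matching the worked example later in the article, and the basis evaluations satisfy ℓj(xm) = δjm.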
For any x, ∑_{j=0}^{k} ℓj(x) = 1, because the constant function g(x) = 1 is the unique polynomial of degree ≤ k interpolating the data {(x0, 1), (x1, 1), …, (xk, 1)}. We can thus further simplify the barycentric formula by dividing L(x) by g(x): L(x) = L(x)/g(x) = ℓ(x) ∑_{j=0}^{k} (wj/(x − xj)) yj / [ℓ(x) ∑_{j=0}^{k} wj/(x − xj)] = [∑_{j=0}^{k} (wj/(x − xj)) yj] / [∑_{j=0}^{k} wj/(x − xj)]. This is called the second form or true form of the barycentric interpolation formula.

Barycentric form: This second form has advantages in computation cost and accuracy: it avoids evaluation of ℓ(x); the work to compute each term in the denominator wj/(x − xj) has already been done in computing (wj/(x − xj)) yj, so computing the sum in the denominator costs only k − 1 addition operations; and for evaluation points x close to one of the nodes xj, catastrophic cancellation would ordinarily be a problem for the value (x − xj), but this quantity appears in both numerator and denominator, and the two cancel, leaving good relative accuracy in the final result.

Barycentric form: Using this formula to evaluate L(x) at one of the nodes xj will result in the indeterminate ∞·yj/∞; computer implementations must replace such results by L(xj) = yj. Each Lagrange basis polynomial can also be written in barycentric form: ℓj(x) = (wj/(x − xj)) / ∑_{m=0}^{k} wm/(x − xm). A perspective from linear algebra: Solving an interpolation problem leads to a problem in linear algebra amounting to inversion of a matrix. Using the standard monomial basis for our interpolation polynomial, L(x) = ∑_{j=0}^{k} mj x^j, we must invert the Vandermonde matrix (xi)^j to solve L(xi) = yi for the coefficients mj of L(x). By choosing a better basis, the Lagrange basis L(x) = ∑_{j=0}^{k} ℓj(x) yj, we merely get the identity matrix δij, which is its own inverse: the Lagrange basis automatically inverts the analog of the Vandermonde matrix.
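A minimal Python sketch of the second (true) barycentric form, including the special case required when evaluating exactly at a node (the data below are illustrative, f(x) = x²):

```python
import math

def second_barycentric(nodes, values, x):
    """True barycentric form:
    L(x) = sum_j (w_j/(x - x_j)) y_j  /  sum_j (w_j/(x - x_j)),
    with the indeterminate case at a node replaced by L(x_j) = y_j."""
    weights = [
        1.0 / math.prod(xj - xm for m, xm in enumerate(nodes) if m != j)
        for j, xj in enumerate(nodes)
    ]
    num = den = 0.0
    for xj, yj, wj in zip(nodes, values, weights):
        if x == xj:            # evaluation at a node: avoid 0/0
            return yj
        t = wj / (x - xj)      # term shared by numerator and denominator
        num += t * yj
        den += t
    return num / den

nodes = [1.0, 2.0, 3.0]        # illustrative data: f(x) = x^2
values = [1.0, 4.0, 9.0]
```

Calling `second_barycentric(nodes, values, 2.0)` returns the node value 4.0 directly, while `second_barycentric(nodes, values, 2.5)` evaluates the quotient and gives 6.25.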
A perspective from linear algebra: This construction is analogous to the Chinese remainder theorem: instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by linear polynomials. Furthermore, when the order is large, the fast Fourier transform can be used to solve for the coefficients of the interpolated polynomial.

Example: We wish to interpolate f(x) = x² over the domain 1 ≤ x ≤ 3, at the three nodes {1, 2, 3} with values {1, 4, 9}. The node polynomial is ℓ(x) = (x − 1)(x − 2)(x − 3) = x³ − 6x² + 11x − 6. The barycentric weights are w0 = (1−2)⁻¹(1−3)⁻¹ = 1/2, w1 = (2−1)⁻¹(2−3)⁻¹ = −1, w2 = (3−1)⁻¹(3−2)⁻¹ = 1/2. The Lagrange basis polynomials are ℓ0(x) = (x−2)(x−3)/2, ℓ1(x) = −(x−1)(x−3), ℓ2(x) = (x−1)(x−2)/2. The Lagrange interpolating polynomial is: L(x) = 1·(x−2)(x−3)/((1−2)(1−3)) + 4·(x−1)(x−3)/((2−1)(2−3)) + 9·(x−1)(x−2)/((3−1)(3−2)) = (1/2)(x−2)(x−3) − 4(x−1)(x−3) + (9/2)(x−1)(x−2) = x². In (second) barycentric form, L(x) = [∑_{j=0}^{2} (wj/(x−xj)) yj] / [∑_{j=0}^{2} wj/(x−xj)] = [(1/2)/(x−1) − 4/(x−2) + (9/2)/(x−3)] / [(1/2)/(x−1) − 1/(x−2) + (1/2)/(x−3)].

Notes: The Lagrange form of the interpolation polynomial shows the linear character of polynomial interpolation and the uniqueness of the interpolation polynomial. Therefore, it is preferred in proofs and theoretical arguments. Uniqueness can also be seen from the invertibility of the Vandermonde matrix, due to the non-vanishing of the Vandermonde determinant. Notes: But, as can be seen from the construction, each time a node xk changes, all Lagrange basis polynomials have to be recalculated. A better form of the interpolation polynomial for practical (or computational) purposes is the barycentric form of the Lagrange interpolation (see above) or Newton polynomials. Lagrange and other interpolation at equally spaced points, as in the example above, yield a polynomial oscillating above and below the true function. This behaviour tends to grow with the number of points, leading to a divergence known as Runge's phenomenon; the problem may be eliminated by choosing interpolation points at Chebyshev nodes. The Lagrange basis polynomials can be used in numerical integration to derive the Newton–Cotes formulas.
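The Vandermonde viewpoint from the linear-algebra perspective above can be checked numerically. This sketch (my own illustration, reusing the example's three nodes) solves V m = y by a tiny Gaussian elimination and recovers the monomial coefficients of x²:

```python
def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (illustrative only)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):   # back-substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

nodes = [1.0, 2.0, 3.0]
V = [[xi ** j for j in range(3)] for xi in nodes]   # Vandermonde matrix (x_i)^j
y = [1.0, 4.0, 9.0]                                 # values of f(x) = x^2
m = solve(V, y)                                     # monomial coefficients m_j
```

The solution is m ≈ [0, 0, 1], i.e. L(x) = x², the same polynomial the Lagrange basis yields without any matrix inversion.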
Remainder in Lagrange interpolation formula: When interpolating a given function f by a polynomial of degree k at the nodes x0, …, xk, we get the remainder R(x) = f(x) − L(x), which can be expressed as R(x) = f[x0, …, xk, x] ℓ(x) = ℓ(x) f^(k+1)(ξ)/(k+1)!, for some x0 < ξ < xk, where f[x0, …, xk, x] is the notation for divided differences. Alternatively, the remainder can be expressed as a contour integral in the complex domain as R(x) = (ℓ(x)/(2πi)) ∮_C f(t)/((t − x)(t − x0)⋯(t − xk)) dt = (ℓ(x)/(2πi)) ∮_C f(t)/((t − x) ℓ(t)) dt. The remainder can be bounded as |R(x)| ≤ (|ℓ(x)|/(k+1)!) max_{x0 ≤ ξ ≤ xk} |f^(k+1)(ξ)|. (For instance, interpolating f(x) = x³ at three nodes gives R(x) = ℓ(x) f'''(ξ)/3! = ℓ(x) exactly, since x³ − L(x) is the monic cubic vanishing at the three nodes.)

Remainder in Lagrange interpolation formula: Derivation. Clearly, R(x) is zero at the nodes. To find R(x) at a point xp, define a new function F(x) = R(x) − R̃(x) = f(x) − L(x) − R̃(x), and choose R̃(x) = C·∏_{i=0}^{k}(x − xi), where C is the constant we are required to determine for a given xp. We choose C so that F(x) has k + 2 zeroes (at all the nodes and at xp) between x0 and xk (including endpoints). Assuming that f(x) is (k+1)-times differentiable, and since L(x) and R̃(x) are polynomials and therefore infinitely differentiable, F(x) will be (k+1)-times differentiable. By Rolle's theorem, F′(x) has k + 1 zeroes, F″(x) has k zeroes, …, and F^(k+1)(x) has 1 zero, say ξ, with x0 < ξ < xk. Explicitly writing F^(k+1)(ξ): F^(k+1)(ξ) = f^(k+1)(ξ) − L^(k+1)(ξ) − R̃^(k+1)(ξ), where L^(k+1) = 0 and R̃^(k+1) = C·(k+1)! (because the highest power of x in R̃(x) is k+1), so 0 = f^(k+1)(ξ) − C·(k+1)!. The equation can be rearranged as C = f^(k+1)(ξ)/(k+1)!. Since F(xp) = 0, we have R(xp) = R̃(xp) = (f^(k+1)(ξ)/(k+1)!) ∏_{i=0}^{k}(xp − xi).

Derivatives: The d-th derivative of a Lagrange interpolating polynomial can be written in terms of the derivatives of the basis polynomials: L^(d)(x) = ∑_{j=0}^{k} yj ℓj^(d)(x). Recall (see § Definition above) that each Lagrange basis polynomial is ℓj(x) = ∏_{m≠j}(x − xm)/(xj − xm). The first derivative can be found using the product rule: ℓj′(x) = ∑_{i≠j} [(1/(xj − xi)) ∏_{m≠(i,j)}(x − xm)/(xj − xm)] = ℓj(x) ∑_{i≠j} 1/(x − xi).
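The first-derivative identity ℓj′(x) = ℓj(x) ∑_{i≠j} 1/(x − xi), valid away from the nodes, can be sketched and checked in Python (the data are illustrative: f(x) = x², so L′(x) = 2x):

```python
import math

def basis(nodes, j, x):
    """l_j(x) = prod_{m != j} (x - x_m)/(x_j - x_m)."""
    return math.prod((x - xm) / (nodes[j] - xm)
                     for m, xm in enumerate(nodes) if m != j)

def basis_deriv(nodes, j, x):
    """l_j'(x) = l_j(x) * sum_{i != j} 1/(x - x_i), for x not a node."""
    return basis(nodes, j, x) * sum(1.0 / (x - xi)
                                    for i, xi in enumerate(nodes) if i != j)

nodes = [1.0, 2.0, 3.0]
values = [1.0, 4.0, 9.0]       # f(x) = x^2, so L(x) = x^2 and L'(x) = 2x
x = 2.5
L_prime = sum(y * basis_deriv(nodes, j, x) for j, y in enumerate(values))
```

At x = 2.5 this gives L′(2.5) = 5.0, agreeing with 2x for the interpolated parabola.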
The second derivative is
$$\ell_j''(x) = \sum_{\substack{i=0 \\ i \ne j}}^{k} \frac{1}{x_j - x_i} \left[\sum_{\substack{m=0 \\ m \ne i,j}}^{k} \left(\frac{1}{x_j - x_m} \prod_{\substack{n=0 \\ n \ne i,j,m}}^{k} \frac{x - x_n}{x_j - x_n}\right)\right] = \ell_j(x) \sum_{\substack{0 \le i < m \le k \\ i,m \ne j}} \frac{2}{(x - x_i)(x - x_m)} = \ell_j(x)\left[\left(\sum_{\substack{i=0 \\ i \ne j}}^{k} \frac{1}{x - x_i}\right)^2 - \sum_{\substack{i=0 \\ i \ne j}}^{k} \frac{1}{(x - x_i)^2}\right].$$
The third derivative is
$$\ell_j'''(x) = \ell_j(x) \sum_{\substack{0 \le i < m < n \le k \\ i,m,n \ne j}} \frac{3!}{(x - x_i)(x - x_m)(x - x_n)},$$
and likewise for higher derivatives. Finite fields: The Lagrange polynomial can also be computed in finite fields. This has applications in cryptography, such as in Shamir's Secret Sharing scheme.
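As a hedged sketch of the finite-field application, the following reconstructs the constant term of a polynomial from three points modulo a prime, in the style of Shamir's Secret Sharing (the prime, polynomial, and share points are illustrative choices, not from any particular implementation):

```python
# Lagrange interpolation over a finite field GF(p), recovering the
# constant term f(0) from shares, as in Shamir's Secret Sharing.
P = 2**13 - 1  # a small Mersenne prime, for demonstration only

def interpolate_at_zero(shares, p=P):
    """Reconstruct f(0) from points (x_j, y_j) by Lagrange interpolation mod p."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        # ell_j(0) = prod_{m != j} (0 - x_m) / (x_j - x_m)  (mod p)
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % p
                den = den * (xj - xm) % p
        secret = (secret + yj * num * pow(den, -1, p)) % p  # modular inverse
    return secret

# Shares of the hypothetical polynomial f(x) = 1234 + 166x + 94x^2 mod P:
shares = [(x, (1234 + 166 * x + 94 * x * x) % P) for x in (2, 4, 5)]
print(interpolate_at_zero(shares))  # 1234, the secret constant term
```

Because three points determine a degree-2 polynomial exactly, any three shares recover the secret, while fewer reveal nothing about it.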
**Pluricentric language** Pluricentric language: A pluricentric language or polycentric language is a language with several interacting codified standard forms, often corresponding to different countries. Many examples of such languages can be found worldwide among the most-spoken languages, including but not limited to Chinese in mainland China, Taiwan and Singapore; English in the United Kingdom, the United States, India, and elsewhere; and French in France, Canada, and elsewhere. The converse case is a monocentric language, which has only one formally standardized version. Examples include Japanese and Russian. Pluricentric language: In some cases, the different standards of a pluricentric language may be elaborated until they become autonomous languages, as happened with Malaysian and Indonesian, and with Hindi and Urdu. The same process is under way in Serbo-Croatian. Examples of varying degrees of pluricentrism: Arabic Pre-Islamic Arabic can be considered a polycentric language. In Arabic-speaking countries different levels of polycentricity can be detected. Modern Arabic is a pluricentric language with varying branches correlating with different regions where Arabic is spoken and the type of communities speaking it. The vernacular varieties of Arabic include: Peninsular Arabic Hejazi Arabic (urban cities of western Saudi Arabia) Najdi Arabic (much of central Saudi Arabia) Omani Arabic Gulf Arabic (spoken around the coasts of the Persian Gulf in Kuwait, Bahrain, Qatar, the United Arab Emirates, as well as parts of Saudi Arabia, Iraq, Iran, and Oman) Yemeni Arabic Levantine Arabic (spoken in the Levant region) Syrian Arabic Jordanian Arabic Lebanese Arabic Palestinian Arabic, Maghrebi Arabic (spoken in the Maghreb region) Algerian Arabic Libyan Arabic Moroccan Arabic Tunisian Arabic, Mesopotamian Arabic Baghdad Arabic, Egyptian Arabic, Sudanese Arabic, and many others. In addition, many speakers use Modern Standard Arabic in education and formal settings.
Therefore, in Arabic-speaking communities, diglossia is frequent. Examples of varying degrees of pluricentrism: Armenian The Armenian language is a pluricentric language with two standard varieties, Eastern Armenian and Western Armenian, which have developed as separate literary languages since the eighteenth century. Prior to this, almost all Armenian literature was written in Classical Armenian, which is now solely used as a liturgical language. Eastern and Western Armenian can also refer to the two major dialectal blocks into which the various non-standard dialects of Armenian are categorized. Eastern Armenian is the official language of the Republic of Armenia. It is also spoken, with dialectal variations, by Iranian Armenians, Armenians in Karabakh (see Karabakh dialect), and in the Armenian diaspora, especially in the former Soviet Union (Russia, Georgia, Ukraine). Western Armenian is spoken mainly in the Armenian diaspora, especially in the Middle East, France, the US, and Canada. Examples of varying degrees of pluricentrism: Additionally, Armenian is written in two standard orthographies: classical and reformed Armenian orthography. The former is used by practically all speakers of Western Armenian and by Armenians in Iran, while the latter, which was developed in Soviet Armenia in the 20th century, is used in Armenia and Nagorno-Karabakh. Catalan–Valencian–Balearic The term "Catalan–Valencian–Balearic" is seldom used (for example, in a dictionary by Antoni Maria Alcover i Sureda). Examples of varying degrees of pluricentrism: This language is internationally known as Catalan, as in Ethnologue. This is also the most commonly used name in Catalonia, but also in Andorra and the Balearic Islands, probably due to the prestige of the Central Catalan dialect spoken in and around Barcelona. However, in the Valencian Community, the official name of this language is Valencian. 
One reason for this is political (see Serbo-Croatian for a similar situation), but this variant does have its own literary tradition that dates back to the Reconquista. Examples of varying degrees of pluricentrism: Although mutually intelligible with other varieties of Catalan, Valencian has lexical peculiarities and its own spelling rules, which are set out by the Acadèmia Valenciana de la Llengua, created in 1998. However, this institution recognizes that Catalan and Valencian are varieties of the same language. For their part, there are specific varieties in the two major Balearic islands, Mallorcan (mallorquí) in Mallorca, Menorcan (menorquí) in Menorca, Eivissenc in Eivissa. The University of the Balearic Islands is the language regulator for these varieties. Examples of varying degrees of pluricentrism: Chinese Until the mid-20th century, most Chinese people spoke only their local varieties of Chinese. These varieties had diverged widely from the written form used by scholars, Literary Chinese, which was modelled on the language of the Chinese classics. As a practical measure, officials of the Ming and Qing dynasties carried out the administration of the empire using a common language based on northern varieties, known as Guānhuà (官話, literally "speech of officials"), known as Mandarin in English after the officials. Knowledge of this language was thus essential for an official career, but it was never formally defined. In the early years of the 20th century, Literary Chinese was replaced as the written standard by written vernacular Chinese, which was based on northern dialects. In the 1930s, a standard national language Guóyǔ (國語, literally "national language") was adopted, with its pronunciation based on the Beijing dialect, but with vocabulary also drawn from other northern varieties.
After the establishment of the People's Republic of China in 1949, the standard was known as Pǔtōnghuà (普通话/普通話, literally "common speech"), but was defined in the same way as Guóyǔ in the Republic of China now governing Taiwan. It also became one of the official languages of Singapore, under the name Huáyǔ (华语/華語, literally "Chinese language"). Examples of varying degrees of pluricentrism: Although the three standards remain close, they have diverged to some extent. Most Mandarin speakers in Taiwan and Singapore came from the southeast coast of China, where the local dialects lack the retroflex initials /tʂ tʂʰ ʂ/ found in northern dialects, so that many speakers in those places do not distinguish them from the apical sibilants /ts tsʰ s/. Similarly, retroflex codas (erhua) are typically avoided in Taiwan and Singapore. There are also differences in vocabulary, with Taiwanese Mandarin absorbing loanwords from Min Chinese, Hakka Chinese, and Japanese, and Singaporean Mandarin borrowing words from English, Malay, and southern varieties of Chinese. Examples of varying degrees of pluricentrism: Eastern South Slavic (Bulgarian–Macedonian–Torlakian (Gorani)–Paulician (Banat)) Some linguists and scholars, mostly from Bulgaria and Greece, but some also from other countries, consider Eastern South Slavic to be a pluricentric language with four standards: Bulgarian (based on the Rup, Balkan and Moesian ("Eastern Bulgarian") dialects), Macedonian (based on the Western and Central Macedonian dialects), Gorani (based on the Torlakian dialects), and Paulician (including Banat Bulgarian). Politicians and nationalists from Bulgaria are likely to refer to this entire grouping as 'Bulgarian', and to be particularly hostile to the notion that Macedonian is an autonomous language separate from Bulgarian, which Macedonian politicians and citizens tend to claim. 
As of 2021, the hypothesis that Eastern South Slavic, 'Greater Bulgarian', 'Bulgaro-Macedonian', or simply 'Bulgarian', is a pluricentric language with several mutually intelligible official standards in the same way that Serbo-Croatian is, and Czechoslovak used to be, has not yet been fully developed in linguistics; it is a popular idea in Bulgarian politics, but an unpopular one in North Macedonia. Examples of varying degrees of pluricentrism: English English is a pluricentric language, with differences in pronunciation, vocabulary, spelling, etc., between each of the constituent countries of the United Kingdom, North America, the Caribbean, Ireland, English-speaking African countries, Singapore, India, and Oceania. Educated native English speakers using their version of one of the standard forms of English are almost completely mutually intelligible, but non-standard forms present significant dialectal variations and are marked by reduced intelligibility. Examples of varying degrees of pluricentrism: British and American English are the two most commonly taught varieties in education systems where English is taught as a second language. British English tends to predominate in Europe and the former British colonies of the West Indies, Africa, and Asia, where English is not the first language of the majority of the population. (The Falkland Islands, a British territory off the southeast coast of South America with English as its native language, have their own dialect, while British English is the standard.) In contrast, American English tends to dominate instruction in Latin America, Liberia, and East Asia. (In Latin America, British English is taught in schools with a British curriculum, chiefly in countries with descendants of British settlers.)
Due to globalization and the resulting spread of the language in recent decades, English is becoming increasingly decentralized, with daily use and statewide study of the language in schools growing in most regions of the world. However, in the global context, the number of native speakers of English is much smaller than the number of non-native speakers of English of reasonable competence. In 2018, it was estimated that for every native speaker of English there are six non-native speakers of reasonable competence, raising questions about English as a lingua franca being the most widely spoken form of the language. Examples of varying degrees of pluricentrism: Philippine English (which is predominantly spoken as a second language) has been primarily influenced by American English. The rise of the call center industry in the Philippines has encouraged some Filipinos to "polish" or neutralize their accents to make them more closely resemble the accents of their client countries. Examples of varying degrees of pluricentrism: Countries such as Australia, New Zealand, and Canada have their own well-established varieties of English which are the standard within those countries but are far more rarely taught overseas to second language learners. (Standard English in Australia and New Zealand is related to British English in its common pronunciation and vocabulary; a similar relationship exists between Canadian English and American English.) English was historically pluricentric when it was used across the independent kingdoms of England and Scotland prior to the Acts of Union in 1707. English English and Scottish English are now subsections of British English.
Examples of varying degrees of pluricentrism: French In the modern era, there are several major loci of the French language, including Standard French (also known as Parisian French), Canadian French (including Quebec French and Acadian French), American French (for instance, Louisiana French), Haitian French, and African French. Examples of varying degrees of pluricentrism: Until the early 20th century, the French language was highly variable in pronunciation and vocabulary within France, with varying dialects and degrees of intelligibility, the langues d'oïl. However, government policy made it so that the dialect of Paris would be the method of instruction in schools, and other dialects, like Norman, which has influence from Scandinavian languages, were neglected. Controversy still remains in France over the fact that the government recognizes them as languages of France, but provides no monetary support for them, nor has the Constitutional Council of France ratified the Charter for Regional or Minority Languages. Examples of varying degrees of pluricentrism: North American French is the result of French colonization of the New World between the 17th and 18th centuries. In many cases, it contains vocabulary and dialectal quirks not found in Standard Parisian French owing to history: most of the original settlers of Quebec, Acadia, and later what would become Louisiana and northern New England came from Northern and Northwest France, and would have spoken dialects like Norman, Poitevin, and Angevin, with far fewer speaking the dialect of Paris. This, plus isolation from developments in France, most notably the drive for standardization by L'Académie française, makes North American dialects of the language quite distinct. Acadian French, spoken in New Brunswick, Canada, contains many words that are much older than anything found in modern France, much of it having roots in the 17th century, and a distinct intonation.
Québécois, the largest of the dialects, has a distinct pronunciation not found in Europe in any measure, greater differences in vowel pronunciation, and syntax that tends to vary greatly. Cajun French has some distinctions not found in Canada: it has more vocabulary derived from both local Native American and African dialects, and a pronunciation of the letter r that has disappeared entirely in France. The r is rolled, and with heavier contact with the English language than any of the above varieties, the pronunciation shifted to harder-sounding consonants in the 20th century. Cajun French has likewise been an oral language for generations, and only recently have its syntax and features been adapted to French orthography. Examples of varying degrees of pluricentrism: Minor standards can also be found in Belgium and Switzerland, with particular influence of Germanic languages on grammar and vocabulary, sometimes through the influence of local dialects. In Belgium, for example, various Germanic influences in spoken French are evident in Wallonia (for example, to blink in English, and blinken in German and Dutch, blinquer in Walloon and local French, cligner in standard French). Ring (rocade or périphérique in standard French) is a common word in the three national languages for beltway or ring road. Also, in Belgium and Switzerland, there are noted differences in the number system when compared to standard Parisian or Canadian French, notably in the use of septante, octante/huitante and nonante for the numbers 70, 80 and 90. In other standards of French, these numbers are usually denoted soixante-dix (sixty-ten), quatre-vingts (four-twenties) and quatre-vingt-dix (four-twenties-and-ten). Examples of varying degrees of pluricentrism: French varieties spoken in Oceania are also influenced by local languages. New Caledonian French is influenced by Kanak languages in its vocabulary and grammatical structure. African French is another variety.
Examples of varying degrees of pluricentrism: German Standard German is often considered an asymmetric pluricentric language; the standard used in Germany is often considered dominant, mostly because of the sheer number of its speakers and their frequent lack of awareness of the Austrian Standard German and Swiss Standard German varieties. Although there is a uniform stage pronunciation based on a manual by Theodor Siebs that is used in theatres, and, nowadays to a lesser extent, in radio and television news all across German-speaking countries, this is not true for the standards applied at public occasions in Austria, South Tyrol and Switzerland, which differ in pronunciation, vocabulary, and sometimes even grammar. (In Switzerland, the letter ß has been removed from the alphabet, with ss as its replacement.) Sometimes this even applies to news broadcasts in Bavaria, a German state with a strong separate cultural identity. The varieties of Standard German used in those regions are to some degree influenced by the respective dialects (but by no means identical to them), by specific cultural traditions (e.g. in culinary vocabulary, which differs markedly across the German-speaking area of Europe), and by different terminology employed in law and administration. A list of Austrian terms for certain food items has even been incorporated into EU law, even though it is clearly incomplete. Examples of varying degrees of pluricentrism: Hindustani The Hindi languages are a large dialect continuum defined as a unit culturally. Medieval Hindustani (then known as Hindavi) was based on a register of the Delhi dialect and has two modern literary forms, Standard Hindi and Standard Urdu. Additionally, there are historical literary standards, such as the closely related Braj Bhasha and the more distant Awadhi, as well as recently established standard languages based on what were once considered Hindi dialects: Maithili and Dogri. 
Other varieties, such as Rajasthani, are often considered distinct languages but have no standard form. Caribbean Hindi and Fijian Hindi also differ significantly from the Sanskritized standard Hindi spoken in India. Examples of varying degrees of pluricentrism: Malay–Indonesian From a purely linguistic viewpoint, Malaysian and Indonesian are two normative varieties of the same language (Malay). Both lects have the same dialectal basis, and linguistic sources still tend to treat the standards as different forms of a single language. In popular parlance, however, the two varieties are often thought of as distinct tongues in their own right, due to the growing divergence between them and for politically motivated reasons. Nevertheless, they retain a high degree of mutual intelligibility despite a number of differences in vocabulary and grammar. The Malay language itself has many local dialects and creolized versions, whereas the "Indonesian language", the standardized variety in Indonesia acting as a lingua franca of the country, has received a great number of international and local influences. Examples of varying degrees of pluricentrism: Malayalam Malayalam is a pluricentric language with historically more than one written form. Malayalam script is officially recognized, but there are other standardized varieties such as Arabi Malayalam of Mappila Muslims, Karshoni of Saint Thomas Christians and Judeo-Malayalam of Cochin Jews. Persian The Persian language has three standard varieties with official status in Iran (locally known as Farsi), Afghanistan (officially known as Dari), and Tajikistan (officially known as Tajik). The standard forms of the three are based on the Tehrani, Kabuli, and Dushanbe varieties, respectively. Examples of varying degrees of pluricentrism: The Persian alphabet is used for both Farsi (Iranian) and Dari (Afghan). Traditionally, Tajiki is also written with Perso-Arabic script.
In order to increase literacy, a Latin alphabet (based on the Common Turkic Alphabet) was introduced in 1917. Later, in the late 1930s, the Tajik Soviet Socialist Republic promoted the use of the Cyrillic alphabet, which remains the most widely used system today. Attempts to reintroduce the Perso-Arabic script have been made. The language spoken by Bukharan Jews is called Bukhori (or Bukharian) and is written in the Hebrew alphabet. Examples of varying degrees of pluricentrism: Portuguese Apart from the Galician question, Portuguese varies mainly between Brazilian Portuguese and European Portuguese (also known as "Lusitanian Portuguese", "Standard Portuguese" or even "Portuguese Portuguese"). Both varieties have undergone significant and divergent developments in phonology and in the grammar of their pronominal systems. The result is that communication between the two varieties of the language without previous exposure can occasionally be difficult, although speakers of European Portuguese tend to understand Brazilian Portuguese better than vice versa, due to heavy exposure to music, soap operas, etc. from Brazil. Word ordering can be dramatically different between European and Brazilian Portuguese. Brazilian and European Portuguese currently have two distinct, albeit similar, spelling standards. A unified orthography for the two varieties (including a limited number of words with dual spelling) has been approved by the national legislatures of Brazil and Portugal and is now official; see Spelling reforms of Portuguese for additional details. Formal written standards remain grammatically close to each other, despite some minor syntactic differences. Examples of varying degrees of pluricentrism: African Portuguese and Asian Portuguese are based on the standard European dialect, but have undergone their own phonetic and grammatical developments, sometimes reminiscent of the spoken Brazilian variant.
A number of creoles of Portuguese have developed in African countries, for example in Guinea-Bissau and on the island of São Tomé. Examples of varying degrees of pluricentrism: Serbo-Croatian Serbo-Croatian is a pluricentric language with four standards (Bosnian, Croatian, Montenegrin, and Serbian) promoted in Bosnia and Herzegovina, Croatia, Montenegro, and Serbia. These standards do differ slightly, but do not hinder mutual intelligibility. Rather, as all four standardised varieties are based on the prestige Shtokavian dialect, major differences in intelligibility are identified not on the basis of standardised varieties, but rather dialects, like Kajkavian and Chakavian. "Lexical differences between the ethnic variants are extremely limited, even when compared with those between closely related Slavic languages (such as standard Czech and Slovak, Bulgarian and Macedonian), and grammatical differences are even less pronounced. More importantly, complete understanding between the ethnic variants of the standard language makes translation and second language teaching impossible." Spanish Spanish has both national and regional linguistic norms, which vary in terms of vocabulary, grammar, and pronunciation, but all varieties are mutually intelligible and the same orthographic rules are shared throughout. In Spain, Standard Spanish is based upon the speech of educated speakers from Madrid. All varieties spoken in the Iberian Peninsula are grouped as Peninsular Spanish. Canarian Spanish (spoken in the Canary Islands), along with Spanish spoken in the Americas (including Spanish spoken in the United States, Central American Spanish, Mexican Spanish, Andean Spanish, and Caribbean Spanish), are particularly related to Andalusian Spanish. Examples of varying degrees of pluricentrism: The United States is now the world's second-largest Spanish-speaking country after Mexico in total number of speakers (L1 and L2 speakers).
A report said there are 41 million L1 Spanish speakers and another 11.6 million L2 speakers in the U.S. This puts the US ahead of Colombia (48 million) and Spain (46 million) and second only to Mexico (121 million). The Spanish of Latin Americans has a growing influence on the language across the globe through music, culture and television produced using the language of the largely bilingual speech community of US Latinos. In Argentina and Uruguay the Spanish standard is based on the local dialects of Buenos Aires and Montevideo. This is known as Rioplatense Spanish (from Río de la Plata, the River Plate) and is distinguishable from other standard Spanish dialects by voseo. In Colombia, Rolo (a name for the dialect of Bogotá) is valued for its clear pronunciation. The Judeo-Spanish (also known as Ladino; not to be confused with Latino) spoken by Sephardi Jews can be found in Israel and elsewhere; it is usually considered a separate language. Examples of varying degrees of pluricentrism: Swedish Two varieties exist, though only one written standard remains (regulated by the Swedish Academy of Sweden): Rikssvenska (literally "Realm Swedish", also less commonly known as "Högsvenska", 'High Swedish', in Finland), the official language of Sweden, and Finlandssvenska, which, alongside Finnish, is the other official language of Finland. There are differences in vocabulary and grammar, with the variety used in Finland remaining a little more conservative. The most marked differences are in pronunciation and intonation: whereas Swedish speakers usually pronounce /k/ before front vowels as [ɕ], this sound is usually pronounced by a Swedo-Finn as [t͡ʃ]; in addition, the two tones that are characteristic of Swedish (and Norwegian) are absent from most Finnish dialects of Swedish, which have an intonation reminiscent of Finnish and thus sound more monotonous when compared to Rikssvenska.
Examples of varying degrees of pluricentrism: There are dialects that could be considered different languages due to long periods of isolation and geographical separation from the central dialects of Svealand and Götaland that came to constitute the base for the standard Rikssvenska. Dialects such as Elfdalian, Jamtlandic, and Gutnish all differ as much, or more, from standard Swedish than the standard varieties of Danish. Some of them have a standardized orthography, but the Swedish government has not granted any of them official recognition as regional languages and continues to look upon them as dialects of Swedish. Most of them are severely endangered and spoken by elderly people in the countryside. Examples of varying degrees of pluricentrism: Tamil The vast majority of Tamil speakers reside in southern India, where it is the official language of Tamil Nadu and of Puducherry, and one of 22 languages listed in the Eighth Schedule to the Constitution of India. It is also one of two official languages in Sri Lanka, one of four official languages in Singapore, and is used as the medium of instruction in government-aided Tamil primary schools in Malaysia. Other parts of the world have Tamil-speaking populations, but are not loci of planned development. Tamil is diglossic, with the literary variant used in books, poetry, speeches and news broadcasts while the spoken variant is used in everyday speech, online messaging and movies. While there are significant differences in the standard spoken forms of the different countries, the literary register is mostly uniform, with some differences in semantics that are not perceived by native speakers. There has been no attempt to compile a dictionary of Sri Lankan Tamil. As a result of the Pure Tamil Movement, Indian Tamil tends to avoid loanwords to a greater extent than Sri Lankan Tamil. Coinages of new technical terms also differ between the two.
Tamil policy in Singapore and Malaysia tends to follow that of Tamil Nadu regarding linguistic purism and technical coinages. There are some spelling differences, particularly in the greater use of Grantha letters to write loanwords and foreign names in Sri Lanka, Singapore and Malaysia. The Tamil Nadu script reform of 1978 has been accepted in Singapore and Malaysia, but not Sri Lanka. Others Standard Irish (Gaeilge), Scottish Gaelic and possibly Manx can be viewed as three standards that arose through divergence from the Classical Gaelic norm via orthographic reforms. Komi, a Uralic language spoken in northeastern European Russia, has official standards for its Komi-Zyrian and Komi-Permyak dialects. Korean: North and South (to some extent; differences are growing; see North–South differences in the Korean language and Korean dialects) Kurdish language has two main literary norms: Kurmanji (Northern Kurdish) and Sorani (Central Kurdish). The Zaza–Gorani languages, spoken by some Kurds, are occasionally considered to be Kurdish as well, despite not being mutually intelligible. Examples of varying degrees of pluricentrism: For most of its history, Hebrew did not have a center. The grammar and lexicon were dominated by the canonical texts, but when the pronunciation was standardised for the first time, its users were already scattered. Therefore, three main forms of pronunciation developed, particularly for the purpose of prayer: Ashkenazi, Sephardi, and Temani. When Hebrew was revived as a spoken language, there was a discussion about which pronunciation should be used. Ultimately, the Sephardi pronunciation was chosen even though most of the speakers at the time were of Ashkenazi background, because it was considered more authentic. The standard Israeli pronunciation of today is not identical to the Sephardi, but is somewhat of a merger with Ashkenazi influences and interpretation.
The Ashkenazi pronunciation is still used in Israel by Haredim in prayer and by Jewish communities outside of Israel. Examples of varying degrees of pluricentrism: Lao and Isan: the situation of Isan in Thailand stands in stark contrast to Laos, where the Lao language is actively promoted as a language of national unity. Laotian Lao people are very conscious of their distinct, non-Thai language and, although influenced by Thai-language media and culture, strive to maintain 'good Lao'. Although spelling has changed, the Lao speakers in Laos continue to use a modified form of the Tai Noi script, the modern Lao alphabet. Examples of varying degrees of pluricentrism: Norwegian consists of a multitude of spoken dialects displaying a great deal of variation in pronunciation and (to a somewhat lesser extent) vocabulary, with no officially recognized "standard spoken Norwegian" (but see Urban East Norwegian). All Norwegian dialects are mutually intelligible to a certain extent. There are two written standards: Bokmål, "book language", based on Danish (Danish and Norwegian Bokmål are mutually intelligible languages with significant differences primarily in pronunciation rather than vocabulary or grammar), and Nynorsk, "New Norwegian", based primarily on rural Western and rural inland Norwegian dialects. Examples of varying degrees of pluricentrism: Pashto has three official standard varieties: Central Pashto, which is the most prestigious standard dialect (also used in Kabul), Northern Pashto, and Southern Pashto. Examples of varying degrees of pluricentrism: Romance languages Padanian, according to Geoffrey Hull, consists of a great variety of dialects, some mutually unintelligible, and various written standards unrecognised by both Italy and Switzerland: exceptions are Monegasque, taught in primary schools in the Principality of Monaco, and Romansh, with five sub-varieties and one "compromise" version called Rumantsch Grischun.
Lombard, Piedmontese, Friulian and Istriot orthographies exist, with varying degrees of territorial specificity. This is why most linguists consider Padanian to be a language family (Rhaeto-Cisalpine) rather than a single idiom. Examples of varying degrees of pluricentrism: Romanian in Romania and Romanian in Moldova diverged during the Soviet era, but nowadays Romania and Moldova use the same standard of Romanian. Sardinian consists of a conglomerate of spoken dialects, displaying a significant degree of variation in phonetics and sometimes vocabulary. The Spanish subdivision of Sardinia into two administrative areas led to the emergence of two separate orthographies, Logudorese and Campidanese, as standardized varieties of the same language. Ukrainian and Rusyn (Prešov, Lemko, Pannonian) are either considered to be standardized varieties of the same language or separate languages. Dutch is considered pluricentric with recognised varieties in Suriname, the ABC Islands, Belgium and the Netherlands. The Albanian language has two main varieties, Gheg and Tosk. Gheg is spoken to the north and Tosk to the south of the Shkumbin river. Standard Albanian is a standardised form of spoken Albanian based on Tosk. Examples of varying degrees of pluricentrism: The Belarusian language features two orthographic standards: official Belarusian, sometimes referred to as narkomaŭka, and Taraškievica, also known as "classical orthography". The division stems from a 1933 reform, believed by some to be an attempt to artificially bring the Belarusian and Russian languages closer together. Originally, these standards differed only in written form, but because Taraškievica is widely used among the Belarusian diaspora, it has developed some distinct orthoepic features as well as differences in vocabulary. Examples of varying degrees of pluricentrism: Afrikaans varieties of South Africa and Namibia.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stack Attack** Stack Attack: Stack Attack was the game for the 2003 FIRST Robotics Competition. Two alliances of two robots each compete by moving large Sterilite bins into their zones and arranging them into stacks. Layout: At the beginning of the match, each team is given 4 bins, which they may arrange as they see fit. Each of these bins is marked with retroreflective tape that is highly visible to the infrared sensors included in the kit of parts. In the center of the field, on top of the ramp, is a large stack of 29 bins. Object: The object of the game is for each two-team alliance to rack up more points than the opposing alliance. Scoring is one point for every bin in an alliance's scoring zone, multiplied by the height of that alliance's highest stack. Each robot on top of the ramp at the end of the match adds 25 points to its alliance's score. Bins that are supported by a robot do not count towards the final score. Gameplay: Human Player The human player period is a 10-second window at the start of the match in which one designated person from each team may walk onto the field and pass one or more of their four bins to, or receive bins from, their allied team's human player. This allowed teams to create stacks higher than the initial 4-high stacks. The human players needed to return to a pressure-sensitive pad before their robot was activated. Gameplay: Autonomous The autonomous period was 15 seconds long in Stack Attack. During the autonomous period, teams could use infrared sensors to hunt the opposing team's stacks, use the same sensors to follow white tape, or use dead reckoning to navigate to the center stacks. Gameplay: Human-controlled The human-controlled phase began immediately after the autonomous period. During this phase, robots would attempt to amass the most bins on their side, while preventing the opposing teams from getting bins onto their scoring area and knocking down opposing stacks. 
Robots that were capable of stacking would use this time to begin building and protecting a stack of bins. Strategies: Autonomous The most successful strategy in autonomous mode was to have the robot run through the centre stack and push as many bins as possible onto that robot's scoring end. Since it was very difficult to move bins from one end of the field to the other, once most of the centre bins landed in one end, that team would often win. Teams could also try to seek out and knock over the stacks of bins that the human players had placed at the start of the match. Strategies: Human Controlled Game Often the main game was a chaotic period in which teams would try to clear out their own zone while attempting to keep as many bins on their scoring section as possible. Robots that were capable of stacking would usually protect their stack for the entire match, then release it just as the match ended. Events: The following regional events were held in 2003: Arizona Regional - Phoenix, AZ BAE Systems/Granite State Regional - Manchester, NH Buckeye Regional - Cleveland, OH Canadian Regional - Mississauga, ON Central Florida Regional - Orlando, FL Chesapeake Regional - Annapolis, MD Great Lakes Regional - Ypsilanti, MI Johnson & Johnson Mid-Atlantic Regional - Piscataway, NJ Lone Star Regional - Houston, TX Midwest Regional - Evanston, IL NASA/VCU Regional - Richmond, VA New York City Regional - New York City, NY Pacific Northwest Regional - Seattle, WA Peachtree Regional - Duluth, GA Philadelphia Regional - Philadelphia, PA Pittsburgh Regional - Pittsburgh, PA St. Louis Regional - St. Charles, MO Sacramento Regional - Sacramento, CA SBPLI Long Island Regional - Long Island, NY Silicon Valley Regional - San Jose, CA Southern California Regional - Los Angeles, CA UTC New England Regional - Hartford, CT Western Michigan Regional - Grand Rapids, MI. The championship was held at Reliant Park in Houston, TX.
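The scoring rule described above can be sketched as a quick calculation (a hypothetical helper, not official FIRST software):

```python
def alliance_score(bins_in_zone, tallest_stack, robots_on_ramp):
    """Score = bins in the scoring zone x height of the tallest stack,
    plus 25 points per robot on top of the ramp at the end of the match.

    Bins supported by a robot are assumed to have been excluded from
    bins_in_zone before calling this.
    """
    return bins_in_zone * tallest_stack + 25 * robots_on_ramp

# Example: 12 bins in the zone, tallest stack 5 high, one robot on the ramp.
print(alliance_score(12, 5, 1))  # 12 * 5 + 25 = 85
```

The multiplier makes a single tall stack dominate the score, which explains the protect-and-release stacking strategy described above.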
**Narrowband modem** Narrowband modem: In telecommunication, a narrowband modem is a modem whose modulated output signal has an essential frequency spectrum that is limited to that which can be wholly contained within, and faithfully transmitted through, a voice channel with a nominal 4 kHz bandwidth. Note: High frequency (HF) modems are limited to operation over a voice channel with a nominal 3 kHz bandwidth.
**Engagement marketing** Engagement marketing: Engagement marketing, sometimes called "experiential marketing", "event marketing", "on-ground marketing", "live marketing", "participation marketing", "loyalty marketing", or "special events", is a marketing strategy that directly engages consumers and invites and encourages them to participate in the evolution of a brand or a brand experience. Rather than looking at consumers as passive receivers of messages, engagement marketers believe that consumers should be actively involved in the production and co-creation of marketing programs, developing a relationship with the brand.Consumer engagement is when a brand and a consumer connect. According to Brad Nierenberg, experiential marketing is the live, one-on-one interactions that allow consumers to create connections with brands. Consumers will continue to seek and demand one-on-one, shareable interaction with a brand. Virtual extension: Experiential marketing is a growing trend which involves marketing a product or a service through custom memorable experiences that engage the customers and create emotional attachment to the product/service. Physical and interactive experiences are used to reinforce the offer of a product and make customers feel as if they are part of the experience. Experiences are positively related to customer's attitudes, mood, and behaviors. They also represent a means through which a company can gain competitive advantage by differentiating itself from competitors. To achieve success, an experience should be engaging, compelling, and able to touch the customer's senses and capture their loyalty.Many aspects differentiate experiential from traditional marketing. First, experiential marketing focuses on providing sensory, emotional, cognitive, and rational values to the consumers. Second, experiential marketing aims to create synergies between meaning, perception, consumption, and brand loyalty. 
Furthermore, experiential marketing requires a more diverse range of research methods in order to understand consumers.Smith has developed a six-step process to develop an effective experiential branding strategy. The first step includes carrying out a customer experience audit in order to analyze the current experience of the brand. The second step is to create a brand platform and develop a touchpoint with customers. The following step includes designing the brand experience; coordinating the brand's people, products and processes against the brand proposition. The next steps involve communicating the brand proposition internally and externally. The last step consists of monitoring performance in order to ensure that the brand is meeting its objectives.Nowadays, experiential marketing is getting more technologically advanced and personalized. The widespread reach of the Internet and increasing competition among online retailers have led to the rise of virtual experiential marketing (VEM). VEM uses the Internet and its various channels to create an enriched and engaging experience by using visual and audio tools. VEM relies on an electronic environment that engages customers and arouses their emotional responses to create an unparalleled experience and consequently capture their loyalty. The elements which characterize virtual experiential marketing are: sense, interaction, pleasure, flow and community relationship. Furthermore, affective involvement has been identified as a key factor which affects online purchase intention. Thus, the online experience must emphasize an emotional appeal to the consumer in order to build purchase intention.Management consultancy A.T. Kearney has developed a model to create high impact virtual customer experiences, emphasizing four basic steps: Develop a compelling customer value proposition. In developing this, it is important to understand how an online experience can satisfy customer needs. 
Virtual extension: Create the digital customer experience framework to address all areas of interaction between customers and the business. Use proven tools, the "7Cs", to support the framework. The key tools are: content, customization, customer care, communication, community, connectivity, and convenience. Integrate the online and offline customer experience. In fact, companies can enhance the virtual customer experience through consistent links with the offline world. Engagement: Engagement measures the extent to which a consumer has a meaningful brand experience when exposed to commercial advertising, sponsorship, television contact, or other experience. In March 2006 the Advertising Research Foundation (ARF) defined Engagement as "turning on a prospect to a brand idea enhanced by the surrounding context".According to a study by Jack Morton Worldwide, 11 out of 14 consumers reported preferring to learn about new products and services by experiencing them personally or hearing about them from an acquaintance. Meanwhile, a report by The Event Marketing Institute and Mosaic found that 74% of consumers say that engaging with branded event marketing experiences makes them more likely to buy the products being promoted.Engagement is complex because a variety of exposure and relationship factors affect engagement, making simplified rankings misleading. Typically, engagement with a medium often differs from engagement with advertising, according to an analysis conducted by the Magazine Publishers of America.Related to this notion is the term program engagement, which is the extent to which consumers recall specific content after exposure to a program and advertising. Starting in 2006 U.S. broadcast networks began guaranteeing specific levels of program engagement to large corporate advertisers. 
Multi-dimensional communication: Keith Ferrazzi wrote in 2009 that the Information Age was transitioning into what he termed the Relationship Age, "in which emotion, empathy, and cooperation are critical success traits" and where "technology and human interaction are intersecting and trust, conversation, and collaboration are top of mind and top of agenda".In 2006, researchers from market research company Gallup identified two-dimensional (two-way) communication, in which consumers participate, share, and interact with a brand, as a creator of the engagement crucial to business and personal success.Two-dimensional (2D) communication and engagement is where "both giver and receiver are listening to each other, interacting, learning and growing from the process".Three-dimensional engagement ("3DE") has "not only length and width, but depth, where both giver and receiver connect to a higher power and are changed in the experience. Not just a conversation, but connection to a purpose that transforms all in the process." As philosophy: Greg Ippolito, former creative director of engagement marketing agency Annodyne, wrote that the key point of differentiation between engagement marketing and other forms is that the former "is anchored by a philosophy, rather than a focus on specific marketing tools". That philosophy is that audiences should be engaged in the sales process when they want, and by which channels they prefer.Ippolito has argued that traditional top-down marketing results, largely, in the production and communication of white noise, whereas engagement marketing assumes a different approach: "Think of a salesperson who walks up to you in a store. You tell him thanks, you're okay, you're just looking. But he hovers and looms, finds ways to insert himself into your activity, and is a general annoyance. That's what typical marketing feels like: intrusive and disruptive. Engagement Marketing is the opposite. 
It's a salesperson who hangs back and engages you if/when you need help. Who can sense what you want to do, and help you arrive at that decision. Who will contact you directly with exclusive sales information, if—and only if—you request it. Engagement Marketing, done well, means connecting with audiences who want to hear from you, in relevant, meaningful, interesting ways. If you can pull that off, everything changes." After launching IMA in 2013, Ippolito shifted his focus to momentum marketing—described as "the next evolution of engagement marketing"—which shares the same customer-centric philosophy, but places a greater emphasis on leveraging data to reach target audiences online via their most well-traveled channels: "[M]odern consumers are hard to pin down; they're constantly in motion—traversing different spaces, utilizing different media, and as always, experiencing a range of different thoughts and feelings throughout any given day ... [The key is to] leverage the existing momentum of target consumers. By doing so, we can ... guide them where we want them to go—with minimum waste and maximum efficiency." Early examples of successful engagement marketing campaigns: PROMO magazine has credited Gary M. Reynolds, founder of GMR Marketing, with being the pioneer in the practice of engagement marketing. It has cited Reynolds' formation of the Miller Band Network in 1979 as the seminal engagement marketing moment.Another example of engagement marketing is seen in the marketing strategy of Jones Soda. At the company's website, regular customers are allowed to send in photos that will then be printed on bottles of the soda. These photos can be put on a small order of custom-made soda, or, if the photos are interesting enough, they can be put into production and used as labels for a whole production run. 
This strategy is effective at getting customers to co-create the product, and engaging customers with the brand. It could be argued that the Macy's Thanksgiving Day Parade is a type of experiential marketing, as the viewer is invited to experience floats and entertainment tied to specific brands (and Macy's itself). Another example of engagement marketing is seen in the marketing strategy of Jaihind Collection Pune for their paraplegic fashion show. In the 21st century, engagement marketing has taken a new turn with the advent of different technologies. The effect of smartphones, touchscreens and virtual reality has become prominent. Examples of such engagement marketing can be found online. Though technological advancement made such campaigns possible, innovative ideas remain as important as ever. Common offline engagement marketing tools: Street marketing, also known as street teams Youth marketing, also known as entertainment marketing Event management, also known as event marketing Mobile marketing tours: often, brands will utilize custom-branded RVs, buses, and motor coaches to draw attention to their offering, serving as mobile billboards as well as mobile centers to create brand experiences on-site in retail parking lots or at larger events. Common offline engagement marketing tools: Marketing through amenities: companies promote their brands through interactive marketing via amenities such as charging stations. IoT devices connected to social platforms can display the number of fans and personalised messages to offline customers. Common offline engagement marketing tools: Immersive storytelling uses immersion or immersive technology to create virtual brand worlds for consumers to engage with. 
Using technology such as virtual and augmented reality, CGI and 360° video content, face and gesture recognition, holographics and ultra-haptics, 3D scanning/mapping/printing, wearable and touchscreen technologies, geo-location technologies, drones, photobooths and magic eyes, brands are able to interact with their audiences in new and creative ways. Common online engagement marketing tools: Blogs: For engagement marketing purposes, companies can share content on their own blogs and participate as a commenter or content provider on relevant external blogs. Hosting a campaign that gives prizes to the readers of external blogs for their participation in some kind of contest is an example of an engagement marketing campaign aimed at external blogs. Common online engagement marketing tools: Social networking sites: Social networking sites (such as Facebook, Instagram, LinkedIn, and Twitter) are ideal for engagement marketing because they provide a way for people to interact with brands and create a two-way dialogue between customers and companies. Most companies maintain a presence on several of these sites. Some of these platforms have also created specific types of online presences for companies. For example, Facebook introduced Fan pages in 2007. Engagement outcomes such as sharing behaviours include motivations such as enjoyment, self-efficacy, learning, personal gain, altruism, empathy, social engagement, community interest, reciprocity, and reputation, as well as social response to fan page cues such as social interactive value, visual appearance and identity attractiveness of the branded object. Ideally, activations such as photo booths tie the event experience back to the user's social channels. Common online engagement marketing tools: Webcasts: Differing from internal webcast meetings with a small, specific invitation list, engagement marketing online events are aimed at a much larger and public audience. 
They are typically available live or on-demand, which allows viewers to view content on their own schedule. Similar to conferences, audience members can ask the speakers questions and participate in polls during live webcasts. Email campaigns: One of the earliest online engagement marketing tools, email marketing requires target audiences to opt-in to directly receive a marketer's emails. Companies can also encourage individuals to share their messages virally, via the forwarding of emails to colleagues, friends and family. Common online engagement marketing tools: Crowdsourcing: Crowdsourcing sites offer engagement marketing opportunities through their open media contests. Such crowdsourcing sites generate brand ambassadors as an organic byproduct of the crowdsourcing process itself by encouraging users to share their submissions on various social networking sites. By first engaging fans and consumers in the act of shaping the brand identity itself, there is increased brand awareness and development of brand relationships well before launching any official media campaign.
**Smoking jacket** Smoking jacket: A smoking jacket is an informal men's style of lounge jacket originally intended for tobacco smoking. Designed in the 1850s, a traditional smoking jacket has a shawl collar, turn-up cuffs, and is closed with either toggle or button fastenings, or with a tie belt. It is usually made from velvet and/or silk. Smoking jacket: When the garment emerged in the 1850s, The Gentleman's Magazine of London, England, defined the smoking jacket as a "kind of short robe de chambre [i.e. a banyan], of velvet, cashmere, plush, merino or printed flannel, lined with bright colours, ornamented with brandebourgs [i.e. frogs], olives or large buttons."The smoking jacket later evolved into the dinner jacket, essentially a dress coat without tails, following an example set by Edward, Prince of Wales (later King Edward VII) in 1865. The smoking jacket has remained in its original form and is commonly worn when smoking pipes and cigars. Etymology: The smoking jacket is named for its association with tobacco smoking. As a false friend, the name carried on to its derivation the dinner jacket in several non-English languages. In Bulgarian, Catalan, Czech, Danish, Dutch, Estonian, French, German, Greek, Hebrew, Hungarian, Icelandic, Italian, Lithuanian, Polish, Portuguese, Romanian, Russian, Spanish, Swedish, Turkish, and other European languages, the term smoking indicates a dinner jacket, or a tuxedo jacket. History: In the 17th century, goods began flowing into Europe from Asia and the Americas, bringing in spices, tobacco, coffee, and silks. It became fashionable to be depicted in one's portrait wearing a silk robe de chambre, or dressing gown. 
One of the earliest mentions of this garment comes from Samuel Pepys, who wished to be depicted in his portrait in a silk gown but could not afford one, so he rented one: Thence home and eat one mouthful, and so to Hale's and there sat until almost quite dark upon working my gowne, which I hired to be drawn (in) it—an Indian gown, and I do see all the reason to expect a most excellent picture of it. —Diary, 30 March 1666 In the 18th century, gentlemen often referred to a specific style of "night gown" called the banyan, a knee-length robe, a design more comfortable than the justaucorps, on which shawl collars became prevalent. The short smoking jacket soon evolved from these silk garments. To protect their clothes, many men would wear their robes-de-chambre while smoking in private. These robes acted as a barrier against ash and smoke, while also allowing them to showcase another garment from their collection. When the Crimean War of the 1850s popularised Turkish tobacco in Britain, smoking gained in popularity. After dinner, a gentleman might wear a smoking jacket and retreat to a smoking room. The jacket was intended to absorb the smoke from his cigar or pipe and protect his clothing from falling ash. History: The smoking jacket remained popular into the 20th century. An editorial in The Washington Post in 1902 wrote that the smoking jacket was "synonymous with comfort", while a Pennsylvania newspaper opined in 1908 that it would be "putting it mildly to say that a new House Coat or Smoking Jacket will give any man reason for elation". Due to its comfort, it was also worn by men as a leisure garment outside of smoking. Famous wearers included Fred Astaire (who was buried in a smoking jacket), Cary Grant, Dean Martin, Jon Pertwee and Frank Sinatra.While smoking jackets declined in popularity from the 1950s, a minority of wearers still persisted; Playboy mogul Hugh Hefner was a notable example. 
In its January/February 1999 issue, Cigar Aficionado stated that it was time the smoking jacket be brought back, perhaps as an "alternative type of formalwear".
**Truncated icosidodecahedron** Truncated icosidodecahedron: In geometry, a truncated icosidodecahedron, rhombitruncated icosidodecahedron, great rhombicosidodecahedron, omnitruncated dodecahedron or omnitruncated icosahedron is an Archimedean solid, one of thirteen convex, isogonal, non-prismatic solids constructed from two or more types of regular polygon faces. Truncated icosidodecahedron: It has 62 faces: 30 squares, 20 regular hexagons, and 12 regular decagons. It has the most edges and vertices of all Platonic and Archimedean solids, though the snub dodecahedron has more faces. Of all vertex-transitive polyhedra, it occupies the largest percentage (89.80%) of the volume of a sphere in which it is inscribed, very narrowly beating the snub dodecahedron (89.63%) and small rhombicosidodecahedron (89.23%), and less narrowly beating the truncated icosahedron (86.74%); it also has by far the greatest volume (206.8 cubic units) when its edge length equals 1. Of all vertex-transitive polyhedra that are not prisms or antiprisms, it has the largest sum of angles (90 + 120 + 144 = 354 degrees) at each vertex; only a prism or antiprism with more than 60 sides would have a larger sum. Since each of its faces has point symmetry (equivalently, 180° rotational symmetry), the truncated icosidodecahedron is a 15-zonohedron. Names: The name great rhombicosidodecahedron refers to the relationship with the (small) rhombicosidodecahedron (compare section Dissection). There is a nonconvex uniform polyhedron with a similar name, the nonconvex great rhombicosidodecahedron. Area and volume: The surface area A and the volume V of the truncated icosidodecahedron of edge length a are: A = 30(1 + √3 + √(5 + 2√5)) a² ≈ 174.2920303 a² and V = (95 + 50√5) a³ ≈ 206.803399 a³. If a set of all 13 Archimedean solids were constructed with all edge lengths equal, the truncated icosidodecahedron would be the largest. 
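The area and volume can be checked numerically; a sketch (assuming the standard closed forms A = 30(1 + √3 + √(5 + 2√5)) a² and V = (95 + 50√5) a³):

```python
import math

def truncated_icosidodecahedron_area(a):
    # A = 30 * (1 + sqrt(3) + sqrt(5 + 2*sqrt(5))) * a^2
    return 30 * (1 + math.sqrt(3) + math.sqrt(5 + 2 * math.sqrt(5))) * a * a

def truncated_icosidodecahedron_volume(a):
    # V = (95 + 50*sqrt(5)) * a^3
    return (95 + 50 * math.sqrt(5)) * a ** 3

print(round(truncated_icosidodecahedron_area(1), 4))    # 174.292
print(round(truncated_icosidodecahedron_volume(1), 4))  # 206.8034
```

With a = 1 this reproduces the ~206.8 cubic-unit volume cited above for unit edge length.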
Cartesian coordinates: Cartesian coordinates for the vertices of a truncated icosidodecahedron with edge length 2φ − 2, centered at the origin, are all the even permutations of: (±1/φ, ±1/φ, ±(3 + φ)), (±2/φ, ±φ, ±(1 + 2φ)), (±1/φ, ±φ2, ±(−1 + 3φ)), (±(2φ − 1), ±2, ±(2 + φ)) and (±φ, ±3, ±2φ),where φ = (1 + √5)/2 is the golden ratio. Dissection: The truncated icosidodecahedron is the convex hull of a rhombicosidodecahedron with cuboids above its 30 squares, whose height to base ratio is φ. The rest of its space can be dissected into nonuniform cupolas, namely 12 between inner pentagons and outer decagons and 20 between inner triangles and outer hexagons. An alternative dissection also has a rhombicosidodecahedral core. It has 12 pentagonal rotundae between inner pentagons and outer decagons. The remaining part is a toroidal polyhedron. Orthogonal projections: The truncated icosidodecahedron has seven special orthogonal projections, centered on a vertex, on three types of edges, and three types of faces: square, hexagonal and decagonal. The last two correspond to the A2 and H2 Coxeter planes. Spherical tilings and Schlegel diagrams: The truncated icosidodecahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane. Schlegel diagrams are similar, with a perspective projection and straight edges. Geometric variations: Within icosahedral symmetry there are unlimited geometric variations of the truncated icosidodecahedron with isogonal faces. The truncated dodecahedron, rhombicosidodecahedron, and truncated icosahedron can be obtained as degenerate limiting cases. 
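The coordinate recipe above can be verified numerically. The sketch below (illustrative, not from the source) generates all sign choices and even permutations (cyclic rotations) of the five base points, then confirms there are 120 vertices whose minimum pairwise distance is the stated edge length 2φ − 2, with 180 vertex pairs at that distance:

```python
import itertools
import math

phi = (1 + math.sqrt(5)) / 2
bases = [
    (1 / phi, 1 / phi, 3 + phi),
    (2 / phi, phi, 1 + 2 * phi),
    (1 / phi, phi ** 2, -1 + 3 * phi),
    (2 * phi - 1, 2, 2 + phi),
    (phi, 3, 2 * phi),
]

vertices = set()
for base in bases:
    for signs in itertools.product((1, -1), repeat=3):
        x, y, z = (s * c for s, c in zip(signs, base))
        # The three even permutations of (x, y, z) are its cyclic rotations.
        for v in ((x, y, z), (y, z, x), (z, x, y)):
            vertices.add(tuple(round(c, 9) for c in v))

verts = list(vertices)
edge = 2 * phi - 2
dists = [math.dist(p, q) for p, q in itertools.combinations(verts, 2)]
print(len(verts))                                 # 120 vertices
print(round(min(dists), 6))                       # 1.236068, i.e. 2φ − 2
print(sum(abs(d - edge) < 1e-6 for d in dists))   # 180 edges
```

The 180 pairs at edge distance match the edge count stated in the graph section below; all other vertex pairs are farther apart (the next distance is a square's diagonal, √2 times the edge).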
Truncated icosidodecahedral graph: In the mathematical field of graph theory, a truncated icosidodecahedral graph (or great rhombicosidodecahedral graph) is the graph of vertices and edges of the truncated icosidodecahedron, one of the Archimedean solids. It has 120 vertices and 180 edges, and is a zero-symmetric and cubic Archimedean graph. Related polyhedra and tilings: This polyhedron can be considered a member of a sequence of uniform patterns with vertex figure (4.6.2p) and a corresponding Coxeter–Dynkin diagram. For p < 6, the members of the sequence are omnitruncated polyhedra (zonohedra), shown below as spherical tilings. For p > 6, they are tilings of the hyperbolic plane, starting with the truncated triheptagonal tiling.
**Chinese Characters Dictation Competition** Chinese Characters Dictation Competition: Chinese Characters Dictation Competition (Chinese: 中国汉字听写大会; pinyin: Zhōngguó hànzì tīngxiě dàhuì) is a weekly television program in which contestants write Chinese characters after hearing the words. The show is now broadcast on CCTV-1.The show was inspired by spelling bees in the United States.
**Bézier surface** Bézier surface: Bézier surfaces are a species of mathematical spline used in computer graphics, computer-aided design, and finite element modeling. As with Bézier curves, a Bézier surface is defined by a set of control points. Similar to interpolation in many respects, a key difference is that the surface does not, in general, pass through the central control points; rather, it is "stretched" toward them as though each were an attractive force. They are visually intuitive and, for many applications, mathematically convenient. History: Bézier surfaces were first described in 1962 by the French engineer Pierre Bézier, who used them to design automobile bodies. Bézier surfaces can be of any degree, but bicubic Bézier surfaces generally provide enough degrees of freedom for most applications. Equation: A given Bézier surface of degree (n, m) is defined by a set of (n + 1)(m + 1) control points ki,j where i = 0, ..., n and j = 0, ..., m. It maps the unit square into a smooth-continuous surface embedded within the space containing the ki,j s – for example, if the ki,j s are all points in a four-dimensional space, then the surface will be within a four-dimensional space. Equation: A two-dimensional Bézier surface can be defined as a parametric surface where the position of a point p as a function of the parametric coordinates u, v is given by: p(u, v) = Σ_{i=0..n} Σ_{j=0..m} B_i^n(u) B_j^m(v) k_{i,j}, evaluated over the unit square, where B_i^n(u) = C(n, i) u^i (1 − u)^(n−i) is a Bernstein basis polynomial, and C(n, i) = n! / (i! (n − i)!) is a binomial coefficient. Some properties of Bézier surfaces: A Bézier surface will transform in the same way as its control points under all linear transformations and translations. All u = constant and v = constant lines in the (u, v) space, and – in particular – all four edges of the deformed (u, v) unit square are Bézier curves. 
A Bézier surface will lie completely within the convex hull of its control points, and therefore also completely within the bounding box of its control points in any given Cartesian coordinate system. The points in the patch corresponding to the corners of the deformed unit square coincide with four of the control points. Equation: However, a Bézier surface does not generally pass through its other control points.Generally, the most common use of Bézier surfaces is as nets of bicubic patches (where m = n = 3). The geometry of a single bicubic patch is thus completely defined by a set of 16 control points. These are typically linked up to form a B-spline surface in a similar way as Bézier curves are linked up to form a B-spline curve. Equation: Simpler Bézier surfaces are formed from biquadratic patches (m = n = 2), or Bézier triangles. Bézier surfaces in computer graphics: Bézier patch meshes are superior to triangle meshes as a representation of smooth surfaces. They require fewer points (and thus less memory) to represent curved surfaces, are easier to manipulate, and have much better continuity properties. In addition, other common parametric surfaces such as spheres and cylinders can be well approximated by relatively small numbers of cubic Bézier patches. However, Bézier patch meshes are difficult to render directly. One problem with Bézier patches is that calculating their intersections with lines is difficult, making them awkward for pure ray tracing or other direct geometric techniques which do not use subdivision or successive approximation techniques. Bézier surfaces in computer graphics: They are also difficult to combine directly with perspective projection algorithms. Bézier surfaces in computer graphics: For this reason, Bézier patch meshes are in general eventually decomposed into meshes of flat triangles by 3D rendering pipelines. 
In high-quality rendering, the subdivision is adjusted to be so fine that the individual triangle boundaries cannot be seen. To avoid a "blobby" look, fine detail is usually applied to Bézier surfaces at this stage using texture maps, bump maps and other pixel shader techniques. Bézier surfaces in computer graphics: A Bézier patch of degree (m, n) may be constructed out of two Bézier triangles of degree m + n, or out of a single Bézier triangle of degree m + n, with the input domain as a square instead of a triangle. A Bézier triangle of degree m may also be constructed out of a Bézier surface of degree (m, m), with the control points so that one edge is squashed to a point, or with the input domain as a triangle instead of a square.
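The defining double sum over Bernstein polynomials can be evaluated directly; a minimal sketch (function names are illustrative, not from any particular library):

```python
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_i^n(t) = C(n, i) t^i (1 - t)^(n - i)."""
    return comb(n, i) * t ** i * (1 - t) ** (n - i)

def bezier_surface_point(k, u, v):
    """Evaluate p(u, v) for a grid of control points k[i][j] (each a tuple)."""
    n, m = len(k) - 1, len(k[0]) - 1
    dim = len(k[0][0])
    p = [0.0] * dim
    for i in range(n + 1):
        for j in range(m + 1):
            w = bernstein(n, i, u) * bernstein(m, j, v)
            for d in range(dim):
                p[d] += w * k[i][j][d]
    return tuple(p)

# A degree (1, 1) patch is bilinear interpolation, so the centre of this
# flat unit square is the average of its four corner control points.
k = [[(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
     [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]]
print(bezier_surface_point(k, 0.5, 0.5))  # (0.5, 0.5, 0.0)
```

The corner property stated above is easy to confirm here: at (u, v) = (0, 0) the result is exactly the control point k[0][0].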
**Rescue buoy** Rescue buoy: A rescue buoy or rescue tube or torpedo buoy is a piece of lifesaving equipment used in water rescue. This flotation device can help support the victim's and rescuer's weight to make a rescue easier. It is an essential part of the equipment that must be carried by lifeguards. It further can act as a mark of identification, identifying an individual as a lifeguard. Description: The rescue tube is usually made of vinyl, and is buoyant enough to support the full weight of a rescuer and several victims. The tube has a long leash that the lifeguard wears around the body to tow the tube along while swimming a long distance. The rescue tube is usually red, but can come in a variety of colors. Rescue tubes often have the words "Guard" or "Lifeguard" printed on them. The tube may also have clips, so that it may be wrapped around a person. The rescue buoy is a hollow plastic rescue flotation device. It is also referred to as a torpedo buoy, because of its shape, and is often called a "torp" for short by lifeguards. Because of its rigidity, it is slightly more hazardous in surf conditions. However, the rescue buoy generally has more buoyancy than a rescue tube, allowing the rescuer to assist multiple victims. There are several colors and sizes available commercially. The rails, or sides, of the buoy have handles allowing victims to grab on. Like the tube, the buoy is connected by a rope to a strap the rescuer wears. This allows the rescuer to swim while towing the buoy and victim. The buoy may also be connected to a landline device, which allows individuals onshore to pull the rescuer and victims back to shore. Early versions were constructed of aluminum, wood, cork, and fiberglass, with rope rails. History: Landline A lifeguard would swim out to the victim while attached to the line, also known as a Reel and Line. The lifesaver, whilst still attached, would clutch the victim and be rapidly pulled back to shore by others. 
This was inefficient, as the line produced drag for the lifeguard and was at risk of becoming tangled. Another disadvantage was the need for two or more persons for operation; it was also inadequate in cases with multiple rescues occurring simultaneously at different locations. History: Rescue can The first "rescue can", created by Captain Henry Sheffield in 1897, was made of sheet metal and pointed at both ends. It caused little drag but occasionally injured the lifeguard or the victim. Even after the design switched from sheet metal to aluminum with rounded ends, injuries still occurred. Walters Torpedo Buoy The Walters Torpedo Buoy was invented in 1919 by Henry Walters of the American Red Cross Volunteer Life Savings Corps. Peterson Tube In 1935, Pete Peterson produced an inflatable rescue tube with snap hooks molded onto one end and a 14-inch strap on the other. The design was further improved upon in the late 1960s with the production of closed-cell foam rubber. History: World War II rescue buoy See: Rescue buoy (Luftwaffe) During World War II, at the instigation of German Generaloberst Ernst Udet, large buoys were deployed in the English Channel for downed Luftwaffe flyers. Each included a 43-square-foot (4.0 m2) enclosed cabin and a radio transmitter. One can be seen in the British films One of Our Aircraft is Missing (1942) and We Dive at Dawn (1943). History: Burnside Buoy Lt. Burnside coordinated with Professor Ron Rezek on the development of a plastic rescue buoy. A wood prototype was approved by the Board of Directors of the National Surf Life Saving Association in 1968. Robotic rescue buoy EMILY (Emergency Integrated Lifesaving Lanyard) is a robotic rescue buoy made by Hydronalix. Usage: While approaching the victim, the lifeguard should allow the rescue buoy to trail behind. Once the lifeguard makes contact with the victim, they should hand over the rescue buoy to the victim and bring them ashore.
The buoyancy of the rescue buoy, along with reassuring talk from the lifeguard, should comfort and calm the victim. Water entry The three major components of a rescue flotation device (RFD) are the lanyard, float, and harness. The lanyard and harness can trip up the lifeguard during the entry run, so care must be taken when handling the RFD. The lifeguard must carry the rescue buoy until beach visitors are no longer at risk of being hit by it. Removal from water If surf conditions are rough, the lifeguard may want to carry the rescue buoy completely out of the water. An unsecured rescue buoy could wash up with force against the lifeguard or victim. Fouling Fouling occurs when the lanyard wraps around an object, jeopardizing the lifeguard. The attachment between the lanyard and lifeguard must allow for quick release in case of emergencies. Lanyard length The length of the lanyard is crucial, as it must permit a quick and efficient rescue. It must be long enough for the lifeguard to kick without the buoy in the way, yet short enough to avoid fouling.
**Realmspace** Realmspace: Realmspace (product code SJR2) is an accessory for the Spelljammer campaign setting for the Dungeons & Dragons fantasy role-playing game. Contents: This 96-page booklet describes the area of space near the planet Toril of the Forgotten Realms setting. The book describes the sun, the planets Anadia, Coliar, Toril, Karpri, Chandos, Glyth, Garden, and H'Catha, as well as Elminster's Hideout. The book also details new magical items, new monsters, and new spelljamming ships. Publication history: The book was written by Dale "Slade" Henson, and was published in 1991. Cover art is by Thomas Baxa, with interior illustrations by Newton Ewell.
**Mechanochromic luminescence** Mechanochromic luminescence: Mechanochromic luminescence (ML) refers to intensity and/or color changes of (solid-state) luminescent materials induced by mechanical forces, such as rubbing, crushing, pressing, shearing, or smearing. Unlike "triboluminescence", which requires no excitation source other than the force itself, ML is usually made visible by external photoexcitation, such as from a UV lamp. The most common cause of ML is a change in the intermolecular interactions of dyes and pigments, which gives rise to various strong (exciton splitting) and/or weak (Förster) excited-state interactions. For example, a certain boron complex of the sunscreen compound avobenzone exhibits reversible ML. A recent detailed study suggests that ML from the boron complex consists of two critical coupled steps: 1) generation of low-energy exciton traps via mechanical perturbation; and 2) exciton migration from regions where photoexcitation results in a higher excited state. Since solid-state energy transfer can be very efficient, only a small fraction of low-energy exciton traps is required when mechanical force is applied. As a result, for crystalline ML materials, XRD measurements may not be able to detect changes before and after mechanical stimuli even though the photoluminescence can be quite different.
**Deep social mind** Deep social mind: Deep social mind is a concept in evolutionary psychology; it refers to the distinctively human capacity to 'read' (that is, to infer) the mental states of others while reciprocally enabling those others to read one's own mental states at the same time. The term 'deep social mind' was coined in 1999 by Andrew Whiten, professor of Evolutionary and Developmental Psychology at St. Andrews University, Scotland. Together with closely related terms such as 'reflexivity' and 'intersubjectivity', it is now well-established among scholars investigating the evolutionary emergence of human sociality, cognition and communication. Mind-reading in apes and humans: It is widely agreed that the brain is social in both human and nonhuman primates. But, according to Andrew Whiten, human sociality goes much further than ape sociality. Ape social intelligence is overwhelmingly 'Machiavellian' in the sense of manipulating others in social settings. One consequence is that while an ape may be motivated to 'read' (that is, to infer) the mental states of others around it, it has little motive to reciprocate. Instead of making its own mental states transparent to potential rivals, it seeks to block others from 'reading' its own mind. For example, one way to infer what another primate might be thinking is to detect which way its head is pointed, so as to reconstruct what it might be looking at. In the case of gorillas and chimpanzees, adult apes have evolved eyes which give away very little information concerning direction of gaze. Their eyes are dark-on-dark: the iris is dark brown or even black, and the same applies to the sclera and surrounding skin. Looking at the eyes, therefore, it is not easy to detect direction of gaze. In the human case, the eyes are very different, the dark iris standing out against a white surrounding sclera.
This feature, combined with the relatively large size of the human eye and its horizontally elongated shape, assists neighbouring conspecifics to detect direction of gaze and, on that basis, engage in mind-reading. According to the 'deep social mind' theory, this means that humans have become cognitively adapted to reflexivity and intersubjectivity: as a species, we are well-adapted to read the minds of trusted others while at the same time assisting those others in reading our own minds. One consequence of this is self-awareness or 'egocentric perspective reversal': I read your mind as you are reading mine. Therefore, between us, we can gain an awareness of our own minds as if from the outside: my mental states as these are reflected in yours and yours as they are reflected in mine. In that sense, if this argument is accepted, our minds mutually interpenetrate. 'Mind' in the human sense is not locked inside this or that skull but instead is relational, stretching between us. According to evolutionary psychologist Michael Tomasello, a human child normally achieves egocentric perspective reversal—viewing its own mental states as if from the standpoint of others—at around one year of age.
**Elastic map** Elastic map: Elastic maps provide a tool for nonlinear dimensionality reduction. By construction, they are a system of elastic springs embedded in the data space. This system approximates a low-dimensional manifold. The elastic coefficients of this system allow a switch from completely unstructured k-means clustering (zero elasticity) to estimators located close to linear PCA manifolds (for high bending and low stretching moduli). With intermediate values of the elasticity coefficients, the system effectively approximates non-linear principal manifolds. This approach is based on a mechanical analogy between principal manifolds, which pass through "the middle" of the data distribution, and elastic membranes and plates. The method was developed by A.N. Gorban, A.Y. Zinovyev and A.A. Pitenko in 1996–1998. Energy of elastic map: Let S be a data set in a finite-dimensional Euclidean space. The elastic map is represented by a set of nodes w_j in the same space. Each data point s ∈ S has a host node, namely the closest node w_j (if there are several closest nodes, one takes the node with the smallest number). The data set S is thereby divided into classes K_j = {s : w_j is a host of s}. The approximation energy D is the distortion D = (1/2) ∑_{j=1}^{k} ∑_{s ∈ K_j} ‖s − w_j‖², which is the energy of springs with unit elasticity connecting each data point with its host node. It is possible to apply weighting factors to the terms of this sum, for example to reflect the standard deviation of the probability density function of any subset of data points {s_i}. On the set of nodes an additional structure is defined. Some pairs of nodes, (w_i, w_j), are connected by elastic edges; call this set of pairs E. Some triplets of nodes, (w_i, w_j, w_k), form bending ribs; call this set of triplets G. The stretching energy is U_E = (λ/2) ∑_{(w_i,w_j) ∈ E} ‖w_i − w_j‖², and the bending energy is U_G = (μ/2) ∑_{(w_i,w_j,w_k) ∈ G} ‖w_i − 2w_j + w_k‖², where λ and μ are the stretching and bending moduli respectively.
The stretching energy is sometimes referred to as the membrane term, while the bending energy is referred to as the thin-plate term. For example, on a 2D rectangular grid the elastic edges are just the vertical and horizontal edges (pairs of closest vertices) and the bending ribs are the vertical or horizontal triplets of consecutive (closest) vertices. Energy of elastic map: The total energy of the elastic map is thus U = D + U_E + U_G. The position of the nodes {w_j} is determined by the mechanical equilibrium of the elastic map, i.e. its location is such that it minimizes the total energy U. Expectation-maximization algorithm: For a given splitting of the dataset S into classes K_j, minimization of the quadratic functional U is a linear problem with a sparse matrix of coefficients. Therefore, similarly to principal component analysis or k-means, a splitting method is used: (1) for given {w_j}, find {K_j}; (2) for given {K_j}, minimize U and find {w_j}; (3) if there is no change, terminate. This expectation-maximization algorithm guarantees a local minimum of U. Various additional methods have been proposed to improve the approximation. For example, the softening strategy is used. This strategy starts with rigid grids (small length, small bending, and large elasticity moduli λ and μ) and finishes with soft grids (small λ and μ). The training goes in several epochs, each epoch with its own grid rigidity. Another adaptive strategy is the growing net: one starts with a small number of nodes and gradually adds new nodes. Each epoch goes with its own number of nodes. Applications: The most important applications of the method and free software are in bioinformatics, for exploratory data analysis and visualisation of multidimensional data; for data visualisation in economics, social and political sciences; as an auxiliary tool for data mapping in geographic information systems; and for visualisation of data of various natures.
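The three-step splitting iteration can be sketched in a few dozen lines of NumPy. The sketch below fits a 1D chain of nodes to 2D data; the function name, the chain topology, and all parameter values are illustrative assumptions, not taken from any published elastic-map package, and the quadratic minimization in step 2 solves the dense normal equations directly rather than exploiting sparsity.

```python
import numpy as np

def fit_elastic_map(data, n_nodes=12, lam=1.0, mu=1.0, n_iter=100):
    """Fit a 1D chain elastic map to `data` (n_samples x dim). Illustrative sketch."""
    n, dim = data.shape
    # Chain topology: edges between consecutive nodes, bending ribs over triples.
    edges = [(j, j + 1) for j in range(n_nodes - 1)]
    ribs = [(j, j + 1, j + 2) for j in range(n_nodes - 2)]

    # Constant elastic part of the quadratic form (gradients of U_E and U_G).
    A_elastic = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        A_elastic[i, i] += lam; A_elastic[j, j] += lam
        A_elastic[i, j] -= lam; A_elastic[j, i] -= lam
    for a, b, c in ribs:
        v = np.zeros(n_nodes)
        v[a], v[b], v[c] = 1.0, -2.0, 1.0  # second-difference stencil of a rib
        A_elastic += mu * np.outer(v, v)

    # Initialize nodes on the diagonal of the data's bounding box.
    W = np.linspace(data.min(axis=0), data.max(axis=0), n_nodes)

    hosts = None
    for _ in range(n_iter):
        # Step 1: assign each data point to its closest node (its "host").
        d2 = ((data[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
        new_hosts = d2.argmin(axis=1)
        if hosts is not None and np.array_equal(hosts, new_hosts):
            break  # step 3: no change in the partition, local minimum reached
        hosts = new_hosts
        # Step 2: minimize U over node positions -- a linear system
        # (diag of host counts from D, plus the constant elastic part).
        counts = np.bincount(hosts, minlength=n_nodes).astype(float)
        B = np.zeros((n_nodes, dim))
        np.add.at(B, hosts, data)  # per-node sums of hosted data points
        W = np.linalg.solve(A_elastic + np.diag(counts), B)
    return W, hosts
```

Because U is quadratic in the node positions for a fixed partition, step 2 reduces to one linear solve per iteration; the diagonal term counts the data points hosted by each node, while the constant elastic matrix encodes the edge and rib penalties.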
The method is applied in quantitative biology for reconstructing the curved surface of a tree leaf from a stack of light microscopy images. This reconstruction is used for quantifying the geodesic distances between trichomes and their patterning, which is a marker of the capability of a plant to resist pathogens. Applications: Recently, the method has been adapted as a support tool in the decision process underlying the selection, optimization, and management of financial portfolios. The method of elastic maps has been systematically tested and compared with several machine learning methods on the applied problem of identifying the flow regime of a gas-liquid flow in a pipe. There are various regimes: single-phase water or air flow, bubbly flow, bubbly-slug flow, slug flow, slug-churn flow, churn flow, churn-annular flow, and annular flow. The simplest and most common method used to identify the flow regime is visual observation. This approach is, however, subjective and unsuitable for relatively high gas and liquid flow rates. Therefore, machine learning methods have been proposed by many authors. The methods are applied to differential pressure data collected during a calibration process. The method of elastic maps provided a 2D map in which the area of each regime is represented. The comparison with some other machine learning methods is presented in Table 1 for various pipe diameters and pressures. Applications: Here, ANN stands for backpropagation artificial neural networks, SVM for the support vector machine, and SOM for self-organizing maps. The hybrid technology was developed for engineering applications. In this technology, elastic maps are used in combination with principal component analysis (PCA), independent component analysis (ICA) and backpropagation ANN. The textbook provides a systematic comparison of elastic maps and self-organizing maps (SOMs) in applications to economic and financial decision-making.
**Parasyte: Part 1** Parasyte: Part 1: Parasyte: Part 1 (Japanese: 寄生獣, Hepburn: Kiseijū) is a 2014 Japanese science fiction action film directed by Takashi Yamazaki, starring Shota Sometani. It is the first of the two Parasyte films, and was followed by Parasyte: Part 2. The films are based on the Parasyte manga series. Plot: Mysterious aliens called "Parasites" suddenly begin their invasion when some of them infect humans by entering their brains. One of them attempts to enter the brain of high school student Shinichi Izumi, but resorts to infecting his right hand after failing to bypass his headphones. Because of this point of entry, Shinichi retains his human consciousness, unlike the other victims. After his initial shock, Shinichi befriends the parasite and names him "Migi" (Japanese for "right"). Plot: The parasites terrorize humanity by secretly killing humans as sources of food. Shinichi himself has to fend off parasites who are disgusted by the fact that his body exhibits two consciousnesses. One of the parasites also possesses Shinichi's teacher, Ryoko Tamiya; however, Tamiya is far more reasonable and is interested in studying the humans' way of life, which she does by becoming impregnated by fellow parasite Mr. A. Tamiya explains that despite having parasite parents, the baby she carries is a normal human. Plot: When Mr. A's attack on Shinichi fails and results in his vessel's destruction, he transfers his consciousness to Shinichi's mother, Nobuko. Nobuko returns home and mortally injures Shinichi, although Migi manages to save him by using his essence to renew his heart, essentially infecting Shinichi's entire body with Migi's particles. From then on, Shinichi's personality starts to merge with that of Migi, making him apathetic to emotions; this results in Shinichi's estrangement from his girlfriend, Satomi Murano.
Plot: Meanwhile, an underling of Tamiya, Takeshi Hirokawa, runs for mayorship in order to set up the town for the parasites' interests. Another parasite, Hideo Shimada, transfers to Shinichi's school and initially acts friendly, but when a student discovers his true identity, he massacres the students. Shinichi is able to kill Shimada, who had been left to his fate by Tamiya after a disfigurement caused by Satomi made him unable to control himself. Tamiya gives Shinichi the location of the Mr. A-possessed Nobuko before leaving the scene. At their meeting, Nobuko is able to overcome her parasite's consciousness long enough for Shinichi to safely kill her. Plot: The epilogue details Hirokawa's successful run for mayorship, the appearance of the mysterious parasite Goto, as well as Shinichi's visit to Satomi at the hospital, where an unknown individual records him talking with Migi. Cast:
Shota Sometani as Shinichi Izumi
Eri Fukatsu as Ryoko Tamiya
Sadao Abe as Migi
Ai Hashimoto as Satomi Murano
Masahiro Higashide as Hideo Shimada
Mansaku Ikeuchi as Mr. A
Shuji Okui as Chef of Chinese Restaurant
Takashi Yamanaka as Tsuji (辻)
Hideto Iwai as Kusano
Nao Ōmori as Shiro Kuramori
Kimiko Yo as Nobuko Izumi
Kōsuke Toyohara as Yamagishi
Kazuki Kitamura as Takeshi Hirokawa
Jun Kunimura as Hirama
Tadanobu Asano as Goto
Production: Development In 2005, New Line Cinema acquired the film rights to Parasyte, and a film adaptation was reported to be in the works, with Jim Henson Studios and Don Murphy set to be in charge of production. New Line Cinema's option expired in 2013, prompting a bidding war in Japan. Film studio and distributor Toho won the rights. Production: Casting Shota Sometani was cast as the protagonist Shinichi Izumi, along with Eri Fukatsu as high school teacher parasyte Ryoko Tamiya, and Ai Hashimoto as Shinichi's girlfriend Satomi Murano.
Release: Parasyte: Part 1 screened at the 27th Tokyo International Film Festival as the closing film on October 30, 2014. The film was released on November 29, 2014 in Japan. Funimation licensed both Part 1 and Part 2 for Blu-ray, DVD, and Digital HD release on May 8, 2018, which included English dubs of both films. Reception: Box office The film topped the box office on its opening weekend in Japan, earning $2.9 million from 256,000 admissions on 418 screens. It grossed around ¥800 million at the Japanese box office after two weeks. The film grossed 48.3 million RMB at the Chinese box office. Reception: Critical reception Mark Schilling of The Japan Times gave the film 3 and a half stars out of 5, saying, "I couldn't call myself a fan of the manga, but the film adaptation of Parasyte hits the hard-to-find sweet spot between black comedy and serious sci-fi/horror". Peter Debruge of Variety in his favorable review felt that "[the film] marks an entertaining new iteration in the body-horror category, as if someone had grafted a very dark high-school comedy onto a David Cronenberg movie." Meanwhile, Christopher O'Keeffe of Twitch Film in his unfavorable review commented that "Parasyte: Part 1 spends a great deal of time laying the groundwork for the concluding chapter and its charmless aliens and the scarcity of action in early scenes fail to make it stand on its own." Sequel: A sequel, Parasyte: Part 2, was released in Japan on April 25, 2015.
**ACE Yewt** ACE Yewt: The ACE Yewt is an electric light commercial vehicle (A-segment), produced by the ACE EV Group since 2021. History: Two years after its foundation, the ACE EV Group introduced its future range of electric vehicles, including a small pickup truck called the ACE Yewt. Characterized by a curvilinear silhouette, the vehicle was developed using an aluminum frame and lightweight plastics reinforced with carbon fiber. In addition to the Yewt, the ACE EV Group also introduced a panel van variant called the ACE Cargo. The van was characterized by a high roof line, which lowered abruptly at the height of the two-person passenger compartment. Both the Yewt and Cargo were delivered in October 2021 for production in Adelaide, South Australia, reaching local buyers the following year. The main targets are small entrepreneurs in large Australian metropolises. Specifications: Both the Yewt and Cargo use a 23 kWh battery; they have a maximum speed of 100 km/h (62 mph), reach 50 km/h (31 mph) in 7 seconds, and travel a maximum of 150 to 200 kilometers on a single charge, depending on load and driving style.
**Register (music)** Register (music): A register is the "height" or range of a note, set of pitches or pitch classes, melody, part, instrument, or group of instruments. A higher register indicates higher pitch. Example 1: Violins are in a higher register than cellos. In woodwind and brass instruments, the word register usually distinguishes pitch ranges produced using different normal modes of the air column, with higher registers produced by overblowing. The timbres of different woodwind instrument registers are often markedly different. Register (music): Example 2: The Western concert flute plays approximately three and a half octaves and generally has three complete registers and one partial register. The musical note C4 (corresponding to middle C on the piano) would be in that instrument's first register, whereas C5 (one octave higher) would be in its second register. However, on the clarinet the notes from (written) G4 or A4 to B♭4 sometimes are regarded as a separate "throat register", even though both they and the notes from F♯4 down are produced using the instrument's lowest normal mode; the timbre of the throat notes differs, and the throat register's fingerings also are distinctive, using special keys and not the standard tone holes used for other notes. Register (music): The register in which an instrument plays, or in which a part is written, affects the quality of sound or timbre. Register is also used structurally in musical form, with the climax of a piece usually being in the highest register of that piece. Often, serial and other pieces will use fixed register, allowing a pitch class to be expressed through only one pitch. Register (music): A "register" of the human voice is a series of tones of like quality originating through operation of the larynx.
The constituent tones result from similar patterns of vibration in the vocal folds, which can generate several different such patterns, each resulting in characteristic sounds within a particular range of pitches. The term has wide application and can refer to any of several aspects of the human voice, including the following: a particular segment of the vocal range; a resonance area such as chest voice or head voice; a phonatory process; a certain vocal timbre; or a region of the voice set off by vocal breaks. Speech pathologists and many vocal pedagogues recognize four vocal registers: vocal fry, modal, falsetto, and whistle. To delineate these registers, pathologists specify the vibratory pattern of the vocal folds, the sequential pitches, and the type of sound.
**SAML-based products and services** SAML-based products and services: Security Assertion Markup Language (SAML) is a set of specifications that encompasses an XML format for security tokens containing assertions to pass information about a user, along with protocols and profiles to implement authentication and authorization scenarios. This article focuses on software and services in the category of identity management infrastructure, which enable building Web-SSO solutions using the SAML protocol in an interoperable fashion. Software and services that are merely SAML-enabled are not covered here. Products that provide SAML actors: SAML actors are Identity Providers (IdP), Service Providers (SP), Discovery Services, ECP Clients, Metadata Services, or Broker/IdP-proxies. This table shows the capability of products according to Kantara Initiative testing. Claimed capabilities are in the column "other". Each mark denotes that at least one interoperability test was passed. Detailed results with product and test procedure versions are available at the Kantara/Liberty site given below. Products that provide SAML actors: NOTE: This table represents a snapshot-over-time roll-up of the most recent product test results (multiple testing rounds). Some product features and abilities may have been updated since they were last tested. Please check the website of the originating product for the latest features and updates. Libraries and toolkits to develop SAML actors and SAML-enabled services: Libraries and toolkits are used by developers to integrate applications and services into SAML federations or to build their own SAML actors such as IdPs. SAML-related services: This section lists public services such as identity and attribute providers, metadata and test services, but *not* SAML-enabled web applications and cloud services.
**Defected ground structure** Defected ground structure: A defected ground structure (DGS) is a purposefully created defect on the ground plane of a printed microstrip board. It is typically created in the form of an etched-out pattern on the ground plane. DGS is a simplified form of the Electromagnetic Band Gap (EBG) structure. An EBG is a periodic pattern featuring a band-stop property in microstrip transmission line and circuit applications, whereas a DGS comprises a single defect or a very limited number of defects with periodic/aperiodic configurations. History: Kim et al. first conceived this limited form of EBG and coined the term 'DGS'. They used a single unit of a dumbbell-shaped defect beneath a microstrip line to exploit its stop-band characteristic, within which it impedes the propagation of electromagnetic (EM) waves down the line over a range of frequencies. The compact feature and ease of implementation made it popular, and several other shapes of DGS evolved quickly for various microwave circuit applications. The printed circuit filter is one of them. Apart from that, DGS has also been used in the circuits of amplifiers, rat-race couplers, branch-line couplers, Wilkinson power dividers, etc. DGS was also employed underneath feed lines to integrated microstrip antennas in order to filter out unwanted harmonics. Ideas for Antenna Applications: A new concept of its application to microstrip antennas was first reported in 2005 by Guha et al. The main focus was to suppress the cross-polarized radiation in a circular microstrip patch. DGS was strategically used to weaken the cross-pol-generating higher-order TM21 mode. The important and necessary condition is that the deployed DGS should not influence or disturb the main resonance, i.e. the primary radiation mode. That work indeed introduced a non-resonant DGS and proved the concept. Subsequently, several advancements in DGS type, geometry, and cross-polar performance have been achieved.
Ideas for Antenna Applications: Yet another idea, patch-DGS integration, has advanced microstrip antenna array design. The issue of mutual coupling among the array elements can be reduced by integrating a simple DGS, first reported in 2006. This technique has matured to address the practical issue of 'scan blindness' in large arrays. The DGS technique has proved to be a very useful, commercially viable tool to minimize two major issues in phased arrays: scan blindness and cross-polar radiation. New-generation airborne and space-borne radars are now being developed using this DGS technology.
**Inferior pancreaticoduodenal artery** Inferior pancreaticoduodenal artery: The inferior pancreaticoduodenal artery (the IPDA) is a branch of the superior mesenteric artery. It supplies the head of the pancreas, and the ascending and inferior parts of the duodenum. Rarely, it may have an aneurysm. Structure: The inferior pancreaticoduodenal artery is a branch of the superior mesenteric artery. This occurs opposite the upper border of the inferior part of the duodenum. As soon as it branches, it divides into anterior and posterior branches. These run between the head of the pancreas and the lesser curvature of the duodenum. They then join (anastomose) with the anterior and posterior branches of the superior pancreaticoduodenal artery. Structure: Variation The inferior pancreaticoduodenal artery may branch from the first intestinal branch of the superior mesenteric artery rather than directly from it. Function: The inferior pancreaticoduodenal artery distributes branches to the head of the pancreas and to the ascending and inferior parts of the duodenum. Clinical significance: Aneurysm Very rarely, the inferior pancreaticoduodenal artery may have an aneurysm. It may be caused by certain medical interventions, major trauma, pancreatitis, cholecystitis, and vasculitis and other infections. A ruptured aneurysm causes abdominal pain, and haemorrhage leads to hypotension. It may be treated with open abdominal surgery. It may also be treated with endovascular surgery, such as a coil. These aneurysms represent around 2% of aneurysms in visceral arteries of the abdomen. Pseudoaneurysm may also occur. History: The inferior pancreaticoduodenal artery may be more simply known by the acronym IPDA.
**Postal codes in the Faroe Islands** Postal codes in the Faroe Islands: Postal codes in the Faroe Islands consist of the two-letter ISO 3166 code "FO", followed by three digits: P/F Postverk Føroya Óðinshædd 22 FO-100 Tórshavn FAROE ISLANDS PO Box addresses: Separate postal codes are used for PO Box addresses in the capital Tórshavn and some other towns: HN Jacobsens Bókahandil Postboks 55 FO-110 Tórshavn FAROE ISLANDS Former Danish postal codes: Previously, the Faroe Islands formed part of the Danish postcode system, introduced in 1967, which also included Greenland. This used the number range 3800 to 3899, and the "DK" prefix for Denmark: Føroya Ferdamannafelag DK-3800 Tórshavn FAROE ISLANDS Later on, the "FR" prefix was used: DGU Føroyadeild Debesartrøð FR 3800 Tórshavn FAROE ISLANDS When the three-digit postal codes were first introduced, they were used for PO Box addresses, alongside the existing four-digit ones.
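The modern format described above (the prefix "FO" plus three digits) is simple enough to check with a regular expression. The sketch below is purely illustrative: `is_faroe_postcode` is a name of my own invention, not an official Postverk Føroya tool, and it only validates the surface format, not whether a given code is actually assigned.

```python
import re

# Modern Faroese postal codes: the ISO 3166 prefix "FO" followed by three
# digits, conventionally written with a hyphen, e.g. FO-100.
# Illustrative sketch only; accepts the code with or without the hyphen.
FAROE_POSTCODE = re.compile(r"FO-?\d{3}")

def is_faroe_postcode(code: str) -> bool:
    """Return True if `code` matches the FO-NNN surface format."""
    return FAROE_POSTCODE.fullmatch(code.strip().upper()) is not None
```

Note that the old Danish-era codes (DK-3800 / FR 3800) deliberately do not match, since they belong to the four-digit Danish system.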
**B'Twin** B'Twin: B’TWIN is a trademarked brand of bicycles as well as bicycle parts and accessories marketed by Decathlon. The bicycles are produced by several manufacturers in Asia and Europe. In 2010, a small part of the assembly process was relocated to France. From 2008–18, more than 1 million bicycles were produced in Portugal. The brand also sold bike accessories and parts at budget prices. B'Twin: In March 2018, Decathlon said it planned to drop the B’Twin name completely. Over approximately the following year, the company migrated the B’Twin brand to children's bikes, as part of a branding arrangement that will see new ranges of bicycles, including both road and mountain bikes. At present, the B'Twin brand is used for folding bikes and some bicycle accessories, in addition to the aforementioned children's bikes. Ranges: B'Twin's product range includes:
Kids' city bikes - city bikes designed for children
Tilt - budget folding bikes available in pedal-powered and electric versions
The B'Twin brand formerly encompassed these additional products, which have since been turned into separate brands:
Riverside/Original - entry-level hybrid bikes
Triban - road bikes, including some UCI-approved models
Rockrider - mountain bikes
Elops - classic city bikes, pedal or electric models
Long distance city bikes - bikes built for long-distance rides, with only pedal-powered versions available
ATB bikes - all-terrain bikes
Nework - active urban bikes
**Hurwitz's automorphisms theorem** Hurwitz's automorphisms theorem: In mathematics, Hurwitz's automorphisms theorem bounds the order of the group of automorphisms, via orientation-preserving conformal mappings, of a compact Riemann surface of genus g > 1, stating that the number of such automorphisms cannot exceed 84(g − 1). A group for which the maximum is achieved is called a Hurwitz group, and the corresponding Riemann surface a Hurwitz surface. Because compact Riemann surfaces are synonymous with non-singular complex projective algebraic curves, a Hurwitz surface can also be called a Hurwitz curve. The theorem is named after Adolf Hurwitz, who proved it in (Hurwitz 1893). Hurwitz's automorphisms theorem: Hurwitz's bound also holds for algebraic curves over a field of characteristic 0, and over fields of positive characteristic p > 0 for groups whose order is coprime to p, but can fail over fields of positive characteristic p > 0 when p divides the group order. For example, the double cover of the projective line y² = xᵖ − x branched at all points defined over the prime field has genus g = (p − 1)/2 but is acted on by the group SL₂(p) of order p³ − p. Interpretation in terms of hyperbolicity: One of the fundamental themes in differential geometry is a trichotomy between the Riemannian manifolds of positive, zero, and negative curvature K. It manifests itself in many diverse situations and on several levels.
In the context of compact Riemann surfaces X, via the Riemann uniformization theorem, this can be seen as a distinction between the surfaces of different topologies: X a sphere, a compact Riemann surface of genus zero with K > 0; X a flat torus, or an elliptic curve, a Riemann surface of genus one with K = 0; and X a hyperbolic surface, which has genus greater than one and K < 0. While in the first two cases the surface X admits infinitely many conformal automorphisms (in fact, the conformal automorphism group is a complex Lie group of dimension three for a sphere and of dimension one for a torus), a hyperbolic Riemann surface only admits a discrete set of automorphisms. Hurwitz's theorem claims that in fact more is true: it provides a uniform bound on the order of the automorphism group as a function of the genus and characterizes those Riemann surfaces for which the bound is sharp. Statement and proof: Theorem: Let X be a smooth connected Riemann surface of genus g ≥ 2. Then its automorphism group G = Aut(X) has size at most 84(g − 1). Proof: Assume for now that G = Aut(X) is finite (this will be proved at the end). Statement and proof: Consider the quotient map X → X/G. Since G acts by holomorphic functions, the quotient is locally of the form z ↦ zⁿ, and the quotient X/G is a smooth Riemann surface. The quotient map X → X/G is a branched cover, and we will see below that the ramification points correspond to the orbits that have a non-trivial stabiliser. Let g₀ be the genus of X/G. By the Riemann–Hurwitz formula, 2g − 2 = |G|(2g₀ − 2) + Σᵢ (|G|/eᵢ)(eᵢ − 1), where the sum is over the k ramification points pᵢ ∈ X/G for the quotient map X → X/G. The ramification index eᵢ at pᵢ is just the order of the stabiliser group, since eᵢ fᵢ = deg(X → X/G) = |G|, where fᵢ is the number of pre-images of pᵢ (the number of points in the orbit). By definition of ramification points, eᵢ ≥ 2 for all k ramification indices. The formula can be rewritten as 2g − 2 = |G|(2g₀ − 2 + Σᵢ (1 − 1/eᵢ)). Now call the bracketed factor R, so that 2g − 2 = |G|R; since g ≥ 2 we must have R > 0.
Rearranging the equation we find |G| = 2(g − 1)/R, so small positive values of R force |G| to be large. If g₀ ≥ 2 then R ≥ 2, and |G| ≤ g − 1. If g₀ = 1, then k ≥ 1 and R ≥ 0 + 1 − 1/2 = 1/2, so that |G| ≤ 4(g − 1). If g₀ = 0, then k ≥ 3, and: if k ≥ 5 then R ≥ −2 + k(1 − 1/2) ≥ 1/2, so that |G| ≤ 4(g − 1); if k = 4 then R ≥ −2 + (1 − 1/2) + (1 − 1/2) + (1 − 1/2) + (1 − 1/3) = 1/6, so that |G| ≤ 12(g − 1); if k = 3 then write e₁ = p, e₂ = q, e₃ = r. We may assume 2 ≤ p ≤ q ≤ r. If p ≥ 3 then (since R > 0 rules out (3,3,3)) R ≥ −2 + (1 − 1/3) + (1 − 1/3) + (1 − 1/4) = 1/12, so that |G| ≤ 24(g − 1). If p = 2, then: if q ≥ 4 then R ≥ −2 + (1 − 1/2) + (1 − 1/4) + (1 − 1/5) = 1/20, so that |G| ≤ 40(g − 1); if q = 3 then (since R > 0 forces r ≥ 7) R ≥ −2 + (1 − 1/2) + (1 − 1/3) + (1 − 1/7) = 1/42, so that |G| ≤ 84(g − 1). In conclusion, |G| ≤ 84(g − 1). To show that G is finite, note that G acts on the cohomology H∗(X,C) preserving the Hodge decomposition and the lattice H1(X,Z). In particular, its action on V = H0,1(X,C) gives a homomorphism h : G → GL(V) with discrete image h(G). In addition, the image h(G) preserves the natural non-degenerate Hermitian inner product (ω, η) = i ∫ ω̄ ∧ η on V. In particular the image h(G) is contained in the unitary group U(V) ⊂ GL(V), which is compact. Thus the image h(G) is not just discrete, but finite. Statement and proof: It remains to prove that h : G → GL(V) has finite kernel. In fact, we will prove h is injective. Assume φ ∈ G acts as the identity on V. If fix(φ) is finite, then by the Lefschetz fixed-point theorem the number of fixed points equals the Lefschetz number 1 − tr(φ∗|H1(X,C)) + 1; since φ acts as the identity on V and hence on its conjugate, tr(φ∗|H1(X,C)) = 2g, giving |fix(φ)| = 2 − 2g < 0. This is a contradiction, and so fix(φ) is infinite. Since fix(φ) is a closed complex subvariety of positive dimension and X is a smooth connected curve (i.e. dim_C(X) = 1), we must have fix(φ) = X. Thus φ is the identity, and we conclude that h is injective and G ≅ h(G) is finite. Statement and proof: Q.E.D. Corollary of the proof: A Riemann surface X of genus g ≥ 2 has 84(g − 1) automorphisms if and only if X is a branched cover X → P¹ with three ramification points, of indices 2, 3 and 7. The idea of another proof and construction of the Hurwitz surfaces: By the uniformization theorem, any hyperbolic surface X – i.e., the Gaussian curvature of X is equal to negative one at every point – is covered by the hyperbolic plane.
The conformal mappings of the surface correspond to orientation-preserving automorphisms of the hyperbolic plane. By the Gauss–Bonnet theorem, the area of the surface is A(X) = −2π χ(X) = 4π(g − 1). In order to make the automorphism group G of X as large as possible, we want the area of its fundamental domain D for this action to be as small as possible. If the fundamental domain is a triangle with the vertex angles π/p, π/q and π/r, defining a tiling of the hyperbolic plane, then p, q, and r are integers greater than one, and the area is A(D) = π(1 − 1/p − 1/q − 1/r). Thus we are asking for integers which make the expression 1 − 1/p − 1/q − 1/r strictly positive and as small as possible. This minimal value is 1/42, and 1 − 1/2 − 1/3 − 1/7 = 1/42 gives a unique triple of such integers. This would indicate that the order |G| of the automorphism group is bounded by A(X)/A(D) ≤ 168(g − 1). However, a more delicate reasoning shows that this is an overestimate by a factor of two, because the group G can contain orientation-reversing transformations. For the orientation-preserving conformal automorphisms the bound is 84(g − 1). The idea of another proof and construction of the Hurwitz surfaces: Construction: To obtain an example of a Hurwitz group, let us start with a (2,3,7)-tiling of the hyperbolic plane. Its full symmetry group is the full (2,3,7) triangle group generated by the reflections across the sides of a single fundamental triangle with the angles π/2, π/3 and π/7. Since a reflection flips the triangle and changes the orientation, we can join the triangles in pairs and obtain an orientation-preserving tiling polygon. The idea of another proof and construction of the Hurwitz surfaces: A Hurwitz surface is obtained by 'closing up' a part of this infinite tiling of the hyperbolic plane to a compact Riemann surface of genus g. This will necessarily involve exactly 84(g − 1) double triangle tiles.
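The minimization above is easy to verify by brute force. This sketch (not part of the original argument) searches all triples 2 ≤ p ≤ q ≤ r ≤ 50 with exact rational arithmetic:

```python
from fractions import Fraction

# Brute-force check that the smallest strictly positive value of
# 1 - 1/p - 1/q - 1/r over integer triples 2 <= p <= q <= r <= 50
# is 1/42, attained only at (p, q, r) = (2, 3, 7).
best_value, best_triples = None, []
for p in range(2, 51):
    for q in range(p, 51):
        for r in range(q, 51):
            v = 1 - Fraction(1, p) - Fraction(1, q) - Fraction(1, r)
            if v <= 0:
                continue  # only strictly positive values qualify
            if best_value is None or v < best_value:
                best_value, best_triples = v, [(p, q, r)]
            elif v == best_value:
                best_triples.append((p, q, r))

print(best_value, best_triples)  # 1/42 [(2, 3, 7)]
```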
The following two regular tilings have the desired symmetry group; the rotational group corresponds to rotation about an edge, a vertex, and a face, while the full symmetry group would also include a reflection. The polygons in the tiling are not fundamental domains – the tiling by (2,3,7) triangles refines both of these and is not regular. The Wythoff construction yields further uniform tilings – eight in all, including the two regular ones given here. These all descend to Hurwitz surfaces, yielding tilings of the surfaces (triangulation, tiling by heptagons, etc.). The idea of another proof and construction of the Hurwitz surfaces: From the arguments above it can be inferred that a Hurwitz group G is characterized by the property that it is a finite quotient of the group with two generators a and b and three relations a² = b³ = (ab)⁷ = 1; thus G is a finite group generated by two elements of orders two and three, whose product is of order seven. More precisely, any Hurwitz surface, that is, a hyperbolic surface that realizes the maximum order of the automorphism group for the surfaces of a given genus, can be obtained by the construction given. This is the last part of the theorem of Hurwitz. Examples of Hurwitz groups and surfaces: The smallest Hurwitz group is the projective special linear group PSL(2,7), of order 168, and the corresponding curve is the Klein quartic curve. This group is also isomorphic to PSL(3,2). Next is the Macbeath curve, with automorphism group PSL(2,8) of order 504. Many more finite simple groups are Hurwitz groups; for instance all but 64 of the alternating groups are Hurwitz groups, the largest non-Hurwitz example being of degree 167. The smallest alternating group that is a Hurwitz group is A15. Examples of Hurwitz groups and surfaces: Most projective special linear groups of large rank are Hurwitz groups (Lucchini, Tamburini & Wilson 2000). For lower ranks, fewer such groups are Hurwitz.
For n_p the order of p modulo 7, one has that PSL(2,q) is Hurwitz if and only if either q = 7 or q = p^(n_p). Indeed, PSL(3,q) is Hurwitz if and only if q = 2, PSL(4,q) is never Hurwitz, and PSL(5,q) is Hurwitz if and only if q = 7⁴ or q = p^(n_p) (Tamburini & Vsemirnov 2006). Examples of Hurwitz groups and surfaces: Similarly, many groups of Lie type are Hurwitz. The finite classical groups of large rank are Hurwitz (Lucchini & Tamburini 1999). The exceptional Lie groups of type G2 and the Ree groups of type 2G2 are nearly always Hurwitz (Malle 1990). Other families of exceptional and twisted Lie groups of low rank are shown to be Hurwitz in (Malle 1995). Examples of Hurwitz groups and surfaces: There are 12 sporadic groups that can be generated as Hurwitz groups: the Janko groups J1, J2 and J4, the Fischer groups Fi22 and Fi'24, the Rudvalis group, the Held group, the Thompson group, the Harada–Norton group, the third Conway group Co3, the Lyons group, and the Monster (Wilson 2001). Automorphism groups in low genus: The largest |Aut(X)| can get for a Riemann surface X of genus g is shown below, for 2 ≤ g ≤ 10, along with a surface X0 with |Aut(X0)| maximal. In this range, a Hurwitz curve exists only in genus g = 3 and genus g = 7. Generalizations: The concept of a Hurwitz surface can be generalized in several ways to a definition that has examples in all but a few genera. Perhaps the most natural is a "maximally symmetric" surface: one that cannot be continuously modified through equally symmetric surfaces to a surface whose symmetry properly contains that of the original surface. This is possible for all orientable compact genera (see above section "Automorphism groups in low genus").
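The two group orders quoted above can be checked directly from the standard formula |PSL(2,q)| = q(q² − 1)/gcd(2, q − 1), recovering the genus of the corresponding Hurwitz surface from |Aut(X)| = 84(g − 1). A quick sketch (not from the article):

```python
from math import gcd

def psl2_order(q: int) -> int:
    """Order of PSL(2, q): q * (q^2 - 1) / gcd(2, q - 1)."""
    return q * (q * q - 1) // gcd(2, q - 1)

# PSL(2,7) acts on the Klein quartic, PSL(2,8) on the Macbeath curve.
for q in (7, 8):
    order = psl2_order(q)
    genus = order // 84 + 1  # invert |Aut(X)| = 84 * (g - 1)
    print(f"PSL(2,{q}) has order {order}: Hurwitz surface of genus {genus}")
```

This reproduces the orders 168 (genus 3) and 504 (genus 7) stated in the text.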
**MOF Model to Text Transformation Language** MOF Model to Text Transformation Language: MOF Model to Text Transformation Language (Mof2Text or MOFM2T) is an Object Management Group (OMG) specification for a model transformation language. Specifically, it can be used to express transformations which transform a model into text (M2T), for example a platform-specific model into source code or documentation. MOFM2T is one part of OMG's Model-driven architecture (MDA) and reuses many concepts of MOF, OMG's metamodeling architecture. Whereas MOFM2T is used for expressing M2T transformations, OMG's QVT is used for expressing M2M transformations.
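As a toy illustration of the model-to-text idea (plain Python, not MOFM2T syntax; the model, template, and all names are invented for this sketch), a platform-specific model can be rendered into source-code text by a template function:

```python
# A platform-specific model, represented here as a plain dict.
model = {
    "class": "Invoice",
    "attributes": [("number", "int"), ("customer", "String")],
}

def generate_java_class(m: dict) -> str:
    """Render the model as the text of a minimal Java class (M2T step)."""
    fields = "\n".join(
        f"    private {jtype} {name};" for name, jtype in m["attributes"]
    )
    return f"public class {m['class']} {{\n{fields}\n}}\n"

print(generate_java_class(model))
```

In MOFM2T proper, the template and the model's metamodel would both be defined against MOF, but the transformation's shape (model in, text out) is the same.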
**Match performance indicator** Match performance indicator: Match Performance Indicators (MPI) are the KPIs of sport. The term was created, and is widely used, in the scouting and analysis system eye4TALENT. An MPI is set up as an indicator of a player's performance compared to a standard for a specific position on the field. When judging, e.g., a football player's performance, it is then possible to compare his stats with the MPI of his playing position. Since its introduction, the term has been widely used in various Danish media including Ekstra Bladet, BT, 6'eren and Kontra Magazine.
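A hypothetical sketch of the comparison described above, with all metric names and numbers invented (the article does not disclose eye4TALENT's actual metrics):

```python
# Positional standard and a player's observed match statistics (invented).
position_standard = {"passes": 40, "tackles": 5, "km_run": 10.0}
player_stats = {"passes": 48, "tackles": 4, "km_run": 11.2}

def mpi_ratios(stats: dict, standard: dict) -> dict:
    """Per-metric ratio of the player's output to the positional standard."""
    return {metric: stats[metric] / standard[metric] for metric in standard}

ratios = mpi_ratios(player_stats, position_standard)
print(ratios)  # a ratio above 1.0 means the player exceeds the standard
```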
**TNFSF12** TNFSF12: Tumor necrosis factor ligand superfamily member 12, also known as TNF-related weak inducer of apoptosis (TWEAK), is a protein that in humans is encoded by the TNFSF12 gene. Function: TWEAK was discovered in 1997. The protein encoded by this gene is a cytokine that belongs to the tumor necrosis factor (TNF) ligand family. This protein is a ligand for the FN14/TWEAKR receptor. This cytokine has overlapping signaling functions with TNF, but displays a much wider tissue distribution. Leukocytes, including resting and activated human monocytes, dendritic cells and natural killer cells, are the main source of TWEAK. TWEAK can induce apoptosis via multiple pathways of cell death in a cell type-specific manner. This cytokine is also found to promote proliferation and migration of endothelial cells, and thus acts as a regulator of angiogenesis. Clinical significance: Excessive activation of the TWEAK pathway in chronic injury has been described to promote pathological tissue changes including chronic inflammation, fibrosis and angiogenesis. In chronic liver disease, for example, TWEAK expression is enhanced and causes hepatic stellate cells, which are key regulators of liver fibrosis, to proliferate.
**Fully switched network** Fully switched network: A fully switched network is a computer network which uses only network switches rather than Ethernet hubs on Ethernet networks. The switches provide a dedicated connection to each workstation, and a switch allows many conversations to occur simultaneously. Before switches, hub-based networks could transmit data in only one direction at a time; this mode is called half-duplex. Using a switch removes this restriction: full-duplex communication is maintained and the network is collision-free, which means that data can be transmitted in both directions at the same time. Fully switched networks employ either twisted-pair or fiber-optic cabling, both of which use separate conductors for sending and receiving data. In this type of environment, Ethernet nodes can forgo the collision detection process and transmit at will, since they are the only potential devices that can access the medium. This means that a fully switched network is a collision-free environment. Fully switched network: The core function of a switch is to allow each workstation to communicate directly with the switch rather than sharing the medium with every other workstation. This in turn means that data can be sent from workstation to switch and from switch to workstation simultaneously. The purpose of a switch is to decongest network flow to the workstations so that connections can transmit more effectively, each workstation receiving only the transmissions addressed to its network address. With the network decongested and transmitting data in both directions simultaneously, this can in effect double network speed and capacity when two workstations are trading information. For example, if the network speed is 5 Mbit/s, then each workstation can simultaneously send and receive data at 5 Mbit/s.
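The "doubling" claim in the example above is simple arithmetic, sketched here with the article's 5 Mbit/s figure:

```python
# A half-duplex link carries traffic in only one direction at a time, while
# full duplex lets both directions use the full link rate simultaneously.
link_rate_mbps = 5

half_duplex_total = link_rate_mbps      # send OR receive, one at a time
full_duplex_total = 2 * link_rate_mbps  # send AND receive concurrently

print(half_duplex_total, full_duplex_total)  # 5 10
```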
**Nif gene** Nif gene: The nif genes are genes encoding enzymes involved in the fixation of atmospheric nitrogen into a form of nitrogen available to living organisms. The primary enzyme encoded by the nif genes is the nitrogenase complex, which is in charge of converting atmospheric nitrogen (N2) to other nitrogen forms such as ammonia, which the organism can use for various purposes. Besides the nitrogenase enzyme, the nif genes also encode a number of regulatory proteins involved in nitrogen fixation. The nif genes are found in both free-living nitrogen-fixing bacteria and in symbiotic bacteria associated with various plants. The expression of the nif genes is induced in response to low concentrations of fixed nitrogen and of oxygen (the low oxygen concentrations are actively maintained in the root environment of host plants). The first Rhizobium genes for nitrogen fixation (nif) and for nodulation (nod) were cloned in the early 1980s by Gary Ruvkun and Sharon R. Long in Frederick M. Ausubel's laboratory. Regulation: In most bacteria, regulation of nif gene transcription is done by the nitrogen-sensitive NifA protein. When there is not enough fixed nitrogen available for the organism's use, NtrC triggers NifA expression, and NifA activates the rest of the nif genes. If a sufficient amount of fixed nitrogen is available, or if oxygen is present, another protein is activated: NifL. NifL inhibits NifA activity, resulting in the inhibition of nitrogenase formation. NifL is regulated by the products of glnD and glnK. The nif genes can be found on bacterial chromosomes, but in symbiotic bacteria they are often found on plasmids or symbiosis islands with other genes related to nitrogen fixation (such as the nod genes). Examples in nature: The expression and regulation of nif genes, while sharing common features in all or most of the nitrogen-fixing organisms in nature, have distinct characters and qualities that differ from one diazotroph to another.
Examples of nif gene structure and regulation in different diazotrophs include: Klebsiella pneumoniae—a free-living anaerobic nitrogen-fixing bacterium. It contains a total of 20 nif genes located on the chromosome in a 24-kb region. nifH, nifD, and nifK encode the nitrogenase subunits, while nifE, nifN, nifU, nifS, nifV, nifW, nifX, nifB, and nifQ encode proteins involved in the assembly and incorporation of iron and molybdenum atoms into the nitrogenase subunits. nifF and nifJ encode proteins related to the electron transfer taking place in the reduction process, and nifA and nifL encode regulatory proteins in charge of regulating the expression of the other nif genes. Rhodospirillum rubrum—a free-living anaerobic photosynthetic bacterium which, in addition to the transcriptional controls described above, also regulates expression of the nif genes in a metabolic way, through a reversible ADP-ribosylation of a specific arginine residue in the nitrogenase complex. The ribosylation takes place when reduced nitrogen is present; it causes a barrier in the electron transfer flow and thereby inactivates nitrogenase activity. The enzymes catalyzing the ribosylation are called DraG and DraT. Rhodobacter capsulatus—a free-living anaerobic phototroph containing a transcriptional nif gene regulatory system. R. capsulatus regulates nif gene expression through nifA in the same manner described before, but it uses a different activator of nifA in place of NtrC, which initiates the expression of nifA and, through it, of the other nif genes. Rhizobium spp.—Gram-negative, symbiotic nitrogen-fixing bacteria that usually form a symbiotic relationship with legume species. In some rhizobia, the nif genes are located on plasmids called 'sym plasmids' (sym = symbiosis) which contain genes related to nitrogen fixation and metabolism, while the chromosomes contain most of the housekeeping genes of the bacteria.
Regulation of the nif genes is at the transcriptional level and is dependent on colonization of the plant host.
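The NtrC/NifA/NifL regulatory logic described in the Regulation section can be summarized as a toy boolean model (an illustration only, not a quantitative model):

```python
# NtrC induces NifA when fixed nitrogen is scarce; NifL, activated by fixed
# nitrogen or oxygen, inhibits NifA and hence nitrogenase formation.
def nitrogenase_expressed(fixed_nitrogen_low: bool, oxygen_present: bool) -> bool:
    nifA_expressed = fixed_nitrogen_low                       # NtrC triggers NifA
    nifL_active = (not fixed_nitrogen_low) or oxygen_present  # NifL activation
    return nifA_expressed and not nifL_active                 # NifL blocks NifA

print(nitrogenase_expressed(True, False))   # low fixed N, low O2: expressed
print(nitrogenase_expressed(True, True))    # oxygen present: blocked
print(nitrogenase_expressed(False, False))  # enough fixed nitrogen: off
```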
**Piperylene** Piperylene: Piperylene or 1,3-pentadiene is an organic compound with the formula CH3−CH=CH−CH=CH2. It is a volatile, flammable hydrocarbon. It is one of the five positional isomers of pentadiene. Reactions and occurrence: Piperylene is a typical diene. It forms a sulfolene upon treatment with sulfur dioxide. Piperylene is the product of the decarboxylation of sorbic acid, a common anti-mold agent. Piperylene is obtained as a byproduct of ethylene production from crude oil, combustion of biomass, waste incineration and exhaust gases. It is used as a monomer in the manufacturing of plastics, adhesives and resins.
**Alastair J Sloan** Alastair J Sloan: Professor Alastair J Sloan is an applied bioscientist and expert in the broad field of mineralised connective tissues, and, since January 2020, head of the Melbourne Dental School, University of Melbourne. Biography: Following primary and secondary education in Poulton-le-Fylde, Lancashire, in the UK, Alastair obtained his BSc in Biomedical Sciences from the University of Wales in 1993 and his PhD in Oral Biology and Pathology from the Faculty of Medicine and Dentistry at the University of Birmingham, UK, in 1997. He is currently Head of the Melbourne Dental School, a position he has held since 2020. Before that, in 2005, he established his own lab at Cardiff University, and was awarded his personal chair in 2012. He was Head of Oral and Biomedical Sciences at the School of Dentistry between 2010 and 2015, Director of International (2012-2015) and Director of Research (2015-2017). He was Director of the Cardiff Institute for Tissue Engineering and Repair at Cardiff University and was appointed Head of School at Cardiff University School of Dentistry in 2017 before moving to Melbourne. Biography: Alastair Sloan's research is multi-disciplinary and in the broad field of mineralised connective tissues. He is interested in the reparative potential and behaviour of the dentine-pulp complex and bone, specifically the potential therapeutic manipulation of dental pulp stem cells (DPSCs). This includes understanding the heterogeneity within dental pulp progenitor populations and potential therapeutic roles of these DPSCs in the wider context of regenerative medicine. His lab is also focused on the development of 'smart' materials and drug delivery systems for use in oral and dental medicine.
Biography: Alastair has authored several books and book chapters in his field, as well as having his research widely published in leading medical, dental, tissue engineering and health policy journals including the Journal of Dental Research, the International Journal of Nanomedicine, BioMed Research International and the European Journal of Dental Education. Biography: He has been interviewed by, and written articles for, newspaper and TV media including the BBC, the Wall Street Journal, the Irish Daily Mail, The Daily Mail and The Times of India. He is currently a research funding panel member for the EU and the Research Foundation Flanders, having previously been a member of the UK NC3Rs grant assessment panel. He sits on the Nominations Committee of the International Association for Dental Research (IADR) and is a member of both the British Society for Oral and Dental Research and the Australian and New Zealand Division of the IADR. Selected publications: Sloan has published more than 100 articles and book chapters in a large number of journals on dentistry, bone repair and general health, including: Alraies A, Waddington RJ, Sloan AJ, Moseley R (2020). Evaluation of dental pulp stem cell heterogeneity and behaviour in 3D type I collagen gels. Teoh L, Sloan AJ, McCullough MJ, Thompson W (2020). Measuring antibiotic stewardship in primary healthcare: An umbrella review of studies in medical care and a systematic review of dentistry. Bennett JH, Beeley JA, Anderson P, Belfield L, Brand H, Didilescu A, Dymock D, Guven Y, Hector MP, Holbrook P, Jayashinge JAP, O'Sullivan J, Riggio M, Roger-Leroi V, Scheven B, Sloan AJ, K Vandamme, Manzanares MC. (2020) Eur J Dent Educ. Selected publications: Avery SJ, Ayre WN, Sloan AJ, Waddington RJ (2020) Interrogating the Osteogenic Potential of Implant Surfaces In Vitro: A Review of Current Assays. Tissue Eng Part B (Rev) Serra E, Saubade F, Ligorio C, Whitehead K, Sloan A, Williams DW, Hidalgo-Bastida A, Verran J, Malic S.
(2020). Methylcellulose Hydrogel with Melissa officinalis Essential Oil as a Potential Treatment for Oral Candidiasis. Microorganisms. Selected publications: Bender L, Boostrom HM, Varricchio C, Zuanon M, Celiksoy V, Sloan A, Cowpe J, Heard CM (2020) A Novel Dual Action Monolithic Thermosetting Hydrogel Loaded With Lidocaine And Metronidazole As a Potential Treatment For Alveolar Osteitis. Eur J Pharm Biopharm. Lim SY, Dafydd M, Ong J, Ord-McDermott LA, Board-Davies E, Sands K, Williams D, Sloan AJ, Heard CM (2019) Mucoadhesive thin films for the simultaneous delivery of microbiocide and anti-inflammatory drugs in the treatment of periodontal disease. Int. J. Pharm. Alraies A, Canetta E, Waddington RJ, Moseley R, Sloan AJ (2019). Discrimination of dental pulp stem cell regenerative heterogeneity by single cell Raman spectroscopy. Tissue Eng Part C Methods. Munir A, Døskeland A, Avery SJ, Fuoco T, Mohamed-Ahmed S, Lygre H, Finne-Wistrand A, Sloan AJ, Waddington RJ, Mustafa K, Suliman S (2019). Efficacy of copolymer scaffolds delivering human demineralised dentin matrix for bone regeneration. J Tissue Eng. Jiang W, Wang D, Alraies A, Lu Q, Zhu B, Sloan A, Ni L, Song B (2019). Wnt-GSK3b/b-catenin Regulates the Differentiation of Dental Pulp Stem Cells into Bladder Smooth Muscle Cells. Stem Cells Int. Prokopovich P, Rivera M, Perni S, Sloan AJ. (2018). Anti-inflammatory drug-eluting implant model system to prevent wear particles induced periprosthetic osteolysis. Alraies A, Cole DK, S Rees J, Glasse C, Young N, Waddington RJ, Sloan AJ. (2018). Real-Time Binding Kinetic Analyses of the Interaction of the Dietary Stain Orange II with Dentin Matrix. J Dent. Yusop N, Battersby P, Alraies A, Sloan AJ, Moseley R, Waddington RJ (2018). Isolation and Characterisation of Mesenchymal Stem Cells from Rat Bone Marrow and the Endosteal Niche: A Comparative Study. Stem Cells Int. 
Nishio Ayre W, Melling G, Cuveillier C, Natarajan M, Roberts JL, Marsh LL, Lynch CD, Maillard JY, Denyer SP, Sloan AJ. (2018). Enterococcus faecalis Demonstrates Pathogenicity through Increased Attachment in an Ex Vivo Polymicrobial Pulpal Infection. Infect Immun. Melling GE, Colombo JS, Avery SJ, Ayre WN, Evans SL, Waddington RJ, Sloan AJ. (2018). Liposomal Delivery of Demineralized Dentin Matrix for Dental Tissue Regeneration. SJ Avery, L Sadaghiani, AJ Sloan, RJ Waddington (2017). Analysing the Bioactive Makeup of Demineralised Dentine Matrix on Bone Marrow Mesenchymal Stem Cells for enhanced bone repair. European Cells & Materials. 10;34:1-14 Sadaghiani L, Gleeson HB, Youde S, Waddington RJ, Lynch CD, Sloan AJ. (2016) Growth Factor Liberation and DPSC Response Following Dentine Conditioning. Jordan RP, Marsh L, Ayre WN, Jones Q, Parkes M, Austin B, Sloan AJ, Waddington RJ (2016). An assessment of early colonisation of implant-abutment metal surfaces by single species and co-cultured bacterial periodontal pathogens. Castillo-Dalí G, Castillo-Oyagüe R, Terriza A, Saffar JL, Batista-Cruzado A, Lynch CD, Sloan AJ, Gutiérrez-Pérez JL, Torres-Lagares D. (2016). Pre-prosthetic use of poly(lactic-co-glycolic acid) membranes treated with oxygen plasma and TiO2 nanocomposite particles for guided bone regeneration processes. Selected publications: Board-Davies E, Moses R, Sloan A, Stephens P, Davies L. (2015) Oral Mucosal Lamina Propria-Progenitor Cells Exert Antibacterial Properties via the Secretion of Osteoprotegerin and Haptoglobin. Stem cells Translational Med, 4(11): 1283-1293 Howard-Jones R, Colombo JS, Waddington RJ, Errington RJ, Sloan AJ (2015) A 3-D ex vivo mandible slice system for longitudinal culturing of transplanted dental pulp progenitor cells Cytometry Part A. 
87(10): 921-928 Lee CP, Colombo JS, Ayre WN, Waddington RJ, Sloan AJ (2015) Evaluating the Bioactivity of Demineralised Dentine Matrix Extract on the Cellular Behaviour of Clonal Dental Pulp Stem Cells in Orchestrating Dental Tissue Repair. J. Tissue Engineering. May 14. Awards: Professor Sloan has received multiple international awards including: Winner, BSODR Mineralised Tissue Group Research Award; Recipient, IADR Distinguished Scientist Award, Young Investigator Award; Fellow, Royal Society of Biology; Adjunct Professor, College of Medicine and Health, University College Cork (2019-date); Honorary Professor, College of Biomedical and Life Sciences, Cardiff University (2020-date); Honorary Fellow, International College of Dentists (2020); Ad Eundem Fellow, Faculty of Dentistry, Royal College of Surgeons Ireland (2021); IADR Distinguished Scientist Award, the Isaac Schour Memorial Award (2021)
**Clock and wavefront model** Clock and wavefront model: The clock and wavefront model is a model used to describe the process of somitogenesis in vertebrates. Somitogenesis is the process by which somites, blocks of mesoderm that give rise to a variety of connective tissues, are formed. The model describes the splitting off of somites from the paraxial mesoderm as the result of oscillating expression of particular proteins and their gradients. Overview: Once the cells of the pre-somitic mesoderm are in place, following cell migration during gastrulation, oscillatory expression of many genes begins in these cells as if regulated by a developmental "clock." This has led many to conclude that somitogenesis is coordinated by a "clock and wavefront" mechanism. Overview: More technically, this means that somitogenesis occurs due to the largely cell-autonomous oscillations of a network of genes and gene products which causes cells to oscillate between a permissive and a non-permissive state in a consistently timed fashion, like a clock. These genes include members of the FGF family, the Wnt and Notch pathways, as well as targets of these pathways. The wavefront progresses slowly in an anterior-to-posterior direction. As the wavefront of signaling comes in contact with cells in the permissive state, they undergo a mesenchymal-epithelial transition and pinch off from the more anterior pre-somitic mesoderm, forming a somite boundary and resetting the process for the next somite. In particular, the cyclic activation of the Notch pathway appears to be of great importance in the clock and wavefront model. It has been suggested that the activation of Notch cyclically activates a cascade of genes necessary for the somites to separate from the main paraxial body. This is controlled by different means in different species, such as through a simple negative feedback loop in zebrafish or in a complicated process in which FGF and Wnt clocks affect the Notch clock, as in chicks and mice.
Generally speaking, however, the segmentation clock model is highly evolutionarily conserved. Intrinsic expression of “clock genes” must oscillate with a periodicity equal to the time necessary for one somite to form, for example 30 minutes in zebrafish, 90 minutes in chicks, and 100 minutes in snakes. Autonomy of oscillation: Gene oscillation in presomitic cells is largely, but not completely, cell autonomous. When Notch signaling is disrupted in zebrafish, neighboring cells no longer oscillate synchronously, indicating that Notch signaling is important for keeping neighboring populations of cells synchronous. In addition, some cellular inter-dependency has been displayed in studies concerning the protein Sonic hedgehog (Shh) in somitogenesis. Although expression of Shh pathway proteins has not been reported to oscillate in the pre-somitic mesoderm, they are expressed within the pre-somitic mesoderm during somitogenesis. When the notochord is ablated during somitogenesis in the chick embryo, the proper number of somites forms, but the segmentation clock is delayed for the posterior two thirds of the somites. The anterior somites are not affected. In one study, this phenotype was mimicked by Shh inhibitors, and timely somite formation was rescued by exogenous Shh protein, showing that the missing signal produced by the notochord is mediated by Shh.
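The clock-and-wavefront mechanism can be caricatured in a minimal simulation; all parameters below are invented for illustration (real clocks are driven by gene networks, not a modulo function):

```python
# Each pre-somitic cell runs a cell-autonomous oscillator alternating between
# permissive and non-permissive states, while a wavefront sweeps
# anterior-to-posterior at constant speed. A new somite boundary forms at the
# wavefront's position each time the clock re-enters its permissive phase.
period = 30        # clock period in minutes (e.g. ~30 min in zebrafish)
wave_speed = 1.0   # wavefront speed in cell diameters per minute (invented)

def permissive(t: int) -> bool:
    """Clock state: permissive during the first half of each cycle."""
    return t % period < period // 2

boundaries = []
prev = permissive(0)
for t in range(1, 300):
    now = permissive(t)
    if now and not prev:                   # clock ticks into permissive phase
        boundaries.append(wave_speed * t)  # boundary forms at the wavefront
    prev = now

# Boundaries come out evenly spaced: one somite of size wave_speed * period
# per clock cycle.
spacings = {b2 - b1 for b1, b2 in zip(boundaries, boundaries[1:])}
print(len(boundaries), spacings)
```

Slowing the clock or speeding up the wavefront in this sketch yields larger, fewer somites, mirroring the model's prediction that somite size is set by clock period times wavefront speed.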
**4-Hydroxyphenylacetic acid** 4-Hydroxyphenylacetic acid: 4-Hydroxyphenylacetic acid is a chemical compound found in olive oil and beer. Synthesis: 4-Hydroxyphenylacetic acid is obtained by reducing 4-hydroxymandelic acid with elemental phosphorus and iodine. Uses: In industry, 4-hydroxyphenylacetic acid is an intermediate used to synthesize atenolol, 3,4-dihydroxyphenylacetic acid, and coclaurine.
**Basis (linear algebra)** Basis (linear algebra): In mathematics, a set B of vectors in a vector space V is called a basis (PL: bases) if every element of V may be written in a unique way as a finite linear combination of elements of B. The coefficients of this linear combination are referred to as components or coordinates of the vector with respect to B. The elements of a basis are called basis vectors. Basis (linear algebra): Equivalently, a set B is a basis if its elements are linearly independent and every element of V is a linear combination of elements of B. In other words, a basis is a linearly independent spanning set. A vector space can have several bases; however all the bases have the same number of elements, called the dimension of the vector space. This article deals mainly with finite-dimensional vector spaces. However, many of the principles are also valid for infinite-dimensional vector spaces. Definition: A basis B of a vector space V over a field F (such as the real numbers R or the complex numbers C) is a linearly independent subset of V that spans V. This means that a subset B of V is a basis if it satisfies the two following conditions:
Linear independence: for every finite subset {v1,…,vm} of B, if c1v1+⋯+cmvm=0 for some c1,…,cm in F, then c1=⋯=cm=0.
Spanning property: for every vector v in V, one can choose a1,…,an in F and v1,…,vn in B such that v=a1v1+⋯+anvn.
The scalars ai are called the coordinates of the vector v with respect to the basis B, and by the first property they are uniquely determined. Definition: A vector space that has a finite basis is called finite-dimensional. In this case, the finite subset can be taken as B itself to check for linear independence in the above definition. 
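For a finite candidate set, the two defining conditions can be checked mechanically: n vectors in Fn form a basis exactly when Gaussian elimination finds a pivot in every column, since n linearly independent vectors in an n-dimensional space already span it. A minimal sketch over the rationals, with a helper name of our choosing:

```python
from fractions import Fraction

def is_basis(vectors):
    """Check whether the given vectors form a basis of F^n (F = rationals):
    n vectors of length n are a basis iff Gaussian elimination yields n
    pivots, i.e. they are linearly independent (and hence also span)."""
    n = len(vectors)
    if any(len(v) != n for v in vectors):
        return False                       # need exactly n vectors of length n
    m = [[Fraction(x) for x in v] for v in vectors]
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return False                   # linearly dependent: no pivot here
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[col])]
    return True
```

Exact rational arithmetic avoids the false negatives that floating-point round-off would produce for nearly dependent vectors.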
Definition: It is often convenient or even necessary to have an ordering on the basis vectors, for example, when discussing orientation, or when one considers the scalar coefficients of a vector with respect to a basis without referring explicitly to the basis elements. In this case, the ordering is necessary for associating each coefficient to the corresponding basis element. This ordering can be done by numbering the basis elements. In order to emphasize that an order has been chosen, one speaks of an ordered basis, which is therefore not simply an unstructured set, but a sequence, an indexed family, or similar; see § Ordered bases and coordinates below. Examples: The set R2 of the ordered pairs of real numbers is a vector space under the operations of component-wise addition (a, b) + (c, d) = (a + c, b + d) and scalar multiplication λ(a, b) = (λa, λb), where λ is any real number. A simple basis of this vector space consists of the two vectors e1 = (1, 0) and e2 = (0, 1). These vectors form a basis (called the standard basis) because any vector v = (a, b) of R2 may be uniquely written as v = ae1 + be2. Any other pair of linearly independent vectors of R2, such as (1, 1) and (−1, 2), also forms a basis of R2. Examples: More generally, if F is a field, the set Fn of n-tuples of elements of F is a vector space for similarly defined addition and scalar multiplication. Let ei be the n-tuple with all components equal to 0, except the ith, which is 1. Then e1,…,en is a basis of Fn, which is called the standard basis of Fn. Examples: A different flavor of example is given by polynomial rings. If F is a field, the collection F[X] of all polynomials in one indeterminate X with coefficients in F is an F-vector space. One basis for this space is the monomial basis B, consisting of all monomials 1, X, X2, X3, … . Any set of polynomials such that there is exactly one polynomial of each degree (such as the Bernstein basis polynomials or Chebyshev polynomials) is also a basis. (Such a set of polynomials is called a polynomial sequence.) 
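Returning to the R2 example, the unique coordinates of a vector in a basis such as (1, 1), (−1, 2) can be computed by solving a 2×2 linear system. A sketch using Cramer's rule with exact rationals; the function name is ours:

```python
from fractions import Fraction

def coordinates_2d(b1, b2, v):
    """Coordinates (c1, c2) of v in the basis (b1, b2) of R^2, so that
    v = c1*b1 + c2*b2, computed by Cramer's rule. Raises ValueError if
    b1, b2 are linearly dependent (zero determinant, hence not a basis)."""
    det = b1[0] * b2[1] - b1[1] * b2[0]
    if det == 0:
        raise ValueError("b1 and b2 are linearly dependent: not a basis")
    c1 = Fraction(v[0] * b2[1] - v[1] * b2[0], det)
    c2 = Fraction(b1[0] * v[1] - b1[1] * v[0], det)
    return c1, c2
```

For the basis (1, 1), (−1, 2) and v = (2, 5) this yields (3, 1), and indeed 3·(1, 1) + 1·(−1, 2) = (2, 5); in the standard basis the coordinates of (a, b) are just (a, b).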
But there are also many bases for F[X] that are not of this form. Properties: Many properties of finite bases result from the Steinitz exchange lemma, which states that, for any vector space V, given a finite spanning set S and a linearly independent set L of n elements of V, one may replace n well-chosen elements of S by the elements of L to get a spanning set containing L, having its other elements in S, and having the same number of elements as S. Properties: Most properties resulting from the Steinitz exchange lemma remain true when there is no finite spanning set, but their proofs in the infinite case generally require the axiom of choice or a weaker form of it, such as the ultrafilter lemma. If V is a vector space over a field F, then: If L is a linearly independent subset of a spanning set S ⊆ V, then there is a basis B such that L ⊆ B ⊆ S. V has a basis (this is the preceding property with L being the empty set, and S = V). All bases of V have the same cardinality, which is called the dimension of V. This is the dimension theorem. A generating set S is a basis of V if and only if it is minimal, that is, no proper subset of S is also a generating set of V. A linearly independent set L is a basis if and only if it is maximal, that is, it is not a proper subset of any linearly independent set. If V is a vector space of dimension n, then: A subset of V with n elements is a basis if and only if it is linearly independent. A subset of V with n elements is a basis if and only if it is a spanning set of V. Coordinates: Let V be a vector space of finite dimension n over a field F, and B = (b1,…,bn) be a basis of V. By definition of a basis, every v in V may be written, in a unique way, as v = λ1b1 + ⋯ + λnbn, where the coefficients λ1,…,λn are scalars (that is, elements of F), which are called the coordinates of v over B. However, if one talks of the set of the coefficients, one loses the correspondence between coefficients and basis elements, and several vectors may have the same set of coefficients. 
For example, 3b1+2b2 and 2b1+3b2 have the same set of coefficients {2, 3}, and are different. It is therefore often convenient to work with an ordered basis; this is typically done by indexing the basis elements by the first natural numbers. Then, the coordinates of a vector form a sequence similarly indexed, and a vector is completely characterized by the sequence of coordinates. An ordered basis is also called a frame, a word commonly used, in various contexts, for referring to a sequence of data allowing the definition of coordinates. Coordinates: Let, as usual, Fn be the set of the n-tuples of elements of F. This set is an F-vector space, with addition and scalar multiplication defined component-wise. The map φ: (λ1,…,λn) ↦ λ1b1 + ⋯ + λnbn is a linear isomorphism from the vector space Fn onto V. In other words, Fn is the coordinate space of V, and the n-tuple φ−1(v) is the coordinate vector of v. Coordinates: The inverse image by φ of bi is the n-tuple ei all of whose components are 0, except the ith, which is 1. The ei form an ordered basis of Fn, which is called its standard basis or canonical basis. The ordered basis B is the image by φ of the canonical basis of Fn. It follows from what precedes that every ordered basis is the image by a linear isomorphism of the canonical basis of Fn, and that every linear isomorphism from Fn onto V may be defined as the isomorphism that maps the canonical basis of Fn onto a given ordered basis of V. In other words, it is equivalent to define an ordered basis of V, or a linear isomorphism from Fn onto V. Change of basis: Let V be a vector space of dimension n over a field F. Given two (ordered) bases Bold = (v1,…,vn) and Bnew = (w1,…,wn) of V, it is often useful to express the coordinates of a vector x with respect to Bold in terms of the coordinates with respect to Bnew. Change of basis: This can be done by the change-of-basis formula, that is described below. 
The subscripts "old" and "new" have been chosen because it is customary to refer to Bold and Bnew as the old basis and the new basis, respectively. It is useful to describe the old coordinates in terms of the new ones because, in general, one has expressions involving the old coordinates; if one wants to obtain equivalent expressions in terms of the new coordinates, this is obtained by replacing the old coordinates by their expressions in terms of the new coordinates. Change of basis: Typically, the new basis vectors are given by their coordinates over the old basis, that is, wj = a1,jv1 + ⋯ + an,jvn. If (x1,…,xn) and (y1,…,yn) are the coordinates of a vector x over the old and the new basis respectively, the change-of-basis formula is xi = ai,1y1 + ⋯ + ai,nyn, for i = 1, ..., n. Change of basis: This formula may be concisely written in matrix notation. Let A be the matrix of the ai,j, and let X and Y be the column vectors of the coordinates of x in the old and the new basis respectively; then the formula for changing coordinates is X = AY. The formula can be proven by considering the decomposition of the vector x on the two bases: one has x = x1v1 + ⋯ + xnvn and x = y1w1 + ⋯ + ynwn = y1(a1,1v1 + ⋯ + an,1vn) + ⋯ + yn(a1,nv1 + ⋯ + an,nvn). The change-of-basis formula results then from the uniqueness of the decomposition of a vector over a basis, here Bold; that is, xi = ai,1y1 + ⋯ + ai,nyn, for i = 1, ..., n. Related notions: Free module: If one replaces the field occurring in the definition of a vector space by a ring, one gets the definition of a module. For modules, linear independence and spanning sets are defined exactly as for vector spaces, although "generating set" is more commonly used than "spanning set". Related notions: Like for vector spaces, a basis of a module is a linearly independent subset that is also a generating set. A major difference with the theory of vector spaces is that not every module has a basis. A module that has a basis is called a free module. Free modules play a fundamental role in module theory, as they may be used for describing the structure of non-free modules through free resolutions. 
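The matrix form of the change of basis (old coordinates X obtained from new coordinates Y as X = AY, where column j of A holds the coordinates of wj over the old basis) can be checked with a small sketch; the names are ours:

```python
from fractions import Fraction

def old_from_new(A, y):
    """Change of basis in dimension n: A[i][j] is the i-th old-basis
    coordinate of the j-th new basis vector, so x = A y converts
    new-basis coordinates y into old-basis coordinates x."""
    n = len(A)
    return [sum(Fraction(A[i][j]) * y[j] for j in range(n)) for i in range(n)]
```

With old basis the standard basis of R2 and new basis (1, 1), (−1, 2), the matrix is A = [[1, −1], [1, 2]]; the new coordinates (3, 1) map back to the old coordinates (2, 5), agreeing with 3·(1, 1) + 1·(−1, 2) = (2, 5).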
Related notions: A module over the integers is exactly the same thing as an abelian group. Thus a free module over the integers is also a free abelian group. Free abelian groups have specific properties that are not shared by modules over other rings. Specifically, every subgroup of a free abelian group is a free abelian group, and, if G is a subgroup of a finitely generated free abelian group H (that is an abelian group that has a finite basis), then there is a basis e1,…,en of H and an integer 0 ≤ k ≤ n such that a1e1,…,akek is a basis of G, for some nonzero integers a1,…,ak . For details, see Free abelian group § Subgroups. Related notions: Analysis In the context of infinite-dimensional vector spaces over the real or complex numbers, the term Hamel basis (named after Georg Hamel) or algebraic basis can be used to refer to a basis as defined in this article. This is to make a distinction with other notions of "basis" that exist when infinite-dimensional vector spaces are endowed with extra structure. The most important alternatives are orthogonal bases on Hilbert spaces, Schauder bases, and Markushevich bases on normed linear spaces. In the case of the real numbers R viewed as a vector space over the field Q of rational numbers, Hamel bases are uncountable, and have specifically the cardinality of the continuum, which is the cardinal number 2ℵ0 , where ℵ0 is the smallest infinite cardinal, the cardinal of the integers. Related notions: The common feature of the other notions is that they permit the taking of infinite linear combinations of the basis vectors in order to generate the space. This, of course, requires that infinite sums are meaningfully defined on these spaces, as is the case for topological vector spaces – a large class of vector spaces including e.g. Hilbert spaces, Banach spaces, or Fréchet spaces. 
Related notions: The preference of other types of bases for infinite-dimensional spaces is justified by the fact that the Hamel basis becomes "too big" in Banach spaces: If X is an infinite-dimensional normed vector space which is complete (i.e. X is a Banach space), then any Hamel basis of X is necessarily uncountable. This is a consequence of the Baire category theorem. The completeness as well as infinite dimension are crucial assumptions in the previous claim. Indeed, finite-dimensional spaces have by definition finite bases, and there are infinite-dimensional (non-complete) normed spaces which have countable Hamel bases. Consider c00, the space of the sequences x = (xn) of real numbers which have only finitely many non-zero elements, with the norm ‖x‖ = supn |xn|. Its standard basis, consisting of the sequences having only one non-zero element, which is equal to 1, is a countable Hamel basis. Related notions: Example In the study of Fourier series, one learns that the functions {1} ∪ { sin(nx), cos(nx) : n = 1, 2, 3, ... } are an "orthogonal basis" of the (real or complex) vector space of all (real or complex valued) functions on the interval [0, 2π] that are square-integrable on this interval, i.e., functions f for which the integral of |f(x)|2 over [0, 2π] is finite. The functions {1} ∪ { sin(nx), cos(nx) : n = 1, 2, 3, ... } are linearly independent, and every function f that is square-integrable on [0, 2π] is an "infinite linear combination" of them, in the sense that the partial sums a0 + a1cos(x) + b1sin(x) + ⋯ + ancos(nx) + bnsin(nx) converge to f in the mean-square sense as n → ∞, for suitable (real or complex) coefficients ak, bk. But many square-integrable functions cannot be represented as finite linear combinations of these basis functions, which therefore do not comprise a Hamel basis. Every Hamel basis of this space is much bigger than this merely countably infinite set of functions. Hamel bases of spaces of this kind are typically not useful, whereas orthonormal bases of these spaces are essential in Fourier analysis. 
Related notions: Geometry The geometric notions of an affine space, projective space, convex set, and cone have related notions of basis. An affine basis for an n-dimensional affine space is n+1 points in general linear position. A projective basis is n+2 points in general position, in a projective space of dimension n. A convex basis of a polytope is the set of the vertices of its convex hull. A cone basis consists of one point per edge of a polygonal cone. See also Hilbert basis (linear programming). Related notions: Random basis For a probability distribution in Rn with a probability density function, such as the equidistribution in an n-dimensional ball with respect to Lebesgue measure, it can be shown that n randomly and independently chosen vectors will form a basis with probability one, which is due to the fact that n linearly dependent vectors x1, ..., xn in Rn should satisfy the equation det[x1 ⋯ xn] = 0 (zero determinant of the matrix with columns xi), and the set of zeros of a non-trivial polynomial has zero measure. This observation has led to techniques for approximating random bases. Related notions: It is difficult to check numerically the linear dependence or exact orthogonality. Therefore, the notion of ε-orthogonality is used. For spaces with inner product, x is ε-orthogonal to y if |⟨x,y⟩|/(‖x‖‖y‖) < ε (that is, the cosine of the angle between x and y is less than ε). Related notions: In high dimensions, two independent random vectors are, with high probability, almost orthogonal, and the number of independent random vectors that are all pairwise almost orthogonal (with given high probability) grows exponentially with dimension. More precisely, consider equidistribution in an n-dimensional ball. Choose N independent random vectors from the ball (they are independent and identically distributed). Let θ be a small positive number. Then, for N up to a bound that grows exponentially with n, the N random vectors are all pairwise ε-orthogonal with probability 1 − θ. 
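The almost-orthogonality of random high-dimensional vectors is easy to observe numerically. As a stand-in for equidistribution in the ball, the sketch below draws unit vectors from the rotation-invariant Gaussian distribution (direction uniform on the sphere) and measures the worst pairwise cosine; the helper name is ours:

```python
import math
import random

def max_abs_cosine(n, N, seed=0):
    """Sample N random unit vectors in R^n (Gaussian direction, normalized)
    and return the largest |cos angle| over all pairs; small values mean
    the vectors are pairwise almost orthogonal."""
    rng = random.Random(seed)
    vecs = []
    for _ in range(N):
        v = [rng.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(x * x for x in v))
        vecs.append([x / norm for x in v])
    worst = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            c = abs(sum(a * b for a, b in zip(vecs[i], vecs[j])))
            worst = max(worst, c)
    return worst
```

For n = 1000 the pairwise cosines concentrate around 1/√n ≈ 0.03, so twenty vectors are all nearly orthogonal; for n = 2, twenty directions on a circle necessarily include a pair within 18° of each other, so the worst cosine is close to 1.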
This N grows exponentially with the dimension n, and N ≫ n for sufficiently large n. This property of random bases is a manifestation of the so-called measure concentration phenomenon. The figure (right) illustrates the distribution of lengths N of pairwise almost orthogonal chains of vectors that are independently randomly sampled from the n-dimensional cube [−1, 1]n as a function of dimension, n. A point is first randomly selected in the cube. The second point is randomly chosen in the same cube. If the angle between the vectors was within π/2 ± 0.037π/2, then the vector was retained. At the next step a new vector is generated in the same hypercube, and its angles with the previously generated vectors are evaluated. If these angles are within π/2 ± 0.037π/2, then the vector is retained. The process is repeated until the chain of almost orthogonality breaks, and the number of such pairwise almost orthogonal vectors (the length of the chain) is recorded. For each n, 20 pairwise almost orthogonal chains were constructed numerically. The distribution of the length of these chains is presented. Proof that every vector space has a basis: Let V be any vector space over some field F. Let X be the set of all linearly independent subsets of V. The set X is nonempty since the empty set is an independent subset of V, and it is partially ordered by inclusion, which is denoted, as usual, by ⊆. Let Y be a subset of X that is totally ordered by ⊆, and let LY be the union of all the elements of Y (which are themselves certain subsets of V). Proof that every vector space has a basis: Since (Y, ⊆) is totally ordered, every finite subset of LY is a subset of an element of Y, which is a linearly independent subset of V, and hence LY is linearly independent. Thus LY is an element of X. Therefore, LY is an upper bound for Y in (X, ⊆): it is an element of X that contains every element of Y. 
Proof that every vector space has a basis: As X is nonempty, and every totally ordered subset of (X, ⊆) has an upper bound in X, Zorn's lemma asserts that X has a maximal element. In other words, there exists some element Lmax of X satisfying the condition that whenever Lmax ⊆ L for some element L of X, then L = Lmax. It remains to prove that Lmax is a basis of V. Since Lmax belongs to X, we already know that Lmax is a linearly independent subset of V. Proof that every vector space has a basis: If there were some vector w of V that is not in the span of Lmax, then w would not be an element of Lmax either. Let Lw = Lmax ∪ {w}. This set is an element of X, that is, it is a linearly independent subset of V (because w is not in the span of Lmax, and Lmax is independent). As Lmax ⊆ Lw, and Lmax ≠ Lw (because Lw contains the vector w that is not contained in Lmax), this contradicts the maximality of Lmax. Thus this shows that Lmax spans V. Proof that every vector space has a basis: Hence Lmax is linearly independent and spans V. It is thus a basis of V, and this proves that every vector space has a basis. This proof relies on Zorn's lemma, which is equivalent to the axiom of choice. Conversely, it has been proved that if every vector space has a basis, then the axiom of choice is true. Thus the two assertions are equivalent.
**ORVYL and WYLBUR** ORVYL and WYLBUR: ORVYL is a time-sharing monitor developed by Stanford University for IBM System/360 and System/370 computers in 1967–68. ORVYL was one of the first time-sharing systems to be made available for IBM computers. Wylbur is a text editor and word processor program designed to work either without ORVYL or in conjunction with ORVYL. Overview: The names ORVYL and WYLBUR are often used interchangeably, but: ORVYL is a timesharing monitor that supports a file system, command language, program execution and debugging, and provides supervisor services. The first version runs only on a 360/67, but later versions run on a System/370. WYLBUR is a text editor, word processor, job submission and retrieval, and e-mail program designed to work in conjunction with ORVYL or with IBM's OS/360, SVS, and MVS operating systems. Overview: MILTEN is terminal control software used by both ORVYL and WYLBUR for start/stop terminals. WYLBUR is not a full standalone operating system in the mold of the Dartmouth Time Sharing System (DTSS) or Unix. Instead it runs on top of an IBM batch operating system (OS/360, SVS, MVS). It takes the form of an editor with a Remote Job Entry system and thus has much the same relationship to the IBM operating systems as Emacs does to Unix. For these reasons WYLBUR is often thought of as a text editor rather than a time-sharing system. However, whereas Unix does not need Emacs to provide text editing services, IBM's operating systems originally needed WYLBUR. Later innovations such as IBM's Administrative Terminal System (ATS), Conversational Remote Batch Entry (CRBE), Conversational Remote Job Entry (CRJE) and Time Sharing Option (TSO) made WYLBUR less relevant for IBM users and gradually replaced it. Overview: This article will use the full upper case spelling for commands and keywords. All references to characters and strings assume an EBCDIC code page. 
Use: ORVYL and WYLBUR were used at the Stanford Linear Accelerator Center (SLAC), the European Organization for Nuclear Research (CERN), the U.S. National Institutes of Health (NIH), and many other sites. Retired from most sites in the late 1990s owing to concerns about Y2K issues, they remained in use at NIH until December 2009. ORVYL and WYLBUR are still available as open source from Stanford. There are also proprietary versions such as SuperWylbur. Use: ORVYL and WYLBUR were much admired, as shown by this excerpt from a 2004 article titled "Computing at CERN: the mainframe era": [In 1976 the IBM S/370-168] also brought with it the MVS (Multiple Virtual Storage) operating system, with its pedantic Job Control Language, and it provided the opportunity for CERN to introduce WYLBUR, the well-loved, cleverly designed and friendly time-sharing system developed at SLAC, together with its beautifully handwritten and illustrated manual by John Ehrman. WYLBUR was a masterpiece of design, achieving miracles with little power (at the time) shared amongst many simultaneous users. It won friends with its accommodating character and began the exit of punch-card machinery as computer terminals were introduced across the lab. Advantages and disadvantages: ORVYL and WYLBUR first became available in 1967–68, before TSS/360, TSO, or any other official time-sharing solution from IBM. This was roughly the same time that third-party time-sharing systems such as MTS became available and the under-the-radar development effort of CP-67 at IBM's own Cambridge Scientific Center took place. WYLBUR had the additional advantage that it could be used in conjunction with IBM's mainstream operating system, OS/360. Advantages and disadvantages: WYLBUR is a single-address-space system, unlike TSO. This conserved memory in the days when memory was precious. 
So even when TSO was available, organizations seeking to minimize memory use would often keep some or even a majority of their users on WYLBUR rather than letting them use the TSO interactive environment. Advantages and disadvantages: WYLBUR provides compressed Partitioned Data Sets (PDSs, aka libraries) to save disk space. In MVS, source code is typically stored as a sequence of card images (80 character lines). If a line contained only one or just a few characters, 80 characters were still used to store that line. Even when data, e.g., source code, are stored as variable blocked (VB), space could be wasted on strings of embedded blanks. WYLBUR implements stream-oriented storage of text in PDSs, (and sequential data sets) so that a one character line might only take 16 characters (line length, offset, chunk length, character) rather than 80 to store. WYLBUR, or an external program run via JCL, was used to convert files to and from the WYLBUR EDIT format. Advantages and disadvantages: Although TSO allows a user to do more than a locked-down WYLBUR system did, it is possible to write WYLBUR Exec scripts that execute batch jobs to perform functions that ordinarily would have required a TSO account, filling a batch job skeleton out with parameters, submitting the batch job, retrieving the output and displaying it on the screen. Advantages and disadvantages: WYLBUR has some security advantages over TSO, and some disadvantages. Advantages include: Being able to write rules to restrict user access to datasets other than those owned by them and stored under their prefix. This is analogous to a user's home directory on UNIX, and looks something like WYL.AV99.HCO, where AV99 is roughly analogous to the "group" and HCO the "user" within the group. Advantages and disadvantages: Being fairer about resource use. 
WYLBUR doesn't implement commands such as TSO's alloc which can intentionally or unintentionally prevent others' access to data files for an extended period of time or use tremendous amounts of memory or CPU time. In this way, it minimizes the impact of any single user on all other users. Advantages and disadvantages: Commands to set certain status parameters or "spy" on the commands being executed by other users were restricted to administrative users and could not be executed by regular users.Disadvantages related to security included: WYLBUR is a single-address-space system. That means that if a user can figure out how to access raw bytes in the address space, they can potentially access information they do not own. For example, there once existed a program written by two college students in the WYLBUR Exec scripting language which could dig the password of the most recently logged on user out of WYLBUR's memory. Advantages and disadvantages: Because the WYLBUR process runs under the system account assigned to WYLBUR, one is completely dependent on its enforcement of dataset access protections according to the rules set up in WYLBUR. Enforcement of the access rules could be completely disabled by an administrative user, for system maintenance purposes, who might not remember to re-enable them. Advantages and disadvantages: WYLBUR implements disk quotas, with an interesting twist: any system user could give away all or part of their quota to other users. This functionality could be combined with typical course-related student accounts that went away at the end of every semester, and computer-savvy student staff who had non-expiring accounts with low disk quotas, in a manner not always anticipated by university staff. 
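The space saving from WYLBUR's stream-oriented storage described earlier (storing a line's length and text instead of a fixed 80-column card image) can be sketched in a few lines; the (length, text) encoding below is purely illustrative and is not WYLBUR's actual EDIT record layout:

```python
def compress_card(line):
    """Illustrative only (not WYLBUR's on-disk format): represent an
    80-column card image as (length, text) with trailing blanks stripped,
    so a one-character line costs far fewer than 80 bytes."""
    body = line.rstrip(" ")
    return (len(body), body)

def expand_card(record, width=80):
    """Reconstruct the fixed-width card image from the compact record."""
    length, body = record
    return body.ljust(width, " ")
```

A card holding the single character X compresses from 80 stored characters to a short record, and expanding pads it back out to the full card width, which is the round trip a converter between EDIT format and standard card-image datasets would perform.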
Advantages and disadvantages: In systems running the ACF2 security package, a user with accounts in both TSO and WYLBUR that are tied to the same account name could reset the contents of their WYLBUR account's security record interactively from within TSO. This could be used to turn a regular WYLBUR user into an administrative WYLBUR user, increase its disk quota, etc. At least through the 1960s, the WYLBUR security rules were not enforced for batch jobs running on the same system. So, utilities such as IEHLIST and IEBGENER could be used to discover, read, and modify files belonging to other WYLBUR users unless those files were password protected, which was operationally awkward. Data Management: Wylbur had a special edit format for Wylbur data sets, which are compressed and have a line number and revision flag for each line. In addition, Wylbur supports standard FB and VB datasets. A Wylbur user normally specifies a default volume, which may be the special word CATLG, requesting a search of the catalogue for an existing dataset and requesting that a new dataset be cataloged. Wylbur has the ability to convert line numbers between edit and IBM data sets, either as scaled integers or with an explicit decimal point. Editing: Wylbur provides a line editor that works with temporary data sets, similar to buffers in other editors. At any point in time one of the temporary data sets is designated as default. Wylbur maintains a current line pointer for each temporary data set. The user may specify an explicit working data set on a command; if he omits it, then the default temporary data set is used as the working data set. Editing: The unit of operation is a set of lines (associative range) and individual lines are identified with a line number in the range 0.0 to 99999.999; leading zeros in the integer part and trailing zeros in the fractional part may be omitted. 
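The fractional line numbering just described (lines keyed in 0.0 to 99999.999, so a new line can go between two existing lines without renumbering) can be modeled with a small sketch; the class and method names are ours, not WYLBUR commands:

```python
from decimal import Decimal

class LineEditor:
    """Toy model of WYLBUR-style line addressing: lines keyed by decimal
    numbers, so a line can be inserted between lines 1 and 2 as 1.5."""

    def __init__(self):
        self.lines = {}        # Decimal line number -> text
        self.current = None    # current line pointer

    def put(self, number, text):
        n = Decimal(str(number))
        if not Decimal("0") <= n <= Decimal("99999.999"):
            raise ValueError("line number out of range")
        self.lines[n] = text
        self.current = n       # the touched line becomes the current line

    def listing(self):
        return [self.lines[n] for n in sorted(self.lines)]
```

Exact decimal keys matter here: binary floats would make line numbers like 0.1 compare unreliably, whereas Decimal reproduces the ddddd.ddd addressing faithfully.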
The user can specify a line number in any of the following ways:
Absolute line number: ddddd.ddd
FIRST: first line in the working data set
CURRENT: the current line for the working data set
LAST: last line in the working data set
END: a target for copies, following the last line in the working data set
Relative: line+ordinal or line-ordinal
Macro variable containing a line
Editing: Wylbur libraries have a nonstandard format; however, Wylbur allows the user to export to native OS files with integer sequence numbers, and to import native OS files with integer sequence numbers, rescaling the line numbers by a factor of 1000. Editing: A range can be specified as a combination of:
An explicit range, e.g., 5.3-2/7.4+3
A pattern, e.g., 'X' DIGITS*3
A pattern with a column range, e.g., 'BAL' 10/15
An ordinal, e.g., 3RD 'BAL' 10/15
An ordinal relative to a match, e.g., 3RD AFTER 'BAL' 10/15
A sequence of lines following a match, e.g., EVERY 3RD AFTER 'BAL' 10/15
A Boolean operation, e.g., 'PAGE' INTERSECTION COMPLEMENT 'NUMBER'
A specification in parentheses prefixed by SET, e.g., 'PAGE' INTERSECTION SET (COMPLEMENT 'NUMBER')
A pattern is similar to a regular expression, but the syntax is closer to that of SNOBOL than to that of Unix or Perl; there is no backtracking, and only the NIH Wylbur has capture of subpatterns. A pattern may be: Sample commands: The specification of base+increment means that the replacement text on the first line is base and is incremented on subsequent lines, so that X10+10 replaces the matched text on the first line with X10, on the second line with X20, and on the third line with X30. The specification of SUBSTRING 2/4 means columns 2-4 of the matched string; note that this is less flexible than captures. Enhanced versions: Various organizations developed enhanced versions of Wylbur. These included the National Institutes of Health; Online Business Systems, which was acquired by ACS (Affiliated Computer Services Inc); 
Optimum Systems Inc., sold to Electronic Data Systems and later spun off as SuperWylbur® Systems, Inc.; and the RAND Corporation. RAND's WYLBUR Command Facilities manual (1975) describes two related facilities that extend WYLBUR in the direction of a programming language that supports structured text entry and text manipulation applications. The first facility is an extension to WYLBUR's command vocabulary that gives WYLBUR many of the capabilities associated with traditional programming languages; the extension was written by Paul Andersen. The second facility is a batch preprocessor that permits the WYLBUR programmer to develop WYLBUR command programs in a language similar to PL/I. The preprocessor was developed by David J. Smith. Other manuals include the WYLBUR Learner's Guide and the WYLBUR Reference Manual. SuperWylbur: SuperWylbur has several enhancements over the original Wylbur. The most important are:
Supporting 3270, 3767 and NTO terminals via VTAM; as with other proprietary Wylbur versions, SuperWylbur does not use MILTEN for VTAM terminals.
An enhanced macro facility.
Enhanced versions: Supporting user-written full-screen panels.
SuperWylbur macro facility: The macro processor adds commands, constants, functions and expressions to Wylbur. Even when the command syntax does not include parameters defined to be expressions, the user can use the forms %(expression) and %%(expression) to force evaluation. If the expression is a function with no argument or a variable, then the parentheses may be omitted, e.g., %.TIME instead of %(.TIME). A doubled % requests that the value be quoted. Enhanced versions: SuperWylbur constants: SuperWylbur has two types of constants:
Numeric constants: [sign]digits[E[sign]exponent] or #hexdigits
String constants: 'characters' or "characters"
SuperWylbur operators: SuperWylbur operators whose names contain only special characters need not be separated by spaces. Operators whose names contain a period and letters must be separated by spaces. 
SuperWylbur has the following types of operators:
- Arithmetic operators
- Relational operators
- Logical operators: & (.AND) And, | (.OR) Or, ¬ (.NOT) Not

SuperWylbur macro functions: The name of a macro function begins with a period. If there are arguments, a colon separates them from the name, and a semicolon separates successive arguments. SuperWylbur provides the following types of macro functions:
- Arithmetic functions
- Accounting functions
- Date and time conversions (parameters default to the current date or time)
- Environmental queries (most of these return values from SET commands)
- Full-screen functions
- JES functions
- Macro functions
- String functions
- Working file functions (arguments default to the current default working data set)

SuperWylbur macro pseudofunctions: Pseudofunctions are like functions, except that they appear on the left-hand side of an assignment, e.g., LET .foo:bar=baz. They include:
- .COLUMNS:(v;i1[;i2]) replaces columns i1 through i2 of v
- .CURSOR sets the panel variable on which to place the cursor
- .FILE:i associates a working data set with channel i
- .LINE:(n[;[s][;i]]) replaces or inserts line n in working data set s if i is omitted or zero; otherwise replaces the line i lines after (before, if negative) line n in working data set s
- .NEXT:i sets the current line pointer for channel i
- .OUTPUT:i adds or replaces the current line of channel i and advances the current line pointer
- .SUBSTRING:(v;i1[;i2]) replaces column i1 of v for i2 columns
- .UPDATE:i replaces the last line read from channel i

SuperWylbur macro statements
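The base+increment replacement described under "Sample commands" can be sketched in Python. This is only an illustrative emulation under assumed semantics: the helper name is invented, a Python regex stands in for a Wylbur pattern, and every match on a given line is assumed to receive that line's value.

```python
import re

def change_with_increment(lines, pattern, base, increment):
    """Emulate Wylbur's CHANGE ... TO base+increment (a sketch, not the real thing).

    The replacement on the first matching line is `base`; its trailing number
    is incremented by `increment` on each subsequent matching line,
    so "X10"+10 yields X10, X20, X30, ...
    """
    # Split base into a leading text part and a trailing number: "X10" -> ("X", 10)
    m = re.fullmatch(r"(\D*)(\d+)", base)
    prefix, value = m.group(1), int(m.group(2))
    out = []
    for line in lines:
        if re.search(pattern, line):
            line = re.sub(pattern, f"{prefix}{value}", line)
            value += increment
        out.append(line)
    return out
```

Lines without a match are passed through unchanged and do not consume a value.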
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ISFiC Press** ISFiC Press: ISFiC Press is the small press publishing arm of ISFiC. It often produces books by the Author Guest of Honor at Windycon, an annual Chicago science fiction convention, launching the appropriate title at the convention. Although the press officially released its first book, Robert J. Sawyer's Relativity, on November 12, 2004, the people responsible for the press issued a filk CD two years earlier, entitled A Walk on the Windy Side. That CD is considered to be the press's first project. A Walk on the Windy Side includes songs by Charles de Lint and Juanita Coulson as well as readings by Frederik Pohl and Kristine Smith. ISFiC Press: In addition to stories and essays by Sawyer, Relativity includes an introduction by Mike Resnick, an afterword by Valerie Broege and a cover by Jael. Relativity won the Prix Aurora Award for best English Work (Other) for 2004. ISFiC Press released its first two novels, Every Inch a King, by Harry Turtledove with a cover by Bob Eggleton, and The Cunning Blood, by Jeff Duntemann with a cover by Todd Cameron Hamilton, on November 11, 2005. ISFiC Press: In 2006, ISFiC Press published its first non-fiction book, Worldcon Guest of Honor Speeches, edited by Mike Resnick and Joe Siclari, which was nominated for a Hugo Award for Best Related Book. In November of that year it published Outbound, a collection of short stories by Jack McDevitt. ISFiC Press: In August 2012 ISFiC Press issued its first electronic book, Win Some, Lose Some: The Hugo Award Winning (and Nominated) Short Science Fiction and Fantasy of Mike Resnick (by Mike Resnick; cover by Vincent Di Fate), as well as the hardcover edition of the same title. The e-book is offered in EPUB and MOBI formats. The publication of this book coincided with Chicon 7, the 70th World Science Fiction Convention, which was held in ISFiC Press's hometown of Chicago.
ISFiC Press: The publisher and editor of ISFiC Press from its inception until 2012 was Steven H Silver; the business manager is Bill Roper. ISFiC Press publications:
- Relativity: Stories and Essays, by Robert J. Sawyer (2004)
- Every Inch a King, by Harry Turtledove (2005)
- The Cunning Blood, by Jeff Duntemann (2005)
- Worldcon Guest of Honor Speeches, edited by Mike Resnick and Joe Siclari (2006)
- Outbound, by Jack McDevitt (2006)
- Finding Magic, by Tanya Huff (2007)
- When Diplomacy Fails, edited by Eric Flint and Mike Resnick (2008)
- The Shadow on the Doorstep, by James P. Blaylock (2009)
- Assassin and Other Stories, by Steven Barnes (2010)
- Aurora in Four Voices, by Catherine Asaro (2011)
- Win Some, Lose Some: the Hugo Award Winning (and Nominated) Short Science Fiction and Fantasy of Mike Resnick, by Mike Resnick (2012)
- Velveteen vs. the Junior Super Patriots, by Seanan McGuire (2012)
- The Goblin Master's Grimoire, by Jim C. Hines (2013)
- Velveteen vs. the Multiverse, by Seanan McGuire (2013)
- Harvest Season: an Anthology, by the SF Squeecast (Catherynne M. Valente, Michael Damian Thomas, Seanan McGuire, and Elizabeth Bear) (2014)
- Bimbo on the Cover, by Maya Kaathryn Bohnhoff (2015)
- Velveteen vs. the Seasons, by Seanan McGuire (2016)
**An Exceptionally Simple Theory of Everything** An Exceptionally Simple Theory of Everything: "An Exceptionally Simple Theory of Everything" is a physics preprint proposing a basis for a unified field theory, often referred to as "E8 Theory", which attempts to describe all known fundamental interactions in physics and to stand as a possible theory of everything. The paper was posted to the physics arXiv by Antony Garrett Lisi on November 6, 2007, and was not submitted to a peer-reviewed scientific journal. The title is a pun on the algebra used, the Lie algebra of the largest "simple", "exceptional" Lie group, E8. The paper's goal is to describe how the combined structure and dynamics of all gravitational and Standard Model particle fields are part of the E8 Lie algebra. The theory is presented as an extension of the grand unified theory program, incorporating gravity and fermions. The theory received a flurry of media coverage, but was also met with widespread skepticism. Scientific American reported in March 2008 that the theory was being "largely but not entirely ignored" by the mainstream physics community, with a few physicists picking up the work to develop it further. In July 2009, Jacques Distler and Skip Garibaldi published a critical paper in Communications in Mathematical Physics called "There is no 'Theory of Everything' inside E8", arguing that Lisi's theory, and a large class of related models, cannot work. Distler and Garibaldi offer a direct proof that it is impossible to embed all three generations of fermions in E8, or to obtain even one generation of the Standard Model without the presence of additional particles that do not exist in the physical world. Overview: The goal of E8 Theory is to describe all elementary particles and their interactions, including gravitation, as quantum excitations of a single Lie group geometry—specifically, excitations of the noncompact quaternionic real form of the largest simple exceptional Lie group, E8.
A Lie group, such as a one-dimensional circle, may be understood as a smooth manifold with a fixed, highly symmetric geometry. Larger Lie groups, as higher-dimensional manifolds, may be imagined as smooth surfaces composed of many circles (and hyperbolas) twisting around one another. At each point in an N-dimensional Lie group there can be N different orthogonal circles, tangent to N different orthogonal directions in the Lie group, spanning the N-dimensional Lie algebra of the Lie group. For a Lie group of rank R, one can choose at most R orthogonal circles that do not twist around each other, and so form a maximal torus within the Lie group, corresponding to a collection of R mutually commuting Lie algebra generators, spanning a Cartan subalgebra. Each elementary particle state can be thought of as a different orthogonal direction, having an integral number of twists around each of the R directions of a chosen maximal torus. These R twist numbers (each multiplied by a scaling factor) are the R different kinds of elementary charge that each particle has. Mathematically, these charges are eigenvalues of the Cartan subalgebra generators, and are called roots or weights of a representation. Overview: In the Standard Model of particle physics, each different kind of elementary particle has four different charges, corresponding to twists along directions of a four-dimensional maximal torus in the twelve-dimensional Standard Model Lie group, SU(3)×SU(2)×U(1). In grand unified theories (GUTs), the Standard Model Lie group is considered as a subgroup of a higher-dimensional Lie group, such as the 24-dimensional SU(5) in the Georgi–Glashow model or the 45-dimensional Spin(10) in the SO(10) model. Since there is a different elementary particle for each dimension of the Lie group, these theories contain additional particles beyond the content of the Standard Model.
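The idea that charges are eigenvalues of Cartan subalgebra generators can be illustrated with a toy example far smaller than E8: for SU(2), the Cartan subalgebra is spanned by a single generator, and the weights are its eigenvalues. The snippet below is purely didactic and is not part of Lisi's construction:

```python
import numpy as np

# Toy example: the Cartan subalgebra of su(2) is spanned by sigma_z / 2.
# Particle states correspond to eigenvectors of this generator; their
# "charges" (weights) are the eigenvalues.
sigma_z = np.array([[1.0, 0.0],
                    [0.0, -1.0]])
cartan_generator = sigma_z / 2.0

# np.linalg.eigh returns eigenvalues in ascending order: -1/2 and +1/2,
# the two weights of the spin-1/2 doublet.
charges, states = np.linalg.eigh(cartan_generator)
```

For a rank-R group one would diagonalize R commuting generators simultaneously, obtaining an R-tuple of charges per state.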
Overview: In E8 Theory's current state, it is not possible to calculate masses for the existing or predicted particles. Lisi states the theory is young and incomplete, requiring a better understanding of the three fermion generations and their masses, and places low confidence in its predictions. However, the discovery of new particles that do not fit in Lisi's classification, such as superpartners or new fermions, would fall outside the model and falsify the theory. As of 2021, none of the particles predicted by any version of E8 Theory have been detected. History: Before writing his 2007 paper, Lisi discussed his work on a Foundational Questions Institute (FQXi) forum, at an FQXi conference, and for an FQXi article. Lisi gave his first talk on E8 Theory at the Loops '07 conference in Morelia, Mexico, soon followed by a talk at the Perimeter Institute. John Baez commented on Lisi's work in his column This Week's Finds in Mathematical Physics, finding the idea intriguing but ending on the cautionary note that it might not be "mathematically natural to use this method to combine bosons and fermions". Lisi's arXiv preprint, "An Exceptionally Simple Theory of Everything", appeared on November 6, 2007, and immediately attracted attention. Lisi made a further presentation for the International Loop Quantum Gravity Seminar on November 13, 2007, and responded to press inquiries on an FQXi forum. He presented his work at the TED Conference on February 28, 2008. Numerous news sites reported on the new theory in 2007 and 2008, noting Lisi's personal history and the controversy in the physics community. The first mainstream and scientific press coverage began with articles in The Daily Telegraph and New Scientist, with articles soon following in many other newspapers and magazines. History: Lisi's paper spawned a variety of reactions and debates across various physics blogs and online discussion groups.
The first to comment was Sabine Hossenfelder, summarizing the paper and noting the lack of a dynamical symmetry-breaking mechanism. Peter Woit commented, "I'm glad to see someone pursuing these ideas, even if they haven't come up with solutions to the underlying problems". The group blog The n-Category Café hosted some of the more technical discussions. Mathematician Bertram Kostant discussed the background of Lisi's work in a colloquium presentation at UC Riverside. On his blog, Musings, Jacques Distler offered one of the strongest criticisms of Lisi's approach, claiming to demonstrate that, unlike in the Standard Model, Lisi's model is nonchiral — consisting of a generation and an anti-generation — and to prove that any alternative embedding in E8 must be similarly nonchiral. These arguments were distilled in a paper written jointly with Skip Garibaldi, "There is no 'Theory of Everything' inside E8", published in Communications in Mathematical Physics. In this paper, Distler and Garibaldi offer a proof that it is impossible to embed all three generations of fermions in E8, or to obtain even the one-generation Standard Model. In response, Lisi argued that Distler and Garibaldi made unnecessary assumptions about how the embedding needs to happen. Addressing the one-generation case, in June 2010 Lisi posted a new paper on E8 Theory, "An Explicit Embedding of Gravity and the Standard Model in E8", eventually published in a conference proceedings, describing how the algebra of gravity and the Standard Model with one generation of fermions embeds in the E8 Lie algebra explicitly using matrix representations.
When this embedding is done, Lisi agrees that there is an antigeneration of fermions (also known as "mirror fermions") remaining in E8; but while Distler and Garibaldi state that these mirror fermions make the theory nonchiral, Lisi states that these mirror fermions might have high masses, making the theory chiral, or that they might be related to the other generations. "The explanation for the existence of three generations of fermions, all with the same apparent algebraic structure, remains largely a mystery," Lisi wrote. Some follow-ups to Lisi's original preprint have been published in peer-reviewed journals. Lee Smolin's "The Plebanski action extended to a unification of gravity and Yang–Mills theory" proposes a symmetry-breaking mechanism to go from an E8 symmetric action to Lisi's action for the Standard Model and gravity. Roberto Percacci's "Mixing internal and spacetime transformations: some examples and counterexamples" addresses a general loophole in the Coleman–Mandula theorem also thought to work in E8 Theory. Percacci and Fabrizio Nesti's "Chirality in unified theories of gravity" confirms the embedding of the algebra of gravitational and Standard Model forces acting on a generation of fermions in spin(3,11) + 64+, mentioning that Lisi's "ambitious attempt to unify all known fields into a single representation of E8 stumbled into chirality issues". In a joint paper with Lee Smolin and Simone Speziale, published in Journal of Physics A, Lisi proposed a new action and symmetry-breaking mechanism. History: In 2008, FQXi awarded Lisi a grant for further development of E8 Theory. In September 2010, Scientific American reported on a conference inspired by Lisi's work. Shortly thereafter, they published a feature article on E8 Theory, "A Geometric Theory of Everything", written by Lisi and James Owen Weatherall.
In December 2011, in a paper for a special issue of the journal Foundations of Physics, Michael Duff argued against Lisi's theory and the attention it has received in the popular press. Duff states that Lisi's paper was incorrect, citing Distler and Garibaldi's proof, and criticizes the press for giving Lisi uncritical attention simply because of his "outsider" image.
**Kilopondmetre** Kilopondmetre: The kilopondmetre is an obsolete unit of torque and energy in the gravitational metric system. It is abbreviated kp·m or m·kp; older publications often use m·kg and kg·m as well. Torque is the product of the length of a lever arm and the force applied to it. One kilopond is the force exerted on a mass of one kilogram by standard gravitational acceleration; this force is exactly 9.80665 N. This means 1 kp·m = 9.80665 kg·m²/s² = 9.80665 N·m.
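The conversion above is a fixed multiplication by the standard-gravity constant; a minimal sketch (function names are illustrative):

```python
# 1 kp·m = 9.80665 N·m exactly, by the definition of standard gravity.
KPM_TO_NM = 9.80665

def kpm_to_nm(torque_kpm: float) -> float:
    """Convert kilopond-metres to newton-metres (SI)."""
    return torque_kpm * KPM_TO_NM

def nm_to_kpm(torque_nm: float) -> float:
    """Convert newton-metres to kilopond-metres."""
    return torque_nm / KPM_TO_NM

torque_nm = kpm_to_nm(10.0)  # ≈ 98.0665 N·m
```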
**Materials (journal)** Materials (journal): Materials is a semi-monthly peer-reviewed open access scientific journal covering materials science and engineering. It was established in 2008 and is published by MDPI. The editor-in-chief is Maryam Tabrizian (McGill University). The journal publishes reviews, regular research papers, short communications, and book reviews. There are currently hundreds of calls for submissions to special issues, a fact that has led to serious concerns. Abstracting and indexing: The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2021 impact factor of 3.748.
**Europium(II) oxide** Europium(II) oxide: Europium(II) oxide (EuO) is a chemical compound, one of the oxides of europium. In addition to europium(II) oxide, there are also europium(III) oxide and the mixed-valence europium(II,III) oxide. Preparation: Europium(II) oxide can be prepared by the reduction of europium(III) oxide with elemental europium at 800 °C and subsequent vacuum distillation at 1150 °C:

Eu2O3 + Eu → 3 EuO

It is also possible to synthesize it from the reaction of europium oxychloride and lithium hydride:

2 EuOCl + 2 LiH → 2 EuO + 2 LiCl + H2

In modern research, thin films can be manufactured by molecular beam epitaxy directly from europium atoms and oxygen molecules. These films have Eu3+ contamination of less than 1%. Properties: Europium(II) oxide is a violet compound as a bulk crystal and transparent blue in thin-film form. It is unstable in a humid atmosphere, slowly turning into the yellow europium(II) hydroxide hydrate and then into white europium(III) hydroxide. EuO crystallizes in a cubic sodium chloride structure with a lattice parameter a = 0.5144 nm. The compound is often non-stoichiometric, containing up to 4% Eu3+ and small amounts of elemental europium. However, since 2008 high-purity crystalline EuO films can be created under ultra-high-vacuum conditions. These films have a crystallite size of about 4 nm. Europium(II) oxide is ferromagnetic with a Curie temperature of 69.3 K. With the addition of about 5-7% elemental europium, this increases to 79 K. It also displays colossal magnetoresistance, with a dramatic increase in conductivity below the Curie temperature. Another way to increase the Curie temperature is doping with gadolinium, holmium, or lanthanum. Europium(II) oxide is a semiconductor with a band gap of 1.12 eV. Applications: Because of these properties, thin layers of europium(II) oxide deposited on silicon are being studied for use as spin filters.
Spin filter materials only allow electrons of a certain spin to pass, blocking electrons of the opposite spin.
**Gynecologic Oncology (journal)** Gynecologic Oncology (journal): Gynecologic Oncology is a peer-reviewed medical journal covering all aspects of gynecologic oncology. The journal covers investigations relating to the etiology, diagnosis, and treatment of female cancers, as well as research from any of the disciplines related to this field of interest. It is published by Elsevier and is the official journal of the Society of Gynecologic Oncology. Abstracting and indexing: The journal is abstracted and indexed in Current Contents/Clinical Medicine, Index Medicus, Science Citation Index, and Scopus.
**Spectacles (product)** Spectacles (product): Spectacles are smartglasses dedicated to recording video for the Snapchat service. (The word "spectacles" is also a common term for ordinary sunglasses and eyeglasses.) They feature a camera lens and are capable of recording short video segments and syncing with a smartphone to upload to the user's online account. They were developed and manufactured by Snap Inc., and announced on September 23, 2016. The smartglasses were released on November 10, 2016. They are made for Snap's image messaging and multimedia platform, Snapchat, and were initially distributed exclusively through Snap's pop-up vending machine, Snapbot. On February 20, 2017, Snap Spectacles became available for purchase online. On April 26, 2018, a second generation of Spectacles launched in 17 countries. This version included both software and hardware updates, including water resistance and increased storage. On September 5, 2018, two improved second-generation Spectacles were released. The two new versions, dubbed Nico and Veronica, included major design changes that reflect more typical sunglasses styles. Spectacles (product): On August 13, 2019, Snap announced its Spectacles 3, which featured a new minimalistic frame and two cameras to replicate stereoscopic vision. The Spectacles shipped in November 2019 for $380. History: In December 2014 Snap Inc., then Snapchat Inc., acquired Vergence Labs, the developers of the Epiphany Eyewear smartglasses. Vergence Labs was founded by entrepreneur Erick Miller in 2011, before Google Glass was announced. Miller worked on the idea as a graduate student at UCLA and poured his life savings into building the product.
Snapchat was impressed with the Epiphany Eyewear product and the team assembled by Miller, and acquired Vergence to develop a similar eyewear product. Epiphany Eyewear, which recorded wide-angle point-of-view videos, had been positioned as Vergence's first step toward eventually building full-featured augmented reality glasses which, according to Miller, would someday "give people what would previously be called superpowers". However, due to Vergence's small engineering team (consisting of founder and CEO Erick Miller, co-founder Jon Rodriguez, software engineer Peter Brook, and designer/mechanical engineer David Meisenholder), the company had to scale back its ambitions in order to ship its simpler first product, Epiphany Eyewear, which the team was nonetheless able to ship despite extremely limited funding and staff. The successful development and launch of the product led to the company being noticed by Snapchat, which quietly acquired it, bringing the team in-house to develop a similar but much more refined eyewear product for Snapchat. History: In October 2015, a leaked online video showed an early version of the new glasses, dubbed "Spectacles". In mid-2016, news outlets reported that Snapchat was hiring engineers from Microsoft, Nokia and Qualcomm. Reporters speculated that the hires were to build the new glasses. The new product was unveiled on September 24, 2016, and released on November 10, 2016. The glasses were sold through Snapbot, a proprietary vending machine for the smartglasses, which was located near Snap's headquarters in Venice, Los Angeles. In May 2017, a Snapchat patent became public which included an illustration of a hypothetical future version of Spectacles with augmented reality capabilities. In late 2017, Snapchat wrote off $40m worth of unsold Spectacles inventory and unused parts. As of May 2018, the company had sold 220,000 pairs, which was less than initially expected.
In April 2018, the company launched Spectacles 2.0, which included additional colors, lighter frames, the option of mirrored lenses, and the removal of the bright yellow ring around the camera window. In June 2018, Snap released an update for Spectacles allowing users to export videos from the glasses in square or widescreen format. In November 2018, it was reported that the company would release a new version of Spectacles by the end of 2018 that included two cameras. The Snap Spectacles 3, which did feature two HD cameras on-device, were ultimately announced in August 2019. In May 2021, Snap announced its first AR-based product, Spectacles 4. The AR effects are officially referred to as Lenses and use a dual 3D waveguide display with a 26.3-degree diagonal field of view. It runs on the Qualcomm Snapdragon XR1 chip and has 2 RGB cameras, 4 microphones, and 2 stereo speakers. Snap claimed to have more than 250,000 Lens creators who created 2.5 million Lenses altogether. AR experiences available on the glasses as of December 2021 included "a zombie chase, a pong game, Solar System projection, and an interactive art piece." Additionally, according to The Verge, "Another new software update brings Connected Lenses to Spectacles, letting multiple pairs interact with the same Lens when sharing a Wi-Fi network." Design: Hardware The original version of the glasses included a camera lens with a 115° field of view (110° on V2) and records in a circular format that adapts to a smartphone's screen size and orientation. The smartglasses record when the user presses a button on the top left of the frame, for a maximum of 30 seconds (in 10-second intervals). They sync with their designated smartphone via Bluetooth and Wi-Fi. The camera also houses a ring of LED lights that indicates battery level and when they are recording. The pair of glasses charge in a yellow case that has a built-in battery and connects to its proprietary cable.
The cable can be attached either to the case or directly to the glasses. According to the manufacturer, the fully charged case will hold enough power to recharge the glasses four times. The lithium-ion batteries in both the case and the glasses draw power from a standard 5 volt USB power supply, and connect via a USB cable which is held in place by small magnets. Design: Software Spectacles glasses capture video in a circular format, as shown in the thumbnail to the right. Snap Inc claims this is to more closely approximate the field of view of the human eye. Design: The glasses are exclusive to Snap Inc's service, Snapchat. They are paired by looking at the user's account Snapcode and pressing the button on the glasses frame, as well as connecting to them via Bluetooth (for iOS devices). The videos taken on the glasses are stored internally within the camera and can be viewed and individually uploaded in the "Memories" section of Snapchat. Snapbot: A Snapbot is a pop-up vending machine developed and manufactured by Snap Inc. It was designed for the distribution of Spectacles. Snapbot first appeared on November 10, 2016, in Venice, Los Angeles, and was then located in Big Sur, California. Snapbot was relocated to different locations in the U.S. for several months after the release of Spectacles. In February 2017, Snapchat began selling Spectacles online.
**Lightface analytic game** Lightface analytic game: In descriptive set theory, a lightface analytic game is a game whose payoff set A is a Σ¹₁ subset of Baire space; that is, there is a tree T on ω×ω which is a computable subset of (ω×ω)^{<ω}, such that A is the projection of the set of all branches of T. The determinacy of all lightface analytic games is equivalent to the existence of 0#.
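Written out, "A is the projection of the set of all branches of T" is the standard unfolding of the Σ¹₁ definition:

```latex
x \in A \iff \exists y \in \omega^{\omega} \; \forall n \in \omega \; \bigl( (x \restriction n, \; y \restriction n) \in T \bigr)
```

Here x↾n and y↾n denote the length-n initial segments of x and y; the witness y traces a branch through T alongside x.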
**Fitness function** Fitness function: A fitness function is a particular type of objective function that is used to summarise, as a single figure of merit, how close a given design solution is to achieving the set aims. Fitness functions are used in evolutionary algorithms (EAs), such as genetic programming and genetic algorithms, to guide simulations towards optimal design solutions. In the field of EAs, each design solution is commonly represented as a string of numbers (referred to as a chromosome). After each round of testing, or simulation, the idea is to delete the n worst design solutions, and to breed n new ones from the best design solutions. Each design solution, therefore, needs to be awarded a figure of merit, to indicate how close it came to meeting the overall specification, and this is generated by applying the fitness function to the test, or simulation, results obtained from that solution. Two main classes of fitness functions exist: one where the fitness function does not change, as in optimizing a fixed function or testing with a fixed set of test cases; and one where the fitness function is mutable, as in niche differentiation or co-evolving the set of test cases. Another way of looking at fitness functions is in terms of a fitness landscape, which shows the fitness for each possible chromosome. In the following, it is assumed that the fitness is determined based on an evaluation that remains unchanged during an optimization run. Fitness function: A fitness function does not necessarily have to be able to calculate an absolute value, as it is sometimes sufficient to compare candidates in order to select the better one. A relative indication of fitness (candidate a is better than b) is sufficient in some cases, such as tournament selection or Pareto optimization. Requirements of evaluation and fitness function: The quality of the evaluation and calculation of a fitness function is fundamental to the success of an EA optimisation.
It implements Darwin's principle of "survival of the fittest". Without fitness-based selection mechanisms for mate selection and offspring acceptance, EA search would be blind and hardly distinguishable from the Monte Carlo method. When setting up a fitness function, one must always be aware that it is about more than just describing the desired target state. Rather, the evolutionary search on the way to the optimum should also be supported as much as possible (see also the section on auxiliary objectives), if and insofar as this is not already done by the fitness function alone. If the fitness function is designed badly, the algorithm will either converge on an inappropriate solution, or will have difficulty converging at all. Requirements of evaluation and fitness function: Definition of the fitness function is not straightforward in many cases and often is performed iteratively if the fittest solutions produced by an EA are not what is desired. Interactive genetic algorithms address this difficulty by outsourcing evaluation to external agents, which are normally humans. Computational efficiency: The fitness function should not only correlate closely with the designer's goal, but it also should be computationally efficient. Speed of execution is very important, as a typical genetic algorithm must be iterated many times in order to produce a usable result for a non-trivial problem. Computational efficiency: Fitness approximation may be appropriate, especially in the following cases:
- The fitness computation time of a single solution is extremely high
- A precise model for fitness computation is missing
- The fitness function is uncertain or noisy

Alternatively, or in addition to fitness approximation, the fitness calculations can also be distributed to a parallel computer in order to reduce the execution times. Depending on the population model of the EA used, both the EA itself and the fitness calculations of all offspring of one generation can be executed in parallel.
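The evaluate/delete-the-worst/breed-from-the-best cycle described earlier can be sketched as a minimal generational loop in Python. The OneMax fitness function (count the ones in a bit string) and all parameter values are illustrative assumptions, not a reference implementation:

```python
import random

def fitness(chromosome):
    # Toy fitness function ("OneMax"): the number of ones in the bit string.
    return sum(chromosome)

def evolve(pop_size=20, length=16, generations=50, n_replace=5, seed=1):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank the population by fitness and delete the n worst solutions...
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size - n_replace]
        # ...then breed n new solutions from the best ones.
        children = []
        for _ in range(n_replace):
            mom, dad = rng.sample(survivors[:5], 2)   # parents from the top ranks
            cut = rng.randrange(1, length)            # one-point crossover
            child = mom[:cut] + dad[cut:]
            child[rng.randrange(length)] ^= 1         # single-bit mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Note that the fitness function is called on every individual in every generation, which is why the section above stresses computational efficiency.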
Multi-objective optimization: Practical applications usually aim at optimizing multiple, at least partially conflicting, objectives. Two fundamentally different approaches are often used for this purpose: Pareto optimization, and optimization based on fitness calculated using the weighted sum. Multi-objective optimization: Weighted sum and penalty functions When optimizing with the weighted sum, the single values of the O objectives are first normalized so that they can be compared. This can be done with the help of costs or by specifying target values and determining the current value as the degree of fulfillment. Costs or degrees of fulfillment can then be compared with each other and, if required, can also be mapped to a uniform fitness scale. Without loss of generality, fitness is assumed to represent a value to be maximized. Each objective o_i is assigned a weight w_i in the form of a percentage value so that the overall raw fitness f_raw can be calculated as a weighted sum:

f_raw = ∑_{i=1}^{O} o_i · w_i,  with  ∑_{i=1}^{O} w_i = 1

A violation of R restrictions r_j can be included in the fitness determined in this way in the form of penalty functions. For this purpose, a function pf_j(r_j) can be defined for each restriction, which returns a value between 0 and 1 depending on the degree of violation, with the result being 1 if there is no violation. The previously determined raw fitness is multiplied by the penalty function(s), and the result is then the final fitness f_final:

f_final = f_raw · ∏_{j=1}^{R} pf_j(r_j) = (∑_{i=1}^{O} o_i · w_i) · ∏_{j=1}^{R} pf_j(r_j)

This approach is simple and has the advantage of being able to combine any number of objectives and restrictions. The disadvantage is that different objectives can compensate each other and that the weights have to be defined before the optimization. In addition, certain solutions may not be obtained; see the section on the comparison of both types of optimization.
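The weighted sum with multiplicative penalty functions translates directly into code; a minimal sketch with illustrative names:

```python
from math import prod

def weighted_sum_fitness(objectives, weights, penalties=()):
    """f_final = (sum_i o_i * w_i) * (prod_j pf_j(r_j)).

    objectives: normalized objective values o_i (larger is better)
    weights:    weights w_i, assumed to sum to 1
    penalties:  penalty-function values pf_j(r_j) in [0, 1];
                1 means the corresponding restriction is not violated
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    f_raw = sum(o * w for o, w in zip(objectives, weights))
    # prod(()) == 1, so an empty penalty tuple leaves f_raw unchanged.
    return f_raw * prod(penalties)
```

A single strongly violated restriction (pf_j near 0) drives the whole fitness toward 0, regardless of how good the objectives are; this is the compensation behaviour discussed above.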
Multi-objective optimization: Pareto optimization A solution is called Pareto-optimal if the improvement of one objective is only possible with a deterioration of at least one other objective. The set of all Pareto-optimal solutions, also called the Pareto set, represents the set of all optimal compromises between the objectives. The figure below on the right shows an example of the Pareto set of two objectives f1 and f2 to be maximized. The elements of the set form the Pareto front (green line). From this set, a human decision maker must subsequently select the desired compromise solution. Constraints are included in Pareto optimization in that solutions without constraint violations are per se better than those with violations. If two solutions to be compared each have constraint violations, the respective extent of the violations decides. It was recognized early on that EAs, with their simultaneously considered solution set, are well suited to finding solutions in one run that cover the Pareto front sufficiently well. Besides the SPEA2, the NSGA-II and NSGA-III have established themselves as standard methods. Multi-objective optimization: The advantage of Pareto optimization is that, in contrast to the weighted sum, it provides all alternatives that are equivalent in terms of the objectives as an overall solution. The disadvantage is that a visualization of the alternatives becomes problematic or even impossible from four objectives on. Furthermore, the effort increases exponentially with the number of objectives. If there are more than three or four objectives, some have to be combined using the weighted sum or other aggregation methods. Multi-objective optimization: Comparison of both types of assessment With the help of the weighted sum, the total Pareto front can be obtained by a suitable choice of weights, provided that it is convex. This is illustrated by the adjacent picture on the left.
The point P on the green Pareto front is reached by the weights w_1 and w_2, provided that the EA converges to the optimum. The direction with the largest fitness gain in the solution set Z is shown by the drawn arrows. Multi-objective optimization: In the case of a non-convex front, however, non-convex front sections are not reachable by the weighted sum. In the adjacent image on the right, this is the section between points A and B. This can be remedied to a limited extent by an extension of the weighted sum, the cascaded weighted sum. Comparing the two assessment approaches, Pareto optimization is clearly advantageous when little is known about the possible solutions of a task and when the number of optimization objectives can be narrowed down to three, or at most four. However, in the case of repeated optimization of variations of one and the same task, the desired lines of compromise are usually known and the effort of determining the entire Pareto front is no longer justified. This is also true when no human decision is desired or possible after optimization, such as in automated decision processes. Auxiliary objectives: In addition to the primary objectives resulting from the task itself, it may be necessary to include auxiliary objectives in the assessment to support the achievement of one or more primary objectives. A scheduling task is used for illustration. The optimization goals include not only the generally fast processing of all orders but also compliance with a latest completion time. The latter is especially necessary for scheduling rush orders. The second goal is not achieved by the exemplary initial schedule, as shown in the adjacent figure. A subsequent mutation does not change this, but it schedules the work step d earlier, which is a necessary intermediate step for an earlier start of the last work step e of the order.
As long as only the latest completion time is evaluated, however, the fitness of the mutated schedule remains unchanged, even though it represents a relevant step towards the objective of a timely completion of the order. This can be remedied, for example, by an additional evaluation of the delay of work steps. The new objective is an auxiliary one, since it was introduced in addition to the actual optimization objectives to support their achievement. A more detailed description of this approach and another example can be found in.
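The effect of such an auxiliary objective can be sketched as follows (a hypothetical illustration: the deadline, completion times, and weights are invented for this example and are not taken from the figure described in the text):

```python
def primary(completion_times, deadline):
    """Primary objective: degree of meeting the latest completion time."""
    makespan = max(completion_times)
    return 1.0 if makespan <= deadline else deadline / makespan

def auxiliary(completion_times, earliest_end):
    """Auxiliary objective: reward a small total delay of individual work steps."""
    total_delay = sum(c - e for c, e in zip(completion_times, earliest_end))
    return 1.0 / (1.0 + total_delay)

def fitness(ct, deadline, earliest_end, w_primary=0.8, w_aux=0.2):
    return w_primary * primary(ct, deadline) + w_aux * auxiliary(ct, earliest_end)

deadline, earliest_end = 10, [2, 4, 6, 8]
before = [2, 4, 9, 12]   # step d scheduled late; the order misses the deadline
after  = [2, 4, 7, 12]   # mutation schedules step d earlier; deadline still missed

# The primary objective alone cannot distinguish the two schedules (both end
# at time 12), but the auxiliary delay objective rewards the mutation.
```

Here fitness(after, …) exceeds fitness(before, …) even though the primary objective is identical for both schedules, so the mutation that moves step d earlier is correctly recognized as progress.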
**Helium storage and conservation** Helium storage and conservation: Helium storage and conservation is the process of maintaining supplies of helium and preventing wasteful loss. Helium is commercially produced as a byproduct of natural gas extraction. Until the mid-1990s, the United States Bureau of Mines operated a large-scale helium storage facility to support government requirements for helium. The Helium Privatization Act of 1996 and subsequent increased demand for helium have led to market volatility and the entrance of significant new producers. Intermittent shortages or price increases have motivated helium users to find new ways to save on helium consumption. A lack of helium supply can affect researchers and industrial users of helium, and may lead to the loss of research materials and equipment. Perspectives on helium stocking and conservation: As early as 1982 there were discussions from multiple points of view about the possibility of a helium shortage. One such point of discussion was to examine the usefulness of helium storage in the United States from an economic perspective. For this school of thought, the focal point was maximising the welfare derived from this finite natural resource. This economic approach is represented by the present value criterion. According to this criterion, a resource is ideally sold at the moment when the profit plus compounded interest is expected to be higher than it would be at any point in the foreseeable future, thus ensuring maximal economic value. On the other hand, there were people who advocated a more conservationist approach, in the belief that the present value criterion resulted in too rapid a use of the resource and too little consideration of the needs of future generations. Some scientists suggested that helium ought to be separated from as many sources as would be energetically ideal. Discussions continue. The occurrence of a worldwide helium shortage in 2006–07 made such concerns more pressing.
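The present value criterion described above can be expressed as a simple numerical decision rule (an illustrative sketch; the prices and interest rate are invented for this example):

```python
def sell_now_is_optimal(price_now, expected_future_prices, interest_rate):
    """Present value criterion: selling now is optimal if the proceeds,
    compounded at the interest rate, are expected to match or exceed the
    price obtainable at every future point in the forecast horizon."""
    return all(price_now * (1 + interest_rate) ** t >= price
               for t, price in enumerate(expected_future_prices, start=1))

# Selling at $100 with 5% interest beats expected prices of $104/$109/$115
# over the next three years (compounded proceeds: $105.00, $110.25, $115.76).
decision = sell_now_is_optimal(100, [104, 109, 115], 0.05)  # True
```

If helium prices were instead expected to outpace the interest rate in any year, the criterion would recommend holding the resource, which is precisely the point of contention with the conservationist view described above.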
This shortage caused price spikes and a complete cut-off of supply for some prospective buyers. Some equipment can become useless or permanently damaged without an adequate helium supply. For example, an uninterrupted supply of liquid helium is necessary for a vast number of university researchers, hospitals, pharmaceutical companies and high-tech industries. Without liquid helium, all magnetic resonance imaging machines would become inoperable, and there is currently no equivalent diagnostic technology to replace them. As a consequence, helium shortages are a very serious matter for certain groups. However, helium has a much wider range of applications. It has been used in other research laboratories, lighter-than-air craft, rockets, welding under inert conditions, producing breathing mixtures for deep-sea diving, respiratory therapy, and in cryogenics. Aside from laboratory applications and cryogenics, not all these uses exploit the unique properties of helium, and in those uses it is replaceable. One consequence of fears of helium shortages has been attempts to improve production volume. It is profitable for natural gas producers to recover helium from sources containing more than 0.3 percent helium. Part of the strategy of the 2013 Helium Stewardship Act, currently implemented by the United States Department of Energy through its Advanced Manufacturing Office of Isotopes within the Office of Nuclear Physics of the Office of Science, was to improve the economics of recovering helium beyond that threshold by making advances in the membrane technology used in the production process. The average price of liquid helium in North America in 2013 was around $6 per liter, representing the lower end of the price range; Europe, at around $10 per liter, is in the middle, whereas Latin America and Asia pay the highest prices, in the range of $13–15 per liter.
Special situation of researchers: Several research organisations have released statements on the scarcity and conservation of helium. Among these are the American Physical Society, counting approximately 53,000 members, the Materials Research Society, an international organisation with 16,000 members, and the American Chemical Society, the world's largest scientific society with some 158,000 members. These organisations released policy recommendations as early as 1995 and as recently as 2016, urging the United States government to store and conserve helium because of the natural limits to the helium supply and the unique nature of the element. For researchers, helium is irreplaceable because it is essential for producing very low temperatures. In recent years, concerns about high prices and the occurrence of a shortage in 2006–07 have also contributed to calls from these organisations for helium conservation and for measures to lower the price of helium for researchers. Not only the level of prices but also their volatility imposes hardships on researchers. As researchers often work with essentially fixed budgets, sudden rises in the price of helium leave insufficient funds for their research projects. An example from the United States clearly demonstrates the effect on researchers' budgets: while in the mid-2000s individual investigator awards from the National Science Foundation's Division of Materials Research were approximately $130,000 annually, and a typical low-temperature researcher spent up to $15,000 of that grant annually on liquid helium, by 2015 the typical Division of Materials Research grant for an individual investigator had only barely increased, to $140,000 per year, while researchers now had to spend upwards of $40,000 (more than one quarter of their grant) on liquid helium.
Currently, liquid helium can represent upwards of 30% of the cost of some low-temperature research projects. In response, research organisations have allocated funds for grants for small-scale liquefiers for research purposes. According to estimates from the Division of Materials Research, there are potentially hundreds of research groups for which it would be economically viable to purchase such a system, but which do not have the necessary funds, as only a small fraction can be assisted by such grant programs. High prices have caused research organisations to issue recommendations to both the United States government and researchers on how to conserve helium by reducing consumption. In the wake of high prices, more researchers have invested in gas-capture systems to reduce their helium consumption. Such systems can pay for themselves within three years. Another measure taken to ensure the helium supply for researchers in the United States is the partnering of the American Physical Society and the American Chemical Society with the Defense Logistics Agency of the Department of Defense to create the Liquid Helium Purchasing Program, which provides more affordable and reliable liquid helium to program members. By combining customers' needs, the Defense Logistics Agency substantially increases its purchasing power when negotiating contracts and prices. The program also partners with multiple liquid helium suppliers so that its customers are not tied to a single vendor. Program enrollees have achieved more reliable delivery and average savings of 15 percent. According to these research organisations, adverse effects of the high price of helium on research are already beginning to be seen: scientists are abandoning areas of research that require liquid helium, professors are having to cut hiring of graduate students, and institutions are moving away from hiring new faculty in areas of research that require the use of liquid helium.
Development of the helium industry: In 1914, helium was mooted in Britain and the United States as a replacement for hydrogen in barrage balloons and aircraft. The first major development in helium production was the Helium Conservation Act of March 3, 1925. It established a production and sales program under the control of a centralized entity, the United States Bureau of Mines. Around this time, it was discovered that helium enabled divers to stay under water longer and ascend in a shorter time, presenting another application for helium. In reaction to depleting helium sources, the Helium Act of March 3, 1927 was established to prohibit the sale of helium to foreign countries and for non-governmental domestic use. By 1937, a number of factors converged to move the United States government to revise its helium policy and create the Helium Act of September 1, 1937. New uses for helium were appearing, and the U.S. Army and Navy did not require anywhere near the national output. A final impetus was given by the Hindenburg disaster, which might have been prevented had the Germans had access to helium. The act authorized the sale of helium gas not needed by the U.S. government. This ultimately led to an expansion of helium usage in many scientific and commercial industries, as the Bureau of Mines also supplied helium to private entities. The passage of this act also allowed non-hostile foreign governments to purchase helium for their own commercial use. When Nazi Germany applied for 18 million cubic feet of helium for public airship travel, this sparked a debate in the U.S., leading to a refusal. Throughout the Second World War, government demand still significantly outweighed private use, and the supply was sufficient to meet government needs (230 million cubic feet in 1942). By the end of the war, demand for helium had dropped precipitously and most production plants ceased operation.
This led the Bureau of Mines to begin a helium conservation program in January 1945 by injecting surplus helium into the Cliffside Gas Field. Creation of the US National Helium Reserve: From 1917 to 1962, the Bureau of Mines was the primary producer of helium and it remained the sole purifier of helium until 1963. Leading up to the early 1960s, there was a rapid growth in government demand in the United States for helium. It was fuelled by the military, especially for aerospace applications such as liquid fuel rockets for defense and space exploration. The amount of stored helium was very small before 1962 and the amount of available helium was essentially determined by the production of natural gas, from which it is separated as a side product, rather than by market forces. This situation changed in the early 1960s with the creation of the United States National Helium Reserve. At this time, the Bureau of Mines negotiated long-term contracts with four private companies for the first time to purchase and store large amounts of helium and it established an underground reservoir in the Cliffside Field near Amarillo, Texas. The original purpose of this reserve was to store helium in the 1960s for government use in the 1970s. To ensure that the revenue from future sales would amortize the cost, the Secretary of the Interior raised the price of high purity helium from $12 per thousand cubic feet to $35. This price jump was an incentive for private companies to enter the market and sell helium at lower prices. By 1970, it also became evident that the projected increase in government demand did not occur and that the helium stored in the Cliffside Field would last for decades. The combination of lower-than-projected demand and private competition resulted in sustained losses for the National Helium Reserve. In reaction, the government cancelled its contracts in 1973.
As a consequence, the industrial capacity utilization rate for helium production dropped from 104% in 1966 to 41.7% in 1974. The helium companies involved in the operation sued the United States government for breach of contract. The owners of the land containing the natural gas from which helium was separated as a side-product sued the government for the value of the helium, as they were unable to sell it to third parties. In the 1970s the Bureau of Mines changed its policy to allow private companies to store helium in the Cliffside Field. This had a profound impact on the industry. Prior to this decision, roughly two billion cubic feet of helium were separated from natural gas annually and 0.6 billion cubic feet were sold. Three years after the decision, 0.88 billion cubic feet were sold, 0.54 were stored, and 0.98 were separated and vented. At the same time, roughly 4.74 billion cubic feet were not separated from natural gas. Helium Privatization Act: In the 1990s there was a rapid growth in demand due to the development of the electronics and magnetic resonance imaging industries. This growth continued at a slower pace until the 2010s, with the exception of 2008-2009. National Helium Reserve sales led to fluctuations in both pricing and supply. In this context, the Helium Privatization Act was passed in the United States in 1996. The Bureau of Land Management was given responsibility for operating the National Helium Reserve and charged with recouping the taxpayers’ investment by selling its crude helium to private vendors. More recent legislation aimed at fully privatising the helium market requires that the Bureau of Land Management sell off the vast majority of the reserve during the next several years and cease its operations by 2021. After problems with the helium supply in 2012-2013, the United States Congress acted to extend the life of the reserve. 
New producers: While formerly most helium production took place in the United States, additional producing countries have gradually appeared; Qatar, Canada, Algeria and Russia now produce the gas. In 2015, this new production resulted in a surplus of supply over demand. The United States, which has historically been an exporter of helium, will soon become an importer for the first time in its history. Since 2013 the world's largest helium hub is no longer located in the United States but in Qatar, which produces 1.3 billion cubic feet of helium per year from a single project and meets 25% of global demand. One challenge in bringing new helium sources onto the market is that doing so usually requires venture capital financing. Another is that the current selling price of U.S. Cliffside helium is too low to encourage more new producers to enter the field.
**Mathematische Nachrichten** Mathematische Nachrichten: Mathematische Nachrichten (abbreviated Math. Nachr.; English: Mathematical News) is a mathematical journal published in 12 issues per year by Wiley-VCH GmbH. It should not be confused with the Internationale Mathematische Nachrichten, an unrelated publication of the Austrian Mathematical Society. Mathematische Nachrichten: It was established in 1948 by East German mathematician Erhard Schmidt, who became its first editor-in-chief. At that time it was associated with the German Academy of Sciences at Berlin, and published by Akademie Verlag. After the fall of the Berlin Wall, Akademie Verlag was sold to VCH Verlagsgruppe Weinheim, which in turn was sold to John Wiley & Sons. According to the 2020 edition of Journal Citation Reports, the journal had an impact factor of 1.228, ranking it 111th among 333 journals in the category "Mathematics". As of 2021, Ben Andrews, Robert Denk, Klaus Hulek and Frédéric Klopp are the editors-in-chief of the journal.
**Time in Grenada** Time in Grenada: Grenada observes Atlantic Standard Time (UTC−4) year-round. IANA time zone database: In the IANA time zone database, Grenada is given one zone in the file zone.tab—America/Grenada. "GD" refers to the country's ISO 3166-1 alpha-2 country code. Data for Grenada directly from zone.tab of the IANA time zone database; columns marked with * are the columns from zone.tab itself:
**Praseodymium** Praseodymium: Praseodymium is a chemical element with the symbol Pr and the atomic number 59. It is the third member of the lanthanide series and is considered one of the rare-earth metals. It is a soft, silvery, malleable and ductile metal, valued for its magnetic, electrical, chemical, and optical properties. It is too reactive to be found in native form, and pure praseodymium metal slowly develops a green oxide coating when exposed to air. Praseodymium: Praseodymium always occurs naturally together with the other rare-earth metals. It is the sixth-most abundant rare-earth element and fourth-most abundant lanthanide, making up 9.1 parts per million of the Earth's crust, an abundance similar to that of boron. In 1841, Swedish chemist Carl Gustav Mosander extracted a rare-earth oxide residue he called didymium from a residue he called "lanthana", in turn separated from cerium salts. In 1885, the Austrian chemist Carl Auer von Welsbach separated didymium into two elements that gave salts of different colours, which he named praseodymium and neodymium. The name praseodymium comes from the Ancient Greek πράσινος (prasinos), meaning 'leek-green', and δίδυμος (didymos) 'twin'. Praseodymium: Like most rare-earth elements, praseodymium most readily forms the +3 oxidation state, which is the only stable state in aqueous solution, although the +4 oxidation state is known in some solid compounds and, uniquely among the lanthanides, the +5 oxidation state is attainable in matrix-isolation conditions. The 0, +1, and +2 oxidation states are rarely found. Aqueous praseodymium ions are yellowish-green, and similarly, praseodymium results in various shades of yellow-green when incorporated into glasses. Many of praseodymium's industrial uses involve its ability to filter yellow light from light sources. Physical properties: Praseodymium is the third member of the lanthanide series, and a member of the rare-earth metals. 
In the periodic table, it appears between the lanthanides cerium to its left and neodymium to its right, and above the actinide protactinium. It is a ductile metal with a hardness comparable to that of silver. Praseodymium is calculated to have a very large atomic radius of 247 pm; only barium, rubidium and caesium are larger. Observationally, however, its radius is usually about 185 pm. Praseodymium's 59 electrons are arranged in the configuration [Xe]4f3 6s2; theoretically, all five outer electrons can act as valence electrons, but the use of all five requires extreme conditions; normally, praseodymium only gives up three or sometimes four electrons in its compounds. Like most other metals in the lanthanide series, praseodymium usually uses only three electrons as valence electrons, as afterward the remaining 4f electrons are too strongly bound: this is because the 4f orbitals penetrate the most through the inert xenon core of electrons to the nucleus, followed by 5d and 6s, and this increases with higher ionic charge. Praseodymium nevertheless can continue losing a fourth and even occasionally a fifth valence electron because it comes very early in the lanthanide series, where the nuclear charge is still low enough and the 4f subshell energy high enough to allow the removal of further valence electrons. Thus, similarly to the other early trivalent lanthanides, praseodymium has a double hexagonal close-packed crystal structure at room temperature. At about 560 °C, it transitions to a face-centered cubic structure, and a body-centered cubic structure appears shortly before the melting point of 935 °C. Praseodymium, like all of the lanthanides (except lanthanum, ytterbium, and lutetium, which have no unpaired 4f electrons), is paramagnetic at room temperature. Unlike some other rare-earth metals, which show antiferromagnetic or ferromagnetic ordering at low temperatures, praseodymium is paramagnetic at all temperatures above 1 K.
Chemical properties: Praseodymium metal tarnishes slowly in air, forming a spalling green oxide layer like iron rust; a centimetre-sized sample of praseodymium metal corrodes completely in about a year. It burns readily at 150 °C to form praseodymium(III,IV) oxide, a nonstoichiometric compound approximating to Pr6O11: 12 Pr + 11 O2 → 2 Pr6O11. This may be reduced to praseodymium(III) oxide (Pr2O3) with hydrogen gas. Praseodymium(IV) oxide, PrO2, is the most oxidised product of the combustion of praseodymium and can be obtained by either reaction of praseodymium metal with pure oxygen at 400 °C and 282 bar or by disproportionation of Pr6O11 in boiling acetic acid. The reactivity of praseodymium conforms to periodic trends, as it is one of the first and thus one of the largest lanthanides. At 1000 °C, many praseodymium oxides with composition PrO2−x exist as disordered, nonstoichiometric phases with 0 < x < 0.25, but at 400–700 °C the oxide defects are instead ordered, creating phases of the general formula PrnO2n−2 with n = 4, 7, 9, 10, 11, 12, and ∞. These phases PrOy are sometimes labelled α and β′ (nonstoichiometric), β (y = 1.833), δ (1.818), ε (1.8), ζ (1.778), ι (1.714), θ, and σ. Praseodymium is an electropositive element and reacts slowly with cold water and quite quickly with hot water to form praseodymium(III) hydroxide: 2 Pr (s) + 6 H2O (l) → 2 Pr(OH)3 (aq) + 3 H2 (g). Praseodymium metal reacts with all the stable halogens to form trihalides: 2 Pr (s) + 3 F2 (g) → 2 PrF3 (s) [green] 2 Pr (s) + 3 Cl2 (g) → 2 PrCl3 (s) [green] 2 Pr (s) + 3 Br2 (g) → 2 PrBr3 (s) [green] 2 Pr (s) + 3 I2 (g) → 2 PrI3 (s). The tetrafluoride, PrF4, is also known, and is produced by reacting a mixture of sodium fluoride and praseodymium(III) fluoride with fluorine gas, producing Na2PrF6, following which sodium fluoride is removed from the reaction mixture with liquid hydrogen fluoride.
Additionally, praseodymium forms a bronze diiodide; like the diiodides of lanthanum, cerium, and gadolinium, it is a praseodymium(III) electride compound. Praseodymium dissolves readily in dilute sulfuric acid to form solutions containing the chartreuse Pr3+ ions, which exist as [Pr(H2O)9]3+ complexes: 2 Pr (s) + 3 H2SO4 (aq) → 2 Pr3+ (aq) + 3 SO42− (aq) + 3 H2 (g). Dissolving praseodymium(IV) compounds in water does not result in solutions containing the yellow Pr4+ ions; because of the high positive standard reduction potential of the Pr4+/Pr3+ couple at +3.2 V, these ions are unstable in aqueous solution, oxidising water and being reduced to Pr3+. The value for the Pr3+/Pr couple is −2.35 V. However, in highly basic aqueous media, Pr4+ ions can be generated by oxidation with ozone. Although praseodymium(V) in the bulk state is unknown, the existence of praseodymium in its +5 oxidation state (with the stable electron configuration of the preceding noble gas xenon) under noble-gas matrix isolation conditions was reported in 2016. The species assigned to the +5 state were identified as [PrO2]+, its O2 and Ar adducts, and PrO2(η2-O2). Chemical properties: Organopraseodymium compounds Organopraseodymium compounds are very similar to those of the other lanthanides, as they all share an inability to undergo π backbonding. They are thus mostly restricted to the mostly ionic cyclopentadienides (isostructural with those of lanthanum) and the σ-bonded simple alkyls and aryls, some of which may be polymeric. The coordination chemistry of praseodymium is largely that of the large, electropositive Pr3+ ion, and is thus largely similar to those of the other early lanthanides La3+, Ce3+, and Nd3+.
For instance, like lanthanum, cerium, and neodymium, praseodymium nitrates form both 4:3 and 1:1 complexes with 18-crown-6, whereas the middle lanthanides from promethium to gadolinium can only form the 4:3 complex and the later lanthanides from terbium to lutetium cannot successfully coordinate to all the ligands. Such praseodymium complexes have high but uncertain coordination numbers and poorly defined stereochemistry, with exceptions resulting from exceptionally bulky ligands such as the tricoordinate [Pr{N(SiMe3)2}3]. There are also a few mixed oxides and fluorides involving praseodymium(IV), but it does not have an appreciable coordination chemistry in this oxidation state like its neighbour cerium. However, the first example of a molecular complex of praseodymium(IV) has recently been reported. Isotopes: Praseodymium has only one stable and naturally occurring isotope, 141Pr. It is thus a mononuclidic and monoisotopic element, and its standard atomic weight can be determined with high precision as it is a constant of nature. This isotope has 82 neutrons, which is a magic number that confers additional stability. This isotope is produced in stars through the s- and r-processes (slow and rapid neutron capture, respectively). Thirty-eight other radioisotopes have been synthesized. All of these isotopes have half-lives under a day (and most under a minute), with the single exception of 143Pr with a half-life of 13.6 days. Both 143Pr and 141Pr occur as fission products of uranium. The primary decay mode of isotopes lighter than 141Pr is positron emission or electron capture to isotopes of cerium, while that of heavier isotopes is beta decay to isotopes of neodymium. History: In 1751, the Swedish mineralogist Axel Fredrik Cronstedt discovered a heavy mineral from the mine at Bastnäs, later named cerite. 
Thirty years later, the fifteen-year-old Wilhelm Hisinger, from the family owning the mine, sent a sample of it to Carl Scheele, who did not find any new elements within. In 1803, after Hisinger had become an ironmaster, he returned to the mineral with Jöns Jacob Berzelius and isolated a new oxide, which they named ceria after the dwarf planet Ceres, which had been discovered two years earlier. Ceria was simultaneously and independently isolated in Germany by Martin Heinrich Klaproth. Between 1839 and 1843, ceria was shown to be a mixture of oxides by the Swedish surgeon and chemist Carl Gustaf Mosander, who lived in the same house as Berzelius; he separated out two other oxides, which he named lanthana and didymia. He partially decomposed a sample of cerium nitrate by roasting it in air and then treating the resulting oxide with dilute nitric acid. The metals that formed these oxides were thus named lanthanum and didymium. While lanthanum turned out to be a pure element, didymium was not: it turned out to be a mixture of all the stable early lanthanides from praseodymium to europium, as had been suspected by Marc Delafontaine after spectroscopic analysis, though he lacked the time to pursue its separation into its constituents. The heavy pair of samarium and europium were only removed in 1879 by Paul-Émile Lecoq de Boisbaudran, and it was not until 1885 that Carl Auer von Welsbach separated didymium into praseodymium and neodymium. Von Welsbach confirmed the separation by spectroscopic analysis, but the products were of relatively low purity. Since neodymium was a larger constituent of didymium than praseodymium, it kept the old name with disambiguation, while praseodymium was distinguished by the leek-green colour of its salts (Greek πρασιος, "leek green"). The composite nature of didymium had previously been suggested in 1882 by Bohuslav Brauner, who did not experimentally pursue its separation.
Occurrence and production: Praseodymium is not particularly rare, despite being classified among the rare-earth metals, making up 9.2 mg/kg of the Earth's crust. This value is between those of thorium (9.6 mg/kg) and samarium (7.05 mg/kg), and makes praseodymium the fourth-most abundant of the lanthanides, behind cerium (66.5 mg/kg), neodymium (41.5 mg/kg), and lanthanum (39 mg/kg); it is less abundant than the rare-earth elements yttrium (33 mg/kg) and scandium (22 mg/kg). Instead, praseodymium's classification as a rare-earth metal reflects its rarity relative to "common earths" such as lime and magnesia, the small number of known minerals from which its extraction is commercially viable, and the length and complexity of extraction. Although not particularly rare, praseodymium is never found as a dominant rare earth in praseodymium-bearing minerals. It is always preceded by cerium and lanthanum and usually also by neodymium. Occurrence and production: The Pr3+ ion is similar in size to the early lanthanides of the cerium group (those from lanthanum up to samarium and europium) that immediately follow in the periodic table, and hence it tends to occur along with them in phosphate, silicate and carbonate minerals, such as monazite (MIIIPO4) and bastnäsite (MIIICO3F), where M refers to all the rare-earth metals except scandium and the radioactive promethium (mostly Ce, La, and Y, with somewhat less Nd and Pr). Bastnäsite is usually lacking in thorium and the heavy lanthanides, and the purification of the light lanthanides from it is less involved. The ore, after being crushed and ground, is first treated with hot concentrated sulfuric acid, evolving carbon dioxide, hydrogen fluoride, and silicon tetrafluoride. The product is then dried and leached with water, leaving the early lanthanide ions, including lanthanum, in solution. The procedure for monazite, which usually contains all the rare earths as well as thorium, is more involved.
Monazite, because of its magnetic properties, can be separated by repeated electromagnetic separation. After separation, it is treated with hot concentrated sulfuric acid to produce water-soluble sulfates of the rare earths. The acidic filtrates are partially neutralized with sodium hydroxide to pH 3–4, during which thorium precipitates as hydroxide and is removed. The solution is treated with ammonium oxalate to convert the rare earths to their insoluble oxalates, the oxalates are converted to oxides by annealing, and the oxides are dissolved in nitric acid. This last step excludes one of the main components, cerium, whose oxide is insoluble in HNO3. Care must be taken when handling some of the residues as they contain 228Ra, the daughter of 232Th, which is a strong gamma emitter. Praseodymium may then be separated from the other lanthanides via ion-exchange chromatography, or by using a solvent such as tributyl phosphate in which the solubility of Ln3+ increases as the atomic number increases. If ion-exchange chromatography is used, the mixture of lanthanides is loaded into one column of cation-exchange resin and Cu2+ or Zn2+ or Fe3+ is loaded into the other. An aqueous solution of a complexing agent, known as the eluant (usually triammonium edtate), is passed through the columns, and Ln3+ is displaced from the first column and redeposited in a compact band at the top of the column before being re-displaced by NH4+. The Gibbs free energy of formation for Ln(edta·H) complexes increases along the lanthanides by about one quarter from Ce3+ to Lu3+, so that the Ln3+ cations descend the development column in a band and are fractionated repeatedly, eluting from heaviest to lightest. They are then precipitated as their insoluble oxalates, burned to form the oxides, and then reduced to metals.
Applications: Leo Moser (not to be confused with the mathematician of the same name), son of Ludwig Moser, founder of the Moser Glassworks in what is now Karlovy Vary in the Czech Republic, investigated the use of praseodymium in glass coloration in the late 1920s, yielding a yellow-green glass given the name "Prasemit". However, at that time far cheaper colorants could give a similar color, so Prasemit was not popular, few pieces were made, and examples are now extremely rare. Moser also blended praseodymium with neodymium to produce "Heliolite" glass ("Heliolit" in German), which was more widely accepted. The first enduring commercial use of purified praseodymium, which continues today, is as a yellow-orange "praseodymium yellow" stain for ceramics, a solid solution in the zircon lattice. This stain has no hint of green in it; by contrast, at sufficiently high loadings, praseodymium glass is distinctly green rather than pure yellow. Like those of many other lanthanides, praseodymium's shielded f-orbitals allow for long excited-state lifetimes and high luminescence yields, so the Pr3+ dopant ion sees many applications in optics and photonics. These include DPSS lasers, single-mode fiber optical amplifiers, fiber lasers, and upconverting nanoparticles, as well as activators in red, green, blue, and ultraviolet phosphors. Silicate crystals doped with praseodymium ions have also been used to slow a light pulse down to a few hundred meters per second. As the lanthanides are so similar, praseodymium can substitute for most other lanthanides without significant loss of function, and indeed many applications, such as mischmetal and ferrocerium alloys, involve variable mixtures of several lanthanides, including small quantities of praseodymium.
The following more modern applications involve praseodymium specifically, or at least praseodymium within a small subset of the lanthanides: In combination with neodymium, another rare-earth element, praseodymium is used to create high-power magnets notable for their strength and durability. In general, most alloys of the cerium-group rare earths (lanthanum through samarium) with 3d transition metals give extremely stable magnets that are often used in small equipment, such as motors, printers, watches, headphones, loudspeakers, and magnetic storage. Praseodymium–nickel intermetallic (PrNi5) has such a strong magnetocaloric effect that it has allowed scientists to approach within one thousandth of a degree of absolute zero. Praseodymium serves as an alloying agent with magnesium to create high-strength metals used in aircraft engines; yttrium and neodymium are also viable substitutes. Praseodymium is present in the rare-earth mixture whose fluoride forms the core of carbon arc lights, used in the motion picture industry for studio lighting and projector lights. Praseodymium compounds give glasses, enamels, and ceramics a yellow color. Praseodymium is a component of didymium glass, which is used to make certain types of welder's and glassblower's goggles. Praseodymium oxide in solid solution with ceria or ceria–zirconia has been used as an oxidation catalyst. Due to its role in permanent magnets used for wind turbines, it has been argued that praseodymium will be one of the main objects of geopolitical competition in a world running on renewable energy. However, this perspective has been criticized for failing to recognize that most wind turbines do not use permanent magnets, and for underestimating the power of economic incentives for expanded production.
Biological role and precautions: The early lanthanides have been found to be essential to some methanotrophic bacteria living in volcanic mudpots, such as Methylacidiphilum fumariolicum: lanthanum, cerium, praseodymium, and neodymium are about equally effective. Praseodymium is otherwise not known to have a biological role in any other organisms, but it is not very toxic either. Intravenous injection of rare earths into animals has been known to impair liver function, but the main side effects from inhalation of rare-earth oxides in humans come from radioactive thorium and uranium impurities.