**Walsh function**
Walsh function:
In mathematics, more specifically in harmonic analysis, Walsh functions form a complete orthogonal set of functions that can be used to represent any discrete function—just like trigonometric functions can be used to represent any continuous function in Fourier analysis. They can thus be viewed as a discrete, digital counterpart of the continuous, analog system of trigonometric functions on the unit interval. But unlike the sine and cosine functions, which are continuous, Walsh functions are piecewise constant. They take the values −1 and +1 only, on sub-intervals defined by dyadic fractions.
Walsh function:
The system of Walsh functions is known as the Walsh system. It is an extension of the Rademacher system of orthogonal functions. Walsh functions, the Walsh system, the Walsh series, and the fast Walsh–Hadamard transform are all named after the American mathematician Joseph L. Walsh. They find various applications in physics and engineering when analyzing digital signals.
Historically, various numerations of Walsh functions have been used; none of them is particularly superior to another. This article uses the Walsh–Paley numeration.
Definition:
We define the sequence of Walsh functions $W_k : [0,1] \to \{-1,1\}$, $k \in \mathbb{N}$, as follows.
Definition:
For any natural number k and real number $x \in [0,1]$, let $k_j$ be the j-th bit in the binary representation of k, starting with $k_0$ as the least significant bit, and let $x_j$ be the j-th bit in the fractional binary representation of x, starting with $x_1$ as the most significant fractional bit. Then, by definition, $W_k(x) = (-1)^{\sum_{j=0}^{\infty} k_j x_{j+1}}$. In particular, $W_0(x) = 1$ everywhere on the interval, since all bits of k are zero.
Definition:
Notice that $W_{2^m}$ is precisely the Rademacher function $r_m$.
Thus, the Rademacher system is a subsystem of the Walsh system. Moreover, every Walsh function is a product of Rademacher functions: $W_k(x) = \prod_{j=0}^{\infty} r_j(x)^{k_j}$.
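To make the definition concrete, here is a minimal numerical sketch (an illustration added here, not part of the original article) that evaluates $W_k(x)$ directly from the bits of k and x and checks both identities above; the helper names are ad hoc.

```python
def frac_bit(x: float, j: int, bits: int = 24) -> int:
    """x_j: the j-th fractional binary digit of x in [0, 1); j = 1 is the most significant."""
    return (int(x * (1 << bits)) >> (bits - j)) & 1

def walsh(k: int, x: float, bits: int = 24) -> int:
    """Walsh-Paley function W_k(x) = (-1)^(sum_j k_j * x_{j+1})."""
    s = sum(((k >> j) & 1) * frac_bit(x, j + 1, bits) for j in range(bits))
    return -1 if s % 2 else 1

def rademacher(m: int, x: float) -> int:
    """r_m(x): +1 or -1 according to the (m+1)-th fractional binary digit of x."""
    return -1 if frac_bit(x, m + 1) else 1

x, k = 0.6180339, 11                      # k = 11 has binary digits k_0..k_3 = 1, 1, 0, 1
assert walsh(0, x) == 1                   # W_0 is identically 1
assert walsh(1 << 3, x) == rademacher(3, x)   # W_{2^m} = r_m
assert walsh(k, x) == rademacher(0, x) * rademacher(1, x) * rademacher(3, x)
```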
Comparison between Walsh functions and trigonometric functions:
Walsh functions and trigonometric functions are both systems that form a complete, orthonormal set of functions, an orthonormal basis in Hilbert space L2[0,1] of the square-integrable functions on the unit interval. Both are systems of bounded functions, unlike, say, the Haar system or the Franklin system.
Comparison between Walsh functions and trigonometric functions:
Both the trigonometric and Walsh systems admit natural extension by periodicity from the unit interval to the real line $\mathbb{R}$. Furthermore, both Fourier analysis on the unit interval (Fourier series) and on the real line (Fourier transform) have their digital counterparts defined via the Walsh system: the Walsh series analogous to the Fourier series, and the Hadamard transform analogous to the Fourier transform.
Properties:
The Walsh system $\{W_k\}$, $k \in \mathbb{N}_0$, is a commutative multiplicative discrete group isomorphic to $\coprod_{n=0}^{\infty} \mathbb{Z}/2\mathbb{Z}$, the Pontryagin dual of the Cantor group $\prod_{n=0}^{\infty} \mathbb{Z}/2\mathbb{Z}$. Its identity is $W_0$, and every element is of order two (that is, self-inverse).
Properties:
The Walsh system is an orthonormal basis of the Hilbert space $L^2[0,1]$. Orthonormality means $\int_0^1 W_k(x) W_l(x)\,dx = \delta_{kl}$, and being a basis means that if, for every $f \in L^2[0,1]$, we set $f_k = \int_0^1 f(x) W_k(x)\,dx$, then $\int_0^1 \bigl(f(x) - \sum_{k=0}^{N} f_k W_k(x)\bigr)^2 dx \to 0$ as $N \to \infty$. It turns out that for every $f \in L^2[0,1]$ the series $\sum_{k=0}^{\infty} f_k W_k(x)$ converges to $f(x)$ for almost every $x \in [0,1]$. The Walsh system (in Walsh–Paley numeration) forms a Schauder basis in $L^p[0,1]$, $1 < p < \infty$. Note that, unlike the Haar system, and like the trigonometric system, this basis is not unconditional, nor is the system a Schauder basis in $L^1[0,1]$.
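As a concrete illustration of the expansion above (added here, not from the original article), the following sketch approximates the Walsh coefficients $f_k$ of a function on a dyadic grid and verifies that the partial sums converge in $L^2$; the names and the grid size are arbitrary choices.

```python
import numpy as np

def walsh(k: int, x: np.ndarray, bits: int = 20) -> np.ndarray:
    """Walsh-Paley function W_k evaluated at points x in [0, 1)."""
    frac = np.floor(x * (1 << bits)).astype(np.int64)   # fixed-point fractional bits of x
    s = np.zeros_like(frac)
    for j in range(bits):
        k_j = (k >> j) & 1                    # j-th bit of k (least significant first)
        x_j1 = (frac >> (bits - 1 - j)) & 1   # (j+1)-th fractional bit of x
        s += k_j * x_j1
    return np.where(s % 2 == 0, 1.0, -1.0)

# Walsh coefficients f_k = integral of f(x) W_k(x) dx, approximated on a dyadic grid
N = 1 << 12
x = (np.arange(N) + 0.5) / N
f = x                                         # example: f(x) = x
coeffs = [np.mean(f * walsh(k, x)) for k in range(8)]

# Partial sum of the Walsh series: the L^2 error shrinks as more terms are added
approx = sum(c * walsh(k, x) for k, c in enumerate(coeffs))
print(np.sqrt(np.mean((f - approx) ** 2)))    # about 0.036 with the first 8 terms
```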
Generalizations:
Walsh-Ferleger systems Let $D = \prod_{n=1}^{\infty} \mathbb{Z}/2\mathbb{Z}$ be the compact Cantor group endowed with Haar measure and let $\hat{D} = \coprod_{n=1}^{\infty} \mathbb{Z}/2\mathbb{Z}$ be its discrete group of characters. Elements of $\hat{D}$ are readily identified with Walsh functions. Of course, the characters are defined on D while Walsh functions are defined on the unit interval, but since there exists a modulo zero isomorphism between these measure spaces, measurable functions on them are identified via isometry.
Generalizations:
Then basic representation theory suggests the following broad generalization of the concept of Walsh system.
Generalizations:
For an arbitrary Banach space $(X, \|\cdot\|)$, let $\{R_t\}_{t \in D} \subset \mathrm{Aut}(X)$ be a strongly continuous, uniformly bounded, faithful action of D on X. For every $\gamma \in \hat{D}$, consider its eigenspace $X_\gamma = \{x \in X : R_t x = \gamma(t)x\}$. Then X is the closed linear span of the eigenspaces: $X = \overline{\mathrm{Span}}\,(X_\gamma, \gamma \in \hat{D})$. Assume that every eigenspace is one-dimensional and pick an element $w_\gamma \in X_\gamma$ such that $\|w_\gamma\| = 1$. Then the system $\{w_\gamma\}_{\gamma \in \hat{D}}$, or the same system in the Walsh–Paley numeration of the characters $\{w_k\}_{k \in \mathbb{N}_0}$, is called the generalized Walsh system associated with the action $\{R_t\}_{t \in D}$. The classical Walsh system becomes a special case, namely for $R_t : x = \sum_{j=1}^{\infty} x_j 2^{-j} \mapsto \sum_{j=1}^{\infty} (x_j \oplus t_j) 2^{-j}$, where $\oplus$ is addition modulo 2.
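A small numerical sketch (added here as an illustration, not part of the article) of the dyadic translation $R_t$ just described: with a fixed-point truncation of $x, t \in [0,1)$, the map is a bitwise XOR of the fractional digits, and the Walsh functions behave as characters under it, $W_k(x \oplus t) = W_k(x)\,W_k(t)$.

```python
BITS = 20

def dyadic_add(x: float, t: float, bits: int = BITS) -> float:
    """R_t(x): XOR the fractional binary digits of x and t (digitwise addition mod 2)."""
    xi = int(x * (1 << bits))
    ti = int(t * (1 << bits))
    return (xi ^ ti) / (1 << bits)

def walsh(k: int, x: float, bits: int = BITS) -> int:
    """Walsh-Paley function W_k(x) = (-1)^(sum_j k_j x_{j+1})."""
    frac = int(x * (1 << bits))
    s = sum(((k >> j) & 1) * ((frac >> (bits - 1 - j)) & 1) for j in range(bits))
    return -1 if s % 2 else 1

# Character property of the Walsh functions with respect to dyadic addition
x, t, k = 0.3141592, 0.7182818, 13
assert walsh(k, dyadic_add(x, t)) == walsh(k, x) * walsh(k, t)
```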
Generalizations:
In the early 1990s, Serge Ferleger and Fyodor Sukochev showed that in a broad class of Banach spaces (so-called UMD spaces) generalized Walsh systems have many properties similar to the classical one: they form a Schauder basis and a uniform finite-dimensional decomposition in the space, and have the property of random unconditional convergence.
One important example of a generalized Walsh system is the Fermion Walsh system in non-commutative $L^p$ spaces associated with the hyperfinite type II₁ factor.
Generalizations:
Fermion Walsh system The Fermion Walsh system is a non-commutative, or "quantum", analogue of the classical Walsh system. Unlike the latter, it consists of operators, not functions. Nevertheless, both systems share many important properties: for example, both form an orthonormal basis in the corresponding Hilbert space, or a Schauder basis in the corresponding symmetric spaces. Elements of the Fermion Walsh system are called Walsh operators.
Generalizations:
The term Fermion in the name of the system is explained by the fact that the enveloping operator space, the so-called hyperfinite type II₁ factor R, may be viewed as the space of observables of a system of countably many distinct spin-1/2 fermions. Each Rademacher operator acts on one particular fermion coordinate only, and there it is a Pauli matrix. It may be identified with the observable measuring the spin component of that fermion along one of the axes {x, y, z} in spin space. Thus, a Walsh operator measures the spin of a subset of fermions, each along its own axis.
Generalizations:
Vilenkin system Fix a sequence $\alpha = (\alpha_1, \alpha_2, \ldots)$ of integers with $\alpha_k \geq 2$, $k = 1, 2, \ldots$, and let $G = G_\alpha = \prod_{n=1}^{\infty} \mathbb{Z}/\alpha_n\mathbb{Z}$ be endowed with the product topology and the normalized Haar measure. Define $A_0 = 1$ and $A_k = \alpha_1 \alpha_2 \cdots \alpha_k$ for $k \geq 1$. Each $x \in G$ can be associated with the real number $|x| = \sum_{k=1}^{\infty} x_k/A_k \in [0,1]$.
This correspondence is a modulo zero isomorphism between G and the unit interval. It also defines a norm which generates the topology of G. For $k = 1, 2, \ldots$, let $\rho_k : G \to \mathbb{C}$ be given by $\rho_k(x) = \exp(2\pi i x_k/\alpha_k) = \cos(2\pi x_k/\alpha_k) + i \sin(2\pi x_k/\alpha_k)$.
The set $\{\rho_k\}$ is called the generalized Rademacher system. The Vilenkin system is the group $\hat{G} = \coprod_{n=1}^{\infty} \mathbb{Z}/\alpha_n\mathbb{Z}$ of (complex-valued) characters of G, which are all finite products of $\{\rho_k\}$. For each non-negative integer n there is a unique sequence $n_0, n_1, \ldots$ such that $0 \leq n_k < \alpha_{k+1}$, $k = 0, 1, 2, \ldots$, and $n = \sum_{k=0}^{\infty} n_k A_k$.
Then $\hat{G} = \{\chi_n \mid n = 0, 1, \ldots\}$, where $\chi_n = \prod_{k=0}^{\infty} \rho_{k+1}^{n_k}$.
In particular, if $\alpha_k = 2$ for all k, then G is the Cantor group and $\hat{G} = \{\chi_n \mid n = 0, 1, \ldots\}$ is the (real-valued) Walsh–Paley system.
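The following small sketch (added here for illustration only; the indexing conventions are my reading of the definitions above) evaluates a Vilenkin character $\chi_n = \prod_k \rho_{k+1}^{n_k}$ on truncated coordinates and checks that, with every $\alpha_k = 2$, the value reduces to the ±1-valued Walsh–Paley function.

```python
import cmath

def vilenkin_chi(n: int, x_digits: list[int], alphas: list[int]) -> complex:
    """chi_n(x) = prod_k rho_{k+1}(x)^{n_k}, where rho_k(x) = exp(2*pi*i * x_k / alpha_k).

    x_digits[k] holds the coordinate x_{k+1} in Z/alpha_{k+1}Z and alphas[k] = alpha_{k+1};
    n is expanded in the mixed radix n = sum_k n_k * A_k with 0 <= n_k < alpha_{k+1}.
    """
    value = 1 + 0j
    for k, alpha in enumerate(alphas):
        n_k = n % alpha                      # mixed-radix digit n_k of n
        n //= alpha
        value *= cmath.exp(2j * cmath.pi * n_k * x_digits[k] / alpha)
    return value

# With alpha_k = 2 for all k, chi_n is real-valued and agrees with the Walsh-Paley function
alphas = [2] * 8
x_digits = [1, 0, 1, 1, 0, 0, 1, 0]          # first binary digits x_1, x_2, ... of some x
n = 13                                        # binary digits n_0, n_1, ... = 1, 0, 1, 1
walsh_value = (-1) ** sum(((n >> j) & 1) * x_digits[j] for j in range(8))
assert abs(vilenkin_chi(n, x_digits, alphas) - walsh_value) < 1e-12
```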
Generalizations:
The Vilenkin system is a complete orthonormal system on G and forms a Schauder basis in $L^p(G, \mathbb{C})$, $1 < p < \infty$. Binary surfaces Romanuke showed that Walsh functions can be generalized to binary surfaces in a particular case of a function of two variables. There also exist eight Walsh-like bases of orthonormal binary functions, whose structure is nonregular (unlike the structure of Walsh functions). These eight bases are also generalized to surfaces (in the case of a function of two variables). It was proved that piecewise-constant functions can be represented within each of the nine bases (including the Walsh-function basis) as finite sums of binary functions, when weighted with proper coefficients.
Generalizations:
Nonlinear phase extensions Nonlinear phase extensions of the discrete Walsh–Hadamard transform have been developed. It was shown that nonlinear-phase basis functions with improved cross-correlation properties significantly outperform traditional Walsh codes in code-division multiple access (CDMA) communications.
Applications:
Applications of the Walsh functions can be found wherever digit representations are used, including speech recognition, medical and biological image processing, and digital holography.
Applications:
For example, the fast Walsh–Hadamard transform (FWHT) may be used in the analysis of digital quasi-Monte Carlo methods. In radio astronomy, Walsh functions can help reduce the effects of electrical crosstalk between antenna signals. They are also used in passive LCD panels as X and Y binary driving waveforms where the autocorrelation between X and Y can be made minimal for pixels that are off.
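Since the fast Walsh–Hadamard transform is mentioned above only in passing, here is a minimal sketch of the standard in-place butterfly FWHT (an illustration added here, in Hadamard rather than Walsh–Paley ordering) for a vector whose length is a power of two.

```python
def fwht(a: list[float]) -> list[float]:
    """Fast Walsh-Hadamard transform (Hadamard ordering); len(a) must be a power of two."""
    a = list(a)                                # work on a copy
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y  # butterfly step
        h *= 2
    return a

# Applying the transform twice returns the input scaled by its length (H_N * H_N = N * I)
data = [1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0]
assert fwht(fwht(data)) == [len(data) * v for v in data]
```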
**Obliteration by incorporation**
Obliteration by incorporation:
In sociology of science, obliteration by incorporation (OBI) occurs when, at some stage in the development of a science, certain ideas become so universally accepted and commonly used that their contributors are no longer cited. Eventually, the source and creator of an idea are forgotten ("obliterated") as the concept enters common knowledge (is "incorporated"). Obliteration occurs when "the sources of an idea, finding or concept, become obliterated by incorporation in canonical knowledge, so that only a few are still aware of their parentage".
Concept:
The concept was introduced by Robert K. Merton in 1949, although some incorrectly attribute it to Eugene Garfield, whose work contributed to the popularization of Merton's theory. Merton introduced the concept of "obliteration by incorporation" in his landmark work, Social Theory and Social Structure, in 1949 (although the revised edition of 1968 is usually cited (pp. 27–28, 35–37 in the enlarged edition)). Merton also introduced the less known counterpart to this concept, adumbrationism, meaning the attribution of insights, ideas or analogies absent from original works.

In the process of "obliteration by incorporation", both the original idea and the literal formulations of it are forgotten due to prolonged and widespread use, and enter into everyday language (or at least the everyday language of a given academic discipline), no longer being attributed to their creator. Thus they become similar to common knowledge. Merton notes that this process is much more common in highly codified fields of natural sciences than in social sciences. It can also lead to ignoring or hiding the early sources of recent ideas under the claims of novelty and originality. Allan Chapman notes that 'obliteration by incorporation' often affects famous individuals, to whom attribution becomes considered as obvious and unnecessary, thus leading to their exclusion from citations, even if they and their ideas have been mentioned in the text. Marianne Ferber and Eugene Garfield concur with Chapman, noting that obliteration often occurs when the citation count and reputation of an affected scientist have already reached levels much higher than average.

The obliteration phenomenon is a concept in library and information science, referring to the tendency for truly ground-breaking research papers to fail to be cited after the ideas they put forward are fully accepted into the orthodox world view. For example, Albert Einstein's paper on the theory of relativity is rarely cited in modern research papers on physical cosmology, despite its direct relevance.
Examples:
Many terms and phrases were so evocative that they quickly suffered the fate of 'obliteration by incorporation'. Examples include: the double helix structure of DNA, introduced by James D. Watson and Francis Crick; the periodic table of elements, introduced by Dmitri Mendeleev; the self-fulfilling prophecy, introduced by Robert K. Merton; the role model, introduced by Robert K. Merton; and deconstruction, introduced by Jacques Derrida.
**Dentinogenesis**
Dentinogenesis:
Dentinogenesis is the formation of dentin, a substance that forms the majority of teeth. Dentinogenesis is performed by odontoblasts, which are a special type of biological cell on the outer wall of dental pulps, and it begins at the late bell stage of a tooth development. The different stages of dentin formation after differentiation of the cell result in different types of dentin: mantle dentin, primary dentin, secondary dentin, and tertiary dentin.
Odontoblast differentiation:
Odontoblasts differentiate from cells of the dental papilla. This differentiation is triggered by signaling molecules and growth factors from the inner enamel epithelium (IEE).
Formation of mantle dentin:
The newly differentiated odontoblasts begin secreting an organic matrix around the area directly adjacent to the IEE, closest to the area of the future cusp of a tooth. The organic matrix contains collagen fibers with large diameters (0.1–0.2 μm in diameter). The odontoblasts begin to move toward the center of the tooth, forming an extension called the odontoblast process. Thus, dentin formation proceeds toward the inside of the tooth. The odontoblast process causes the secretion of hydroxyapatite crystals and mineralization of the matrix (mineralization occurs due to matrix vesicles). This area of mineralization is known as mantle dentin and is a layer usually about 20–150 μm thick.
Formation of primary dentin:
Whereas mantle dentin forms from the preexisting ground substance of the dental papilla, primary dentin forms through a different process. Odontoblasts increase in size, eliminating the availability of any extracellular resources to contribute to an organic matrix for mineralization. Additionally, the larger odontoblasts cause collagen to be secreted in smaller amounts, which results in more tightly arranged, heterogeneous nucleation that is used for mineralization. Other materials (such as lipids, phosphoproteins, and phospholipids) are also secreted. There is some dispute about the control of mineralization during dentinogenesis.

The dentin in the root of a tooth forms only after the presence of Hertwig's epithelial root sheath (HERS), near the cervical loop of the enamel organ. Root dentin is considered different from dentin found in the crown of the tooth (known as coronal dentin) because of the different orientation of collagen fibers, as well as the possible decrease of phosphophoryn levels and less mineralization.

Maturation of dentin, or mineralization of predentin, occurs soon after its apposition and takes place in two phases: primary and secondary. Initially, the calcium hydroxyapatite crystals form as globules, or calcospherules, in the collagen fibers of the predentin, which allows for both expansion and fusion during the primary mineralization phase. Later, new areas of mineralization occur as globules form in the partially mineralized predentin during the secondary mineralization phase. These new areas of crystal formation are more or less regularly layered on the initial crystals, allowing them to expand, although they fuse incompletely.
Formation of primary dentin:
In areas where both primary and secondary mineralization have occurred with complete crystalline fusion, these appear as lighter rounded areas on a stained section of dentin and are considered globular dentin. In contrast, the darker arclike areas in a stained section of dentin are considered interglobular dentin. In these areas, only primary mineralization has occurred within the predentin, and the globules of dentin do not fuse completely. Thus, interglobular dentin is slightly less mineralized than globular dentin. Interglobular dentin is especially evident in coronal dentin, near the DEJ, and in certain dental anomalies, such as in dentin dysplasia.
Formation of secondary dentin:
Secondary dentin is formed after root formation is finished and occurs at a much slower rate. It is not formed at a uniform rate along the tooth, but instead forms faster along sections closer to the crown of a tooth. This development continues throughout life and accounts for the smaller areas of pulp found in older individuals.
Formation of tertiary dentin:
Tertiary dentin is deposited at specific sites, in response to injury, by odontoblasts or replacement odontoblasts from the pulp, depending on the severity of the injury. Tertiary dentin can be divided into reactionary or reparative dentin. Reactionary dentin is formed by odontoblasts when the injury does not damage the odontoblast layer. Reparative dentin is formed by replacement odontoblasts when the injury is so severe that it damages a part of the primary odontoblast layer. Thus a type of tertiary dentin forms in reaction to stimuli, such as attrition or dental caries.
**Session (cricket)**
Session (cricket):
In cricket, a session is a period of play during which overs are played continuously until a break in play is called.
Session (cricket):
In Test matches, each of the five potential days of the match typically comprises three main sessions, usually referred to as the morning, afternoon, and evening sessions. The morning and afternoon sessions are usually separated by a 40-minute lunch break, and the afternoon and evening sessions by a 20-minute tea break. Each of the three sessions is approximately 30 overs long, and is broken up further into two to three minor sessions varying in length, separated by drinks breaks. The exact timing of these intra-session breaks is the umpiring team's call.

In One Day Internationals, matches are played over two innings, with three sessions in each, usually in lengths of 15, 15, and 20 overs. These three sessions may also contain short drinks breaks. Additionally, day-time ODI matches include a lunch break between the first and second innings. In day-night ODI matches, the lunch break is replaced by a dinner break.

Sessions of play often influence a team's tactics for a match, especially as natural light varies over the course of the day, and the pitch wears over the course of a match, whether one-day or Test. For example, teams usually choose opening Test batsmen who can navigate opening bowlers, who often bowl aggressively in the first session of a Test match. Similarly, Test teams sometimes deploy a nightwatchman during the closing session of a day so as not to lose important wickets in conditions that might be difficult for an incoming batsman to manage.
**Depiction**
Depiction:
Depiction is reference conveyed through pictures. A picture refers to its object through a non-linguistic two-dimensional scheme, and is distinct from writing or notation. A depictive two-dimensional scheme is called a picture plane and may be constructed according to descriptive geometry, where such schemes are usually divided between projections (orthogonal and various oblique angles) and perspectives (according to number of vanishing points).
Depiction:
Pictures are made with various materials and techniques, such as painting, drawing, or prints (including photography and movies), mosaics, tapestries, stained glass, and collages of unusual and disparate elements. Occasionally, picture-like features may be recognised in simple inkblots, accidental stains, peculiar clouds or a glimpse of the moon, but these are special cases, and it is controversial whether they count as genuine instances of depiction. Similarly, sculpture and theatrical performances are sometimes said to depict, but this requires a broad understanding of 'depict', as simply designating a form of representation that is not linguistic or notational. The bulk of studies of depiction however deal only with pictures. While sculpture and performance clearly represent or refer, they do not strictly picture their objects. Objects pictured may be factual or fictional, literal or metaphorical, realistic or idealised, and in various combinations. Idealised depiction is also termed schematic or stylised and extends to icons, diagrams and maps. Classes or styles of picture may abstract their objects by degrees or, conversely, establish degrees of the concrete (usually called, a little confusingly, figuration or the figurative, since the 'figurative' is then often quite literal). Stylisation can lead to the fully abstract picture, where reference is only to conditions for a picture plane – a severe exercise in self-reference and ultimately a sub-set of pattern.
Depiction:
But just how pictures function remains controversial. Philosophers, art historians and critics, perceptual psychologists and other researchers in the arts and social sciences have contributed to the debate and many of the most influential contributions have been interdisciplinary. Some key positions are briefly surveyed below.
Resemblance:
Traditionally, depiction is distinguished from denotative meaning by the presence of a mimetic element or resemblance. A picture resembles its object in a way a word or sound does not. Resemblance is no guarantee of depiction, obviously. Two pens may resemble one another but do not therefore depict each other. To say a picture resembles its object especially is only to say that its object is that which it especially resembles; which strictly begins with the picture itself. Indeed, since everything resembles something in some way, mere resemblance as a distinguishing trait is trivial. Moreover, depiction is no guarantee of resemblance to an object. A picture of a dragon does not resemble an actual dragon. So resemblance is not enough.
Resemblance:
Theories have tried either to set further conditions to the kind of resemblance necessary, or sought ways in which a notational system might allow such resemblance. It is widely believed that the problem with a resemblance theory of depiction is that resemblance is a symmetrical relation between terms (necessarily, if x resembles y, then y resembles x) while in contrast depiction is at best a non-symmetrical relation (it is not necessary that, if x depicts y, y depicts x). If this is right, then depiction and resemblance cannot be identified, and a resemblance theory of depiction is forced to offer a more complicated explanation, for example by relying on experienced resemblance instead, which clearly is an asymmetrical notion (that you experience x as resembling y does not mean you also experience y as resembling x). Others have argued, however, that the concept of resemblance is not exclusively a relational notion, and so that the initial problem is merely apparent. In art history, the history of actual attempts to achieve resemblance in depictions is usually covered under the terms "realism", "naturalism", or "illusionism".
Illusion:
The most famous and elaborate case for resemblance modified by reference is made by art historian Ernst Gombrich. Resemblance in pictures is taken to involve illusion. Instincts in visual perception are said to be triggered or alerted by pictures, even when we are rarely deceived. The eye supposedly cannot resist finding resemblances that accord with illusion. Resemblance is thus narrowed to something like the seeds of illusion. Against the one-way relation of reference Gombrich argues for a weaker or labile relation, inherited from substitution. Pictures are thus both more primitive and powerful than stricter reference.
Illusion:
But whether a picture can deceive a little while it represents as much seems gravely compromised. Claims for innate dispositions in sight are also contested. Gombrich appeals to an array of psychological research from James J. Gibson, R. L. Gregory, John M. Kennedy, Konrad Lorenz, Ulric Neisser and others in arguing for an 'optical' basis to perspective, in particular (see also perspective (graphical)). Subsequent cross-cultural studies in depictive competence and related studies in child development and vision impairment are inconclusive at best.
Illusion:
Gombrich's convictions have important implications for his popular history of art, for treatment and priorities there. In a later study by John Willats (1997) on the variety and development of picture planes, Gombrich's views on the greater realism of perspective underpin many crucial findings.
Dual invariants:
A more frankly behaviouristic view is taken by the perceptual psychologist James J. Gibson, partly in response to Gombrich. Gibson treats visual perception as the eye registering necessary information for behaviour in a given environment. The information is filtered from light rays that meet the retina. The light is called the stimulus energy or sensation. The information consists of underlying patterns or 'invariants' for vital features to the environment.
Dual invariants:
Gibson's view of depiction concerns the re-presentation of these invariants. In the case of illusions or trompe l'oeil, the picture also conveys the stimulus energy, but generally the experience is of perceiving two sets of invariants, one for the picture surface, another for the object pictured. He pointedly rejects any seeds of illusion or substitution and allows that a picture represents when two sets of invariants are displayed. But invariants tell us little more than that the resemblance is visible; dual invariants tell us only that the terms of reference are the same as those for resemblance.
Seeing-in:
A similar duality is proposed by the philosopher of art Richard Wollheim. He calls it 'twofoldness'. Our experience of the picture surface is called the 'configurational' aspect, and our experience of the object depicted the 'recognitional'. Wollheim's main claim is that we are simultaneously aware of both the surface and the depicted object. The concept of twofoldness has been very influential in contemporary analytic aesthetics, especially in the writings of Dominic Lopes and of Bence Nanay. Again, illusion is forestalled by the prominence of the picture surface where an object is depicted. Yet the object depicted quite simply is the picture surface under one reading, the surface indifferent to picture, another. The two are hardly compatible or simultaneous. Nor do they ensure a reference relation.
Seeing-in:
Wollheim introduces the concept of 'seeing-in' to qualify depictive resemblance. Seeing-in is a psychological disposition to detect a resemblance between certain surfaces, such as inkblots or accidental stains, etc. and three-dimensional objects. The eye is not deceived, but finds or projects some resemblance to the surface. This is not quite depiction, since the resemblance is only incidental to the surface. The surface does not strictly refer to such objects. Seeing-in is a necessary condition to depiction, and sufficient when in accordance with the maker's intentions, where these are clear from certain features to a picture. But seeing-in cannot really say in what way such surfaces resemble objects either, only specify where they perhaps first occur.
Seeing-in:
Wollheim's account of how a resemblance is agreed or modified, whereby maker and user anticipate each other's roles, does not really explain how a resemblance refers, but rather when an agreed resemblance obtains.
Other psychological resources:
The appeal to broader psychological factors in qualifying depictive resemblance is echoed in the theories of philosophers such as Robert Hopkins, Flint Schier and Kendall Walton. They enlist 'experience', 'recognition' and 'imagination' respectively. Each provides additional factors to an understanding or interpretation of pictorial reference, although none can explain how a picture resembles an object (if indeed it does), nor how this resemblance is then also a reference.
Other psychological resources:
For example, Schier returns to the contrast with language to try to identify a crucial difference in depictive competence. Understanding a pictorial style does not depend upon learning a vocabulary and syntax. Once grasped, a style allows the recognition of any object known to the user. Of course recognition allows a great deal more than that – books teaching children to read often introduce them to many exotic creatures such as a kangaroo or armadillo through illustrations. Many fictions and caricatures are promptly recognised without prior acquaintance of either a particular style or the object in question. So competence cannot rely on a simple index or synonymy for objects and styles.
Other psychological resources:
Schier's conclusion that lack of syntax and semantics in reference then qualifies as depiction, leaves dance, architecture, animation, sculpture and music all sharing the same mode of reference. This perhaps points as much to limitations in a linguistic model.
Notation:
Reversing orthodoxy, the philosopher Nelson Goodman starts from reference and attempts to assimilate resemblance. He denies resemblance as either a necessary or a sufficient condition for depiction but, surprisingly, allows that it arises and fluctuates as a matter of usage or familiarity. For Goodman, a picture denotes. Denotation is divided between description, covering writing and extending to more discursive notation including music and dance scores, and depiction at greatest remove. However, a word does not grow to resemble its object, no matter how familiar or preferred. To explain how a pictorial notation does, Goodman proposes an analogue system, consisting of undifferentiated characters, a density of syntax and semantics and relative repleteness of syntax. These requirements taken in combination mean that a one-way reference running from picture to object encounters a problem. If its semantics is undifferentiated, then the relation flows back from object to picture. Depiction can acquire resemblance but must surrender reference. This is a point tacitly acknowledged by Goodman, conceding firstly that density is the antithesis of notation and later that lack of differentiation may actually permit resemblance. A denotation without notation lacks sense.
Notation:
Nevertheless, Goodman's framework is revisited by philosopher John Kulvicki and applied by art historian James Elkins to an array of hybrid artefacts, combining picture, pattern and notation.
Pictorial semiotics:
Pictorial semiotics aims for just the kind of integration of depiction with notation undertaken by Goodman, but fails to identify his requirements for syntax and semantics. It seeks to apply the model of structural linguistics, to reveal core meanings and permutations for pictures of all kinds, but stalls in identifying constituent elements of reference, or as semioticians prefer, 'signification'. Similarly, they accept resemblance although call it 'iconicity' (after Charles Sanders Peirce, 1931–58) and are uncomfortable in qualifying its role. Older practitioners, such as Roland Barthes and Umberto Eco, variously shift analysis to underlying 'connotations' for an object depicted or concentrate on description of purported content at the expense of more medium-specific meaning. Essentially they establish a more general iconography.
Pictorial semiotics:
A later adherent, Göran Sonesson, rejects Goodman's terms for syntax and semantics as alien to linguistics, no more than an ideal, and turns instead to the findings of perceptual psychologists, such as J. M. Kennedy, N. H. Freeman and David Marr, in order to detect underlying structure. Sonesson accepts 'seeing-in', although prefers Edmund Husserl's version. Resemblance is again grounded in optics or the visible, although this does not exclude writing nor reconcile resemblance with reference. Discussion tends to be restricted to the function of outlines in schemes for depth.
Deixis:
The art historian Norman Bryson persists with a linguistic model and advances a detail of parsing and tense, 'deixis'. He rejects resemblance and illusion as incompatible with the ambiguities and interpretation available to pictures and is also critical of the inflexible nature of structuralist analysis. Deixis is taken as the rhetoric of the narrator, indicating the presence of the speaker in a discourse, a bodily or physical aspect as well as an explicit temporal dimension. In depiction this translates as a difference between 'The Gaze' where deixis is absent and 'The Glance' where it is present. Where present, details to materials indicate how long and in what way the depiction was made, where absent, a telling suppression or prolonging of the act. The distinction attempts to account for the 'plastic' or medium-specific qualities absent from earlier semiotic analyses and somewhat approximates the 'indexic' aspect to signs introduced by Peirce.
Deixis:
Deixis offers a more elaborate account of the picture surface and broad differences to expression and application but cannot qualify resemblance.
Iconography:
Lastly, iconography is the study of pictorial content, mainly in art, and would seem to ignore the question of how to concentrate upon what. But iconography's findings take a rather recondite view of content, are often based on subtle literary, historical and cultural allusion and highlight a sharp difference in terms of resemblance, optical accuracy or intuitive illusion. Resemblance is hardly direct or spontaneous for the iconographer, reference rarely to the literal or singular. Visual perception here is subject to reflection and research, the object as much reference as referent.
Iconography:
The distinguished art historian Erwin Panofsky allowed three levels to iconography. The first is 'natural' content, the object recognised or resembling without context; on a second level, a modifying historical and cultural context; and at a third, deeper level, a fundamental structure or ideology (called iconology). He even ascribed a deep social meaning to the use of perspective (1927). However, more recently a natural or neutral level tends to be abandoned as mythical. The cultural scholar W. J. T. Mitchell looks to ideology to determine resemblance and depiction as acknowledgement of shifts in relations there, albeit by an unspecified scheme or notation.
Iconography:
Iconography points to differences in scope for a theory of depiction. Where stylistics and a basic object is nominated, resemblance is prominent, but where more elaborate objects are encountered, or terms for nature denied, simple perception or notation flounder. The difference corresponds somewhat to the division in philosophy between the analytic and continental.
Other issues:
Dozens of factors influence depictions and how they are represented. These include the equipment used to create the depiction, the creator's intent, vantage point, mobility, proximity, publication format, among others, and, when dealing with human subjects, their potential desire for impression management.
Other debates about the nature of depiction include the relationship between seeing something in a picture and seeing face to face, whether depictive representation is conventional, how understanding novel depictions is possible, the aesthetic and ethical value of depiction and the nature of realism in pictorial art.
**Hair washing**
Hair washing:
Hair washing is the cosmetic act of keeping hair clean by washing it. To remove sebum from hair, some apply a surfactant, usually shampoo (sometimes soap) to their hair and lather the surfactant with water. The surfactant is rinsed out with water along with the dirt that it bonds to.
Hair washing:
Furthermore, there are dry shampoos: powders that remove sebum from hair by soaking it up prior to being combed out. People often use dry shampoo if they would like to postpone their hair wash or simply to save time. Hair washing and dry shampoo keep the hair healthy, add volume to the hair, remove dirt and odors, and remove oils from the scalp.
Hairdressing:
Most hairdressers in Canada, the US, Europe, and Latin America offer a hair wash as a service before or after a haircut. This is usually done to make the hair more manageable for the hairdresser performing the haircut. After a haircut, it can remove loose strands of hair. It is also a relaxing practice, and many clients enjoy a hair wash as part of a haircut.
Hairdressing:
Hairdressers use specialized basins to perform a hair wash; these can be either forward or backward style. In the backward version (the more common), the client sits in a chair, and leans their head back into a sink, with the hairdresser standing behind them. In the forward version, the client leans forward over a sink, and the hairdresser stands over them to wash their hair. In some parts of the world, such as China, it is not uncommon to see what is referred to as an 'upright' shampoo. In this style, the client sits in a chair, while a hairdresser applies shampoo to their hair and adds water. They then rinse off into a basin.
**Mersenne Twister**
Mersenne Twister:
The Mersenne Twister is a general-purpose pseudorandom number generator (PRNG) developed in 1997 by Makoto Matsumoto (松本 眞) and Takuji Nishimura (西村 拓士). Its name derives from the fact that its period length is chosen to be a Mersenne prime.
The Mersenne Twister was designed specifically to rectify most of the flaws found in older PRNGs.
The most commonly used version of the Mersenne Twister algorithm is based on the Mersenne prime 2^19937 − 1. The standard implementation of that, MT19937, uses a 32-bit word length. There is another implementation (with five variants) that uses a 64-bit word length, MT19937-64; it generates a different sequence.
Application:
Software The Mersenne Twister is used as the default PRNG by the following software: Programming languages: Dyalog APL, IDL, R, Ruby, Free Pascal, PHP, Python (also available in NumPy, however the default was changed to PCG64 instead as of version 1.17), CMU Common Lisp, Embeddable Common Lisp, Steel Bank Common Lisp, Julia (up to Julia 1.6 LTS, still available in later versions, but a better/faster RNG is used by default as of 1.7). Linux libraries and software: GLib, GNU Multiple Precision Arithmetic Library, GNU Octave, GNU Scientific Library. Other: Microsoft Excel, GAUSS, gretl, Stata, SageMath, Scilab, Maple, MATLAB. It is also available in Apache Commons, in the standard C++ library (since C++11), and in Mathematica. Add-on implementations are provided in many program libraries, including the Boost C++ Libraries, the CUDA Library, and the NAG Numerical Library.

The Mersenne Twister is one of two PRNGs in SPSS: the other generator is kept only for compatibility with older programs, and the Mersenne Twister is stated to be "more reliable". The Mersenne Twister is similarly one of the PRNGs in SAS: the other generators are older and deprecated. The Mersenne Twister is the default PRNG in Stata; the other one is KISS, for compatibility with older versions of Stata.
Advantages:
Permissively-licensed and patent-free for all variants except CryptMT.
Passes numerous tests for statistical randomness, including the Diehard tests and most, but not all of the TestU01 tests.
A very long period of 2^19937 − 1. Note that while a long period is not a guarantee of quality in a random number generator, short periods, such as the 2^32 common in many older software packages, can be problematic.
k-distributed to 32-bit accuracy for every 1 ≤ k ≤ 623 (for a definition of k-distributed, see below). Implementations generally create random numbers faster than hardware-implemented methods. A study found that the Mersenne Twister creates 64-bit floating point random numbers approximately twenty times faster than the hardware-implemented, processor-based RDRAND instruction set.
Disadvantages:
Relatively large state buffer, of 2.5 KiB, unless the TinyMT variant (discussed below) is used.
Mediocre throughput by modern standards, unless the SFMT variant (discussed below) is used.
Exhibits two clear failures (linear complexity) in both Crush and BigCrush in the TestU01 suite. The test, like the Mersenne Twister itself, is based on F2-algebra.
Multiple instances that differ only in seed value (but not other parameters) are not generally appropriate for Monte-Carlo simulations that require independent random number generators, though there exists a method for choosing multiple sets of parameter values.
Disadvantages:
Poor diffusion: can take a long time to start generating output that passes randomness tests, if the initial state is highly non-random—particularly if the initial state has many zeros. A consequence of this is that two instances of the generator, started with initial states that are almost the same, will usually output nearly the same sequence for many iterations, before eventually diverging. The 2002 update to the MT algorithm has improved initialization, so that beginning with such a state is very unlikely. The GPU version (MTGP) is said to be even better.
Disadvantages:
Contains subsequences with more 0's than 1's. This adds to the poor diffusion property to make recovery from many-zero states difficult.
Is not cryptographically secure, unless the CryptMT variant (discussed below) is used. The reason is that observing a sufficient number of iterations (624 in the case of MT19937, since this is the size of the state vector from which future iterations are produced) allows one to predict all future iterations.
Alternatives:
An alternative generator, WELL ("Well Equidistributed Long-period Linear"), offers quicker recovery, equal randomness, and nearly equal speed. Marsaglia's xorshift generators and variants are the fastest in the class of LFSRs. 64-bit MELGs ("64-bit Maximally Equidistributed F2-Linear Generators with Mersenne Prime Period") are completely optimized in terms of the k-distribution properties. The ACORN family (published 1989) is another k-distributed PRNG, which shows similar computational speed to MT and better statistical properties, as it satisfies all the current (2019) TestU01 criteria; when used with appropriate choices of parameters, ACORN can have an arbitrarily long period and precision.
Alternatives:
The PCG family is a more modern long-period generator, with better cache locality, and less detectable bias using modern analysis methods.
k-distribution:
A pseudorandom sequence xi of w-bit integers of period P is said to be k-distributed to v-bit accuracy if the following holds.
Let trunc_v(x) denote the number formed by the leading v bits of x, and consider the P vectors of k v-bit numbers given by (trunc_v(x_i), trunc_v(x_{i+1}), …, trunc_v(x_{i+k−1})) for 0 ≤ i < P. Then each of the 2^(kv) possible combinations of bits occurs the same number of times in a period, except for the all-zero combination, which occurs once less often.
Algorithmic detail:
For a w-bit word length, the Mersenne Twister generates integers in the range [0, 2^w − 1]. The Mersenne Twister algorithm is based on a matrix linear recurrence over the finite binary field F2. The algorithm is a twisted generalised feedback shift register (twisted GFSR, or TGFSR) of rational normal form (TGFSR(R)), with state bit reflection and tempering. The basic idea is to define a series x_i through a simple recurrence relation, and then output numbers of the form x_i T, where T is an invertible F2-matrix called a tempering matrix.
Algorithmic detail:
The general algorithm is characterized by the following quantities (some of these explanations make sense only after reading the rest of the algorithm):
w: word size (in number of bits)
n: degree of recurrence
m: middle word, an offset used in the recurrence relation defining the series x, 1 ≤ m < n
r: separation point of one word, or the number of bits of the lower bitmask, 0 ≤ r ≤ w − 1
a: coefficients of the rational normal form twist matrix
b, c: TGFSR(R) tempering bitmasks
s, t: TGFSR(R) tempering bit shifts
u, d, l: additional Mersenne Twister tempering bit shifts/masks
with the restriction that 2^(nw−r) − 1 is a Mersenne prime. This choice simplifies the primitivity test and k-distribution test that are needed in the parameter search.
Algorithmic detail:
The series x is defined as a series of w-bit quantities with the recurrence relation $x_{k+n} := x_{k+m} \oplus \left( (x_k^u \mid x_{k+1}^l) A \right)$, $k = 0, 1, \ldots$, where ∣ denotes concatenation of bit vectors (with upper bits on the left), ⊕ the bitwise exclusive or (XOR), $x_k^u$ means the upper w − r bits of $x_k$, and $x_{k+1}^l$ means the lower r bits of $x_{k+1}$. The twist transformation A is defined in rational normal form as $A = \begin{pmatrix} 0 & I_{w-1} \\ a_{w-1} & (a_{w-2}, \ldots, a_0) \end{pmatrix}$, with $I_{w-1}$ as the (w − 1) × (w − 1) identity matrix. The rational normal form has the benefit that multiplication by A can be efficiently expressed as (remember that here matrix multiplication is being done in F2, and therefore bitwise XOR takes the place of addition) $\boldsymbol{x}A = \begin{cases} \boldsymbol{x} \gg 1 & x_0 = 0 \\ (\boldsymbol{x} \gg 1) \oplus \boldsymbol{a} & x_0 = 1 \end{cases}$, where $x_0$ is the lowest-order bit of x. As with the TGFSR(R), the Mersenne Twister is cascaded with a tempering transform to compensate for the reduced dimensionality of equidistribution (because of the choice of A being in rational normal form). Note that this is equivalent to using the matrix $T^{-1}AT$ for an invertible matrix T, and therefore the analysis of the characteristic polynomial mentioned below still holds.
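A minimal sketch (added here, using the MT19937 constants given later in the article) of how one step of this recurrence is usually coded: the concatenation and the multiplication by A reduce to masking, a shift, and a conditional XOR with a.

```python
# MT19937 constants: w = 32, n = 624, m = 397, r = 31, a = 0x9908B0DF
N, M, R, MATRIX_A = 624, 397, 31, 0x9908B0DF
LOWER_MASK = (1 << R) - 1                      # the lower r bits
UPPER_MASK = 0xFFFFFFFF & ~LOWER_MASK          # the upper w - r bits

def twist_step(state: list[int], k: int) -> int:
    """One step of the recurrence: x_{k+n} = x_{k+m} XOR ((x_k^u | x_{k+1}^l) A)."""
    x = (state[k % N] & UPPER_MASK) | (state[(k + 1) % N] & LOWER_MASK)
    xA = x >> 1                                # multiplication by A in rational normal form...
    if x & 1:                                  # ...is a shift, plus XOR with a when x_0 = 1
        xA ^= MATRIX_A
    return state[(k + M) % N] ^ xA
```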
Algorithmic detail:
As with A, we choose a tempering transform to be easily computable, and so do not actually construct T itself. The tempering is defined in the case of the Mersenne Twister as
y ≡ x ⊕ ((x ≫ u) & d)
y ≡ y ⊕ ((y ≪ s) & b)
y ≡ y ⊕ ((y ≪ t) & c)
z ≡ y ⊕ (y ≫ l)
where x is the next value from the series, y is a temporary intermediate value, and z is the value returned from the algorithm, with ≪ and ≫ as the bitwise left and right shifts, and & as the bitwise AND. The first and last transforms are added in order to improve lower-bit equidistribution. From the property of TGFSR, s + t ≥ ⌊w/2⌋ − 1 is required to reach the upper bound of equidistribution for the upper bits.
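A short sketch (added here) of the tempering above, using the MT19937 shift and mask constants listed below; the function name temper is illustrative only.

```python
# MT19937 tempering constants (u, d), (s, b), (t, c), l
U, D = 11, 0xFFFFFFFF
S, B = 7, 0x9D2C5680
T, C = 15, 0xEFC60000
L = 18

def temper(x: int) -> int:
    """Tempering transform applied to one raw 32-bit word x of the series."""
    y = x ^ ((x >> U) & D)
    y = y ^ ((y << S) & B)
    y = y ^ ((y << T) & C)
    z = y ^ (y >> L)
    return z & 0xFFFFFFFF
```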
Algorithmic detail:
The coefficients for MT19937 are:
(w, n, m, r) = (32, 624, 397, 31)
a = 9908B0DF₁₆
(u, d) = (11, FFFFFFFF₁₆)
(s, b) = (7, 9D2C5680₁₆)
(t, c) = (15, EFC60000₁₆)
l = 18
Note that 32-bit implementations of the Mersenne Twister generally have d = FFFFFFFF₁₆. As a result, the d is occasionally omitted from the algorithm description, since the bitwise AND with d in that case has no effect.
Algorithmic detail:
The coefficients for MT19937-64 are:
(w, n, m, r) = (64, 312, 156, 31)
a = B5026F5AA96619E9₁₆
(u, d) = (29, 5555555555555555₁₆)
(s, b) = (17, 71D67FFFEDA60000₁₆)
(t, c) = (37, FFF7EEE000000000₁₆)
l = 43

Initialization The state needed for a Mersenne Twister implementation is an array of n values of w bits each. To initialize the array, a w-bit seed value is used to supply x_0 through x_{n−1} by setting x_0 to the seed value and thereafter setting x_i = f × (x_{i−1} ⊕ (x_{i−1} ≫ (w − 2))) + i for i from 1 to n − 1. The first value the algorithm then generates is based on x_n, not on x_0. The constant f forms another parameter to the generator, though not part of the algorithm proper.
Algorithmic detail:
The value for f for MT19937 is 1812433253.
The value for f for MT19937-64 is 6364136223846793005.
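A small sketch (added here) of the seeding recurrence just described, using the MT19937 values n = 624, w = 32 and f = 1812433253; seed_mt is an illustrative name.

```python
N, W, F = 624, 32, 1812433253                  # MT19937: n, w and the seeding constant f
MASK = (1 << W) - 1

def seed_mt(seed: int) -> list[int]:
    """Fill the n-word state array from a single w-bit seed, per the recurrence above."""
    state = [seed & MASK]
    for i in range(1, N):
        prev = state[i - 1]
        state.append((F * (prev ^ (prev >> (W - 2))) + i) & MASK)
    return state

state = seed_mt(5489)                          # 5489 is the default seed in the reference C code
assert len(state) == N
```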
Algorithmic detail:
Comparison with classical GFSR In order to achieve the 2^(nw−r) − 1 theoretical upper limit of the period in a TGFSR, φ_B(t) must be a primitive polynomial, φ_B(t) being the characteristic polynomial of the transition matrix B of the recurrence, which involves the block $S = \begin{pmatrix} 0 & I_r \\ I_{w-r} & 0 \end{pmatrix} A$.

The twist transformation improves the classical GFSR with the following key properties: the period reaches the theoretical upper limit 2^(nw−r) − 1 (except if initialized with 0), and equidistribution in n dimensions (e.g. linear congruential generators can at best manage reasonable distribution in five dimensions).

Pseudocode The following pseudocode implements the general Mersenne Twister algorithm. The constants w, n, m, r, a, u, d, s, b, t, c, l, and f are as in the algorithm description above. It is assumed that int represents a type sufficient to hold values with w bits:

    // Create a length n array to store the state of the generator
    int[0..n-1] MT
    int index := n+1
    const int lower_mask = (1 << r) - 1 // That is, the binary number of r 1's
    const int upper_mask = lowest w bits of (not lower_mask)

    // Initialize the generator from a seed
    function seed_mt(int seed) {
        index := n
        MT[0] := seed
        for i from 1 to (n - 1) { // loop over each element
            MT[i] := lowest w bits of (f * (MT[i-1] xor (MT[i-1] >> (w-2))) + i)
        }
    }

    // Extract a tempered value based on MT[index],
    // calling twist() every n numbers
    function extract_number() {
        if index >= n {
            if index > n {
                error "Generator was never seeded"
                // Alternatively, seed with constant value; 5489 is used in reference C code
            }
            twist()
        }

        int y := MT[index]
        y := y xor ((y >> u) and d)
        y := y xor ((y << s) and b)
        y := y xor ((y << t) and c)
        y := y xor (y >> l)

        index := index + 1
        return lowest w bits of (y)
    }

    // Generate the next n values from the series x_i
    function twist() {
        for i from 0 to (n-1) {
            int x := (MT[i] and upper_mask) | (MT[(i+1) mod n] and lower_mask)
            int xA := x >> 1
            if (x mod 2) != 0 { // lowest bit of x is 1
                xA := xA xor a
            }
            MT[i] := MT[(i + m) mod n] xor xA
        }
        index := 0
    }
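As a usage-level aside (a note added here, not from the article): CPython's built-in random module is itself an MT19937 implementation, so the n = 624 word state described above can be observed directly; the exact layout returned by getstate() is a CPython implementation detail.

```python
import random

rng = random.Random(12345)
print(rng.getrandbits(32))         # one 32-bit MT19937 output
internal = rng.getstate()[1]       # CPython: the 624 state words plus the current index
print(len(internal))               # 625
```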
Variants:
CryptMT CryptMT is a stream cipher and cryptographically secure pseudorandom number generator which uses Mersenne Twister internally. It was developed by Matsumoto and Nishimura alongside Mariko Hagita and Mutsuo Saito. It has been submitted to the eSTREAM project of the eCRYPT network. Unlike Mersenne Twister or its other derivatives, CryptMT is patented.
Variants:
MTGP MTGP is a variant of Mersenne Twister optimised for graphics processing units published by Mutsuo Saito and Makoto Matsumoto. The basic linear recurrence operations are extended from MT and parameters are chosen to allow many threads to compute the recursion in parallel, while sharing their state space to reduce memory load. The paper claims improved equidistribution over MT and performance on a very old GPU (Nvidia GTX260 with 192 cores) of 4.7 ms for 5×10^7 random 32-bit integers.
Variants:
SFMT The SFMT (SIMD-oriented Fast Mersenne Twister) is a variant of Mersenne Twister, introduced in 2006, designed to be fast when it runs on 128-bit SIMD.
It is roughly twice as fast as Mersenne Twister.
It has a better equidistribution property of v-bit accuracy than MT but worse than WELL ("Well Equidistributed Long-period Linear").
It has quicker recovery from zero-excess initial state than MT, but slower than WELL.
It supports various periods from 2^607 − 1 to 2^216091 − 1. Intel SSE2 and PowerPC AltiVec are supported by SFMT. It is also used for games with the Cell BE in the PlayStation 3.
Variants:
TinyMT TinyMT is a variant of Mersenne Twister, proposed by Saito and Matsumoto in 2011. TinyMT uses just 127 bits of state space, a significant decrease compared to the original's 2.5 KiB of state. However, it has a period of 2^127 − 1, far shorter than the original, so it is only recommended by the authors in cases where memory is at a premium.
**Digital imaging technician**
Digital imaging technician:
The digital imaging technician (DIT) position was created in the motion picture industry in response to the transition from the long-established film camera medium into the current digital cinema era. The DIT is the camera department crew member who works in collaboration with the cinematographer on workflow, systemization, camera settings, signal integrity and image manipulation to achieve the highest image quality and creative goals of cinematography in the digital realm. As digitization progressed, ever more tasks concerning data management emerged, and the position of the digital imaging technician was introduced. The DIT is the connector between on-set time and post production. DITs support the camera team with technical and creative tasks with the digital camera. Their purpose is to ensure the best technical quality possible, as well as production safety. DITs are responsible for tasks during preparation, on-set time and post production. They are also responsible for managing data on set, such as making backups and quality checks of the material. In post production, the DIT hands the recordings to the post production team, possibly after checking the quality of the material and generating working copies.
Digital imaging technician:
Data backups and quality control are of great significance for the DIT who has to make sure that the original camera data and metadata is backed up at least twice daily, ensuring data integrity with checksum verification. Furthermore, the data may be backed up on LTO tape which is hardier than electronic devices and is used for long-term storage. Another copy must be made on a transfer data carrier that will be sent to post production along with the reports of the content. Again, the data has to be backed up. The data has to be accessible at all times and should be saved in a system where it can be reviewed, displaying the metadata of each clip.
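As an illustrative sketch only (not a prescribed industry workflow), the checksum verification described above can be as simple as hashing each source file and comparing digests after copying; the function names here are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Return the SHA-256 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_copy(original: Path, backup: Path) -> bool:
    """True if the backup is bit-for-bit identical to the original camera file."""
    return sha256_of(original) == sha256_of(backup)
```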
Digital imaging technician:
The DIT's role on-set has become especially prevalent through assisting cinematographers, normally accustomed to film stock, in achieving their desired look digitally. This is accomplished by the DIT through monitoring picture exposure, setting up Color Decision Lists (CDLs) on a daily basis and, if requested, "look-up tables" (LUTs) for post-production. Additionally, the DIT handles any settings in the digital camera's menu system, such as recording format and outputs. As a courtesy, the DIT also secures the digital audio recorded by the external digital audio recorder operated by the Production Sound Mixer.
Related positions:
Alongside the digital imaging technician, the digital loader position has also been created. Digital loaders may be independent or work under a digital imaging technician. A data wrangler is another common term for the role of digital loader. A digital loader supports the camera department by managing, transferring and securing all the digital data acquired on-set via any digital cinematography cameras, interacting with the 2nd AC. Depending on the scale of the project, the DIT can also wrangle data, but never the opposite. In tradition with the classic role of film loader, the 2nd AC may alternatively take on the responsibility of digital loading as an additional duty.
Related positions:
A digital loader may also be called a loader, DMT (Data Management Technician), data wrangler or film loader. Loaders usually, however, have responsibilities in addition to data wrangling, like maintaining the camera truck and completing paperwork for the camera crew. Traditional film loaders also continue to be employed on film productions where the skills of a DIT aren't applicable.
Related positions:
Prior to the DIT position and its focus on on-set image work, several other positions, including video controllers, video shaders and video engineers performed similar functions of exposure and color control on a live video feed. While video positions still exist (especially in live broadcast and studio television), the DIT position has become commonplace in cinema, commercials and digital television.
**Truba**
Truba:
The wooden trumpet (truba, Ukrainian: Труба; also known as the lihava, Cossack trumpet, or sihnal'na truba).
The truba, or lihava, is an instrument of the surma-horn type, only with a mouthpiece like that of a standard trumpet made of wood. The instrument has seven to ten finger-holes and is used in contemporary folk instrument orchestras.
**Matti Pietikäinen (academic)**
Matti Pietikäinen (academic):
Matti Kalevi Pietikäinen is a computer scientist. He is currently Professor (emer.) in the Center for Machine Vision and Signal Analysis, University of Oulu, Finland. His research interests are in texture-based computer vision, face analysis, affective computing, biometrics, and vision-based perceptual interfaces. He was Director of the Center for Machine Vision Research, and Scientific Director of Infotech Oulu.
Biography:
Pietikäinen received the Doctor of Science in Technology degree from the University of Oulu, Finland, in 1982. From 1980 to 1981 and from 1984 to 1985 he was with the Computer Vision Laboratory at the University of Maryland, working with a pioneer of computer image analysis, Professor Azriel Rosenfeld. After the first visit, he established computer vision research at the University of Oulu. For the 25th anniversary book of his group in Oulu, see the list of selected publications.
Biography:
He has authored over 350 refereed scientific publications, which have been frequently cited. He has made pioneering contributions to local binary patterns (LBP) methodology, texture-based image and video analysis, and facial image analysis.
He has been Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), Pattern Recognition, IEEE Transactions on Information Forensics and Security, Image and Vision Computing, and IEEE Transactions on Biometrics, Behavior, and Identity Science. He has also been Guest Editor for several special issues, including of IEEE TPAMI and the International Journal of Computer Vision.
Biography:
In 2011, he was named an IEEE Fellow for his contributions to texture and facial image analysis for machine vision. Already in 1994, he had been named an IAPR Fellow for contributions to machine vision and its applications in industry and for service to the IAPR. In 2018, he received the IAPR's King-Sun Fu Prize for fundamental contributions to texture analysis and facial image analysis. He was named a Highly Cited Researcher by Clarivate Analytics in 2018. Since February 2023 he has been listed by Webometrics among the highly cited researchers whose h-index is at least 100.
Selected publications:
Ojala, T.; Pietikäinen, M.; Harwood, D. (1996). "A comparative study of texture measures with classification based on feature distributions". Pattern Recognition. 29 (1): 51–59. Bibcode:1996PatRe..29...51O. doi:10.1016/0031-3203(95)00067-4.
Sauvola, J.; Pietikäinen, M. (2000). "Adaptive document image binarization". Pattern Recognition. 33 (2): 225–236. Bibcode:2000PatRe..33..225S. doi:10.1016/S0031-3203(99)00055-2. hdl:10338.dmlcz/145819.
Ojala, T.; Pietikäinen, M.; Mäenpää, T. (2002). "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns". IEEE Transactions on Pattern Analysis and Machine Intelligence. 24 (7): 971–987. CiteSeerX 10.1.1.157.1576. doi:10.1109/tpami.2002.1017623. S2CID 14540685.
Heikkilä, M.; Pietikäinen, M. (2006). "A texture-based method for modeling the background and detecting moving objects". IEEE Transactions on Pattern Analysis and Machine Intelligence. 28 (4): 657–662. CiteSeerX 10.1.1.404.508. doi:10.1109/TPAMI.2006.68. PMID 16566514. S2CID 1152842.
Ahonen, T.; Hadid, A.; Pietikäinen, M. (2006). "Face description with local binary patterns: Application to face recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence. 28 (12): 2037–2041. doi:10.1109/tpami.2006.244. PMID 17108377. S2CID 369876.
Pietikäinen, M.; Aikio, H.; Karppinen, K. (2006). From algorithms to vision systems - Machine Vision Group 25 years. University of Oulu.
Zhao, G.; Pietikäinen, M. (2007). "Dynamic texture recognition using local binary patterns with an application to facial expressions". IEEE Transactions on Pattern Analysis and Machine Intelligence. 29 (6): 915–928. CiteSeerX 10.1.1.714.2104. doi:10.1109/tpami.2007.1110. PMID 17431293. S2CID 16451924.
Heikkilä, M.; Pietikäinen, M.; Schmid, C. (2009). "Description of interest regions with local binary patterns". Pattern Recognition. 42 (3): 425–436. Bibcode:2009PatRe..42..425H. CiteSeerX 10.1.1.323.7119. doi:10.1016/j.patcog.2008.08.014.
Pietikäinen, M.; Hadid, A.; Zhao, G.; Ahonen, T. (2011). Computer vision using local binary patterns. Springer.
Määttä, J.; Hadid, A.; Pietikäinen, M. (2011). Face spoofing detection from single images using micro-texture analysis. Proc. International Joint Conference on Biometrics (IJCB). pp. 1–7.
Pfister, T.; Li, X.; Zhao, G.; Pietikäinen, M. (2011). Recognising spontaneous facial micro-expressions. Proc. IEEE International Conference on Computer Vision (ICCV). pp. 1449–1456.
Zhou, Z.; Hong, X.; Zhao, G.; Pietikäinen, M. (2014). "A compact representation of visual speech data using latent variables". IEEE Transactions on Pattern Analysis and Machine Intelligence. 36 (1): 181–187. doi:10.1109/TPAMI.2013.173. PMID 24231875. S2CID 18321703.
Li, X.; Chen, J.; Zhao, G.; Pietikäinen, M. (2014). Remote heart rate measurement from face videos under realistic situations. Proc. IEEE Conference on Pattern Recognition and Computer Vision (CVPR). pp. 4265–4271.
Liu, L.; Lao, S.; Fieguth, P.; Guo, Y.; Wang, X.; Pietikäinen, M. (2016). "Median robust extended local binary pattern for texture classification". IEEE Transactions on Image Processing. 25 (3): 1368–1381. Bibcode:2016ITIP...25.1368L. doi:10.1109/TIP.2016.2522378. PMID 26829791.
Liu, L.; Fieguth, P.; Guo, Y.; Wang, X.; Pietikäinen, M. (2017). "Local binary features for texture classification: Taxonomy and experimental study". Pattern Recognition. 62: 135–160. Bibcode:2017PatRe..62..135L. doi:10.1016/j.patcog.2016.08.032.
Liu, L.; Chen, J.; Fieguth, P.; Zhao, G.; Chellappa, R.; Pietikäinen, M. (2019). "From BoW to CNN: Two decades of texture representation for texture classification". International Journal of Computer Vision. 127 (1): 74–109. arXiv:1801.10324. doi:10.1007/s11263-018-1125-z. S2CID 52919290.
Liu, L.; Ouyang, W.; Wang, X.; Fieguth, P.; Chen, J.; Liu, X.; Pietikäinen, M. (2020). "Deep learning for generic object detection: A Survey". International Journal of Computer Vision. 128 (2): 261–318. doi:10.1007/s11263-019-01247-4. S2CID 52177403.
Pietikäinen, M.; Silven, O. (2021). Challenges of artificial intelligence - From machine learning and computer vision to emotional intelligence. University of Oulu.
Pietikäinen, M.; Silven, O. (2023). How will artificial intelligence affect our lives in the 2050s?. University of Oulu, jultika.oulu.fi/Record/isbn978-952-62-3687-2.
**9-Methylene-fluorene**
9-Methylene-fluorene:
9-Methylene-fluorene or dibenzofulvene (DBF) is a polycyclic aromatic hydrocarbon with chemical formula C14H10.
Properties:
9-Methylene-fluorene is an intermediate of the Fmoc cleavage reaction. It is an analog of 1,1-diphenylethylene. Polymerization of 9-methylene-fluorene produces a π-stacked polymer.
**Game without a value**
Game without a value:
In the mathematical theory of games, in particular the study of zero-sum continuous games, not every game has a minimax value. The minimax value is the expected payoff to one of the players when both play optimally (each choosing a mixed strategy, that is, choosing according to a particular probability density function).
Game without a value:
This article gives an example of a zero-sum game that has no value. It is due to Sion and Wolfe. Zero-sum games with a finite number of pure strategies are known to have a minimax value (originally proved by John von Neumann), but this is not necessarily the case if the game has an infinite set of strategies. There follows a simple example of a game with no minimax value.
Game without a value:
The existence of such zero-sum games is interesting because many of the results of game theory become inapplicable if there is no minimax value.
The game:
Players I and II choose numbers x and y respectively, between 0 and 1. The payoff to player I is K(x, y) = −1 if x < y < x + 1/2, K(x, y) = 0 if y = x or y = x + 1/2, and K(x, y) = +1 otherwise (that is, if y < x or y > x + 1/2). After the choices are made, player II pays K(x, y) to player I (so the game is zero-sum).
The game:
If the pair (x,y) is interpreted as a point on the unit square, the figure shows the payoff to player I. Player I may adopt a mixed strategy, choosing a number according to a probability density function (pdf) f , and similarly player II chooses from a pdf g . Player I seeks to maximize the payoff K(x,y) , player II to minimize the payoff, and each player is aware of the other's objective.
Game value:
Sion and Wolfe show that sup_f inf_g ∫∫ K(x, y) df(x) dg(y) = 1/3, but inf_g sup_f ∫∫ K(x, y) df(x) dg(y) = 3/7. These are the maximal and minimal expectations of the game's value for players I and II respectively.
Game value:
The sup and inf are taken over pdfs on the unit interval (actually over Borel probability measures), which represent player I's and player II's (mixed) strategies. Thus, player I can assure himself of a payoff of at least 3/7 if he knows player II's strategy, and player II can hold the payoff down to 1/3 if he knows player I's strategy.
Game value:
There is no epsilon equilibrium for sufficiently small ε; specifically, if 0 < ε < 1/21 ≈ 0.0476. Dasgupta and Maskin assert that the game values are achieved if player I puts probability weight only on the set {0, 1/2, 1} and player II puts weight only on {1/4, 1/2, 1}. Glicksberg's theorem shows that any zero-sum game with upper or lower semicontinuous payoff function has a value (in this context, an upper (lower) semicontinuous function K is one in which the set {P ∣ K(P) < c} (resp. {P ∣ K(P) > c}) is open for any real number c).
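The following sketch checks the two bounds numerically on a fine grid of pure strategies, using the payoff function stated above. The particular weights (uniform for player I on {0, 1/2, 1}, and 1/7, 2/7, 4/7 for player II on {1/4, 1/2, 1}) are illustrative choices consistent with the supports mentioned by Dasgupta and Maskin; they are not taken from the source.

```python
from fractions import Fraction as F

def K(x, y):
    # Sion-Wolfe payoff as stated above:
    # -1 if x < y < x + 1/2, 0 if y == x or y == x + 1/2, +1 otherwise.
    if x < y < x + F(1, 2):
        return -1
    if y == x or y == x + F(1, 2):
        return 0
    return 1

# Player I mixes uniformly over {0, 1/2, 1}; Player II mixes over {1/4, 1/2, 1}
# with weights 1/7, 2/7, 4/7 (illustrative weights, not taken from the source).
f = {F(0): F(1, 3), F(1, 2): F(1, 3), F(1): F(1, 3)}
g = {F(1, 4): F(1, 7), F(1, 2): F(2, 7), F(1): F(4, 7)}

grid = [F(i, 200) for i in range(201)]          # fine grid of pure strategies

worst_for_I = min(sum(p * K(x, y) for x, p in f.items()) for y in grid)
best_for_I  = max(sum(q * K(x, y) for y, q in g.items()) for x in grid)

print(worst_for_I)   # 1/3 -> player I guarantees at least 1/3 against any y
print(best_for_I)    # 3/7 -> player II holds player I to at most 3/7 against any x
```

On the grid, the first strategy never earns less than 1/3 and the second never concedes more than 3/7, matching the two values quoted above.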
Game value:
The payoff function of Sion and Wolfe's example is not semicontinuous. However, it may be made so by changing the value of K(x, x) and K(x, x + 1/2) (the payoff along the two discontinuities) to either +1 or −1, making the payoff upper or lower semicontinuous, respectively. If this is done, the game then has a value.
Generalizations:
Subsequent work by Heuer discusses a class of games in which the unit square is divided into three regions, the payoff function being constant in each of the regions.
**Geologic temperature record**
Geologic temperature record:
The geologic temperature record comprises changes in Earth's environment as determined from geologic evidence on multi-million to billion (10^9) year time scales. The study of past temperatures provides an important paleoenvironmental insight because temperature is a component of the climate and oceanography of the time.
Methods:
Evidence for past temperatures comes mainly from isotopic considerations (especially δ18O); the Mg/Ca ratio of foram tests, and alkenones, are also useful. Often, many are used in conjunction to get a multi-proxy estimate for the temperature. This has proven crucial in studies on glacial/interglacial temperature.
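For orientation, δ18O-based estimates typically rest on an empirical carbonate-water calibration of roughly the following form; the coefficients below follow the classic Epstein-type quadratic and are quoted approximately, as an illustration only, since published recalibrations differ in detail.

```latex
% Illustrative carbonate-water oxygen-isotope palaeotemperature equation
% (coefficients approximate; delta_c = d18O of the carbonate, delta_w = d18O
% of the water in which the organism grew).
\[
  T(^{\circ}\mathrm{C}) \;\approx\; 16.5 \;-\; 4.3\,(\delta_c - \delta_w) \;+\; 0.14\,(\delta_c - \delta_w)^{2}
\]
```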
Description of the temperature record:
Pleistocene The last 3 million years have been characterized by cycles of glacials and interglacials within a gradually deepening ice age. Currently, the Earth is in an interglacial period, beginning about 20,000 years ago (20 kya).
Description of the temperature record:
The cycles of glaciation involve the growth and retreat of continental ice sheets in the Northern Hemisphere and involve fluctuations on a number of time scales, notably on the 21 ky, 41 ky and 100 ky scales. Such cycles are usually interpreted as being driven by predictable changes in the Earth's orbit known as Milankovitch cycles. At the beginning of the Middle Pleistocene (0.8 million years ago, close to the Brunhes–Matuyama geomagnetic reversal) there was a largely unexplained switch in the dominant periodicity of glaciations from the 41 ky to the 100 ky cycle.
Description of the temperature record:
The gradual intensification of this ice age over the last 3 million years has been associated with declining concentrations of the greenhouse gas carbon dioxide, though it remains unclear if this change is sufficiently large to have caused the changes in temperatures. Decreased temperatures can cause a decrease in carbon dioxide since, by Henry's law, carbon dioxide is more soluble in colder waters; this may account for 30 ppmv of the 100 ppmv decrease in carbon dioxide concentration during the last glacial maximum.[1] Similarly, the initiation of this deepening phase also corresponds roughly to the closure of the Isthmus of Panama by the action of plate tectonics. This prevented direct ocean flow between the Pacific and Atlantic, which would have had significant effects on ocean circulation and the distribution of heat. However, modeling studies have been ambiguous as to whether this could be the direct cause of the intensification of the present ice age.
Description of the temperature record:
This recent period of cycling climate is part of the more extended ice age that began about 40 million years ago with the glaciation of Antarctica.
Description of the temperature record:
Initial Eocene thermal maxima In the earliest part of the Eocene period, a series of abrupt thermal spikes have been observed, lasting no more than a few hundred thousand years. The most pronounced of these, the Paleocene-Eocene Thermal Maximum (PETM) is visible in the figure at right. These are usually interpreted as caused by abrupt releases of methane from clathrates (frozen methane ices that accumulate at the bottom of the ocean), though some scientists dispute that methane would be sufficient to cause the observed changes. During these events, temperatures in the Arctic Ocean may have reached levels more typically associated with modern temperate (i.e. mid-latitude) oceans. During the PETM, the global mean temperature seems to have risen by as much as 5–8 °C (9–14 °F) to an average temperature as high as 23 °C (73 °F), in contrast to the global average temperature of today at just under 15 °C (60 °F). Geologists and paleontologists think that during much of the Paleocene and early Eocene, the poles were free of ice caps, and palm trees and crocodiles lived above the Arctic Circle, while much of the continental United States had a sub-tropical environment.
Description of the temperature record:
Cretaceous thermal optimum During the later portion of the Cretaceous, from 100 to 66 million years ago, average global temperatures reached their highest level during the last ~200 million years. This is likely to be the result of a favorable configuration of the continents during this period that allowed for improved circulation in the oceans and discouraged the formation of large-scale ice sheets.
Description of the temperature record:
Fluctuations during the remainder of the Phanerozoic The Phanerozoic eon, encompassing the last 542 million years and almost the entire time since the origination of complex multi-cellular life, has more generally been a period of fluctuating temperature between ice ages, such as the current age, and "climate optima", similar to what occurred in the Cretaceous. Roughly four such cycles have occurred during this time, with an approximately 140 million year separation between climate optima. In addition to the present, ice ages have occurred during the Permian-Carboniferous interval and the late Ordovician-early Silurian. There is also a "cooler" interval during the Jurassic and early Cretaceous, with evidence of increased sea ice, but the lack of continents at either pole during this interval prevented the formation of continental ice sheets and consequently this is usually not regarded as a full-fledged ice age. In between these cold periods, warmer conditions were present and are often referred to as climate optima. However, it has been difficult to determine whether these warmer intervals were actually hotter or cooler than the Cretaceous optimum.
Description of the temperature record:
Late Proterozoic ice ages The Neoproterozoic era (1,000 to 538.8 million years ago) provides evidence of at least two and possibly more major glaciations. The more recent of these ice ages, encompassing the Marinoan and Varangian glacial maxima (about 560 to 650 million years ago), has been proposed as a snowball Earth event, with continuous sea ice reaching nearly to the equator. This is significantly more severe than the ice ages during the Phanerozoic. Because this ice age terminated only slightly before the rapid diversification of life during the Cambrian explosion, it has been proposed that this ice age (or at least its end) created conditions favorable to evolution. The earlier Sturtian glacial maximum (~730 million years ago) may also have been a snowball Earth event, though this is unproven.
Description of the temperature record:
The changes that led to the initiation of snowball Earth events are not well known, but it has been argued that they necessarily led to their own end. The widespread sea ice prevents the deposition of fresh carbonates in ocean sediment. Since such carbonates are part of the natural process for recycling carbon dioxide, short-circuiting this process allows carbon dioxide to accumulate in the atmosphere. This increases the greenhouse effect and eventually leads to higher temperatures and the retreat of sea ice.
Description of the temperature record:
Overall view Direct combination of these interpreted geological temperature records is not necessarily valid, nor is their combination with other more recent temperature records, which may use different definitions. Nevertheless, an overall perspective is useful even when imprecise. In this view time is plotted backwards from the present, taken as 2015 CE. It is scaled linearly in five separate segments, expanding by about an order of magnitude at each vertical break. Temperatures in the left-hand panel are very approximate, and best viewed as a qualitative indication only. Further information is given on the graph description page.
Description of the temperature record:
Other temperature changes in Earth's past About 800 to 1,800 million years ago, there was a period of climate stasis, also known as the Boring Billion. During this period there was hardly any tectonic activity, no glaciations and the atmosphere composition remained stable. It is bordered by two different oxygenation and glacial events.
Description of the temperature record:
Temperature reconstructions based on oxygen and silicon isotopes from rock samples have predicted much hotter Precambrian sea temperatures. These predictions suggest ocean temperatures of 55–85 °C during the period of 2,000 to 3,500 million years ago, followed by cooling to milder temperatures of between 10 and 40 °C by 1,000 million years ago. Reconstructed proteins from Precambrian organisms have also provided evidence that the ancient world was much warmer than today. However, other evidence suggests that the period of 2,000 to 3,000 million years ago was generally colder and more glaciated than the last 500 million years. This is thought to be the result of solar radiation approximately 20% lower than today. Solar luminosity was 30% dimmer when the Earth formed 4.5 billion years ago, and it is expected to increase in luminosity by approximately 10% per billion years in the future. On very long time scales, the evolution of the Sun is also an important factor in determining Earth's climate. According to standard solar theories, the Sun will gradually have increased in brightness as a natural part of its evolution, after having started with an intensity approximately 70% of its modern value. The initially low solar radiation, if combined with modern values of greenhouse gases, would not have been sufficient to allow for liquid oceans on the surface of the Earth. However, evidence of liquid water at the surface has been demonstrated as far back as 4,400 million years ago. This is known as the faint young Sun paradox and is usually explained by invoking much larger greenhouse gas concentrations in Earth's early history, though such proposals are poorly constrained by existing experimental evidence.
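The figures quoted above (roughly 70% of the modern value at formation, rising by about 10% per billion years) are consistent with a commonly used approximation attributed to Gough (1981); the short sketch below evaluates it purely as an illustration, since the source does not state which relation underlies its numbers.

```python
def relative_solar_luminosity(t_gyr, t_now_gyr=4.57):
    """Gough's (1981) widely quoted approximation for the Sun's past luminosity,
    relative to today: L(t)/L_now = 1 / (1 + (2/5) * (1 - t/t_now)).
    t_gyr is the time since the Sun formed, in billions of years."""
    return 1.0 / (1.0 + 0.4 * (1.0 - t_gyr / t_now_gyr))

print(relative_solar_luminosity(0.0))    # ~0.71 -> roughly 70% of today's output at formation
print(relative_solar_luminosity(4.57))   # 1.0  -> today
# The forward slope near the present is ~0.4 / 4.57, i.e. about 9% per billion years,
# consistent with the ~10%-per-billion-years figure quoted above.
```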
**Net metering**
Net metering:
Net metering (or net energy metering, NEM) is an electricity billing mechanism that allows consumers who generate some or all of their own electricity to use that electricity anytime, instead of when it is generated. This is particularly important with renewable energy sources like wind and solar, which are non-dispatchable (when not coupled to storage). Monthly net metering allows consumers to use solar power generated during the day at night, or wind from a windy day later in the month. Annual net metering rolls over a net kilowatt-hour (kWh) credit to the following month, allowing solar power that was generated in July to be used in December, or wind power from March in August.
Net metering:
Net metering policies can vary significantly by country and by state or province: if net metering is available, if and how long banked credits can be retained, and how much the credits are worth (retail/wholesale). Most net metering laws involve monthly rollover of kWh credits, a small monthly connection fee, require a monthly payment of deficits (i.e. normal electric bill), and annual settlement of any residual credit. Net metering uses a single, bi-directional meter and can measure the current flowing in two directions.
Net metering:
Net metering can be implemented solely as an accounting procedure, and requires no special metering, or even any prior arrangement or notification. Net metering is an enabling policy designed to foster private investment in renewable energy.
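As a purely illustrative sketch of that accounting, the following Python function settles one month of net metering with kWh-credit roll-over. The rates, fees and function name are invented for the example; real tariffs differ in the details discussed later in this article.

```python
def monthly_net_metering_bill(kwh_used, kwh_generated, rate, credit_kwh=0.0,
                              connection_fee=0.0):
    """Toy monthly net-metering settlement (illustrative only; real tariffs vary).
    Generation first offsets the month's own use; any surplus is banked as a
    kWh credit, and any deficit first draws down banked credits before being
    billed at the retail rate."""
    net = kwh_used - kwh_generated            # positive -> drew from the grid on balance
    if net <= 0:
        credit_kwh += -net                    # surplus rolls over as kWh, not cash
        billed_kwh = 0.0
    else:
        drawn = min(net, credit_kwh)          # banked kWh cover part of the deficit
        credit_kwh -= drawn
        billed_kwh = net - drawn
    amount_due = billed_kwh * rate + connection_fee
    return amount_due, credit_kwh

# A sunny month banks credit; a darker month spends it.
due, bank = monthly_net_metering_bill(600, 750, rate=0.20, connection_fee=8.0)
print(due, bank)        # 8.0 (fee only), 150.0 kWh banked
due, bank = monthly_net_metering_bill(700, 500, rate=0.20, credit_kwh=bank, connection_fee=8.0)
print(due, bank)        # 0.20 * 50 + 8.0 = 18.0, 0.0 kWh left
```

Run for a sunny month followed by a darker one, the surplus banked in the first month offsets part of the second month's deficit before any energy is billed.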
History:
Net metering originated in the United States, where small wind turbines and solar panels were connected to the electrical grid, and consumers wanted to be able to use the electricity generated at a different time or date from when it was generated. The first two projects to use net metering were an apartment complex and a solar test house in Massachusetts in 1979. Minnesota is commonly cited as passing the first net metering law, in 1983, which allowed anyone generating less than 40 kW to either roll over any credit to the next month or be paid for the excess. In 2000 this was amended to compensation "at the average retail utility energy rate". This is the simplest and most general interpretation of net metering, and in addition allows small producers to sell electricity at the retail rate. Utilities in Idaho adopted net metering in 1980, and in Arizona in 1981. Massachusetts adopted net metering in 1982. By 1998, 22 states or utilities therein had adopted net metering. Two California utilities initially adopted a monthly "net metering" charge, which included a "standby charge", until the Public Utilities Commission (PUC) banned such charges. In 2005, all U.S. utilities were required by the Energy Policy Act of 2005 to consider adopting rules offering net metering "upon request"; excess generation is not addressed. As of 2013, 43 U.S. states had adopted net metering, as well as utilities in 3 of the remaining states, leaving only 4 states without any established procedures for implementing net metering. However, a 2017 study showed that only 3% of U.S. utilities offer full retail compensation for net metering, with the remainder offering less than retail rates, having credits expire annually, or some form of indefinite rollover. Net metering was slow to be adopted in Europe, especially in the United Kingdom, because of confusion over how to address the value added tax (VAT). Only one utility company in Great Britain offers net metering. The United Kingdom government has been reluctant to introduce the net metering principle because of complications in paying and refunding the VAT payable on electricity, but pilot projects are underway in some areas.
History:
In Canada, some provinces have net metering programs.
History:
In the Philippines, the net metering scheme is governed by Republic Act 9513 (Renewable Energy Act of 2008) and its implementing rules and regulations (IRR). The implementing body is the Energy Regulatory Commission (ERC) in consultation with the National Renewable Energy Board (NREB). However, the scheme is not a true net metering scheme but in reality a net billing scheme. As the Department of Energy's net metering guidelines say, “Net-metering allows customers of Distribution Utilities (DUs) to install an on-site Renewable Energy (RE) facility not exceeding 100 kilowatts (kW) in capacity so they can generate electricity for their own use. Any electricity generated that is not consumed by the customer is automatically exported to the DU’s distribution system. The DU then gives a peso credit for the excess electricity received equivalent to the DU’s blended generation cost, excluding other generation adjustments, and deducts the credits earned to the customer’s electric bill.” Thus Philippine consumers who generate their own electricity and sell their surplus to the utility are paid what is called the "generation cost", which is often less than 50% of the retail price of electricity.
Controversy:
Net metering is controversial as it affects different interests on the grid. A report prepared by Peter Kind of Energy Infrastructure Advocates for the trade association Edison Electric Institute stated that distributed generation systems, like rooftop solar, present unique challenges to the future of electric utilities. Utilities in the United States have led a largely unsuccessful campaign to eliminate net metering.
Controversy:
Benefits Renewable advocates point out that while distributed solar and other energy efficiency measures do pose a challenge to electric utilities' existing business model, the benefits of distributed generation outweigh the costs, and those benefits are shared by all ratepayers. Grid benefits of private distributed solar investment include reduced need for centralized power plants and reduced strain on the utility grid. They also point out that, as a cornerstone policy enabling the growth of rooftop solar, net metering creates a host of societal benefits for all ratepayers that are generally not accounted for by the utility analysis, including: public health benefits, employment and downstream economic effects, market price impacts, grid security benefits, and water savings. An independent report conducted by the consulting firm Crossborder Energy found that the benefits of California's net metering program outweigh the costs to ratepayers. Those net benefits will amount to more than US$92 million annually upon the completion of the current net metering program. A 2012 report on the cost of net metering in the State of California, commissioned by the California Public Utilities Commission (CPUC), showed that those customers without distributed generation systems will pay US$287 in additional costs to use and maintain the grid every year by 2020. The report also showed the net cost will amount to US$1.1 billion by 2020. Notably, the same report found that solar customers do pay more on their power bills than what it costs the utility to serve them (Table 5, page 10: average 103% of their cost of service across the three major utilities in 2011).
Controversy:
Drawbacks Many electric utilities state that owners of generation systems do not pay the full cost of service to use the grid, thus shifting their share of the cost onto customers without distributed generation systems. Most owners of rooftop solar or other types of distributed generation systems still rely on the grid to receive electricity from utilities at night or when their systems cannot generate sufficient power. A 2014 report funded by the Institute for Electric Innovation claims that net metering in California produces excessively large subsidies for typical residential rooftop solar photovoltaic (PV) facilities. These subsidies must then be paid for by other residential customers, most of whom are less affluent than the rooftop solar PV customers. In addition, the report points out that most of these large subsidies go to the solar leasing companies, which accounted for about 75 percent of the solar PV facilities installed in 2013. The report concludes that changes are needed in California, ranging from the adoption of retail tariffs that are more cost-reflective to replacing net metering with a separate "Buy All - Sell All" arrangement that requires all rooftop solar PV customers to buy all of their consumed energy under the existing retail tariffs and separately sell all of their onsite generation to their distribution utilities at the utilities' respective avoided costs.
Post-net metering successor tariffs:
On a nationwide basis, energy officials have debated replacement programs for net metering for several years. As of 2018, a few "replicable models" have emerged. Utility companies have always contended that customers with solar get their bills reduced by too much under net metering, and as a result, that shifts costs for keeping up the grid infrastructure to the rest of the non-solar customers. "The policy has led to heated state-level debates since 2003 over whether — and how — to construct a successor to the policy," according to Utility Dive. The key challenge to constructing pricing and rebate schemes in a post-net metering environment is how to compensate rooftop solar customers fairly while not imposing costs on non-solar customers. Experts have said that a good "successor tariff," as the post-net metering policies have been called, is one that supports the growth of distributed energy resources in a way where customers and the grid get benefits from it. Thirteen states swapped successor tariffs for retail rate net metering programs in 2017. In 2018, three more states made similar changes. For example, compensation in Nevada will go down over time, but today the compensation is at the retail rate (meaning, solar customers who send energy to the grid get compensated at the same rate they pay for electricity). In Arizona, the new solar rate is ten percent below the retail rate. The two most common successor tariffs are called net billing and buy-all-sell-all (BASA). "Net billing pays the retail rate for customer-consumed PV generation and a below retail rate for exported generation. With BASA, the utility both charges and compensates at a below-retail rate."
Comparison:
There is considerable confusion between the terms "net metering" and "feed-in tariff" (FIT). In general there are three types of compensation for local, distributed generation: Net metering: always at retail, and which is not technically compensation, although it may become compensation if there is excess generation and payments are allowed by the utility.
Feed-in tariff: generally above retail, and reduces to retail as the percentage of adopters increases.
Power purchase agreement: compensation generally below retail, also known as a "Standard Offer Program"; can be above retail, particularly in the case of solar, which tends to be generated close to peak demand. Net metering only requires one meter. A feed-in tariff requires two.
Time of use metering:
Time of use (TOU) net metering employs a smart (electric) meter that is programmed to determine electricity usage at any time during the day. Time-of-use allows utility rates and charges to be assessed based on when the electricity was used (i.e., day/night and seasonal rates). Typically the generation cost of electricity is highest during the daytime peak usage period at sunset, and lowest in the middle of the night.
Time of use metering:
Time of use metering is a significant issue for renewable-energy sources since, for example, solar power systems tend to produce most energy at noon but little during the peak-price period (see also duck curve), and no power during the night period when the price is low. California, Italy and Australia have installed so many photovoltaic cells that peak prices no longer occur during the day but instead in the evening. TOU net metering affects the apparent cost of net metering to a utility.
Market rate net metering:
In market rate net metering systems the user's energy use is priced dynamically according to some function of wholesale electric prices. The users' meters are programmed remotely to calculate the value and are read remotely. Net metering applies such variable pricing to excess power produced by a qualifying system.
Market rate metering systems were implemented in California starting in 2006, and under the terms of California's net metering rules will be applicable to qualifying photovoltaic and wind systems. Under California law the payback for surplus electricity sent to the grid must be equal to the (variable, in this case) price charged at that time.
Market rate net metering:
Net metering enables small systems to result in zero annual net cost to the consumer provided that the consumer is able to shift demand loads to a lower price time, such as by chilling water at a low cost time for later use in air conditioning, or by charging a battery electric vehicle during off-peak times, while the electricity generated at peak demand time can be sent to the grid rather than used locally (see Vehicle-to-grid). No credit is given for annual surplus production.
Excess generation:
Excess generation is a separate issue from net metering, but it is normally dealt with in the same rules, because it can arise. If local generation offsets only a portion of the demand, net metering is not used. If local generation exceeds demand some of the time, for example during the day, net metering is used. If local generation exceeds demand for the billing cycle, best practice calls for a perpetual roll-over of the kilowatt-hour credits, although some regions have considered having any kWh credits expire after 36 months. Excess generation is normally defined annually, although the term is equally applicable monthly. The treatment of annual (and monthly) excess generation ranges from lost, to compensation at avoided cost, to compensation at retail rate. Left-over kWh credits upon termination of service would ideally be paid at the retail rate from the consumer's standpoint, and lost from the utility's standpoint, with avoided cost a minimum compromise. Some regions allow optional payment for excess annual generation, which allows perpetual roll-over or payment, at the customer's choice. Both wind and solar are inherently seasonal, and a surplus is highly likely to be used up later, unless more solar panels or a larger wind turbine have been installed than needed.
Energy storage:
Net metering systems can have energy storage integrated, to store some of the power locally (i.e. from the renewable energy source connected to the system) rather than selling everything back to the mains electricity grid. Often, the batteries used are industrial deep-cycle batteries, as these last for 10 to 20 years. Lead-acid batteries are often also still used, but last much less long (5 years or so). Lithium-ion batteries are sometimes also used, but they too have a relatively short lifespan. Finally, nickel-iron batteries last the longest, with a lifespan of up to 40 years. A 2017 study of solar panels with battery storage indicated an 8 to 14 percent extra consumption of electricity from charging and discharging batteries.
Adoption by country:
Australia In some Australian states, the "feed-in tariff" is actually net metering, except that it pays monthly for net generation at a higher rate than retail, with Environment Victoria Campaigns Director Mark Wakeham calling it a "fake feed-in tariff." A feed-in tariff requires a separate meter, and pays for all local generation at a preferential rate, while net metering requires only one meter. The financial differences are very substantial.
Adoption by country:
In Victoria, from 2009, householders were paid 60 cents for every excess kilowatt hour of energy fed back into the state electricity grid, around three times the retail price for electricity at that time. However, subsequent state governments reduced the feed-in tariff in several updates, until by 2016 it was as low as 5 cents per kilowatt hour.
Adoption by country:
In Queensland starting in 2008, the Solar Bonus Scheme paid 44 cents for every excess kilowatt hour of energy fed back into the state electricity grid, around three times the retail price for electricity at the time. However, from 2012, the Queensland feed-in tariff was reduced to 6-10 cents per kilowatt hour depending on which electricity retailer the customer has signed up with.
Adoption by country:
Australian smart grid technologist, Steve Hoy, originated the opposing concept of "True Zero", as opposed to "Net Zero", to express the emerging capability to trace electricity through net metering. The meter allows consumers to trace their electricity to the source, making clean energy more accessible to everyone.
Adoption by country:
Canada Ontario allows net metering for systems up to 500 kW; however, credits can only be carried for 12 consecutive months. Should a consumer establish a credit where they generate more than they consume for 8 months and use up the credits in the 10th month, then the 12-month period begins again from the date that the next credit is shown on an invoice. Any unused credits remaining at the end of 12 consecutive months of a consumer being in a credit situation are cleared at the end of that billing. Areas of British Columbia serviced by BC Hydro are allowed net metering for up to 100 kW. At each annual anniversary on March 1 the customer is paid a market price, calculated as the daily average mid-Columbia price for the previous year. FortisBC, which serves an area in south central BC, also allows net metering for up to 50 kW. Customers are paid their existing retail rate for any net energy they produce. The City of New Westminster, which has its own electrical utility, also allows net metering. New Brunswick allows net metering for installations up to 100 kW. Credits from excess generated power can be carried over until March, at which time any excess credits are lost. SaskPower allows net metering for installations up to 100 kW. Credits from excess generated power can be carried over until the customer's annual anniversary date, at which time any excess credits are lost.
Adoption by country:
In Nova Scotia, in 2015, 43 residences and businesses began using solar panels for electricity. By 2017, the number was up to 133. These customers' solar systems are net metered. The excess power produced by the solar panels is bought back from the homeowner by Nova Scotia Power at the same rate that the utility sells it to its customers. “The downside for Nova Scotia Power is that it must maintain the capacity to produce electricity even when it is not sunny.” European Union Denmark established net metering for privately owned PV systems in mid-1998 for a pilot period of four years. In 2002 the net metering scheme was extended another four years, up to the end of 2006. Net metering has proved to be a cheap, easy-to-administer and effective way of stimulating the deployment of PV in Denmark; however, the relatively short time window of the arrangement has so far prevented it from reaching its full potential. During the political negotiations in the fall of 2005 net metering for privately owned PV systems was made permanent. The Netherlands has had net metering since 2004. Initially there was a limit of 3000 kWh per year. Later this limit was increased to 5000 kWh. The limit was removed altogether on January 1, 2014. Italy offers a support scheme mixing net metering and a well segmented premium FiT. Slovenia has had annual net metering since January 2016 for systems up to 11 kVA; in a calendar year up to 10 MVA can be installed in the country. In 2010 in Spain, net metering was proposed by the Asociación de la Industria Fotovoltaica (ASIF) to promote renewable electricity without requiring additional economic support. Net metering for privately owned systems was established in 2019, after Royal Decree 244/2019 was accepted by the government on April 5. Some form of net metering is now proposed by Électricité de France. According to their website, energy produced by home-owners is bought at a higher price than what is charged to consumers. Hence, some recommend selling all energy produced and buying back all energy needed at a lower price. The price has been fixed for 20 years by the government. Ireland is planning to implement a net metering system under the "Micro-generation Support Scheme"; under the proposed scheme, micro-generators can sell 30% of the excess electricity they produce and export it back to the grid. The price that electricity will be sold at is being formulated during the consultation process. Poland introduced net metering for private and commercial renewable energy sources of up to 50 kW in 2015. Under this legislation, energy sent to the grid must be used within one year from feed-in, otherwise it is considered lost. The amount of energy that was exported and can be taken back by the user is reduced by 20% for installations up to 10 kW, or by 30% for installations up to 50 kW. This legislation guarantees that this net metering policy will be kept for a minimum of 15 years from the moment of registering the renewable energy source. This legislation, together with government subsidies for microgeneration, created a substantial boost in installations of PV systems in Poland.
Adoption by country:
Portugal has a very limited form of "net metering" that is constrained to 15-minute periods: energy injected into the grid in excess of the consumption from the grid within the same 15-minute period is not compensated. Only the injected energy up to the consumed energy within the same 15-minute period is netted out of the final monthly bill. In fact, the old analog electricity meters that would allow for true net metering are immediately replaced when a consumer installs solar PV.
Adoption by country:
India Almost every state in India has implemented net metering, wherein consumers are allowed to sell the surplus energy generated by their solar system to the grid and be compensated for it. However, the net metering policy is not uniform throughout the country and varies from state to state.
Adoption by country:
To avail of net-metering in the country, the consumer is required to submit an application with the local electricity distribution company along with the planned rooftop solar project and requisite fee. The distribution company reviews the application and the feasibility of the solar project, which is either approved or rejected. If approved, another application for registration of the rooftop is submitted to the distribution company. An agreement is signed between the consumer and the company, and the net-meter is installed.
Adoption by country:
The Indian states of Karnataka and Andhra Pradesh have started the implementation of net metering, and the policy was announced by the respective state electricity boards in 2014. After review and inspection by the electricity board, a bidirectional meter is installed. Applications are taken up for up to 30% of the distribution transformer capacity on a first-come, first-served basis, subject to technical feasibility. Since September 2015, Maharashtra state (MERC) has also had a net metering policy, and consumers have started installation of solar rooftop grid-tie net metering systems. MERC policy allows up to 40% of transformer capacity to be on solar net metering. The various DISCOMs in Maharashtra, namely MSEDCL, Tata, Reliance and Torrent Power, are expected to support net metering.
Adoption by country:
As of now, MSEDCL does not use TOD (time-of-day differential) charging tariffs for residential consumers with net metering. Export and import units are considered at par for calculating net units and the bill amount.
United States
Net purchase and sale:
Net purchase and sale is a different method of providing power to the electricity grid that does not offer the price symmetry of net metering, making this system much less profitable for home users of small renewable electricity systems.
Net purchase and sale:
Under this arrangement, two uni-directional meters are installed—one records electricity drawn from the grid, and the other records excess electricity generated and fed back into the grid. The user pays the retail rate for the electricity they use, and the power provider purchases their excess generation at its avoided cost (wholesale rate). There may be a significant difference between the retail rate the user pays and the power provider's avoided cost. Germany, Spain, Ontario (Canada), some states in the USA, and other countries, on the other hand, have adopted a price schedule, or feed-in tariff (FIT), whereby customers get paid for any electricity they generate from renewable energy on their premises. The actual electricity being generated is counted on a separate meter, not just the surplus they feed back to the grid. In Germany, a feed-in tariff is paid for the solar power generated in order to boost solar power (figure from 2009). Germany once paid several times the retail rate for solar but has successfully reduced the rates drastically while actual installation of solar has grown exponentially at the same time due to installed cost reductions. Wind energy, in contrast, only receives around half of the domestic retail rate, because the German system pays what each source costs (including a reasonable profit margin).
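The asymmetry can be illustrated with a small calculation: under net metering, exported energy is effectively worth the retail rate, while under net purchase and sale it is worth only the avoided (wholesale) cost. The prices and annual figures below are invented for the example.

```python
def annual_value(self_consumed_kwh, exported_kwh, retail_rate, export_rate):
    """Value of a home system's output to its owner for a given export price
    (illustrative rates only; real tariffs differ). Self-consumed energy always
    displaces retail purchases; exported energy is worth whatever the scheme pays."""
    return self_consumed_kwh * retail_rate + exported_kwh * export_rate

retail, wholesale = 0.20, 0.06                 # assumed prices per kWh
self_used, exported = 2000, 1500               # assumed annual split of PV output

print(annual_value(self_used, exported, retail, export_rate=retail))     # net metering: 700.0
print(annual_value(self_used, exported, retail, export_rate=wholesale))  # net purchase & sale: 490.0
```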
Virtual net metering:
Another method of producing power to the grid is through virtual net metering (also called peer-to-peer (P2P) energy trading, wheeling and sometimes local energy trading). Peer-to-peer energy trading is a novel paradigm of power system operation, where sellers can generate their own energy in dwellings, offices and factories, and share it with each other locally. Several companies offering virtual net metering use blockchain technology.
Related technology:
Sources that produce direct current, such as solar panels, must be coupled with an electrical inverter to convert the output to alternating current for use with conventional appliances. The phase of the outgoing power must be synchronized with the grid, and a mechanism must be included to disconnect the feed in the event of grid failure. This is for safety – for example, workers repairing downed power lines must be protected from "downstream" sources, in addition to being disconnected from the main "upstream" distribution grid. Although a small generator lacks the power to energize a loaded line, this can happen if the line is isolated from other loads. Solar inverters are designed for safety – while one inverter could not energize a line, a thousand might. In addition, electrical workers are trained to treat every line as though it was live, even when they know it should be safe.
Solar guerrilla:
Solar guerrilla (or the guerrilla solar movement) is a term originated by Home Power Magazine and is applied to someone who connects solar panels without permission or notification and uses monthly net metering without regard for law.
**Follicular hyperplasia**
Follicular hyperplasia:
Follicular hyperplasia (FH) is a type of lymphoid hyperplasia and is classified as a lymphadenopathy, which means a disease of the lymph nodes. It is caused by a stimulation of the B cell compartment and by abnormal cell growth of secondary follicles. This typically occurs in the cortex without disrupting the lymph node capsule. The follicles are pathologically polymorphous, are often contrasting and varying in size and shape. Follicular hyperplasia is distinguished from follicular lymphoma in its polyclonality and lack of bcl-2 protein expression, whereas follicular lymphoma is monoclonal, and expresses bcl-2.
Signs & Symptoms:
Lymphadenopathies such as follicular hyperplasia can present with various symptoms such as fever, chills, night sweats, unexplained weight loss and prominent localizing symptoms; these symptoms are neither age- nor gender-specific. Although individual lymph nodes cannot normally be seen with the naked eye, swelling can sometimes be felt by pressing against the skin. Swelling of lymph nodes can range from pea-sized to golf-ball-sized depending on the given condition. A person can have reactive lymph nodes throughout multiple areas of the body, which can cause swelling, pain, warmth and tenderness.
Causes:
The following are examples of potential causes of reactive lymphadenopathies, all of which have predominantly follicular patterns: rheumatoid arthritis, Sjögren syndrome, IgG4-related disease (IgG4-related lymphadenopathy), Kimura disease, toxoplasmosis, syphilis, Castleman disease, and progressive transformation of germinal centers (PTGC). Microorganisms can infect lymph nodes, causing pain and inflammation including redness and tenderness. Bacterial, fungal and viral infections including Bartonella, staphylococcal, granulomatous, adenoviral and Lyme disease are all associated with follicular hyperplasia. Other associated autoimmune diseases are rheumatoid arthritis, systemic lupus, dermatomyositis and Sjögren syndrome. Immunoglobulin G4-related diseases are immune-mediated fibroinflammatory conditions that affect many organs in the body. Chronic inflammatory disorders such as Kimura disease are also associated with inflamed or enlarged lymph nodes. Bacteria such as Treponema pallidum, or parasites such as Toxoplasma gondii, can cause enlargement of the lymph nodes as well. Other related causes, such as the lymphoproliferative condition known as Castleman disease, or progressive transformation of germinal centers, also cause lymph node enlargement. Environmental factors that may play a role include animal or insect exposure, chronic medication usage and immunization status. The patient's occupational history, such as metal work or coal mining, may indicate exposure to silicon or beryllium. Follicular hyperplasia is common in children and young adults, but is not limited to any age; it is also common among the elderly and is not sex-specific. Children often have reactive lymph nodes when they are young due to new exposure to environmental pathogens, even without development of an infection. Clinically, follicular hyperplasia lymphadenopathy is usually restricted to a single area of the body, but can also involve several parts of the body. Follicular hyperplasia is one of the most common types of lymphadenopathies and can be associated with paracortical and sinus hyperplasia.
Mechanism:
The specific pathology of follicular hyperplasia is not yet fully understood. It is known, however, that stimulation of the B cell compartment and abnormal cell growth of secondary follicles are key factors in the pathology of follicular hyperplasia. This typically occurs in the cortex without disrupting the lymph node capsule. It has also been described that the condition may stem from primary reactive lymphoid proliferations triggered by an unidentified antigen or some sort of chronic irritation, ultimately causing lymph node enlargement. Lymph node enlargement can occur for many reasons. First of all, the lymph nodes function as a filter for the reticuloendothelial system. They contain multi-layered sinuses that expose B and T cell lymphocytes and macrophages to material carried in the blood. When the immune system recognizes foreign proteins in order to mount an attack, it requires help from these cells. During this reaction the responding cell lines become duplicated in response to the foreign attack, and the node therefore increases in size. Node size is considered abnormal when it exceeds 1 cm; however, this differs between children and adults. Localized or specific adenopathy often occurs in clusters or groups of lymph nodes that can migrate to various areas of the body. Lymph nodes are distributed within all areas of the body and, when enlarged, reflect the location of lymphatic drainage. The node appearance can range from tender, fixed or mobile, and discrete or matted together. It is important to note that reactive lymph nodes are not necessarily a bad thing; in fact they are a good indication that the lymphatic system is working hard. Lymph fluid can build up in lymph nodes as a way to trap harmful bacteria and other harmful pathogens in the body and prevent them from spreading to other areas. Substances that are present within the interstitial fluids, such as microorganisms, antigens or even cancer cells, can enter lymphatic vessels, forming the lymphatic fluid. Lymph nodes help filter these fluids by removing such material before the fluid moves on towards the blood circulation. When antigens are presented, the lymphocytes inside the node trigger a response which can cause proliferation or cell enlargement. This is also referred to as reactive lymphadenopathy.
Diagnosis:
Follicular hyperplasia can be distinguished from other diseases by observing the density of lymph follicles at low magnification. Lymph nodes with reactive follicles contain follicles extending outside the capsule, follicles present throughout the entire node, obvious centroblasts, and absent or diminished mantle zones. Immunohistochemistry can help distinguish follicular lymphoma from follicular hyperplasia. Reactive follicular hyperplasia does not express BCL2 protein in B cell germinal centers, shows no light chain restriction on immunostaining and flow cytometry, and lacks clonal IG rearrangements. Localized, or specific, lymphadenopathies should be evaluated for etiologies that are associated with lymphatic drainage patterns. During a complete lymphatic physical examination, generalized lymphadenopathy may or may not be ruled out. BCL2 protein expression is usually absent in follicular hyperplasia but prominent in follicular lymphomas. A comparison with other stains that include germinal center markers such as BCL6 or CD10 is useful when determining a proper diagnosis. CD10-positive cells express a metalloproteinase which activates or deactivates peptides through proteolytic cleavage. An official diagnosis of follicular hyperplasia might include imaging such as a PET scan and a tissue biopsy, depending on the clinical presentation and the location of the lymphadenopathy. A common blood panel test may help rule out other possible diagnoses, such as lymphomas, based on the numbers of red cells, white cells and platelets found in the blood. If the patient has low blood cell counts, this can be an indication of lymphoma. Another indication of lymphoma, compared to follicular hyperplasia, is high levels of lactate dehydrogenase (LDH) and C-reactive protein (CRP). A lymph node biopsy may provide an official diagnosis of lymphoma by ruling out follicular hyperplasia, which can be determined by the rate of proliferation.
Treatment:
Factors that identify etiology of the patient include age, duration of lymphadenopathy, external exposures, associated symptoms and location on the body.
Treatment:
Beta blockers such as atenolol, or ACE inhibitors such as captopril, can cause certain lymphadenopathies in some individuals. Captopril is an analog of proline and completely inhibits angiotensin converting enzyme (ACE), and as a result decreases angiotensin II production. It can also inhibit tumor angiogenesis through MMPs and endothelial cell migration, which can ultimately cause lymph node enlargement. Carbamazepine is an anticonvulsant that works by decreasing the nerve impulses that bring on seizures. Some of its more serious side effects are allergic skin reactions and low blood cell counts. Other anticonvulsant medications, phenytoin and primidone, can also cause lymph node enlargement due to changes in the blood after drug administration. Other medications such as pyrimethamine, quinidine and trimethoprim/sulfamethoxazole antibiotics also change blood chemistry after administration and can cause lymph node enlargement. Hydralazine and allopurinol are medications prescribed to patients to lower their blood pressure and are vasodilators, causing the blood vessels to dilate. A family history, regional exam and epidemiological cues are usually the most useful information for guiding treatment because they can help classify the patient's condition as either low risk or high risk: low risk meaning the patient is not at risk of malignancy or serious disease, and high risk meaning that they are. If the patient is not at risk for malignancy or serious illness, it is recommended that the physician observe any changes or symptom resolution within 3-4 weeks. If the lymphadenopathy does not resolve, the next step would be a biopsy. Following a tissue sample, an effective treatment for follicular hyperplasia is surgical removal of the lesion after an initial confirmation of the disease based on the patient's biopsy results.
Prognosis:
Typically follicular hyperplasia is categorized as a benign lymphadenopathy. It is almost always treatable unless it progresses to malignancy. Therefore, follicular hyperplasia patients tend to live a long life, with their condition either treated or resolving on its own. Follicular hyperplasia becomes problematic when left untreated, as it increases the risk of developing various types of cancers.
Epidemiology:
Follicular hyperplasia is one of the most common types of benign lymphadenopathies. It is typically found in children and young adults; however, all ages are subject to follicular hyperplasia, including the elderly. Lymphadenopathies such as follicular hyperplasia are usually localized but can also be generalized, and are not gender-specific. Over 75% of all lymphadenopathies are observed to be local, usually involving the head and neck regions. It has been estimated that a patient who presents with lymphadenopathy has about a 1.1% chance of developing malignancy. The rate of childhood malignancy associated with lymphadenopathy is low; however, this increases with age. A majority of reported cases in children are caused by infections or benign etiologies. In one study, 628 patients underwent a nodal biopsy; benign or self-limited causes were found in nearly 79% of patients younger than 30 years of age, in nearly 59% of patients between the ages of 31 and 50 years, and in 39% of patients older than 50 years of age. Lymphadenopathies that last more than 2 weeks or over one year without progression of cancer cells have a low chance of neoplastic effects.
Current Research:
In a 2019 study, a 51-year-old woman was examined at the Department of Oral and Maxillofacial Surgery, Tokyo Medical University Hospital, for an inflamed mass noted on the patient's lower left gum line. A biopsy sample was obtained from the patient's mouth, and the results indicated benign lymphoid tissue. Further investigation under a microscope revealed lymphocytic tissue composed of scattered lymphoid follicles with obvious germinal centers and well-differentiated lymphocytes surrounded by distinct mantle zones. Immunohistochemical staining revealed positivity for the lymphoid markers CD20 and CD79. This study was significant because the authors were able to diagnose a very rare case of follicular lymphoid hyperplasia arising from an unusual site of origin in the mouth, although they were unable to determine the onset of the condition. Notably, after the mass was removed there were no signs of recurrence during the first year of follow-up. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spent caustic**
Spent caustic:
Spent caustic is a waste industrial caustic solution that has become exhausted and is no longer useful (or spent). Spent caustics are made of sodium hydroxide or potassium hydroxide, water, and contaminants. The contaminants have consumed the majority of the sodium (or potassium) hydroxide, and thus the caustic liquor is spent; for example, in one common application H2S (gas) is scrubbed by the NaOH (aqueous) to form NaHS (aq) and H2O (l), thus consuming the caustic.
Types:
Ethylene spent caustic comes from the caustic scrubbing of cracked gas from an ethylene cracker. This liquor is produced by a caustic scrubbing tower. Ethylene product gas is contaminated with H2S(g) and CO2(g), and those contaminants are removed by absorption in the caustic scrubbing tower to produce NaHS(aq) and Na2CO3(aq). The sodium hydroxide is consumed and the resulting wastewater (ethylene spent caustic) is contaminated with the sulfides and carbonates and a small fraction of organic compounds.
Types:
Refinery spent caustic comes from multiple sources: the Merox processing of gasoline; the Merox processing of kerosene/jet fuel; and the caustic scrubbing/Merox processing of LPG. In these streams, sulfides and organic acids are removed from the product streams into the caustic phase. The sodium hydroxide is consumed, and the resulting wastewaters (cresylic from gasoline, naphthenic from kerosene/jet fuel, and sulfidic from LPG) are often mixed and called refinery spent caustic. This spent caustic is contaminated with sulfides, carbonates, and in many cases a high fraction of organic acids.
Treatment technologies:
Spent caustics are malodorous wastewaters that are difficult to treat in conventional wastewater processes. Typically the material is disposed of by high dilution with biotreatment, deep well injection, incineration, wet air oxidation, Humid Peroxide Oxidation or other speciality processes. Most ethylene spent caustics are disposed of through wet air oxidation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Histamine intolerance**
Histamine intolerance:
Histamine intolerance, sometimes called histaminosis, is an over-accumulation of dietary histamine in the human body. Histamine intolerance is sometimes informally called an allergy; however, the intolerance is technically caused by the gradual accumulation of extracellular histamine due to an imbalance.
Roughly 1% of the population has histamine intolerance; of those, 80% are middle-aged.
General:
The imbalance in histamine intolerance is between the synthesis and selective release of histamine from certain granulocytes (i.e., mast cells and basophils), versus the breakdown of histamine by the enzymes which metabolize it, such as diamine oxidase (DAO) and histamine N-methyltransferase (HNMT). In contrast, allergic reactions involving an immediate allergic response to an allergen are caused by anaphylactic degranulation, which is the abrupt and explosive release of "pre-formed mediators", including histamine, from mast cells and basophils throughout the body.
Symptoms:
Possible symptoms after ingestion of histamine-rich food include:
- Skin rash, hives, eczema, itching
- Headache, flushing, migraine, dizziness
- Narrowed or runny nose, difficulty breathing, bronchial asthma, sore throat
- Bloating, diarrhea, constipation, nausea/vomiting, abdominal pain, stabbing stomach pain, heartburn
- High blood pressure (hypertension), tachycardia, cardiac arrhythmias, low blood pressure (hypotension)
- Menstrual disorders (dysmenorrhea), cystitis, urethritis and mucosal irritation of the female genitalia
- Water retention (edema), bone marrow edema (BME), joint pain
- Fatigue, seasickness, tiredness, sleep disorders
- Confusion, nervousness, depressive moods
Metabolism:
In the human body, histamine is metabolized extracellularly by the enzyme diamine oxidase (DAO), and intracellularly by histamine N-methyltransferase (HNMT) and aldehyde oxidases (AOX1). In histamine intolerance, the activity of DAO is limited, and histamine taken up by the diet and formed in the body is only partially metabolized. The consumption of histamine-containing food (e.g., red wine or hard cheese) leads to a pseudoallergic reaction. It is unclear how histamine passes through the intestinal wall during absorption and enters the blood without coming into contact with the aldehyde oxidases expressed in intestinal cells and histamine N-methyltransferases.
Potentially harmful foods:
The following food categories have been quoted in literature as histamine rich:
- Meat and fish: fish products, especially canned fish; ham; offal; pork; salami; smoked meat; other seafood
- Dairy: matured ("hard") cheeses – the higher degree of ripeness, the higher the histamine content
- Alcohol: beer (especially top-fermented and cloudy/colored); some French Champagne (made partially with red grapes); red wine
- Tobacco: active or passive exposure to tobacco smoke is suspected of favouring histamine intolerance, but has not been adequately studied.
Potentially harmful foods:
- Fruits, vegetables, legumes and roots: avocado; bamboo sprouts; beans; citrus fruits; eggplant; horseradish; mushrooms; papayas; plums; raisins; sauerkraut; spinach; strawberries; tomatoes
- Molds (e.g. noble mold from cheeses and salamis)
- Other: chocolate (chocolate itself does not contain histamine, but it does contain cocoa, which blocks the function of the histamine-clearing enzyme DAO); nuts; products with vinegar, such as pickles or mustard; soy and soy products (e.g., tofu)
(This list is drawn from the German Wikipedia article on histamine intolerance. It has been further expanded using Verträglichkeit von histaminhaltigen Lebensmitteln (PDF; 28 kB).)
Drug interactions:
Some medicines or so-called histamine-liberators (e.g., certain food additives) may delay the breakdown of histamine, or release histamine in the body.
Alcohol consumption increases the permeability of the cell membrane and thus lowers the histamine tolerance limit, which is why particularly strong reactions can occur when mixing alcohol and histamine-rich foods (e.g., red wine and cheese).
Drug interactions:
Incompatibility of anti-inflammatory and analgesic medications in persons with histamine intolerance: anti-inflammatory/analgesic drugs that increase allergen-specific histamine release in allergy sufferers are reaction-inducing, whereas anti-inflammatory/analgesic drugs that inhibit allergen-specific histamine release in people with allergies are not reaction-inducing.[8] Contrast agents – X-ray contrast "allergy": according to R. Jarisch, the contrast reaction is misleadingly referred to as an allergy and, because contrast media contain iodine, is almost always mistaken for iodine allergy: "Contrast agents release histamine. The reason why, in most cases, nothing happens when administering contrast media is that most patients have no histamine intolerance. But if a patient reacts, anaphylactic shock is inevitable." For safety reasons, an antihistamine should always be given to people with histamine intolerance prior to examination with an X-ray contrast medium. In addition, adherence to a histamine-free diet for 24 hours before X-ray studies with contrast agents is recommended to minimize histamine exposure (pp. 127–128 in [8]).
Diagnosis:
For a diagnosis, the case history is essential. However, since many complaints such as headaches, migraines, bronchial asthma, hypotension, arrhythmia and dysmenorrhea (painful periods) may be caused by something other than histamine intolerance, it is not surprising that half of suspected diagnoses are not confirmed. The diagnosis is usually made by intentionally provoking a reaction. However, since histamine can potentially cause life-threatening conditions, the following procedure is preferred: take blood samples before and after a 14-day diet, and measure changes in histamine and diamine oxidase (DAO) levels; rather than increasing histamine during the test diet, eliminate it. This procedure does not endanger the patient. Quite the contrary: in the presence of histamine intolerance, the symptoms improve or disappear completely. At the same time, the histamine blood level halves and the DAO increases significantly. If there is no histamine intolerance, the blood levels do not change and neither do the symptoms. Simultaneously, food allergy, cross-reactions with pollen, fructose malabsorption, lactose intolerance, and celiac disease should be excluded.
Therapy:
The basis of treatment is a reduction of dietary histamine through a histamine-poor diet. Certain foods (e.g., citrus fruits) and certain medicines (e.g., morphine) which do not contain histamine per se are also to be avoided, because they are known to release histamine stored in the body (histamine liberation). If eating histamine-containing foods is unavoidable, antihistamines and cromolyn sodium may be effective. The intake of diamine oxidase (DAO) in capsule form with meals may reduce the symptoms of histamine intolerance. In cases of high blood glutamate, such as can occur in some cases of eczema and histamine intolerance, Reinhart Jarisch recommends vitamin B6 treatment. This promotes the body's own synthesis of DAO and thus counteracts the effects of histamine intolerance. The reference ranges (normal values) for blood glutamic acid are 20–107 μmol/ml in infants, 18–65 μmol/ml in children and 28–92 μmol/ml in adults.
Literature:
Abbot, Lieners, Mayer, Missbichler, Pfisterer, Schmutz: Nahrungsmittelunverträglichkeit (Histaminintoleranz). HSC, Mauerbach 2006, ISBN 3-9502287-0-5.
Reinhart Jarisch: Histamin-Intoleranz, Histamin und Seekrankheit. Thieme 2004, ISBN 3-13-105382-8.
Nadja Schäfers: Histaminarm kochen – vegetarisch. pala-Verlag, Darmstadt 2009, ISBN 978-3-89566-263-8.
Anja Völkel: Gesunde Küche: bewusst genießen – schmackhaft & lecker. AVA-Verlag, 2013, ISBN 978-3-944321-13-4.
I. Reese: Streitthema Histaminintoleranz. (CME zertifizierte Fortbildung) In: Der Hautarzt. 65, 2014, S. 559–566, doi:10.1007/s00105-014-2815-2. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**KakaoStyle**
KakaoStyle:
KakaoStyle (Hangul: 카카오스타일) is a mobile application that curates fashion content from various sources. KakaoTalk users are able to check various fashion trends with the app and see what their friends are also interested in. The app can also give suggestions and options to purchase the clothing. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Service Update Management Assistant**
Service Update Management Assistant:
The Service Update Management Assistant (SUMA) automates the update process for the AIX operating system by retrieving maintenance updates from IBM. Without extensive configuration it is capable of automatically downloading, when available, entire maintenance levels and the latest security updates. It is also capable of comparisons against currently installed software, fix repositories and maintenance levels.
SUMA is capable of e-mail notification for currently available downloads.
History:
SUMA was introduced in AIX 5L Version 5.3. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Triosmium dodecacarbonyl**
Triosmium dodecacarbonyl:
Triosmium dodecacarbonyl is a chemical compound with the formula Os3(CO)12. This yellow-colored metal carbonyl cluster is an important precursor to organo-osmium compounds. Many of the advances in cluster chemistry have arisen from studies on derivatives of Os3(CO)12 and its lighter analogue Ru3(CO)12.
Structure and synthesis:
The cluster has D3h symmetry, consisting of an equilateral triangle of Os atoms, each of which bears two axial and two equatorial CO ligands. Each of the three osmium centers has an octahedral coordination environment comprising four CO ligands and the other two osmium atoms. The Os–Os bond distance is 2.88 Å (288 pm). Ru3(CO)12 has the same structure, whereas Fe3(CO)12 is different, with two bridging CO ligands resulting in C2v symmetry.
Structure and synthesis:
Os3(CO)12 is prepared by the direct reaction of OsO4 with carbon monoxide at 175 °C under high pressure: 3 OsO4 + 24 CO → Os3(CO)12 + 12 CO2. The yield is nearly quantitative.
Reactions:
Many chemical reactions of Os3(CO)12 have been examined. Direct reactions of ligands with the cluster often lead to complex product distributions. Os3(CO)12 converts to more labile derivatives such as Os3(CO)11(MeCN) and Os3(CO)10(MeCN)2 using Me3NO as a decarbonylating agent: Os3(CO)12 + (CH3)3NO + CH3CN → Os3(CO)11(CH3CN) + CO2 + (CH3)3N; Os3(CO)11(CH3CN) + (CH3)3NO + CH3CN → Os3(CO)10(CH3CN)2 + CO2 + (CH3)3N. Os3(CO)11(MeCN) reacts with a variety of even weakly basic ligands to form adducts.
Reactions:
Purging a solution of triosmium dodecacarbonyl in boiling octane (or a similar inert solvent with a comparable boiling point) with H2 gives the dihydride Os3H2(CO)10: Os3(CO)12 + H2 → Os3H2(CO)10 + 2 CO. Osmium pentacarbonyl is obtained by treating solid triosmium dodecacarbonyl with 200 atmospheres of carbon monoxide at 280–290 °C.
Os3(CO)12 + 3 CO → 3 Os(CO)5 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**E-Sampark**
E-Sampark:
e-Sampark is a mechanism used by the Government of India to contact citizens electronically and is a part of the Digital India campaign. The name is derived from the Hindi word sampark, meaning contact. The main features are: sending informational and public service messages via e-mails, SMSs and outbound dialing; usage of customised user lists; sending SMSs via the smartphone application; and the option for individuals to subscribe to the e-Sampark database, etc. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Electrical load**
Electrical load:
An electrical load is an electrical component or portion of a circuit that consumes (active) electric power, such as electrical appliances and lights inside the home. The term may also refer to the power consumed by a circuit. This is opposed to a power source, such as a battery or generator, which produces power. The term is used more broadly in electronics for a device connected to a signal source, whether or not it consumes power. If an electric circuit has an output port, a pair of terminals that produces an electrical signal, the circuit connected to this terminal (or its input impedance) is the load. For example, if a CD player is connected to an amplifier, the CD player is the source, and the amplifier is the load. Load affects the performance of circuits with respect to output voltages or currents, such as in sensors, voltage sources, and amplifiers. Mains power outlets provide an easy example: they supply power at constant voltage, with electrical appliances connected to the power circuit collectively making up the load. When a high-power appliance switches on, it dramatically reduces the load impedance.
Electrical load:
The voltages will drop if the load impedance is not much higher than the power supply impedance. Therefore, switching on a heating appliance in a domestic environment may cause incandescent lights to dim noticeably.
A more technical approach:
When discussing the effect of load on a circuit, it is helpful to disregard the circuit's actual design and consider only the Thévenin equivalent. (The Norton equivalent could be used instead, with the same results.) With no load (open-circuited terminals), all of VS falls across the output; the output voltage is VS. However, the circuit will behave differently if a load is added. Therefore, we would like to ignore the details of the load circuit, as we did for the power supply, and represent it as simply as possible, for example as an input resistance RL. Whereas the voltage source by itself was an open circuit, adding the load makes a closed circuit and allows charge to flow. This current places a voltage drop across RS, so the voltage at the output terminal is no longer VS. The output voltage can be determined by the voltage division rule: VOUT = VS · RL / (RL + RS). If the source resistance is not negligibly small compared to the load impedance, the output voltage will fall.
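To make the voltage-division rule concrete, here is a small Python sketch; the source voltage and resistor values are illustrative assumptions, not taken from the article. It shows the output voltage sagging as the load impedance approaches the source impedance.

```python
def loaded_output_voltage(v_source: float, r_source: float, r_load: float) -> float:
    """Output voltage once a resistive load is attached:
    V_OUT = V_S * R_L / (R_L + R_S) (voltage-division rule)."""
    return v_source * r_load / (r_load + r_source)

# Illustrative values (assumed, not from the article):
v_s = 12.0   # Thevenin source voltage in volts
r_s = 0.5    # source (Thevenin) resistance in ohms
for r_l in (1000.0, 10.0, 1.0):  # progressively heavier loads
    print(f"R_L = {r_l:7.1f} ohm -> V_OUT = {loaded_output_voltage(v_s, r_s, r_l):.3f} V")
# The output stays near 12 V while R_L >> R_S and drops as R_L approaches R_S,
# mirroring the dimming-lights example described above.
```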
A more technical approach:
This illustration uses simple resistances, but a similar discussion can be applied in alternating current circuits using resistive, capacitive, and inductive elements. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Perinatal matrices**
Perinatal matrices:
Perinatal matrices, or basic perinatal matrices, in pre- and perinatal psychology and transpersonal psychology, are a theoretical model describing the state of awareness before and during birth.
In the context of perinatal psychology, perinatal matrices refer to the psychological and emotional experiences and imprints that occur during the prenatal and birth process. It is believed that these early experiences have a significant impact on an individual's development, personality, and well-being throughout their life.
Perinatal matrices:
Perinatal matrices are influenced by various factors, including the mother's emotional state during pregnancy, the quality of the prenatal environment, the birthing process, and the early bonding between the mother and child. It is believed that negative experiences or traumas during this period can create imprints in the individual's psyche that may manifest as emotional or behavioral patterns later in life.
Perinatal matrices:
Understanding and working with perinatal matrices is an important aspect of prenatal and perinatal psychology. Therapists and practitioners in this field aim to help individuals identify and heal any unresolved issues or traumas from the perinatal period, as well as support healthy bonding and attachment between parents and their newborns.
Perinatal matrices:
The study and exploration of perinatal matrices provide valuable insights into the impact of early experiences on human development and the potential for healing and growth through addressing these early imprints. It contributes to the broader field of psychology, helping individuals and professionals gain a deeper understanding of the complexities of human nature and the importance of early life experiences. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**KZ Andromedae**
KZ Andromedae:
KZ Andromedae (often abbreviated to KZ And) is a double lined spectroscopic binary in the constellation Andromeda. Its apparent visual magnitude varies between 7.91 and 8.03 during a cycle slightly longer than 3 days.
System:
Both stars in the KZ Andromedae system are main sequence stars of spectral type K2Ve, meaning that their spectra show strong emission lines. This is caused by their active chromospheres, which produce large spots on the surface. KZ Andromedae is listed in the Washington Double Star Catalog as the secondary component of a visual binary system, with the primary being HD 218739. In 50 years of observations, there is little evidence of relative motion between the two stars; however, they have a common proper motion and a similar radial velocity.
Variability:
The rotational velocity of both stars is consistent with synchronous rotation of the pair, and the rotational period is itself comparable to the brightness variation period. KZ Andromedae is thus classified as a BY Draconis variable, and the variability is caused by the large spots on the surface. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Calreticulin protein family**
Calreticulin protein family:
In molecular biology, the calreticulin protein family is a family of calcium-binding proteins. This family includes calreticulin, calnexin and calmegin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Metamfepramone**
Metamfepramone:
Metamfepramone (INN, also known as dimethylcathinone, dimethylpropion, and dimepropion (BAN)) is a stimulant drug of the phenethylamine and cathinone chemical classes. Dimethylcathinone was evaluated as an appetite suppressant and for the treatment of hypotension, but was never widely marketed. It was used as a recreational drug in Israel under the name rakefet, but was made illegal in 2006. Metamfepramone is metabolized to produce N-methylpseudoephedrine and methcathinone. It has also been found to be about 1.6 times less potent than methcathinone, making it roughly equipotent to cathinone itself.
Legality:
In the United States, metamfepramone (N,N-dimethylcathinone) is considered a Schedule I controlled substance as a positional isomer of mephedrone. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Luchacovirus**
Luchacovirus:
Luchacovirus is a subgenus of viruses in the genus Alphacoronavirus, consisting of a single species, Lucheng Rn rat coronavirus. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Opigolix**
Opigolix:
Opigolix (INN, USAN; developmental code name ASP-1707) is a small-molecule, non-peptide, orally active gonadotropin-releasing hormone antagonist (GnRH antagonist) which was under development by Astellas Pharma for the treatment of endometriosis and rheumatoid arthritis. It was also under investigation for the treatment of prostate cancer. It reached phase II clinical trials for both endometriosis and rheumatoid arthritis prior to the discontinuation of its development in April 2018. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Suffix automaton**
Suffix automaton:
In computer science, a suffix automaton is an efficient data structure for representing the substring index of a given string which allows the storage, processing, and retrieval of compressed information about all its substrings. The suffix automaton of a string S is the smallest directed acyclic graph with a dedicated initial vertex and a set of "final" vertices, such that paths from the initial vertex to final vertices represent the suffixes of the string.
Suffix automaton:
In terms of automata theory, a suffix automaton is the minimal partial deterministic finite automaton that recognizes the set of suffixes of a given string S=s1s2…sn . The state graph of a suffix automaton is called a directed acyclic word graph (DAWG), a term that is also sometimes used for any deterministic acyclic finite state automaton.
Suffix automaton:
Suffix automata were introduced in 1983 by a group of scientists from the University of Denver and the University of Colorado Boulder. They suggested a linear time online algorithm for its construction and showed that the suffix automaton of a string S having length at least two characters has at most 2|S| − 1 states and at most 3|S| − 4 transitions. Further works have shown a close connection between suffix automata and suffix trees, and have outlined several generalizations of suffix automata, such as compacted suffix automaton obtained by compression of nodes with a single outgoing arc.
Suffix automaton:
Suffix automata provide efficient solutions to problems such as substring search and computation of the largest common substring of two and more strings.
History:
The concept of suffix automaton was introduced in 1983 by a group of scientists from University of Denver and University of Colorado Boulder consisting of Anselm Blumer, Janet Blumer, Andrzej Ehrenfeucht, David Haussler and Ross McConnell, although similar concepts had earlier been studied alongside suffix trees in the works of Peter Weiner, Vaughan Pratt and Anatol Slissenko. In their initial work, Blumer et al. showed a suffix automaton built for the string S of length greater than 1 has at most 2|S| − 1 states and at most 3|S| − 4 transitions, and suggested a linear algorithm for automaton construction. In 1983, Mu-Tian Chen and Joel Seiferas independently showed that Weiner's 1973 suffix-tree construction algorithm, while building a suffix tree of the string S, constructs a suffix automaton of the reversed string S^R as an auxiliary structure. In 1987, Blumer et al. applied the compressing technique used in suffix trees to a suffix automaton and invented the compacted suffix automaton, which is also called the compacted directed acyclic word graph (CDAWG). In 1997, Maxime Crochemore and Renaud Vérin developed a linear algorithm for direct CDAWG construction. In 2001, Shunsuke Inenaga et al. developed an algorithm for construction of CDAWG for a set of words given by a trie.
Definitions:
Usually when speaking about suffix automata and related concepts, some notions from formal language theory and automata theory are used, in particular:
- An "alphabet" is a finite set Σ that is used to construct words. Its elements are called "characters".
- A "word" is a finite sequence of characters ω = ω1ω2…ωn. The "length" of the word ω is denoted as |ω| = n.
- A "formal language" is a set of words over a given alphabet.
- The "language of all words" is denoted as Σ* (where the "*" character stands for Kleene star); the "empty word" (the word of zero length) is denoted by the character ε.
- The "concatenation of words" α = α1α2…αn and β = β1β2…βm is denoted as α·β or αβ and corresponds to the word obtained by writing β to the right of α, that is, αβ = α1α2…αnβ1β2…βm.
- The "concatenation of languages" A and B is denoted as A·B or AB and corresponds to the set of pairwise concatenations AB = {αβ : α ∈ A, β ∈ B}.
- If the word ω ∈ Σ* may be represented as ω = αγβ, where α, β, γ ∈ Σ*, then the words α, β and γ are called "prefix", "suffix" and "subword" (substring) of the word ω correspondingly.
- If T = T1…Tn and TlTl+1…Tr = S (with 1 ≤ l ≤ r ≤ n), then S is said to "occur" in T as a subword. Here l and r are called left and right positions of occurrence of S in T correspondingly.
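As a concrete illustration of these notions, the following short Python sketch (the example word is an arbitrary assumption made here, not part of the article) enumerates the prefixes and suffixes of a word and finds the 1-based left/right positions of the occurrences of a subword.

```python
def occurrences(t: str, s: str):
    """All (l, r) position pairs, 1-based and inclusive, such that t[l..r] == s."""
    n, m = len(t), len(s)
    return [(l, l + m - 1) for l in range(1, n - m + 2) if t[l - 1:l - 1 + m] == s]

T = "abacaba"                                   # arbitrary example word
prefixes = [T[:i] for i in range(len(T) + 1)]   # includes the empty word
suffixes = [T[i:] for i in range(len(T) + 1)]
print(prefixes)
print(suffixes)
print(occurrences(T, "aba"))                    # [(1, 3), (5, 7)]: "aba" occurs twice in T
```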
Automaton structure:
Formally, a deterministic finite automaton is determined by a 5-tuple A = (Σ, Q, q0, F, δ), where:
- Σ is an "alphabet" that is used to construct words,
- Q is a set of automaton "states",
- q0 ∈ Q is the "initial" state of the automaton,
- F ⊂ Q is a set of "final" states of the automaton,
- δ : Q × Σ ↦ Q is a partial "transition" function of the automaton, such that δ(q, σ) for q ∈ Q and σ ∈ Σ is either undefined or defines a transition from q over the character σ.
Most commonly, a deterministic finite automaton is represented as a directed graph ("diagram") such that:
- the set of graph vertices corresponds to the set of states Q,
- the graph has a specific marked vertex corresponding to the initial state q0,
- the graph has several marked vertices corresponding to the set of final states F,
- the set of graph arcs corresponds to the set of transitions δ.
Specifically, every transition δ(q1, σ) = q2 is represented by an arc from q1 to q2 marked with the character σ. This transition may also be denoted as q1 →σ q2. In terms of its diagram, the automaton recognizes the word ω = ω1ω2…ωm only if there is a path from the initial vertex q0 to some final vertex q ∈ F such that the concatenation of characters on this path forms ω. The set of words recognized by an automaton forms a language that is said to be recognized by the automaton. In these terms, the language recognized by a suffix automaton of S is the language of its (possibly empty) suffixes.
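To make the 5-tuple concrete, here is a minimal Python sketch of a partial deterministic automaton with a dictionary-based transition function. The example automaton, recognizing the suffixes of "ab", is hand-built here for illustration and is an assumption of this sketch, not drawn from the article.

```python
class DFA:
    """Partial deterministic finite automaton A = (Sigma, Q, q0, F, delta)."""
    def __init__(self, q0, final, delta):
        self.q0 = q0             # initial state
        self.final = set(final)  # set F of final states
        self.delta = delta       # dict: (state, character) -> state (partial function)

    def recognizes(self, word: str) -> bool:
        state = self.q0
        for ch in word:
            if (state, ch) not in self.delta:
                return False      # transition undefined: word rejected
            state = self.delta[(state, ch)]
        return state in self.final

# Suffix automaton of "ab" (three states, matching the 2|S| - 1 bound):
suffixes_of_ab = DFA(q0=0, final={0, 2},
                     delta={(0, 'a'): 1, (1, 'b'): 2, (0, 'b'): 2})
print([w for w in ("", "a", "b", "ab", "ba") if suffixes_of_ab.recognizes(w)])
# -> ['', 'b', 'ab']  (exactly the suffixes of "ab")
```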
Automaton structure:
Automaton states "Right context" of the word ω with respect to language L is a set [ω]R={α:ωα∈L} that is a set of words α such that their concatenation with ω forms a word from L . Right contexts induce a natural equivalence relation [α]R=[β]R on the set of all words. If language L is recognized by some deterministic finite automaton, there exists unique up to isomorphism automaton that recognizes the same language and has the minimum possible number of states. Such an automaton is called a minimal automaton for the given language L . Myhill–Nerode theorem allows it to define it explicitly in terms of right contexts: In these terms, a "suffix automaton" is the minimal deterministic finite automaton recognizing the language of suffixes of the word S=s1s2…sn . The right context of the word ω with respect to this language consists of words α , such that ωα is a suffix of S . It allows to formulate the following lemma defining a bijection between the right context of the word and the set of right positions of its occurrences in S For example, for the word S=abacaba and its subword ω=ab , it holds endpos(ab)={2,6} and [ab]R={a,acaba} . Informally, [ab]R is formed by words that follow occurrences of ab to the end of S and endpos(ab) is formed by right positions of those occurrences. In this example, the element x=2∈endpos(ab) corresponds with the word s3s4s5s6s7=acaba∈[ab]R while the word a∈[ab]R corresponds with the element 7−|a|=6∈endpos(ab) It implies several structure properties of suffix automaton states. Let |α|≤|β| , then: If [α]R and [β]R have at least one common element x , then endpos(α) and endpos(β) have a common element as well. It implies α is a suffix of β and therefore endpos(β)⊂endpos(α) and [β]R⊂[α]R . In aforementioned example, a∈[ab]R∩[cab]R , so ab is a suffix of cab and thus [cab]R={a}⊂{a,acaba}=[ab]R and endpos(cab)={6}⊂{2,6}=endpos(ab) If [α]R=[β]R , then endpos(α)=endpos(β) , thus α occurs in S only as a suffix of β . For example, for α=b and β=ab it holds that [b]R=[ab]R={a,acaba} and endpos(b)=endpos(ab)={2,6} If [α]R=[β]R and γ is a suffix of β such that |α|≤|γ|≤|β| , then [α]R=[γ]R=[β]R . In the example above [c]R=[bac]R={aba} and it holds for "intermediate" suffix γ=ac that [ac]R={aba} .Any state q=[α]R of the suffix automaton recognizes some continuous chain of nested suffixes of the longest word recognized by this state."Left extension" γ← of the string γ is the longest string ω that has the same right context as γ . Length |γ←| of the longest string recognized by q=[γ]R is denoted by len(q) . It holds: "Suffix link" link(q) of the state q=[α]R is the pointer to the state p that contains the largest suffix of α that is not recognized by q In this terms it can be said q=[α]R recognizes exactly all suffixes of α← that is longer than len(link(q)) and not longer than len(q) . It also holds: Connection with suffix trees A "prefix tree" (or "trie") is a rooted directed tree in which arcs are marked by characters in such a way no vertex v of such tree has two out-going arcs marked with the same character. Some vertices in trie are marked as final. Trie is said to recognize a set of words defined by paths from its root to final vertices. In this way prefix trees are a special kind of deterministic finite automata if you perceive its root as an initial vertex. The "suffix trie" of the word S is a prefix tree recognizing a set of its suffixes. 
"A suffix tree" is a tree obtained from a suffix trie via the compaction procedure, during which consequent edges are merged if the degree of the vertex between them is equal to two.By its definition, a suffix automaton can be obtained via minimization of the suffix trie. It may be shown that a compacted suffix automaton is obtained by both minimization of the suffix tree (if one assumes each string on the edge of the suffix tree is a solid character from the alphabet) and compaction of the suffix automaton. Besides this connection between the suffix tree and the suffix automaton of the same string there is as well a connection between the suffix automaton of the string S=s1s2…sn and the suffix tree of the reversed string SR=snsn−1…s1 .Similarly to right contexts one may introduce "left contexts" [ω]L={β∈Σ∗:βω∈L} , "right extensions" ω→ corresponding to the longest string having same left context as ω and the equivalence relation [α]L=[β]L . If one considers right extensions with respect to the language L of "prefixes" of the string S it may be obtained: , which implies the suffix link tree of the string S and the suffix tree of the string SR are isomorphic:Similarly to the case of left extensions, the following lemma holds for right extensions: Size A suffix automaton of the string S of length n>1 has at most 2n−1 states and at most 3n−4 transitions. These bounds are reached on strings abb…bb=abn−1 and abb…bc=abn−2c correspondingly. This may be formulated in a stricter way as |δ|≤|Q|+n−2 where |δ| and |Q| are the numbers of transitions and states in automaton correspondingly.
Construction:
Initially the automaton only consists of a single state corresponding to the empty word, then characters of the string are added one by one and the automaton is rebuilt on each step incrementally.
Construction:
State updates After a new character is appended to the string, some equivalence classes are altered. Let [α]Rω be the right context of α with respect to the language of ω suffixes. Then the transition from [α]Rω to [α]Rωx after x is appended to ω is defined by lemma: After adding x to the current word ω the right context of α may change significantly only if α is a suffix of ωx . It implies equivalence relation ≡Rωx is a refinement of ≡Rω . In other words, if [α]Rωx=[β]Rωx , then [α]Rω=[β]Rω . After the addition of a new character at most two equivalence classes of ≡Rω will be split and each of them may split in at most two new classes. First, equivalence class corresponding to empty right context is always split into two equivalence classes, one of them corresponding to ωx itself and having {ε} as a right context. This new equivalence class contains exactly ωx and all its suffixes that did not occur in ω , as the right context of such words was empty before and contains only empty word now.Given the correspondence between states of the suffix automaton and vertices of the suffix tree, it is possible to find out the second state that may possibly split after a new character is appended. The transition from ω to ωx corresponds to the transition from ωR to xωR in the reversed string. In terms of suffix trees it corresponds to the insertion of the new longest suffix xωR into the suffix tree of ωR . At most two new vertices may be formed after this insertion: one of them corresponding to xωR , while the other one corresponds to its direct ancestor if there was a branching. Returning to suffix automata, it means the first new state recognizes ωx and the second one (if there is a second new state) is its suffix link. It may be stated as a lemma: It implies that if α=β (for example, when x didn't occur in ω at all and α=β=ε ), then only the equivalence class corresponding to the empty right context is split.Besides suffix links it is also needed to define final states of the automaton. It follows from structure properties that all suffixes of a word α recognized by q=[α]R are recognized by some vertex on suffix path (q,link(q),link2(q),…) of q . Namely, suffixes with length greater than len(link(q)) lie in q , suffixes with length greater than len(link(link(q)) but not greater than len(link(q)) lie in link(q) and so on. Thus if the state recognizing ω is denoted by last , then all final states (that is, recognizing suffixes of ω ) form up the sequence (last,link(last),link2(last),…) Transitions and suffix links updates After the character x is appended to ω possible new states of suffix automaton are [ωx]Rωx and [α]Rωx . Suffix link from [ωx]Rωx goes to [α]Rωx and from [α]Rωx it goes to link([α]Rω) . Words from [ωx]Rωx occur in ωx only as its suffixes therefore there should be no transitions at all from [ωx]Rωx while transitions to it should go from suffixes of ω having length at least α and be marked with the character x . State [α]Rωx is formed by subset of [α]Rω , thus transitions from [α]Rωx should be same as from [α]Rω . Meanwhile, transitions leading to [α]Rωx should go from suffixes of ω having length less than |α| and at least len(link([α]Rω)) , as such transitions have led to [α]Rω before and corresponded to seceded part of this state. 
States corresponding to these suffixes may be determined via traversal of suffix link path for [ω]Rω Construction algorithm Theoretical results above lead to the following algorithm that takes character x and rebuilds the suffix automaton of ω into the suffix automaton of ωx The state corresponding to the word ω is kept as last After x is appended, previous value of last is stored in the variable p and last itself is reassigned to the new state corresponding to ωx States corresponding to suffixes of ω are updated with transitions to last . To do this one should go through p,link(p),link2(p),… , until there is a state that already has a transition by x Once the aforementioned loop is over, there are 3 cases: If none of states on the suffix path had a transition by x , then x never occurred in ω before and the suffix link from last should lead to q0 If the transition by x is found and leads from the state p to the state q , such that len(p)+1=len(q) , then q does not have to be split and it is a suffix link of last If the transition is found but len(q)>len(p)+1 , then words from q having length at most len(p)+1 should be segregated into new "clone" state cl If the previous step was concluded with the creation of cl , transitions from it and its suffix link should copy those of q , at the same time cl is assigned to be common suffix link of both q and last Transitions that have led to q before but corresponded to words of the length at most len(p)+1 are redirected to cl . To do this, one continues going through the suffix path of p until the state is found such that transition by x from it doesn't lead to q .The whole procedure is described by the following pseudo-code: function add_letter(x): define p = last assign last = new_state() assign len(last) = len(p) + 1 while δ(p, x) is undefined: assign δ(p, x) = last, p = link(p) define q = δ(p, x) if q = last: assign link(last) = q0 else if len(q) = len(p) + 1: assign link(last) = q else: define cl = new_state() assign len(cl) = len(p) + 1 assign δ(cl) = δ(q), link(cl) = link(q) assign link(last) = link(q) = cl while δ(p, x) = q: assign δ(p, x) = cl, p = link(p) Here q0 is the initial state of the automaton and new_state() is a function creating new state for it. It is assumed last , len , link and δ are stored as global variables.
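For readers who prefer runnable code, here is a compact Python transcription of the online construction described above. It is a sketch: the use of a -1 sentinel for the missing suffix link of q0 and the dictionary-based transition storage are implementation choices made here, not part of the algorithm's specification.

```python
class SuffixAutomaton:
    """Online suffix automaton construction, following the scheme described above."""
    def __init__(self, s: str = ""):
        # State 0 is the initial state q0.
        self.len = [0]      # len(q): length of the longest word recognized by q
        self.link = [-1]    # suffix links; -1 marks "no link" for q0
        self.delta = [{}]   # per-state transition dictionaries
        self.last = 0       # state recognizing the whole current word
        for ch in s:
            self.add_letter(ch)

    def add_letter(self, x: str) -> None:
        cur = len(self.len)
        self.len.append(self.len[self.last] + 1)
        self.link.append(-1)
        self.delta.append({})
        p = self.last
        while p != -1 and x not in self.delta[p]:
            self.delta[p][x] = cur       # redirect suffixes of the old word to the new state
            p = self.link[p]
        if p == -1:                      # x never occurred before
            self.link[cur] = 0
        else:
            q = self.delta[p][x]
            if self.len[q] == self.len[p] + 1:
                self.link[cur] = q       # q need not be split
            else:                        # split q: create its clone
                clone = len(self.len)
                self.len.append(self.len[p] + 1)
                self.link.append(self.link[q])
                self.delta.append(dict(self.delta[q]))
                while p != -1 and self.delta[p].get(x) == q:
                    self.delta[p][x] = clone
                    p = self.link[p]
                self.link[q] = self.link[cur] = clone
        self.last = cur

    def accepts(self, t: str) -> bool:
        """True iff t is a substring of the underlying string."""
        q = 0
        for ch in t:
            if ch not in self.delta[q]:
                return False
            q = self.delta[q][ch]
        return True

sa = SuffixAutomaton("abacaba")
print(sa.accepts("acab"), sa.accepts("abc"))  # True False
print(len(sa.len))                            # number of states, at most 2*7 - 1
```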
Construction:
Complexity
Complexity of the algorithm may vary depending on the underlying structure used to store transitions of the automaton. It may be implemented in O(n log |Σ|) with O(n) memory overhead, or in O(n) with O(n|Σ|) memory overhead if one assumes that memory allocation is done in O(1). To obtain such complexity, one has to use the methods of amortized analysis. The value of len(p) strictly decreases with each iteration of the cycle, while it may only increase by as much as one after the first iteration of the cycle on the next add_letter call. The overall value of len(p) never exceeds n, and it is only increased by one between iterations of appending new letters, which suggests the total complexity is at most linear as well. The linearity of the second cycle is shown in a similar way.
Generalizations:
The suffix automaton is closely related to other suffix structures and substring indices. Given a suffix automaton of a specific string, one may construct its suffix tree via compacting and recursive traversal in linear time. Similar transforms are possible in both directions to switch between the suffix automaton of S and the suffix tree of the reversed string S^R. Other than this, several generalizations were developed: to construct an automaton for a set of strings given by a trie, a compacted suffix automaton (CDAWG), to maintain the structure of the automaton on a sliding window, and to construct it in a bidirectional way, supporting the insertion of characters to both the beginning and the end of the string.
Generalizations:
Compacted suffix automaton
As was already mentioned above, a compacted suffix automaton is obtained via both compaction of a regular suffix automaton (by removing states which are non-final and have exactly one out-going arc) and the minimization of a suffix tree. Similarly to the regular suffix automaton, states of the compacted suffix automaton may be defined in an explicit manner. A two-way extension γ↔ of a word γ is the longest word ω = βγα, such that every occurrence of γ in S is preceded by β and succeeded by α. In terms of left and right extensions it means that the two-way extension is the left extension of the right extension or, equivalently, the right extension of the left extension, that is γ↔ = (γ→)← = (γ←)→. In terms of two-way extensions the compacted automaton is defined as follows: two-way extensions induce an equivalence relation α↔ = β↔ which defines the set of words recognized by the same state of the compacted automaton. This equivalence relation is the transitive closure of the relation defined by (α→ = β→) ∨ (α← = β←), which highlights the fact that a compacted automaton may be obtained both by gluing suffix tree vertices equivalent via the α← = β← relation (minimization of the suffix tree) and by gluing suffix automaton states equivalent via the α→ = β→ relation (compaction of the suffix automaton). If words α and β have the same right extensions, and words β and γ have the same left extensions, then cumulatively all strings α, β and γ have the same two-way extensions. At the same time it may happen that neither the left nor the right extensions of α and γ coincide. As an example one may take S = β = ab, α = a and γ = b, for which the left and right extensions are as follows: α→ = β→ = ab = β← = γ←, but γ→ = b and α← = a. That being said, while the equivalence relations of one-way extensions were formed by some continuous chain of nested prefixes or suffixes, the equivalence relations of bidirectional extensions are more complex and the only thing one may conclude for sure is that strings with the same two-way extension are substrings of the longest string having that two-way extension, but it may even happen that they don't have any non-empty substring in common. The total number of equivalence classes for this relation does not exceed n + 1, which implies that the compacted suffix automaton of a string having length n has at most n + 1 states. The number of transitions in such an automaton is at most 2n − 2.
Suffix automaton of several strings
Consider a set of words T = {S1, S2, …, Sk}. It is possible to construct a generalization of the suffix automaton that recognizes the language formed by the suffixes of all words from the set. Constraints for the number of states and transitions in such an automaton would stay the same as for a single-word automaton if one puts n = |S1| + |S2| + ⋯ + |Sk|.
The algorithm is similar to the construction of the single-word automaton, except that instead of the last state, the function add_letter works with the state corresponding to the word ωi, assuming the transition from the set of words {ω1, …, ωi, …, ωk} to the set {ω1, …, ωix, …, ωk}. This idea is further generalized to the case when T is not given explicitly but instead is given by a prefix tree with Q vertices. Mohri et al. showed such an automaton would have at most 2Q − 2 states and may be constructed in time linear in its size. At the same time, the number of transitions in such an automaton may reach O(Q|Σ|); for example, for the set of words T = {σ1, aσ1, a^2σ1, …, a^nσ1, a^nσ2, …, a^nσk} over the alphabet Σ = {a, σ1, …, σk}, the total length of the words is equal to O(n^2 + nk), the number of vertices in the corresponding suffix trie is equal to O(n + k) and the corresponding suffix automaton is formed of O(n + k) states and O(nk) transitions. The algorithm suggested by Mohri mainly repeats the generic algorithm for building the automaton of several strings, but instead of growing words one by one, it traverses the trie in breadth-first search order and appends new characters as it meets them in the traversal, which guarantees amortized linear complexity.
Generalizations:
Sliding window
Some compression algorithms, such as LZ77 and RLE, may benefit from storing a suffix automaton or similar structure not for the whole string but only for its last k characters while the string is updated. This is because the data being compressed is usually large and using O(n) memory is undesirable. In 1985, Janet Blumer developed an algorithm to maintain a suffix automaton on a sliding window of size k in O(nk) worst-case time and O(n log k) on average, assuming characters are distributed independently and uniformly. She also showed the O(nk) complexity cannot be improved: if one considers words construed as a concatenation of several (ab)^m c (ab)^m d words, where k = 6m + 2, then the number of states for the window of size k would frequently change with jumps of order m, which renders even a theoretical improvement of O(nk) for regular suffix automata impossible. The same should be true for the suffix tree because its vertices correspond to states of the suffix automaton of the reversed string, but this problem may be resolved by not explicitly storing every vertex corresponding to a suffix of the whole string, thus only storing vertices with at least two out-going edges. A variation of McCreight's suffix tree construction algorithm for this task was suggested in 1989 by Edward Fiala and Daniel Greene; several years later a similar result was obtained with a variation of Ukkonen's algorithm by Jesper Larsson. The existence of such an algorithm for the compacted suffix automaton, which absorbs some properties of both suffix trees and suffix automata, was an open question for a long time, until it was discovered by Martin Senft and Tomasz Dvorak in 2008 that it is impossible if the alphabet's size is at least two. One way to overcome this obstacle is to allow the window width to vary a bit while staying O(k). This may be achieved by an approximate algorithm suggested by Inenaga et al. in 2004. The window for which the suffix automaton is built in this algorithm is not guaranteed to be of length k, but it is guaranteed to be at least k and at most 2k + 1, while providing linear overall complexity of the algorithm.
Applications:
Suffix automaton of the string S may be used to solve such problems as:
- counting the number of distinct substrings of S in O(|S|) on-line,
- finding the longest substring of S occurring at least twice in O(|S|),
- finding the longest common substring of S and T in O(|T|),
- counting the number of occurrences of T in S in O(|T|),
- finding all occurrences of T in S in O(|T| + k), where k is the number of occurrences.
It is assumed here that T is given on the input after the suffix automaton of S is constructed. Suffix automata are also used in data compression, music retrieval and matching on genome sequences. A small code sketch illustrating the first of these applications is given below. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
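The following Python sketch illustrates substring search and distinct-substring counting; it assumes the SuffixAutomaton class from the construction sketch earlier in this article, and the counting identity it uses (each non-initial state q contributes len(q) − len(link(q)) distinct substrings) follows from the state structure described above.

```python
# Assumes the SuffixAutomaton sketch from the construction section above is in scope.
def count_distinct_substrings(sa: "SuffixAutomaton") -> int:
    # Every non-initial state q recognizes exactly len(q) - len(link(q))
    # distinct substrings of S, so summing these intervals counts them all.
    return sum(sa.len[q] - sa.len[sa.link[q]] for q in range(1, len(sa.len)))

sa = SuffixAutomaton("abacaba")
print(sa.accepts("cab"))               # substring search: True
print(count_distinct_substrings(sa))   # 21 distinct non-empty substrings

# Brute-force cross-check on the same small string:
S = "abacaba"
print(len({S[i:j] for i in range(len(S)) for j in range(i + 1, len(S) + 1)}))  # 21
```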
**Metadesign**
Metadesign:
Metadesign (or meta-design) is an emerging conceptual framework aimed at defining and creating social, economic and technical infrastructures in which new forms of collaborative design can take place. It consists of a series of practical design-related tools for achieving this.
As a methodology, its aim is to nurture the emergence of the previously unthinkable as possibilities or prospects through the collaboration of designers within interdisciplinary 'metadesign' teams. Inspired by the way living systems work, this new field aims to help improve the way we feed, clothe, shelter, assemble, communicate and live together.
History:
Metadesign was initially put forward as an industrial design approach to complexity theory and information systems by Dutch designer Andries Van Onck in 1963, while at the Ulm School of Design (later at Politecnico di Milano and the Rome and Florence ISIA). Since then, several different design, creative and research approaches have used the name "Metadesign", ranging from Humberto Maturana and Francisco Varela's biological approach, to Gerhard Fischer's and Elisa Giaccardi's techno-social approach, and Paul Virilio's techno-political approach.
History:
Later on, a very active group was present at Politecnico di Milano, and several different universities and graduate programs began applying metadesign in design teaching around the world, generally based on Van Onck's approach as further developed at Politecnico di Milano. Nevertheless, there is a very active, but widely dispersed, group that bases its activities on Maturana and Varela's approach.
More recently, some efforts have been made to systematize Metadesign as a structured creative process, such as (1) Fischer's and Giaccardi's and (2) Caio Vassão's academic works, among several others, based on a much wider reference frame, ranging from post-structuralist philosophy, Neil Postman's media ecology, Christopher Alexander's pattern languages and deep ecology.
This variety of approaches is justified by the myriad interpretations that can be derived from the etymological structure of the term.
Re-designing design:
The Greek word 'meta' originally meant 'beyond' or 'after' and is now sometimes used to imply a comprehensive, insightful self-awareness. Employed as a prefix, it explicitly denotes self-referentiality. Metadesign, therefore, alludes to a design practice that (re)designs itself (see Maturana and Varela's term autopoiesis). The idea of Metadesign acknowledges that future uses and problems cannot be completely anticipated at design time. Aristotle's influential theory of design defined it by saying that the 'cause' of design was its final state. This teleological perspective is similar to the orthodox idea of an economic payback at the point of sale, rather than successive stages when the product could be seen to achieve high levels of perceived value, throughout the whole design cycle. Some supporters of metadesign hope that it will extend the traditional notion of system design beyond the original development of a system by allowing users to become co-designers.
The importance of languaging:
By harnessing creative teamwork within a suitable co-design framework, some metadesigners have sought to catalyse changes at a behavioural level. However, as Albert Einstein said, "We can't solve problems by using the same kind of thinking we used when we created them". This points to a need for appropriate innovation at all levels, including the metaphorical language that serves to sustain a given paradigm. In practical terms this adds considerable complexity to the task of managing actions and outcomes. What may be so neatly described as 'new knowledge', in practical terms, exists as an interpersonal and somatic web of tacit knowledge that needs to be interpreted and applied by many collaborators. This tends to reduce the semantic certainty of roles, actions and descriptors within a given team, making it necessary to rename particular shared experiences that seem inappropriately defined. In other instances it may be necessary to invent new words to describe perceived gaps in what can be discussed within a prevailing vernacular. Humberto Maturana's work on distributed language and the field of biosemiotics is germane to this task. Some researchers have used bisociation in order to create an auspicious synergy of benign synergies. In aspiring to this outcome, metadesign teams will cultivate auspicious 'diversities-of-diversities'. It suggests that metadesign would offer a manifold ethical space. In this respect, related approaches include what Arthur Koestler (1967) called holarchy, or what John Dewey and John Chris Jones have called 'creative democracy'.
Metadesign conceptual tools:
Regarding a wide range of applications and contexts, Vassão has argued that Metadesign can be understood as a set of four "conceptual tools", utilizing Gilles Deleuze's understanding of the term "tool": Levels of abstraction (the ability to understand the structure and limits of abstractions, language and instrumental thinking); Diagrams and topology (the use of diagrammatic thinking and design, sustained by topological understanding); Procedural design (the creation of realities through the use of procedures, such as in game and role playing, as well as in procedural design, art and architecture); Emergence (the absence of absolute control, and the ability to take advantage of unintended and unforeseen results).Vassão has argued that, in all different approaches to metadesign, the presence of these conceptual tools can be verified. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Embryo quality**
Embryo quality:
Embryo quality is the ability of an embryo to perform successfully in terms of conferring a high pregnancy rate and/or resulting in a healthy person. Embryo profiling is the estimation of embryo quality by qualification and/or quantification of various parameters. Estimations of embryo quality guides the choice in embryo selection in in vitro fertilization.
Embryo quality:
In general, embryo profiling for prediction of pregnancy rates focuses mainly on visual profiles and short-term biomarkers including expression of RNA and proteins, preferably in the surroundings of embryos to avoid any damage to them. On the other hand, embryo profiling for health prediction puts more focus on the genome, and where there is a risk of a genetic disorder it more often involves cell sampling from the embryo for preimplantation genetic diagnosis.
Prediction of pregnancy rates:
Microscopy
Embryo quality is mainly evaluated by microscopy at certain time points using a morphological scoring system. This has been shown to significantly improve pregnancy rates. Assessment of morphological features is a reliable non-invasive method that provides valuable information for predicting IVF/intracytoplasmic sperm injection (ICSI) outcome and is frequently used as a scoring system for embryo quality. The parameters for evaluation at day 2–3 are: Number of cells and division rhythm: The optimal number of cells is 4 at day 2 and 8 at day 3 (A quality). On day 3, 9–10 cells is graded B, more than 10 is C (suboptimal), and 4 or fewer is D (such embryos barely implant). A normal division rate is a doubling of the cell number every 24 hours. A higher rate implies chromosomal abnormalities and a lower rate entails possible embryo arrest (the embryo is dying).
Prediction of pregnancy rates:
Fragmentation: happens due to cell apoptosis and can be quantified as the percentage of the embryo's total volume occupied by fragments. Fragments are cytoplasm fractions without nuclei.
Prediction of pregnancy rates:
Cell symmetry and size: it is normal for all blastomeres to have the same or similar size in embryos with 2, 4 or 8 cells, while for other embryos a certain variation in cell size is normal. When the number of cells is odd, if all of them have the same size, the embryo is considered asymmetrical. Embryos with one unusually large blastomere are considered abnormal, and this is associated with a high rate of polyploidy.
Prediction of pregnancy rates:
Multinucleation: multinucleated blastomeres on day 2 and day 3 are associated with a lower implantation rate. These embryos are often mosaic or aneuploid. Multinucleation is more strongly related to abnormalities on day 2 than on day 3.
Prediction of pregnancy rates:
Cytoplasm appearance: the presence of vesicles on day 3 is considered a sign of embryo genome activation and, therefore, of good prognosis. The presence of vacuoles is a sign of bad prognosis. Time-lapse microscopy is an extension of microscopy wherein the morphology of embryos is studied over time. As of 2014, time-lapse microscopy for embryo quality assessment is emerging from the experimental stage into something with enough evidence for broader clinical use. Studies using the EmbryoScope (tm) time-lapse incubator have used several indicators of embryo quality, such as direct cleavage from 1 to 3 cells, as well as the initiation of compaction and the start of blastulation. Also, two-pronuclear zygotes (2PN) transitioning through 1PN or 3PN states tend to develop into poorer-quality embryos than those that constantly remain 2PN.
Prediction of pregnancy rates:
Molecular analysis Molecular analysis can be performed by taking one of the cells from an embryo. The analysis can vary in extent from a single target biomarker to entire genomes, transcriptomes, proteomes and metabolomes. The results may be used to score embryos by comparing the patterns with ones that have previously been found among embryos in successful versus unsuccessful pregnancies: Transcriptome profiling In transcriptome evaluation, gene expression profiling studies of human embryos are limited due to legal and ethical issues. Gene expression profiling of cumulus cells surrounding the oocyte and early embryo, or of granulosa cells, provides an alternative that does not involve sampling from the embryo itself. Profiling of cumulus cells can give valuable information regarding the efficiency of an ovarian hyperstimulation protocol, and may indirectly predict oocyte aneuploidy, embryo development and pregnancy outcomes, without having to perform any invasive procedure directly on the embryo. In addition, microRNA (miRNA) and cell-free DNA (cfDNA) can be sampled from the vicinity of embryos, functioning as transcriptome-level markers of embryo quality.
Prediction of pregnancy rates:
Proteome profiling Proteome profiling of embryos can be evaluated indirectly by sampling proteins found in the vicinity of embryos, thereby providing a non-invasive method of embryo profiling. Examples of protein markers evaluated in such profiling include CXCL13 and granulocyte-macrophage colony-stimulating factor, where lower protein amounts are associated with higher implantation rates. The presence of soluble HLA-G might be considered as another parameter if a choice has to be made between embryos of equal visible quality. Another level of opportunity can be achieved by tailoring the evaluation of the embryo profile to the maternal status with regard to, for example, health or immune status, potentially further detailed by similar profiling of the maternal genome, transcriptome, proteome and metabolome. Two examples of proteins that may be included in maternal profiling are endometrium-derived stathmin and annexin A2, whose down- and up-regulation, respectively, are associated with higher rates of successful implantation.
Prediction of pregnancy rates:
Genome profiling A systematic review and meta-analysis of existing randomized controlled trials found no evidence of a beneficial effect of preimplantation genetic profiling (PGP) as measured by live birth rate. On the contrary, for women of advanced maternal age, PGP significantly lowers the live birth rate. Technical drawbacks, such as the invasiveness of the biopsy, and chromosomal mosaicism are the major underlying factors for the inefficacy of PGP. A major drawback of genomic profiling for embryo quality is that the results generally rely on the assessment of a single cell; PGP has inherent limitations because the tested cell may not be representative of the embryo owing to mosaicism. When used for women of advanced maternal age and for patients with repetitive IVF failure, PGP is mainly carried out as a screening for the detection of chromosomal abnormalities such as aneuploidy, reciprocal and Robertsonian translocations, and in a few cases other abnormalities such as chromosomal inversions or deletions. The principle behind it is that, since numerical chromosomal abnormalities are known to explain most cases of pregnancy loss, and a large proportion of human embryos are aneuploid, the selective replacement of euploid embryos should increase the chances of a successful IVF treatment. Comprehensive chromosome analysis methods include array comparative genomic hybridization (aCGH), quantitative PCR and SNP arrays. Combined with single blastomere biopsy on day-3 embryos, aCGH is very robust, with no result obtained for only 2.9% of tested embryos, and is associated with low error rates (1.9%). There is no evidence that testing the embryo for an abnormal number of chromosomes increases the number of live births. In addition to screening for specific abnormalities, techniques are in development that allow up to full genome sequencing, from which genetic profiling can score the DNA patterns by comparing them with ones that have previously been found among embryos in successful or unsuccessful pregnancies.
Health prediction:
The main method currently used to predict the health of the person resulting from an embryo is preimplantation genetic diagnosis (also called preimplantation genetic screening, preimplantation genetic profiling or PGP), carried out to determine whether the resultant person will inherit a specific disease or not. On the other hand, a systematic review and meta-analysis of existing randomized controlled trials found no evidence of a beneficial effect of PGP as measured by live birth rate. On the contrary, for women of advanced maternal age, PGP significantly lowers the live birth rate. Technical drawbacks, such as the invasiveness of the biopsy, and chromosomal mosaicism are the major underlying factors for the inefficacy of PGP. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Aromatic amine**
Aromatic amine:
In organic chemistry, an aromatic amine is an organic compound consisting of an aromatic ring attached to an amine. It is a broad class of compounds that encompasses anilines, but also many more complex aromatic rings and many amine substituents beyond NH2. Such compounds occur widely.
Aromatic amines are widely used as precursors to pesticides, pharmaceuticals, and dyes.
Aromatic amines in textiles:
Since August 2012, the standard EN 14362-1:2012 Textiles - Methods for determination of certain aromatic amines derived from azo colorants - Part 1: Detection of the use of certain azo colorants accessible with and without extracting the fibres has been in effect. It was officially approved by the European Committee for Standardization (CEN) and supersedes the test standards EN 14362-1:2003 and EN 14362-2:2003.
Aromatic amines in textiles:
The standard describes a procedure to detect EU banned aromatic amines derived from azo colorants in textile fibres, including natural, man-made, regenerated, and blended fibres. The standard is also relevant for all coloured textiles, e.g. dyed, printed, and coated textiles. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Marilyn Walker**
Marilyn Walker:
Marilyn A. Walker is an American computer scientist. She is professor of computer science and head of the Natural Language and Dialogue Systems Lab at the University of California, Santa Cruz (UCSC). Her research includes work on computational models of dialogue interaction and conversational agents, analysis of affect, sarcasm and other social phenomena in social media dialogue, acquiring causal knowledge from text, conversational summarization, interactive story and narrative generation, and statistical methods for training the dialogue manager and the language generation engine for dialogue systems.
Biography:
Walker received an M.S. in computer science from Stanford University in 1987, and an M.A. in linguistics and a Ph.D. in computer and information science from the University of Pennsylvania in 1993. Walker was awarded a Royal Society Wolfson Research Fellowship at the University of Sheffield from 2003 to 2009. She was inducted as a Fellow of the Association for Computational Linguistics (ACL) in December 2016 for "fundamental contributions to statistical methods for dialog optimization, to centering theory, and to expressive generation for dialog". She served as the general chair of the 2018 conference of the North American Chapter of the Association for Computational Linguistics (NAACL-2018).
Biography:
Walker pioneered the use of statistical methods for dialog optimization at AT&T Bell Labs Research where she conducted some of the first experiments on reinforcement learning for optimizing dialogue systems. She also pioneered the use of statistical NLP methods for Natural Language Generation with the development of the first statistical sentence planner for dialogue systems. Her research on Centering Theory is taught in standard textbooks on NLP.
Biography:
She has published over 200 papers and is the holder of 13 U.S. patents. Her work on the evaluation of dialogue systems conducted at AT&T Bell Labs Research (PARADISE: A framework for evaluating spoken dialogue agents) is a classic, having been cited more than 800 times. At UCSC, her lab focuses on computational modeling of dialogue and user-generated content in social media such as weblogs, including spoken dialogue systems and interactive stories. She leads the Athena team, selected as one of the contenders of Alexa Prize Challenge 3, with seven lab members competing in the 2019/2020 Alexa Prize. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Shooting guard**
Shooting guard:
The shooting guard (SG), also known as the two, two guard or off guard, is one of the five traditional positions in a regulation basketball game. A shooting guard's main objective is to score points for their team and steal the ball on defense. Some teams ask their shooting guards to bring up the ball as well; these players are known colloquially as combo guards. A player who can switch between playing shooting guard and small forward is known as a swingman. In the NBA, shooting guards usually range from 6 ft 4 in (1.93 m) to 6 ft 6 in (1.98 m) while in the WNBA, shooting guards tend to be between 5 ft 10 in (1.78 m) and 6 ft 1 in (1.85 m).
Characteristics and styles of play:
The Basketball Handbook by Lee Rose describes a shooting guard as a player whose primary role is to score points. As the name suggests, most shooting guards are good long-range shooters, typically averaging 35–40 percent from three-point range. Many shooting guards are also strong and athletic, and have the ability to get inside the paint and drive to the basket.
Characteristics and styles of play:
Typically, shooting guards are taller than point guards. Height at the position varies; many bigger shooting guards also play small forward. Shooting guards should be good ball handlers and be able to pass reasonably well, though passing is not their main priority. Since good shooting guards may attract double-teams, they are frequently the team's back-up ball handlers to the point guard and typically get a fair number of assists. Shooting guards must be able to score in various ways, especially late in a close game when defenses are tighter. They need to have a good free throw percentage too, to be reliable in close games and to discourage opposing players from fouling. Because of the high level of offensive skills shooting guards need, they are often a team's primary scoring option, and sometimes the offense is built around them.
Characteristics and styles of play:
In the NBA, there are some shooting guards referred to as "3 and D" players. The term 3 and D implies that the player is a good 3-point shooter who can also play effective defense. The 3 and D player has become very important as the game becomes more perimeter oriented. Good shooting guards can often play point guard to a certain extent. It is usually accepted that point guards should have the ball in their hands at most times in the game, but sometimes the shooting guard has a significant enough influence on the team that they handle the ball extremely often, to the point where the point guard may be reduced to a backup ball handler or a spot-up shooter, a player who "spots up" for catch-and-shoot shots to provide spacing for the offense. Notable shooting guards include Michael Jordan, Kobe Bryant, Dwyane Wade, Manu Ginobili, James Harden, Klay Thompson, Clyde Drexler, Jerry West, George Gervin, Vince Carter, Donovan Mitchell, and Allen Iverson.
Skills and qualities:
It is important for a shooting guard to develop skills in defense, passing and strength in addition to shooting ability. This position displays the most movement offensively when trying to get an open shot, along with keeping things under control on the defensive end.
Because this position is shaped around the shooting ability of the athlete, developing a range of complementary abilities helps the player reach their full potential. These abilities include strong ball handling, a sharp mind, and a high basketball intelligence.
Shooting guards are often used as the secondary ball handler to help relieve pressure on the point guard. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ferrihydrite**
Ferrihydrite:
Ferrihydrite (Fh) is a widespread hydrous ferric oxyhydroxide mineral at the Earth's surface, and a likely constituent in extraterrestrial materials. It forms in several types of environments, from freshwater to marine systems, aquifers to hydrothermal hot springs and scales, soils, and areas affected by mining. It can be precipitated directly from oxygenated iron-rich aqueous solutions, or by bacteria either as a result of a metabolic activity or passive sorption of dissolved iron followed by nucleation reactions. Ferrihydrite also occurs in the core of the ferritin protein from many living organisms, for the purpose of intra-cellular iron storage.
Structure:
Ferrihydrite only exists as a fine-grained and highly defective nanomaterial. The powder X-ray diffraction pattern of Fh contains two scattering bands in its most disordered state, and a maximum of six strong lines in its most crystalline state. The principal difference between these two diffraction end-members, commonly named two-line and six-line ferrihydrites, is the size of the constitutive crystallites. The six-line form was classified as a mineral by the IMA in 1973 with the nominal chemical formula 5Fe2O3•9H2O. Other proposed formulas are Fe5HO8•4H2O and Fe2O3•2FeO(OH)•2.6H2O. However, its formula is fundamentally indeterminate as its water content is variable. The two-line form is also called hydrous ferric oxide (HFO).
Structure:
Due to the nanoparticulate nature of ferrihydrite, the structure has remained elusive for many years and is still a matter of controversy. Drits et al., using X-ray diffraction data, proposed in 1993 a multiphase structure model for six-line ferrihydrite with three components: (1) defect-free crystallites (f-phase) with double-hexagonal stacking of oxygen and hydroxyl layers (ABAC sequence) and disordered octahedral Fe occupancies, (2) defective crystallites (d-phase) with a short-range feroxyhite-like (δ-FeOOH) structure, and (3) subordinate ultradisperse hematite (α-Fe2O3). The diffraction model was confirmed in 2002 by neutron diffraction, and the three components were observed by high-resolution transmission electron microscopy. A single-phase model for both ferrihydrite and hydromaghemite was proposed by Michel et al. in 2007-2010, based on pair distribution function (PDF) analysis of X-ray total scattering data. The structural model, isostructural with the mineral akdalaite (Al10O14(OH)2), contains 20% tetrahedrally and 80% octahedrally coordinated iron. Manceau et al. showed in 2014 that the Drits et al. model reproduces the PDF data as well as the Michel et al. model does, and suggested in 2019 that the tetrahedral coordination arises from maghemite and magnetite impurities observed by electron microscopy.
Porosity and environmental absorbent potential:
Because of the small size of individual nanocrystals, Fh is nanoporous, yielding large surface areas of several hundred square meters per gram. In addition to having a high surface area to volume ratio, Fh also has a high density of local or point defects, such as dangling bonds and vacancies. These properties confer a high ability to adsorb many environmentally important chemical species, including arsenic, lead, phosphate, and organic molecules (e.g., humic and fulvic acids). Its strong and extensive interaction with trace metals and metalloids is used in industry, at large scale in water purification plants, as in northern Germany and in the production of the city water supply at Hiroshima, and at small scale to clean wastewaters and groundwaters, for example to remove arsenic from industrial effluents and drinking water. Its nanoporosity and high affinity for gold can be used to prepare Fh-supported nanosized Au particles for the catalytic oxidation of CO at temperatures below 0 °C. Dispersed six-line ferrihydrite nanoparticles can be entrapped in a vesicular state to increase their stability.
Metastability:
Ferrihydrite is a metastable mineral. It is known to be a precursor of more crystalline minerals like hematite and goethite through aggregation-based crystal growth. However, its transformation in natural systems is generally blocked by chemical impurities adsorbed at its surface, for example silica, as most natural ferrihydrites are siliceous. Under reducing conditions such as those found in gley soils, or in deep environments depleted in oxygen, and often with the assistance of microbial activity, ferrihydrite can be transformed into green rust, a layered double hydroxide (LDH) also known as the mineral fougerite. However, a short exposure of green rust to atmospheric oxygen is sufficient to oxidize it back to ferrihydrite, making it a very elusive compound. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sunflower trypsin inhibitor**
Sunflower trypsin inhibitor:
Sunflower trypsin inhibitor (SFTI) is a small, circular peptide produced in sunflower seeds, and is a potent inhibitor of trypsin. It is the smallest known member of the Bowman-Birk family of serine protease inhibitors. One example of a sunflower trypsin inhibitor is sunflower trypsin inhibitor-1 (SFTI-1), a potent Bowman-Birk inhibitor. SFTI-1 is the simplest cysteine-rich peptide scaffold because it is a bicyclic 14-amino-acid peptide with only one disulfide bond. The disulfide bond divides the peptide into two loops: one loop is the functional trypsin-inhibitory loop and the second loop is nonfunctional. The nonfunctional loop can be replaced by a bioactive loop. SFTI-1 is extracted from the seeds of the sunflower Helianthus annuus. The biosynthesis of SFTI is not fully known; however, it can be evolutionarily linked to a gene-coded product from classic Bowman-Birk inhibitors. SFTI is used as a scaffold for radiopharmaceutical, antimicrobial, and pro-angiogenic peptides.
Synthetic inhibitors and the structure of SFTI:
By modifying the amino acid sequence of sunflower trypsin inhibitor, more specifically sunflower trypsin inhibitor-1 (SFTI-1), researchers have been able to develop synthetic serine protease inhibitors that have specificity and improved inhibitory activity towards certain serine proteases found in the human body, such as tissue kallikreins and human matriptase-1. For instance, researchers from the Institute of Child Health and the Department of Chemistry of the University College London created two SFTI-1 analogs (I10G and I10H) by substituting residue 10 of SFTI-1 (isoleucine, I) with glycine (G) and histidine (H), respectively. Of the two analogs, SFTI-I10H was found to be the more potent KLK5 inhibitor. Another group of researchers from the previously mentioned institute and department of the University College London conducted further research on the development of synthetic kallikrein inhibitors by modifying the amino acid sequence of SFTI-I10H. Of the six variants constructed by modifying SFTI-I10H, the first and second variants (K5R_I10H and I10H_F12W) demonstrated improved KLK5 inhibition, and the sixth variant (K5R_I10H_F12W) showed dual inhibition of KLK5 and KLK7, improved KLK5 inhibition potency, and specificity for KLK5 and KLK14. The first variant (K5R_I10H) was made by replacing residue 5 of SFTI-I10H (lysine, K) with arginine (R), and to obtain the second variant (I10H_F12W) residue 12 (phenylalanine, F) was replaced with tryptophan (W). Lastly, the sixth variant (K5R_I10H_F12W) was developed by combining the amino acid substitutions of the first and second variants. Moreover, researchers from the Clemens-Schöpf Institute of Organic Chemistry and Biochemistry and the Helmholtz-Institute for Pharmaceutical Research Saarland developed potent synthetic human matriptase-1 inhibitors based on a different SFTI-1 variant, SDMI-1. SFTI-1 derived matriptase inhibitor-1 (SDMI-1) was previously developed by replacing residue 10 of SFTI-1 (isoleucine, I) with arginine (R) and residue 12 (phenylalanine, F) with histidine (H). Further modifications of SDMI-1 resulted in synthetic matriptase-1 inhibitors with improved inhibitory activity, matriptase binding, and inhibition potency. The SDMI-1 variant that resulted in enhanced inhibitory activity was developed by replacing residue 1 of SDMI-1 (glycine, G) with lysine (K) and by keeping it as a monocyclic structure. The SDMI-1 variant that resulted in improved matriptase binding was created by using the same amino acid substitutions as the previously mentioned SDMI-1 variant and by attaching a bulky fluorescein moiety to the side chain of lysine. Lastly, the SDMI-1 variant that had enhanced inhibition potency was developed by applying the same amino acid substitutions as the previous variants, cleaving the proline-aspartic acid sequence found at the C-terminus (PD-OH), and making it a bicyclic compound via tail-to-side-chain cyclization. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Microbiologist**
Microbiologist:
A microbiologist (from Greek μῑκρος, mīkros, "small") is a scientist who studies microscopic life forms and processes. This includes study of the growth, interactions and characteristics of microscopic organisms such as bacteria, algae, fungi, and some types of parasites and their vectors. Most microbiologists work in offices and/or research facilities, both in private biotechnology companies and in academia. Most microbiologists specialize in a given topic within microbiology such as bacteriology, parasitology, virology, or immunology.
Duties:
Microbiologists generally work in some way to increase scientific knowledge or to utilise that knowledge in a way that improves outcomes in medicine or some industry. For many microbiologists, this work includes planning and conducting experimental research projects in some kind of laboratory setting. Others may have a more administrative role, supervising scientists and evaluating their results. Microbiologists working in the medical field, such as clinical microbiologists, may see patients or patient samples and do various tests to detect disease-causing organisms. For microbiologists working in academia, duties include performing research in an academic laboratory, writing grant proposals to fund research, as well as some amount of teaching and designing courses. Microbiologists in industry roles may have similar duties except research is performed in industrial labs in order to develop or improve commercial products and processes. Industry jobs may also include some degree of sales and marketing work, as well as regulatory compliance duties. Microbiologists working in government may have a variety of duties, including laboratory research, writing and advising, developing and reviewing regulatory processes, and overseeing grants offered to outside institutions. Some microbiologists work in the field of patent law, either with national patent offices or private law practices; their duties include research and navigation of intellectual property regulations. Clinical microbiologists tend to work in government or hospital laboratories, where their duties include analyzing clinical specimens to detect microorganisms responsible for disease. Some microbiologists instead work in the field of science outreach, where they develop programs and materials to educate students and non-scientists and encourage interest in the field of microbiology among younger generations.
Education:
Entry-level microbiology jobs generally require at least a bachelor's degree in microbiology or a related field. These degree programs frequently include courses in chemistry, physics, statistics, biochemistry, and genetics, followed by more specialized courses in sub-fields of interest. Many of these courses have laboratory components to teach trainees basic and specialized laboratory skills. Higher-level and independent jobs, such as a clinical/medical microbiologist in a hospital or medical research centre, generally require a master's degree in microbiology along with a PhD in one of the life sciences (biochemistry, microbiology, biotechnology, genetics, etc.) as well as several years' experience as a microbiologist. This often includes time spent as a postdoctoral researcher, wherein one leads research projects and prepares to transition to an independent career. Postdoctoral researchers are often evaluated largely based on their record of published academic papers, as well as recommendations from their supervisors and colleagues. In certain sub-fields of microbiology, licenses or certifications are available or required in order to qualify for certain positions. This is true for clinical microbiologists, as well as those involved in food safety and some aspects of pharmaceutical/medical device development.
Job outlook:
Microbiologists will continue to be needed to advance basic science knowledge and to contribute to the development of pharmaceuticals and biotechnology products. However, job prospects vary widely by job and location. In the United States, the Bureau of Labor Statistics predicts that employment of microbiologists will grow 4 percent from 2014 (22,400 employed) to 2024 (23,200 employed). This represents slower growth than the average occupation, as well as slower growth than life scientists as a whole (6 percent projected). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pre-Lie algebra**
Pre-Lie algebra:
In mathematics, a pre-Lie algebra is an algebraic structure on a vector space that describes some properties of objects such as rooted trees and vector fields on affine space.
The notion of pre-Lie algebra has been introduced by Murray Gerstenhaber in his work on deformations of algebras.
Pre-Lie algebras have been considered under some other names, among which one can cite left-symmetric algebras, right-symmetric algebras or Vinberg algebras.
Definition:
A pre-Lie algebra $(V,\triangleleft)$ is a vector space $V$ with a bilinear map $\triangleleft : V \otimes V \to V$, satisfying the relation $(x \triangleleft y) \triangleleft z - x \triangleleft (y \triangleleft z) = (x \triangleleft z) \triangleleft y - x \triangleleft (z \triangleleft y)$.
Definition:
This identity can be seen as the invariance of the associator $(x,y,z) = (x \triangleleft y) \triangleleft z - x \triangleleft (y \triangleleft z)$ under the exchange of the two variables $y$ and $z$. Every associative algebra is hence also a pre-Lie algebra, as the associator vanishes identically. Although weaker than associativity, the defining relation of a pre-Lie algebra still implies that the commutator $x \triangleleft y - y \triangleleft x$ is a Lie bracket. In particular, the Jacobi identity for the commutator follows from cycling the $x, y, z$ terms in the defining relation for pre-Lie algebras, above.
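A brief sketch of that computation (not in the original text), writing $A(x,y,z) = (x \triangleleft y)\triangleleft z - x \triangleleft (y \triangleleft z)$ for the associator, so that the pre-Lie relation reads $A(x,y,z) = A(x,z,y)$:

```latex
% Expanding the cyclic sum of double commutators and regrouping the twelve
% terms into associators shows the Jacobi identity:
\begin{aligned}
{}[[x,y],z] + [[y,z],x] + [[z,x],y]
  &= \sum_{\mathrm{cyc}} \Bigl( (x \triangleleft y)\triangleleft z
      - (y \triangleleft x)\triangleleft z
      - z \triangleleft (x \triangleleft y)
      + z \triangleleft (y \triangleleft x) \Bigr) \\
  &= \bigl(A(x,y,z) - A(x,z,y)\bigr)
   + \bigl(A(y,z,x) - A(y,x,z)\bigr)
   + \bigl(A(z,x,y) - A(z,y,x)\bigr) = 0 .
\end{aligned}
```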
Examples:
Vector fields on an affine space Let $U \subset \mathbb{R}^n$ be an open subset of $\mathbb{R}^n$, parameterised by variables $x_1, \cdots, x_n$. Given vector fields $u = u^i \partial_{x_i}$, $v = v^j \partial_{x_j}$, we define $u \triangleleft v = v^j \frac{\partial u^i}{\partial x_j} \partial_{x_i}$. The difference between $(u \triangleleft v) \triangleleft w$ and $u \triangleleft (v \triangleleft w)$ is $(u \triangleleft v) \triangleleft w - u \triangleleft (v \triangleleft w) = v^j w^k \frac{\partial^2 u^i}{\partial x_j \partial x_k} \partial_{x_i}$, which is symmetric in $v$ and $w$. Thus $\triangleleft$ defines a pre-Lie algebra structure.
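A minimal numerical check of this structure (a sketch assuming SymPy is available; the function names are illustrative), verifying on $\mathbb{R}^2$ that the associator of the product above is symmetric in its last two arguments:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
coords = (x1, x2)

def tri(u, v):
    """u ◃ v for vector fields given by their coefficient tuples (u^1, u^2)."""
    return tuple(sum(v[j] * sp.diff(u[i], coords[j]) for j in range(2))
                 for i in range(2))

def assoc(u, v, w):
    """Associator A(u, v, w) = (u ◃ v) ◃ w - u ◃ (v ◃ w), componentwise."""
    a, b = tri(tri(u, v), w), tri(u, tri(v, w))
    return tuple(sp.expand(a[i] - b[i]) for i in range(2))

# Three arbitrary vector fields on R^2, written as coefficient tuples.
u = (x1**2 * x2, x2**3)
v = (sp.sin(x1), x1 * x2)
w = (x2**2, x1 + x2)

# Right-symmetry of the associator: A(u,v,w) - A(u,w,v) should vanish.
print([sp.simplify(assoc(u, v, w)[i] - assoc(u, w, v)[i]) for i in range(2)])
# expected output: [0, 0]
```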
Examples:
Given a manifold $M$ and homeomorphisms $\phi, \phi'$ from $U, U' \subset \mathbb{R}^n$ to overlapping open neighborhoods of $M$, they each define a pre-Lie algebra structure $\triangleleft, \triangleleft'$ on vector fields defined on the overlap. Whilst $\triangleleft$ need not agree with $\triangleleft'$, their commutators do agree: $u \triangleleft v - v \triangleleft u = u \triangleleft' v - v \triangleleft' u = [v,u]$, the Lie bracket of $v$ and $u$. Rooted trees Let $T$ be the free vector space spanned by all rooted trees.
Examples:
One can introduce a bilinear product $\curvearrowleft$ on $T$ as follows. Let $\tau_1$ and $\tau_2$ be two rooted trees.
Examples:
$\tau_1 \curvearrowleft \tau_2 = \sum_{s \in \mathrm{Vertices}(\tau_1)} \tau_1 \circ_s \tau_2$, where $\tau_1 \circ_s \tau_2$ is the rooted tree obtained by adding, to the disjoint union of $\tau_1$ and $\tau_2$, an edge going from the vertex $s$ of $\tau_1$ to the root vertex of $\tau_2$. Then $(T, \curvearrowleft)$ is a free pre-Lie algebra on one generator. More generally, the free pre-Lie algebra on any set of generators is constructed the same way from trees with each vertex labelled by one of the generators. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
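A small sketch of this grafting product (the tuple encoding and names below are illustrative, not from the source): a rooted tree is represented by the multiset of its root's subtrees, and a formal sum of trees by a dict of multiplicities.

```python
from collections import Counter

def canon(tree):
    """Canonical form of a rooted tree: recursively sort every vertex's children."""
    return tuple(sorted(canon(c) for c in tree))

def graft_everywhere(t1, t2):
    """Yield t1 with t2 grafted onto each vertex of t1, one result per vertex."""
    # Graft at the root of t1 ...
    yield canon(t1 + (t2,))
    # ... or at some vertex inside the i-th subtree of t1.
    for i, child in enumerate(t1):
        for grafted_child in graft_everywhere(child, t2):
            yield canon(t1[:i] + (grafted_child,) + t1[i + 1:])

def pre_lie(t1, t2):
    """τ1 ↶ τ2 as a formal sum of rooted trees (dict: tree -> multiplicity)."""
    return dict(Counter(graft_everywhere(t1, t2)))

dot = ()            # the one-vertex tree
cherry = ((), ())   # a root with two leaf children
print(pre_lie(cherry, dot))
# Grafting onto the root gives the three-leaf corolla once; grafting onto either
# leaf gives the same tree, hence coefficient 2: {((), (), ()): 1, ((), ((),)): 2}
```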
**Neointima**
Neointima:
Neointima typically refers to scar tissue that forms within tubular anatomical structures such as blood vessels, as the intima is the innermost lining of these structures. Neointima can form as a result of vascular surgery such as angioplasty or stent placement. It is actually due to proliferation of smooth muscle cells in the media, giving rise to the appearance of a fused intima and media. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Causalism**
Causalism:
Causalism holds behavior and actions to be the result of previous mental states, such as beliefs, desires, or intentions, rather than of a present conscious will guiding one's actions. One of the foremost proponents of this view was the philosopher Donald Davidson, who believed "that reasons explain actions just inasmuch as they are the causes of those actions". His views were mainly set out in his famous paper ‘Actions, Reasons and Causes' (1963). Causalism is in accord with how most people have traditionally explained their actions, but critics point out that certain habitual actions, such as scratching an itch, are only noticed during or after the fact, if at all, making the causalist explanation that such behaviors have an unrecalled mental antecedent seem ad hoc. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Invention (musical composition)**
Invention (musical composition):
In music, an invention is a short composition (usually for a keyboard instrument) in two-part counterpoint. (Compositions in the same style as an invention but using three-part counterpoint are known as sinfonias. Some modern publishers call them "three-part inventions" to avoid confusion with symphonies.) Well-known examples are the fifteen inventions that make up the first half of Johann Sebastian Bach's Inventions and Sinfonias. Inventions are usually not performed in public, but serve as exercises for keyboard students, and as pedagogical exercises for composition students.
Form:
Inventions are similar in style to a fugue, though they are much simpler. They consist of a short exposition, a longer development, and, sometimes, a short recapitulation. The key difference is that inventions do not generally contain an answer to the subject in the dominant key, whereas the fugue does. Two-part and three-part inventions are in contrapuntal style.
Exposition In the exposition, a short motif is introduced by one voice in the tonic key. This is also known as the theme. The subject is then repeated in the second voice in the tonic key while the initial voice either plays a countersubject or plays in free counterpoint.
Form:
Development The development comprises the bulk of the piece. Here the composer develops the subject by writing variations either melodically or harmonically. This usually involves the alternation of episodes with statements of the theme, similar to the development of a fugue. In minor- and major-mode inventions, the theme is typically restated in the relative major and the dominant, respectively. New key areas are reached through episodes, which usually move sequentially through the circle of fifths. The final episode ends on a half cadence in the original key, and is often emphasized to make the return of the subject more striking. Many of Bach's Inventions follow this plan, including BWV 775 and BWV 782.
Form:
Recapitulation If an invention does have any recapitulation at all, it tends to be extremely short—sometimes only two or four measures. The composer repeats the theme in the upper voice and the piece ends. The repetition of the theme contains very little variation (or no variation at all) on the original theme. The lower line usually plays the countersubject, and if there is no countersubject, plays in free counterpoint.
History:
The invention is primarily a work of Johann Sebastian Bach. Inventions originated from contrapuntal improvisations in Italy, especially from the form of the composer Francesco Antonio Bonporti. Bach adapted and modified the form to what is considered to be a formal invention. Bach wrote 15 inventions (BWV 772–786) as exercises for his son, Wilhelm Friedemann Bach. Bach later wrote a set of 15 three-part inventions, called sinfonias (BWV 787–801). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Transport in Shenzhen**
Transport in Shenzhen:
Shenzhen has an extensive transport network, including various forms of land, water and air transport.
Rail transport:
National railway Beginning with an intermediate station on the Kowloon–Canton Railway (now the Guangzhou-Shenzhen Railway) in 1910, Shenzhen has been served by China's national railway network, China Railway, with train services running between Shenzhen and cities across China. The stations currently handle high-speed trains to Guangzhou, Wuhan, Beijing, Hangzhou, Nanchang and intermediate stations on the Beijing-Guangzhou-Shenzhen-Hong Kong HSR, the Xiamen-Shenzhen Railway, and the Ganzhou-Shenzhen section of the Beijing-Hong Kong HSR.
Rail transport:
There are 8 railway stations for passenger service in Shenzhen, including: Shenzhen railway station: Located in Luohu District and connected to the Luohu Port crossing to Hong Kong, it is the most important train station in the city. The Guangzhou-Shenzhen Railway, which uses near-high-speed CRH trains for frequent passenger service, begins at this station. There are also a few long-distance trains departing from this station. Passengers can transfer to Shenzhen Metro Line 1 here.
Rail transport:
Shenzhen North railway station: Located in Longhua District, it is the main terminal for high-speed rail service in Shenzhen. The Guangzhou–Shenzhen–Hong Kong Express Rail Link and the Xiamen-Shenzhen Railway both serve this station, offering frequent high-speed train service to other parts of China.
Passengers can transfer to Shenzhen Metro Line 4, Line 5 or Line 6 here.
Shenzhen East railway station: Formerly Buji railway station, located in Buji Subdistrict of Longgang District on the Guangzhou-Shenzhen Railway, it is one of the major terminals for long-distance trains departing from Shenzhen.
Passengers can transfer to Shenzhen Metro Line 3, Line 5 or Line 14 here.
Shenzhen West railway station: Located in Nantou, Nanshan District, it is one of the auxiliary train stations, with a few departures for long-haul trains.
Futian railway station: Located directly in the city centre, in Futian District, it is an en route station of the Guangzhou–Shenzhen–Hong Kong Express Rail Link.
Passengers can transfer to Shenzhen Metro Line 2, Line 3 or Line 11 here.
Shenzhen Pingshan railway station: An en route station of the Xiamen-Shenzhen Railway, serving Pingshan District. Passengers can transfer to Shenzhen Metro Line 16 here.
Guangmingcheng railway station: An en route station of the Guangzhou–Shenzhen–Hong Kong Express Rail Link, serving Guangming District.
Pinghu railway station: An en route station of the Guangzhou-Shenzhen Railway in Pinghu Subdistrict, Longgang District, served by CRH trains between Shenzhen and Guangzhou. Passengers can transfer to Shenzhen Metro Line 10 here.
Under the co-location arrangement, Shenzhen also administers the Mainland Port Area, including the underground platform area, of Hong Kong West Kowloon railway station.
Rail transport:
Regional railway There are also short-haul regional railway stations in Shenzhen, currently on the Guangzhou-Shenzhen Intercity Railway (穗深城际铁路) only. Note that this is different from the Guangzhou-Shenzhen Railway (广深铁路) mentioned above: the former covers Bao'an District and the latter covers Longgang District and Luohu District of Shenzhen. To tell them apart, their Chinese names use different abbreviations for Guangzhou, "穗" and "广".
Rail transport:
Shenzhen Airport railway station, Shenzhen Airport North railway station, Fuhai West railway station and Shajing West railway station. Additional railways, the Shenzhen–Shanwei high-speed railway, the Longgang-Dapeng intercity railway, the Shenzhen–Dayawan intercity railway, the Shenzhen–Huizhou intercity railway and the extension of the Guangzhou–Shenzhen intercity railway, are currently under construction, with metro access to some en route stations already open, such as Universiade railway station and Wuhe railway station.
Freight railway The freight railway stations in Shenzhen, mostly on the Guangzhou-Shenzhen Railway, the Pinghu-Nanshan Railway and the Pinghu-Yantian Railway, are listed below.
Rail transport:
Yantian railway station, Xili railway station, Sungang railway station, Lilang railway station, Mugu railway station, Pinghu South railway station, Henggang railway station and Bantian railway station. Metro Shenzhen Metro first opened on 28 December 2004 and underwent its latest expansion in 2022. There are now 16 lines covering 547 km (340 mi) in the metro system, named Line 1 to Line 12, Line 14, Line 16, Line 20 and Line 6 Branch (de facto the Shenzhen section of Line 1 of Dongguan Rail Transit), with 304 stations in total and 57 interchange stations. Line 13 and some extensions of current metro lines are under construction. A single-journey normal ticket in the metro costs 2 RMB to 15 RMB, and a single-journey business ticket on Line 11 costs three times the normal fare. A 5% discount is given when using the Shenzhen Tong IC card instead of a single-journey normal ticket. The metro system is operated by two companies, Shenzhen Metro Corporation and MTR Corporation (Shenzhen). MTR Shenzhen currently operates Line 4 of the Shenzhen Metro.
Rail transport:
Tram Shenzhen Tram refers to the light rail systems in Shenzhen, operating separately in Longhua District and Pingshan District.
Rail transport:
Shenzhen Tram in Longhua District consists of 11.7 km (7.3 mi) of track, 2 lines and 21 stations. It opened on 28 October 2017 and connects central Guanlan, on the north side of Longhua, to Qinghu Station of the city's rail network. It was expected to help local residents commute and relieve traffic congestion, especially as the north extension of Shenzhen Metro Line 4 was still being built at the time. Each single ticket costs 2 RMB.
Rail transport:
The 2 lines in Longhua are: Line 1: Qinghu–Xiawei. 8.6 km (5.3 mi).
Line 2: Qinghu–Xinlan. 6.8 km (4.2 mi). Shenzhen Tram in Pingshan District opened on 28 December 2022. An experimental 8.7 km Line 1 with 11 stations was constructed with technology from the local manufacturer BYD. It connects central Pingshan and Pingshan railway station, as well as the city's rail network. Each single ticket costs 2 or 3 RMB.
The line in Pingshan is: Line 1: Pingshan Railway Station–BYD North. 8.7 km (5.4 mi).
Road transport:
Road transport in Shenzhen consists of various forms of transport, as follows: buses; intercity buses and coaches; bus and coach services of customised routes; taxicabs; vehicle-for-hire services; public bicycles; the highway system; urban roads; the greenway system; interchanges; and pedestrian facilities. Buses Bus services in Shenzhen began in 1975, and have now expanded to a network consisting of about 1000 regular routes. Three franchised companies, Shenzhen Bus Group, Shenzhen Eastern Bus and Shenzhen Western Bus, operate most of the routes, with the remainder operated by a few private companies.
Road transport:
Bus services in Shenzhen are subsidized by the government, and the operators have to set bus fares according to a guideline. Bus fares usually range from 2 RMB to 10 RMB, except for branches, where the fare can be 1 RMB or 2 RMB, and premium services, which may be charged as much as 40 RMB. On short-haul routes and expresses the fare is paid when boarding the bus, with no change given. However, for most long-haul routes, the fare is collected manually according to the travel distance of the passenger. The Shenzhen Tong IC card or its mobile payment is accepted on most of the bus routes with a discount of at least 20%, except on a few privately operated premium routes.
Road transport:
Bus routes in Shenzhen are categorised into three categories, beginning from Dec. 2008: ExpressesThese are long-haul routes connecting the city and the suburbs/exurbs, travelling on motorways. The buses used for these routes, which are normally actually coaches for long-distance travel, are green. Normally, no standing passengers are allowed on these routes. These routes are charged with a flat fare with a maximum of 10 RMB, according to the distance of the route. Renumbered routes in this category start with E, for example, E11 and E33.
Road transport:
Main-lines: These are medium to long routes travelling on trunk roads, for example national highway G107, using full-sized cyan transit buses. These routes are charged according to the travel distance of the passengers, from 2 RMB to 10 RMB; if the full fare is greater than 3 RMB, sectional fares and manual fare collection are used, with passengers on short-haul trips paying only 2 RMB, 2.5 RMB or 3 RMB. Renumbered routes in this category start with M, for example, M206 and M408, and most of the routes in the old numbering scheme fall into this category, e.g. 1 and 337.
Road transport:
Branches: These are short-haul routes travelling in neighbourhoods, narrow streets and alleys, using orange minibuses/midibuses. With one exception, these routes are charged a flat fare of 1 RMB or 2 RMB. Renumbered routes in this category start with B, for example, B611 and B753. Route 915 in the old numbering scheme also falls into this category.
Road transport:
In addition, there are some other bus routes, not belonging to the above categories, with Chinese characters forming part of the route number, which include: 高峰专线 (A): rush-hour routes; 高快巴士 (B): rush-hour expresses (Places); 假日专线 (C): holiday routes; 旅游 (D): tourist routes; 深惠 (E): intercity bus routes connecting Shenzhen with Huizhou. These are all standard bus routes using transit buses, not long-distance coaches. The letters A, B, C, D and E indicate where the route numbers are written.
Road transport:
Old numbering scheme: Before Dec. 2008, bus routes in Shenzhen were numbered in blocks of one hundred according to the districts in which the route operated (a short illustrative lookup follows the list below). Note that changes after Dec. 2008 to routes with old numbering may break the rules below.
1-299: Full-sized bus routes operating in the central districts including Futian District, Luohu District, Nanshan District and Yantian District, which become main-lines in the current categorisation.
300-399: Full-sized bus routes crossing the former border of the Special Economic Zone(SEZ), which become main-lines in the current categorisation.
400-499: Minibus routes in the 4 central districts, abolished in 2004.
500-599: Minibus routes crossing the former border of the SEZ, abolished in 2020.
600-699: Full-sized bus routes serving Bao'an District, Longhua District and Guangming District, which become main-lines in the current categorisation.
700-799: Routes formerly served by minibuses in Bao'an District, Longhua District and Guangming District, gradually replaced by full-sized buses after 2004, which become main-lines in the current categorisation.
800-899: Full-sized bus routes serving Longgang District, Pingshan District and Dapeng New District, which become main-lines in the current categorisation.
900-999: Routes formerly served by minibuses in Longgang District, Pingshan District and Dapeng New District, gradually replaced by full-sized buses after 2004, which become main-lines (except 915, which becomes a branch) in the current categorisation.
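A rough illustration of the hundred-block ranges above (the table and function are illustrative, not an official classification):

```python
# Illustrative sketch of the pre-2008 hundred-block numbering described above.
OLD_BLOCKS = [
    (1,   299, "full-sized routes in the central districts (Futian, Luohu, Nanshan, Yantian)"),
    (300, 399, "full-sized routes crossing the former SEZ border"),
    (400, 499, "minibus routes in the 4 central districts (abolished in 2004)"),
    (500, 599, "minibus routes crossing the former SEZ border (abolished in 2020)"),
    (600, 699, "full-sized routes in Bao'an, Longhua and Guangming"),
    (700, 799, "former minibus routes in Bao'an, Longhua and Guangming"),
    (800, 899, "full-sized routes in Longgang, Pingshan and Dapeng"),
    (900, 999, "former minibus routes in Longgang, Pingshan and Dapeng"),
]

def old_route_block(number: int) -> str:
    """Return the hundred-block description for a pre-2008 route number."""
    for lo, hi, desc in OLD_BLOCKS:
        if lo <= number <= hi:
            return desc
    return "not part of the old numbering scheme"

print(old_route_block(113))   # a central-district full-sized route
print(old_route_block(915))   # a former minibus route in Longgang/Pingshan/Dapeng
```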
Road transport:
N-prefixed: Night routes, which are usually parallel to regular daytime routes. Sometimes a letter N appearing before a route number starting with E, M or B also means a night route parallel to the corresponding route without the N; for example, NA1 is the nightly service of A1. This usage has been in operation since Oct. 2018. A letter A or B may be added after the route number, indicating small variations of the route, and a letter K appearing before the route number means the route is an express parallel to the corresponding route without the K; for example, K113 (now M133) is the express of 113. These usages have been abolished since July 2018. New routes starting from Dec. 2008 no longer use this numbering scheme, and old routes that are extensively modified are renumbered to the new scheme, assigned a number starting with E, M or B instead.
Road transport:
As of Dec. 2017, the entire fleet of over 16,300 buses has been replaced with electric buses, the largest fleet of electric buses of any city in the world. The city began rolling out electric buses made by BYD in 2009, and has heavily invested in acquiring electric buses and taxis since.
Road transport:
Intercity buses and coaches Long-distance coaches: there are many long-haul coach stations in Shenzhen, with coach services to other parts of Guangdong, to Hong Kong and Macau, and to various other parts of China. Shenzhen Coach Station, also called Yinhu Coach Station, is located in Yinhu Subdistrict, Luohu District. There are also coach stations at Shenzhen Bao'an International Airport and at several railway stations such as Shenzhen railway station and Shenzhen North railway station.
Road transport:
Unregulated coaches: there are also some coaches running between Shenzhen and other cities in Guangdong, for example Guangzhou and Dongguan, with a "route number" starting with 长 (meaning long), for example 长16路. These numbered coaches are mostly unregulated or even illegal and are not recommended for passengers.
Road transport:
Transit buses: apart from coaches, transit buses can also be used for intercity travel between Shenzhen and its neighbouring cities, Dongguan, Huizhou and Hong Kong. The "intercity" bus routes like 深惠X线 are officially regulated bus routes between these cities, and there are also a few de facto intercity bus routes with regular numbering, like M184, M325 and M589, along with 208 from Huizhou, 285 and 786 from Dongguan, and B1 from Hong Kong, which travel across the city border. The following is a list of coach stations that currently exist and are not affiliated with railway stations, airports or ports (as those can be considered part of the corresponding facilities); some of them may be under renovation or expansion and thus not operating. Most of them are located outside the central districts, each often serving at the subdistrict level (reflected in their names). Their number is declining due to competition from other forms of transport. Bao'an Coach Station, Buji Coach Station, Futian Transport Hub, Gongming Coach Station, Guanlan Coach Station, Henggang Coach Station, Kengzi Coach Station, Longgang Coach Station, Longgang East Coach Station, Longgang Long-haul Coach Station, Longhua Coach Station, Pingdi Coach Station, Pingshan Coach Station, Xinqiao Central Coach Station, Shenzhen Coach Station, Songgang Coach Station. Bus and coach services of customised routes Thanks to the rapid development of information technology and the sharing economy, bus and coach services of customised routes have spread throughout China, including Shenzhen. They disregard city boundaries, providing services both within and between cities.
Road transport:
Services in the city: Besides the bus routes designated by the Transport Commission of Shenzhen and its organizations, there are also bus services of customised routes ("定制公交" in Chinese). That is, passengers book tickets in the apps "E巴士" or "优点巴士" and choose their routes in advance, and then take these buses. With only a few stops, like expresses, these routes provide faster commutes for work, study or travel than regular buses. Passengers can also submit their origins and destinations to the apps to propose routes of their own; when a certain number of people share the same locations, the route between them is put into operation.
Road transport:
Numbers of these routes operated by the franchised companies often start with P, PJ or PT (Shenzhen Eastern Bus) or F, H or T (Shenzhen Bus Group). Their information is not shown at regular bus stops, so passengers can only obtain it through the apps. They are popular among workers and visitors in Shenzhen as a more comfortable alternative. The first customised route operated by a franchised company began operation in Jan. 2016, run by Shenzhen Eastern Bus.
Road transport:
Vehicle for hire services Most vehicles for hire accept mobile payments such as Alipay and WeChat Pay.
All colors of taxicabs are able to operate throughout Shenzhen, as follows: red taxis and green taxis were fuel-powered taxis unified by the government in May 2017 and then replaced by blue ones in Dec 2018.
Road transport:
Blue taxis are electric vehicles, and the fuel surcharge does not apply to them. The typical taxi fare consists of two parts: 10 RMB for the first 2 km (about 1.24 miles), and 2.7 RMB/km (about 4.34 RMB/mile) for the distance between 2 and 20 km. An extra 30% is added to the distance fare for the portion between 20 and 35 km, and 60% for the distance beyond that. A 30% night surcharge also applies between 23:00 and 6:00 the next day. DiDi and some other privately operated hire services are also very popular in Shenzhen.
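A rough sketch of that fare structure (interpretation only: the 30%/60% figures are taken as per-kilometre surcharges on the 20-35 km and over-35 km portions, and the night surcharge as applying to the whole metered fare; actual metering rules may differ):

```python
def shenzhen_taxi_fare(distance_km: float, night: bool = False) -> float:
    """Illustrative estimate of a blue-taxi fare in RMB, per the figures above."""
    fare = 10.0  # flag-fall covers the first 2 km
    if distance_km > 2:
        fare += 2.7 * (min(distance_km, 20) - 2)
    if distance_km > 20:
        fare += 2.7 * 1.3 * (min(distance_km, 35) - 20)   # assumed +30% rate, 20-35 km
    if distance_km > 35:
        fare += 2.7 * 1.6 * (distance_km - 35)            # assumed +60% rate beyond 35 km
    if night:                                             # 23:00-06:00 surcharge
        fare *= 1.3
    return round(fare, 1)

print(shenzhen_taxi_fare(10))              # daytime, 10 km trip
print(shenzhen_taxi_fare(25, night=True))  # night trip with long-distance portion
```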
Road transport:
Public Bicycles Public bicycle systems in Shenzhen can be roughly divided into 2 kinds.
Road transport:
Dock-based public bicycles: The dock-based public bicycle system in Shenzhen started operating in Yantian District in Dec 2011, being the first public bicycle system in Shenzhen. It then spread to Luohu, Futian, Longgang and Nanshan Districts. The Yantian public bicycle system is the only one covering a whole district in Shenzhen. These bicycle systems are franchised by district-level governments and are usually incompatible with each other. The franchised bicycle system in Luohu District was suspended in Feb 2018, and that of Yantian District was suspended in Apr 2022.
Road transport:
Bike sharing: Bike sharing usually refers to dockless public bicycle systems run by private operators in China. It started in Oct 2016 in Nanshan District with Mobike. Users download the operators' apps and scan QR codes to unlock a bicycle for a ride. Many private bike-sharing operators such as Ofo and Xiaoming then appeared and developed rapidly in 2017. The government has begun to regulate the number of these bicycles, as there are too many. Some operators such as Bluegogo also went bankrupt because of high operational costs, leaving only 3 operators since 2020: Mobike by Tencent, DiDi Bike by DiDi and Hellobike by Alibaba.
Road transport:
Highway System The highway system in Shenzhen is part of the national highway system of China as well as of the provincial highway system, called Guangdong highways. It includes expressways and normal highways.
Road transport:
Expressways: Expressways in Shenzhen usually charge a toll of 0.45 RMB/km (about 0.72 RMB/mile) for a private car under the provincial standard, with higher rates for larger vehicles. Speed limits also vary with the type of vehicle and usually range from 60 km/h (37 mph) to 120 km/h (75 mph). Normal highways: Normal highways are free, with lower speed limits than expressways.
Road transport:
The following are their numbers with names or destinations. National highways in Shenzhen: the numbers of these highways start with G.
Jinggang'ao Expressway, Shennan Expressway, Wushen Expressway, Shenhai Expressway, Shenzhen Outer Ring Expressway, Changshen Expressway, Pearl River Delta Ring Expressway, Beijing-Hong Kong, Shanhaiguan-Shenzhen, Dongying-Shenzhen and Dandong-Dongxing. Provincial highways in Shenzhen: the numbers of these highways start with S.
Guangshen Riverside Expressway, Shuiguan Expressway, Congguanshen Expressway, Huishen Coastal Expressway, Longda Expressway, Nanguang Expressway, Huiyan Road, Danping Express, Nanping Express, Xichong-Bao'an, Daya Bay Nuclear Power Plant-Longhua and Coastal Boulevards. There are also county highways and rural highways in Shenzhen, but many of them have been rerouted or rebuilt with the rapid urbanization of the city.
Urban Roads Being a relatively new city dating back to only the late 1970s, Shenzhen, especially the former SEZ, has had the advantage of planned street grids.
Typically, urban roadways in Shenzhen are designated as street, road, avenue and boulevard. Streets in Shenzhen tend to be narrow, with one or two lanes, roads have two to four lanes, while avenues and boulevards are wide, which can have anywhere between four and twelve lanes.
Pedestrians There are 2 famous pedestrian streets in Shenzhen.
Road transport:
East Gate Pedestrian Street: Located in Luohu District, East Gate Pedestrian Street is one of the oldest pedestrian streets in Shenzhen. Commercial activities had begun there even before the city was built. In 1990, the first McDonald's in mainland China opened there. As the busiest pedestrian street in Shenzhen, it covers a comprehensive range of goods and mainly focuses on clothing. People can get there by Shenzhen Metro Line 1 or Line 3 at Laojie (meaning Old Street) Station, or Line 3 at Shaibu Station.
Road transport:
Huaqiang North Pedestrian Street: Huaqiang North Pedestrian Street is located in Futian District. It was pedestrianized in late 2016 after the construction of Shenzhen Metro Line 7, with the underground commercial section completed in July 2018. It was once well known as an ideal marketplace for electronic devices, but it has become less popular as online shopping increases. People can get there by Shenzhen Metro Line 2 or Line 7 at Huaqiang North Station, Line 1 at Huaqiang Road Station, or Line 3 or Line 7 at Huaxin Station.
Maritime transport:
Ferries There are ferries from Shekou Cruise Centre to other cities in the Pearl River Delta region, including Hong Kong, Macau, Zhuhai, etc. There is also a ferry terminal at Shenzhen Bao'an International Airport, with a direct ferry to Hong Kong International Airport that remains entirely airside. Moreover, there are a few ferries travelling within Shenzhen, such as the Yantian-Nan'ao ferry connecting Yantian District with Nan'ao Subdistrict, Dapeng New District.
Maritime transport:
Port The city's 260 km (162 mi) coastline is divided by the main landmass of Hong Kong (namely the New Territories and the Kowloon Peninsula) into two halves, the eastern and the western. Shenzhen's western port area, in Nanshan District, lies to the east of Lingdingyang in the Pearl River Estuary and possesses a deep water harbour with superb natural shelter. It is about 20 nautical miles (37 km; 23 mi) from Hong Kong to the south and 60 nautical miles (111 km; 69 mi) from Guangzhou to the north. Via the Pearl River system, the western port area is connected with the cities and counties of the Pearl River Delta network; via the On See Dun waterway, it reaches ports both at home and abroad. On the other hand, Shenzhen's eastern port area is located in Yantian District, connected with the Pinghu-Yantian Railway at Yantian freight railway station.
Maritime transport:
Shenzhen handled a record number of containers in 2005, ranking as the world's fourth-busiest port, after rising trade increased cargo shipments through the city. China International Marine Containers and other operators of the port handled 16.2 million standard 20 ft (6.1 m) boxes that year, a 19 percent increase. Investors in Shenzhen are expanding to take advantage of rising volume. Yantian International Container Terminals, Chiwan Container Terminals, Shekou Container Terminals, China Merchants Port and Shenzhen Haixing (Mawan port) are the major port terminals in Shenzhen.
Air:
Donghai Airlines, Shenzhen Airlines and Jade Cargo International are based at Shenzhen Bao'an International Airport. The airport is 32 km (20 mi) from central Shenzhen and connects the city with many other parts of China, as well as international destinations. The airport also serves as an Asia-Pacific cargo hub for UPS Airlines. Shenzhen Donghai Airlines has its head office in the Shenzhen Airlines facility on the airport property. SF Airlines has its headquarters in the International Shipping Center. Shenzhen is also served by Hong Kong International Airport; ticketed passengers can take ferries from the Shekou Cruise Centre and the Fuyong Ferry Terminal to the HKIA Skypier. There are also coach bus services connecting Shenzhen with Hong Kong International Airport. There are also heliports in Nanshan District and Yantian District for official use or luxury services. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hatsumei Boy Kanipan**
Hatsumei Boy Kanipan:
Hatsumei Boy Kanipan (発明BOYカニパン, Hatsumei Bōi Kanipan, trans. Inventor Boy Kanipan) is a 1998 Japanese anime television series produced by NAS and TV Tokyo and animated by Studio Comet. It was immediately followed by a second season titled Chō Hatsumei Boy Kanipan (超発明BOYカニパン, Chō Hatsumei Bōi Kanipan, trans. Super Inventor Boy Kanipan). In late 2000, Saban Entertainment licensed the series to air on Fox Kids in September 2001 but it did not air for unknown reasons.
Plot:
In the future, humans live on the artificial planet Sharaku, built by Dr. Taishi. The people living on this planet are highly technologically advanced. All inventors are required to hold an inventor's license, which is ranked in levels (C, B, A, and the TAISHI level) according to the evaluation of the inventor.
Plot:
The story revolves around Kanipan, an inventor who wants to reach the TAISHI level together with his interface robot, Kid. While pursuing his dream he encounters and confronts villains who are opposed to AI robots. They terrorize the citizens by turning robots evil, at first by installing a customized chip and later by using a robotic insect that overrides a robot's systems.
Characters:
Kanipan (カニパン, Kanipan), voiced by Junko Takeuchi (Japanese). Kanipan is a 10-year-old IT engineer and the hero of the story.
Kid (キッド, Kiddo), voiced by Rie Iwatsubo (Japanese). Kid is Kanipan's interface robot.
Angelica (アンジェリカ, Anjerika), voiced by Yumi Kakazu (Japanese).
Milk (ミルク, Miruku), voiced by Kanako Mitsuhashi (Japanese). Milk is a 10-year-old rich young girl and a famous superstar idol in Sharaku.
Igor (イゴール, Igoru), voiced by Masami Iwasaki (Japanese). Igor is Milk's interface robot and her personal butler.
Nuts (ナッツ, Nattsu), voiced by Harumi Ikoma (Japanese).
Pochi (ポチ, Pochi), voiced by Takashi Matsuyama.
Ravioli (ラビオリ, Rabiori), voiced by Nanaho Katsuragi.
Dr. Taishi, voiced by Shoichiro Akaboshi (Japanese).
Music:
Opening Themes
"LOVE LOVE Phantasy". Lyrics by Hero Matsui and Keichi Ueno; composition and arrangement by Keichi Ueno; performed by Whoops!!
"First Time Being In Love, I Knew It Was You" (恋してはじめて知った君, Koishite Hajimete Shitta Kimi). Lyrics by Ohta Shinichirou & Hata Hideki; composition and arrangement by Ohta Shinichirou, Kobayashi Masamichi & Arai Yasunori; performed by BAAD
Ending Themes
"Ever" (いつか, Itsuka). Lyrics by Suzi Kim; composition and arrangement by Hero Matsui; performed by Whoops!!
"O·K!". Lyrics by Akihito Tokunaga & Terukado Ohnishi; composition and arrangement by XL; performed by XL | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Probe card**
Probe card:
A probe card (commonly referred to as a DUT board) is used in automated integrated circuit testing. It is an interface between an electronic test system and a semiconductor wafer.
Use and manufacture:
A probe card or DUT board is a printed circuit board (PCB), and is the interface between the integrated circuit and a test head, which in turn attaches to automatic test equipment (ATE), or "tester". Typically, the probe card is mechanically docked to a wafer prober and electrically connected to the ATE. Its purpose is to provide an electrical path between the test system and the circuits on the wafer, thereby permitting the testing and validation of the circuits at the wafer level, usually before they are diced and packaged. It normally comprises a PCB and some form of contact elements, usually metallic. A semiconductor manufacturer will typically require a new probe card for each new device wafer and for device shrinks (when the manufacturer reduces the size of the device while keeping its functionality), because the probe card is effectively a custom connector that takes the universal pattern of a given tester and translates the signals to connect to electrical pads on the wafer. For testing of dynamic random-access memory (DRAM) and flash memory devices, these pads are typically made of aluminum and are 40–90 µm per side. Other devices may have flat pads, or raised bumps or pillars made of copper, copper alloys or many types of solder, such as lead-tin and tin-silver.
Use and manufacture:
The probe card must make good electrical contact to these pads or bumps during the testing of the device. When the testing of the device is complete, the prober will index the wafer to the next device to be tested.
Use and manufacture:
Normally a probe card is inserted into a wafer prober, inside which the position of the wafer to be tested will be adjusted to ensure a precise contact between the probe card and wafer. Once the probe card and the wafer are loaded, a camera in the prober will optically locate several tips on the probe card and several marks or pads on the wafer, and using this information it will align the pads on the device under test (DUT) to the probe card contacts.
Design and types:
Probe cards are broadly classified into needle type, vertical type, and MEMS (micro-electro-mechanical system) type, depending on the shape and form of the contact elements. MEMS is the most advanced technology currently available; the most advanced probe cards can test an entire 12-inch wafer in a single touchdown.
Probe cards or DUT boards are designed to meet both the mechanical and electrical requirements of the particular chip and the specific test equipment to be used. One type of DUT board is used for testing the individual dies of a silicon wafer before they are cut free and packaged, and another type is used for testing packaged ICs.
Efficiency factors:
Probe card efficiency is affected by many factors. Perhaps the most important is the number of DUTs that can be tested in parallel. Many wafers today are still tested one device at a time. If a wafer had 1,000 such devices, the time required to test one device was 10 seconds, and the time for the prober to move from one device to the next was 1 second, then testing the entire wafer would take 1,000 × 11 seconds = 11,000 seconds, or roughly 3 hours. If, however, the probe card and the tester could test 16 devices in parallel (with 16 times the electrical connections), the test time would be reduced by almost exactly 16 times, to about 11 minutes.
Advanced Tester Resource Enhancement (ATRE) is a powerful means of increasing the number of DUTs that a probe card can test in parallel (that is, in one touchdown, during which the probe card needles remain in contact with the wafer DUTs). ATRE allows the sharing of tester resources among DUTs using active components, which can connect and disconnect DUTs from the tester resources. Without ATRE, a single tester resource (power, DC or AC signal) would normally go directly to only one DUT. However, by installing ATRE-configured relays (switches) on the probe card PCB, the tester resource can be split or branched out to multiple DUTs. For example, in a ×4 sharing configuration, one power signal is fed into 4 relays whose outputs go to 4 DUTs, respectively. By turning each relay on and off sequentially (in the case of a DUT current-measurement test), the tester can then test each of the 4 DUTs in turn during the same touchdown, without having to move the prober from one device to another. A tester that has only 256 power signals will therefore appear to have its resources expanded or enhanced so as to enable it to test 1,024 DUTs in one touchdown, thanks to the 1,024 onboard relays in the ×4 sharing scheme implemented on the probe card. ATRE brings dramatic savings in test time and cost, as it can allow a chip manufacturer or test house to validate more DUTs in one touchdown without purchasing a more advanced tester equipped with more resources.
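The arithmetic in the preceding paragraphs can be sketched in a few lines of code. The following Python snippet is a minimal illustration only, assuming a simple model in which every touchdown costs a fixed test time plus a fixed prober index time; the function and variable names are hypothetical and are not taken from any tester or prober software. It reproduces the serial and 16-DUT-parallel figures quoted above, as well as the ×4 ATRE resource-sharing count.

```python
# Illustrative sketch only: a rough wafer test-time model using the
# figures quoted in the text. Names here are hypothetical.

def wafer_test_time(num_duts, test_time_s, index_time_s, parallelism):
    """Return total wafer test time in seconds.

    num_duts     -- number of devices (DUTs) on the wafer
    test_time_s  -- time to test the DUTs contacted in one touchdown
    index_time_s -- prober move time between touchdowns
    parallelism  -- DUTs contacted and tested per touchdown
    """
    touchdowns = -(-num_duts // parallelism)  # ceiling division
    return touchdowns * (test_time_s + index_time_s)

# Serial testing: 1,000 touchdowns x (10 s + 1 s) = 11,000 s, roughly 3 hours.
print(wafer_test_time(1000, 10, 1, 1) / 3600)   # ~3.06 hours

# 16 DUTs per touchdown: roughly 11-12 minutes, i.e. nearly 16x faster.
print(wafer_test_time(1000, 10, 1, 16) / 60)    # ~11.6 minutes

# ATRE-style x4 sharing: 256 tester power signals, each branched through
# 4 relays, can serve 4 x 256 = 1,024 DUTs in a single touchdown.
print(256 * 4)  # 1024
```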
Efficiency factors:
Contamination issues
Another major factor is debris that accumulates on the tips of the probe needles. These are normally made of tungsten, tungsten-rhenium alloys, or advanced palladium-based alloys such as PdCuAg; some modern probe cards have contact tips manufactured by MEMS technologies. Irrespective of the probe tip material, contamination builds up on the tips as a result of successive touchdown events (where the probe tips make physical contact with the bond pads of the die). Accumulated debris has an adverse effect on the critical measurement of contact resistance. To return a used probe card to an acceptable contact resistance, the probe tips must be cleaned thoroughly. Cleaning can be done offline, for example using an NWR-style laser to reclaim the tips by selectively removing the contamination, or online during testing to optimize test results within a wafer or across wafer lots. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nicocodeine**
Nicocodeine:
Nicocodeine (Lyopect, Tusscodin) is an opioid analgesic and cough suppressant, an ester of codeine closely related to dihydrocodeine and the codeine analogue of nicomorphine. It is not commonly used in most countries, but has activity similar to other opiates. Nicocodeine and nicomorphine were introduced in 1957 by Lannacher Heilmittel of Austria. Nicocodeine is metabolised in the liver by demethylation to produce 6-nicotinoylmorphine, which is subsequently further metabolised to morphine. Side effects are similar to those of other opiates and include itching, nausea and respiratory depression. The related analogues nicomorphine and nicodicodeine were synthesized at around the same time; the definitive synthesis of nicocodeine, which involves treating anhydrous codeine base with nicotinic anhydride at 130 °C, was published by Pongratz and Zirm in Monatshefte für Chemie in 1957, simultaneously with the two analogues, in an article about amides and esters of various organic acids. Nicocodeine is almost always used as the hydrochloride salt, which has a free-base conversion ratio of 0.917. In the past, the tartrate, bitartrate, phosphate, hydrobromide, methiodide, hydroiodide, and sulfate salts were used in research or as pharmaceuticals.
Nicocodeine:
Nicocodeine is regulated in most cases in the same way as codeine and similar weak opiate drugs such as ethylmorphine, benzylmorphine, dihydrocodeine and its other close derivatives like acetyldihydrocodeine (although not the stronger hydrocodone or oxycodone, which are regulated like morphine), under national laws and the Single Convention on Narcotic Drugs. One notable exception is that nicocodeine is a Schedule I narcotic controlled substance in the United States, alongside heroin, because nicocodeine was never introduced for medical use in the United States.
Nicocodeine:
Nicodicodeine is a similar drug which is to nicocodeine as dihydrocodeine is to codeine. The metabolites of nicodicodeine include dihydromorphine, whereas nicocodeine is metabolised into morphine, as noted above.
Nicocodeine cough medicines are available as syrups, extended-release syrups, and sublingual drops. Analgesic preparations are also in the form of sublingual drops and tablets for oral administration. Nicocodeine is approximately the same strength as hydrocodone; it has a faster onset of action.
The 2013 DEA annual production quota for nicocodeine and its two related drugs is zero. Nicocodeine's ACSCN is 9309. Nicodicodeine is not assigned an ACSCN and is presumably controlled as either an ester of dihydromorphine or a derivative of nicomorphine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Surface Mini**
Surface Mini:
Surface Mini is an unreleased tablet computer that Microsoft designed as the successor to the Surface 2 in the Microsoft Surface family. The device has a Qualcomm Snapdragon 800 processor and a 7.5-inch (19 cm) 4:3 aspect ratio touchscreen display that defaults to portrait mode. Like its predecessor, the Surface Mini runs Windows RT 8.1, a mobile operating system that was designed for the ARM architecture and has limitations including an inability to install Win32 applications; programs can only be installed from the Windows Store.
Surface Mini:
Rumors of the development of a seven-inch tablet computer were reported in April 2013, when Microsoft was preparing a device in response to competitors' tablet computers with a similar form factor, including Apple's iPad Mini and Google's Nexus 7. The Surface Mini's release date was postponed from 2013 to 2014, and the device was canceled several weeks before the 2014 Surface event. The existence of the cancelled device was later revealed by Panos Panay during an October 2015 interview with Wired magazine. Despite its cancellation, Windows Central obtained the unreleased device and published its specifications and images of the tablet.
History:
In April 2013, The Wall Street Journal reported rumors that Microsoft was developing a seven-inch (18 cm) tablet computer as part of a new lineup of Surface tablets, in response to competitors' products in this form factor, including Apple's iPad Mini and Google's Nexus 7. The upcoming Surface RT mini-tablet was to be equipped with a Qualcomm Snapdragon 800 processor, replacing the Nvidia Tegra 3 processor used in the first generation of Surface RT tablets. The Snapdragon 800 added LTE connectivity, allowing the device to be equipped with mobile broadband, something Mike Angiulo, Microsoft's chief of ecosystem and planning, had hinted at as a possibility for future connected devices in early 2013. During the Build developer conference in 2013, Microsoft revealed tools that allow development of applications designed for devices with 7-and-7.5-inch (18 and 19 cm) displays, which suggested Microsoft was expecting original equipment manufacturers (OEMs) to offer devices with these screen sizes. Although it was not shown during Microsoft's 2013 Surface event, rumors of the Surface Mini tablet persisted into September; according to these rumors, its screen size had increased to seven and a half inches, a size that PC manufacturers did not widely adopt. The rumored device's screen aspect ratio was also switched from 16:9 to 4:3 to make it easier to use in portrait mode. According to Mary Jo Foley of ZDNet, Microsoft postponed the release of the Surface Mini to 2014 because it was focusing on development of the Windows 8.1 Update, which was released on April 8, 2014. When The Verge asked Microsoft's chief product officer Panos Panay about the possibility of a mini tablet computer for its Surface line, he declined to answer but hinted at the possibility of multiple form factors and screen sizes in the future Surface line.
History:
Cancellation
While several media outlets, including The Verge and Engadget, anticipated that the device would be announced at the 2014 Surface event, according to Bloomberg Microsoft had canceled the product a few weeks before the event. The Surface Mini was not revealed during the 2014 Surface event; the Surface Pro 3 was shown instead. Signs of the Surface Mini's existence include mentions of the device in the Surface Pro 3 user manual and in Microsoft's earnings reports in 2014.
History:
On October 26, 2015, during an interview with Wired magazine, Panay showed off the Surface Book and the canceled Surface Mini, saying he loved the tablet his team had built but never officially shipped. He also stated the device's design felt like that of a Moleskine notebook. Despite the cancellation of the Surface Mini, Windows Central obtained the unreleased device and published several images of it in June 2017. Windows Central published a favorable review of the Surface Mini in October 2019, awarding it four stars out of five and saying it would have been better if the Surface Mini had been updated with better specifications and equipped with Windows 10. Windows Central said the Surface Mini was canceled because devices running Windows RT were excluded from upgrading to Windows 10 and because Windows 10 for ARM was unavailable at that time.
Features:
Software
The Surface Mini shipped with Windows RT 8.1, a mobile operating system that, unlike Windows 8.1 on which it is based, has several limitations. Windows RT 8.1 includes Microsoft Office 2013 RT, a desktop application suite optimized for ARM systems. Windows Store apps are the only third-party applications that can be installed on Windows RT 8.1. Despite providing the traditional Windows desktop environment, users cannot install Win32 applications or applications optimized for ARM. Windows RT 8.1 excludes Windows Media Player in favor of multimedia apps found on the Windows Store; devices are pre-loaded with the in-house Xbox Music and Xbox Video apps. There is no upgrade path for the Surface Mini to run Windows 10, so the device remains on Windows RT until the end of support on January 10, 2023.
Features:
Hardware
The exterior of the Surface Mini is a combination of metal, glass and polyurethane materials. The outer shell is made from polyurethane, which is scratch resistant. It would have been supplied with a kickstand limited to three positions, like the one released with the Surface 3. A built-in pen loop would have allowed users to attach the Surface Pen stylus to the device when not in use; it could also be used to open the kickstand. Metal is used for the volume button, power button, headphone jack, Micro USB port and microSD card slot. The Surface Mini weighs 0.8 lb (360 g) and measures 8 × 5.5 × 0.35 in (215.9 × 139.7 × 8.89 mm). The Surface Mini has a Qualcomm Snapdragon 800 processor and 1 GB of random-access memory. There are three physical controls: a power button, a volume control, and a Start menu button. The touchscreen is a 7.5-inch (19 cm), 1,440 × 1,080 pixel display with a 4:3 aspect ratio and a pixel density of 240 ppi. The default orientation is portrait, with the Start menu button placed at the bottom of the screen, although the device supports screen rotation in all four orientations. The device is equipped with 32 GB of solid-state storage for programs and data, which can be expanded with a microSD card. It supports Wi-Fi 802.11a/b/g/n and Bluetooth 4.0, and has a front-facing 2.1 MP camera and a rear-facing 5 MP camera. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Graminoid**
Graminoid:
In botany and ecology, graminoid refers to a herbaceous plant with a grass-like morphology, i.e. elongated culms with long, blade-like leaves. They are contrasted to forbs, herbaceous plants without grass-like features.
The plants most often referred to include the families Poaceae (grasses in the strict sense), Cyperaceae (sedges), and Juncaceae (rushes). These are not closely related but belong to different clades in the order Poales. The grasses (Poaceae) are by far the largest family with some 12,000 species.
Besides their similar morphology, graminoids share the widespread occurrence and often dominance in open habitats such as grasslands or marshes. They can however also be found in the understory of forests. Sedges and rushes tend to prefer wetter habitats than grasses.
Examples of graminoid plants | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Angel Roleplaying Game**
Angel Roleplaying Game:
The Angel Roleplaying Game is a role-playing game published by Eden Studios, Inc. in 2003.
Background:
Based on the Angel TV series, this game uses Eden's Unisystem game system.
Books and products:
Angel RPG: Corebook (ISBN 1-891153-97-8) Though it is presented as a distinct line, the Angel Role-playing Game serves as a wholly compatible companion to the Buffy the Vampire Slayer roleplaying game. The ruleset is virtually identical, but the game offers an extensive point-based system which allows players and Directors to create their own supernatural package Qualities, thus producing characters ranging from psychics to demons. The book is also noteworthy for its introduction of organizational rules, allowing players and Directors to easily and quickly define the resources, influence and obligation associated with any given group - including, potentially, the player characters themselves.
Books and products:
Given the differences in both characters and setting, the Angel RPG describes that series' main cast and adversaries through the end of Season 3. Los Angeles is also presented, with the expected focus upon the fictional aspects of L.A. introduced on the show. Finally, the book includes the pregenerated adventure "Blood Brothers," which offers a plotline and general tone reminiscent of Angel's somewhat darker style. This is only the first part of a larger adventure, concluded in the Director's Screen (see below).
Books and products:
A Limited Edition (ISBN 1-891153-99-4) of this book was produced in 2003, featuring a black leatherette cover printed with an abstract design, a pale foil "Angel" graphic, and a cloth bookmark. This printing was limited to 500 copies.
Angel RPG: Director's Screen (ISBN 1-891153-98-6) This product includes a four-panel cardstock screen offering easy reference charts for the Director, as well as a 32-page booklet containing tips, aids, additional charts, and "Blood Brothers, Part Two," the conclusion to the adventure presented in the Corebook.
Angel RPG: Character Journal (ISBN 1-891153-58-7) Much like the Buffy Character Journal, this 16-page booklet was expected to provide a vastly expanded character sheet allowing players to record a great deal of information. Like many of the Buffyverse supplements, it was never released due to the end of the Fox/Eden licensing agreement.
Books and products:
Angel RPG: Investigator's Casebook (ISBN 1-891153-43-9) This supplement would have provided expanded rules and setting detail, an overview of the American legal system (appropriately enough, as the employees of Angel Investigations frequently butt heads with police officers, lawyers and others on the show), and a pregenerated adventure. Eden Studios will not release this product, following the end of its license with Fox.
Reception:
The Angel role-playing game won the Origins Award for Best Roleplaying Game in its year of release. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Parametric determinism**
Parametric determinism:
Ernest Ezra Mandel (Dutch: [manˈdɛl]; also known by various pseudonyms such as Ernest Germain, Pierre Gousset, Henri Vallin, and Walter; 5 April 1923 – 20 July 1995) was a Belgian Marxian economist, Trotskyist activist and theorist, and Holocaust survivor. He fought in the underground resistance against the Nazis during the occupation of Belgium.
Life:
Born in Frankfurt, Mandel was recruited in his youth in Antwerp to the Belgian section of the international Trotskyist movement, the Fourth International. His parents, Henri and Rosa Mandel, were Jewish émigrés from Poland; his father had been a member of Rosa Luxemburg's and Karl Liebknecht's Spartacist League. Mandel's university studies were interrupted shortly after they began, when the German occupying forces closed the university.
Life:
During World War II, while still a teenager, he joined the Belgian Trotskyist organisation alongside Abraham Leon and Martin Monath. He twice escaped after being arrested in the course of resistance activities, and survived imprisonment in the German concentration camp at Dora. After the war, he became the youngest member of the Fourth International secretariat, alongside Michel Pablo and others. He gained respect as a prolific journalist with a clear and lively style, as an orthodox Marxist theoretician, and as a talented debater. He wrote for numerous media outlets in the 1940s and 1950s including Het Parool, Le Peuple, l'Observateur and Agence France-Presse. At the height of the Cold War, he publicly defended the merits of Marxism in debates with the social democrat and future Dutch premier Joop den Uyl.
Activity:
After the 1946 World Congress of the Fourth International, Mandel was elected into the leadership of the International Secretariat of the Fourth International. In line with its policy, he joined the Belgian Socialist Party where he was a leader of a militant socialist tendency, becoming editor of the socialist newspaper La Gauche (and writing for its Flemish sister publication, Links), a member of the economic studies commission of the General Federation of Belgian Labour and an associate of the Belgian syndicalist André Renard. He and his comrades were expelled from the Socialist Party not long after the Belgian general strike of 1960–61 for opposing its coalition with the Christian Democrats and its acceptance of anti-strike legislation.
Activity:
He was one of the main initiators of the 1963 reunification between the International Secretariat, which he led along with Michel Pablo, Pierre Frank and Livio Maitan, and the majority of the International Committee of the Fourth International, a public faction led by James Cannon's Socialist Workers Party that had withdrawn from the FI in 1953. The regroupment formed the reunified Fourth International (also known as the USFI or USec). Until his death in 1995, Mandel remained the most prominent leader and theoretician of both the USFI and of its Belgian section, the Revolutionary Workers' League.
Activity:
Until the publication of his massive book Marxist Economic Theory in French in 1962, Mandel's Marxist articles were written mainly under a variety of pseudonyms, and his activities as a Fourth Internationalist were little known outside the left. After publishing Marxist Economic Theory, Mandel travelled to Cuba and worked closely with Che Guevara on economic planning, after Guevara (who was fluent in French) had read the new book and encouraged Mandel's involvement. He resumed his university studies and graduated from what is now the École Pratique des Hautes Études in Paris in 1967. Only from 1968 did Mandel become well known as a public figure and Marxist politician, touring student campuses in Europe and America giving talks on socialism, imperialism and revolution.
Activity:
Although officially barred from West Germany (and at various times from several other countries, including the United States, France, Switzerland, and Australia), he gained a PhD from the Free University of Berlin in 1972, where he taught for some months; his thesis was published as Late Capitalism. He subsequently gained a lecturer position at the Free University of Brussels.
Activity:
Mandel gained mainstream attention in the United States after Attorney General John N. Mitchell rejected his visa in 1969, against the suggestion of Secretary of State William P. Rogers. Mitchell acted under the Immigration and Nationality Act of 1952 (also known as the McCarran–Walter Act), which states that those who "advocate the economic, international and governmental doctrines of world Communism" and who write or publish "any written or printed matter advocating or teaching the economic, international and governmental doctrines of world Communism" can have their visas barred. Mandel had been granted visas in 1962 and 1968, but had unknowingly violated the conditions of his second visit by asking for donations for the legal defence of French demonstrators. After his visa was rejected, a number of American scholars vouched for his right to visit the United States, highlighting that he was not affiliated with the Communist Party and had publicly spoken out against the invasion of Czechoslovakia in 1968. In 1971, a federal court in New York voided Mitchell's decision, stating that the United States could not bar the visitor, but on 29 June 1972 the Supreme Court ruled, 6 to 3, that Mitchell had acted within his authority in rejecting the visa. His exclusion from the United States was thus upheld in the 1972 US Supreme Court case Kleindienst v. Mandel.
Activity:
In 1978, he delivered the Alfred Marshall Lectures at the University of Cambridge on the topic of the long waves of capitalist development. Mandel campaigned on behalf of numerous dissident left-wing intellectuals suffering political repression, advocated for the cancellation of Third World debt, and, in the Mikhail Gorbachev era, spearheaded a petition for the rehabilitation of the accused in the Moscow Trials of 1936–1938. In his seventies, he travelled to Russia to defend his vision of democratic socialism, and he continued to support the idea of revolution in the West until his death.
Writings:
In total, he published approximately 2,000 articles and around 30 books during his life in German, Dutch, French, English and other languages, which were in turn translated into many more languages. During the Second World War, he was one of the editors of the underground newspaper, Het Vrije Woord. In addition, he edited or contributed to many books, maintained a voluminous correspondence, and was booked for speaking engagements worldwide. He considered it his mission to transmit the heritage of classical Marxist thought, deformed by the experience of Stalinism and the Cold War, to a new generation. And to a large extent he did influence a generation of scholars and activists in their understanding of important Marxist concepts. In his writings, perhaps most striking is the tension between creative independent thinking and the desire for a strict adherence to Marxist doctrinal orthodoxy. Due to his commitment to socialist democracy, he has even been characterised as "Luxemburgist".
"Parametric determinism":
Parametric determinism is a Marxist interpretation of the course of history. It was formulated by Ernest Mandel and can be viewed as one variant of Karl Marx's historical materialism or as a philosophy of history.
"Parametric determinism":
In an article critical of the analytical Marxism of Jon Elster, Mandel explains the idea as follows: Dialectical determinism as opposed to mechanical, or formal-logical determinism, is also parametric determinism; it permits the adherent of historical materialism to understand the real place of human action in the way the historical process unfolds and the way the outcome of social crises is decided. Men and women indeed make their own history. The outcome of their actions is not mechanically predetermined. Most, if not all, historical crises have several possible outcomes, not innumerable fortuitous or arbitrary ones; that is why we use the expression 'parametric determinism' indicating several possibilities within a given set of parameters.
"Parametric determinism":
Formal rationality and dialectical reason
In formal-logical determinism, human action is considered either rational, and hence logically explicable, or else arbitrary and random (in which case human actions can be comprehended at best only as patterns of statistical distributions, i.e. as degrees of variability relative to some constants). But in dialectical determinism, human action may be non-arbitrary and determinate, hence reasonable, even though it is not explicable exclusively in terms of deductive inference. The action people select from a limited range of options may not be the "most logical" or "most optimal" one, but it can be shown to be non-arbitrary and reasonable under the circumstances, if the total context is considered.
What this means is that in human situations typically several "logics" are operating at the same time which together determine the outcomes of those situations: the logic of the actors themselves in their consciousness and actions; the logic of the given parameters constraining their behaviour; and the logic of the interactive (reflexive) relationship between actors and their situation.
If one considered only one of these aspects, one might judge people's actions "irrational", but if all three aspects are taken into account, what people do may appear "very reasonable". Dialectical theory aims to demonstrate this by linking different "logical levels" together into a total picture, in a non-arbitrary way. "Different logical levels" means that particular determinants regarded as irrelevant at one level of analysis are excluded, but are relevant and included at another level of analysis with a somewhat different (or enlarged) set of assumptions, depending on the kind of problem being investigated.
For example, faced with a situation, the language which people use to talk about it reveals that they can jump very quickly from one context to another related context, knowing very well that at least some of the inferences that can be drawn in the one context are not operative in the other. That is because they know that the assumptions in one context differ to some degree from the other. Nevertheless, the two contexts can coexist, and can be contained in the same situation, which we can demonstrate by identifying the mediating links. This is difficult to formalize precisely, yet people do it all the time, and think it perfectly "reasonable". For another example, people will say "you can only understand this if you are in the situation yourself" or "on the ground". What they mean is that the meaning of the totality of interacting factors involved can only be understood by experiencing them. Standing outside the situation, things seem irrational; being there, they appear very reasonable.
"Parametric determinism":
Dialectical theory does not mean that, in analyzing the complexity of human action, inconvenient facts are simply and arbitrarily set aside. It means, rather, that those facets of the subject matter which are not logically required at a given stage of the analysis are set aside. Yet, and this is the point, as the analysis progresses, the previously disregarded aspects are integrated step by step into the analysis, in a consistent way. The proof of the validity of the procedure is that, at the end, the theory has made the subject matter fully self-explanatory, since all salient aspects have been given their appropriate place in the theory, so that all of it becomes comprehensible, without resort to shallow tautologies. This result can obviously be achieved only after the research has already been done, and the findings can be arranged in a convincing way. A synthesis cannot be achieved without a preceding analysis. So dialectical analysis is not a "philosopher's stone" that provides a quick short-cut to the "fount of wisdom", but a mode of presenting the findings of the analysis after knowledge has been obtained through inquiry and research, and dialectical relationships have been verified. Only then does it become clear where the story should begin and end, so that all facets are truly explained. According to Ernest Mandel, "Marx's method is much richer than the procedures of 'successive concretization' or 'approximation' typical of academic science."
In mainstream social theory, the problem of "several logics" in human action is dealt with by game theory, a kind of modelling which specifies the choices and options which actors have within a defined setting, and what the effects of their decisions are. The main limitation of that approach is that the model is only as good as the assumptions on which it is based, while the choice of assumptions is often eclectic or fairly arbitrary. Dialectical theory attempts to overcome this problem by paying attention to the sources of assumptions, and by integrating the assumptions in a consistent way.
"Parametric determinism":
Making history
One common problem in historical analysis is to understand to what extent the results of human actions can be attributed to the free choices and decisions people made (or free will), and to what extent they are a product of social or natural forces beyond their control.
To solve this problem theoretically, Mandel suggests that in almost any human situation some factors ("parameters") are beyond the control of individuals, while some other conditions are under their control (arguably, one group of people could "impose parameters" on another, analogous to parents imposing constraints on children). Some things can, under the circumstances, be changed by human action, according to choice, but others cannot or will not be, and can thus be regarded as constants. A variable can vary, yet it cannot vary in any direction whatever, but only within the given parameters. In a general sense, a "parameter" is a given condition imposed on a situation, or a controlled variable, but more specifically it refers to a condition which in some way limits the amount and type of variability there can be.
"Parametric determinism":
Those given, objective parameters which are beyond people's control (and thus cannot normally be changed by them) limit the realm of possibilities in the future; they rule out some conceivable future developments or alternatively make them more likely to happen. In that sense human action is "determined" and "determinate". If that wasn't so, then it would be impossible to predict anything much about human behaviour.
"Parametric determinism":
Some of these parameters refer to limits imposed by the physical world, others to limits imposed by the social set-up or social structure that individuals and groups operate within. The dominant ideology or religion could also be a given parameter. If for example most people follow a certain faith, this shapes their whole cultural life, and is something to be reckoned with that isn't easily changed.
"Parametric determinism":
At the same time, however, the given parameters cannot usually determine in total what an individual or group will do, because they have at least some (and sometimes a great deal) of personal or behavioural autonomy. They can think about their situation, and make some free choices and decisions about what they will do, within the framework of what is objectively possible for them (the choices need not be rational or fully conscious ones, they could just be non-arbitrary choices influenced by emotions and desires). Sentient (self-aware) organisms, of which human beings are the most evolved sort, are able to vary their own response to given situations according to internally evaluated and decided options. In this sense, Karl Marx had written: People make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past.
"Parametric determinism":
"The past" (what really happened before, as distinct from its results in the present) is not something which can be changed at all in the present, only reinterpreted, and therefore the past is a given constant which delimits what can possibly happen in the present and in the future. If the future seems relatively "open-ended" that is just because in the time-interval between now and the future, new options and actions could significantly alter what exactly the future will be. Yet the variability of possible outcomes in the future is not infinite, but delimited by what happened before.
"Parametric determinism":
Ten implications
Ten implications of this view are as follows. At any point in time, the outcomes of a historical process are partly predetermined, and partly uncertain, because they depend on what human choices and decisions will be made in the present. Those choices are not made in a vacuum, but in an environment which makes those choices possible, makes them meaningful and gives them effect. Otherwise they would not be real choices, only imaginary choices.
"Parametric determinism":
While the past and the present rule out some courses of action, a human choice is always possible between a finite number of realistic options, which often enables the experienced analyst to specify the "most likely scenarios" of what could happen in the future. Some things cannot happen, and some things are more likely to happen than others.
"Parametric determinism":
Once an important choice has been made and acted upon, this will have an effect on the realm of possibilities; in particular, it will shift to a greater or lesser extent the parameters delimiting what can happen in the future. Thus, once "a train of events has been set in motion", it will foreclose other possibilities, and also it might open up some new ones. If masses of people make important new choices, whether in response to circumstances or in response to a new idea, a qualitative change occurs; in that case, most people begin to behave differently.
"Parametric determinism":
The process of history is both determined, in that the given parameters delimit the possible outcomes, but also open-ended, insofar as human action (or inaction) can change the historical outcomes within certain limits. Human history-making is therefore a reciprocal interaction between what people do, and the given circumstances.
"Parametric determinism":
To some extent at least, it is possible to predict with useful accuracy what will happen in the future, if one has sufficient experience, knowledge, and insight into the relevant causal factors at work as well as how they are related. This may be a work of science or sustained practical experience. In turn, future perspectives can importantly influence human action in the present.
"Parametric determinism":
In historical analysis and portrayals, the analytical challenge is to understand what part of a course of events is attributable to conscious human actions and decisions, what part is shaped by the combination of given circumstances in which the human actors had to act, and what exactly is the relationship between them (the link between the "part" and the "whole").
"Parametric determinism":
Because the ability to prove historical assessments scientifically is limited, ideology, a mind-set or a social mentality about the state of the world typically plays an important role in the perspectives people develop (Mandel refers here to an idea by Lucien Goldmann). With hindsight, it may be possible to trace out accurately why events necessarily developed in the way that they did, and not otherwise. But at the time they are happening, this is usually not possible, or not completely possible, and the hope (or fear) for a particular future may play an important role (here Mandel refers to the philosophy of Ernst Bloch). In addition, ideology influences whether one looks upon past events as failures or successes (as many historians have noted, history is often rewritten by the victors in great historical battles to cast themselves in an especially positive light). There is no "non-partisan" history-writing in this sense; at best we can say that the historian had full regard for the known facts pertaining to the given case and frankly acknowledged his biases.
"Parametric determinism":
"History" in general cannot be simply defined as "the past", because it is also "the past living in the present" and "the future living in the present". Historical thinking is not just concerned with what past events led to the present, but also with those elements from the past which are contained in the present and elements that point to the future. It involves both antecedents and consequents, including future effects. Only on that basis can we define how people can "make history" as a conscious praxis.
"Parametric determinism":
The main reason for studying history is not because we should assign praise or blame, or simply because it is interesting, but because we need to study past experience to understand the present and the future. History can be seen as a "laboratory", the lab-record of which shows how, under given conditions, people tried to achieve their goals, and what the results of their experimentations were. This can provide insight into what is likely or unlikely to succeed in future. At the very least, each generation has to come to grips with the experience of the previous generation, as well as educating the future generation.
"Parametric determinism":
The theory of historicism, according to which the historical process as a whole has an overall purpose or teleology (or "grand design"), is rejected. With Karl Marx and Friedrich Engels, Mandel thought that "History does nothing... It is people, real, living people who do all that... 'history' is not, as it were, a person apart, using people as a means to achieve its own aims; history is nothing but the activity of people pursuing their aims", with the proviso that people do so within given parameters not of their own making, which allows us to identify broad historical movements as determinate processes. The historical process is also not a matter of linear progress according to inevitable stages: both progress and regress can occur, and different historical outcomes are possible depending on what people do.
"Parametric determinism":
Perceptions and illusions
According to the theory of parametric determinism, the "human problem" in this context is usually not that human beings lack free choice or free will, or that they cannot in principle change their situation (at least to some extent), but rather their awareness of the options open to them, and their belief in their own ability to act on them, influenced as they may be by their ideology, experience and emotions.
"Parametric determinism":
Perceptions of what people can change or act upon may vary a great deal, they might overestimate it, or underestimate it. Thus it may take scientific inquiry to find out what perceptions are realistic. By discovering what the determinism is, we can learn better how we can be free. Simply put, we could "bang our head against a wall", but we could also go over the wall, through a door in the wall, or around the wall. At crucial points, humans can "make history" actively with a high awareness of what they are doing, changing the course of history, but they can also "be made by history" to the extent that they passively conform to (or are forced to conform to) a situation which is mostly not of their own making and which they may not understand.
"Parametric determinism":
As regards the latter, Mandel referred to the condition of alienation in the sense of a diminished belief in the ability to have control over one's own life, or feeling estranged from one's real nature and purpose in life. People might reify aspects of their situation. They might regard something as inevitable ("God's will") or judge "nothing could be done to prevent it" when the real point is that, for specific reasons, nobody was prepared to do anything about it—something could have been done, but it wasn't. Thus "historical inevitability" can also be twisted into a convenient apology to justify a course of events.
"Parametric determinism":
In this process of making choices within a given objective framework of realistic options, plenty of illusions are also possible, insofar as humans may have all kinds of gradations of (maybe false) awareness about their true situation. They may, as Mandel argues, not even be fully aware of what motivates their own actions, quite aside from not knowing fully what the consequences of their actions will be. A revolutionary seeking to overthrow the old order to make way for a new one obviously faces many "unknowns".
"Parametric determinism":
Therefore, human action can have unintended consequences, including effects which are completely opposite to what was intended. This means that popular illusions can also shape the outcomes of historical events. If most people believe something to be the case, even though it is not true, this fact can itself become a parameter limiting what can happen or influencing what will happen.
"Parametric determinism":
Skeptical reply
Because terrible illusions can occur, some historians are skeptical about the ability of people to change the world for the better in any real and lasting way. Postmodernism casts doubt on the existence of progress in history as such: on this view, the fact that the Egyptians built the Great Pyramid of Giza around 2500 BC while Buzz Aldrin and Neil Armstrong landed on the Moon in 1969 represents no progress for humanity.
"Parametric determinism":
However, Mandel argued that this skepticism is itself based on perceptions of what people are able to know about their situation and their history. Ultimately, the skeptic believes that it is impossible for people to have sufficient knowledge of a kind that they can really change the human condition for the better, except perhaps in very small ways. It just is what it is. This skeptical view does not necessarily imply a very "deterministic" view of history however; history could also be viewed as an unpredictable chaos or too complex to fathom.
"Parametric determinism":
However, most politicians and political activists (including Mandel himself) at least do not believe that history generally is an unpredictable chaos, because in that case their own standpoints would be purely arbitrary and be perceived as purely arbitrary. Usually, they would argue, the chaos is limited in space and time, because in perpetual chaos, human life can hardly continue anyway; in that case, people become reactive beasts. Since people mostly do want to survive, they need some order and predictability. One can understand what really happened in history reasonably well, if one tries. Human beings can understand human experience because they are human, and the more relevant experience they obtain, the better they can understand it.
"Parametric determinism":
Conscious human action, Mandel argues, is mainly non-arbitrary and practical, it has a certain "logic" to it even if people are not (yet) fully aware of this. The reality they face is ordered in basic ways, and therefore can be meaningfully understood. Masses of people might go into a "mad frenzy" sometimes that might be difficult to explain in rational terms, but this is the exception, not the rule. What is true is that a situation of chaos and disorder (when nothing in society seems to work properly anymore) can powerfully accentuate the irrational and non-rational aspects of human behaviour. In such situations, people with very unreasonable ideas can rise to power. This is, according to Mandel, part of the explanation of fascism.
"Parametric determinism":
Historical latency and the possibilities for change
The concept of parametric determinism has as its corollary the concept of historical latency. It is not just that different historical outcomes are possible, but that each epoch of human history contains quite a few different developmental potentials. The indications of these potentials can be empirically identified, and are not simply a speculation about "what could conceivably happen".
"Parametric determinism":
But they are latent factors in the situation, insofar as they will not necessarily be realised or actualised. Their realisation depends on human action, on the recognition of the potential that is there, and the decision to do something about it. Thus, Mandel argues that both socialism and barbarism exist as broad "latent" developmental possibilities within modern capitalist society, even if they are not realised, and whether and which of these will be realised, depends on human choices and human actions.
"Parametric determinism":
Effective action to change society, he argues, has to set out from the real possibilities there are for an alternative way of doing things, not from abstract speculations about a better world. Some things are realistically possible, but not just "anything" is possible. The analytical challenge—often very difficult—is therefore to understand correctly what the real possibilities are, and which course of action would have the most fruitful effect. One can do only what one is able to do and no more, but much depends on choices about how to spend one's energies.
"Parametric determinism":
Typically in wars and revolutions, when people exert themselves to the maximum and have to improvise, it is discovered that people can accomplish far more than they previously thought they could do (also captured in the saying "necessity is the mother of invention"). The whole way people think is suddenly changed. But in times of cultural pessimism, general exhaustion prevails and people are generally skeptical or cynical about their ability to achieve or change very much at all. If the bourgeoisie beats down the workers and constrains their freedom, so that workers have to work more and harder for less and less pay, pessimistic moods can prevail for quite some time. If, on the other hand, the bourgeois economy is expanding, the mood of society can become euphoric, and people believe that just about anything is possible. A famous leftwing slogan in May 1968 was "tout est possible" ("anything is possible"). Similarly, in the boom of the later 1990s, many people in rich countries believed that all human problems could finally be resolved.
"Parametric determinism":
That is just to say that what is possible to achieve can be both pessimistically underestimated and optimistically exaggerated at any time. Truly conservative people will emphasize how little potential there is for change, while rebels, visionaries, progressives and revolutionaries will emphasize how much could be changed. An important role for social scientific inquiry and historiography is therefore to relativise all this, and place it in a more objective perspective by looking at the relevant facts.
"Parametric determinism":
Criticisms
While Mandel himself made some successful predictions about the future of world society (for instance, he is famous for predicting at the beginning of the 1960s, as Milton Friedman did, that the postwar economic boom would end at the close of the decade), his Trotskyist critics (including his biographer Jan Willem Stutje) argue, with the benefit of hindsight, that he was far too optimistic and hopeful about the possibility of a workers' revolution in Eastern Europe and the Soviet Union during the Mikhail Gorbachev era and after, and more generally that his historical optimism distorted his political perspectives, so that he became too "certain" about a future that he could not be so certain about, or else was crucially ambivalent.
This is arguably a rather shallow criticism insofar as the situation could well have developed in different directions, which is precisely what Mandel himself argued; in politics, one could only try to make the most of the situation at the time, and here pessimism was not conducive to action. But the more substantive criticism is that many of Mandel's future scenarios were simply not realistic, and that in reality things turned out rather differently from what he thought. This raises several questions: whether the theory of parametric determinism in history is faulty; whether Mandel's application of the theory in his analyses was faulty; how much we can really foresee anyway, and what distinguishes forecast from prophecy; and whether and how much people learn from history anyway.
"Parametric determinism":
In answering these criticisms, Mandel himself would probably have referred to what he often called the "laboratory of history". That is, we can check the historical record to see who predicted what, the grounds given for the prediction, and the results. On that basis, we can verify empirically what kind of thinking (and what kind of people) will produce the most accurate predictions, and what we can really predict with "usable accuracy". One reason why he favoured Marxism was that he believed it provided the best intellectual tools for predicting the future of society. He often cited Leon Trotsky as an example of a good Marxist able to predict the future. In 1925, Trotsky wrote: The essence of Marxism consists in this, that it approaches society concretely, as a subject for objective research, and analyzes human history as one would a colossal laboratory record. Marxism appraises ideology as a subordinate integral element of the material social structure. Marxism examines the class structure of society as a historically conditioned form of the development of the productive forces; Marxism deduces from the productive forces of society the inter-relations between human society and surrounding nature, and these, in turn, are determined at each historical stage by man's technology, his instruments and weapons, his capacities and methods for struggle with nature. Precisely this objective approach arms Marxism with the insuperable power of historical foresight.
This may all seem a trivial "academic" or "scholastic" debate, similar to retrospective speculations about "what could have been different", but it has very important implications for the socialist idea of a planned economy. Obviously, if it is not possible to predict much about human behaviour with usable accuracy, then not much economic planning is feasible either, since a plan requires at least some expectation that its result can and will be realised in the future, even if the plan is regularly adjusted for new (and unanticipated) circumstances. In general, Mandel believed that the degree of predictability in human life was very much dependent on the way society itself was organised. If, for example, many producers competed with each other for profits and markets, based on privatized knowledge and business secrets, there was much unpredictability in what would happen. If the producers coordinated their efforts co-operatively, much would be predictable.
A deeper problem, to which Mandel alludes in his book Trotsky: A Study in the Dynamic of His Thought, is that if we regard certain conditions as possible to change for the better, we might be able to change them, even if people currently believe it is impossible; whereas if we regard them as unchangeable, we are unlikely to change them at all, even though they could possibly be changed (a similar insight occurs in pragmatism). That is, we make things possible by doing something about them, rather than doing nothing. This, however, implies that even when we try our best to be objective and realistic about history or anything else, we remain subjects influenced by subjective perceptions or elements of fear, hope, will or faith that defy reason or practicality.
"Parametric determinism":
Simply put, it is very difficult to bring scientific truths and political action together, as Marxists aim to do, in such a way that we really change the things we can change for the better to the maximum, and do not try to change things we really cannot change anyway (Marxists call this "the unity of theory and practice"). In other words, the will to change things can involve subjective perceptions of a kind for which even the best historical knowledge may offer no assistance or guide. And all perceptions of "history-making" may inescapably involve ideology, thus—according to skeptics—casting some doubt on the very ability of people to distinguish objectively between what can be changed, and what cannot. The boundary between the two might be rather blurry. This is the basis of Karl Popper's famous philosophy of social change by "small steps" only.
"Parametric determinism":
Mandel's reply to this skepticism was essentially to agree that there are always "unknowns" or "fuzzy" areas in human experience; for people to accomplish anything at all or "make their own history", they had to take a risk, calculated or otherwise. One could indeed see one's life as a "wager" ultimately staked on a belief, scientifically grounded or otherwise. However, he argued it was one thing to realise all that, but another to say that the "unknowns" are "unknowable". Thus, for good or for ill, "you don't know what you haven't tried" and, more specifically, "you don't know what you haven't tried to obtain knowledge about". The limits of knowledge and human possibilities could not be fixed in advance by philosophy; they had to be discovered through the test of practice. This attitude recalls Marx's famous comment that "All social life is essentially practical. All mysteries which lead theory to mysticism find their rational solution in human practice, and in the comprehension of this practice." Mandel believed, with Marx, that "ignorance never helped anybody" except those who profited from its existence ("never underestimate human gullibility, including your own").
"Parametric determinism":
The general task of revolutionary science was to overcome ignorance about human life, and this could not very well be done by reconciling people with their allegedly "predetermined" fate at every opportunity. We all know we will die eventually, but that says little yet about what we can achieve before that point. Skepticism has its uses, but what those uses are can only be verified from experience; a universal skepticism would be just as arbitrary as the belief that "anything is possible"—it would not lead to any new experience from which something could be learnt, including learning about the possibilities of human freedom. And such learning could only occur through making conscious choices and decisions within given parameters, i.e. in a non-arbitrary (non-chaotic) environment, permitting at least some predictability and allowing definite experiential conclusions. Mandel often reiterated that most people do not learn all that much from texts or from history; they learn from their own experience. They might be affected by history without knowing it. But anybody concerned with large-scale social change was almost automatically confronted with the need to place matters in broader historical perspective. One had to understand deeply the limits, consequences and implications of human action. Likewise, politicians making decisions affecting large numbers of people could hardly do without a profound sense of history.
Death and legacy:
Mandel died at his home in Brussels in 1995 after suffering a heart attack. Mandel is probably best remembered for being a populariser of basic Marxist ideas, for his books on late capitalism and long-wave theory, and for his moral-intellectual leadership in the Trotskyist movement. Despite critics claiming that he was 'too soft on Stalinism', Mandel remained a classic rather than a conservative Trotskyist: writing about the Soviet bureaucracy, but also about why capitalism had not suffered a death agony. His late capitalism was late in the sense of delayed rather than near-death. He still believed, though, that the system had not overcome its tendency to crises. A leading German Marxist, Elmar Altvater, stated that Mandel had done much for the survival of Marxism in the German Federal Republic.
Sources:
Biographies
Achcar, Gilbert, ed. (2003). Gerechtigkeit und Solidarität. Ernest Mandels Beitrag zum Marxismus. Köln: Neuer isp-Verlag.
North, David (1997). Ernest Mandel 1923–1995: A Critical Assessment of His Role in the History of the Fourth International. Labour Press Books.
Stutje, Jan Willem (2007). Ernest Mandel: Rebel tussen Droom en Daad. Antwerpen: Houtekiet/Amsab. Published in English as: Stutje, Jan Willem (2009). Ernest Mandel: A Rebel's Dream Deferred. Verso. ISBN 9781844673162.
Ali, Tariq (September–October 1995). "Tariq Ali interviews Ernest Mandel: the luck of a crazy youth". New Left Review. I (213).
Kellner, Manuel (2023). Against Capitalism and Bureaucracy: Ernest Mandel's Theoretical Contributions. Historical Materialism Book Series, Volume 279. Leiden: Brill. ISBN 978-90-04-53349-3. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Task (teaching style)**
Task (teaching style):
The Task teaching style is an option available to students under Student-Directed Teaching, a progressive teaching technology that aims to give the student a greater sense of ownership in his or her own education.
This teaching style is "for those students who required formal instruction and yet are capable of making some choice as to the appropriate practice for them to master the objective." This formal instruction happens at the same time as the Command students.
Task (teaching style):
Under Task, the teacher will:
- Provide a unit plan consisting of the objectives for several days, written in a language that students can understand
- Provide formal instruction
- Limit formal instruction to 25% of the time
- Provide an instruction area
- Assign an appropriate amount of choice in practice related to the instruction
- Provide a checking station with answer keys
- Use good questioning techniques and negotiation to help steer the students to becoming more independent
- Spend approximately 60% of the total class time with the students whose choice was Task (remember Command and Task are together for formal instruction)
- Provide perception checks and final tests as indicated in the unit plan
- Provide a second evaluative activity if required by an individual student

The student will:
- Listen to the instruction
- Consider what they know and what they don't know when selecting the amount and type of practice
- Declare the mark expected on each perception check
- Do more than one perception check if the declared mark is not reached within the flexibility factor

Assignments for students choosing Task style might look something like this: On page 159 there are some practice questions. Do any 3 of the first 5, any 2 of the next 5 and any 4 of the next 10. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Timehop**
Timehop:
Timehop is an application for smartphones that collects old photos and posts from Facebook, Instagram, Twitter, Apple Photos, Google Photos and Dropbox, and resurfaces them to users on the anniversary of the day they were originally shared.
The company was founded in 2011 by Jonathan Wegener and Benny Wong. As of January 2016, Timehop had 12 million users.
History:
Timehop began as 4SquareAnd7YearsAgo, which was created at Foursquare's Hackathon in February 2011. The original aim of the app was to build a service that would replay a user's past Foursquare check-ins in real time. The product was later simplified into a daily email.
A few months later, Jonathan Wegener and Benny Wong launched PastPosts.com, followed by And7YearsAgram, before finally merging the products under a single brand, Timehop. In the summer of 2013, the company raised $3 million in funding from existing investor Spark Capital. The funds helped build the Android version of the app.
The iOS version reached one million downloads and the app has been in the Top 200 in the U.S. App Store. In 2014, Timehop raised $10 million in Series B funding.
Controversy:
In December 2016, Timehop released the 4.0 update to their app, which replaced the scrolling timeline with separate pages for each entry. The update also removed a number of previous features. As a result of the update, Timehop received more than 7,000 1-star reviews in the iOS app store. Although Timehop quickly released an update which restored some of the features, it did not restore the scrolling timeline or Swarm check-ins, and the app still has predominantly 1-star reviews. On January 14, 2017, TechCrunch reported that Timehop CEO and co-founder Jonathan Wegener had stepped down and was replaced by Matt Raoul, the former design lead. Wegener stated that his departure "has nothing to do with the new version." In early July 2018, Timehop had a network intrusion that led to a data breach. According to the company, 21 million accounts were affected.
Devices:
Timehop is a free application for iOS and Android devices which can be downloaded from the App Store or Google Play. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Meteotsunami**
Meteotsunami:
A meteotsunami or meteorological tsunami is a tsunami-like sea wave of meteorological origin. Meteotsunamis are generated when rapid changes in barometric pressure cause the displacement of a body of water. In contrast to "ordinary" impulse-type tsunami sources, a traveling atmospheric disturbance normally interacts with the ocean over a limited period of time (from several minutes to several hours). Tsunamis and meteotsunamis are otherwise similar enough that it can be difficult to distinguish one from the other, as in cases where there is a tsunami wave but there are no records of an earthquake, landslide, or volcanic eruption. Meteotsunamis, rather, are triggered by extreme weather events including severe thunderstorms, squalls and storm fronts, all of which can quickly change atmospheric pressure. Meteotsunamis typically occur when severe weather is moving at the same speed and in the same direction as the local wave action towards the coastline. The size of the wave is enhanced by coastal features such as shallow continental shelves, bays and inlets. Only about 3% of historical tsunami events (from 2000 BC through 2014) are known to have meteorological origins, although their true prevalence may be considerably higher than this because 10% of historical tsunamis have unknown origins, tsunami events in the past are often difficult to validate, and meteotsunamis may have previously been misclassified as seiche waves. Seiches are classified as long-standing waves with longer periods and slower changes in water levels. They are also restricted to enclosed or partially enclosed basins.
Characteristics:
Meteotsunamis are restricted to local effects because they lack the energy available to a significant seismic tsunami. However, when they are amplified by resonance they can be hazardous. Meteotsunami events can last anywhere from a few minutes to a couple of hours. Their size, length and period are heavily dependent on the speed and severity of the storm front. They are progressive waves which can affect enclosed basins and also large areas of coastline. These events have produced waves over six feet in height and can resemble storm surge flooding.
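The speed-matching condition described above (an atmospheric disturbance travelling at roughly the speed of the long ocean wave it forces, often called Proudman resonance) can be illustrated with a short calculation. The sketch below is not taken from the article; the 50 m shelf depth and 22 m/s storm speed are purely illustrative assumptions.

```python
import math

def long_wave_speed(depth_m: float) -> float:
    """Shallow-water (long) wave speed c = sqrt(g * h), in m/s."""
    g = 9.81  # gravitational acceleration, m/s^2
    return math.sqrt(g * depth_m)

def resonance_factor(storm_speed_ms: float, depth_m: float) -> float:
    """Ratio of storm translation speed to the local long-wave speed.

    Values near 1.0 mean the pressure disturbance keeps pace with the
    ocean wave it forces, allowing energy to accumulate as both cross
    the shelf.
    """
    return storm_speed_ms / long_wave_speed(depth_m)

if __name__ == "__main__":
    depth = 50.0   # illustrative water depth over a continental shelf, m
    storm = 22.0   # illustrative translation speed of the squall line, m/s
    c = long_wave_speed(depth)
    print(f"long-wave speed: {c:.1f} m/s, ratio: {resonance_factor(storm, depth):.2f}")
```

In this toy example the ratio comes out close to 1, the regime in which the moving pressure disturbance can progressively amplify the wave it generates.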
Frequency of events:
In April 2019, NOAA determined that 25 meteotsunamis, on average, impact the Eastern Seaboard of the United States every year. In the Great Lakes, these events occur even more often: on average, 126 times a year.
Frequency of events:
In some parts of the world, they are common enough to have local names: rissaga or rissague (Catalan), ressaca or resarca (Portuguese), milgħuba (Maltese), marrobbio or marrubio (Italian), Seebär (German), abiki or yota (Japanese), šćiga (Croatian). Some bodies of water are more susceptible than others, including anywhere that the natural resonance frequency matches that of the waves, such as in long and narrow bays, particularly where the inlet is aligned with the oncoming wave. Examples of particularly susceptible areas include Nagasaki Bay, the eastern Adriatic Sea, and the Western Mediterranean.
Other notable events:
In 1929, a wave 6 meters in height pulled ten people from the shore to their deaths in Grand Haven, Michigan. A three-meter wave that hit the Chicago waterfront in 1954 swept people off of piers, drowning seven. A meteotsunami that struck Nagasaki Bay on 31 March 1979 achieved a maximum wave height of 5 meters; there were three fatalities. In June 2013, a derecho off the New Jersey coast triggered a widespread meteotsunami event, during which tide gauges along the East Coast, Puerto Rico and Bermuda reported "tsunami-like" conditions. The peak wave amplitude was 1 foot above normal sea level in Newport, RI. In New Jersey, divers were pulled over a breakwater and three people were swept off a jetty, two seriously injured, when a six-foot wave struck the Barnegat Inlet. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GDP-D-glucose phosphorylase**
GDP-D-glucose phosphorylase:
GDP-D-glucose phosphorylase (EC 2.7.7.78) is an enzyme with systematic name GDP:alpha-D-glucose 1-phosphate guanylyltransferase. This enzyme catalyses the following chemical reaction: GDP-alpha-D-glucose + phosphate ⇌ alpha-D-glucose 1-phosphate + GDP. The enzyme may be involved in prevention of misincorporation of glucose in place of mannose residues into glycoconjugates. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Progestogen ester**
Progestogen ester:
A progestogen ester is an ester of a progestogen or progestin (a synthetic progestogen). The prototypical progestogen is progesterone, an endogenous sex hormone. Esterification is frequently employed to improve the pharmacokinetics of steroids, including oral bioavailability, lipophilicity, and elimination half-life. In addition, with intramuscular injection, steroid esters are often absorbed more slowly into the body, allowing for less frequent administration. Many (though not all) steroid esters function as prodrugs.
Progestogen ester:
Esterification is particularly salient in the case of progesterone because progesterone itself shows very poor oral pharmacokinetics and is thus ineffective when taken orally. Unmodified, it has an elimination half-life of only 5 minutes, and is almost completely inactivated by the liver during first-pass metabolism. Micronization, however, has allowed for progesterone to be effective orally, although oral micronized progesterone was not developed until recent years. Examples of important progestogen esters include the 17α-hydroxyprogesterone derivatives medroxyprogesterone acetate, megestrol acetate, cyproterone acetate, and hydroxyprogesterone caproate, the 19-norprogesterone derivative nomegestrol acetate, and the 19-nortestosterone derivatives norethisterone acetate and norethisterone enanthate.
Progestogen esters:
Estrogens were discovered in 1929, and beginning in 1936, a variety of estradiol esters, such as estradiol benzoate and estradiol dipropionate, were introduced for clinical use. Testosterone esters, such as testosterone propionate and testosterone phenylacetate, were also introduced around this time. In contrast to estradiol and testosterone, progesterone proved more difficult to esterify. In fact, esterification involves the replacement of a hydroxyl group with an alkoxy group, and unlike estradiol and testosterone, progesterone does not possess any hydroxyl groups, so it is actually not chemically possible to esterify progesterone itself. The first progestogen esters were not introduced until the mid-1950s, and were esters of 17α-hydroxyprogesterone (which, unlike progesterone, has a hydroxyl group available for esterification) rather than of progesterone; they included 17α-hydroxyprogesterone caproate (Delalutin, Proluton) and 17α-hydroxyprogesterone acetate (Prodrox). The following quote of de Médicis Sajous et al. (1961) details the development of progestogen esters: Over a period of several years, many tens of thousands of dollars were invested by Upjohn in an effort to find an easily absorbed, orally active progesterone ester. The effort met with but limited success. One promising ester, [17α-hydroxyprogesterone acetate], marketed as Prodox, was found. It was more active by mouth than other progesterone preparations then on the market, but it was not so active orally as desired.To obtain a progestational drug with the wanted properties, it appeared necessary to alter the progesterone molecule itself. Beginning about 1957, Upjohn steroid chemists accordingly prepared a series of progesterones modified in the various ways that had been found to multiply the power of cortisone and hydrocortisone. One of the modifications — worked out by a team under Dr. John C. Babcock — was the attachment of a carbon atom and three hydrogen atoms — a methyl group — to carbon 6 in the first ring of the progesterone steroid nucleus. A similar modification had been the key step in creating Medrol, Upjohn's high-potency, antiinflammatory cortisone-type steroid. The new progestational agent was [6α-methyl-17α-hydroxyprogesterone acetate] or [medroxyprogesterone acetate], which Upjohn has trademarked Provera. It has proved to be the most potent progestational drug yet uncovered — hundreds of times more active orally than progesterone and, weight for weight, some fifty times more active by subcutaneous injection. Provera was placed on the market in 1959.
Progestogen esters:
Medroxyprogesterone acetate (Provera) entered clinical use and became widely marketed, largely superseding the 17α-hydroxyprogesterone esters. A variety of analogues of medroxyprogesterone acetate, such as chlormadinone acetate, cyproterone acetate, and megestrol acetate, were subsequently developed and introduced as well. Progestogen esters of other groups of progestins have also been introduced, including the 19-norprogesterone derivatives gestonorone caproate, segesterone acetate (nestorone), nomegestrol acetate, and norgestomet (11β-methyl-17α-acetoxy-19-norprogesterone) and the 19-nortestosterone derivatives etynodiol diacetate, norethisterone acetate, norethisterone enanthate, and quingestanol acetate.
Progestogen esters:
Although esters of steroidal androgens and estrogens are generally inactive themselves and act as prodrugs, the same is not true for many progestogen esters. For instance, esters of 17α-hydroxyprogesterone derivatives, such as hydroxyprogesterone caproate, medroxyprogesterone acetate, and cyproterone acetate, are highly active themselves (in fact, they are far more active than their unesterified forms) and are not prodrugs, forming little or none of their parent compounds (in the cases of the examples given, hydroxyprogesterone, medroxyprogesterone, and cyproterone, respectively). On the other hand, esters of 19-nortestosterone derivatives, such as etynodiol diacetate, norethisterone acetate, norethisterone enanthate, and quingestanol acetate, are all prodrugs.
Progestogen ethers:
Although it cannot be esterified, progesterone possesses ketone groups at the C3 and C20 positions, and for this reason, it is possible to etherify it; that is, progesterone ethers are possible. Quingestrone (Enol-Luteovis) is a progesterone ether (specifically, the 3-cyclopentyl ether of progesterone) that has been marketed in Italy as an oral contraceptive. Quingestrone is a variant of progesterone with improved pharmacokinetics, including higher potency, oral activity, greater lipophilicity, and a longer half-life. Two other progestogens, pentagestrone (never marketed) and pentagestrone acetate (Gestovis, Gestovister), are the 3-cyclopentyl enol ethers of 17α-hydroxyprogesterone and 17α-hydroxyprogesterone acetate, respectively, while progesterone 3-acetyl enol ether (never marketed) is the 3-acetyl enol ether of progesterone. Although it was originally thought that progesterone ethers like quingestrone were prodrugs of progesterone, it was subsequently found that this is not the case and that quingestrone instead seems to be transformed directly into the corresponding alcohols rather than ketones. These alcohols are progesterone metabolites like pregnanolones and pregnanediols, and as some of these metabolites, for instance 3β-dihydroprogesterone, have potent progestogenic activity, this may account for the clinical efficacy of progestogen ethers like quingestrone as progestogens.
Progestogen oximes:
While not esters, C3 and C20 oxime conjugates of progesterone, such as progesterone carboxymethyloxime (progesterone 3-(O-carboxymethyl)oxime; P4-3-CMO), P1-185 (progesterone 3-O-(L-valine)-E-oxime), EIDD-1723 (progesterone (20E)-20-[O-[(phosphonooxy)methyl]oxime] sodium salt), EIDD-036 (progesterone 20-oxime), and VOLT-02 (chemical structure unreleased), have been developed as water-soluble progesterone and neurosteroid prodrugs, although none have completed clinical development or been marketed as of yet. Some 19-nortestosterone progestins, including the marketed progestins norgestimate and norelgestromin and the non-marketed progestin norethisterone acetate oxime, are C3 oximes, although they have potent progestogenic activity of their own and are not necessarily prodrugs of the corresponding ketones. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MILEPOST GCC**
MILEPOST GCC:
MILEPOST GCC is a free, community-driven, open-source, adaptive, self-tuning compiler that combines the stable production-quality GCC, the Interactive Compilation Interface and machine-learning plugins to adapt automatically to any given architecture and program, and to predict profitable optimizations that improve program execution time, code size and compilation time. It is currently used and supported by academia and industry and is intended to open up research opportunities to automate compiler and architecture design and optimization. MILEPOST GCC is currently a part of the community-driven Collective Tuning Initiative (cTuning) to enable self-tuning computing systems based on a collaborative open-source R&D infrastructure with unified interfaces and to improve the quality and reproducibility of research on code and architecture optimization. MILEPOST GCC is connected with the Collective Optimization Database to collect and reuse profitable optimization cases from the community and predict high-quality optimizations based on statistical analysis of past optimization data.
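As a rough illustration of the idea of predicting profitable optimizations from past optimization data, the sketch below matches a new program's static features against previously tuned programs and reuses the best-known flags of the nearest match. The feature names, flag sets and nearest-neighbour model are illustrative assumptions only; the real MILEPOST GCC extracts its program features through a GCC plugin and queries the Collective Optimization Database rather than an in-memory table.

```python
import math
from typing import Dict, List, Tuple

# Toy "optimization database": static program features observed in the past,
# paired with the flag combination that gave the best result for that program.
# These feature names and flag sets are placeholders, not the actual MILEPOST
# feature vector or cTuning database schema.
PAST_CASES: List[Tuple[Dict[str, float], List[str]]] = [
    ({"basic_blocks": 12, "loops": 1, "calls": 3},    ["-O2", "-funroll-loops"]),
    ({"basic_blocks": 180, "loops": 14, "calls": 40}, ["-O3", "-ftree-vectorize"]),
    ({"basic_blocks": 45, "loops": 0, "calls": 60},   ["-Os", "-finline-functions"]),
]

def distance(a: Dict[str, float], b: Dict[str, float]) -> float:
    """Euclidean distance between two feature vectors over the keys of `a`."""
    return math.sqrt(sum((a[k] - b.get(k, 0.0)) ** 2 for k in a))

def predict_flags(features: Dict[str, float]) -> List[str]:
    """Return the flags of the most similar previously optimized program."""
    _, flags = min(PAST_CASES, key=lambda case: distance(features, case[0]))
    return flags

if __name__ == "__main__":
    new_program = {"basic_blocks": 150, "loops": 11, "calls": 35}
    print("suggested flags:", predict_flags(new_program))
```

In practice a statistical model trained on many past cases would replace the nearest-neighbour lookup, but the flow (extract features, consult past optimization data, emit a predicted flag set) is the same.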
MILEPOST GCC:
In January 2018, the cTuning foundation and the Raspberry Pi Foundation published an interactive article featuring MILEPOST GCC and the Collective Knowledge framework "for collaborative research into multi-objective autotuning and machine learning techniques." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hemotoxin**
Hemotoxin:
Hemotoxins, haemotoxins or hematotoxins are toxins that destroy red blood cells, disrupt blood clotting, and/or cause organ degeneration and generalized tissue damage. The term hemotoxin is to some degree a misnomer since toxins that damage the blood also damage other tissues. Injury from a hemotoxic agent is often very painful and can cause permanent damage and in severe cases death. Loss of an affected limb is possible even with prompt treatment.
Hemotoxin:
Hemotoxins are frequently employed by venomous animals, including snakes (vipers and pit vipers) and spiders (brown recluse). Animal venoms contain enzymes and other proteins that are hemotoxic or neurotoxic or occasionally both (as in the Mojave rattlesnake, the Japanese mamushi, and similar species). In addition to killing the prey, part of the function of a hemotoxic venom for some animals is to aid digestion. The venom breaks down protein in the region of the bite, making prey easier to digest.
Hemotoxin:
The process by which a hemotoxin causes death is much slower than that of a neurotoxin. Snakes which envenomate a prey animal may have to track the prey as it flees. Typically, a mammalian prey will stop fleeing not because of death, but due to shock caused by the venomous bite. Symptoms are dependent upon species, size, location of bite and the amount of venom injected. In humans, symptoms include nausea, disorientation, and headache; these may be delayed for several hours.
Hemotoxin:
Hemotoxins are used in diagnostic studies of the coagulation system. Lupus anticoagulant is detected by changes in the dilute Russell's viper venom time, which is a laboratory assay based on—as its name indicates—venom of the Russell's viper. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**No pain, no gain**
No pain, no gain:
No pain, no gain (or "No gain without pain") is a proverb, used since the 1980s as an exercise motto, that promises greater rewards for the price of hard and even painful work. Under this conception, competitive professionals, such as athletes and artists, are required to endure pain (physical suffering) and stress (mental/emotional suffering) to achieve professional excellence. Medical experts agree that the proverb is misleading when applied to exercise.
Exercise motto:
It came into prominence after 1982, when actress Jane Fonda began to produce a series of aerobics workout videos. In these videos, Fonda would use "No pain, no gain" and "Feel the burn" as catchphrases for the concept of working out past the point of experiencing muscle aches. The phrase expresses the belief that large, solid muscle is the result of hard training. Delayed onset muscle soreness is often used as a measure of the effectiveness of a workout. As applied to physical development, the discomfort caused may be beneficial in some instances and detrimental in others. Detrimental pain can include joint pain. Beneficial pain usually refers to that resulting from tearing microscopic muscle fibers, which will be rebuilt more densely, making a bigger muscle.
Exercise motto:
The expression has been adopted in a variety of sports and fitness activities, beginning in 1982 and continuing to the present day.
David B. Morris wrote in The Scientist in 2005, "'No pain, no gain' is an American modern mini-narrative: it compresses the story of a protagonist who understands that the road to achievement runs only through hardship." The concept has been described as being a modern form of Puritanism.
Origin:
The ancient Greek poet Hesiod (c. 750-650 BC) expresses this idea in Works and Days where he wrote: ...But before the road of Excellence the immortal gods have placed sweat. And the way to it is long and steep and rough at first. But when one arrives at the summit, then it is easy, even though remaining difficult.
Origin:
The ancient Greek playwright Sophocles (5th century BC) expresses this idea in the play Electra (line 945). This line is translated as: "nothing truly succeeds without pain", "nothing succeeds without toil", "there is no success without hard work", and “Without labour nothing prospers (well).” A form of this expression is found in the beginning of the second century, written in The Ethics of the Fathers 5:23 (known in Hebrew as Pirkei Avot), which quotes Ben Hei Hei as saying, "According to the pain is the reward." This is interpreted to be a spiritual lesson; without the pain in doing what God commands, there is no spiritual gain.
Origin:
In 1577 British poet Nicholas Breton wrote: "They must take pain that look for any gain." One of the earliest attestations of the phrase comes from the poet Robert Herrick in his "Hesperides". In the 1650 edition, a two-line poem was added:
NO PAINS, NO GAINS.
If little labour, little are our gains: Man's fate is according to his pains.
Origin:
A version of the phrase was crafted by Benjamin Franklin, in his persona of Poor Richard (1734), to illustrate the axiom "God helps those who help themselves": Industry need not wish, as Poor Richard says, and he that lives upon hope will die fasting. There are no gains, without pains... In the phrase, Franklin's central thesis was that everyone should exercise 45 minutes each day. In 1853 R. C. Trench wrote in On Lessons in Proverbs iv: "For the most part they courageously accept the law of labour, No pains, no gains,—No sweat, no sweet, as the appointed law and condition of man's life." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Unconditional love**
Unconditional love:
Unconditional love is known as affection without any limitations, or love without conditions. This term is sometimes associated with other terms such as true altruism or complete love. Each area of expertise has a certain way of describing unconditional love, but most will agree that it is that type of love which has no bounds and is unchanging. In Christianity, unconditional love is thought to be part of the Four Loves; affection, friendship, eros, and charity. In ethology, or the study of animal behavior, unconditional love would refer to altruism, which in turn refers to the behavior by individuals that increases the fitness of another while decreasing the fitness of the individual committing the act. In psychology, unconditional love refers to a state of mind in which one has the goal of increasing the welfare of another, despite the lack of any evidence of benefit for oneself.
Conditional love:
Some authors make a distinction between unconditional love and conditional love. In conditional love, love is "earned" on the basis of conscious or unconscious conditions being met by the lover, whereas in unconditional love, love is "given freely" to the loved one "no matter what". Loving is primary. Conditional love requires some kind of finite exchange, whereas unconditional love is seen as infinite and measureless. Unconditional love should not be confused with unconditional dedication: unconditional dedication or "duty" refers to an act of the will irrespective of feelings (e.g. a person may consider that they have a duty to stay with someone); unconditional love is an act of the feelings irrespective of will.
Conditional love:
Unconditional love separates the individual from their behavior. However, the individual may exhibit behaviors that are unacceptable in a particular situation.
Humanistic psychology:
Humanistic psychologist Carl Rogers spoke of an unconditional positive regard and dedication towards one single support. Rogers stated that the individual needed an environment that provided them with genuineness, authenticity, openness, self-disclosure, acceptance, empathy, and approval. Rogers proposed this idea of Unconditional Positive Regard not only in social and familial situations, but also encouraged getting the healthy loving environment in therapy situations as well. It is important that in face-to-face therapy settings this environment is fostered along with empathy and understanding for the individual. It is through unconditional positive regard that change happens because the individual can feel that openness, love, and ability to be themselves again which fosters a true desire to change for the right reasons. Also, Abraham Maslow supported the unconditional love perspective by saying that in order to grow, an individual had to have a positive perspective of themselves. In Man's Search For Meaning, logotherapist and Holocaust survivor Viktor Frankl draws parallels between the human capacity to love unconditionally and living a meaningful life. Frankl writes: "Love is the only way to grasp another human being in the innermost core of his personality. No one can become fully aware of the essence of another human being unless he loves him. ... Furthermore, by his love, the loving person enables the beloved person to actualize ... potentialities." For Frankl, unconditional love is a means by which we enable and reach human potential.
Neurological basis:
There has been some evidence to support a neural basis for unconditional love, showing that it stands apart from other types of love.
Neurological basis:
In a study conducted by Mario Beauregard and his colleagues, using an fMRI procedure, they studied the brain imaging of participants who were shown different sets of images either referring to "maternal love" (unconditional love) or "romantic love". Seven areas of the brain became active when these participants called to mind feelings of unconditional love. Three of these were similar to areas that became active when it came to romantic love. The other four active parts activated during the unconditional love portions of the experiment were different, showing certain brain regions associated with rewarding aspects, pleasurable (non-sexual) feelings, and human maternal behaviors. Through the associations made between the different regions, results show that the feeling of love for someone without the need of being rewarded is different from the feeling of romantic love. Along with the idea of "mother love", which is commonly associated with unconditional love, a study found patterns in the neuroendocrine system and motivation-affective neural system. Using the fMRI procedure, mothers watched a video of themselves playing with their children in a familiar environment, like home. The procedure found part of the amygdala and nucleus accumbens were responsive on levels of emotion and empathy. Emotion and empathy (compassion) are descriptives of love, therefore it supports the idea that the neural occurrences are evidence of unconditional love.
Religious perspective:
Christianity
In Christianity, the term "unconditional love" can be used to indicate God's love for a person irrespective of that person. This comes from the concept of God sending His only Son, Jesus Christ, down from heaven to earth to die on a cross in order to take the punishment for all of humanity's sins. If someone chooses to believe in this, commonly called "The Gospel", then Jesus' price on the cross pays for their sins so they can freely enter into heaven, and not hell. The term is not explicitly used in the Bible, and advocates for God's conditional or unconditional love, using different passages or interpretations to support their point of view, are both encountered due to the different facets of God's nature. The cross is a clear indicator of God's unconditional love in that there is no way to earn one's way to heaven, one must simply believe. In all other religions cited below, there is a conditional striving to achieve a sense of unconditional love, based on one's own efforts and understanding. In Christianity, it all depends on Jesus, not the person's effort nor understanding. A passage in scripture cites this: "For it is by grace you have been saved, through faith—and this is not from yourselves, it is the gift of God—" Ephesians 2:8,9, NIV. God's discipline can be viewed as conditional based on people's choices, but His actual love through Jesus is unconditional, and this is where some may become confused. His salvation is a free gift, but His discipline, which is shaping of good character, can look more conditional. Ultimately, knowing God and free passage to heaven have already been supplied by a God of unconditional love, one can simply choose to believe in order to receive such love. The civil rights leader and Pastor, Dr. Martin Luther King Jr. was quoted as saying "I believe that unarmed truth and unconditional love will have the final word in reality".
Religious perspective:
Buddhism
In Buddhism, one of the most important concepts is called "bodhicitta". There are two kinds of bodhicitta: relative and absolute bodhicitta.
Religious perspective:
In relative bodhicitta, one learns about the desire to gain the understanding of unconditional love, which in Buddhism is expressed as loving-kindness and compassion. The point is to develop bodhicitta for all living (sentient) beings. Absolute bodhicitta is a more esoteric tantric teaching. Understanding the principle of loving-kindness and compassion is expressed when one treats all living beings as if one was or had been (in former lives) their own mother. One's mother will do anything for the benefit of her child. The most loving of all relationships may be that between a mother and her child. Of course, if all beings treated all other living beings as they would their own child, then there would be much less enmity in this world. The importance of this cannot be overstated. At every moment one has the opportunity to make a choice how to act, and to be completely mindful of one's actions means that in every interaction with another being one will consciously act with loving-kindness and compassion toward every other being, no matter what the nature of that interaction.
Religious perspective:
Hinduism
In Hinduism and Buddhism, the Sanskrit word "bhakti" is apparently used by some to refer to unconditional love, even though its root meaning seems to be "participate". Bhakti or bhakthi is the unconditional religious devotion of a devotee in worship of the divine.
Religious perspective:
Islam
In Islamic belief, unconditional love can only be directed to Allah. The highest spiritual attainment in Islam is related to the love of God. "Yet there are men who take (for worship) others besides God, as equal (with God): They love them as they should love God. But those of Faith are overflowing in their love for God." O lovers! The religion of the love of God is not found in Islam alone.
Religious perspective:
In the realm of love, there is neither belief, nor unbelief.
Religious perspective:
In Islamic Sufism, unconditional love is the basis for the divine love Ishq-e-Haqeeqi, elaborated by many great Muslim saints to date. Prominent mystics explain the concept in its entirety and describe its hardcore reality. Rabia of Basra was the one who first set forth the doctrine of divine love known as ishq-e-haqeeqi and is widely considered to be the most important of the early renunciants, one mode of piety that would eventually become labeled as Sufism. She prayed: Ishq itself means to love God selflessly and unconditionally. For Rumi, "Sufism" itself is Ishq and not the path of asceticism (zuhd). According to Sultan Bahoo, Ishq means to serve God unconditionally by devoting one's entire life to Him and asking no reward in return.
Religious perspective:
Other religions
Neopaganism in general, and Wicca in particular, commonly use a traditional inspirational text, the Charge of the Goddess, which affirms that the Goddess's "law is love unto all beings". Mohism, which arose in China around 500 BCE, bases its entire premise on the supremacy of such an element, comparing one's duty to the indiscriminate generosity of "The Sky", or "Heaven", in contrast to Confucianism, which based its model of society on family love and duty. Later schools engaged in much debate on exactly how unconditional one could be in actual society (cf. "...who is my neighbour?" in "The Good Samaritan" story of Jesus of Nazareth).
Religious perspective:
Unitarian Universalism, though not having a set religious creed or doctrine, generally accepts the belief that all human beings are worthy and in need of unconditional love through charity in the community and spiritual understanding. The Unitarian Universalist Association explicitly argues this in the Seven Principles, where the "inherent worth and dignity" of all humans is a regularly cited source arguing for unconditional love. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Catenin alpha-1**
Catenin alpha-1:
αE-catenin, also known as Catenin alpha-1 is a protein that in humans is encoded by the CTNNA1 gene. αE-catenin is highly expressed in cardiac muscle and localizes to adherens junctions at intercalated disc structures where it functions to mediate the anchorage of actin filaments to the sarcolemma. αE-catenin also plays a role in tumor metastasis and skin cell function.
Structure:
Human αE-catenin protein is 100.0 kDa and 906 amino acids. Catenins (α,β,and γ (also known as plakoglobin)) were originally identified in complex with E-cadherin, an epithelial cell adhesion protein. αE-catenin is highly expressed in cardiac muscle and is homologous to the protein vinculin; however, aside from vinculin, αE-catenin has no homology to established actin-binding proteins. The N-terminus of αE-catenin binds β-catenin or γ-catenin/plakoglobin, and the C-terminus binds actin directly or indirectly via vinculin or α-actinin.
Function:
Though αE-catenin exhibits substantial expression in cardiac muscle, αE-catenin is best known for its role in metastasizing tumor cells. αE-catenin also plays a role in epithelial tissue, both at adherens junctions and in signaling pathways. In cardiomyocytes, αE-catenin is present in cell-to-cell regions known as adherens junctions which lie within intercalated discs; these junctions anchor the actin cytoskeleton to the sarcolemma and provide strong cell adhesion. Functional αE-catenin is required for normal embryonic development, as a mutation eliminating the C-terminal 1/3 of the protein resulting in a complete loss-of-function phenotype showed disruption of the trophoblast epithelium and arrested development at the blastocyst stage. αE-catenin specifically, not β- or γ-catenin, binds F-actin and organizes and tethers the filaments at regions of cell-cell contact. Studies show that full-length αE-catenin binds and bundles F-actin in a superior fashion relative to individual N-terminal or C-terminal domains. αE-catenin, along with β-catenin and plakoglobin, forms distinct complexes with N-cadherin that are involved in forming cell-cell contacts and differentiation of cardiomyocytes. Catenin-N-cadherin complexes are apparently necessary for and precede the first cell-to-cell contact, precursory to gap junction formation. The anchorage of cadherin-catenin complexes to actin filaments by αE-catenin is regulated by tyrosine phosphorylation. Functional insights into αE-catenin function have come from studies employing transgenesis. Mice harboring a cardiac-specific deletion of αE-catenin exhibited abnormalities in cardiac dimensions and function, representative of dilated cardiomyopathy. This was further characterized by disorganization of intercalated disc structures and mitochondria, as well as compensatory increases in β-catenin and decreases in localization of cadherin and vinculin at intercalated discs. Knockout mice also exhibited high susceptibility to death following stress.
Interactions:
αE-catenin has been shown to interact with: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Breast prostheses**
Breast prostheses:
Breast prostheses are breast forms intended to look like breasts. They are often used temporarily or permanently by women after mastectomy or lumpectomy procedures, but may also be used for aesthetic purposes. There are a number of materials and designs, although the most common construction is gel (silicone or water-based) in a plastic film meant to feel similar to a person's skin. Prostheses may be purchased at a surgical supply store, pharmacy, custom lingerie shop, or even through private services that come to a person's home. There are many types of ready-made breast prostheses, including full or standard prostheses, partial prostheses such as shell prostheses, and stick-on prostheses. Customized options, moulded to fit an individual's chest by taking an impression of the breast(s), are also available from specialty shops. The areola and nipple may be replicated as part of the breast form or as a separate nipple prosthesis. Both custom-made and off-the-shelf breast prostheses come in varieties that are designed to either be held in a pocket in a specially designed mastectomy bra or attached to the skin via adhesive or other methods and worn with a standard bra. There are many factors to consider when selecting breast prostheses, such as the different types and the care they require, insurance coverage, and psychosocial effects.
Uses:
External breast prostheses are commonly used in women who have undergone surgical treatment for breast cancer such as a mastectomy or lumpectomy. They have a variety of physical benefits including improved symmetry and balance, as well as psychological benefits such as improved self-confidence. Outside of post-surgical uses, prosthetics are also used by individuals to create the illusion of breasts.
Uses:
Mastectomy
Breast prostheses are most commonly used after a mastectomy, usually a consequence of cancer. They are often molded to mimic the natural shape of a woman's breast and can be used either temporarily or long term, as an alternative to, or prior to, surgical breast reconstruction. Depending on the type of mastectomy performed, the progress of post-operative healing, and other various factors, surgeons will determine the time when a woman can start to use a prosthesis. A prescription may be required for breast prostheses and mastectomy bras for insurance purposes. Up to 90% of women use a prosthetic after surgery, temporarily or permanently. Over half of these women choose full-weight options, while others will opt for more lightweight prosthetic devices. Some choose to make homemade prostheses, using materials such as rice and cotton.
Uses:
Post-mastectomy bras
Post-mastectomy bras are similar to regular bras with the exception of containing spandex stretch pockets on the inside that help keep the breast prosthesis in place. Post-mastectomy bras can be found at specialty shops or mastectomy boutiques, and some shops are also willing to stitch pockets into regular bras and swimsuits to hold prostheses.
Uses:
Post-surgical camisoles
Post-surgical camisoles are convenient for women to use immediately after breast surgery, especially if their breasts feel sore or sensitive. They are often made with soft cotton fabric and are designed to avoid rubbing or causing irritation to the skin. The camisoles have pockets for surgical drains and, similar to post-mastectomy bras, they have stitching to help hold fiber breast prostheses in place. Right after breast surgery, women are advised to avoid or limit their arm and shoulder movement; camisoles are ideal for this reason because they can be pulled on over the hips.
Uses:
Attachable breast prostheses
Attachable breast prostheses can be used as an alternative to post-mastectomy bras. Attachable breast prostheses can be attached directly to the skin via adhesives and can also be worn with a regular bra.
Homemade breast prostheses
Some women may choose to re-purpose the supplies found in their homes to create homemade breast prostheses. For example, shoulder pads or nylons may be used as fillers for their bras. Homemade versions can be ideal for those who prefer loose-fitting clothes where the breast shape is not as defined.
Uses:
Lumpectomy
After a lumpectomy or a quadrantectomy, individuals may be left with an asymmetrical silhouette. Breast prostheses can act as an equalizer, compensating for the missing tissue. Examples of breast prostheses after partial but not total breast tissue removal include partial breast prostheses and attachable breast prostheses (also known as contact prostheses). Partial breast prostheses are available in a variety of materials such as silicone, foam, or fiber. These inserts are able to fit discreetly into a regular bra or into the pocket of a mastectomy bra.
Uses:
Attachable breast prostheses anchor directly onto the body and are secured using adhesive or Velcro. Attachable prostheses can be custom made as a partial breast shape, as well as being readily available in full sizes. These prostheses, unlike the partial prostheses, move independently of a bra and can be worn along with a regular bra. For those who do not want a bra specially designed for prostheses, an attachable option may be a consideration.
Uses:
Breast enhancement
Transgender and cross-dressing
Many pre-hormonal or non-hormonal trans women and men who cross-dress as women use breast prostheses in order to create the illusion of feminine breasts. They are sometimes combined with cleavage enhancement techniques when used with clothing with low necklines.
Uses:
Full frontal cleavage tops are also available, mainly marketed to the transgender community. They incorporate a pair of breast prostheses in a one-piece skin coloured garment that is designed to provide the illusion of natural cleavage. Such garments have the disadvantage of having a visible top edge at the neck, which requires the wearing of a choker or similar necklace to hide the top edge of the garment. The edges of the breast prostheses are often distinguishable through the thin outer cover.
Uses:
Psychosocial considerations
After a lumpectomy or mastectomy, both the physical and psychological health of women can be affected as an outcome of tissue removal. A breast prosthesis is an alternative post-surgical option to breast reconstruction to aid with these consequences. Breast tissue removal can leave women with an altered center of gravity, and could have negative impacts on posture as well as balance. A prosthesis may help to correct balance and posture deficiencies caused by tissue removal. Additionally, partial or full loss of a breast can result in loss of self-esteem for some women. As a result, they may have feelings of introversion, shyness, or insecurity about their new appearance. Breast prostheses may not only improve physical appearance; they may also have psychological benefits by providing a sense of femininity for women.
Types:
Styles
Full/standard prosthesis - This prosthesis goes directly onto the breast wall and is used in those who have had all breast tissue removed. Size, shape and skin tone can be customized to match the other breast, or if both breasts have been removed, any size may be selected.
Types:
Partial prosthesis - Partial prosthesis contain two layers of silicone with a thin layer of film to gently adhere to the breast. Unlike a full prosthesis, this can be used in situations where part of the breast has been removed. It is worn over the breast tissue inside the bra to create a fuller appearance and fill the breast outline.
Types:
Shell prosthesis - When breasts differ in size from each other, this type of partial prosthesis can be used. A soft shell made of silicone is placed around the smaller breast to help match the size of the larger one. Shell prostheses can be used right after surgery and are ideal for periods of inactivity. They are typically made to have a polyester front with a breathable cotton backing and are lightweight.
Types:
Stick-on prosthesis - This prosthesis sticks onto the skin and can be either full or partial. Women who have a more active lifestyle, or who wish to place less weight on their bra, prefer this prosthesis. Another benefit is that strapless clothing can be worn with this prosthesis, as long as the clothing can provide some support.
Custom-made prosthesis - Some shops can customize prosthetics to match natural color, the size of the other breast, and the body's natural contour. Silicone and latex materials are normally used; however, these customized prosthetics are more expensive than those that are not custom made.
Shapes
Non-customised prostheses are made in different shapes to suit the extent of breast tissue removal or the shape of a crossdresser's chest. Asymmetric breast forms incorporate an extension towards the armpit to replicate the shape of the tail of Spence, while symmetric "triangle" or "teardrop" prostheses do not incorporate that extension. Customised prostheses will mirror the other breast.
Weight
Silicone breasts come in a variety of weights to fit the needs of the user and are typically designed to have the same weight as natural breasts. Lightweight forms that are about 20-40% lighter than the standard form are ideal for physical activity such as sports or for sleeping.
Temperature
Some users find that prostheses can get hot in warm and humid climates, though newer breast prostheses are designed to allow for better air circulation. Using a bra pocket or a prosthesis cover may also help with perspiration; however, it is important to cleanse the prosthesis often to prevent the perspiration from damaging it.
Types:
Skin tone
Many prostheses are available in colors which can suit different skin tones. Additionally, while finding an exact match for any skin tone may be difficult, companies have begun to add custom color to breast prostheses in order to match different skin tones. There may also be covers available for the prosthesis that can provide an even closer match.
History:
Breast prostheses have a long history. In the 19th century they were made of rubber. On 27 January 1874, a U.S. patent for a "breast pad" was issued to Frederick Cox (No. US 146805). His design consisted of rubber pads filled with air encased in cotton. Later, in 1885, Charles L. Morehouse received US patent 326915 for his "Breast-Pad", made of natural rubber and inflatable with air at normal pressure. Newer designs, such as Laura Wolfe's in 1904, parted with the air-filled design, which was prone to punctures, in favor of down feather and silk floss filling. While breast forms were mainly sold for post-surgical purposes, over time the aesthetic potential of these prosthetics was explored. Breast form development increased in the mid 20th century as more companies began to sell and market a variety of breast forms with new materials made possible by chemical engineering advancements. Eventually, marketing for breast prosthetics expanded to target people other than cisgender women looking for a surgical prosthetic or cosmetic enhancement. Companies like NearlyMe created branded products for trans and non-binary individuals.
Other Considerations:
Insurance
Breast prostheses or mastectomy bras are covered by most insurance plans. To get these covered, one should obtain a prescription from their physician with the diagnosis and documentation of need. External breast prostheses are covered under Medicare Part B following mastectomy; surgeries in the outpatient setting are also covered under Part B, while Part A covers mastectomy surgeries in the inpatient setting. Custom-made prostheses are not usually covered by insurance due to their high cost.
Other Considerations:
Care
Although breast prostheses specific for swimming exist, extra care should be given to rinse them immediately after swimming to avoid damage from chlorine or saltwater. In general, a silicone breast prosthesis should be treated like one's own skin; it should be washed daily with soap and water and dried afterwards. Some prostheses may require additional or more specific care to keep them clean. Sharp objects such as brooches or pins should be avoided as they may puncture silicone breasts and cause leaking. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Adrian Kent**
Adrian Kent:
Adrian Kent is a British theoretical physicist, Professor of Quantum Physics at the University of Cambridge, member of the Centre for Quantum Information and Foundations, and Distinguished Visiting Research Chair at the Perimeter Institute for Theoretical Physics. His research areas are the foundations of quantum theory, quantum information science and quantum cryptography. He is known as the inventor of relativistic quantum cryptography. In 1999 he published the first unconditionally secure protocols for bit commitment and coin tossing, which were also the first relativistic cryptographic protocols. He is a co-inventor of quantum tagging, or quantum position authentication, providing the first schemes for position-based quantum cryptography. In 2005 he published with Lucien Hardy and Jonathan Barrett the first security proof of quantum key distribution based on the no-signalling principle.
Work:
Field theory
Kent's early contributions to physics were on topics related to conformal field theory. Together with Peter Goddard and David Olive, he devised the coset construction that classifies the unitary highest weight representations of the Virasoro algebra, and he described the Virasoro algebra's singular vectors. In addition, he investigated the representation theory of N=2 superconformal algebras.
Work:
Quantum cryptography
Kent is the inventor of the field of relativistic quantum cryptography, where the security of cryptographic tasks is guaranteed by the properties of quantum information and by the relativistic physical principle stating that information cannot travel faster than the speed of light (no-signalling). In 1999 he published the first unconditionally secure protocols for bit commitment and strong coin tossing, relativistic protocols that evade the no-go theorems by Mayers, Lo and Chau, and by Lo and Chau, respectively. He is a co-inventor of quantum tagging, or quantum position authentication, where the properties of quantum information and the no-signalling principle are used to authenticate the location of an object. He published with Lucien Hardy and Jonathan Barrett the first security proof for quantum key distribution based on the no-signalling principle, where two parties can generate a secure secret key even if their devices are not trusted and they are not described by quantum theory, as long as they satisfy the no-signalling principle. With Roger Colbeck, he invented quantum randomness expansion, a task where an initial private random string is expanded into a larger private random string.
Work:
Quantum foundations Kent is a critic of the many-worlds interpretation of quantum mechanics, as well as the consistent histories interpretation. He has outlined a solution to the quantum reality problem, also called the quantum measurement problem, that is consistent with relativistic quantum theory, proposing that physical reality is described by a randomly chosen configuration of physical quantities (or beables) like the stress–energy tensor, whose sample space is mathematically well defined and respects Lorentzian symmetry. He has proposed Causal Quantum Theory as an extension of quantum theory, according to which local causality holds and the reduction of the quantum state is a well-defined physical process, claiming that current Bell-type experiments have not completely ruled out this theory. He discovered the no-summoning theorem, which extends the no-cloning theorem of quantum information to Minkowski spacetime.
Work:
Other work Kent is a member of the advisory panel for the Cambridge Centre for the Study of Existential Risk. He has discussed the mathematics of risk assessments for global catastrophes. He has proposed a solution to Fermi’s paradox, hypothesizing that various intelligent extra-terrestrial civilizations have existed, interacted and competed for resources, and have evolved to avoid advertising their existence.
**PaNie**
PaNie:
PaNie is a 25 kDa protein produced by the root rot disease-causing pathogen Pythium aphanidermatum. It stands for Pythium aphanidermatum Necrosis inducing elicitor. PaNie (aka NLPPya) belongs to a family of elicitors named the Nep1-like proteins (NLPs), which cause necrosis when injected into the leaves of dicotyledonous plants.
**Embedded pavement flashing-light system**
Embedded pavement flashing-light system:
An embedded flashing-light system or an in-pavement flashing-light system is a type of device that is used at existing or new pedestrian crosswalks to warn drivers of oncoming pedestrian traffic. The device usually consists of LED lights that are embedded into the roadway alongside the crosswalk and are oriented to face oncoming traffic. When a pedestrian approaches the crosswalk, the system is activated and the LED lights begin to flash simultaneously. These lights are programmed to flash for a period of time that is sufficient for an average pedestrian to cross.
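To make the timing concrete, the sketch below computes an illustrative flash duration from the crosswalk width and an assumed pedestrian walking speed. The 1.2 m/s walking speed, the start-up buffer, and the function name are illustrative assumptions, not figures taken from any particular product or traffic-engineering standard.

```python
def flash_duration_s(crosswalk_length_m: float,
                     walking_speed_mps: float = 1.2,
                     startup_buffer_s: float = 3.0) -> float:
    """Illustrative flash time: crossing time plus a small start-up buffer.

    The default walking speed and buffer are assumptions for illustration,
    not values from any traffic-engineering standard or product.
    """
    return crosswalk_length_m / walking_speed_mps + startup_buffer_s

# A 12 m wide roadway at 1.2 m/s plus a 3 s buffer -> the lights flash for 13 s.
print(round(flash_duration_s(12.0), 1))
```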
History:
The concept for an embedded pavement flashing-light system was conceived by pilot Michael Harrison in Santa Rosa, California, in 1992, after a friend was involved in a pedestrian accident. He based it on his experience with airport runway lights embedded in the pavement. Harrison went on to found Lightguard Systems.
Types:
There are two different types of embedded pavement flashing light systems, passive and active. These types differ in how the system is activated.
Types:
With a passive system, the pedestrian activates the device merely by walking up to the crosswalk. This is accomplished by using one of several motion detection devices. These include microwave, motion sensors, video detection, pressure plates, or a light trip beam. With an active system, the device is usually activated by a button that a pedestrian pushes to cross. These active systems are generally similar to lighted pedestrian signs at traffic intersections. Because many pedestrians may not realize that they need to press a button to activate the system, it is generally recommended to install a passive system.
Effectiveness:
The embedded pavement flashing-light system appears to be more effective than other types of warning devices. Drivers approaching a crosswalk equipped with such a system are more likely to slow down and yield to pedestrians than drivers approaching a crosswalk with another type of lighted warning device, and also more likely than at a crosswalk with no warning device at all.
**Alkyl polyglycoside**
Alkyl polyglycoside:
Alkyl polyglycosides (APGs) are a class of non-ionic surfactants widely used in a variety of cosmetic, household, and industrial applications. Biodegradable and plant-derived, these surfactants are usually made from glucose derivatives and fatty alcohols. The raw materials are typically starch and fat, and the final products are typically complex mixtures of compounds with different sugars comprising the hydrophilic end and alkyl groups of variable length comprising the hydrophobic end. When derived from glucose, they are known as alkyl polyglucosides.
Uses:
APG is used to enhance the formation of foams in detergents. It is also used in the personal care industry because it is biodegradable and safe for sensitive skin.
Preparation:
Alkyl glycosides are produced by combining a sugar such as glucose with a fatty alcohol in the presence of acid catalysts at elevated temperatures.
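In idealised form this condensation (a Fischer glycosidation) can be written as the scheme below, with ROH standing for the fatty alcohol. This is a simplification: as noted above, the commercial products are complex mixtures of mono- and oligoglucosides rather than a single species.

$$\mathrm{C_6H_{12}O_6} + \mathrm{ROH} \;\xrightarrow{\ \mathrm{H^+},\ \Delta\ }\; \mathrm{C_6H_{11}O_5\!-\!OR} + \mathrm{H_2O}$$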
**Factice**
Factice:
Factice is vulcanized unsaturated vegetable or animal oil, used as a processing aid and property modifier in rubber.
Factice:
Longer-chain fatty-acid-containing oils such as rapeseed or meadowfoam produce a harder, more desirable factice. Soybean oil produces lower quality factice, though it can be mixed with longer-chain oils to yield factice nearly as good as that made from long-chain oils alone. Oil-resistant factice is made with castor oil. Cross-linking the fatty-acid chains with sulfur (brown factice) or S2Cl2 (white factice) yields a rubbery material that improves the processing characteristics and ozone resistance of rubber. Varying the amount of factice changes the physical properties of the rubber; molded items might be 5-10% factice, extrusions 15-30%. Rubber erasers can have as much as 4 times as much factice as rubber in their composition.
**Blackdamp**
Blackdamp:
Blackdamp (also known as stythe or choke damp) is an asphyxiant, reducing the available oxygen content of air to a level incapable of sustaining human or animal life. It is not a single gas but a mixture of unbreathable gases left after oxygen is removed from the air and typically consists of nitrogen, carbon dioxide and water vapour. The term is etymologically and practically related to terms for other underground mine gases such as fire damp, white damp, stink damp, and afterdamp.
Etymology:
The meaning of "damp" in this term, while most commonly understood to imply humidity, presents evidence of having been separated from that newer, irrelevant meaning at least by the first decade of the 18th century, where the original relevant meaning of "vapor" derives from a Proto-Germanic origin, dampaz, which gave rise to its immediate English predecessor, the Middle Low German damp (with no record of an Old English intermediary). The proto-Germanic dampaz gave rise to many other cognates, including the Old High German damph, the Old Norse dampi, and the modern German Dampf, the last of which still translates as "vapor".
Sources:
Blackdamp is encountered in enclosed environments such as mines, sewers, wells, tunnels and ships' holds. It occurs with particular frequency in abandoned or poorly ventilated coal mines. Coal, once exposed to the air of a mine, naturally begins absorbing oxygen and exuding carbon dioxide and water vapor. The amount of blackdamp exuded by a mine varies based on a number of factors, including the temperature (coal releases more carbon dioxide in the warmer months), the amount of exposed coal, and the type of coal, although all mines with exposed coal produce gas.
Hazards:
Blackdamp is considered a particularly pernicious type of damp (especially in a historical context), due to its omnipresence where exposed coal is found, and slow onset of symptoms. It produces no obvious odor (unlike the hydrogen sulfide of stinkdamp), is constantly being reintroduced to the air (instead of being released in pockets from actively mined sections), and does not require combustion in order to be released (unlike whitedamp or afterdamp). Many of the initial symptoms of oxygen deprivation (dizziness, light-headedness, drowsiness and poor coordination) are relatively innocuous and can easily be mistaken for simple fatigue, given the physically strenuous job of coal mining. The time between the onset of initial symptoms and the start of frank asphyxiation (and rapid unconsciousness) can be as short as seconds. Thus, if the warning signs are missed, a large number of miners can be rapidly incapacitated in the same short period of time, leaving no one to summon help.
Hazards:
In addition to the danger inside the mine, blackdamp can be "exhaled" in large quantities from mines (especially long-abandoned coal mines with few outlets for escaping gas) during sudden changes in atmospheric pressure, potentially causing asphyxiation on the surface.
Disasters:
The gas mixture has been responsible for many deaths among underground workers, especially miners—for example, the Hartley Colliery disaster, when 204 men and boys were trapped when the beam of an engine suddenly broke and fell down the single shaft, damaging the ventilation system and blocking it with debris. Despite rescuers' efforts, they could not be reached before they suffocated in the blackdamp atmosphere.
Detection and countermeasures:
Historically, the domestic canary was used as an early warning against carbon monoxide.
Detection and countermeasures:
In active mining operations, the threat from blackdamp is addressed with proper mineshaft ventilation as well as various detection methods, typically using miners' safety lamps or hand-held electronic gas detectors. The safety lamp is a specially designed lantern whose flame automatically extinguishes at an oxygen concentration of approximately 18% (the normal atmospheric concentration of oxygen is c. 21%). This detection threshold gives miners an unmistakable warning and allows them to escape before any potentially incapacitating effects are felt.
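As a toy restatement of that detection logic, the sketch below flags an oxygen reading against the roughly 18% lamp-extinction threshold quoted above. The function and the exact threshold handling are illustrative only; they are not drawn from any mining regulation or instrument specification.

```python
LAMP_OUT_PCT = 18.0  # a flame safety lamp self-extinguishes near this level (normal air is ~21%)

def lamp_status(o2_pct: float) -> str:
    """Toy model of a flame safety lamp's response to oxygen concentration."""
    if o2_pct >= LAMP_OUT_PCT:
        return "lamp burning: oxygen above the ~18% extinction threshold"
    return "lamp out: oxygen-deficient (blackdamp) atmosphere, withdraw immediately"

print(lamp_status(20.5))  # lamp burning
print(lamp_status(17.0))  # lamp out
```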
**Adelphi Genetics Forum**
Adelphi Genetics Forum:
The Adelphi Genetics Forum is a non-profit learned society based in the United Kingdom. Its aims are "to promote the public understanding of human heredity and to facilitate informed debate about the ethical issues raised by advances in reproductive technology." It was founded by Sybil Gotto in 1907 as the Eugenics Education Society, with the aim of promoting the research and understanding of eugenics. Members came predominantly from the professional class and included eminent scientists such as Francis Galton.: 145 The Society engaged in advocacy and research to further its eugenic goals, and members participated in activities such as lobbying Parliament, organizing lectures, and producing propaganda. It became the Eugenics Society in 1924: 144 (often referred to as the British Eugenics Society to distinguish it from others). From 1909 to 1968 it published The Eugenics Review, a scientific journal dedicated to eugenics.: 225 Membership reached its peak during the 1930s. The Society was renamed the Galton Institute in 1989. In 2021, it was renamed the Adelphi Genetics Forum. The organisation is currently based in Wandsworth, London.
History:
Creation of the Eugenics Education Society The Eugenics Education Society (EES) was founded in 1907 at the impetus of 21-year-old Sybil Gotto, a widowed social reformer.: 7 Inspired by Francis Galton's work on eugenics, Gotto began looking for supporters to start an organization aimed at educating the public about the benefits of eugenics.: 7 She was introduced to the lawyer Montague Crackanthorpe, who would become the second president of the EES, by James Slaughter, the Secretary of the Sociological Society.: 7 Crackanthorpe introduced Gotto to Galton, the statistician who coined the term "eugenics." Galton would go on to be Honorary President of the Society: 43 from 1907 to 1911. Gotto and Crackanthorpe presented their vision before a committee of the Moral Education League, requesting that the League change its name to the Eugenic and Moral Education League, but the committee decided that a new organization should be formed exclusively devoted to eugenics.: 29 The EES was located in Eccleston Square, London.The goals of Eugenics Education Society, as stated in first issue of the Eugenics Review were: “Persistently to set forth the National Importance of Eugenics in order to modify public opinion, and create a sense of responsibility in the respect of bringing all matters pertaining to human parenthood under the domination of Eugenic ideals.
History:
To spread a knowledge of the Laws of heredity so far as they are surely known, and so far as that knowledge might affect the improvement of the race.
History:
To further Eugenic Teaching at home, in the schools, and elsewhere." Membership The EES did not exist in isolation, but was rather a part of a large network of Victorian reform groups that existed in Britain at the turn of the twentieth century.: 9 Members of the Society were also involved in the National Association for the Care and Protection of the Feeble-minded, the Society for Inebrity, the Charity Organisation Society, and the Moral Education League.: 9 The British eugenics movement was a predominantly middle class: 26 and professional class phenomenon. Most members of the EES were educated and prominent in their fields – at one point all members were listed in professional directories. Two-thirds of the members were scientists,: 8 and the 1914 Council of the EES was dominated by professors and physicians. Women constituted a significant portion of the Society’s members, exceeding 50% in 1913 and 40% in 1937.: 56 While the majority of members came from the professional class, there were also a few members from the clergy and aristocracy, such as Reverend William Inge, the Dean of St Paul’s Cathedral,: 47 and the Earl and Countess of Limerick.: 44 The Society underwent considerable growth in its early years. By 1911, the London headquarters was supplemented by branches in Cambridge, "Oxford, Liverpool, Manchester, Birmingham, Southampton, Glasgow, and Belfast," as well as abroad in "Australia and New Zealand".: 97 The Society found support in leading academic institutions. Statistician R. A. Fisher was a founding member of the Cambridge University Branch, where Leonard Darwin, Reginald Punnett, and Reverend Inge lectured about the eugenic dangers a fertile working class posed to the educated middle class.: 101 Activities 1907 to 1939 The main activities the Eugenics Education Society engaged in were research, propaganda, and legislative lobbying. Many campaigns were joint efforts with other social reform groups. The EES met with 59 other organizations between 1907 and 1935.Shortly after the Society was founded, members protested the closing of London institutions housing alcoholic women. A resolution was drafted proposing that alcoholics be segregated to prevent their reproduction, as the EES held the eugenic belief that alcoholism was heritable.: 32 This resolution proved unsuccessful in Parliament in 1913.In 1910, the Society's Committee on Poor Law Reform refuted both the Majority and Minority Reports of the Royal Commission on the Poor Law, declaring their belief that poverty was rooted in the genetic deficiencies of the working class. This view was published in a special Poor Law issue of the Eugenics Review.: 72 The Committee suggested that paupers be detained in workhouses, under the authority of the Poor Law Guardians, to prevent their breeding.: 73 The same year, E. J. Lidbetter, EES member and former employee of the Poor Law Authority in London, attempted to prove the hereditary nature of poverty by compiling and studying the pedigrees of impoverished families.
History:
In 1912, President Leonard Darwin assembled a Research Committee to standardize the format and symbols used in pedigree studies. The members of the Committee were Edgar Schuster, Alexander M. Carr-Saunders, E. J. Lidbetter, Major Greenwood, Sybil Gotto, and A. F. Tredgold. The standardized pedigree they produced was published in the Eugenics Review and later adopted by Charles Davenport's Eugenics Record Office at Cold Spring Harbor in the United States.: 77 In 1912, a group of physicians from the EES met unsuccessfully with the President of the Local Government Board to advocate for the institutionalization of those infected with venereal disease.: 32 The Society’s interest in venereal disease continued during WWI, when the Royal Commission on Venereal Diseases was formed with the inclusion of members of the EES.: 33 In 1916, EES President Leonard Darwin, son of Charles Darwin, published a pamphlet entitled “Quality not Quantity,” encouraging members of the professional class to have more children.: 49 Darwin proposed a tax rebate for middle-class families in 1917, but the resolution was unsuccessful in Parliament.: 49 In 1919, Darwin stated his belief that fertility was inversely proportional to economic class before the Royal Commission on Income Tax.: 49 He feared the falling birth rate of the middle class would result in a “national danger.”: 49 The Eugenics Education Society was renamed the Eugenics Society in 1924 to emphasize its commitment to scientific research extending beyond the role of public education.: 144 In the 1920s and 1930s, members of the Eugenics Society advocated for graded Family Allowances in which wealthier families would be given more funds for having more children, thus incentivizing fertility in the middle and upper classes.: 49 Statistician and EES member R. A. Fisher argued in 1932 that existing Family Allowances that only funded the poor were dysgenic, as they did not reward the breeding of individuals the EES viewed as eugenically desirable.: 49 In 1930, the Eugenics Society formed a Committee for Legalising Sterilisation, producing propaganda pamphlets touting sterilisation as the solution for eliminating heritable feeblemindedness.: 204 During this time period, members of the Society such as Julian Huxley expressed support for eutelegenesis, a eugenic proposal to artificially inseminate women with the sperm of men deemed mentally and physically superior in an effort to better the race.: 77 1942 to 1989 The Eugenics Society underwent a hiatus during the Second World War and did not reconvene until 1942, under the leadership of General Secretary Carlos Blacker. In the postwar period, the Society shifted its focus from class differences to marriage, fertility, and the changing racial makeup of the UK. In 1944, R. C.
Wofinden published an article in the Eugenics Review describing the features of "mentally deficient" working-class families and questioning whether mental deficiency led to poverty or vice versa.: 46 Blacker argued that poor heredity was the cause of poverty, but other members of the Society, such as Hilda Lewis, disagreed with this view.: 47 Following WWII, British eugenicists concerned by rising divorce rates and falling birth rates attempted to promote marriages between "desirable" individuals while preventing marriages between those deemed eugenically unfit.: 44 The British Social Hygiene Council, a group with ties to the Eugenics Society, formed the Marriage Guidance Council, an organization that offered pre-marital counseling to young couples.: 45 In 1954, the Eugenics Society was referred to by the North Kensington Marriage Welfare Centre's pamphlet "Eugenic Guidance," as a source for consultation for couples worried about passing on their "weaknesses.": 45 As a result of the British Nationality Act of 1948, which enabled Commonwealth citizens to immigrate to the UK, postwar Britain saw an influx of non-white populations.: 98 The Eugenics Society became concerned with changes to the racial makeup of the country, exemplified by its publication of G. C. L. Bertram's 1958 broadsheet on immigration from the West Indies.: 98 Bertram claimed that races were biologically distinct due to their evolved adaptations to different environments, and that miscegenation should only be permitted between similar races.: 99 In 1952, Blacker stepped down as Secretary of the Eugenics Society to become the administrative chairman of the International Planned Parenthood Federation, or IPPF.: 122 The IPPF was sponsored in part by the Eugenics Society and headquartered within the Society's offices in London.: 123 Blacker's influence continued in 1962, when he published an article in the Eugenics Review defending voluntary sterilization as humanitarian effort beneficial to mothers and their existing children.: 124 In 1957, Blacker addressed the dwindling membership of the Society (from 768 in 1932 to 456 in 1956) in "The Eugenics Society's Future". He recommended that the Society "pursue eugenic ends by less obvious means, that is by a policy of crypto-eugenics, which was apparently proving successful with the US Eugenics Society." In February 1960, the Council of the Society resolved that their "activities in crypto-eugenics should be pursued vigorously..." and to change its name to "The Galton Society".The last volume of the Eugenics Review was published in 1968. It was succeeded by the Journal of Biosocial Science.: 255 Following the 1960s, the Eugenics Society experienced a loss of support and prestige and eventually shifted its focus from eugenics in Britain to biosocial issues such as fertility and population control in Third World countries. The Eugenics Society changed its name to the Galton Institute in 1989, a reflection of the negative public sentiment towards eugenics following WWII.
History:
"Crypto" (secret) Eugenics In 1928, the Society published the first draft of its Sterilization Bill in the Eugenics Review. The following year a Parliamentary Committee for Legalising Eugenic Sterelization was established and, in July 1931, Archibald Church M.P. (a member of both the Committee and of the Eugenics Society) rose in the House of Commons to introduce a bill “to enable mental defectives to undergo sterilizing operations or sterilizing treatment upon their own application, or that of their spouses or parents or guardians.” In his speech, Church said that the bill was "... merely a first step in order that the community as a whole should be able to make an experiment on a small scale so that later on we may have the benefit of the results and experience gained in order to come to conclusions before bringing in a Bill for the compulsory sterilisation of the unfit." Nonetheless, it was defeated.At this point, “three weighty organisations” joined the campaign and “a concerted petition for an official inquiry was submitted to the then Minister of Health.” This led to the formation of a Departmental Committee on Sterilization (the Brock Committee) in June 1932. The apparent groundswell of support for sterilisation was deceptive. According to John Mcnichol: “Blacker admitted in private that the lobbying technique of the [Eugenics] society was to make it appear as if the demand for an official enquiry emanated from these large bodies, whereas in fact it was the [Eugenics] society that was masterminding the campaign”. “Between June 1932 and January 1934 the Brock committee held thirty-six meetings and interviewed sixty witnesses. Dominated by its chairman, who pulled every string to assist the society in its campaign (thus flagrantly violating civil service neutrality), the committee’s report recommended the legalization of voluntary sterilization for three identifiable categories of patient — mental defectives of the mentally disordered, persons suffering from a transmissable physical disability (for example, hereditary blindness), or persons likely to transmit mental disorder or defect.” Brock also met secretly with Blacker to advise him on how to improve the wording of the society’s draft sterlization bill.This practice of secrecy became official policy in 1960. In a 1957 memorandum to the Council of the Eugenics Society, Blacker made recommendations on how to promote the eugenic cause in the aftermath of the Second World War and how to fix the Society's dwindling membership (from 768 in 1932 to 456 in 1956). He suggested that they "pursue eugenic ends by less obvious means, that is by a policy of crypto-eugenics, which was apparently proving successful with the US Eugenics Society." In February 1960, the Council resolved that their "activities in crypto-eugenics should be pursued vigorously, and specifically that the Society should increase its monetary support of the FPA [Family Planning Association] and the IPPF [International Planned Parenthood Federation]" and to change its name to "The Galton Society".The Society played an influential though hidden role in the campaign for the 1967 Abortion Act.
Position on eugenics:
The website of the Adelphi Genetics Forum currently states that "The Adelphi Genetics Forum rejects outright the theoretical basis and practice of coercive eugenics, which it regards as having no place in modern life." Furthermore, "The Adelphi Genetics Forum wishes to state clearly and unequivocally that it deplores these outmoded and discredited ideas, which should play no part in society today," but also that "Galton's contribution to modern science deserves to be recognised and acknowledged." Former President Veronica van Heyningen has acknowledged that "Galton was a terrible racist," but she believes it is "reasonable to honour him by giving his name to institutions" due to his significant contribution to the field of genetics.
**Beta (grape)**
Beta (grape):
Beta is a winter-hardy variety of North American grape derived from a cross of the Vitis labrusca-based cultivar Concord and a selection of Vitis riparia, the wild riverbank grape, called Carver. It is an extremely cold-hardy grape that is self-fertile. This variety is grown successfully in Finland and was widely planted in Minnesota in the early 20th century. It ripens in late September in New York State. It bears dark, blue-black fruit that is used for jellies, fruit juices, etc. but rarely for wine.
History:
Beta was released by Louis Suelter, and named for his wife. Because of this, the proper pronunciation is actually "Bett-uh", but the name is more commonly assumed to follow the pronunciation of the Greek letter. Suelter released a number of other cultivars from the same cross, including the equally hardy Suelter grape.
**Digital automatic coupling**
Digital automatic coupling:
Digital automatic coupling (DAC) has been developed in the 2020s to replace conventional buffers and chain couplings, initially in Europe.
It resembles the Scharfenberg coupler with extra contacts to join electrical circuits (power, detection and control) and air hoses.
Advantages:
Longer trains, up to 750 m.
Brakes remotely controlled, as with electronically controlled pneumatic brakes (ECPB).
Monitoring of train and wagon performance.
Improved safety: no need for a shunter to climb between the buffers to couple wagons by hand.
Other systems:
Couplers based on the AAR and SA3 designs already provide automatic mechanical coupling, so some of the advantages of the DAC are lessened. These couplers also have a maximum drawgear load well in excess of that possible with the DAC, permitting trains of, say, 1800 m rather than 750 m.
**Pecan oil**
Pecan oil:
Pecan oil is an edible pressed oil extracted from the pecan nut. Pecan oil is neutral in flavor and takes on the flavor of whatever seasoning is being used with it. Pecan oil contains 9.5% saturated fat, which is less than in olive oil (13.5%), peanut oil (16.9%) or corn oil (12.7%). It is also used as a massage oil and in aromatherapy applications.
Pecan oil:
Pecan oil is considered a healthy oil as it is rich in monounsaturated fats, specifically oleic acid (52.0%), and low in saturated fats. It also contains linoleic acid (36.6%), and small amounts of palmitic (7.1%), stearic (2.2%) and linolenic acids (1.5%). The overall balance of fatty acids in the oil may reduce LDL cholesterol (also known as "bad" cholesterol) and the risk of heart disease. The main application of this oil is its use in cooking. It has a high smoke point of 470 degrees F (about 243 °C), making it ideal for cooking at high temperatures and for deep frying. The mild nutty flavor enhances the flavor of ingredients, making it a popular component of salad dressings and dips. Pecan oil is much lighter than olive oil and is well suited for everyday cooking. It also generally does not contain preservatives or additives. Pecan oil is a good substitute for butter and other cooking oils, making it suitable for baking. It is recommended that the oil be refrigerated after opening to increase shelf life and reduce rancidity.
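A quick arithmetic check on the composition quoted above (figures from the text; the small shortfall from 100% reflects minor components and rounding):

```python
# Fatty-acid composition of pecan oil as quoted above (percent by weight).
composition = {
    "oleic (monounsaturated)": 52.0,
    "linoleic": 36.6,
    "palmitic": 7.1,
    "stearic": 2.2,
    "linolenic": 1.5,
}

total = sum(composition.values())                             # 99.4%
saturated = composition["palmitic"] + composition["stearic"]  # 9.3%, close to the ~9.5% quoted earlier
unsaturated = total - saturated                               # 90.1%

print(f"accounted for: {total:.1f}%, saturated: {saturated:.1f}%, unsaturated: {unsaturated:.1f}%")
```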
Pecan oil:
Pecan oil can sometimes be hard to find in local grocery stores because it is considered a specialty oil; however, it can be purchased online through a number of manufacturers' websites.
Processing:
Prior to extraction, the nuts are lightly roasted and ground. Mechanical extraction methods are then used to remove the oil. Most manufacturers avoid the use of chemical extraction methods in order to preserve the natural nutty flavor and nutrients of the oil.
Appearance:
Pecan oil is a light weight oil and is usually pale yellow in color.
Uses:
Cooking, salad dressings, dips, massage oil, aromatherapy, cosmetics, sunless tanning products, and bio-fuel.
**Paper towel**
Paper towel:
A paper towel is an absorbent, disposable towel made from paper. In Britain, paper towels for kitchen use are also known as kitchen rolls, kitchen paper, or kitchen towels. For home use, paper towels are usually sold in a roll of perforated sheets, but some are sold in stacks of pre-cut and pre-folded layers for use in paper-towel dispensers. Unlike cloth towels, paper towels are disposable and intended to be used only once. Paper towels absorb water because they are loosely woven, which enables water to travel between the fibers, even against gravity (capillary effect). They have similar purposes to conventional towels, such as drying hands, wiping windows and other surfaces, dusting, and cleaning up spills. Paper towel dispensers are commonly used in toilet facilities shared by many people, as they are often considered more hygienic than hot-air hand dryers or shared cloth towels.
History:
In 1907, the Scott Paper Company of Philadelphia, Pennsylvania, introduced paper tissues to help prevent the spread of colds from cloth towels in restrooms. Popular belief is that this was partly accidental and was the solution to a railroad car full of long paper rolls meant for toilet paper that were unsuitable for cutting into toilet paper. In 1919, William E. Corbin, Henry Chase, and Harold Titus began experimenting with paper towels in the Research and Development building of the Brown Company in Berlin, New Hampshire. By 1922, Corbin had perfected the product and began mass-producing it at the Cascade Mill on the Berlin/Gorham line. This product was called Nibroc Paper Towels (Corbin spelled backwards). In 1931, the Scott Paper Company of Philadelphia, Pennsylvania, introduced their paper towel rolls for kitchens. In 1995, Kimberly-Clark acquired Scott Paper Company.
Production:
Paper towels are made from either virgin or recycled paper pulp, which is extracted from wood or fiber crops. They are sometimes bleached during the production process to lighten coloration, and may also be decorated with colored images on each square (such as flowers or teddy bears). Resin size is used to improve the wet strength. Paper towels are packed individually and sold as stacks, or are held on a continuous roll, and come in two distinct classes: domestic and institutional. Many companies produce paper towels. Some common brand names are Bounty, Seventh Generation, Scott, and Viva, among many others.
Market:
Tissue products in North America, including paper towels, are divided into consumer and commercial markets, with household consumer usage accounting for approximately two thirds of total North American consumption. Commercial usage, that is, any use outside of the household, accounts for the remaining third of North American consumption. The growth in commercial use of paper towels can be attributed to the migration from folded towels (in public bathrooms, for example) to roll towel dispensers, which reduces the amount of paper towels used by each patron. Within the forest products industry, paper towels are a major part of the "tissue market", second only to toilet paper. Globally, Americans are the highest per capita users of paper towels in the home, at approximately 24 kilograms (53 lb) yearly consumption per capita (combined consumption approximately 7.8 million tonnes (7,700,000 long tons; 8,600,000 short tons) per year). This is 50% higher than in Europe and nearly 500% higher than in Latin America. By contrast, people in the Middle East tend to prefer reusable cloth towels, and people in Europe tend to prefer reusable cleaning sponges. Paper towels are popular primarily among people who have disposable income, so their use is higher in wealthy countries and low in developing countries. Growing hygiene consciousness during the COVID-19 pandemic led to a boost in paper towel market growth.
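The per-capita figures above can be cross-checked with a little arithmetic. The sketch below derives the population implied by the totals in the text, and treats "50% higher" and "nearly 500% higher" as rough multipliers; that reading of the percentages is an interpretive assumption.

```python
us_per_capita_kg = 24.0    # yearly per-capita consumption in the US, from the text
us_total_tonnes = 7.8e6    # combined yearly US consumption, from the text

# Population implied by the two figures above: 7.8e9 kg / 24 kg ≈ 325 million people.
implied_population = us_total_tonnes * 1000 / us_per_capita_kg
print(f"implied population: {implied_population / 1e6:.0f} million")

# Treating "50% higher than Europe" and "nearly 500% higher than Latin America"
# as rough multipliers of 1.5x and roughly 5-6x (an interpretive assumption):
print(f"Europe        ≈ {us_per_capita_kg / 1.5:.0f} kg per capita")
print(f"Latin America ≈ {us_per_capita_kg / 6:.0f}-{us_per_capita_kg / 5:.0f} kg per capita")
```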
Environmental issues:
Paper towels are a global product with rising production and consumption. Since paper towels are second in tissue consumption only to toilet paper (36% vs. 45% in the U.S.) and are mostly non-recyclable, their proliferation has adverse effects on the environment worldwide. However, paper towels made from recycled paper do exist, and are sold at many outlets. Some are manufactured from bamboo, which grows faster than trees.
Environmental issues:
Electric hand dryers are an alternative to using paper towels for hand drying. However, paper towels are quicker than hand dryers: after ten seconds, paper towels achieve 90% dryness, while hot air dryers require 40 seconds to achieve a similar dryness. Electric hand dryers may also spread bacteria to hands and clothing.
**Scribes (software)**
Scribes (software):
Scribes is a lightweight free text editor for GNOME licensed under the terms of the GPL-2.0-or-later license.
Features:
Its features include crash recovery via automatic saving, syntax highlighting, snippets, automatic word completion, pair character completion, smart indentation, bookmarks, and various text editing functions. However, it does not provide the option to turn off some of these features, such as automatic saving (which can overwrite the old version of an opened file without the user's consent).
Official description:
"Scribes focuses on streamlining your workflow. It does so by ensuring that common and repetitive operations are intelligently automated and also by eliminating factors that prevent you from focusing on your tasks.
The result is a text editor that provides a fluid user experience, that is easy and fun to use and that ensures the safety of your documents at all times."
**Buffer gas**
Buffer gas:
A buffer gas is an inert or nonflammable gas. In the Earth's atmosphere, nitrogen acts as a buffer gas. A buffer gas adds pressure to a system and controls the speed of combustion with any oxygen present. Any inert gas such as helium, neon, or argon will serve as a buffer gas.
Uses:
Buffer gases are commonly used in many applications, from high-pressure discharge lamps to reducing the line width of microwave transitions in alkali atoms. A buffer gas is usually a chemically inert gas; helium, argon, and nitrogen are the primary gases used. Krypton, neon, and xenon are also used, primarily for lighting. In most scenarios, a buffer gas is used in conjunction with other molecules mainly to provide collisions with those co-existing molecules.
Uses:
In fluorescent lamps, mercury is used as the primary ion from which light is emitted. Krypton is the buffer gas used in conjunction with the mercury; it moderates the momentum of the mercury ions in collisions, reducing the damage done to the electrodes in the fluorescent lamp. Generally speaking, the longest-lasting lamps are those with the heaviest noble gases as buffer gases.
Uses:
Buffer gas loading techniques have been developed for use in cooling paramagnetic atoms and molecules at ultra-cold temperatures. The buffer gas most commonly used in this sort of application is helium. Buffer gas cooling can be used on just about any molecule, as long as the molecule is capable of surviving multiple collisions with low-energy helium atoms, which most molecules are. Buffer gas cooling works by allowing the molecules of interest to thermalize through elastic collisions with a cold buffer gas inside a chamber. If there are enough collisions between the buffer gas and the molecules of interest before the molecules hit the walls of the chamber and are lost, the buffer gas will sufficiently cool them. Of the two isotopes of helium (3He and 4He), the rarer 3He is sometimes used over 4He as it provides significantly higher vapor pressures and buffer gas density at sub-kelvin temperatures.
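A rough sense of how many collisions this takes can be obtained from the hard-sphere picture commonly used in the buffer-gas-cooling literature, in which the temperature difference from the buffer shrinks by a factor of roughly (1 − 1/κ) per collision, with κ ≈ (M + m)²/(2Mm) for molecule mass M and buffer mass m. The sketch below applies this relation; the molecular mass, starting temperature and tolerance are illustrative assumptions, not values tied to any specific experiment.

```python
def collisions_to_thermalize(M_molecule_u: float, m_buffer_u: float,
                             T0_K: float, T_buffer_K: float,
                             tolerance_K: float = 1.0) -> int:
    """Count hard-sphere collisions until the molecule is within tolerance_K of the buffer.

    Per collision the temperature difference shrinks by (1 - 1/kappa),
    with kappa = (M + m)^2 / (2*M*m).  Masses are in atomic mass units.
    """
    kappa = (M_molecule_u + m_buffer_u) ** 2 / (2 * M_molecule_u * m_buffer_u)
    n, T = 0, T0_K
    while T - T_buffer_K > tolerance_K:
        T = T_buffer_K + (T - T_buffer_K) * (1 - 1 / kappa)
        n += 1
    return n

# Illustrative case: a 100 u molecule entering 4 K helium-4 (4 u) at 300 K
# reaches within 1 K of the buffer temperature after roughly 75 collisions.
print(collisions_to_thermalize(100.0, 4.0, 300.0, 4.0))
```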
Uses:
Buffer gases are also commonly used in compressors in power plants for supplying gas to gas turbines. The buffer gas fills the spaces between seals in the compressor. This space is usually about 2 micrometres wide. The gas must be completely dry and free of any contaminants. Contaminants can potentially lodge in the space between the seals and cause metal-to-metal contact in the compressor, leading to compressor failure. In this case the buffer gas acts in a way much like oil does in an automotive engine's bearings.
**Invalid carriage**
Invalid carriage:
Invalid carriages were usually single seater road vehicles, buggies, or self-propelled vehicles for disabled people. They pre-dated modern electric mobility scooters and, from the 1920s, were generally powered by small gasoline/petrol engines, although some were battery powered. They were usually designed without foot-operated controls.
The term "invalid carriage" persists in the United Kingdom in the regulation of mobility devices for disabled people, but excludes most of the more powerful, motorised types.
History:
Origins Stephan Farffler was a Nuremberg watchmaker of the seventeenth century whose invention of a manumotive carriage in 1655 is widely considered to have been the first self-propelled wheelchair. He is believed to have been either a paraplegic or an amputee. As such, the chair was consistent with the later designs for self-propelled invalid carriages. The three-wheeled device is also believed to have been a precursor to the modern-day tricycle and bicycle. In England, the forerunner of the invalid carriage was the bath chair. It was invented by James Heath, of Bath (hence the name), in the early 18th century. Animal-drawn versions of the bath chair became known as invalid carriages. An 1880 Monk and Co. invalid carriage is on display at the M Shed in Bristol. The firm of John Carter (an invalid and surgical furniture manufacturer in London, dating from 1870 to the late 1950s) advertised bath chairs, spinal carriages and self-propelling chairs in its 1890s' list of "invalid comforts". Later it would market its products to wounded soldiers.
Between the wars:
Stanley Engineering Co. Ltd. of Egham, Surrey, began making self-propelled invalid carriages under the 'Argson' name in the 1920s. The Argson Runnymede was designed in South Africa and manufactured in England from 1936 to 1954. They were either battery-powered or had a Villiers petrol engine. A petrol-powered Runnymede drove across the Alps in 1947. Stanley Engineering was bought by C. B. Harper Ltd. in 1954. R. A. Harding Company of Bath was founded in 1921. They initially produced hand-propelled tricycles. In 1926 Harding's introduced a variety of powered invalid carriages. The De Luxe models A and B were powered by a 122cc Villiers engine, and the Pultney was powered by either a 200cc or 300cc JAP. There were also 24-volt or 36-volt electric machines. In 1945, the company was renamed R. A. Harding (Bath) Ltd., and the Pultney was discontinued. In December 1948 the De Luxe models were upgraded with larger rear wheels, a new petrol tank, and a fan-cooled Villiers 147cc unit. Hardings introduced a full-bodied model in 1956, called the Consort. Only 12 of these were made. The company closed down in 1988, having made hand-powered models until 1973 and motor-driven ones until 1966. From the 1930s to the late 1940s, Nelco Industries made a three-wheeled battery-powered vehicle. Steering was enabled by means of a tiller connected to the front wheel; the tiller also provided speed control. Forward or reverse was selected by a separate control. The 24-volt electric motor could act as a generator to recharge the battery when going downhill.
Post World War 2:
In 1946, Larmar Engineering made small single-seater cars that were specially designed for physically disabled people. The vehicles were only 80 cm wide and each had a body made of plywood and aluminum, a side door, seat, windshield and a soft top. A single-cylinder, two-stroke, 8 horsepower engine from BSA, with 249 cc displacement, was mounted in the rear and it drove one of the rear wheels via a chain. From 1950, a two-cylinder, four-stroke, 10 hp engine, with 350 cc displacement, was available.
United Kingdom Ministry of Health contracts (1948–1978):
In 1948, Bert Greeves adapted a motorcycle, with the help of his paralysed cousin Derry Preston-Cobb, as transport for Preston-Cobb. Noticing the number of former servicemen injured in the Second World War, they spotted a commercial opportunity and approached the UK government for support, leading to the creation of Invacar Ltd. Invacar was not the only company to be contracted by the Ministry of Health to produce three-wheeled vehicles for disabled drivers. Others included Harding, Dingwall & Son, AC Cars, Barrett, Tippen & Son, Thundersley, Vernons Industries, and Coventry Climax. These early vehicles were each powered by an air-cooled Villiers 147 cc engine, but when production of that engine ceased in the early 1970s it was replaced by a much more powerful 4-stroke 500 cc or 600 cc Steyr-Puch engine, giving a reported top speed of 82 mph (132 km/h). They were low-cost, low-maintenance vehicles, designed specifically for people with physical disabilities. Production of them stopped in 1976, and the last ones were withdrawn from the road in 2003. Some still exist, however, and approximately 25 Invacars survive that could become roadworthy once again. There are at least five Invacars in private ownership which are "road-legal", as well as several other unroadworthy examples which are awaiting their demise, including one in the Coventry Transport Museum collection and two in the Lakeland museum in Cumbria; one road-going example, "TWC", features on the HubNut YouTube channel. Another 1976 example, one of the last made, can be found on display at the National Motor Museum, Beaulieu in Hampshire, England. In Britain, in the 1960s and 1970s, invalid carriages were provided as subsidised, low-cost vehicles to improve the mobility of people with disabilities. Vehicles leased by the National Health Service had three wheels, were very lightweight, and therefore their suitability on roads among other traffic was often considered dubious on safety grounds. Invalid carriages are banned from motorways. All remaining NHS-leased Invacar-type invalid carriages were ultimately withdrawn in a safety recall in 2003.
United Kingdom Ministry of Health contracts (1948–1978):
Motorised invalid carriages, as described above, do not fall under the legal definition of "invalid carriage" under current (since 1988) regulations, unless they are limited to speeds of 8 mph or less (see below).
See www.invalidcarriageregister.org
Other countries:
Fritz Fend, a former technical officer with the Luftwaffe, also designed a three-wheel invalid carriage in 1948. The first version was unpowered, with the single wheel situated at the front. His powered version had a 38cc Victoria two-stroke engine, with a chain-driven single rear wheel; the two wheels at the front were used for steering. This latter version was called the Fend Flitzer, and some 250 were made between 1948 and 1951, when production ceased. Invalid carriages were also made in other countries, including the Simson DUO in East Germany, the SMZ in the Soviet Union, and the Velorex in Czechoslovakia. The Duo was made initially by VEB Fahrzeugbau und Ausrüstungen Brandis (VEB FAB) from 1973 until 1978, whereupon manufacture was transferred to VEB Robur, more famous for making trucks in Zittau. Because many of the components are common with the Simson, the Duo is often classified as a Simson. Production ceased in 1989.
Since 1988:
In the United Kingdom, "invalid carriage" is a legal term denoting a device built for the use of one person with a physical disability, which does not require a driving licence and may be driven off-road by a disabled person, including on pavements. The law is slightly different in Northern Ireland. Regulations divide them into three classes: Class 1 comprises non-mechanically propelled vehicles, including wheelchairs and handcycles. Users of such vehicles are treated for most purposes in law as pedestrians. They are not subject to any speed limits.
Since 1988:
Class 2 comprises mechanically propelled vehicles such as motorized wheelchairs and mobility scooters, limited to 4 mph.
Since 1988:
Class 3 consists mostly of mobility scooters. They are limited to 8 mph, with a further limiter set to 4 mph which must be enabled when used on a pavement. They must be fitted with a speedometer and may not be used by children under 14. All three classes may be used on footways and cycle paths. Classes 2 and 3 may be used on dual carriageways with a flashing amber beacon, and otherwise must comply with most other regulations pertaining to use of vehicles on the carriageway (e.g. lights and reflectors), though they are specifically excluded from the definition of "motor vehicle". All are banned from motorways. None of them cover the types described in the section above, which are ordinary motor vehicles subject to all regulations applicable to such.
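For readers who find it easier to scan as data, the sketch below restates the three-class scheme described above. The class boundaries come from the text, while the data structure, field names and helper function are purely an illustrative encoding, not a legal reference.

```python
# The three UK "invalid carriage" classes as summarised above (post-1988 regulations).
CLASSES = {
    1: {"propulsion": "non-mechanical (wheelchairs, handcycles)", "max_mph": None},
    2: {"propulsion": "mechanically propelled (powered wheelchairs, scooters)", "max_mph": 4},
    3: {"propulsion": "mechanically propelled (mostly mobility scooters)", "max_mph": 8,
        "pavement_limit_mph": 4, "min_user_age": 14},
}

def pavement_limit_mph(carriage_class: int):
    """Speed limit on a footway for the given class (None means pedestrian rules apply)."""
    info = CLASSES[carriage_class]
    return info.get("pavement_limit_mph", info["max_mph"])

print(pavement_limit_mph(3))  # 4 -- the additional 4 mph limiter must be enabled on pavements
```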
Literature:
An Introduction to the British Invalid Carriage 1850 - 1978, Stuart Cyphus, Museum of Disability History, ISBN 9780984598380
**Two-dimensional polymer**
Two-dimensional polymer:
A two-dimensional polymer (2DP) is a sheet-like monomolecular macromolecule consisting of laterally connected repeat units with end groups along all edges. This recent definition of 2DP is based on Hermann Staudinger's polymer concept from the 1920s. According to this, covalent long chain molecules ("Makromoleküle") do exist and are composed of a sequence of linearly connected repeat units and end groups at both termini.
Two-dimensional polymer:
Moving from one dimension to two offers access to surface morphologies such as increased surface area, porous membranes, and possibly in-plane pi orbital-conjugation for enhanced electronic properties. They are distinct from other families of polymers because 2D polymers can be isolated as multilayer crystals or as individual sheets. The term 2D polymer has also been used more broadly to include linear polymerizations performed at interfaces, layered non-covalent assemblies, or to irregularly cross-linked polymers confined to surfaces or layered films. 2D polymers can be organized based on these methods of linking (monomer interaction): covalently linked monomers, coordination polymers and supramolecular polymers.
Two-dimensional polymer:
Topologically, 2DPs may thus be understood as structures made up from regularly tessellated regular polygons (the repeat units). Figure 1 displays the key features of a linear and a 2DP according to this definition. For usage of the term "2D polymer" in a wider sense, see "History".
Covalently-linked polymers:
There are several examples of covalently linked 2DPs which include the individual layers or sheets of graphite (called graphenes), MoS2, (BN)x and layered covalent organic frameworks. As required by the above definition, these sheets have a periodic internal structure.
Covalently-linked polymers:
A well-known example of a 2D polymer is graphene, whose optical, electronic and mechanical properties have been studied in depth. Graphene has a honeycomb lattice of carbon atoms that exhibits semiconducting properties. A potential repeat unit of graphene is an sp2-hybridized carbon atom. Individual sheets can in principle be obtained by exfoliation procedures, though in reality this is a non-trivial enterprise.
Covalently-linked polymers:
Molybdenum disulfide can exist as two-dimensional, single-layer or layered polymers in which each Mo(IV) center occupies a trigonal prismatic coordination sphere.
Boron nitride is stable in its crystalline hexagonal form, where it has a two-dimensional layered structure similar to graphene. Covalent bonds are formed between boron and nitrogen atoms, yet the layers are held together by weak van der Waals interactions, in which the boron atoms are eclipsed over the nitrogen atoms.
Covalently-linked polymers:
Two dimensional covalent organic frameworks (COFs) are one type of microporous coordination polymer that can be fabricated in the 2D plane. The dimensionality and topology of the 2D COFs result from both the shape of the monomers and the relative and dimensional orientations of their reactive groups. These materials exhibit properties desirable in materials chemistry, including thermal stability, tunable porosity, high specific surface area, and the low density of organic material. By careful selection of organic building units, long-range π-orbital overlap parallel to the stacking direction of certain organic frameworks can be achieved.
Covalently-linked polymers:
Many covalent organic frameworks derive their topology from the directionality of the covalent linkages, thus small changes in organic linkers can dramatically affect their mechanical and electronic properties. Even small changes in their structure can induce dramatic changes in stacking behavior of molecular semiconductors.
Porphyrins are an additional class of conjugated, heterocyclic macrocycles. Control of monomer assembly through covalent interactions has also been demonstrated with porphyrins. Upon thermal activation of porphyrin building blocks, covalent bonds form to create a conductive polymer; this provides a versatile route for bottom-up construction of electronic circuits.
Covalently-linked polymers:
COF synthesis It is possible to synthesize COFs using both dynamic covalent and non-covalent chemistry. The kinetic approach involves a stepwise process of polymerizing pre-assembled 2D monomers, while thermodynamic control exploits reversible covalent chemistry to allow simultaneous monomer assembly and polymerization. Under thermodynamic control, bond formation and crystallization occur simultaneously. Covalent organic frameworks formed by dynamic covalent bond formation involve chemical reactions carried out reversibly under conditions of equilibrium control. Because the formation of COFs in dynamic covalent formation occurs under thermodynamic control, product distributions depend only on the relative stabilities of the final products. Covalent assembly to form 2D COFs has previously been done using boronate esters from catechol acetonides in the presence of a Lewis acid (BF3·OEt2). 2D polymerization under kinetic control relies on non-covalent interactions and monomer assembly prior to bond formation. The monomers can be held together in a pre-organized position by non-covalent interactions, such as hydrogen bonding or van der Waals forces.
Coordination polymers:
Metal organic frameworks Self-assembly can also be observed in the presence of organic ligands and various metal centers through coordinative bonds or supramolecular interactions. Molecular self-assembly involves association through many weak, reversible interactions to obtain a final structure that represents a thermodynamic minimum. A class of coordination polymers, also known as metal-organic frameworks (MOFs), consists of metal-ligand compounds that extend "infinitely" into one, two or three dimensions.
Coordination polymers:
MOF synthesis The availability of modular metal centers and organic building blocks generates wide diversity in synthetic versatility. Their applications range from industrial use to chemiresistive sensors. The ordered structure of the framework is largely determined by the coordination geometry of the metal and the directionality of the functional groups on the organic linker. Consequently, MOFs contain highly defined pore dimensions compared with conventional amorphous nanoporous materials and polymers. Reticular synthesis of MOFs is a term that has recently been coined to describe the bottom-up method of assembling carefully designed rigid molecular building blocks into prearranged structures held together by strong chemical bonds. The synthesis of two-dimensional MOFs begins with the knowledge of a target "blueprint" or network, followed by identification of the required building blocks for its assembly. By interchanging metal centers and organic ligands, one can fine-tune the electronic and magnetic properties observed in MOFs. There have been recent efforts to synthesize conductive MOFs using triphenylene linkers. Additionally, MOFs have been utilized as reversible chemiresistive sensors.
Supramolecular polymers:
Supramolecular assembly requires non-covalent interactions directing the formation of 2D polymers by relying on electrostatic interactions such as hydrogen bonding and van der Waals forces. Designing artificial assemblies capable of high selectivity requires correct manipulation of the energetic and stereochemical features of non-covalent forces. Some benefits of non-covalent interactions are their reversible nature and their response to external factors such as temperature and concentration. The mechanism of non-covalent polymerization in supramolecular chemistry is highly dependent on the interactions during the self-assembly process. The degree of polymerization depends highly on temperature and concentration. The mechanisms may be divided into three categories: isodesmic, ring-chain, and cooperative.
Supramolecular polymers:
One example of isodesmic association in supramolecular aggregates is the cyanuric acid (CA) and melamine (M) system (CA·M), which assembles through hydrogen bonding. Hydrogen bonding has been used to guide the assembly of molecules into two-dimensional networks, which can then serve as new surface templates and offer an array of pores of sufficient capacity to accommodate large guest molecules. An example of utilizing surface structures through non-covalent assembly uses adsorbed monolayers to create binding sites for target molecules through hydrogen bonding interactions. Hydrogen bonding has, for instance, been used to guide the assembly of two different molecules into a 2D honeycomb porous network under ultra-high vacuum. 2D polymers based on DNA have also been reported.
Characterization:
2DPs, as two-dimensional sheet macromolecules, have a crystal lattice; that is, they consist of monomer units that repeat in two dimensions. Therefore, a clear diffraction pattern from their crystal lattice should be observed as proof of crystallinity. The internal periodicity is supported by electron microscopy imaging, electron diffraction and Raman-spectroscopic analysis.
2DPs should in principle also be obtainable by, e.g., an interfacial approach, although proving the internal structure in that case is more challenging and has not yet been achieved. In 2014 a 2DP was reported synthesised from a trifunctional photoreactive anthracene-derived monomer, preorganised in a lamellar crystal and photopolymerised in a [4+4] cycloaddition. Another reported 2DP also involved an anthracene-derived monomer.
Applications:
2DPs are expected to be superb membrane materials because of their defined pore sizes. Furthermore, they can serve as ultrasensitive pressure sensors, as precisely defined catalyst supports, for surface coatings and patterning, as ultrathin support for cryo-TEM, and many other applications.
Applications:
Since 2D polymers provide large surface areas and uniform sheets, they have also found useful applications in areas such as selective gas adsorption and separation. Metal organic frameworks have become popular recently owing to the variability of their structures and topologies, which provide tunable pore structures and electronic properties. There are also ongoing efforts to create nanocrystals of MOFs and incorporate them into nanodevices. Additionally, metal-organic surfaces have been synthesized with cobalt dithiolene catalysts for efficient hydrogen production through the reduction of water, an important strategy for renewable energy. Work on the fabrication of 2D organic frameworks has also produced two-dimensional porous covalent organic frameworks to be used as storage media for hydrogen, methane, and carbon dioxide in clean energy applications.
History:
First attempts to synthesize 2DPs date back to the 1930s, when Gee reported interfacial polymerizations at the air/water interface in which a monolayer of an unsaturated fatty acid derivative was laterally polymerized to give a 2D cross-linked material. Since then a number of important attempts have been reported involving cross-linking polymerization of monomers confined to layered templates or various interfaces. These approaches provide easy access to sheet-like polymers; however, the sheets' internal network structures are intrinsically irregular and the term "repeat unit" is not applicable. In organic chemistry, the creation of 2D periodic network structures has been a dream for decades. Another noteworthy approach is "on-surface polymerization", whereby 2DPs with lateral dimensions not exceeding some tens of nanometers have been reported. Laminar crystals are readily available, each layer of which can ideally be regarded as a latent 2DP, and there have been a number of attempts to isolate the individual layers by exfoliation techniques.
**Budesonide/formoterol**
Budesonide/formoterol:
Budesonide/formoterol, sold under the brand name Symbicort among others, is a fixed-dose combination medication used in the management of asthma or chronic obstructive pulmonary disease (COPD). It contains budesonide, a steroid, and formoterol, a long-acting β2-agonist (LABA). The product monograph does not support its use for sudden worsening or for treatment of active bronchospasm; however, a 2020 review of the literature does support such use. It is used by breathing in the medication. Common (≥1/100 to <1/10) side effects include candidiasis, headache, tremor, palpitations, throat irritation, coughing, and dysphonia. Pneumonia is a common side effect in people with COPD, and other, less common side effects have been documented. There were concerns that the LABA component increases the risk of death in children with asthma; however, these concerns were removed in 2017. This combination is therefore only recommended in those who are not controlled on an inhaled steroid alone. There is tentative evidence that typical doses of inhaled steroids and LABAs are safe in pregnancy. Both formoterol and budesonide are excreted in breast milk. Budesonide/formoterol was approved for medical use in the United States in 2006. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2020, it was the 61st most commonly prescribed medication in the United States, with more than 11 million prescriptions.
Medical uses:
Maintenance Budesonide/formoterol has shown efficacy in preventing asthma attacks. It is unclear whether its efficacy differs from that of fluticasone and salmeterol in chronic asthma.
Medical uses:
Exacerbation The combination is approved in the United States only as a maintenance medication in asthma and chronic obstructive pulmonary disease (COPD). However, a 2020 review of the literature does support use as needed during acute worsening in those with mild disease, and as maintenance followed by extra doses during worsening. Use for both maintenance and as-needed treatment is also known as single maintenance and reliever therapy (SMART) and is a well-established treatment. It has been shown to reduce asthma exacerbations that require oral corticosteroids, and to reduce hospital visits better than either maintenance on inhaled corticosteroids alone at a higher dose, or an inhaled corticosteroid at the same or higher dose together with a long-acting bronchodilator (LABA), with a short-acting bronchodilator (SABA) as a reliever.
Side effects:
Common (up to 1 in 10 people): mild throat irritation; coughing; hoarseness; oral candidiasis (thrush; significantly less likely if the patient rinses their mouth out with water after inhalation); headache. The following are often mild and usually disappear as the medication continues to be used: heart palpitations; trembling. Uncommon (up to 1 in 100 people): feeling restless, nervous, or agitated; disturbed sleep; dizziness; nausea; tachycardia (fast heart rate); bruising of the skin; muscle cramps. Rare (up to 1 in 1,000 people): rash; itchiness; bronchospasm (tightening of the muscles in the airways causing wheezing immediately after use of the medication, which is possibly a sign of an allergic reaction and should be met with immediate medical attention); hypokalemia (low levels of potassium in the blood); heart arrhythmia. Very rare (up to 1 in 10,000 people): depression; changes in behaviour, especially in children; chest pain or tightness in the chest; increase in blood glucose levels; taste changes, such as an unpleasant taste in the mouth; changes in blood pressure. Other effects occur with high doses taken for a long period of time.
Side effects:
Reduced bone mineral density, causing osteoporosis; cataracts; glaucoma; slowed rate of growth in children and adolescents; dysfunction of the adrenal gland, which affects the production of various hormones; allergic reaction; angioedema (swelling of the face, mouth, tongue, and/or throat, difficulty swallowing, hives, difficulty breathing, feeling of faintness); bronchospasm (sudden acute wheezing or shortness of breath immediately after use of the medication; the patient should use their reliever medication immediately).
Society and culture:
Legal status It was approved for use in the United States in July 2006. Budesonide/formoterol was approved for use in the European Union in April 2014. There are several patents related to the drug, some of which have expired. It was initially marketed by AstraZeneca.
**Thermophotonics**
Thermophotonics:
Thermophotonics (often abbreviated as TPX) is a concept for generating usable power from heat which shares some features of thermophotovoltaic (TPV) power generation. Thermophotonics was first publicly proposed by solar photovoltaic researcher Martin Green in 2000. However, no TPX device is known to have been demonstrated to date, apparently because of the stringent requirement on the emitter efficiency. A TPX system consists of a light-emitting diode (LED) (though other types of emitters are conceivable), a photovoltaic (PV) cell, an optical coupling between the two, and an electronic control circuit. The LED is heated to a temperature higher than the PV temperature by an external heat source. If no power is applied to the LED, the system functions much like a very inefficient TPV system, but if a forward bias is applied at some fraction of the bandgap potential, an increased number of electron-hole pairs (EHPs) will be thermally excited to the bandgap energy. These EHPs can then recombine radiatively so that the LED emits light at a rate higher than the thermal radiation rate ("superthermal" emission). This light is then delivered to the cooler PV cell over the optical coupling and converted to electricity.
Thermophotonics:
The control circuit presents a load to the PV cell (presumably at the maximum power point) and converts this voltage to a voltage level that can be used to sustain the bias of the emitter. Provided that the conversion efficiencies of electricity to light and light to electricity are sufficiently high, the power harnessed from the PV cell can exceed the power going into the bias circuit, and this small fraction of excess power (originating from the heat difference) can be utilized. It is thus in some sense a photonic heat engine.
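To make the power-balance argument concrete, the sketch below (a minimal illustration in Python, with invented efficiency and power figures rather than data from any demonstrated device) computes the surplus the control circuit could in principle harvest. In this toy model a net surplus appears only when the product of the LED, optical-coupling, and PV efficiencies exceeds unity, which corresponds to the "superthermal" emission condition described above.

```python
# Minimal power-balance sketch for a thermophotonic (TPX) stage.
# All numbers are hypothetical placeholders, not measured device data.

def tpx_net_power(p_bias, eta_led, eta_optics, eta_pv):
    """Return (net electrical surplus, PV output) for a TPX stage.

    p_bias     -- electrical power driving the hot LED (W)
    eta_led    -- LED electricity-to-light conversion efficiency
                  (can exceed 1 when the LED draws extra energy from the heat source)
    eta_optics -- fraction of emitted light delivered to the PV cell
    eta_pv     -- PV light-to-electricity conversion efficiency
    """
    p_light = p_bias * eta_led               # radiant power leaving the hot LED
    p_pv = p_light * eta_optics * eta_pv     # electrical power recovered by the PV cell
    return p_pv - p_bias, p_pv

# Example with made-up figures: a surplus only appears when
# eta_led * eta_optics * eta_pv > 1.
net, p_pv = tpx_net_power(p_bias=1.0, eta_led=1.3, eta_optics=0.95, eta_pv=0.85)
print(f"PV output: {p_pv:.2f} W, net surplus: {net:.2f} W")
```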
Thermophotonics:
Possible applications of thermophotonic generators include solar thermal electricity generation and utilization of waste heat. TPX systems may have the potential to generate power with useful levels of output at temperatures where only thermoelectric systems are now practical, but with higher efficiency.
Thermophotonics:
A patent application for a thermophotonic generator using a vacuum gap with thickness on the order of a micrometer or less was published by the US Patent Office in 2009 and assigned to MTPV Corporation of Austin, Texas, USA. This proposed variant of the technology allows better thermal insulation because of the gap between the hot emitter and cold receiver, while maintaining relatively good optical coupling between them due to the gap's being small relative to the optical wavelength.
**Methylation**
Methylation:
In the chemical sciences, methylation denotes the addition of a methyl group on a substrate, or the substitution of an atom (or group) by a methyl group. Methylation is a form of alkylation, with a methyl group replacing a hydrogen atom. These terms are commonly used in chemistry, biochemistry, soil science, and the biological sciences.
In biological systems, methylation is catalyzed by enzymes; such methylation can be involved in modification of heavy metals, regulation of gene expression, regulation of protein function, and RNA processing. In vitro methylation of tissue samples is also one method for reducing certain histological staining artifacts. The reverse of methylation is demethylation.
In biology:
In biological systems, methylation is accomplished by enzymes. Methylation can modify heavy metals, regulate gene expression, RNA processing and protein function. It has been recognized as a key process underlying epigenetics.
Methanogenesis Methanogenesis, the process that generates methane from CO2, involves a series of methylation reactions. These reactions are effected by a set of enzymes harbored by a family of anaerobic microbes.
In reverse methanogenesis, methane serves as the methylating agent.
O-methyltransferases A wide variety of phenols undergo O-methylation to give anisole derivatives. This process, catalyzed by enzymes such as caffeoyl-CoA O-methyltransferase, is a key reaction in the biosynthesis of lignols, precursors to lignin, a major structural component of plants.
Plants produce flavonoids and isoflavones with methylations on hydroxyl groups, i.e. methoxy bonds. This 5-O-methylation affects the flavonoid's water solubility. Examples are 5-O-methylgenistein, 5-O-methylmyricetin or 5-O-methylquercetin, also known as azaleatin.
Proteins Together with ubiquitination and phosphorylation, methylation is a major biochemical process for modifying protein function. The most prevalent protein methylations affect arginine and lysine residues of specific histones. Otherwise, histidine, glutamate, asparagine, and cysteine are susceptible to methylation. Some of these products include S-methylcysteine, two isomers of N-methylhistidine, and two isomers of N-methylarginine.
Methionine synthase Methionine synthase regenerates methionine (Met) from homocysteine (Hcy). The overall reaction transforms 5-methyltetrahydrofolate (N5-MeTHF) into tetrahydrofolate (THF) while transferring a methyl group to Hcy to form Met. Methionine synthases can be cobalamin-dependent or cobalamin-independent: plants have both, whereas animals depend on the methylcobalamin-dependent form.
In biology:
In methylcobalamin-dependent forms of the enzyme, the reaction proceeds by two steps in a ping-pong reaction. The enzyme is initially primed into a reactive state by the transfer of a methyl group from N5-MeTHF to Co(I) in enzyme-bound cobalamin (Cob), forming methyl-cobalamin(Me-Cob) that now contains Me-Co(III) and activating the enzyme. Then, a Hcy that has coordinated to an enzyme-bound zinc to form a reactive thiolate reacts with the Me-Cob. The activated methyl group is transferred from Me-Cob to the Hcy thiolate, which regenerates Co(I) in Cob, and Met is released from the enzyme.
In biology:
Heavy metals: arsenic, mercury, cadmium Biomethylation is the pathway for converting some heavy elements into more mobile or more lethal derivatives that can enter the food chain. The biomethylation of arsenic compounds starts with the formation of methanearsonates. Thus, trivalent inorganic arsenic compounds are methylated to give methanearsonate. S-adenosylmethionine is the methyl donor. The methanearsonates are the precursors to dimethylarsonates, again by the cycle of reduction (to methylarsonous acid) followed by a second methylation. Related pathways apply to the biosynthesis of methylmercury.
In biology:
Epigenetic methylation DNA/RNA methylation DNA methylation in vertebrates typically occurs at CpG sites (cytosine-phosphate-guanine sites – that is, where a cytosine is directly followed by a guanine in the DNA sequence). This methylation results in the conversion of the cytosine to 5-methylcytosine. The formation of Me-CpG is catalyzed by the enzyme DNA methyltransferase. In mammals, DNA methylation is common in body cells, and methylation of CpG sites seems to be the default. Human DNA has about 80–90% of CpG sites methylated, but there are certain areas, known as CpG islands, that are CG-rich (high cytosine and guanine content, made up of about 65% CG residues), wherein none is methylated. These are associated with the promoters of 56% of mammalian genes, including all ubiquitously expressed genes. One to two percent of the human genome consists of CpG clusters, and there is an inverse relationship between CpG methylation and transcriptional activity. Methylation contributing to epigenetic inheritance can occur through either DNA methylation or protein methylation. Improper methylation of human genes can lead to disease development, including cancer. In invertebrates such as honey bees, DNA methylation has been studied since the honey bee genome was sequenced in 2006. DNA methylation is associated with alternative splicing and gene regulation, based on functional genomic research published in 2013. In addition, DNA methylation is associated with changes in the expression of immune genes when honey bees are under lethal viral infection. Several review papers have been published on the topic of DNA methylation in social insects. RNA methylation occurs in different RNA species, viz. tRNA, rRNA, mRNA, tmRNA, snRNA, snoRNA, miRNA, and viral RNA. Different catalytic strategies are employed for RNA methylation by a variety of RNA methyltransferases. RNA methylation is thought to have existed before DNA methylation in the early forms of life evolving on earth. N6-methyladenosine (m6A) is the most common and abundant methylation modification in RNA molecules (mRNA) present in eukaryotes. 5-methylcytosine (5-mC) also commonly occurs in various RNA molecules. Recent data strongly suggest that m6A and 5-mC RNA methylation affects the regulation of various biological processes such as RNA stability and mRNA translation, and that abnormal RNA methylation contributes to the etiology of human diseases. In invertebrates such as honey bees, RNA methylation has been studied as a possible epigenetic mechanism underlying aggression via reciprocal crosses.
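As a small, self-contained illustration of the CpG-site definition used above (a cytosine immediately followed by a guanine along the sequence), the sketch below locates CpG dinucleotides in a DNA string and reports its overall C+G content. The sequence is an arbitrary made-up example, and real CpG-island identification uses additional criteria beyond raw C+G content.

```python
# Locate CpG dinucleotides (a C immediately followed by a G) in a DNA string
# and report its C+G content. The sequence is an arbitrary illustrative example.

def cpg_sites(seq):
    seq = seq.upper()
    return [i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"]

def gc_content(seq):
    seq = seq.upper()
    return (seq.count("C") + seq.count("G")) / len(seq)

dna = "TTCGCGGATCGAATCCGTACGCG"
print("CpG positions:", cpg_sites(dna))
print(f"C+G content: {gc_content(dna):.0%}")
```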
In biology:
Protein methylation Protein methylation typically takes place on arginine or lysine amino acid residues in the protein sequence. Arginine can be methylated once (monomethylated arginine) or twice, with either both methyl groups on one terminal nitrogen (asymmetric dimethylarginine) or one on both nitrogens (symmetric dimethylarginine), by protein arginine methyltransferases (PRMTs). Lysine can be methylated once, twice, or three times by lysine methyltransferases. Protein methylation has been most studied in the histones. The transfer of methyl groups from S-adenosyl methionine to histones is catalyzed by enzymes known as histone methyltransferases. Histones that are methylated on certain residues can act epigenetically to repress or activate gene expression. Protein methylation is one type of post-translational modification.
In biology:
Evolution Methyl metabolism is very ancient and can be found in all organisms on earth, from bacteria to humans, indicating the importance of methyl metabolism for physiology. Indeed, pharmacological inhibition of global methylation in species ranging from human, mouse, fish, fly, roundworm, plant, algae, and cyanobacteria causes the same effects on their biological rhythms, demonstrating conserved physiological roles of methylation during evolution.
In chemistry:
In organic chemistry, the term methylation refers to the alkylation process that delivers a CH3 group.
In chemistry:
Electrophilic methylation Methylations are commonly performed using electrophilic methyl sources such as iodomethane, dimethyl sulfate, dimethyl carbonate, or tetramethylammonium chloride. Less common but more powerful (and more dangerous) methylating reagents include methyl triflate, diazomethane, and methyl fluorosulfonate (magic methyl). These reagents all react via SN2 nucleophilic substitutions. For example, a carboxylate may be methylated on oxygen to give a methyl ester; an alkoxide salt RO− may be likewise methylated to give an ether, ROCH3; or a ketone enolate may be methylated on carbon to produce a new ketone.
In chemistry:
The Purdie methylation is specific for the methylation at oxygen of carbohydrates, using iodomethane and silver oxide.
Eschweiler–Clarke methylation The Eschweiler–Clarke reaction is a method for methylation of amines. This method avoids the risk of quaternization, which occurs when amines are methylated with methyl halides.
Diazomethane and trimethylsilyldiazomethane Diazomethane and the safer analogue trimethylsilyldiazomethane methylate carboxylic acids, phenols, and even alcohols: RCO2H + tmsCHN2 + CH3OH → RCO2CH3 + CH3Otms + N2. The method offers the advantage that the side products are easily removed from the product mixture.
Nucleophilic methylation Methylation sometimes involves the use of nucleophilic methyl reagents. Strongly nucleophilic methylating agents include methyllithium (CH3Li) and Grignard reagents such as methylmagnesium bromide (CH3MgX). For example, CH3Li will add methyl groups to the carbonyl (C=O) of ketones and aldehydes. Milder methylating agents include tetramethyltin, dimethylzinc, and trimethylaluminium.
**Barakol**
Barakol:
Barakol is a compound found in the plant Senna siamea, which is used in traditional herbal medicine. It has sedative and anxiolytic effects. There are contradictory pharmacological research findings concerning the toxicity of Cassia siamea and its active ingredient barakol: one pharmacological study has shown a hepatotoxic effect of barakol, while another study did not show any toxic effect at daily dosage intake. Further research is needed to verify whether barakol has toxic effects or not.
**Emotional aperture**
Emotional aperture:
Emotional aperture has been defined as the ability to perceive features of group emotions. This skill involves the perceptual ability to adjust one's focus from a single individual's emotional cues to the broader patterns of shared emotional cues that comprise the emotional composition of the collective.
Emotional aperture:
Some examples of features of group emotions include the level of variability of emotions among members (i.e., affective diversity), the proportion of positive or negative emotions, and the modal (i.e., most common) emotion present in a group. The term "emotional aperture" was first defined by the social psychologist Jeffrey Sanchez-Burks and organizational theorist Quy Huy. It has since been referenced in related work, such as in Daniel Goleman's book "Focus: The Hidden Driver of Excellence" (Goleman is the psychologist, journalist, and author of the popular book Emotional Intelligence). Academic references to emotional aperture and related work can be found on the references site of the Consortium for Research on Emotional Intelligence in Organizations. Emotional aperture abilities have been measured using the EAM, which consists of a series of short movie clips showing groups that have various brief reactions to an unspecified event. Following each movie clip, individuals are asked to report the proportion of individuals who had a positive or negative reaction.
Emotional aperture:
Emotional aperture, the ability to pick up such subtle signals in a group, works on essentially the same principle as the aperture of a camera: we can zoom in to focus on a person's feelings or, conversely, zoom out to encompass everyone gathered, whether it is a school class or a workgroup. This concept is closely linked to emotional intelligence, since it includes abilities such as the ability to develop motivation and persistence. Aperture enables managers to read information more accurately and understand, for example, whether their proposal is met with enthusiasm or rejection. Accurate perception of these signals can prevent failure and help make useful adjustments during project implementation.
Origin:
The construct of emotional aperture was developed to address the need to expand existing models of individual emotion perception (e.g., emotional intelligence) to take into account the veracity of group-based emotions and their action tendencies.
**GPD Win**
GPD Win:
GPD Win is a Windows-based palmtop computer equipped with a keyboard and gaming controls. It is an x86-based device that runs Windows 10. It is capable of running any x86 Windows-based application that can run within the confines of the computer's hardware. First announced in October 2015, it was crowdfunded via Indiegogo and two other crowdfunding sites in Japan and China. The GPD Win was released in October 2016.
History:
GamePad Digital (GPD) is a technology company based in Shenzhen, China. Among other products, they have created several handheld video game consoles which run Android on ARM architecture; for instance, GPD XD. GPD Win was meant to be a way to play PC games, PC-based video game console emulators, and hypervisors (such as VMware and VirtualBox clients) on a handheld device.
History:
GamePad Digital first explored the idea of GPD Win in October 2015. In December 2015, the physical design and hardware specifications were determined. In March 2016, initial prototypes were finished, debugged, and shipped. GPD started accepting pre-orders in June 2016 through several online retailers, including the Indiegogo page. Pre-order backers were offered the device for a discounted price of $330, with an estimated final retail price of $499, though the price settled at $330 after release. The initially stated goal was $100,000. In August 2016, a small batch shipment to industry personnel was shipped. GPD started shipping the final product by October 2016.
Software:
The GPD Win runs Windows 10 Home and is able to run most x86 Windows applications that can be run on desktops and laptops.
As of April 2017, several patches are available for the Linux kernel that allow mostly complete functionality of the Win with a full desktop Linux distribution such as Ubuntu.
Technical and physical specifications:
The computer has a typical full QWERTY keyboard, which includes 67 standard keys and 10 expanded function keys. For gaming, the controls are styled similarly to the OpenPandora and DragonBox Pyra keyboard-and-controller layout, with one D-pad, two analog sticks, four face buttons, and four shoulder buttons (two on each shoulder).
It was initially intended to use the Intel Atom x5-Z8500 Cherry Trail CPU. The graphics processor is an Intel HD Graphics integrated GPU with a base clock speed of 200 MHz and a turbo boost of up to 600 MHz.
It uses 4GB of LPDDR3-1600 RAM, with 64GB of eMMC 4.51 ROM. It has a single microSD slot that can support storage cards up to 256GB.
The device is 15.5×9.7 cm in size. It has a 5.5-inch 1280×720 (720p) H-IPS multi-directional touch screen in a 16:9 ratio, reinforced by Gorilla Glass 3.
The audio system consists of a built-in speaker using the Realtek ALC5645 driver, and a microphone jack. It supports typical audio/video/image formats, such as MP3, MP4, JPG, PNG, and GIF.
The device has a 6700mAh polymer lithium-ion battery with a USB-C charging interface (5 V/2.5 A). It has support for Bluetooth 4.0 and 802.11 b/g/n/ac (5 GHz and 2.4 GHz) WiFi.
Release and reception:
GamePad Digital began shipping the GPD Win to backers in October 2016. JC Torres of Slashgear gave the Win a 7/10, stating that it is "Well-rounded and has solid technical specs per expected needs, it is ambitious for being a Windows 10-based handheld console in an industry dominated by Linux-based handhelds," but he also noted an inconsistent build quality among models and mediocre sound quality ("loud, but low"). Ultimately, he called it an "exceptional device". Linus Sebastian made a video review of the GPD Win on his YouTube channel Linus Tech Tips and stated that it handles gaming and multitasking well and that he was happy with the hardware specifications, hardware design, and features overall. He stated that deciding whether it was worth the price was up to the user and that the Win made him excited about what UMPCs will be capable of in the near future as the hardware progresses further.
GPD Win 2:
GamePad Digital announced the GPD Win 2 in early 2017. It has an Intel Core m3 with Intel HD 615 integrated graphics, 8GB of LPDDR3 RAM, a 128GB M.2 solid-state drive, as well as the same I/O ports as the GPD Win. There are a few external hardware changes, including moving the analog knobs outwards from the frame, the removal of the D-pad, and an additional shoulder button on each side, for a total of six. The price for crowdfunding backers is $649, with a stated retail price of $899. The Indiegogo campaign launched on January 15, 2018, and the GPD Win 2 was released in May 2018. The Indiegogo campaign was successful.
**Single particle analysis**
Single particle analysis:
Single particle analysis is a group of related computerized image processing techniques used to analyze images from transmission electron microscopy (TEM). These methods were developed to improve and extend the information obtainable from TEM images of particulate samples, typically proteins or other large biological entities such as viruses. Individual images of stained or unstained particles are very noisy and thus hard to interpret. Combining several digitized images of similar particles together gives an image with stronger and more easily interpretable features. An extension of this technique uses single particle methods to build up a three-dimensional reconstruction of the particle. Using cryo-electron microscopy it has become possible to generate reconstructions with sub-nanometer resolution and near-atomic resolution, first in the case of highly symmetric viruses, and now in smaller, asymmetric proteins as well. Single particle analysis can also be performed by inductively coupled plasma mass spectrometry (ICP-MS).
Techniques:
Single particle analysis can be done on both negatively stained and vitreous ice-embedded transmission electron cryomicroscopy (CryoTEM) samples. Single particle analysis methods are, in general, reliant on the sample being homogeneous, although techniques for dealing with conformational heterogeneity are being developed.
Techniques:
Images (micrographs) are taken with an electron microscope using charge-coupled device (CCD) detectors coupled to a phosphorescent layer (in the past, they were instead collected on film and digitized using high-quality scanners). The image processing is carried out using specialized software programs, often run on multi-processor computer clusters. Depending on the sample or the desired results, various steps of two- or three-dimensional processing can be done.
Techniques:
In addition, single particle analysis can also be performed in an individual particle mode using an ICP-MS unit.
Techniques:
Alignment and classification Biological samples, and especially samples embedded in thin vitreous ice, are highly radiation sensitive, thus only low electron doses can be used to image the sample. This low dose, as well as variations in the metal stain used (if used) means images have high noise relative to the signal given by the particle being observed. By aligning several similar images to each other so they are in register and then averaging them, an image with higher signal-to-noise ratio can be obtained. As the noise is mostly randomly distributed and the underlying image features constant, by averaging the intensity of each pixel over several images only the constant features are reinforced. Typically, the optimal alignment (a translation and an in-plane rotation) to map one image onto another is calculated by cross-correlation.
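A minimal sketch of the translational part of this align-and-average step is shown below, using FFT-based cross-correlation in NumPy. It deliberately omits the in-plane rotation search, sub-pixel interpolation, classification, and CTF handling, and it assumes the particle images are equally sized 2D NumPy arrays.

```python
import numpy as np

def align_translation(ref, img):
    """Shift `img` so it best matches `ref`, using FFT cross-correlation.

    Only integer-pixel translations are handled; rotation search and
    interpolation are omitted for brevity.
    """
    cc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    # Peaks beyond half the box size correspond to negative (wrapped) shifts.
    dy = dy - cc.shape[0] if dy > cc.shape[0] // 2 else dy
    dx = dx - cc.shape[1] if dx > cc.shape[1] // 2 else dx
    return np.roll(img, (dy, dx), axis=(0, 1))

def average_particles(ref, particles):
    """Align every particle image to `ref` and return the averaged image."""
    aligned = [align_translation(ref, p) for p in particles]
    return np.mean(aligned, axis=0)
```

In a real workflow the averages produced here would themselves become the references for the next alignment round, as described in the following paragraph.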
Techniques:
However, a micrograph often contains particles in multiple different orientations and/or conformations, and so to get more representative image averages, a method is required to group similar particle images together into multiple sets. This is normally carried out using one of several data analysis and image classification algorithms, such as multi-variate statistical analysis and hierarchical ascendant classification, or k-means clustering. Often data sets of tens of thousands of particle images are used, and to reach an optimal solution an iterative procedure of alignment and classification is used, whereby strong image averages produced by classification are used as reference images for a subsequent alignment of the whole data set.
Techniques:
Image filtering Image filtering (band-pass filtering) is often used to reduce the influence of high and/or low spatial frequency information in the images, which can affect the results of the alignment and classification procedures. This is particularly useful in negative stain images. The algorithms make use of fast Fourier transforms (FFT), often employing Gaussian shaped soft-edged masks in reciprocal space to suppress certain frequency ranges. High-pass filters remove low spatial frequencies (such as ramp or gradient effects), leaving the higher frequencies intact. Low-pass filters remove high spatial frequency features and have a blurring effect on fine details.
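The sketch below shows what such a band-pass filter can look like in practice: a Gaussian-edged mask built in reciprocal space with NumPy and applied to a single 2D image. The cut-off frequencies are arbitrary illustrative values, not recommendations for any particular data set.

```python
import numpy as np

def bandpass_filter(img, low_cut=0.02, high_cut=0.25):
    """Suppress spatial frequencies below `low_cut` and above `high_cut`.

    Cut-offs are fractions of the sampling frequency; both mask edges are
    Gaussian-shaped to avoid sharp transitions in reciprocal space.
    """
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    r = np.sqrt(fy ** 2 + fx ** 2)                   # radial spatial frequency
    high_pass = 1.0 - np.exp(-(r / low_cut) ** 2)    # removes ramps/gradients
    low_pass = np.exp(-(r / high_cut) ** 2)          # blurs out fine, noisy detail
    mask = high_pass * low_pass
    return np.fft.ifft2(np.fft.fft2(img) * mask).real
```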
Techniques:
Contrast transfer function Due to the nature of image formation in the electron microscope, bright-field TEM images are obtained using significant underfocus. This, along with features inherent in the microscope's lens system, creates blurring of the collected images visible as a point spread function. The combined effects of the imaging conditions are known as the contrast transfer function (CTF), and can be approximated mathematically as a function in reciprocal space. Specialized image processing techniques such as phase flipping and amplitude correction / Wiener filtering can (at least partially) correct for the CTF, and allow high resolution reconstructions.
Techniques:
Three-dimensional reconstruction Transmission electron microscopy images are projections of the object showing the distribution of density through the object, similar to medical X-rays. By making use of the projection-slice theorem a three-dimensional reconstruction of the object can be generated by combining many images (2D projections) of the object taken from a range of viewing angles. Proteins in vitreous ice ideally adopt a random distribution of orientations (or viewing angles), allowing a fairly isotropic reconstruction if a large number of particle images are used. This contrasts with electron tomography, where the viewing angles are limited due to the geometry of the sample/imaging set up, giving an anisotropic reconstruction. Filtered back projection is a commonly used method of generating 3D reconstructions in single particle analysis, although many alternative algorithms exist. Before a reconstruction can be made, the orientation of the object in each image needs to be estimated. Several methods have been developed to work out the relative Euler angles of each image. Some are based on common lines (common 1D projections and sinograms), while others use iterative projection matching algorithms. The latter works by beginning with a simple, low resolution 3D starting model, comparing the experimental images to projections of the model, and creating a new 3D model to bootstrap towards a solution.
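For orientation, the sketch below runs a toy two-dimensional analogue of filtered back projection using scikit-image (an assumed external dependency; the filter_name argument requires a reasonably recent version of the library): it simulates 1D projections of a known 2D test object over many viewing angles and then reconstructs the object from them. Real single particle reconstruction recovers a 3D volume from 2D projections with estimated Euler angles, but the principle is the same.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# Toy 2D analogue of single particle reconstruction: simulate projections of a
# known object over many angles, then recover it by filtered back projection.
phantom = resize(shepp_logan_phantom(), (128, 128))
angles = np.linspace(0.0, 180.0, 180, endpoint=False)   # viewing angles (degrees)
sinogram = radon(phantom, theta=angles)                  # forward projections
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(f"RMS reconstruction error: {error:.3f}")
```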
Techniques:
Methods are also available for making 3D reconstructions of helical samples (such as tobacco mosaic virus), taking advantage of the inherent helical symmetry. Both real space methods (treating sections of the helix as single particles) and reciprocal space methods (using diffraction patterns) can be used for these samples.
Techniques:
Tilt methods The specimen stage of the microscope can be tilted (typically along a single axis), allowing the single particle technique known as random conical tilt. An area of the specimen is imaged at both zero and at high angle (~60-70 degrees) tilts, or in the case of the related method of orthogonal tilt reconstruction, +45 and −45 degrees. Pairs of particles corresponding to the same object at two different tilts (tilt pairs) are selected, and by following the parameters used in subsequent alignment and classification steps a three-dimensional reconstruction can be generated relatively easily. This is because the viewing angle (defined as three Euler angles) of each particle is known from the tilt geometry.
Techniques:
3D reconstructions from random conical tilt suffer from missing information resulting from a restricted range of orientations. Known as the missing cone (due to the shape in reciprocal space), this causes distortions in the 3D maps. However, the missing cone problem can often be overcome by combining several tilt reconstructions. Tilt methods are best suited to negatively stained samples, and can be used for particles that adsorb to the carbon support film in preferred orientations. The phenomenon known as charging or beam-induced movement makes collecting high-tilt images of samples in vitreous ice challenging.
Techniques:
Map visualization and fitting Various software programs are available that allow viewing the 3D maps. These often enable the user to manually dock in protein coordinates (structures from X-ray crystallography or NMR) of subunits into the electron density. Several programs can also fit subunits computationally.
Techniques:
Single particle ICP-MS Single particle inductively coupled plasma mass spectrometry (SP-ICP-MS) is used in several areas where there is the possibility of detecting and quantifying suspended particles in samples of environmental fluids, assessing their migration, assessing the size of particles and their distribution, and also determining their stability in a given environment. SP-ICP-MS was designed for particle suspensions in 2000 by Claude Degueldre. He first tested this new methodology at the Forel Institute of the University of Geneva and presented this new analytical approach at the 'Colloid 2002' symposium during the spring 2002 meeting of the EMRS, and in the proceedings in 2003. This study presents the theory of SP-ICP-MS and the results of tests carried out on clay particles (montmorillonite) as well as other suspensions of colloids. The method was then tested on thorium dioxide nanoparticles by Degueldre & Favarger (2004), zirconium dioxide by Degueldre et al. (2004), and gold nanoparticles, which are used as a substrate in nanopharmacy, published by Degueldre et al. (2006). Subsequently, the study of uranium dioxide nano- and micro-particles gave rise to a detailed publication by Degueldre et al. (2006). Since 2010 interest in SP-ICP-MS has exploded.
Examples:
Important information on protein synthesis, ligand binding and RNA interaction can be obtained using this novel technique at medium resolutions of 7.5 to 25Å.
Methanococcus maripaludis chaperonin, reconstructed to 0.43 nanometer resolution. This bacterial protein complex is a machine for folding other proteins, which get trapped within the shell.
Fatty acid synthase from yeast at 0.59 nanometer resolution. This huge enzyme complex is responsible for building the long chain fatty acids essential for cellular life.
A 0.33 nanometer reconstruction of Aquareovirus. These viruses infect fish and other aquatic animals. The reconstruction has high enough resolution to have amino acid side chain densities easily visible.
Primary database:
EM Data Bank (EMDB)
**Solar eclipse of May 22, 2058**
Solar eclipse of May 22, 2058:
A partial solar eclipse will occur on Wednesday, May 22, 2058. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth.
Related eclipses:
Solar eclipses 2059–2061 This eclipse is a member of a semester series. An eclipse in a semester series of solar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit.
Related eclipses:
Saros 119 It is a part of Saros cycle 119, repeating every 18 years, 11 days, containing 71 events. The series started with partial solar eclipse on May 15, 850 AD. It contains total eclipses on August 9, 994 AD and August 20, 1012, with a hybrid eclipse on August 31, 1030. It has annular eclipses from September 10, 1048, through March 18, 1950. The series ends at member 71 as a partial eclipse on June 24, 2112. The longest duration of totality was only 32 seconds on August 20, 1012. The longest duration of annularity was 7 minutes, 37 seconds on September 1, 1625. The longest duration of hybridity was only 18 seconds on August 31, 1030.
**False pleasure**
False pleasure:
False pleasure may be a pleasure based on a false belief, or a pleasure compared with more real, or greater pleasures. Lacan maintained that philosophers should seek to "discern not true pleasures from false, for such a distinction is impossible to make, but the true and false goods that pleasure points to".When one is said to have experienced a false pleasure, it is distinct from feeling actual pleasure. Pleasure can be described as being false or true based on the content of where the pleasure comes from. When being faced with a situation where one holds a false belief that in turn makes them feel pleasure, this would be categorized as false pleasure. An example of this could be if someone gets pleasure from being in a happy relationship, yet they are unaware that the other person is cheating. Their pleasure then comes from a false belief.
Classical philosophy:
Plato devoted much attention to the belief that "no pleasure save that of the wise is quite true and pure - all others are shadows only" - both in The Republic and in his late dialogue Philebus. Augustine saw false pleasure as focused on the body, as well as pervading the dramatic and rhetorical entertainments of his time. When Plato describes false pleasure he defines it in two different ways. The first way is sometimes called the propositional sense of falsity: the truth value of the belief underlying the pleasure does not affect the fact that the pleasure is still felt. The other way Plato uses falsity when looking at pleasure is in the alienans sense, in which we are describing something as being "fake"; in this use of the term falsity, the existence of the thing we are talking about is in question.
Asceticism:
Buddhaghosa considered that "sense-pleasures are impermanent, deceptive, trivial...unstable, unreal, hollow, and uncertain" - a view echoed in most of what Max Weber termed "world-rejecting asceticism".
Vain pleasure:
A specific false pleasure often denounced in Western thought is the pleasure of vanity - Voltaire for example pillorying the character "corrupted by vanity...He breathed in nothing but false glory and false pleasures". Similarly, John Ruskin contrasted the adult's pursuit of the false pleasure of vanity with the way the child does not seek false pleasures: "its pleasures are true, simple, and instinctive". False pleasure is not to be confused with vain pleasure. The difference is that vain pleasure is when someone feels pleasure from something that others would find morally wrong for them to get pleasure from, whereas false pleasure is based on false beliefs regardless of the moral outlook on the source of pleasure. An example of vain pleasure would be if a person found pleasure in finding out that someone they hate was tortured. This would only count as a false pleasure if the person was not indeed tortured.
Sex:
Sexual intercourse is sometimes seen as a true pleasure (or false one), contrasted with the less real pleasures of the past, as with Donne's "countrey pleasures, childishly".In the wake of Reich, a distinction was sometimes made between reactive and genuine sexuality - analysis supposedly allowing people to "realize the enormous difference between what they once believed sexual pleasure to be and what they now experience".
Mass media:
Popular culture has been a central arena for latter-day disputes over true and false pleasures. Modernism saw attacks on the false pleasures of consumerism from the right as well as from the left, with Herbert Marcuse denouncing the false pleasures of the happy consciousness of "those whose life is the hell of the affluent society". From another angle, Richard Hoggart contrasted the immediate, real pleasures of the working class with the increasingly ersatz diet fed to them by the media. As the 20th century wore on, however - while concern for the contrast of false and authentic pleasures, fragmented or integrated experiences, certainly remained - the mass media increasingly became less of a scapegoat for the prevalence of false pleasure, figures like Fredric Jameson for example insisting instead on "the false problem of value" in a world where "reification or materialization is a key structural feature of both modernism and mass culture".
Žižek:
Slavoj Žižek has added a further twist to the debate for the 21st century, arguing that in a postmodern age dominated by what he calls "the superego injunction to enjoy that permeates our discourse", the quest for pleasure has become more of a duty than a pleasure: for Žižek, "psychoanalysis is the only discipline in which you are allowed not to enjoy"!
**Geraniol**
Geraniol:
Geraniol is a monoterpenoid and an alcohol. It is the primary component of citronella oil and a primary component of rose oil and palmarosa oil. It is a colorless oil, although commercial samples can appear yellow. It has low solubility in water, but it is soluble in common organic solvents. The functional group derived from geraniol (in essence, geraniol lacking the terminal −OH) is called geranyl.
Uses and occurrence:
In addition to rose oil, palmarosa oil, and citronella oil, it also occurs in small quantities in geranium, lemon, and many other essential oils. With a rose-like scent, it is commonly used in perfumes. It is used in flavors such as peach, raspberry, grapefruit, red apple, plum, lime, orange, lemon, watermelon, pineapple, and blueberry.
Uses and occurrence:
Geraniol is produced by the scent glands of honeybees to mark nectar-bearing flowers and locate the entrances to their hives. It is also commonly used as an insect repellent, especially for mosquitoes. The scent of geraniol is reminiscent of, but chemically unrelated to, 2-ethoxy-3,5-hexadiene, also known as geranium taint, a wine fault resulting from fermentation of sorbic acid by lactic acid bacteria. Geranyl pyrophosphate is important in the biosynthesis of other terpenes such as myrcene and ocimene. It is also used in the biosynthesis pathway of many cannabinoids in the form of CBGA.
Reactions:
In acidic solutions, geraniol is converted to the cyclic terpene α-terpineol. The alcohol group undergoes expected reactions. It can be converted to the tosylate, which is a precursor to the chloride. Geranyl chloride also arises by the Appel reaction by treating geraniol with triphenylphosphine and carbon tetrachloride. It can be hydrogenated. It can be oxidized to the aldehyde geranial.
Health and safety:
Geraniol is classified as D2B (Toxic materials causing other effects) using the Workplace Hazardous Materials Information System (WHMIS).
History:
Geraniol was first isolated in pure form in 1871 by the German chemist Oscar Jacobsen (1840–1889). Using distillation, Jacobsen obtained geraniol from an essential oil derived from geranium grass (Andropogon schoenanthus) produced in India. The chemical structure of geraniol was determined in 1919 by the French chemist Albert Verley (1867–1959).
**Drum memory**
Drum memory:
Drum memory was a magnetic data storage device invented by Gustav Tauschek in 1932 in Austria. Drums were widely used in the 1950s and into the 1960s as computer memory.
Many early computers, called drum computers or drum machines, used drum memory as the main working memory of the computer. Some drums were also used as secondary storage as for example various IBM drum storage drives.
Drums were displaced as primary computer memory by magnetic core memory, which offered a better balance of size, speed, cost, reliability and potential for further improvements. Drums in turn were replaced by hard disk drives for secondary storage, which were both less expensive and offered denser storage. The manufacturing of drums ceased in the 1970s.
Technical design:
A drum memory or drum storage unit contained a large metal cylinder, coated on the outside surface with a ferromagnetic recording material. It could be considered the precursor to the hard disk drive (HDD), but in the form of a drum (cylinder) rather than a flat disk. In most designs, one or more rows of fixed read-write heads ran along the long axis of the drum, one for each track. The drum's controller simply selected the proper head and waited for the data to appear under it as the drum turned (rotational latency). Not all drum units were designed with each track having its own head. Some, such as the English Electric DEUCE drum and the UNIVAC FASTRAND had multiple heads moving a short distance on the drum in contrast to modern HDDs, which have one head per platter surface.
Technical design:
In November 1953 Hagen published a paper disclosing "air floating" of magnetic heads in an experimental sheet metal drum. A US patent filed in January 1954 by Baumeister of IBM disclosed a "spring loaded and air supported shoe for poising a magnetic head above a rapidly rotating magnetic drum." Flying heads became standard in drums and hard disk drives.
Magnetic drum units used as primary memory were addressed by word. Drum units used as secondary storage were addressed by block. Several modes of block addressing were possible, depending on the device.
Blocks took up an entire track and were addressed by track.
Tracks were divided into fixed length sectors and addressing was by track and sectors.
Blocks were variable length, and blocks were addressed by track and record number.
Blocks were variable length with a key, and could be searched by key content. Some devices were divided into logical cylinders, and addressing by track was actually logical cylinder and track.
Technical design:
The performance of a drum with one head per track is comparable to that of a disk with one head per track and is determined almost entirely by the rotational latency, whereas in an HDD with moving heads its performance includes a rotational latency delay plus the time to position the head over the desired track (seek time). In the era when drums were used as main working memory, programmers often did optimum programming—the programmer—or the assembler, e.g., Symbolic Optimal Assembly Program (SOAP)—positioned code on the drum in such a way as to reduce the amount of time needed for the next instruction to rotate into place under the head. They did this by timing how long it would take after loading an instruction for the computer to be ready to read the next one, then placing that instruction on the drum so that it would arrive under a head just in time. This method of timing-compensation, called the "skip factor" or "interleaving", was used for many years in storage memory controllers.
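The arithmetic behind this optimum placement can be illustrated with a small sketch: given the drum's rotation speed, the number of words per track, and the time the processor needs between reading one instruction and requesting the next, it computes how many word positions downstream the next instruction should be placed so that it arrives under the head just as the processor is ready for it. All figures are invented for illustration and do not describe any particular machine.

```python
# Illustration of "optimum programming" on a drum: place the next instruction
# far enough downstream that it passes under the head just as the processor
# becomes ready for it. All numbers are invented, not from any real machine.

def skip_factor(rpm, words_per_track, execute_time_s):
    """Word positions to skip ahead for (approximately) zero rotational wait."""
    seconds_per_rev = 60.0 / rpm
    word_time = seconds_per_rev / words_per_track   # time for one word to pass the head
    return int(round(execute_time_s / word_time)) % words_per_track

def next_address(current, rpm, words_per_track, execute_time_s):
    """Drum address at which to place the optimally timed next instruction."""
    return (current + 1 + skip_factor(rpm, words_per_track, execute_time_s)) % words_per_track

# Example: a hypothetical 12,500 rpm drum, 50 words per track, 0.3 ms execute time.
print(next_address(current=0, rpm=12500, words_per_track=50, execute_time_s=0.0003))
```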
History:
Tauschek's original drum memory (1932) had a capacity of about 500,000 bits (62.5 kilobytes). One of the earliest functioning computers to employ drum memory was the Atanasoff–Berry computer (1942). It stored 3,000 bits; however, it employed capacitance rather than magnetism to store the information. The outer surface of the drum was lined with electrical contacts leading to capacitors contained within.
History:
Magnetic drums were developed for the U.S. Navy by Engineering Research Associates (ERA) in 1946 and 1947. An experimental ERA study was completed and reported to the Navy on June 19, 1947. Other early drum storage device development occurred at Birkbeck College (University of London), Harvard University, IBM and the University of Manchester. An ERA drum was the internal memory for the ATLAS-I computer delivered to the U.S. Navy in October 1950 and later sold commercially as the ERA 1101 and UNIVAC 1101. Through mergers, ERA became a division of UNIVAC, shipping the Series 1100 drum as a part of the UNIVAC File Computer in 1956; each drum stored 180,000 6-bit characters (135 kilobytes). The first mass-produced computer, the IBM 650 (1954), initially had up to 2,000 10-digit words, about 17.5 kilobytes, of drum memory (later doubled to 4,000 words, about 35 kilobytes, in the Model 4). As late as 1980, PDP-11/45 machines using magnetic core main memory and drums for swapping were still in use at many of the original UNIX sites.
History:
In BSD Unix and its descendants, /dev/drum was the name of the default virtual memory (swap) device, deriving from the use of drum secondary-storage devices as backup storage for pages in virtual memory. Magnetic drum memory units were used in the Minuteman ICBM launch control centers from the beginning in the early 1960s until the REACT upgrades in the mid-1990s.
**X.desktop**
X.desktop:
X.desktop was an early desktop environment graphical user interface built on the X Window System. It was developed and sold during the late 1980s and early 1990s by IXI Limited, a British software house based in Cambridge. Versions of X.desktop were available for over 30 different UNIX operating system platforms and it was licensed to various vendors, including IBM, Compaq, Locus Computing Corporation, BiiN and Acorn Computers, the latter licensing it in 1988 for its future workstation products. The "very first version" of X.desktop used Xlib, whereas version 1.3 (being the "first version seen by end users") used the Athena widget set. From version 2.0 ("the third major version") and onwards, the product was based on the Motif toolkit. This contrasted with one rival, Visix Software's Looking Glass, which continued to use its own proprietary graphical user interface toolkit instead of adopting Motif. X.desktop provided a user interface reminiscent of the Macintosh Finder, with the screen representing a desktop and with windows showing the contents of folders (or directories) in the filesystem. Such windows contained icons, each representing a file, folder or other filesystem object. Icons could be dragged outside windows (and thus onto the desktop itself) for convenient access in current and future login sessions. Double-clicking on icons initiated an open action on objects, with application programs typically being launched, although the nature of the action could be configured and multiple actions defined, such as the primary action for a text file being to open it in an editor, with a secondary action being to print the file. Icons could also be dropped onto other icons to initiate actions. For example, dropping a file icon onto a printer icon would initiate printing of the file. X.desktop was described as working "the way you'd expect a Unix/X application to work", seeking to represent the contents of the filesystem accurately. Regarded as being aimed at users wanting "an easy-to-use, Macintosh-style graphical representation of a desktop", the product was highly configurable, although configuration activities were mostly aimed at experienced users or administrators who would set up environments for end-users or customers, and a dedicated configuration guide was provided to support such activities. The software required a minimum of 2 MB to 4 MB of RAM to function. It could be purchased for $495 for a single-user licence, with bulk prices available, but was also bundled with workstations from numerous vendors.
**CalorieMate**
CalorieMate:
CalorieMate (カロリーメイト karorīmeito) is a brand of nutritional energy bar and energy gel foods produced by Otsuka Pharmaceutical Co. in Japan. It was first released in 1983, debuting with a cheese-flavored block. CalorieMate comes in several forms, including Block, Jelly, and Can. CalorieMate Block (カロリーメイト ブロック karorīmeito burokku) resembles a bar-shaped cookie (somewhat like a shortbread), sold in packs of either two or four. CalorieMate Jelly (カロリーメイト ゼリー karorīmeito zerī) is a gelatin sold in a pouch with a spout. CalorieMate Can (カロリーメイト 缶 karorīmeito kan) is a canned drink.
Flavors:
Block: Cheese (Black Label) (1983), Fruit (Green Label) (1984), Chocolate (Red Label) (1993), Maple (Pink Label) (2009), Vanilla (Light Blue Label) (2022). Jelly: Apple (Pink Label), Fruity Milk (Blue Label), Lime & Grapefruit (Green Label), 100kcal (Black Label). Can: Corn Soup, Café au lait (Red Label), Coffee, Cocoa, Fruit Mix (Green Label), Yogurt (Blue Label).
Former Flavors:
Block: Vegetable (2000-2007), Potato (2007-2014), Plain (White Label) (2014-2022).
In popular culture:
CalorieMate is referenced and featured in Metal Gear Solid 3: Snake Eater.
**Psychopathology**
Psychopathology:
Psychopathology is the study of abnormal cognition, behaviour, and experience, which differ according to social norms and rest upon a number of constructs deemed to be the social norm in any particular era.
Psychopathology:
Biological psychopathology is the study of the biological etiology of abnormal cognitions, behaviour and experiences. Child psychopathology is a specialisation applied to children and adolescents. Animal psychopathology is a specialisation applied to non-human animals. This concept is linked to the philosophical ideas first outlined by Galton (1869) and to the application of eugenic ideas about what constitutes the human.
History:
Early explanations for mental illnesses were influenced by religious belief and superstition. Psychological conditions that are now classified as mental disorders were initially attributed to possession by evil spirits, demons, and the devil. This idea was widely accepted up until the sixteenth and seventeenth centuries. Individuals who had these so-called "possessions" were tortured as treatment or, as Foucault outlines in the History of Madness, viewed as seers (as with Joan of Arc). Religious practitioners used this technique in the hope of bringing their patients back to sanity, but increasingly there was a shift towards the great confinement. The Greek physician Hippocrates was one of the first to reject the idea that mental disorders were caused by possession by demons or the devil. He firmly believed the symptoms of mental disorders were due to diseases originating in the brain. Hippocrates suspected that these states of insanity were due to imbalances of fluids in the body. He identified these fluids as four in particular: blood, black bile, yellow bile, and phlegm. This later became the basis of the chemical imbalance theory that is widely used in the present.
History:
Furthermore, around the same time as Hippocrates, the philosopher Plato came to argue that the mind, body, and spirit worked as a unit. Any imbalance brought to these components of the individual could bring distress or a lack of harmony within the individual. This philosophical idea remained influential until the seventeenth century. It was later challenged by Laing (1960), along with Laing and Esterson (1964), who noted that it was the family environment that led to the formation of adaptive strategies.
History:
In the eighteenth century's Romantic Movement, the notion that healthy parent-child relationships provided sanity became a prominent idea. The philosopher Jean-Jacques Rousseau introduced the notion that trauma in childhood could have negative implications later in adulthood. The scientific discipline of psychopathology was founded by Karl Jaspers in 1913. It was referred to as "static understanding" and its purpose was to graphically recreate the "mental phenomenon" experienced by the client.
History:
Psychoanalysis: Sigmund Freud proposed a method for treating psychopathology through dialogue between a patient and a psychoanalyst. Talking therapy would originate from his ideas on the individual's experiences and the natural human effort to make sense of the world and life.
As the study of psychiatric disorders:
The study of psychopathology is interdisciplinary, with contributions coming from clinical psychology, abnormal psychology, social psychology, and developmental psychology, as well as neuropsychology and other psychology subdisciplines. Other related fields include psychiatry, neuroscience, criminology, social work, sociology, epidemiology, and statistics. Psychopathology can be broadly separated into descriptive and explanatory. Descriptive psychopathology involves categorising, defining and understanding symptoms as reported by people and observed through their behaviour which are then assessed according to a social norm. Explanatory psychopathology looks to find explanations for certain kinds of symptoms according to theoretical models such as psychodynamics, cognitive behavioural therapy or through understanding how they have been constructed by drawing upon Constructivist Grounded Theory (Charmaz, 2016) or Interpretative Phenomenological Analysis (Smith, Flowers & Larkin, 2013). There are several ways to characterise the presence of psychopathology in an individual as a whole. One strategy is to assess a person along four dimensions: deviance, distress, dysfunction, and danger, known collectively as the four Ds. Another conceptualisation, the p factor, sees psychopathology as a general, overarching construct that influences psychiatric symptoms.
As the study of psychiatric disorders:
The four Ds: A description of the four Ds when defining abnormality follows. Deviance: this term describes the idea that specific thoughts, behaviours and emotions are considered deviant when they are unacceptable or not common in society. Clinicians must, however, remember that minority groups are not always deemed deviant just because they may not have anything in common with other groups. Therefore, we define an individual's actions as deviant or abnormal when their behaviour is deemed unacceptable by the culture they belong to. However, many disorders involve related patterns of deviance and therefore need to be evaluated in a differential diagnostic model.
As the study of psychiatric disorders:
Distress: this term accounts for negative feelings experienced by the individual with the disorder. They may feel deeply troubled and affected by their illness. Behaviours and feelings that cause distress to individuals or to others around them are considered abnormal if the condition is upsetting to the person experiencing it. Distress is related to dysfunction in that it is a useful aid to accurately perceiving dysfunction in an individual's life. The two are not always related, however, because an individual can be highly dysfunctional and at the same time experience minimal distress. The important characteristic of distress is not dysfunction; rather, it is the extent to which an individual is distressed by an issue.
As the study of psychiatric disorders:
Dysfunction: this term involves maladaptive behaviour that impairs the individual's ability to perform normal daily functions, such as getting ready for work in the morning, or driving a car. This maladaptive behaviour has to be a problem large enough to warrant a diagnosis. It is important to look for dysfunction across an individual's whole life experience, because the dysfunction may appear in plainly observable settings as well as in places where it is less likely to be noticed. Such maladaptive behaviours prevent the individual from living a normal, healthy lifestyle. However, dysfunctional behaviour is not always caused by a disorder; it may be voluntary, such as engaging in a hunger strike.
As the study of psychiatric disorders:
Danger: this term involves dangerous or violent behaviour directed at the individual, or at others in the environment. The two important characteristics of danger are danger to self and danger to others. When diagnosing, some degree of danger is present in each diagnosis, and within these diagnoses there is a continuum of severity. An example of dangerous behaviour that may suggest a psychological disorder is engaging in suicidal activity. Behaviours and feelings that are potentially harmful to an individual or to the individuals around them are seen as abnormal.
As the study of psychiatric disorders:
The p factor: Benjamin Lahey and colleagues first proposed a general "psychopathology factor" in 2012, or simply "p factor". This construct shares conceptual similarity with the g factor of general intelligence. Instead of conceptualising psychopathology as consisting of several discrete categories of mental disorders, the p factor is dimensional and influences whether psychiatric symptoms in general are present or absent. The symptoms that are present then combine to form several distinct diagnoses. The p factor is modelled in the Hierarchical Taxonomy of Psychopathology. Although researchers initially conceived a three-factor explanation for psychopathology generally, subsequent study provided more evidence for a single factor that is sequentially comorbid, recurrent/chronic, and exists on a continuum of severity and chronicity. Higher scores on the p factor dimension have been found to be correlated with higher levels of functional impairment, greater incidence of problems in developmental history, and more diminished early-life brain function. In addition, those with higher levels of the p factor are more likely to have inherited a genetic predisposition to mental illness. The existence of the p factor may explain why it has been "... challenging to find causes, consequences, biomarkers, and treatments with specificity to individual mental disorders." A 2020 review of the p factor found that many studies support its validity and that it is generally stable throughout one's life. A high p factor is associated with many adverse effects, including poor academic performance, impulsivity, criminality, suicidality, reduced foetal growth, lower executive functioning, and a greater number of psychiatric diagnoses. A partial genetic basis for the p factor has also been supported. Alternatively, the p factor has also been interpreted as an index of general impairment rather than being a specific index that causes psychopathology.
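A general factor of this kind is usually estimated with a latent-variable model fitted to symptom data. The sketch below is an illustration only, not a method taken from the studies cited above: it simulates symptom scores that all load on one hypothetical latent dimension and then recovers that dimension as the first principal component, the simplest stand-in for a one-factor model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_symptoms = 1000, 12

p = rng.normal(size=n_people)                  # hypothetical latent "p factor" scores
loadings = rng.uniform(0.4, 0.8, n_symptoms)   # every symptom loads on the same factor
noise = rng.normal(size=(n_people, n_symptoms))
symptoms = np.outer(p, loadings) + noise       # observed symptom scores

# Recover the general factor as the first principal component of the symptom matrix.
centered = symptoms - symptoms.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
p_hat = centered @ vt[0]

# The recovered score should track the simulated latent dimension closely.
print(round(abs(np.corrcoef(p, p_hat)[0, 1]), 2))
```

In published work the same idea is typically fitted with confirmatory factor or bifactor models rather than a plain principal component; the sketch only shows the shared-variance intuition behind the construct.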
As mental symptoms:
The term psychopathology may also be used to denote behaviours or experiences which are indicative of mental illness, even if they do not constitute a formal diagnosis. For example, the presence of hallucinations may be considered as a psychopathological sign, even if there are not enough symptoms present to fulfil the criteria for one of the disorders listed in the DSM or ICD.
As mental symptoms:
In a more general sense, any behaviour or experience which causes impairment, distress or disability, particularly if it is thought to arise from a functional breakdown in either the cognitive or neurocognitive systems in the brain, may be classified as psychopathology. It remains unclear how strong the distinction between maladaptive traits and mental disorders actually is, e.g. neuroticism is often described as the personal level of minor psychiatric symptoms.
Diagnostic and Statistical Manual of Mental Disorders:
The Diagnostic and Statistical Manual of Mental Disorders (DSM) is a guideline for the diagnosis and understanding of mental disorders. It serves as a reference for a range of professionals in medicine and mental health, particularly in the United States. These professionals include psychologists, counsellors, physicians, social workers, psychiatric nurses and nurse practitioners, marriage and family therapists, and more. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Triple Peel**
Triple Peel:
A Triple Peel (TP) is a standard manoeuvre in top-level games of association croquet. To peel a ball in croquet is to send a ball, other than the striker's ball, through its next hoop, thereby scoring a point for that ball. The ball in question is known as the "peelee". A prerequisite for a triple peel is that the peelee's next hoop is 4-back (so it has three hoops still to run). The striker, during a single turn, peels the peelee through its last three hoops and then pegs it out. Because a ball cannot be roqueted more than once without the striker's ball first running a hoop itself, the three peels are always performed in the course of making a break for the striker's ball, often a break that completes the full circuit of 12 hoops.
Triple Peel:
A triple peel can be performed either on the striker's partner ball, or on one of the opponent's balls; the latter case is referred to as a Triple Peel on Opponent (TPO).
Triple Peel:
The significance of the triple peel lies in the rule in advanced association croquet that defines penalties for a player who runs their ball through the 4-back hoop. The penalty is particularly severe when 4-back is run during the same break as 1-back, by the first of the player's two balls: in this case, at the end of the turn, the opponent is allowed to take the innings by selecting either of their balls and lifting it next to another ball on the lawn, as if the ball had been roqueted. Because conceding a contact in this way creates a good chance of losing the game, players will generally end a break before running 4-back. This means that it is common for a player to start an all-round break in a position where another ball on the lawn has 4-back as its next hoop.
Triple Peel:
In a triple peel on the partner ball, the objective is to get the striker's ball all the way round the lawn, and the partner through its last three hoops, and then peg both balls out, thus winning the game. In top level play this is sometimes achieved as early as the fifth turn of the game.
Triple Peel:
In a triple peel on the opponent, the objective is to peg out the opponent's ball. This is usually attempted when the opponent's other ball is still on its first hoop. On successful completion the striker has both balls on the lawn, while the opponent has only one ball, positioned at hoop 1. Although peeling scores points for the opponent, pegging one of the opponent's balls out puts them at a considerable disadvantage because it is much harder to make a break with one remaining ball when its partner has been eliminated from the game.
Sextuple peel:
In recent years the best players have perfected the triple peel, to the extent that leaving a ball at 4-back is considered risky. Such players may well end a break at 1-back, hoping to complete the game with a sextuple peel (peeling the partner through its last six hoops). The sextuple peel is now considered one of the highest achievements in croquet, performed only by a handful of top international players. The triple peel, by contrast, is the highest aspiration of many good players at club level. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spring green**
Spring green:
Spring green is a color that was traditionally considered to be on the yellow side of green, but in modern computer systems based on the RGB color model is halfway between cyan and green on the color wheel. The modern spring green, when plotted on the CIE chromaticity diagram, corresponds to a visual stimulus of about 505 nanometers on the visible spectrum. In HSV color space, the expression of which is known as the RGB color wheel, spring green has a hue of 150°. Spring green is one of the tertiary colors on the RGB color wheel, where it is the complementary color of rose.
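A quick way to see where a 150° hue sits is to run the standard HSV-to-RGB conversion. The sketch below uses Python's colorsys module with full saturation and value (assumptions of this illustration, not values given in the text); it lands within a rounding step of the web color #00FF7F.

```python
import colorsys

# Full saturation and value are assumptions of this illustration.
r, g, b = colorsys.hsv_to_rgb(150 / 360, 1.0, 1.0)
print(tuple(round(c * 255) for c in (r, g, b)))  # -> (0, 255, 128), one step from #00FF7F
```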
Spring green:
The first recorded use of spring green as a color name in English was in 1766, referring to roughly the color now called spring bud.
Spring green (computer):
Spring green (HTML) Spring green is a web color, common to X11 and HTML.
Medium spring green Displayed at right is the color medium spring green.
Medium spring green is a web color. It lies close to, but not exactly at, spring green on the color wheel, being a little closer to cyan than to green.
Dark spring green At right is displayed the web color dark spring green.
Additional variations of web spring green Mint cream Displayed at right is the web color mint cream, a pale pastel tint of spring green.
The color mint cream is a representation of the color of the interior of an after dinner mint (which is disc shaped with mint flavored buttercream on the inside and a chocolate coating on the outside).
Sea green Sea green is a shade of cyan color that resembles the hue of shallow seawater as seen from the surface.
Sea green is notable for being the emblematic color of the Levellers party in the politics of 1640s England. Leveller supporters would wear a sea-green ribbon, in a similar manner to the present-day red AIDS awareness ribbon.
Medium sea green At right is displayed the web color medium sea green, a medium shade of spring green.
Aquamarine Aquamarine is a color that is a pale bright tint of spring green toned toward cyan. It represents the color of the aquamarine gemstone. Aquamarine is the birthstone for those born on January 21 to February 20 in tropical zodiac, and February 14 to March 15 in sidereal zodiac.
Spring green (traditional):
Spring bud Spring bud is the color that used to be called spring green before the X11 web color spring green was formulated in 1987 when the X11 colors were first promulgated. This color is now called spring bud to avoid confusion with the web color. The color is also called soft spring green, spring green (traditional), or spring green (M&P).
Spring green (traditional):
The first recorded use of spring green as a color name in English (meaning the color that is now called spring bud) was in 1766.
Additional variations of traditional spring green Emerald Emerald, also called emerald green, is a tone of green that is particularly light and bright, with a faint bluish cast. The name derives from the typical appearance of the emerald gemstone. The first recorded use of emerald as a color name in English was in 1598.
Spring green (traditional):
Ireland is sometimes referred to as the Emerald Isle due to its lush greenery. The May birthstone is emerald. Seattle is sometimes referred to as the Emerald City, because its abundant rainfall creates lush vegetation. In the Middle Ages, The Emerald Tablet of Hermes Trismegistus was believed to contain the secrets of alchemy. "Emerald City", from the story of The Wonderful Wizard of Oz, by L. Frank Baum, is a city where everything from food to people are emerald green. However, it is revealed at the end of the story that everything in the city is normal colored, but the glasses everyone wears are emerald tinted. The Green Zone in Baghdad is sometimes ironically and cynically referred to as the Emerald City. The Emerald Buddha is a figurine of the sitting Buddha, made of green jade (rather than emerald), clothed in gold, and about 45 cm tall. It is kept in the Chapel of the Emerald Buddha (Wat Phra Kaew) on the grounds of the Grand Palace in Bangkok. The Emerald Triangle refers to the three counties of Mendocino, Humboldt, and Trinity in Northern California, United States because these three counties are the biggest marijuana producing counties in California and also the US. A county-commissioned study reports pot accounts for up to two-thirds of the economy of Mendocino. Emerald Cities: Urban Sustainability and Economic Development is a book published in 2010 by Joan Fitzgerald, director of the law, policy and society program at Northeastern University, about ecologically sustainable city planning.
Spring green (traditional):
Emerald was invented in Germany in 1814. By taking acetic acid, mixing and boiling it with vinegar, and then adding some arsenic, a bright blue-green hue was formed. During the 19th century, the arsenic-containing dye Paris green was marketed as emerald green. Because it was a popular color for wallpaper, it became notorious for causing deaths. Victorian women used this bright color for dresses, and florists used it on fake flowers.
Spring green (traditional):
Viridian At right is displayed the color viridian, a medium tone of spring green.
The first recorded use of viridian as a color name in English was in the 1860s (exact year uncertain).
Other variations of spring green:
Green (CMYK) (pigment green): The color defined as green in the CMYK color system used in printing, also known as pigment green, is the tone of green that is achieved by mixing process (printer's) cyan and process (printer's) yellow in equal proportions. It is displayed adjacent.
The purpose of the CMYK color system is to provide the maximum possible gamut of color reproducible in printing.
The color indicated is only approximate as the colors of printing inks may vary.
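As an illustration of the mixing rule just described, the sketch below applies the idealized CMYK-to-RGB conversion, in which each RGB channel is (1 − ink)(1 − black), to a full-strength mix of process cyan and process yellow. Real inks are not ideal, which is why the printed pigment green is darker than this result and why the color is described above as only approximate.

```python
def cmyk_to_rgb(c, m, y, k):
    # Idealized conversion: each RGB channel is (1 - ink) * (1 - black), scaled to 0-255.
    return tuple(round(255 * (1 - v) * (1 - k)) for v in (c, m, y))

# Full-strength process cyan mixed with full-strength process yellow, no magenta or black.
print(cmyk_to_rgb(c=1.0, m=0.0, y=1.0, k=0.0))  # -> (0, 255, 0)
```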
Green (NCS) (psychological primary green): The color defined as green in the NCS or Natural Color System is shown adjacent (NCS 2060-G). The Natural Color System is a color system based on the four unique hues or psychological primary colors red, yellow, green, and blue. The NCS is based on the opponent process theory of vision.
The Natural Color System is widely used in Scandinavia.
Other variations of spring green:
Green (Munsell): The color defined as green in the Munsell color system (Munsell 5G) is shown adjacent. The Munsell color system is a color space that specifies colors based on three color dimensions: hue, value (lightness), and chroma (color purity), spaced uniformly in three dimensions within the Munsell color solid (an elongated oval tilted at an angle), according to the logarithmic scale which governs human perception. In order for all the colors to be spaced uniformly, it was found necessary to use a color wheel with five primary colors—red, yellow, green, blue, and purple.
Other variations of spring green:
The Munsell colors displayed are only approximate as they have been adjusted to fit into the sRGB gamut.
Green (Pantone) Green (Pantone) is the color that is called green in Pantone.
The source of this color is the "Pantone Textile Paper eXtended (TPX)" color list, color # green C, EC, HC, PC, U, or UP—green.
Green (Crayola) Green (Crayola) is the color called green in Crayola crayons.
Green was one of the original Crayola crayons introduced in 1903.
Erin Adjacent is displayed the color erin. The first recorded use of erin as a color name was in 1922.
Bright mint Displayed adjacent is the color bright mint.
Dark green Dark green is a dark shade of green. A different shade of green has been designated as "dark green (X11)" for certain computer uses.
Dark pastel green Adjacent is the color dark pastel green.
Screamin' green The color screamin' green is shown adjacent.
This color was renamed from ultra green by Crayola in 1990.
This color is a fluorescent color.
Other variations of spring green:
Cambridge blue Cambridge blue is the color commonly used by sports teams from Cambridge University. This color is actually a medium tone of spring green. Spring green colors are colors with an h code (hue code) of between 135 and 165; this color has an h code of 140, putting it within the range of spring green colors on the RGB color wheel.
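The hue-band rule mentioned above is easy to check numerically. In the sketch below, the hex value #A3C1AD is an assumed, commonly quoted approximation of Cambridge blue (the text gives only the hue); converting it to HSV with Python's colorsys yields a hue of 140°, inside the 135–165° band treated here as spring green.

```python
import colorsys

def hue_degrees(hex_color: str) -> float:
    # Parse "#RRGGBB" and return the HSV hue in degrees.
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    return colorsys.rgb_to_hsv(r, g, b)[0] * 360

h = hue_degrees("#A3C1AD")  # assumed approximation of Cambridge blue
print(round(h), 135 <= h <= 165)  # -> 140 True
```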
Other variations of spring green:
Caribbean green Adjacent is displayed the color Caribbean green. This is a Crayola color formulated in 1997.
Magic mint Adjacent is displayed the color magic mint, a light tint of spring green.
The color magic mint is a light tint of the color mint.
Ceramic tiles in a similar color, often with a contrasting black border, were a popular choice for bathroom, kitchen and upmarket hotel swimming pool décor during the 1930s. This is a Crayola color formulated in 1990 (later retired in 2003).
Mint The color mint, also known as mint leaf, is a representation of the color of mint.
The first recorded use of mint as a color name in English was in 1920.
Mountain meadow Displayed adjacent is the color mountain meadow.
Mountain meadow is a Crayola crayon color formulated in 1998.
Persian green Persian green is a color used in pottery and Persian carpets in Iran.
Other variations of spring green:
Other colors associated with Persia include Persian red and Persian blue. The color Persian green is named from the green color of some Persian pottery and is a representation of the color of the mineral malachite. It is a popular color in Iran because the color green symbolizes gardens, nature, heaven, and sanctity. The first recorded use of Persian green as a color name in English was in 1892.
Other variations of spring green:
Sea foam green This is the Crayola version of the above color, a much brighter and lighter shade. It was introduced in 2001.
Shamrock green (Irish green) Shamrock green is a tone of green that represents the color of shamrocks, a symbol of Ireland.
Other variations of spring green:
The first recorded use of shamrock as a color name in English was in the 1820s (exact year uncertain). This green is also defined as Irish green Pantone 347. This green is used as the green on the national flag of Ireland. It is customary in Ireland, Australia, New Zealand, Canada, and the United States to wear this or any other tone of green on St. Patrick's Day, 17 March.
Other variations of spring green:
The State of California uses this shade of green for the grass under the bear on its state flag. The Boston Celtics of the National Basketball Association use this shade for their uniforms, logos, and other memorabilia.
Sap green Sap green is a green pigment that was traditionally made of ripe buckthorn berries. However, modern colors marketed under this name are usually a blend of other pigments, commonly with a basis of Phthalocyanine Green G. Sap green paint was frequently used on Bob Ross's TV show, The Joy of Painting.
Jade Jade, also called jade green, is a representation of the color of the gemstone called jade, although the stone itself varies widely in hue.
The color name jade green was first used in Spanish in the form piedra de ijada in 1569.
The first recorded use of jade green as a color name in English was in 1892.
Malachite Malachite, also called malachite green, is a color that is a representation of the color of the mineral malachite.
The first recorded use of malachite green as a color name in English was in the 1200s (exact year uncertain).
Opal Displayed adjacent is the color opal.
It is a pale shade of cyan that is reminiscent of the color of an opal gemstone, although as with many gemstones, opals come in a wide variety of colors.
Other variations of spring green:
Brunswick green Brunswick green is a common name for green pigments made from copper compounds, although the name has also been used for other formulations that produce a similar hue, such as mixtures of chrome yellow and Prussian blue. The pigment is named after Braunschweig, Germany (also known as Brunswick in English) where it was first manufactured. It is a deep, dark green, which may vary from intense to very dark, almost black. The first recorded use of Brunswick green as a color name in English was in 1764. Another name for this color is English green. The first use of English green as a synonym for Brunswick green was in 1923. Deep Brunswick green is commonly recognized as part of the British racing green spectrum, the national auto racing color of the United Kingdom.
Other variations of spring green:
A different color, also called Brunswick green, was the color for passenger locomotives of the grouping and then the nationalized British Railways. There were three shades of these colors and they are defined under British Standard BS381C – 225, BS381C – 226, and BS381C – 227 (ordered from lightest to darkest). The Brunswick green used by the Nationalised British Railways – Western Region for passenger locomotives was BS381C – 227 (rgb(30:62:46)). RAL6005 is a close substitute to BS381C – 227. A characteristic of these colors was the ease for various railway locations to mix them by using whole pots of primary colors – hence the ability to get reasonably consistent colors with manual mixing half a century and more ago.
Other variations of spring green:
The color used by the Pennsylvania Railroad for locomotives was often called Brunswick green, but officially was termed dark green locomotive enamel (DGLE). This was a shade of green so dark as to be almost black, but which turned greener with age and weathering as the copper compounds further oxidized.
Other variations of spring green:
Castleton green Castleton green is one of the two official colors of Castleton University in Vermont. The official college colors are green (PMS 343) and white. The Castleton University Office of Marketing and Communications created the Castleton colors for web and logo development and has technical guidelines, copyright and privacy protection; as well as logos and images that developers are asked to follow in the college's guidelines for using official Castleton logos. If web developers are using green on a university website, they are encouraged to use Castleton green. It is prominently used for representing Castleton's athletic teams, the Castleton Spartans.
Other variations of spring green:
Bottle green Bottle green is a dark shade of green, similar to pine green. It is a representation of the color of green glass bottles.
Other variations of spring green:
The first recorded use of bottle green as a color name in English was in 1816. Bottle green is a color in Prismacolor marker and pencil sets. It is also the color of the uniform of the Police Service of Northern Ireland replacing the Royal Ulster Constabulary's "rifle green" colored uniforms in 2001. It is also the green used in uniforms for South Sydney High School in Sydney. Bottle green is also the color most associated with guide signs and street name signs in the United States.
Other variations of spring green:
Bottle green is also the background color of the Flag of Bangladesh, as defined by the government of Bangladesh. Another name for this color is Bangladesh green.
Other variations of spring green:
Dartmouth green Dartmouth green is the official color of Dartmouth College, adopted in 1866. It was chosen for being the only decent primary color that had not been taken already. It is prominently used as the name of the Dartmouth College athletic team, the Dartmouth Big Green. The Dartmouth athletic teams adopted this new name after the college officially discontinued the use of its unofficial mascot, the Dartmouth Indian, in 1974.
Other variations of spring green:
Dartmouth green and white are the main colors of Lithuanian basketball club Žalgiris Kaunas.
GO Transit green GO green was the color used for the brand of GO Transit, the regional commuter service in the Greater Toronto Area.
Between 1967 and 2013, the brand and color that has adorned each of its trains, buses, and other property generally remained unchanged. It also matched the shade of green used on signs for highways in Ontario. In July 2013, GO Transit updated its look to a two-tone color scheme.
Gotham green Gotham green is the official color of the New York Jets as of 4 April 2019. The name is a reference to one of the nicknames of New York City.
Pakistan green Pakistan green is a shade of dark green, used in web development and graphic design. It is also the background color of the national flag of Pakistan. It is almost identical to the HTML/X11 dark green in sRGB and HSV values.
Sacramento State green In 2004, California State University, Sacramento rebranded itself as Sacramento State, while keeping the official name as the long form. In the process of rebranding a new logo was selected, and in 2005 it formalized the colors which it would use.
Paris green Paris green is a color that ranges from pale and vivid blue green to deeper true green. It comes from the inorganic compound copper (II) acetoarsenite and was once a popular pigment in artists' paints.
Spanish green Spanish green is the color that is called "verde" (the Spanish word for "green") in the Guía de coloraciones (Guide to colorations) by Rosa Gallego and Juan Carlos Sanz, a color dictionary published in 2005 that is widely popular in the Hispanophone realm.
UNT green UNT green is one of three official colors used by the University of North Texas. It is the primary color that appears on branding and promotional material produced by and on behalf of the university.
UP forest green Adjacent is one of the official colors used by the University of the Philippines, designated as "UP forest green". It is based on the approved color specifications to be used for the seal of the university.
Hooker's green Hooker's green is a dark green color created by mixing Prussian blue and gamboge. It is displayed adjacent. Hooker's green takes its name from botanical artist William Hooker (1779–1832) who first created it particularly for illustrating leaves.
Aero blue Aero blue is a fluorescent greenish-cyan color. Aero blue was used, as "rainshower", for one of the Sharpie permanent markers, though it appears less bright on the marker. However, there is no mechanism for showing fluorescence on a computer screen.
Morning sky Morning sky, also known as morning blue, is a representation of the color of the morning sky.
The year of the first recorded use of morning blue as a color name in English is unknown.
Feldgrau green Feldgrau (field grey) was the color of the field uniform of the German Army from 1907 to 1945, and of the East German NVA. Metaphorically, feldgrau was used to refer to the armies of Germany (the Imperial German Army and the Heer [army] component of the Reichswehr and the Wehrmacht). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spontaneous generation**
Spontaneous generation:
Spontaneous generation is a superseded scientific theory that held that living creatures could arise from nonliving matter and that such processes were commonplace and regular. It was hypothesized that certain forms, such as fleas, could arise from inanimate matter such as dust, or that maggots could arise from dead flesh. The doctrine of spontaneous generation was coherently synthesized by the Greek philosopher and naturalist Aristotle, who compiled and expanded the work of earlier natural philosophers and the various ancient explanations for the appearance of organisms. Spontaneous generation was taken as scientific fact for two millennia. Though challenged in the 17th and 18th centuries by the experiments of the Italian biologists Francesco Redi and Lazzaro Spallanzani, it was not discredited until the work of the French chemist Louis Pasteur and the Irish physicist John Tyndall in the mid-19th century.
Spontaneous generation:
Rejection of spontaneous generation is no longer controversial among biologists. By the middle of the 19th century, experiments by Pasteur and others were considered to have disproven the traditional theory of spontaneous generation. Attention has turned instead to the origin of life, since all life seems to have evolved from a single form around four billion years ago.
Description:
"Spontaneous generation" means both the supposed processes by which different types of life might repeatedly emerge from specific sources other than seeds, eggs, or parents, and the theoretical principles presented in support of any such phenomena. Crucial to this doctrine are the ideas that life comes from non-life and that no causal agent, such as a parent, is needed. Supposed examples included the seasonal generation of mice and other animals from the mud of the Nile, the emergence of fleas from inanimate matter such as dust, or the appearance of maggots in dead flesh. Such ideas have something in common with the modern hypothesis of the origin of life, which asserts that life emerged some four billion years ago from non-living materials, over a time span of millions of years, and subsequently diversified into all the forms that now exist.The term equivocal generation, sometimes known as heterogenesis or xenogenesis, describes the supposed process by which one form of life arises from a different, unrelated form, such as tapeworms from the bodies of their hosts.
Antiquity:
Pre-Socratic philosophers: Active in the 6th and 5th centuries BCE, early Greek philosophers, called physiologoi in antiquity (Greek: φυσιολόγοι; in English, physical or natural philosophers), attempted to give natural explanations of phenomena that had previously been ascribed to the agency of the gods. The physiologoi sought the material principle or arche (Greek: ἀρχή) of things, emphasizing the rational unity of the external world and rejecting theological or mythological explanations. Anaximander, who believed that all things arose from the elemental nature of the universe, the apeiron (ἄπειρον) or the "unbounded" or "infinite", was likely the first western thinker to propose that life developed spontaneously from nonliving matter. The primal chaos of the apeiron, eternally in motion, served as a platform on which elemental opposites (e.g., wet and dry, hot and cold) generated and shaped the many and varied things in the world. According to Hippolytus of Rome in the third century CE, Anaximander claimed that fish or fish-like creatures were first formed in the "wet" when acted on by the heat of the sun and that these aquatic creatures gave rise to human beings. The Roman author Censorinus, writing in the 3rd century, reported: Anaximander of Miletus considered that from warmed up water and earth emerged either fish or entirely fishlike animals. Inside these animals, men took form and embryos were held prisoners until puberty; only then, after these animals burst open, could men and women come out, now able to feed themselves.
Antiquity:
The Greek philosopher Anaximenes, a pupil of Anaximander, thought that air was the element that imparted life and endowed creatures with motion and thought. He proposed that plants and animals, including human beings, arose from a primordial terrestrial slime, a mixture of earth and water, combined with the sun's heat. The philosopher Anaxagoras, too, believed that life emerged from a terrestrial slime. However, Anaxagoras held that the seeds of plants existed in the air from the beginning, and those of animals in the aether. Another philosopher, Xenophanes, traced the origin of man back to the transitional period between the fluid stage of the Earth and the formation of land, under the influence of the Sun. In what has occasionally been seen as a prefiguration of a concept of natural selection, Empedocles accepted the spontaneous generation of life, but held that different forms, made up of differing combinations of parts, spontaneously arose as though by trial and error: successful combinations formed the individuals present in the observer's lifetime, whereas unsuccessful forms failed to reproduce.
Antiquity:
Aristotle: In his biological works, the natural philosopher Aristotle theorized extensively about the reproduction of various animals, whether by sexual, parthenogenetic, or spontaneous generation. In accordance with his fundamental theory of hylomorphism, which held that every physical entity was a compound of matter and form, Aristotle's basic theory of sexual reproduction contended that the male's seed imposed form—the set of characteristics passed down to offspring—on the "matter" (menstrual blood) supplied by the female. Thus female matter is the material cause of generation—it supplies the matter that will constitute the offspring—while the male semen is the efficient cause, the factor that instigates and delineates the thing's existence. Yet, Aristotle proposed in the History of Animals, many creatures form not through sexual processes but by spontaneous generation: Now there is one property that animals are found to have in common with plants. For some plants are generated from the seed of plants, whilst other plants are self-generated through the formation of some elemental principle similar to a seed; and of these latter plants some derive their nutriment from the ground, whilst others grow inside other plants ... So with animals, some spring from parent animals according to their kind, whilst others grow spontaneously and not from kindred stock; and of these instances of spontaneous generation some come from putrefying earth or vegetable matter, as is the case with a number of insects, while others are spontaneously generated in the inside of animals out of the secretions of their several organs.
Antiquity:
According to this theory, living things may come forth from nonliving things in a manner roughly analogous to the "enformation of the female matter by the agency of the male seed" seen in sexual reproduction. Nonliving materials, like the seminal fluid present in sexual generation, contain pneuma (πνεῦμα, "breath"), or "vital heat". According to Aristotle, pneuma had more "heat" than regular air did, and this heat endowed the substance with certain vital properties: The power of every soul seems to have shared in a different and more divine body than the so called [four] elements ... For every [animal], what makes the seed generative inheres in the seed and is called its "heat". But this is not fire or some such power, but instead the pneuma that is enclosed in the seed and in foamy matter, this being analogous to the element of the stars. This is why fire does not generate any animal ... but the heat of the sun and the heat of animals does, not only the heat that fills the seed, but also any other residue of [the animal's] nature that may exist similarly possesses this vital principle.
Antiquity:
Aristotle drew an analogy between the "foamy matter" (τὸ ἀφρῶδες, to aphrodes) found in nature and the "seed" of an animal, which he viewed as being a kind of foam itself (composed, as it was, from a mixture of water and pneuma). For Aristotle, the generative materials of male and female animals (semen and menstrual fluid) were essentially refinements, made by male and female bodies according to their respective proportions of heat, of ingested food, which was, in turn, a byproduct of the elements earth and water. Thus any creature, whether generated sexually from parents or spontaneously through the interaction of vital heat and elemental matter, was dependent on the proportions of pneuma and the various elements which Aristotle believed comprised all things. While Aristotle recognized that many living things emerged from putrefying matter, he pointed out that the putrefaction was not the source of life, but the byproduct of the action of the "sweet" element of water.
Antiquity:
Animals and plants come into being in earth and in liquid because there is water in earth, and air in water, and in all air is vital heat so that in a sense all things are full of soul. Therefore living things form quickly whenever this air and vital heat are enclosed in anything. When they are so enclosed, the corporeal liquids being heated, there arises as it were a frothy bubble.
Antiquity:
With varying degrees of observational confidence, Aristotle theorized the spontaneous generation of a range of creatures from different sorts of inanimate matter. The testaceans (a genus which for Aristotle included bivalves and snails), for instance, were characterized by spontaneous generation from mud, but differed based upon the precise material they grew in—for example, clams and scallops in sand, oysters in slime, and the barnacle and the limpet in the hollows of rocks.
Antiquity:
Latin and early Christian sources: Athenaeus dissented from spontaneous generation, claiming that a variety of anchovy did not generate from roe, as Aristotle stated, but rather from sea foam. As the dominant view of philosophers and thinkers continued to be in favour of spontaneous generation, some Christian theologians accepted the view. The Berber theologian and philosopher Augustine of Hippo discussed spontaneous generation in The City of God and The Literal Meaning of Genesis, citing Biblical passages such as "Let the waters bring forth abundantly the moving creature that hath life" (Genesis 1:20) as decrees that would enable ongoing creation.
Middle Ages:
From the fall of the Roman Empire in the 5th century to the East–West Schism in 1054, the influence of Greek science declined, although spontaneous generation generally went unchallenged. New descriptions were made, and some of these beliefs had doctrinal implications. In 1188, Gerald of Wales, after having traveled in Ireland, argued that the barnacle goose myth was evidence for the virgin birth of Jesus. Where the practice of fasting during Lent allowed fish, but prohibited fowl, the idea that the goose was in fact a fish suggested that its consumption be permitted during Lent. The practice was eventually prohibited by decree of Pope Innocent III in 1215. After Aristotle's works were reintroduced to Western Europe, they were translated into Latin from the original Greek or Arabic. They reached their greatest level of acceptance during the 13th century. With the availability of Latin translations, the German philosopher Albertus Magnus and his student Thomas Aquinas raised Aristotelianism to its greatest prominence. Albert wrote a paraphrase of Aristotle, De causis et processu universitatis, in which he removed some commentaries by Arabic scholars and incorporated others. The influential writings of Aquinas, on both the physical and metaphysical, are predominantly Aristotelian, but show numerous other influences.
Middle Ages:
Spontaneous generation is described in literature as if it were a fact well into the Renaissance. Shakespeare wrote of snakes and crocodiles forming from the mud of the Nile (Antony and Cleopatra, Act 2, scene 7). The author of The Compleat Angler, Izaak Walton, repeats the question of the origin of eels, "as rats and mice, and many other living creatures, are bred in Egypt, by the sun's heat when it shines upon the overflowing of the river...". While the ancient question of the origin of eels remained unanswered and the additional idea that eels reproduced from corruption of age was mentioned, the spontaneous generation of rats and mice stirred up no debate. The Dutch biologist and microscopist Jan Swammerdam rejected the concept that one animal could arise from another or from putrefaction by chance because it was impious; he found the concept of spontaneous generation irreligious, and he associated it with atheism.
Previous beliefs:
Frogs were believed to have spontaneously generated from mud.
Mice were believed to become pregnant through the act of licking salt, or to grow from the moisture of the earth.
Barnacle geese were thought to have emerged from a crustacean, the goose barnacle (see the barnacle goose myth).
Snakes could generate from the marrow of the human spine, and had previously generated from the blood of Medusa.
Previous beliefs:
Eels had multiple stories. Aristotle claimed that eels emerged from earthworms, and were lacking in sex and milt, spawn and passages for these. Later authors dissented. The Roman author and natural historian Pliny the Elder did not argue against the anatomic limits of eels, but stated that eels reproduce by budding, scraping themselves against rocks, liberating particles that become eels. The Greek author Athenaeus described eels as entwining and discharging a fluid which would settle on mud and generate life.
Previous beliefs:
Bookworms could generate from excessive wind. Vitruvius, a Roman architect and writer of the 1st century BCE, advised that to stop their generation, libraries be placed facing eastwards to benefit from morning light, but not towards the south or the west as those winds were particularly offensive.
Bees were generated in decomposing cows, through a process known as bugonia. Samson's riddle led some to believe they could also generate through the body of a lion.
Wasps could be generated from decomposing horses.
Cicadas were generated from the spittle of the cuckoo.
Experimental approach:
Early tests: The Brussels physician Jan Baptist van Helmont described a recipe for mice (a piece of dirty cloth plus wheat for 21 days) and scorpions (basil, placed between two bricks and left in sunlight). His notes suggest he may have attempted to do these things. Where Aristotle held that the embryo was formed by a coagulation in the uterus, the English physician William Harvey showed by way of dissection of deer that there was no visible embryo during the first month. Although his work predated the microscope, this led him to suggest that life came from invisible eggs. In the frontispiece of his 1651 book Exercitationes de Generatione Animalium (Essays on the Generation of Animals), he denied spontaneous generation with the motto omnia ex ovo ("everything from eggs").
Experimental approach:
The ancient beliefs were subjected to testing. In 1668, the Italian physician and parasitologist Francesco Redi challenged the idea that maggots arose spontaneously from rotting meat. In the first major experiment to challenge spontaneous generation, he placed meat in a variety of sealed, open, and partially covered containers. Realizing that the sealed containers were deprived of air, he used "fine Naples veil", and observed no worms on the meat, but they appeared on the cloth. Redi used his experiments to support the preexistence theory put forth by the Catholic Church at that time, which maintained that living things originated from parents. In scientific circles Redi's work very soon had great influence, as evidenced in a letter from the English natural theologian John Ray in 1671 to members of the Royal Society of London, in which he calls the spontaneous generation of insects "unlikely". Pier Antonio Micheli, c. 1729, observed that when fungal spores were placed on slices of melon, the same type of fungi were produced that the spores came from, and from this observation he noted that fungi did not arise from spontaneous generation. In 1745, John Needham performed a series of experiments on boiled broths. Believing that boiling would kill all living things, he showed that when sealed right after boiling, the broths would cloud, allowing the belief in spontaneous generation to persist. His studies were rigorously scrutinized by his peers, and many of them agreed. Lazzaro Spallanzani modified the Needham experiment in 1768, where he attempted to exclude the possibility of introducing a contaminating factor between boiling and sealing. His technique involved boiling the broth in a sealed container with the air partially evacuated to prevent explosions. Although he did not see growth, the exclusion of air left the question of whether air was an essential factor in spontaneous generation. But attitudes were changing; by the start of the 19th century, a scientist such as Joseph Priestley could write that "There is nothing in modern philosophy that appears to me so extraordinary, as the revival of what has long been considered as the exploded doctrine of equivocal, or, as Dr. Darwin calls it, spontaneous generation." In 1837, Charles Cagniard de la Tour, a physicist, and Theodor Schwann, one of the founders of cell theory, published their independent discovery of yeast in alcoholic fermentation. They used the microscope to examine foam left over from the process of brewing beer. Where the Dutch microscopist Antonie van Leeuwenhoek described "small spheroid globules", they observed yeast cells undergo cell division. Fermentation would not occur when sterile air or pure oxygen was introduced if yeast were not present. This suggested that airborne microorganisms, not spontaneous generation, were responsible. However, although the idea of spontaneous generation had been in decline for nearly a century, its supporters did not abandon it all at once. As James Rennie wrote in 1838, despite Redi's experiments, "distinguished naturalists, such as Blumenbach, Cuvier, Bory de St. Vincent, R. Brown, &c." continued to support the theory.
Experimental approach:
Pasteur and Tyndall: Louis Pasteur's 1859 experiment is widely seen as having settled the question of spontaneous generation. He boiled a meat broth in a swan neck flask; the bend in the neck of the flask prevented falling particles from reaching the broth, while still allowing the free flow of air. The flask remained free of growth for an extended period. When the flask was turned so that particles could fall down the bends, the broth quickly became clouded. However, minority objections were persistent and not always unreasonable, given that the experimental difficulties were far more challenging than the popular accounts suggest. The investigations of the Irish physician John Tyndall, a correspondent of Pasteur and an admirer of his work, were decisive in disproving spontaneous generation. All the same, Tyndall encountered difficulties in dealing with microbial spores, which were not well understood in his day. Like Pasteur, he boiled his cultures to sterilize them, and some types of bacterial spores can survive boiling. The autoclave, which eventually came into universal application in medical practice and microbiology to sterilise equipment, was introduced after these experiments. In 1862, the French Academy of Sciences paid special attention to the issue, establishing a prize "to him who by well-conducted experiments throws new light on the question of the so-called spontaneous generation" and appointed a commission to judge the winner. Pasteur and others used the term biogenesis as the opposite of spontaneous generation, to mean that life was generated only from other life. Pasteur's claim followed the German physician Rudolf Virchow's doctrine Omnis cellula e cellula ("all cells from cells"), itself derived from the work of Robert Remak. After Pasteur's 1859 experiment, the term "spontaneous generation" fell out of favor. Experimentalists used a variety of terms for the study of the origin of life from nonliving materials. Heterogenesis was applied to the generation of living things from once-living organic matter (such as boiled broths), and the English physiologist Henry Charlton Bastian proposed the term archebiosis for life originating from non-living materials. Disliking the randomness and unpredictability implied by the term spontaneous generation, in 1870 Bastian coined the term biogenesis for the formation of life from nonliving matter. Soon thereafter, however, the English biologist Thomas Henry Huxley proposed the term abiogenesis for this same process, and adopted biogenesis for the process by which life arises from existing life. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Law of truly large numbers**
Law of truly large numbers:
The law of truly large numbers (a statistical adage), attributed to Persi Diaconis and Frederick Mosteller, states that with a large enough number of independent samples, any highly implausible (i.e. unlikely in any single sample, but with constant probability strictly greater than 0 in any sample) result is likely to be observed. Because we never find it notable when likely events occur, we highlight unlikely events and notice them more. The law is often used to falsify different pseudo-scientific claims; as such, it is sometimes criticized by fringe scientists. A similar but bolder theorem (for infinite numbers of trials) is the infinite monkey theorem, which shows that any finite pattern can be obtained through an infinite random process (though there is skepticism about the physical applicability of such arrangements, given the finite nature of the observable universe). The law is meant to make a statement about probabilities and statistical significance: in large enough masses of statistical data, even minuscule fluctuations attain statistical significance. Thus, in truly large numbers of observations, it is paradoxically easy to find significant correlations which still do not lead to causal theories (see: spurious correlation), and which, by their collective number, might lead to obfuscation as well.
Law of truly large numbers:
The law can be rephrased as "large numbers also deceive", something which is counter-intuitive to a descriptive statistician. More concretely, skeptic Penn Jillette has said, "Million-to-one odds happen eight times a day in New York" (population about 8,000,000).
Examples:
For a simplified example of the law, assume that a given event happens with a probability of 0.1% within a single trial. Then, the probability that this so-called unlikely event does not happen (its improbability) in a single trial is 99.9% (0.999).
Examples:
For a sample of only 1000 independent trials, however, the probability that the event does not happen in any of them, even once (improbability), is only 0.999^1000 ≈ 0.3677, or 36.77%. Then, the probability that the event does happen, at least once, in 1000 trials is 1 − 0.999^1000 ≈ 0.6323, or 63.23%. This means that this "unlikely event" has a probability of 63.23% of happening if 1000 independent trials are conducted. If the number of trials were increased to 10,000, the probability of the event happening at least once rises to 1 − 0.999^10000 ≈ 0.99995, or 99.995%. In other words, a highly unlikely event, given enough independent trials, becomes very likely to occur at least once.
Examples:
For an event X that occurs with a very low probability of 0.0000001% (in any single sample; see also almost never), considering 1,000,000,000 as a "truly large" number of independent samples gives the probability of occurrence of X equal to 1 − 0.999999999^1,000,000,000 ≈ 0.63 = 63%, and a number of independent samples equal to the size of the human population (in 2021) gives a probability of event X of 1 − 0.999999999^7,900,000,000 ≈ 0.9996 = 99.96%. These calculations can be formalized in mathematical language as: "the probability of an unlikely event X happening in N independent trials can become arbitrarily near to 1, no matter how small the probability of the event X in one single trial is, provided that N is truly large." For an example where the probability of the unlikely event X is not a small constant but decreases as a function of N, see the graph. In sexual reproduction, the chance for a microscopic, single spermatozoon to reach the ovum in order to fertilize it is very small. Thus, in every encounter, spermatozoa are released in numbers of millions at once (in mammals), raising the chances of fecundation to a nearly certain event.
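The figures quoted in these examples all follow from the single formula 1 − (1 − p)^N for the chance of at least one occurrence in N independent trials; the short check below simply re-computes them (the numbers are those given above, not new results).

```python
def at_least_once(p: float, n: int) -> float:
    # Probability of at least one occurrence in n independent trials.
    return 1 - (1 - p) ** n

print(round(at_least_once(0.001, 1_000), 4))         # ~0.6323
print(round(at_least_once(0.001, 10_000), 5))        # ~0.99995
print(round(at_least_once(1e-9, 1_000_000_000), 2))  # ~0.63
print(round(at_least_once(1e-9, 7_900_000_000), 4))  # ~0.9996
```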
Examples:
In high-availability systems even very unlikely events have to be taken into consideration. In series systems, even when the probability of failure for a single element is very low, connecting many such elements raises the probability of whole-system failure. To make system failures less probable, redundancy can be used: in such parallel systems, even highly unreliable redundant parts, connected in large numbers, raise the probability of the system not breaking to the required high level.
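A minimal sketch of that reliability point, assuming independent components and using illustrative failure probabilities chosen only for this example: a series system fails if any one part fails, while a parallel (redundant) system fails only if every part fails.

```python
def series_failure(p_fail: float, n: int) -> float:
    # A series system fails if any of its n independent parts fails.
    return 1 - (1 - p_fail) ** n

def parallel_failure(p_fail: float, n: int) -> float:
    # A parallel (redundant) system fails only if all n independent parts fail.
    return p_fail ** n

print(round(series_failure(1e-4, 10_000), 3))  # many reliable parts in series -> ~0.632
print(f"{parallel_failure(0.9, 100):.1e}")     # many unreliable spares in parallel -> 2.7e-05
```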
In criticism of pseudoscience:
The law comes up in criticism of pseudoscience and is sometimes called the Jeane Dixon effect (see also Postdiction). It holds that the more predictions a psychic makes, the better the odds that one of them will "hit". Thus, if one comes true, the psychic expects us to forget the vast majority that did not happen (confirmation bias). Humans can be susceptible to this fallacy.
In criticism of pseudoscience:
Another similar manifestation of the law can be found in gambling, where gamblers tend to remember their wins and forget their losses, even if the latter far outnumber the former (though, depending on the person, the opposite may also be true, when they think they need more analysis of their losses to fine-tune their playing system). Mikal Aasved links it with "selective memory bias", allowing gamblers to mentally distance themselves from the consequences of their gambling by holding an inflated view of their real winnings (or losses in the opposite case – "selective memory bias in either direction"). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Diacylglycerol oil**
Diacylglycerol oil:
Diacylglycerol oil (DAG oil) is a cooking oil in which the ratio of triglycerides, also known as Triacylglycerols (TAGs), to diacylglycerols (DAGs) is shifted to contain mostly DAG, unlike conventional cooking oils, which are rich in TAGs. Vegetable DAG oil, for example, contains 80% DAG and is used as a 1:1 replacement for liquid vegetable oils in all applications.
How it works:
DAGs and TAGs are natural components of all vegetable oils. Through an enzymatic process, the DAG content of a combination of soy and canola oils is significantly increased. Unlike TAG, which is stored as body fat, DAG is immediately burned as energy. With DAG-rich oil containing more than 80% DAG, less of the oil is stored as body fat than with traditional oils, which are rich in TAG. Excess calories consumed by the body are converted into fat and stored, regardless of whether they are consumed as DAG or TAG.
How it works:
Studies According to a 2007 study, diacylglycerol (DAG) is naturally present in vegetable oil. A study in 2004 indicated that DAG oil is effective for both fasting and postprandial hyperlipidemia; according to the same study, it helped prevent excess adiposity.
FDA designation:
DAG oil was designated as generally recognized as safe (GRAS) by an outside panel of scientific experts, and their conclusion has been reviewed and accepted by the US Food and Drug Administration (FDA). This GRAS determination is for use in vegetable oil spreads and home cooking oil. In Japan, the Ministry of Health, Labor and Welfare has approved DAG oil to manage serum triglycerides after a meal, which leads to less build-up of body fat.
Side effects:
Because DAG oil is digested the same way as conventional vegetable oils, the potential side effects are no different than those of conventional oil. In addition, studies with animals and human subjects have shown no adverse effects from single-dose or long-term consumption of DAG-rich oil. It has also been found that fat-soluble vitamins' status is not affected by the consumption of DAG-rich oil.
Research:
Studies indicate that DAG oil has numerous health benefits, including reducing post-meal blood triglyceride levels. Clinical studies in Japan have also shown that DAG oil may increase overall metabolism, helping reduce the amount of fat already stored in the body.
Sales suspended voluntarily:
On September 16, 2009, Kao Corporation, maker of Econa Cooking Oil, voluntarily suspended sales of products containing DAG oil in Japan, including cooking oil, mayonnaise, salad dressing, and pet food products. The company was also reported to be considering suspending sales of Enova brand oil sold in North America. On the same day, Hagoromo Foods, maker of the Sea Chicken brand of canned tuna, and Satonoyuki, maker of tofu products, voluntarily suspended a number of products made with Econa Cooking Oil sold in Japan. In its press release announcing the temporary suspension of the Econa line of products, Kao cites questions raised by European researchers about the uncertain health effects of glycidyl fatty acid esters (GE). It states that the GE contained in the products is introduced as a by-product of the deodorization process, but maintains that the main ingredient, DAG (diacylglycerol), is proven safe, and says it plans to resume sales after reducing the amount of GE introduced by its production method. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Collins glass**
Collins glass:
A Collins glass is a glass tumbler which typically holds 300 to 410 millilitres (10 to 14 US fl oz). It is commonly used to serve sparkling cocktails, especially long drinks like the Tom Collins or John Collins. Its cylindrical shape, narrower and taller than a highball glass, keeps the drink carbonated longer by reducing the surface area of the drink. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Threshold potential**
Threshold potential:
In electrophysiology, the threshold potential is the critical level to which a membrane potential must be depolarized to initiate an action potential. In neuroscience, threshold potentials are necessary to regulate and propagate signaling in both the central nervous system (CNS) and the peripheral nervous system (PNS).
Threshold potential:
Most often, the threshold potential is a membrane potential value between –50 and –55 mV, but can vary based upon several factors. A neuron's resting membrane potential (–70 mV) can be altered to either increase or decrease likelihood of reaching threshold via sodium and potassium ions. An influx of sodium into the cell through open, voltage-gated sodium channels can depolarize the membrane past threshold and thus excite it while an efflux of potassium or influx of chloride can hyperpolarize the cell and thus inhibit threshold from being reached.
Discovery:
Initial experiments revolved around the concept that any electrical change that is brought about in neurons must occur through the action of ions. The German physical chemist Walther Nernst applied this concept in experiments to discover nervous excitability, and concluded that the local excitatory process through a semi-permeable membrane depends upon the ionic concentration. Also, ion concentration was shown to be the limiting factor in excitation. If the proper concentration of ions was attained, excitation would certainly occur. This was the basis for discovering the threshold value.
Discovery:
Along with reconstructing the action potential in the 1950s, Alan Lloyd Hodgkin and Andrew Huxley were also able to experimentally determine the mechanism behind the threshold for excitation. It is known as the Hodgkin–Huxley model. Through use of voltage clamp techniques on a squid giant axon, they discovered that excitable tissues generally exhibit the phenomenon that a certain membrane potential must be reached in order to fire an action potential. Since the experiment yielded results through the observation of ionic conductance changes, Hodgkin and Huxley used these terms to discuss the threshold potential. They initially suggested that there must be a discontinuity in the conductance of either sodium or potassium, but in reality both conductances tended to vary smoothly along with the membrane potential. They soon discovered that at threshold potential, the inward and outward currents, of sodium and potassium ions respectively, were exactly equal and opposite. As opposed to the resting membrane potential, the threshold potential's conditions exhibited a balance of currents that were unstable. Instability refers to the fact that any further depolarization activates even more voltage-gated sodium channels, and the incoming sodium depolarizing current overcomes the delayed outward current of potassium. At resting level, on the other hand, the potassium and sodium currents are equal and opposite in a stable manner, where a sudden, continuous flow of ions should not result. The basis is that at a certain level of depolarization, when the currents are equal and opposite in an unstable manner, any further entry of positive charge generates an action potential. This specific value of depolarization (in mV) is otherwise known as the threshold potential.
Physiological function and characteristics:
The threshold value controls whether or not the incoming stimuli are sufficient to generate an action potential. It relies on a balance of incoming inhibitory and excitatory stimuli. The potentials generated by the stimuli are additive, and they may reach threshold depending on their frequency and amplitude. Normal functioning of the central nervous system entails a summation of synaptic inputs made largely onto a neuron's dendritic tree. These local graded potentials, which are primarily associated with external stimuli, reach the axonal initial segment and build until they manage to reach the threshold value. The larger the stimulus, the greater the depolarization, or attempt to reach threshold. The task of depolarization requires several key steps that rely on anatomical factors of the cell. The ion conductances involved depend on the membrane potential and also the time after the membrane potential changes.
Physiological function and characteristics:
Resting membrane potential The phospholipid bilayer of the cell membrane is, in itself, highly impermeable to ions. The complete structure of the cell membrane includes many proteins that are embedded in or completely cross the lipid bilayer. Some of those proteins, ion channels, allow for the highly specific passage of ions. Leak potassium channels allow potassium to flow through the membrane in response to the disparity in concentrations of potassium inside (high concentration) and outside the cell (low). The loss of positive (+) charges of the potassium (K+) ions from the inside of the cell results in a negative potential there compared to the extracellular surface of the membrane. A much smaller "leak" of sodium (Na+) into the cell results in the actual resting potential, about –70 mV, being less negative than the calculated potential for K+ alone, the equilibrium potential, about –90 mV (see the equation below). The sodium-potassium ATPase is an active transporter within the membrane that pumps potassium (2 ions) back into the cell and sodium (3 ions) out of the cell, maintaining the concentrations of both ions as well as preserving the voltage polarization.
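The equilibrium potentials cited here (about –90 mV for K+, and the sodium equilibrium potential of about +52 mV mentioned in the next paragraph) are computed with the Nernst equation. The form below is standard textbook material rather than something stated in this article; the 61.5 mV factor assumes a body temperature of 37 °C.

```latex
% Nernst equation for the equilibrium potential of an ion X with valence z
% R: gas constant, T: absolute temperature, F: Faraday constant
E_X \;=\; \frac{RT}{zF}\,\ln\frac{[X]_{\mathrm{out}}}{[X]_{\mathrm{in}}}
\;\approx\; \frac{61.5\ \mathrm{mV}}{z}\,\log_{10}\frac{[X]_{\mathrm{out}}}{[X]_{\mathrm{in}}}
\qquad (T = 310\ \mathrm{K})
```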
Physiological function and characteristics:
Depolarization However, once a stimulus activates the voltage-gated sodium channels to open, positive sodium ions flood into the cell and the voltage increases. This process can also be initiated by ligand or neurotransmitter binding to a ligand-gated channel. More sodium is outside the cell relative to the inside, and the positive charge within the cell propels the outflow of potassium ions through delayed-rectifier voltage-gated potassium channels. Since the potassium channels within the cell membrane are delayed, any further entrance of sodium activates more and more voltage-gated sodium channels. Depolarization above threshold results in an increase in the conductance of Na sufficient for inward sodium movement to swamp outward potassium movement immediately. If the influx of sodium ions fails to reach threshold, then sodium conductance does not increase a sufficient amount to override the resting potassium conductance. In that case, subthreshold membrane potential oscillations are observed in some types of neurons. If successful, the sudden influx of positive charge depolarizes the membrane, and potassium is delayed in re-establishing, or hyperpolarizing, the cell. Sodium influx depolarizes the cell in an attempt to establish its own equilibrium potential (about +52 mV) to make the inside of the cell more positive relative to the outside.
Physiological function and characteristics:
Variations The value of threshold can vary according to numerous factors. Changes in the ion conductances of sodium or potassium can lead to either a raised or lowered value of threshold. Additionally, the diameter of the axon, the density of voltage-activated sodium channels, and the properties of sodium channels within the axon all affect the threshold value. Typically in the axon or dendrite, there are small depolarizing or hyperpolarizing signals resulting from a prior stimulus. The passive spread of these signals depends on the passive electrical properties of the cell. The signals can only continue along the neuron to cause an action potential further down if they are strong enough to make it past the cell's membrane resistance and capacitance. For example, a neuron with a large diameter has more ionic channels in its membrane than a smaller cell, resulting in a lower resistance to the flow of ionic current. The current spreads quicker in a cell with less resistance, and is more likely to reach the threshold at other portions of the neuron. The threshold potential has also been shown experimentally to adapt to slow changes in input characteristics by regulating sodium channel density as well as inactivating these sodium channels overall. Hyperpolarization by the delayed-rectifier potassium channels causes a relative refractory period that makes it much more difficult to reach threshold. The delayed-rectifier potassium channels are responsible for the late outward phase of the action potential, where they open at a different voltage stimulus compared to the quickly activated sodium channels. They rectify, or repair, the balance of ions across the membrane by opening and letting potassium flow down its concentration gradient from inside to outside the cell. They close slowly as well, resulting in an outward flow of positive charge that exceeds the balance necessary. The result is excess negativity in the cell, requiring an extremely large stimulus and resulting depolarization to cause a response.
Tracking techniques:
Threshold tracking techniques test nerve excitability, and depend on the properties of axonal membranes and sites of stimulation. They are extremely sensitive to the membrane potential and changes in this potential. These tests can measure and compare a control threshold (or resting threshold) to a threshold produced by a change in the environment, by a preceding single impulse, an impulse train, or a subthreshold current. Measuring changes in threshold can indicate changes in membrane potential, axonal properties, and/or the integrity of the myelin sheath.
Tracking techniques:
Threshold tracking allows for the strength of a test stimulus to be adjusted by a computer in order to activate a defined fraction of the maximal nerve or muscle potential. A threshold tracking experiment consists of a 1-ms stimulus being applied to a nerve at regular intervals. The action potential is recorded downstream from the triggering impulse. The stimulus is automatically decreased in steps of a set percentage until the response falls below the target (generation of an action potential). Thereafter, the stimulus is stepped up or down depending on whether the previous response was smaller or greater than the target response, until a resting (or control) threshold has been established (see the sketch below). Nerve excitability can then be changed by altering the nerve environment or applying additional currents. Since the value of a single threshold current provides little valuable information because it varies within and between subjects, pairs of threshold measurements, comparing the control threshold to thresholds produced by refractoriness, supernormality, strength-duration time constant or "threshold electrotonus", are more useful in scientific and clinical study. Tracking threshold has advantages over other electrophysiological techniques, like the constant stimulus method. This technique can track threshold changes within a dynamic range of 200% and in general gives more insight into axonal properties than other tests. Also, this technique allows for changes in threshold to be given a quantitative value, which, when mathematically converted into a percentage, can be used to compare single fiber and multifiber preparations, different neuronal sites, and nerve excitability in different species.
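A hedged sketch of the stepping procedure just described; `measure_response` stands in for the actual downstream recording, and the step size, target response, and iteration count are hypothetical parameters, not values taken from the text:

```python
def track_threshold(measure_response, target, stimulus, step_pct=2.0, n_steps=200):
    """Proportional threshold tracking: nudge the 1-ms test stimulus up or down by a
    fixed percentage, depending on whether the recorded response exceeds the target."""
    for _ in range(n_steps):
        response = measure_response(stimulus)   # e.g. compound action potential amplitude
        if response > target:
            stimulus *= 1.0 - step_pct / 100.0  # response too large: weaken the stimulus
        else:
            stimulus *= 1.0 + step_pct / 100.0  # response too small (or absent): strengthen it
    return stimulus  # estimate of the resting (control) threshold current
```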
Tracking techniques:
"Threshold electrotonus" A specific threshold tracking technique is threshold electrotonus, which uses the threshold tracking set-up to produce long-lasting subthreshold depolarizing or hyperpolarizing currents within a membrane. Changes in cell excitability can be observed and recorded by creating these long-lasting currents. Threshold decrease is evident during extensive depolarization, and threshold increase is evident with extensive hyperpolarization. With hyperpolarization, there is an increase in the resistance of the internodal membrane due to closure of potassium channels, and the resulting plot "fans out". Depolarization produces has the opposite effect, activating potassium channels, producing a plot that "fans in".The most important factor determining threshold electrotonus is membrane potential, so threshold electrotonus can also be used as an index of membrane potential. Furthermore, it can be used to identify characteristics of significant medical conditions through comparing the effects of those conditions on threshold potential with the effects viewed experimentally. For example, ischemia and depolarization cause the same "fanning in" effect of the electrotonus waveforms. This observation leads to the conclusion that ischemia may result from over-activation of potassium channels.
Clinical significance:
The role of the threshold potential has been implicated in a clinical context, namely in the functioning of the nervous system itself as well as in the cardiovascular system.
Clinical significance:
Febrile seizures A febrile seizure, or "fever fit", is a convulsion associated with a significant rise in body temperature, occurring most commonly in early childhood. Repeated episodes of childhood febrile seizures are associated with an increased risk of temporal lobe epilepsy in adulthood. With patch clamp recording, an analogous state was replicated in vitro in rat cortical neurons after induction of febrile body temperatures; a notable decrease in threshold potential was observed. The mechanism for this decrease possibly involves suppression of inhibition mediated by the GABAB receptor with excessive heat exposure.
Clinical significance:
ALS and diabetes Abnormalities in neuronal excitability have been noted in amyotrophic lateral sclerosis and diabetes patients. While the mechanism ultimately responsible for the variance differs between the two conditions, tests through a response to ischemia indicate a similar resistance, ironically, to ischemia and resulting paresthesias. As ischemia occurs through inhibition of the sodium-potassium pump, abnormalities in the threshold potential are hence implicated.
Clinical significance:
Arrhythmia Since the 1940s, the concept of diastolic depolarization, or "pacemaker potential", has become established; this mechanism is a characteristic distinctive of cardiac tissue. When the threshold is reached and the resulting action potential fires, a heartbeat results; however, when this heartbeat occurs at an irregular time, a potentially serious condition known as arrhythmia may result.
Clinical significance:
Use of medications A variety of drugs can present prolongation of the QT interval as a side effect. Prolongation of this interval is a result of a delay in sodium and calcium channel inactivation; without proper channel inactivation, the threshold potential is reached prematurely and thus arrhythmia tends to result. These drugs, known as pro-arrhythmic agents, include antimicrobials, antipsychotics, methadone, and, ironically, antiarrhythmic agents. The use of such agents is particularly frequent in intensive care units, and special care must be exercised when QT intervals are prolonged in such patients: arrhythmias as a result of prolonged QT intervals include the potentially fatal torsades de pointes, or TdP.
Clinical significance:
Role of diet Diet may be a variable in the risk of arrhythmia. Polyunsaturated fatty acids, found in fish oils and several plant oils, serve a role in the prevention of arrhythmias. By inhibiting the voltage-dependent sodium current, these oils shift the threshold potential to a more positive value; therefore, an action potential requires increased depolarization. Clinically therapeutic use of these extracts remains a subject of research, but a strong correlation is established between regular consumption of fish oil and lower frequency of hospitalization for atrial fibrillation, a severe and increasingly common arrhythmia. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scrubber (brush)**
Scrubber (brush):
A scrubber (German: Schrubber) is a type of wide brush with a long shaft used for cleaning hard floors or surfaces. Unlike a broom, which has soft bristles to sweep dirt away, a scrubber has hard bristles for brushing. It may therefore be used wet, with water or cleaning fluids. Around the brush head there may also be a removable floorcloth or mop, either soaked in water for cleaning or dry for wiping the surface dry. However, these days other cleaning implements tend to be used for such purposes.
Scrubber (brush):
In North Germany and in sailor's language, a scrubber is also called a Leuwagen, hence in large firms or offices a cleaning party is sometimes jokingly called a Leuwagenballett ("scrubber ballet"). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Church's thesis (constructive mathematics)**
Church's thesis (constructive mathematics):
In constructive mathematics, Church's thesis CT is an axiom stating that all total functions are computable functions.
Church's thesis (constructive mathematics):
The similarly named Church–Turing thesis states that every effectively calculable function is a computable function, thus collapsing the former notion into the latter. CT is stronger in the sense that with it every function is computable. The constructivist principle is fully formalizable, using formalizations of "function" and "computable" that depend on the theory considered. A common context is recursion theory as established since the 1930s.
Church's thesis (constructive mathematics):
Adopting CT as a principle, for a predicate of the form of a family of existence claims (e.g. ∃!y.φ(x,y) below) that is proven not to be validated for all x in a computable manner, the contrapositive of the axiom implies that it is then not validated by any total function (i.e. no mapping corresponding to x↦y). The axiom thus collapses the possible scope of the notion of function compared to the underlying theory, restricting it to the defined notion of computable function. The axiom in turn affects one's proof calculus, negating some common classical propositions.
Church's thesis (constructive mathematics):
The axiom is incompatible with systems that validate total functional value associations and evaluations that are also proven not to be computable. For example, Peano arithmetic PA is such a system. Concretely, the constructive Heyting arithmetic HA with the thesis in its first-order formulation, CT0 , as an additional axiom is able to disprove some universally closed variants of instances of the principle of the excluded middle. It is in this way that the axiom is shown incompatible with PA . However, HA is equiconsistent with both PA as well as with the theory given by HA plus CT0 . That is, adding either the law of the excluded middle or Church's thesis does not make Heyting arithmetic inconsistent, but adding both does.
Formal statement:
This principle has formalizations in various mathematical frameworks. Let T1 denote Kleene's T predicate, so that e.g. validity of the predicate ∀x∃wT1(e,x,w) expresses that e is the index of a total computable function. Note that there are also variations on T1 and the value-extracting U, as functions with return values; here they are expressed as primitive recursive predicates. Write TU(e,x,w,y) to abbreviate T1(e,x,w)∧U(w,y), as the value y plays a role in the principle's formulations. So the computable function with index e terminates on x with value y iff ∃wTU(e,x,w,y). This Σ10-predicate on triples e,x,y may be expressed by {e}(x)=y, at the cost of introducing notation involving the sign also used for arithmetic equality. In first-order theories such as HA, which cannot quantify over relations and functions directly, CT may be stated as an axiom schema saying that for any definable total relation, which comprises a family of valid existence claims ∃y, the latter are computable in the sense of TU. For each formula φ(x,y) of two variables, the schema CT0 includes the axiom (∀x∃yφ(x,y))→∃e(∀x∃y∃wTU(e,x,w,y)∧φ(x,y)) In words: If for every x there is a y satisfying φ, then there is in fact an e that is the Gödel number of a partial recursive function that will, for every x, produce such a y satisfying the formula - and with some w being a Gödel number encoding a verifiable computation bearing witness to the fact that y is in fact the value of that function at x. Relatedly, implications of this form may instead also be established as constructive meta-theoretical properties of theories. I.e. the theory need not necessarily prove the implication (a formula with →), but the existence of e is meta-logically validated. A theory is then said to be closed under the rule.
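For readability, the schema CT0 just stated may be typeset as follows, with the abbreviation TU unfolded into Kleene's T1 and the result-extracting U (this is only a re-rendering of the formula above, not an additional assumption):

```latex
% Church's thesis CT_0 as an axiom schema, for each formula \varphi(x,y):
\bigl(\forall x\,\exists y\;\varphi(x,y)\bigr)
\;\rightarrow\;
\exists e\,\forall x\,\exists y\,\exists w\,
\bigl(T_1(e,x,w)\wedge U(w,y)\wedge\varphi(x,y)\bigr)
```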
Formal statement:
Variants Extended Church's thesis The statement ECT0 extends the claim to relations which are defined and total over a certain type of domain. This may be achieved by narrowing the scope of the universal quantifier, and so it can be formally stated by the schema: (∀xψ(x)→∃yφ(x,y))→∃e(∀xψ(x)→∃y∃wTU(e,x,w,y)∧φ(x,y)) In the above, ψ is restricted to be almost-negative. For first-order arithmetic (where the schema is designated ECT0), this means ψ cannot contain any disjunction, and existential quantifiers can only appear in front of Δ00 (decidable) formulas. In the presence of Markov's principle MP, the syntactical restrictions may be somewhat loosened. When considering the domain of all numbers (e.g. when taking ψ(x) to be the trivial x=x), the above reduces to the previous form of Church's thesis.
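As with CT0, the extended schema may be typeset more explicitly (again only a re-rendering of the formula above, with ψ almost-negative):

```latex
% Extended Church's thesis ECT_0, for almost-negative \psi(x) and arbitrary \varphi(x,y):
\bigl(\forall x\,(\psi(x)\rightarrow\exists y\;\varphi(x,y))\bigr)
\;\rightarrow\;
\exists e\,\forall x\,\Bigl(\psi(x)\rightarrow
\exists y\,\exists w\,\bigl(T_1(e,x,w)\wedge U(w,y)\wedge\varphi(x,y)\bigr)\Bigr)
```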
Formal statement:
These first-order formulations are fairly strong in that they also constitute a form of function choice: total relations contain total recursive functions.
The extended Church's thesis is used by the school of constructive mathematics founded by Andrey Markov.
Functional premise CT0! denotes the weaker variant of the principle in which the premise demands unique existence (of y ), i.e. the return value already has to be determined.
Higher order formulation The first formulation of the thesis above is also called the arithmetical form of the principle, since only quantifiers over numbers appear in its formulation. It uses a general relation φ in its antecedent.
In a framework like recursion theory, a function may be represented as a functional relation, granting a unique output value for every input.
Formal statement:
In higher-order systems that can quantify over (total) functions directly, a form of CT can be stated as a single axiom, saying that every function from the natural numbers to the natural numbers is computable. In terms of the primitive recursive predicates, ∀f∃e(∀x∃wTU(e,x,w,f(x))) This postulates that all functions f are computable, in the Kleene sense, with an index e in the theory. Thus, so are all values y=f(x). One may write ∀f∃e f≅{e} with f≅g denoting extensional equality ∀x.f(x)=g(x). For example, in set theory, functions are elements of function spaces and are total functional relations by definition. A total function has a unique return value for every input in its domain. Since functions are sets, set theory has quantifiers that range over functions.
Formal statement:
The principle can be regarded as the identification of the space N^N with the collection of total recursive functions. In realizability topoi, this exponential object of the natural numbers object can also be identified with less restrictive collections of maps.
Weaker statements There are weaker forms of the thesis, variously called WCT . By inserting a double negation before the index existence claim in the higher order version, it is asserted that there are no non-recursive functions. This still restricts the space of functions while not constituting a function choice axiom.
A related statement is that any decidable subset of the naturals cannot be ruled out to be computable, in the sense that (∀xχ(x)∨¬χ(x))→¬¬∃e(∀x(∃wT1(e,x,w))↔χ(x)). The contrapositive of this puts any non-computable predicate in violation of excluded middle, so this is still generally anti-classical.
Unlike CT0 , as a principle this is compatible with formulations of the fan theorem.
Variants for related premises ∀x ψleft(x)∨ψright(x) may be defined, e.g. a principle always granting the existence of a total recursive function N→{left,right} into some discrete binary set that validates one of the disjuncts. Without the double negation, this may be denoted CT0∨.
Relationship to classical logic:
The schema CT0, when added to constructive systems such as HA, implies the negation of the universally quantified version of the law of the excluded middle for some predicates. As an example, the halting problem is provenly not computably decidable, but assuming classical logic it is a tautology that every Turing machine either halts or does not halt on a given input. Further assuming Church's thesis, one in turn concludes that this is computable - a contradiction. In more detail: In sufficiently strong systems, such as HA, it is possible to express the relation h associated with the halting question, relating any code from an enumeration of Turing machines and values from {0,1}. Assuming the classical tautology above, this relation can be proven total, i.e. it constitutes a function that returns 1 if the machine halts and 0 if it does not halt. Thus HA together with CT0 disproves some consequence of the law of the excluded middle. Principles like the double negation shift (commutativity of universal quantification with a double negation) are also rejected by the principle.
Relationship to classical logic:
The single axiom form of CT with ∀f above is consistent with some weak classical systems that do not have the strength to form functions such as the function h of the previous paragraph. For example, the classical weak second-order arithmetic RCA0 is consistent with this single axiom, because RCA0 has a model in which every function is computable. However, the single-axiom form becomes inconsistent with excluded middle in any system that has axioms sufficient to prove existence of functions such as the function h. E.g., adoption of variants of countable choice, such as unique choice for the numerical quantifiers, ∀n∃!mϕ(n,m)→∃a∀kϕ(k,a_k), where a denotes a sequence, spoils this consistency.
Relationship to classical logic:
The first-order formulation CT0 already subsumes the power of such a function comprehension principle via enumerated functions.
Constructively formulated subtheories of ZF can typically be shown to be closed under Church's rule, and the corresponding principle is thus compatible with them. But as an implication (→) it cannot be proven by such theories, as that would render the stronger, classical theory inconsistent.
Realizers and metalogic:
The above thesis can be characterized as saying that a sentence is true iff it is computably realisable.
Realizers and metalogic:
This is captured by the following metatheoretic equivalences: HA+ECT0 ⊢ φ↔∃n(n⊩φ) and HA+ECT0 ⊢ φ ⟺ HA ⊢ ∃n(n⊩φ). Here "↔" is just the equivalence in the arithmetic theory, while "⟺" denotes the metatheoretical equivalence. For given φ, the predicate n⊩φ is read as "n realises φ". In words, the first result above states that it is provable in HA plus ECT0 that a sentence is true iff it is realisable. Also, the second result above states that φ is provable in HA plus ECT0 iff φ is provably realisable in just HA. For the next metalogical theorem, recall that PA is non-constructive and lacks the existence property, whereas Heyting arithmetic exhibits it: if HA ⊢ ∃n.ϕ(n), then there exists a number n such that HA ⊢ ϕ(n_). The second equivalence above can be extended with MP as follows: HA+ECT0+MP ⊢ φ ⟺ there exists a number n such that PA ⊢ (n_⊩φ). The existential quantifier needs to be outside PA in this case.
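A possible typeset rendering of these equivalences, including the MP-extended version (n_ is the standard numeral for n; this only restates the results above):

```latex
% Provability in HA + ECT_0 characterized via Kleene realizability:
\mathsf{HA}+\mathsf{ECT}_0 \;\vdash\; \varphi \leftrightarrow \exists n\,(n \Vdash \varphi)
\qquad\text{and}\qquad
\mathsf{HA}+\mathsf{ECT}_0 \vdash \varphi
\;\Longleftrightarrow\;
\mathsf{HA} \vdash \exists n\,(n \Vdash \varphi)
% Extension with Markov's principle MP:
\mathsf{HA}+\mathsf{ECT}_0+\mathsf{MP} \vdash \varphi
\;\Longleftrightarrow\;
\exists n.\;\;\mathsf{PA} \vdash (\underline{n} \Vdash \varphi)
```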
Realizers and metalogic:
In words, φ is provable in HA plus ECT0 as well as MP iff one can metatheoretically establish that there is some number n such that the corresponding standard numeral in PA , denoted n_ , provably realises φ . Assuming MP together with alternative variants of Church's thesis, more such metalogical existence statements have been obtained. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Random hexamer**
Random hexamer:
Random hexamers, or random hexanucleotides, are used in various PCR applications, such as rolling circle amplification, to prime the DNA.
They are oligonucleotide sequences of 6 bases which are synthesised entirely at random to give a wide range of sequences that have the potential to anneal at many random points on a DNA sequence and act as primers to commence first-strand cDNA synthesis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
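As an illustration of the sequence space involved, there are 4^6 = 4096 possible hexamers; a minimal sketch of drawing one at random (an illustration only, not a description of any synthesis protocol):

```python
import random

BASES = "ACGT"

def random_hexamer() -> str:
    """Return one random 6-base oligonucleotide sequence (one of 4**6 = 4096 possibilities)."""
    return "".join(random.choice(BASES) for _ in range(6))

print(random_hexamer())  # e.g. 'GATTCA'
```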
**MZF1**
MZF1:
Myeloid zinc finger 1 is a protein that in humans is encoded by the MZF1 gene.
Interactions:
MZF1 has been shown to interact with SCAND1.
In 2014, the laboratory of Nathan H. Lents showed that MZF-1 induces the GAPDH gene, something that must be considered when GAPDH is used as a loading control in experiments that may induce or perturb MZF-1. The same group later showed that MZF-1 induces CTGF and NOV, two members of the CCN gene family. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |