**Colourpoint Longhair**
Colourpoint Longhair:
Colorpoint Longhair or Colourpoint Longhair (among other spellings) is a disused term for one of multiple varieties of domestic cat, and may refer to:
Javanese cat, the long-haired variant of the broadly accepted Colorpoint Shorthair breed (which is essentially a Siamese cat with non-Siamese colouration); note, however, that the World Cat Federation confusingly uses "Javanese" to refer to the Oriental Longhair, a related but different breed.
Colourpoint, the World Cat Federation name for the long-haired version of its definition of the Colorpoint Shorthair (which includes both Siamese-standard and -nonstandard colouration); this is a breed classification encompassing both of what other registries call the Himalayan cat (essentially, the Javanese but with colours limited to those of the Siamese) and the Javanese cat (see above), i.e. a long-haired cat with any of the colourations that are nonstandard for the Siamese and Himalayan but found in the non-WCF Colorpoint Shorthair.
Any colour-pointed, long-haired mongrel domestic cat (uncapitalised, and in various spellings, e.g. "colourpoint long-haired", etc.).
**Art fabrication**
Art fabrication:
Art fabrication describes the process or service of producing large or technically difficult artworks through entities and resources beyond an individual artist's studio. When artists or designers are incapable or choose not to realize their designs or conceptions, they may enlist the assistance of an art fabrication company. Typically, an art fabrication company has access to the resources, specialized machinery and technologies, and labor necessary to execute particularly complex projects. According to a 2018 New York Times article, art fabricators have taken on a greater importance in recent years, as art schools have emphasized ideas and concepts over execution and contemporary artists become less present in their own work.
History:
Art fabrication in its contemporary form, as opposed to the older foundry model that translated maquettes from one material into another, came into being in the 1960s. Its advent stemmed from several factors: the emergence of Pop and Conceptual artists increasingly interested in technologically ambitious projects and spectacle, often emphasizing idea over object; artists such as Donald Judd, Robert Morris, and Richard Serra, who sought to eliminate evidence of the "artist's hand" from their work; and in later years, buoyant art markets that made ambitious projects economically viable and created demands to produce work and exhibit in larger and more numerous museums.
In the first half of the 1960s, industrial manufacturers, such as Treitel-Gratz Co. (a high-end producer of modernist fixtures and furniture) and Milgo Industrial (then an architectural fabricator, now Milgo/Bufkin) on the East Coast, worked with artists. They extended the possibilities of studio practice by providing access to the resources, tools, materials and techniques of industrial production. The industrial fabricators were soon joined by companies solely dedicated to art fabrication, first by New York-based Lippincott, Inc. (established in 1966 by Donald Lippincott and Roxanne Everett), and then by Gemini G.E.L. (established 1965 and led by Sidney Felsen), a Los Angeles-based print workshop that expanded into the production of artist multiples (limited editions of sculpture). These firms, which offered a greater degree of collaboration between artist and crew, worked with several previously mentioned artists, as well as Sol LeWitt, Louise Nevelson, Barnett Newman, Claes Oldenburg, Robert Rauschenberg, and Lucas Samaras.
When Gemini got out of the multiples business, one of its employees, Peter Carlson, left and formed Carlson & Company (1971), working with artists Ellsworth Kelly and Isamu Noguchi, among others. New fabricators soon emerged in the West, such as La Paloma Fine Arts and Jack Brogan, who worked with artists such as, respectively, Dennis Oppenheim and Jonathan Borofsky, and Robert Irwin and Roy Lichtenstein. Art historian Michelle Kuo suggests that these companies increasingly served as conduits between artists and industry and technology, expanding the scope, proportions and complexity of art fabrication. She writes that they researched and solved "new engineering and organizational problems with both patent-worthy and outmoded or discarded technologies," introducing processes and materials from auto detailing to injection moulding to surfboard glassing into fine-arts practice. Throughout the 1990s and 2000s, art fabrication incorporated advanced technologies, services and sourcing from the aerospace, computer, defense, semiconductor and entertainment industries, which not only encompassed art production (CAD, 3D scanning and modeling, CNC milling, paint finishing), but also project management, shipping and installation.
Notable art fabricators:
Carlson Baker Arts in Sun Valley, CA, who have worked with Ellsworth Kelly, Isamu Noguchi, Jeff Koons, Yoshitomo Nara, Claes Oldenburg, Jim Isermann, Christian Moeller, Doug Aitken, Rob Ley, and others.
Lippincott, Inc. (now closed), which fabricated work for almost 100 artists, including Barnett Newman, Louise Nevelson, Donald Judd, Claes Oldenburg, Robert Indiana, and Ellsworth Kelly.
HANDMADE LLC, Van Nuys, CA, whose clients include Charles Ray, Jeff Koons, the Judd Foundation, Mary Corse, Dan Colen, Laura Owens, and Jordan Wolfson.
Standard Sculpture LLC, located in Glassell Park, CA, whose clients include Jeff Koons, Carol Bove, Nathan Mabry, Matt Johnson, and Jacob Kassay.
Mike Smith, who has worked on behalf of Damien Hirst, Rachel Whiteread, Jake and Dinos Chapman, Gavin Turk and Michael Landy.
Milgo/Bufkin (formerly Milgo Industrial), which has worked with Donald Judd, Robert Grosvenor, and Richard Serra, among others.
Ted Lawson, founder of Prototype New York, who has worked on behalf of Jeff Koons, Yoko Ono, Mariko Mori, Vanessa Beecroft, Ellen Gallagher, Keith Tyson and Ghada Amer.
Smith of Derby Group, who have worked with Marianne Forrest and Wolfgang & Heron.
Amaral Custom Fabrications, Inc. in Bristol, Rhode Island, whose notable clients include Roy Lichtenstein, Keith Haring, Martin Puryear, Ryan McGinness, Robert Indiana, Jeff Koons, Philip Grausman, and Hasbro.
Master Art Fabrication, Chiang Mai, Thailand, whose notable clients include Charles Krafft, Doug Jeck, Trevor Foster, and Kamol Tassananchalee.
Gizmo Art Production, Inc. (San Francisco, CA), which works with Ned Kahn, Blessing Hancock, Michael Arcega, Jim Campbell, and Ana Teresa Fernandez.
**Oxiracetam**
Oxiracetam:
Oxiracetam (developmental code name ISF 2522) is a nootropic drug of the racetam family and a very mild stimulant. Several studies suggest that the substance is safe even when high doses are consumed for a long period of time. However, the mechanism of action of the racetam drug family is still a matter of research. Oxiracetam is not approved by the Food and Drug Administration for any medical use in the United States.
Clinical findings:
There has been effort put into investigating the possible use of oxiracetam as a medication to attenuate the symptoms of dementia. However, no convincing results were obtained from studies of patients with Alzheimer's dementia or organic solvent abuse. In tests, patients with mild to moderate dementia experienced beneficial effects, measured by higher scores on tests for logical performance, attention, concentration, memory and spatial orientation. Improvement was also seen in patients with exogenic post-concussion syndrome, organic brain syndromes and other dementias. Oxiracetam-treated DBA mice demonstrated a significant increase in spatial learning performance as determined by the Morris water navigation task, compared to controls. This increase in performance was correlated with an increase in membrane-bound PKC.
Pharmacokinetics:
Oxiracetam is well absorbed from the gastrointestinal tract with a bioavailability of 56-82%.
Peak serum levels are reached within one to three hours after a single 800 mg or 2000 mg oral dose, with the maximal serum concentration reaching between 19 and 31 μg/ml at these doses.
Oxiracetam is mainly cleared renally and approximately 84% is excreted unchanged in the urine.
The half-life of oxiracetam in healthy individuals is about 8 hours, whereas it is 10–68 hours in patients with renal impairment.
There is some penetration of the blood–brain barrier, with brain concentrations reaching 5.3% of those in the blood (measured one hour after a single 2000 mg intravenous dose). Clearance rates range from 9 to 95 ml/min and steady-state concentrations when 800 mg is given twice daily range from 60 μM to 530 μM.
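As a rough illustration of how the figures above fit together, the sketch below applies the standard one-compartment relation for repeated oral dosing, C_ss,avg = F·Dose/(CL·τ), to the quoted 800 mg twice-daily regimen. It is not from the source; the bioavailability value is an assumption chosen inside the reported 56–82% range.

```python
# Minimal sketch (illustrative only): average steady-state concentration for
# repeated oral dosing, using C_ss,avg = F * Dose / (CL * tau).
MW_OXIRACETAM = 158.16  # g/mol, molecular weight of oxiracetam

def css_avg_uM(dose_mg: float, bioavailability: float,
               clearance_ml_min: float, interval_h: float) -> float:
    """Average steady-state concentration in micromolar (µM)."""
    dose_ug = dose_mg * 1000.0                      # mg -> µg
    clearance_ml_h = clearance_ml_min * 60.0        # ml/min -> ml/h
    css_ug_ml = bioavailability * dose_ug / (clearance_ml_h * interval_h)
    return css_ug_ml / MW_OXIRACETAM * 1000.0       # µg/ml -> µM

# 800 mg every 12 hours, assuming F = 0.7 (an assumption within the 56-82% range):
for cl in (9, 50, 95):  # ml/min, spanning the reported clearance range
    print(f"CL = {cl:>2} ml/min -> C_ss,avg ≈ {css_avg_uM(800, 0.7, cl, 12):.0f} µM")
# Prints roughly 550, 100 and 50 µM, consistent with the reported 60-530 µM range.
```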
Pharmacokinetics:
The highest brain concentrations of oxiracetam are found in the septum pellucidum, followed by the hippocampus and the cerebral cortex, with the lowest concentrations in the striatum, after a 200 mg/kg oral dose given to rats. Oxiracetam may be quantitated in plasma, serum or urine by liquid chromatography with one of several different detection techniques. The major metabolites of oxiracetam include beta-hydroxy-2-pyrrolidone, N-aminoacetyl-GABOB, GABOB (beta-hydroxy-GABA) and glycine. Thus its metabolic route is exactly parallel to that of piracetam, aniracetam, phenylpiracetam, and all other members of the racetam family, and also pyroglutamic acid.
**Set function**
Set function:
In mathematics, especially measure theory, a set function is a function whose domain is a family of subsets of some given set and that (usually) takes its values in the extended real number line R∪{±∞}, which consists of the real numbers R and ±∞.
A set function generally aims to measure subsets in some way. Measures are typical examples of "measuring" set functions. Therefore, the term "set function" is often used for avoiding confusion between the mathematical meaning of "measure" and its common language meaning.
Definitions:
If F is a family of sets over Ω (meaning that F⊆℘(Ω) where ℘(Ω) denotes the powerset) then a set function on F is a function μ with domain F and codomain [−∞,∞] or, sometimes, the codomain is instead some vector space, as with vector measures, complex measures, and projection-valued measures. The domain of a set function may have any number of properties; the commonly encountered properties and categories of families are described below.
Definitions:
In general, it is typically assumed that μ(E)+μ(F) is always well-defined for all E,F∈F, or equivalently, that μ does not take on both −∞ and +∞ as values. This article will henceforth assume this; although alternatively, all definitions below could instead be qualified by statements such as "whenever the sum/series is defined". This is sometimes done with subtraction, such as with the following result, which holds whenever μ is finitely additive: Set difference formula: μ(F∖E) = μ(F) − μ(E) whenever μ(F)−μ(E) is defined, with E,F∈F satisfying E⊆F and F∖E∈F.
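For readers who want the intermediate step, the set difference formula follows from finite additivity in one line, shown here in LaTeX; this derivation is supplied for clarity and is not part of the original text.

```latex
% Derivation of the set difference formula from finite additivity.
% Assumes E, F, F \setminus E \in \mathcal{F}, E \subseteq F, and that
% \mu(F) - \mu(E) is defined (i.e. not of the form \infty - \infty).
\begin{align*}
  \mu(F) &= \mu\bigl(E \cup (F \setminus E)\bigr) \\
         &= \mu(E) + \mu(F \setminus E)
            && \text{(finite additivity; } E \cap (F \setminus E) = \varnothing\text{)} \\
  \text{hence}\quad \mu(F \setminus E) &= \mu(F) - \mu(E).
\end{align*}
```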
Definitions:
Null sets: A set F∈F is called a null set (with respect to μ), or simply null, if μ(F)=0.
Whenever μ is not identically equal to either −∞ or +∞ then it is typically also assumed that: null empty set: μ(∅)=0 if ∅∈F.
Variation and mass: The total variation of a set S is |μ|(S) := sup{|μ(F)| : F∈F and F⊆S}, where |⋅| denotes the absolute value (or more generally, it denotes the norm or seminorm if μ is vector-valued in a (semi)normed space). Assuming that ⋃F := ⋃_{F∈F} F belongs to F, |μ|(⋃F) is called the total variation of μ and μ(⋃F) is called the mass of μ.
A set function is called finite if for every F∈F, the value μ(F) is finite (which by definition means that μ(F)≠∞ and μ(F)≠−∞ ; an infinite value is one that is equal to ∞ or −∞ ). Every finite set function must have a finite mass.
Common properties of set functions: A set function μ on F is said to be non-negative if it is valued in [0,∞].
finitely additive if ∑i=1nμ(Fi)=μ(⋃i=1nFi) for all pairwise disjoint finite sequences F1,…,Fn∈F such that ⋃i=1nFi∈F.
If F is closed under binary unions then μ is finitely additive if and only if μ(E∪F)=μ(E)+μ(F) for all disjoint pairs E,F∈F.
If μ is finitely additive and if ∅∈F then taking E := F := ∅ shows that μ(∅)=μ(∅)+μ(∅), which is only possible if μ(∅)=0 or μ(∅)=±∞, where in the latter case, μ(E)=μ(E∪∅)=μ(E)+μ(∅)=μ(E)+(±∞)=±∞ for every E∈F (so only the case μ(∅)=0 is useful).
countably additive or σ-additive if in addition to being finitely additive, for all pairwise disjoint sequences F1,F2,… in F such that ⋃i=1∞Fi∈F, all of the following hold: ∑i=1∞μ(Fi)=μ(⋃i=1∞Fi). The series on the left-hand side is defined in the usual way as the limit ∑i=1∞μ(Fi) := lim_{n→∞} (μ(F1)+⋯+μ(Fn)).
Definitions:
As a consequence, if ρ:N→N is any permutation/bijection then ∑i=1∞μ(Fi)=∑i=1∞μ(Fρ(i)); this is because ⋃i=1∞Fi=⋃i=1∞Fρ(i) and applying this condition (a) twice guarantees that both ∑i=1∞μ(Fi)=μ(⋃i=1∞Fi) and μ(⋃i=1∞Fρ(i))=∑i=1∞μ(Fρ(i)) hold. By definition, a convergent series with this property is said to be unconditionally convergent. Stated in plain English, this means that rearranging/relabeling the sets F1,F2,… to the new order Fρ(1),Fρ(2),… does not affect the sum of their measures. This is desirable since just as the union F := ⋃i∈NFi does not depend on the order of these sets, the same should be true of the sums μ(F)=μ(F1)+μ(F2)+⋯ and μ(F)=μ(Fρ(1))+μ(Fρ(2))+⋯.
Definitions:
if μ(⋃i=1∞Fi) is not infinite then this series ∑i=1∞μ(Fi) must also converge absolutely, which by definition means that ∑i=1∞|μ(Fi)| must be finite. This is automatically true if μ is non-negative (or even just valued in the extended real numbers).
As with any convergent series of real numbers, by the Riemann series theorem, the series lim N→∞μ(F1)+μ(F2)+⋯+μ(FN) converges absolutely if and only if its sum does not depend on the order of its terms (a property known as unconditional convergence). Since unconditional convergence is guaranteed by (a) above, this condition is automatically true if μ is valued in [−∞,∞].
if μ(⋃i=1∞Fi)=∑i=1∞μ(Fi) is infinite then it is also required that the value of at least one of the series ∑_{i∈N, μ(Fi)>0} μ(Fi) and ∑_{i∈N, μ(Fi)<0} μ(Fi) be finite (so that the sum of their values is well-defined). This is automatically true if μ is non-negative.
a pre-measure if it is non-negative, countably additive (including finitely additive), and has a null empty set.
a measure if it is a pre-measure whose domain is a σ-algebra. That is to say, a measure is a non-negative countably additive set function on a σ-algebra that has a null empty set.
a probability measure if it is a measure that has a mass of 1.
an outer measure if it is non-negative, countably subadditive, has a null empty set, and has the power set ℘(Ω) as its domain.
Outer measures appear in Carathéodory's extension theorem and they are often restricted to Carathéodory measurable subsets.
a signed measure if it is countably additive, has a null empty set, and μ does not take on both −∞ and +∞ as values.
complete if every subset of every null set is null; explicitly, this means: whenever F∈F satisfies μ(F)=0 and N⊆F is any subset of F, then N∈F and μ(N)=0.
Unlike many other properties, completeness places requirements on the set domain F of μ (and not just on μ's values).
𝜎-finite if there exists a sequence F1,F2,F3,… in F such that μ(Fi) is finite for every index i, and also ⋃n=1∞Fn=⋃F∈FF.
decomposable if there exists a subfamily P⊆F of pairwise disjoint sets such that μ(P) is finite for every P∈P and also ⋃P∈PP=⋃F∈FF (the union of all sets in the domain of μ).
Every 𝜎-finite set function is decomposable although not conversely. For example, the counting measure on R (whose domain is ℘(R)) is decomposable but not 𝜎-finite.
a vector measure if it is a countably additive set function μ:F→X valued in a topological vector space X (such as a normed space) whose domain is a σ-algebra.
If μ is valued in a normed space (X,‖⋅‖) then it is countably additive if and only if for any pairwise disjoint sequence F1,F2,… in F, lim_{n→∞} ‖μ(⋃i=1∞Fi) − (μ(F1)+⋯+μ(Fn))‖ = 0.
If μ is finitely additive and valued in a Banach space then it is countably additive if and only if for any pairwise disjoint sequence F1,F2,… in F, lim_{n→∞} ‖μ(Fn ∪ Fn+1 ∪ Fn+2 ∪ ⋯)‖ = 0.
a complex measure if it is a countably additive complex-valued set function μ:F→C whose domain is a σ-algebra.
By definition, a complex measure never takes ±∞ as a value and so has a null empty set.
a random measure if it is a measure-valued random element.
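To make the additivity conditions above concrete, here is a minimal self-contained Python sketch (an illustration, not part of the article) that stores a set function as a dictionary over frozensets and checks the binary-union criterion for finite additivity on the counting measure over a three-element set.

```python
from itertools import combinations

OMEGA = frozenset({1, 2, 3})

def powerset(s):
    """All subsets of s, as frozensets."""
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# The counting measure on ℘(OMEGA): mu(F) = |F|; its domain is the whole power set.
mu = {F: len(F) for F in powerset(OMEGA)}

def is_finitely_additive(mu) -> bool:
    """Check mu(E ∪ F) = mu(E) + mu(F) for all disjoint E, F in the domain whose
    union also lies in the domain; as noted above, this binary criterion suffices
    when the domain is closed under binary unions."""
    return all(mu[E | F] == mu[E] + mu[F]
               for E in mu for F in mu
               if E.isdisjoint(F) and (E | F) in mu)

print(is_finitely_additive(mu))  # True: the counting measure is finitely additive.
```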
Arbitrary sums: As described in this article's section on generalized series, for any family (ri)i∈I of real numbers indexed by an arbitrary indexing set I, it is possible to define their sum ∑i∈Iri as the limit of the net of finite partial sums F ∈ FiniteSubsets(I) ↦ ∑i∈F ri, where the domain FiniteSubsets(I) is directed by ⊆.
Whenever this net converges then its limit is denoted by the symbols ∑i∈Iri while if this net instead diverges to ±∞ then this may be indicated by writing ∑i∈Iri=±∞.
Any sum over the empty set is defined to be zero; that is, if I=∅ then ∑i∈∅ri=0 by definition.
For example, if ri=0 for every i∈I then ∑i∈Iri=0.
And it can be shown that ∑i∈Iri = ∑_{i∈I, ri=0} ri + ∑_{i∈I, ri≠0} ri = 0 + ∑_{i∈I, ri≠0} ri = ∑_{i∈I, ri≠0} ri.
Definitions:
If I=N then the generalized series ∑i∈Iri converges in R if and only if ∑i=1∞ri converges unconditionally (or equivalently, converges absolutely) in the usual sense. If a generalized series ∑i∈Iri converges in R then both ∑_{i∈I, ri>0} ri and ∑_{i∈I, ri<0} ri also converge to elements of R and the set {i∈I:ri≠0} is necessarily countable (that is, either finite or countably infinite); this remains true if R is replaced with any normed space. It follows that in order for a generalized series ∑i∈Iri to converge in R or C, it is necessary that all but at most countably many ri will be equal to 0, which means that ∑i∈Iri = ∑_{i∈I, ri≠0} ri is a sum of at most countably many non-zero terms. Said differently, if {i∈I:ri≠0} is uncountable then the generalized series ∑i∈Iri does not converge. In summary, due to the nature of the real numbers and its topology, every generalized series of real numbers (indexed by an arbitrary set) that converges can be reduced to an ordinary absolutely convergent series of countably many real numbers. So in the context of measure theory, there is little benefit gained by considering uncountably many sets and generalized series. In particular, this is why the definition of "countably additive" is rarely extended from countably many sets F1,F2,… in F (and the usual countable series ∑i=1∞μ(Fi)) to arbitrarily many sets (Fi)i∈I (and the generalized series ∑i∈Iμ(Fi)).
Definitions:
Inner measures, outer measures, and other properties: A set function μ is said to be (or to satisfy) any of the following:
monotone if μ(E)≤μ(F) whenever E,F∈F satisfy E⊆F.
modular if it satisfies the following condition, known as modularity: μ(E∪F)+μ(E∩F)=μ(E)+μ(F) for all E,F∈F such that E∪F,E∩F∈F.
Every finitely additive function on a field of sets is modular.
In geometry, a set function valued in some abelian semigroup that possesses this property is known as a valuation. This geometric definition of "valuation" should not be confused with the stronger non-equivalent measure theoretic definition of "valuation" that is given below.
submodular if μ(E∪F)+μ(E∩F)≤μ(E)+μ(F) for all E,F∈F such that E∪F,E∩F∈F.
finitely subadditive if |μ(F)|≤∑i=1n|μ(Fi)| for all finite sequences F,F1,…,Fn∈F that satisfy F⊆⋃i=1nFi.
countably subadditive or σ-subadditive if |μ(F)|≤∑i=1∞|μ(Fi)| for all sequences F,F1,F2,F3,… in F that satisfy F⊆⋃i=1∞Fi.
If F is closed under finite unions then this condition holds if and only if |μ(F∪G)|≤|μ(F)|+|μ(G)| for all F,G∈F.
If μ is non-negative then the absolute values may be removed.
If μ is a measure then this condition holds if and only if μ(⋃i=1∞Fi)≤∑i=1∞μ(Fi) for all F1,F2,F3,… in F.
If μ is a probability measure then this inequality is Boole's inequality.
If μ is countably subadditive and ∅∈F with μ(∅)=0 then μ is finitely subadditive.
superadditive if μ(E)+μ(F)≤μ(E∪F) whenever E,F∈F are disjoint with E∪F∈F.
continuous from above if lim_{n→∞} μ(Fn) = μ(⋂i=1∞Fi) for all non-increasing sequences of sets F1⊇F2⊇F3⋯ in F such that ⋂i=1∞Fi∈F with μ(⋂i=1∞Fi) and all μ(Fi) finite.
Lebesgue measure λ is continuous from above, but it would not be if the assumption that all μ(Fi) are eventually finite was omitted from the definition, as this example shows: for every integer i, let Fi be the open interval (i,∞), so that lim_{n→∞} λ(Fn) = lim_{n→∞} ∞ = ∞ ≠ 0 = λ(∅) = λ(⋂i=1∞Fi), where ⋂i=1∞Fi=∅.
continuous from below if lim_{n→∞} μ(Fn) = μ(⋃i=1∞Fi) for all non-decreasing sequences of sets F1⊆F2⊆F3⋯ in F such that ⋃i=1∞Fi∈F.
infinity is approached from below if whenever F∈F satisfies μ(F)=∞ then for every real r>0, there exists some Fr∈F such that Fr⊆F and r≤μ(Fr)<∞.
an outer measure if μ is non-negative, countably subadditive, has a null empty set, and has the power set ℘(Ω) as its domain.
an inner measure if μ is non-negative, superadditive, continuous from above, has a null empty set, has the power set ℘(Ω) as its domain, and +∞ is approached from below.
atomic if every measurable set of positive measure contains an atom.
If a binary operation + is defined, then a set function μ is said to be translation invariant if μ(ω+F)=μ(F) for all ω∈Ω and F∈F such that ω+F∈F.
Topology related definitions: If τ is a topology on Ω then a set function μ is said to be:
a Borel measure if it is a measure defined on the σ-algebra of all Borel sets, which is the smallest σ-algebra containing all open subsets (that is, containing τ).
a Baire measure if it is a measure defined on the σ-algebra of all Baire sets.
locally finite if for every point ω∈Ω there exists some neighborhood U∈F∩τ of this point such that μ(U) is finite.
If μ is finitely additive, monotone, and locally finite then μ(K) is necessarily finite for every compact measurable subset K.
τ-additive if μ(⋃_{D∈D} D) = sup_{D∈D} μ(D) whenever D⊆τ∩F is directed with respect to ⊆ and satisfies ⋃_{D∈D} D ∈ F.
D is directed with respect to ⊆ if and only if it is not empty and for all A,B∈D there exists some C∈D such that A⊆C and B⊆C.
inner regular or tight if for every F∈F, μ(F) = sup{μ(K) : K⊆F with K a compact subset of (Ω,τ)}.
outer regular if for every F∈F, μ(F) = inf{μ(U) : F⊆U and U∈F∩τ}.
regular if it is both inner regular and outer regular.
a Borel regular measure if it is a Borel measure that is also regular.
a Radon measure if it is a regular and locally finite measure.
strictly positive if every non-empty open subset has (strictly) positive measure.
a valuation if it is non-negative, monotone, modular, has a null empty set, and has domain τ.
Relationships between set functions: If μ and ν are two set functions over Ω, then:
μ is said to be absolutely continuous with respect to ν, or dominated by ν, written μ≪ν, if for every set F that belongs to the domain of both μ and ν, ν(F)=0 implies μ(F)=0.
Definitions:
If μ and ν are σ-finite measures on the same measurable space and if μ≪ν, then the Radon–Nikodym derivative dμ/dν exists and, for every measurable F, μ(F) = ∫_F (dμ/dν) dν.
μ and ν are called equivalent if each one is absolutely continuous with respect to the other. μ is called a supporting measure of a measure ν if μ is σ-finite and they are equivalent.
Definitions:
μ and ν are singular, written μ⊥ν, if there exist disjoint sets M and N in the domains of μ and ν such that M∪N=Ω, μ(F)=0 for all F⊆M in the domain of μ, and ν(F)=0 for all F⊆N in the domain of ν.
Examples:
Examples of set functions include:
The function assigning densities to sufficiently well-behaved subsets A⊆{1,2,3,…} is a set function.
A probability measure assigns a probability to each set in a σ-algebra. Specifically, the probability of the empty set is zero and the probability of the sample space is 1, with other sets given probabilities between 0 and 1.
A possibility measure assigns a number between zero and one to each set in the powerset of some given set. See possibility theory.
A random set is a set-valued random variable. See the article random compact set.
The Jordan measure on Rn is a set function defined on the set of all Jordan measurable subsets of Rn; it sends a Jordan measurable set to its Jordan measure.
Lebesgue measure: The Lebesgue measure on R is a set function that assigns a non-negative real number to every set of real numbers that belongs to the Lebesgue σ-algebra. Its definition begins with the set Intervals(R) of all intervals of real numbers, which is a semialgebra on R.
Examples:
The function that assigns to every interval I its length length(I) is a finitely additive set function (explicitly, if I has endpoints a≤b then length(I)=b−a). This set function can be extended to the Lebesgue outer measure on R, which is the translation-invariant set function λ∗:℘(R)→[0,∞] that sends a subset E⊆R to the infimum
λ∗(E) = inf{∑k=1∞ length(Ik) : E ⊆ ⋃k=1∞ Ik with I1, I2, … ∈ Intervals(R)}.
Lebesgue outer measure is not countably additive (and so is not a measure), although its restriction to the 𝜎-algebra of all subsets M⊆R that satisfy the Carathéodory criterion
λ∗(S) = λ∗(S∩M) + λ∗(S∖M) for every S⊆R
is a measure, called Lebesgue measure. Vitali sets are examples of non-measurable sets of real numbers.
Examples:
Infinite-dimensional space: As detailed in the article on infinite-dimensional Lebesgue measure, the only locally finite and translation-invariant Borel measure on an infinite-dimensional separable normed space is the trivial measure. However, it is possible to define Gaussian measures on infinite-dimensional topological vector spaces. The structure theorem for Gaussian measures shows that the abstract Wiener space construction is essentially the only way to obtain a strictly positive Gaussian measure on a separable Banach space.
Examples:
Finitely additive translation-invariant set functions: The only translation-invariant measure on Ω=R with domain ℘(R) that is finite on every compact subset of R is the trivial set function ℘(R)→[0,∞] that is identically equal to 0 (that is, it sends every S⊆R to 0). However, if countable additivity is weakened to finite additivity then a non-trivial set function with these properties does exist and, moreover, some are even valued in [0,1].
Examples:
In fact, such non-trivial set functions will exist even if R is replaced by any other abelian group G.
Extending set functions:
Extending from semialgebras to algebras: Suppose that μ is a set function on a semialgebra F over Ω and let algebra(F) := {F1 ⊔ ⋯ ⊔ Fn : n∈N and F1,…,Fn∈F are pairwise disjoint}, which is the algebra on Ω generated by F.
The archetypal example of a semialgebra that is not also an algebra is the family Sd := {(a1,b1]×⋯×(ad,bd] : −∞≤ai<bi≤∞ for all i} on Ω := Rd, where (a,b] := {x∈R:a<x≤b} for all −∞≤a<b≤∞.
Importantly, the two non-strict inequalities ≤ in −∞≤ai<bi≤∞ cannot be replaced with strict inequalities < since semialgebras must contain the whole underlying set Rd; that is, Rd∈Sd is a requirement of semialgebras (as is ∅∈Sd ).
Extending set functions:
If μ is finitely additive then it has a unique extension to a set function μ¯ on algebra(F), defined by sending F1 ⊔ ⋯ ⊔ Fn ∈ algebra(F) (where ⊔ indicates that these Fi∈F are pairwise disjoint) to:
μ¯(F1 ⊔ ⋯ ⊔ Fn) := μ(F1) + ⋯ + μ(Fn).
This extension μ¯ will also be finitely additive: for any pairwise disjoint A1,…,An ∈ algebra(F),
μ¯(A1 ∪ ⋯ ∪ An) = μ¯(A1) + ⋯ + μ¯(An).
If in addition μ is extended real-valued and monotone (which, in particular, will be the case if μ is non-negative) then μ¯ will be monotone and finitely subadditive: for any A, A1,…,An ∈ algebra(F) such that A ⊆ A1 ∪ ⋯ ∪ An,
μ¯(A) ≤ μ¯(A1) + ⋯ + μ¯(An).
Extending from rings to σ-algebras: If μ:F→[0,∞] is a pre-measure on a ring of sets (such as an algebra of sets) F over Ω then μ has an extension to a measure μ¯:σ(F)→[0,∞] on the σ-algebra σ(F) generated by F.
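The extension formula above can be made concrete with a small Python sketch (illustrative, not from the source): take the semialgebra of half-open intervals (a,b] with μ((a,b]) = b − a, and extend μ to finite disjoint unions by summing over the pieces.

```python
from typing import List, Tuple

Interval = Tuple[float, float]  # a half-open interval (a, b], stored by its endpoints

def mu(interval: Interval) -> float:
    """Set function on the semialgebra of half-open intervals: mu((a,b]) = b - a."""
    a, b = interval
    return b - a

def pairwise_disjoint(intervals: List[Interval]) -> bool:
    """Half-open intervals (a,b] are pairwise disjoint iff, after sorting by left
    endpoint, each interval starts no earlier than the previous one ends."""
    s = sorted(intervals)
    return all(prev[1] <= cur[0] for prev, cur in zip(s, s[1:]))

def mu_bar(pieces: List[Interval]) -> float:
    """Extension to the generated algebra: a member of algebra(F) is a finite
    disjoint union F_1 ⊔ ... ⊔ F_n, sent to mu(F_1) + ... + mu(F_n)."""
    assert pairwise_disjoint(pieces), "the pieces must be pairwise disjoint"
    return sum(mu(F) for F in pieces)

# (0, 2] ⊔ (3, 4.5] belongs to the algebra generated by the half-open intervals:
print(mu_bar([(0.0, 2.0), (3.0, 4.5)]))  # 3.5
```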
Extending set functions:
If μ is σ-finite then this extension is unique.
To define this extension, first extend μ to an outer measure μ∗ on 2Ω=℘(Ω) by
μ∗(T) := inf{∑n μ(Sn) : T ⊆ ⋃n Sn with S1, S2, … ∈ F},
and then restrict it to the set FM of μ∗-measurable sets (that is, Carathéodory-measurable sets), which is the set of all M⊆Ω such that
μ∗(S) = μ∗(S∩M) + μ∗(S∖M) for every subset S⊆Ω.
FM is a σ-algebra and μ∗ is sigma-additive on it, by the Carathéodory lemma.
Restricting outer measures: If μ∗:℘(Ω)→[0,∞] is an outer measure on a set Ω, where (by definition) the domain is necessarily the power set ℘(Ω) of Ω, then a subset M⊆Ω is called μ∗-measurable or Carathéodory-measurable if it satisfies the following Carathéodory criterion:
μ∗(S) = μ∗(S∩M) + μ∗(S∩Mc) for every subset S⊆Ω, where Mc := Ω∖M is the complement of M.
The family of all μ∗-measurable subsets is a σ-algebra and the restriction of the outer measure μ∗ to this family is a measure.
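As a toy illustration of the Carathéodory criterion (not taken from the source), the sketch below defines a hand-picked outer measure on the power set of a three-element set and enumerates which subsets satisfy the criterion; for this particular outer measure only the empty set and the whole set turn out to be μ∗-measurable.

```python
from itertools import combinations

OMEGA = frozenset({1, 2, 3})

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# A toy outer measure on ℘(OMEGA): non-negative, monotone, subadditive, null empty set.
# The values (|S| capped at 2) are an invented example used only for illustration.
outer = {S: min(len(S), 2) for S in subsets(OMEGA)}

def caratheodory_measurable(M: frozenset) -> bool:
    """M is μ*-measurable if outer(S) = outer(S ∩ M) + outer(S ∖ M) for every S ⊆ Ω."""
    return all(outer[S] == outer[S & M] + outer[S - M] for S in subsets(OMEGA))

print([set(M) for M in subsets(OMEGA) if caratheodory_measurable(M)])
# [set(), {1, 2, 3}] -- the measurable family is a (here trivial) σ-algebra,
# and the restriction of the outer measure to it is a measure.
```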
**Evaluation Assurance Level**
Evaluation Assurance Level:
The Evaluation Assurance Level (EAL1 through EAL7) of an IT product or system is a numerical grade assigned following the completion of a Common Criteria security evaluation, an international standard in effect since 1999. The increasing assurance levels reflect added assurance requirements that must be met to achieve Common Criteria certification. The intent of the higher levels is to provide higher confidence that the system's principal security features are reliably implemented. The EAL level does not measure the security of the system itself; it simply states at what level the system was tested.
Evaluation Assurance Level:
To achieve a particular EAL, the computer system must meet specific assurance requirements. Most of these requirements involve design documentation, design analysis, functional testing, or penetration testing. The higher EALs involve more detailed documentation, analysis, and testing than the lower ones. Achieving a higher EAL certification generally costs more money and takes more time than achieving a lower one. The EAL number assigned to a certified system indicates that the system completed all requirements for that level.
Evaluation Assurance Level:
Although every product and system must fulfill the same assurance requirements to achieve a particular level, they do not have to fulfill the same functional requirements. The functional features for each certified product are established in the Security Target document tailored for that product's evaluation. Therefore, a product with a higher EAL is not necessarily "more secure" in a particular application than one with a lower EAL, since they may have very different lists of functional features in their Security Targets. A product's fitness for a particular security application depends on how well the features listed in the product's Security Target fulfill the application's security requirements. If the Security Targets for two products both contain the necessary security features, then the higher EAL should indicate the more trustworthy product for that application.
Assurance levels:
EAL1: Functionally Tested EAL1 is applicable where some confidence in correct operation is required, but the threats to security are not viewed as serious. It will be of value where independent assurance is required to support the contention that due care has been exercised with respect to the protection of personal or similar information.
Assurance levels:
EAL1 provides an evaluation of the TOE (Target of Evaluation) as made available to the customer, including independent testing against a specification, and an examination of the guidance documentation provided. It is intended that an EAL1 evaluation could be successfully conducted without assistance from the developer of the TOE, and for minimal cost. An evaluation at this level should provide evidence that the TOE functions in a manner consistent with its documentation, and that it provides useful protection against identified threats.
Assurance levels:
EAL2: Structurally Tested EAL2 requires the cooperation of the developer in terms of the delivery of design information and test results, but should not demand more effort on the part of the developer than is consistent with good commercial practice. As such it should not require a substantially increased investment of cost or time.
EAL2 is therefore applicable in those circumstances where developers or users require a low to moderate level of independently assured security in the absence of ready availability of the complete development record. Such a situation may arise when securing legacy systems.
EAL3: Methodically Tested and Checked EAL3 permits a conscientious developer to gain maximum assurance from positive security engineering at the design stage without substantial alteration of existing sound development practices.
EAL3 is applicable in those circumstances where developers or users require a moderate level of independently assured security, and require a thorough investigation of the TOE and its development without substantial re-engineering.
Assurance levels:
EAL4: Methodically Designed, Tested and Reviewed EAL4 permits a developer to gain maximum assurance from positive security engineering based on good commercial development practices which, though rigorous, do not require substantial specialist knowledge, skills, and other resources. EAL4 is the highest level at which it is likely to be economically feasible to retrofit to an existing product line. EAL4 is therefore applicable in those circumstances where developers or users require a moderate to high level of independently assured security in conventional commodity TOEs and are prepared to incur additional security-specific engineering costs.
Assurance levels:
Commercial operating systems that provide conventional, user-based security features are typically evaluated at EAL4. Examples with expired certificates are AIX, HP-UX, Oracle Linux, NetWare, Solaris, SUSE Linux Enterprise Server 9, SUSE Linux Enterprise Server 10, Red Hat Enterprise Linux 5, Windows 2000 Service Pack 3, Windows 2003, Windows XP, Windows Vista, Windows 7, Windows Server 2008 R2, z/OS version 2.1 and z/VM version 6.3. Operating systems that provide multilevel security are evaluated at a minimum of EAL4. Examples with an active certificate include SUSE Linux Enterprise Server 15 (EAL 4+). Examples with expired certificates are Trusted Solaris, Solaris 10 Release 11/06 Trusted Extensions, an early version of the XTS-400, VMware ESXi versions 4.1, 3.5 and 4.0, AIX 4.3, AIX 5L, AIX 6, AIX 7, Red Hat 6.2 and SUSE Linux Enterprise Server 11 (EAL 4+). vSphere 5.5 Update 2 did not achieve EAL4+; it was certified at EAL2+ on June 30, 2015.
Assurance levels:
EAL5: Semiformally Designed and Tested EAL5 permits a developer to gain maximum assurance from security engineering based upon rigorous commercial development practices supported by moderate application of specialist security engineering techniques. Such a TOE will probably be designed and developed with the intent of achieving EAL5 assurance. It is likely that the additional costs attributable to the EAL5 requirements, relative to rigorous development without the application of specialized techniques, will not be large.
Assurance levels:
EAL5 is therefore applicable in those circumstances where developers or users require a high level of independently assured security in a planned development and require a rigorous development approach without incurring unreasonable costs attributable to specialist security engineering techniques.
Numerous smart card devices have been evaluated at EAL5, as have multilevel secure devices such as the Tenix Interactive Link. XTS-400 (STOP 6) is a general-purpose operating system which has been evaluated at EAL5 augmented.
LPAR on IBM System z is EAL5 Certified.
EAL6: Semiformally Verified Design and Tested EAL6 permits developers to gain high assurance from application of security engineering techniques to a rigorous development environment in order to produce a premium TOE for protecting high-value assets against significant risks.
EAL6 is therefore applicable to the development of security TOEs for application in high risk situations where the value of the protected assets justifies the additional costs.
Green Hills Software's INTEGRITY-178B RTOS has been certified to EAL6 augmented.
EAL7: Formally Verified Design and Tested EAL7 is applicable to the development of security TOEs for application in extremely high risk situations and/or where the high value of the assets justifies the higher costs.
Practical application of EAL7 is currently limited to TOEs with tightly focused security functionality that is amenable to extensive formal analysis. The Tenix Interactive Link Data Diode Device and the Fox-IT Fox Data Diode (one-way data communication devices) are claimed to have been evaluated at EAL7 augmented (EAL7+).
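For quick reference, the level names described above can be collected into a small lookup table; this is merely an illustrative restatement of the list, not an artifact of the Common Criteria standard itself.

```python
# Illustrative summary of the EAL names described above (EAL1 through EAL7).
EAL_NAMES = {
    1: "Functionally Tested",
    2: "Structurally Tested",
    3: "Methodically Tested and Checked",
    4: "Methodically Designed, Tested and Reviewed",
    5: "Semiformally Designed and Tested",
    6: "Semiformally Verified Design and Tested",
    7: "Formally Verified Design and Tested",
}

def describe(level: int, augmented: bool = False) -> str:
    """Render a level the way vendors commonly write it, e.g. 'EAL4+'."""
    return f"EAL{level}{'+' if augmented else ''}: {EAL_NAMES[level]}"

print(describe(4, augmented=True))  # EAL4+: Methodically Designed, Tested and Reviewed
```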
Implications of assurance levels:
Technically speaking, a higher EAL means nothing more, or less, than that the evaluation completed a more stringent set of quality assurance requirements. It is often assumed that a system that achieves a higher EAL will provide its security features more reliably (and the required third-party analysis and testing performed by security experts is reasonable evidence in this direction), but there is little or no published evidence to support that assumption.
Implications of assurance levels:
Impact on cost and schedule: In 2006, the US Government Accountability Office published a report on Common Criteria evaluations that summarized a range of costs and schedules reported for evaluations performed at levels EAL2 through EAL4.
In the mid to late 1990s, vendors reported spending US$1 million and even US$2.5 million on evaluations comparable to EAL4. There have been no published reports of the cost of the various Microsoft Windows security evaluations.
Implications of assurance levels:
Augmentation of EAL requirements: In some cases, the evaluation may be augmented to include assurance requirements beyond the minimum required for a particular EAL. Officially this is indicated by following the EAL number with the word augmented and usually with a list of codes to indicate the additional requirements. As shorthand, vendors will often simply add a "plus" sign (as in EAL4+) to indicate the augmented requirements.
Implications of assurance levels:
EAL notation: The Common Criteria standards denote EALs as shown in this article: the prefix "EAL" concatenated with a digit 1 through 7 (examples: EAL1, EAL3, EAL5). In practice, some countries place a space between the prefix and the digit (EAL 1, EAL 3, EAL 5). The use of a plus sign to indicate augmentation is an informal shorthand used by product vendors (EAL4+ or EAL 4+).
**Counterbore**
Counterbore:
A counterbore (symbol: ⌴) is a cylindrical flat-bottomed hole that enlarges another coaxial hole, or the tool used to create that feature. A counterbore hole is typically used when a fastener, such as a socket head cap screw or fillister head screw, is required to sit flush with or below the level of a workpiece's surface. Whereas a counterbore is a flat-bottomed enlargement of a smaller coaxial hole, a countersink is a conical enlargement of such. A spotface often takes the form of a very shallow counterbore. As mentioned above, the cutters that produce counterbores are often also called counterbores; sometimes, to avoid ambiguity, the term counterbore cutter is used instead. The symbol is Unicode character U+2334 ⌴ COUNTERBORE.
Description:
A counterbore hole is usually used when the head of a fastener, such as a hex head or socket head capscrew, is required to be flush with or below the level of a workpiece's surface.
Description:
For a spotface, material is removed from a surface to make it flat and smooth, usually for a fastener or a bearing. Spotfacing is usually required on workpieces that are forged or cast. A tool referred to as a counterbore is typically used to cut the spotface, although an endmill may also be used. Only enough material is removed to make the surface flat. A counterbore is also used to create a perpendicular surface for a fastener head on a non-perpendicular surface. If this is not feasible then a self-aligning nut may be required.
Description:
By comparison, a countersink makes a conical hole and is used to seat a flathead screw.
Description:
Standards exist for the sizes of counterbores, especially for fastener head seating areas. These standards can vary between corporations and between standards organizations. For example, in Boeing Design Manual BDM-1327 section 3.5, the nominal diameter of the spotfaced surface is the same as the nominal size of the cutter, and is equal to the flat seat diameter plus twice the fillet radius. This is in contrast to the ASME Y14.5-2009 definition of a spotface, which is equal to the flat seat diameter.
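To make the dimensional relationship concrete, here is a small sketch contrasting the two conventions just described; the numeric values in the example are illustrative, not taken from either standard.

```python
def spotface_diameter_boeing(flat_seat_diameter: float, fillet_radius: float) -> float:
    """Boeing BDM-1327 convention (as described above): the nominal spotface
    diameter equals the flat seat diameter plus twice the fillet radius."""
    return flat_seat_diameter + 2.0 * fillet_radius

def spotface_diameter_asme(flat_seat_diameter: float) -> float:
    """ASME Y14.5-2009 convention (as described above): the spotface diameter
    is simply the flat seat diameter."""
    return flat_seat_diameter

# Example (illustrative numbers): a 0.50 in flat seat with a 0.06 in fillet radius.
print(spotface_diameter_boeing(0.50, 0.06))  # 0.62 -> nominal cutter size per Boeing
print(spotface_diameter_asme(0.50))          # 0.50 -> spotface diameter per ASME
```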
Machining:
Counterbores are made with standard dimensions for a certain size of screw or are produced in sizes that are not related to any particular screw size. In either case, the tip of the counterbore has a reduced diameter section referred to as the pilot, a feature essential to assuring concentricity between the counterbore and the hole being counterbored. Counterbores matched to specific screw sizes generally have integral pilots that fit the clearance hole diameter associated with a particular screw size (e.g., .191 inches for a number 10 machine screw). Counterbores that are not related to a specific screw size are designed to accept a removable pilot, allowing any given counterbore size to be adapted to a variety of hole sizes. The pilot matters little when running the cutter in a milling setup where rigidity is assured and hole center location is already achieved via X-Y positioning.
Machining:
The uppermost counterbore tools shown in the image are the same device: the smaller top item is an insert, and the middle shows a three-fluted counterbore insert assembled in the holder. The shank of this holder is a Morse taper, although there are other machine tapers that are used in the industry. The lower counterbore is designed to fit into a drill chuck and, being smaller, is economical to make as one piece.
**Shot/reverse shot**
Shot/reverse shot:
Shot/reverse shot (or shot/countershot) is a film technique where one character is shown looking at another character (often off-screen), and then the other character is shown looking back at the first character (a reverse shot or countershot). Since the characters are shown facing in opposite directions, the viewer assumes that they are looking at each other.
Context:
Shot/reverse shot is a feature of the "classical" Hollywood style of continuity editing, which deemphasizes transitions between shots such that the spectator perceives one continuous action that develops linearly, chronologically, and logically. It is an example of an eyeline match.
**Deep level underground**
Deep level underground:
Deep level underground is construction that is 20 m (66 ft) or more below ground and not using the cut-and-cover method, especially train stations, air raid shelters and bunkers, and some tunnels and mines. Cut-and-cover is a simple method of construction for shallow tunnels where a trench is excavated and roofed over with an overhead support structure that is strong enough to carry the load of what is to be built above the tunnel.
History:
Mining: Although some deep mining took place as early as the late Tudor period (in North-East England, and along the Firth of Forth coast in Scotland), deep shaft mining in Britain began to develop extensively in the late 18th century, with rapid expansion throughout the 19th century and early 20th century when the industry peaked. Before 1800, a great deal of coal was left in places as extraction was still primitive. As a result, in the deep Tyneside pits (300 to 1,000 ft deep) only about 40 percent of the coal could be extracted. The use of wooden pit props to support the roof was an innovation first introduced about 1800.
History:
Transit systems: Before any plans were made for transit systems with tunnels and stations, several railway operators had used tunnels for freight and passenger trains, usually to reduce the grade of the railway line. Examples include Trevithick's Tunnel from 1804, built for the Penydarren locomotive, the 1829 Crown Street Tunnel at Liverpool, and the 1.13 miles (1,820 metres) long 1836 Lime Street Tunnel, also at Liverpool, part of which is still used today, making it the world's oldest tunnel still in use.
History:
The world's first urban underground railway was the Metropolitan Railway, which opened on January 10, 1863. It was built largely in shallow tunnels (see more at cut and cover) and is nowadays part of the London Underground. It was operated using steam trains, and despite the creation of numerous ventilation shafts, was unhealthy and uncomfortable for passengers and operating staff. Nevertheless, its trains were popular from the start and the Metropolitan Railway and the competing Metropolitan District Railway developed the Inner Circle around central London (completed in 1884) and an extensive system of suburban branches to the northwest (extending into the adjoining countryside), the west, the southwest and the east (mostly completed by 1904).
History:
Liverpool James Street railway station, together with Hamilton Square underground station in Birkenhead, are the oldest deep level underground stations in the world, while London's underground stations were just below the street surface, built by means of the cut-and-cover method. The Mersey stations were so deep that they required lifts for easy access; this gave another world first in having the first lift-accessed stations. The lifts were hydraulically operated.
History:
For the first deep-level tube line, the City and South London Railway, two 10 feet 2 inches (3.10 m) diameter circular tube tunnels were dug between King William Street (close to today's Monument station) and Stockwell, following under the roads above to avoid the need for an agreement with owners of property on the surface. This opened in 1890 with electric locomotives that hauled carriages with small opaque windows, nicknamed "padded cells".
Construction:
Cut-and-cover, as described above, is a method for shallow tunnels in which a trench is excavated and roofed over. Modern deep level construction is usually done using tunnel boring machines.
Construction:
In London, the Circle, District, Hammersmith & City, and Metropolitan lines are services that run on the sub-surface network that has railway tunnels just below the surface and built mostly using the "cut-and-cover" method. The tunnels and trains are of a similar size to those on British main lines. The Hammersmith & City and Circle lines share all their stations and most of the track with other lines. The Bakerloo, Central, Jubilee, Northern, Piccadilly, Victoria and Waterloo & City lines are deep-level tube lines, using smaller trains that run through two circular tunnels with a diameter of about 11 feet 8 inches (3.56 m), lined with cast iron or precast concrete rings, which were bored using a tunnelling shield. These were referred to as the tube lines, although since the 1950s the term "tube" has come to be used to refer to the whole London Underground system.
Deepest train stations:
Deep level train stations are not common, although many metro systems in the Commonwealth of Independent States, as well as some London Underground lines, have deep level stations. The deepest mainline train station in operation, Jerusalem–Yitzhak Navon, has platforms 80 metres below street level.
Deepest mines:
The deepest mines in the world are the TauTona (Western Deep Levels) and Savuka gold mines in the Witwatersrand region of South Africa, which are currently working at depths exceeding 3,900 m (12,800 ft). There are plans to extend Mponeng mine, a sister mine to TauTona, down to 4,500 m (14,800 ft) in the coming years.
This region is also the location of the harshest conditions for hard rock mining, where workers toil in temperatures of up to 45 °C (113 °F). However, massive refrigeration plants are used to bring the air temperature down to around 28 °C (82 °F).
Deepest mines:
The deepest hard rock mine in North America is Agnico-Eagle's LaRonde mine, which mines gold, zinc, copper and silver ores roughly 45 km (28 mi) east of Rouyn-Noranda in Cadillac, Quebec. LaRonde's Penna shaft (#3 shaft) is believed to be the deepest single lift shaft in the Western Hemisphere. The #4 shaft bottoms out at over 3,000 m (9,800 ft) down. Their LaRonde mine expansion sees open stopes down to a depth of over 3,000 m (9,800 ft), the deepest longhole open stopes in the world.
Deepest mines:
The deepest mine in Europe is the 16th shaft of the uranium mines in Příbram, Czech Republic, at 1,838 m (6,030 ft); the second deepest is Bergwerk Saar in Saarland, Germany, at 1,750 m.
The deepest hard rock mines in Australia are the copper and zinc lead mines in Mount Isa, Queensland at 1,800 m (5,900 ft).
The deepest platinum-palladium mines in the world are on the Merensky Reef, in South Africa, with a resource of 203 million troy ounces, currently worked to approximately 2,200 m (7,200 ft) depth.
The deepest tourist level mine is Guido Mine and Coal Mining Museum in Zabrze, Poland.
The deepest borehole is the Kola Superdeep Borehole in Murmansk Oblast, Russia. At 12,262 m (40,230 ft), it is the deepest artificial point on Earth.
**Media scrum**
Media scrum:
A media scrum is an improvised press conference, often held immediately outside an event such as a legislative session or meeting. Scrums play a central role in Canadian politics and also occur in the United Kingdom, the United States, Australia and New Zealand. In New Zealand, such informal press events are also called media stand-ups or gaggles.
Etymology:
A scrum in rugby is a procedure to restart the game. From the outside, it may seem to involve players from both teams clustering tightly around the ball competing for possession. Analogously, in a media scrum reporters cluster around a public figure competing for his or her attention.
Canada:
In Canada, the scrum is a daily ritual in the hallway outside the House of Commons. Members of the Parliamentary Press Gallery surround politicians as they exit the chamber. The disorganization and pressure of the scrum makes it notorious for drawing remarks that are unplanned or controversial: Liberal MP Carolyn Parrish remarked, "damn Americans, I hate those bastards" during a scrum in the run-up to the Iraq War.
Because of these concerns, politicians have sometimes tried to avoid the scrum in favour of more formal venues. Canadian Alliance leader Stockwell Day declined to scrum, instead holding a daily press conference. Brian Mulroney restricted scrums during his time as Prime Minister of Canada by positioning himself on the stairway up to his office, which allowed him to tower over the media on the steps below him. The media so resented this practice that when Jean Chrétien held a "staircase scrum" soon after assuming office, the negative reaction led him to promise never to do it again. By contrast, although Pierre Trudeau's relationship with the press was rocky, he was famously quick-witted and enjoyed deflecting — or returning — barbs from reporters. Many of his famous quotations, including "there's no place for the state in the bedrooms of the nation" and "just watch me", were made during scrums.
**Makyō**
Makyō:
The term makyō (魔境, makyō) is a Japanese word that literally means "realm of demons/monsters" or "uncanny realm"; it can also connote a forsaken place or hellhole.
Makyō in Zen Meditation:
In Zen, Makyō is a figurative reference to the kind of self-delusion that results from clinging to experience. Makyō used in a broad sense refers to people's attachments to experiences in their everyday lives. However, makyō may also be used in a more specific sense, referring to illusory experiences that sometimes occur during Zen meditation. In Philip Kapleau's The Three Pillars of Zen, Hakuun Yasutani explained the term as the combination of "ma" meaning devil and "kyo" meaning the objective world. This character for "devil" can also refer to Mara, the Buddhist "tempter" figure; and the character kyo can mean simply region, condition or place. Makyō refers to the hallucinations and perceptual distortions that can arise during the course of meditation and can be mistaken by the practitioner for "seeing the true nature" or kenshō. Zen masters warn their meditating students to ignore sensory distortions. As Hakuun Yasutani states: "Makyō are the phenomena–visions, hallucinations, fantasies, revelations, illusory sensations–which one practicing zazen is apt to experience at a particular stage in his sitting. ...Never be tempted into thinking that these phenomena are real or that the visions themselves have any meaning. To have a beautiful vision of Buddha does not mean that you are any nearer becoming one yourself, any more than a dream of being a millionaire means you are any richer when you awake." Makyō can take many different forms depending on an individual's personality and temperament. They can occur in the form of visions and perceptual distortions, but they can also be experiences of blank, trance-like absorption states. In the Zen school, it is understood that such experiences – however fascinating they may be – are not a true and final enlightenment.
Makyō in Zen Meditation:
John Daido Loori offers a similar description of makyō in The Art of Just Sitting. Loori writes: "Sometimes during sitting people have what we call makyo: a vision or hallucination. Other times it's a smell or sound. Students often think this means they're enlightened–particularly if the image is related to Zen, like the Buddha sitting on a golden lotus–and they immediately run off to dokusan (独参) to get it confirmed. The teacher will usually listen and then say something like, "Maybe you're not sitting straight. Sit straight. Don't worry, it will go away." It doesn't matter whether we attach to a regular thought, or to the thought of enlightenment. Whatever it is, it is still attachment." Again, experiences of makyō vary in form, and many Zen teachers do not recognize such experiences as signs of enlightenment. Equating makyō with enlightenment is instead seen as a form of attachment to experience.
Makyō in Zen Meditation:
Robert Baker Aitken described makyō as a class of delusions. He mentioned several examples of makyō outside of Zen meditation, including hearing heavenly voices, speaking in tongues, hallucinations such as a flock of white doves descending into one's body, and experiences of astral projection. Aitken thought makyō might be valuable to people interested in the rich potential of what human minds can experience, but he believed makyo had little or nothing to offer people interested in personal insight. More specific to Zen, Aitken claimed that makyō indicate progress in meditation. Makyō indicate that people have passed beyond superficial stages of thinking about Zen and their meditation. Yet, Aitken considered it a grave mistake to equate makyō with something final or ultimate. He wrote: "Always relate your makyo to your teacher, but do not try to cultivate them, for they are spontaneous and cannot be summoned up. When they do occur, let them go as you would any other delusion. ...No matter how interesting and encouraging thoughts or makyo may be, they are self-limited." Yet again, makyō appear in a great variety of forms, but they are not considered signs of enlightenment.
Comparisons to Other Traditions:
Experiences comparable to makyō occur in other meditative traditions. In some Hindu schools, experiences similar to makyō are regarded as a product of the sukshma sharira, or "experience body", in its unstable state. Such experiences are viewed as another form of maya, the illusory nature of the world as apprehended by ordinary consciousness. Tibetan contemplative literature uses the parallel term nyam; these experiences fall into three categories, usually listed as clarity, bliss, and non-conceptuality. Many types of meditation phenomena can be classed under this rubric, and are generally tied to the reorganization of the body's subtle energies that can occur in meditation. See Dudjom Lingpa (cited in Wallace, The Attention Revolution) and Padmasambhava (in Treasures from the Juniper Ridge) for more specific examples.
**Wilson–Mikity syndrome**
Wilson–Mikity syndrome:
Wilson–Mikity syndrome, a form of chronic lung disease (CLD) that exists only in premature infants, leads to progressive or immediate development of respiratory distress. This rare condition affects low birth weight babies and is characterized by rapid development of lung emphysema after birth, requiring prolonged ventilation and oxygen supplementation. It is closely related to bronchopulmonary dysplasia (BPD), differing mainly in the lack of prior ventilatory support. All the initial patients described with Wilson–Mikity syndrome were very low birth weight infants that had no history of mechanical ventilation, yet developed a syndrome that clinically resembled BPD. Upon the death of some of these infants, autopsies showed histologic changes similar to those seen in BPD. It was characterized by Miriam G. Wilson and Victor G. Mikity in 1960.
Symptoms and signs:
The onset of respiratory difficulty occurs from the first day of life and can continue up to three weeks into the infant's life, at which point treatment is needed for the infant's survival.
Complications:
Mothers who have developed chorioamnionitis during pregnancy put their infant at higher risk for development of Wilson-Mikity syndrome. It is a rare complication that requires prolonged treatment. Infection, however, is not shown to be an etiological factor, but a correlation to chorioamnionitis is identified as a risk.
Cause:
The cause of Wilson-Mikity syndrome is unknown.
Diagnosis:
The diagnosis of Wilson-Mikity syndrome can be made through two distinct findings: characteristics analogous to respiratory distress syndrome and the presence of diffuse, streaky infiltrates with small cystic changes seen on a chest X-ray. Early screening allows for the identification of a collapsed lung, cystic changes within the lung, and the possible start of right-sided heart failure. Upon autopsy, alveolar collapse and alveoli rupture can be seen. This can reduce the number of capillaries within the system and lead to cyanosis. Cyanosis occurs from chronic or intermittent respiratory distress and episodes of dyspnea (or apnea). Symptoms can develop within hours of birth or appear gradually; infants may experience transient respiratory distress, which can delay diagnosis by around 30 to 40 days. Dangerous recurrent apnea (or dyspnea) can occur in the first two to six weeks postpartum. This cessation of breathing can progress to cyanosis and lung collapse.
Diagnosis:
Infants display deteriorating respiratory symptoms along with early chronic lung changes which can be seen on chest radiography. These changes are diagnosed either directly upon birth or within the first month, as the premature infant requires mechanical ventilation for survival.
Treatment:
When caught early enough, continuous mechanical oxygen therapy can be used to reverse the infant's poor circulation and decreased blood oxygen, a sign known as cyanosis. Improvement is gradual; however, cases show that after the first year of treatment using oxygen therapy and mechanical ventilators, infants show normal respiratory activity and are free from chest infiltrates with small cystic changes. Absence of fever and a normal white blood cell count correspond to successful reversion and allow for a positive prognosis. When not managed properly, oxygen supplementation and ventilation can put the infant at risk for rare complications. If not enough oxygen is administered, the apnea continues and the infant is unable to recover properly. In contrast, too much oxygen can lead to a higher risk of retrolental fibroplasia and/or oxygen toxicity within the lungs. Continued dyspnea is a sign that Wilson-Mikity syndrome is still affecting the infant; increased ventilation to allow proper respiration is then required. Patients in recovery are slowly taken off oxygen support and eventually are able to ventilate with minimal to no respiratory distress. There is a lack of research on the long-term effects of Wilson-Mikity syndrome into adulthood.
Epidemiology:
Around 75% of affected infants survive and are able to receive oxygen therapies and treatments to overcome this disease. In fatal cases, infants do not have noticeable or substantial respiratory recovery and can develop right-sided heart failure, ultimately leading to death. Patients that are not recovering will also continue to show signs of dyspnea, respiratory distress, and continued low body weight, heightening the risk of death. Infants that survive six months or longer have a substantially better prognosis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Perpetual student**
Perpetual student:
A perpetual student or career student is a college or university attendee who re-enrolls for several years more than is necessary to obtain a given degree, or who pursues multiple terminal degrees. Perpetual students might publish or work in several fields.
Examples:
Luciano Baietti (born September 1, 1947) holds fifteen academic degrees, including the subjects of physical education, law, literature, philosophy, sociology, criminology, and military strategy. The Guinness Book of World Records officially recognizes Baietti as the most graduated living person in the world.
Dr. Robert W. McGee - Holder of 23 academic degrees including 13 doctorates.
Examples:
Dr. Bruce Berry (1940-2014), notable for being a school crossing guard, having retired from a career including technical document translation for Agfa-Gevaert, working for the Post Office, and teaching, took his first degree from Manchester University in 1963. He continued to study from the 1970s onwards, coming to possess several further Bachelor's and master's degrees (from universities including the University of Leeds, the University of York and Normandy University, Caen), as well as a Ph.D. from Leeds Metropolitan University. He died before completing his twelfth degree, another Ph.D. Fluent in several languages, he was also a Fellow of the Chartered Institute of Linguists.
Examples:
Benjamin Bolger, who received his first four-year degree from the University of Michigan, has seventeen degrees from different colleges.
Examples:
Milton De Jesús has been a student at the University of Puerto Rico since 1963. In 2010, De Jesús was interviewed by a newspaper, since he was the only student on the campus who could compare the 2010 student strikes with those of the 1970s, 80s, 90s, and 2005. According to his Facebook page, De Jesús graduated from the University of Puerto Rico in 2005.
Examples:
Shrikant Jichkar (14 September 1954 – 2 June 2004) held twenty academic degrees.
Johnny Lechner attended the University of Wisconsin–Whitewater from 1994 to at least 2005. He was scheduled to graduate in 2008 with multiple majors and minors, but continued into a 15th year of college.
Michael Nicholson (1941?- Present) has 28 degrees, including 22 master's degrees and one doctorate.
V. N. Parthiban holds the overall record for the most degrees earned in history, with one hundred forty-five. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Knee examination**
Knee examination:
The knee examination, in medicine and physiotherapy, is performed as part of a physical examination, or when a patient presents with knee pain or a history that suggests a pathology of the knee joint.
The exam includes several parts: position/lighting/draping, inspection, palpation, and motion. The latter three steps are often remembered with the saying look, feel, move.
History taking:
Before performing a physical examination of the knee joint, a comprehensive history should be taken. A thorough history can be helpful in locating the possible pathological site during the physical examination. The mechanism of injury, the location and character of the knee pain, the presence of a "pop" sound at the time of the injury (which indicates a ligamentous tear or fracture), swelling, infections, ability to stand or walk, a sensation of instability (suggestive of subluxation), and any previous traumatic injuries to the joint are all important historical features. The most common knee problems are soft tissue inflammation, injury, and osteoarthritis. The mechanism of the knee injury can give a clue to the structures that may be injured. For example, applying valgus stress on the knee can cause medial collateral ligament rupture, while a varus force can cause lateral collateral ligament rupture. When a person suddenly slows down during running, twisting, or pivoting with a valgus force applied to the knee, the anterior cruciate ligament can rupture. Posterior dislocation of the tibia can cause posterior cruciate ligament injury. Twisting and pivoting while bearing weight can cause tearing of the meniscus. Fractures of the knee are less common but should be considered if direct trauma to the knee has occurred, such as during a fall. Examples of fractures involving the knee joint are tibial plateau fractures, fractures of the lateral condyle of the femur, fractures of the medial condyle of the femur, and patellar fractures. For non-traumatic causes of knee pain, features such as fever, morning stiffness, pain after exercise, infections, a history of gout or psoriasis, and previous activities that contribute to long-term overuse of the knee joint should be asked about. Knee pain due to long-term overuse is reproducible; for example, repetitive jumping can cause inflammation of the patellar tendon, and repetitive kneeling can cause prepatellar inflammation of the synovial bursa.
General examination:
Physical examination of the knee begins by observing the person's gait to assess for any abnormalities seen while walking. Gait assessment can be used to differentiate genuine knee pain from pain referred from the hip, lower back, or foot. A person can be asked to perform a duckwalk, which requires the person to squat and walk in that position. In order to perform a duckwalk, the person has to be free of ligamentous tears, knee effusions, and meniscal tears. The person can also be asked to stand with both feet together. This position is useful to observe for valgus or varus deformity of the knees, which is suggestive of osteoarthritis. The circumference of each thigh can be measured to look for wasting of the quadriceps muscles. The skin around the knee can also be observed for psoriasis, hematoma, rash, abrasions, lacerations, or cellulitis, which could be important causes of the knee pathology.
Palpation:
Palpation of the knee should begin from the unaffected side first. This will reassure the patient and is useful for comparison with the affected knee. The back of the hand can be used to assess the warmth of the knee. The knee is then flexed 90 degrees and the anterior structures are assessed. Inflammation of the patellar tendon is present if the patellar tendon is painful upon palpation. Radiographic imaging should be done if the examination findings fulfill the Ottawa knee rules: age 55 years and older, pain at the head of the fibula, patellar pain, inability to flex the knee to 90 degrees, and inability to stand and walk at least four steps. If anterior cruciate ligament injury is suspected, radiographic imaging should also be ordered because it is frequently associated with lateral tibial plateau fracture. If there is a painful, reddish, and warm swelling in front of the patella, acute prepatellar bursitis should be considered, which may require aspiration or drainage. Those presenting with these features usually have a history of frequent kneeling and direct trauma over the knee. Pain, swelling, and a defect at the insertion of the quadriceps tendon into the superior part of the patella are suggestive of quadriceps tendon rupture. A "pop" sound may be associated with this injury, followed by the loss of the ability to straighten the knee (knee extension). Pain at the medial joint line (medial to the inferior border of the patella) indicates medial compartment osteoarthritis, injury to the medial collateral ligament, or a medial meniscal tear. Pain at the midpoint between the anterior part of the medial joint line and the tibial tuberosity is suggestive of pes anserine bursitis (inflammation of the anserine bursa). Lateral joint line tenderness is associated with lateral compartment osteoarthritis, lateral collateral ligament injury, and lateral meniscal tear. Pain at the lateral femoral condyle is suggestive of iliotibial band syndrome. Swelling at the popliteal fossa may reveal a Baker's cyst.
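The Ottawa criteria listed above behave as an any-of decision rule: if any one criterion is met, radiography is indicated. A minimal sketch of that logic follows; the function and parameter names are hypothetical, chosen only to mirror the wording above.

```python
# A minimal sketch (hypothetical function and parameter names) of the Ottawa knee rule
# criteria listed above, treated as an any-of decision rule for ordering radiographs.
def ottawa_knee_rule_positive(age, fibular_head_pain, patellar_pain,
                              can_flex_to_90, can_walk_four_steps):
    return (age >= 55
            or fibular_head_pain
            or patellar_pain
            or not can_flex_to_90
            or not can_walk_four_steps)

# Example: a 40-year-old with isolated patellar pain still meets the rule.
print(ottawa_knee_rule_positive(40, False, True, True, True))  # True
```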
Motion:
Assessment of effusion:
The absence of the normal grooves around the patella may indicate a patellar intra-articular effusion. There are two ways to confirm the effusion; the knee is extended fully before the examination begins. The first way is the patellar tap: the fluid is squeezed between the patella and the femur by pressing at the medial patella using the non-dominant hand, and the dominant hand is then used to press on the patella vertically. If the patella is ballotable, a patellar intra-articular effusion is present. Another way is milking of the patella. First, the effusion is milked along the medial border of the patella from the inferior to the superior aspect. Then, using the other hand, the effusion is milked along the lateral border of the patella from the superior to the inferior aspect. If an effusion is present, a bulge will appear at the medial border of the patella because the effusion is milked back to the medial patella.
Motion:
Assessment of range of motion:
Both the active and passive range of motion should be assessed. The normal knee extension is between 0 and 10 degrees. The normal knee flexion is between 130 and 150 degrees. Any pain, abnormal movement, or crepitus of the patella should be noted. If there is pain or crepitus during active extension of the knee, while the patella is being compressed against the patellofemoral groove, patellofemoral pain syndrome or chondromalacia patellae should be suspected. Pain with active range of motion but no pain during passive range of motion is suggestive of inflammation of the tendon. Pain during active and passive range of motion is suggestive of pathology in the knee joint.
Motion:
Assessment of collateral ligaments:
The valgus stress test can be performed with the examined knee in 25 degrees of flexion to determine the integrity of the medial collateral ligament. Similarly, the varus stress test can be performed to assess the integrity of the lateral collateral ligament. The degree of collateral ligament sprain can also be assessed during the valgus and varus tests. In a first degree tear, the ligament has less than 5 mm of laxity with a definite resistance when the knee is pulled. In a second degree sprain, there is laxity when the knee is tested at 25 degrees of flexion, but no laxity at extension, with a definite resistance when the knee is pulled. In a third degree tear, there will be 10 mm of laxity with no definite resistance, either with the knee in full extension or in flexion.
Motion:
Assessment of anterior cruciate ligament:
The anterior drawer and Lachman tests can be used to assess the integrity of the anterior cruciate ligament. In the anterior drawer test, the person being examined should lie down on their back (supine position) with the knee in 90 degrees of flexion. The foot is secured on the bed with the examiner sitting on the foot. The tibia is then pulled forward using both hands. If the anterior movement of the affected knee is greater than that of the unaffected knee, the anterior drawer test is positive. The Lachman test is more sensitive than the anterior drawer test. For the Lachman test, the person lies down in the supine position with the knee flexed at 20 degrees and the heel touching the bed. The tibia is then pulled forward. If there is 6 to 8 millimeters of laxity, with no definitive resistance when the knee is pulled, then the test is positive, raising concern for a torn anterior cruciate ligament. A large collection of blood in the knee can be associated with bony fractures and cruciate ligament tears.
Motion:
Assessment of posterior cruciate ligament:
The posterior drawer test and tibial sag test can determine the integrity of the posterior cruciate ligament. Similar to the anterior drawer test, the knee should be flexed to 90 degrees and the tibia is pushed backwards. If the tibia can be pushed posteriorly, the posterior drawer test is positive. In the tibial sag test, both knees are flexed at 90 degrees with the person in the supine position and both feet touching the bed. Both knees are then watched for posterior displacement of the tibia. If the affected tibia slowly displaces posteriorly, the posterior cruciate ligament is affected.
Motion:
Assessment of meniscus:
Those with meniscal injuries may report symptoms such as clicking, catching, or locking of the knee. Apart from joint line tenderness, there are three other methods of assessing a meniscal tear: the McMurray test, the Thessaly test, and the Apley grind test. In the McMurray test, the person lies down in the supine position with the knee in 90 degrees of flexion. The examiner puts one hand with the thumb and the index finger on the medial and lateral joint lines respectively; the other hand is used to control the heel. To test the medial meniscus, the hand at the heel applies a valgus force and externally rotates the leg while extending the knee. To test the lateral meniscus, a varus force and internal rotation are applied to the leg while extending the knee. Any clicking, popping, or catching at the respective joint line indicates the corresponding meniscal tear. In the Apley compression test, the person lies down in the prone position with the knee flexed at 90 degrees. One hand is used to stabilise the hip and the other hand grasps the foot and applies a downward compression force while externally and internally rotating the leg. Pain during compression indicates a meniscal tear. Examination for anterior cruciate ligament tear should be done for those with a meniscal tear because these two conditions often occur together.
Motion:
Additional tests:
Clarke's test may be used to examine for patellofemoral pain. The Wilson test is used to detect the presence of osteochondritis dissecans in the knee. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nilpotent orbit**
Nilpotent orbit:
In mathematics, nilpotent orbits are generalizations of nilpotent matrices that play an important role in representation theory of real and complex semisimple Lie groups and semisimple Lie algebras.
Definition:
An element X of a semisimple Lie algebra g is called nilpotent if its adjoint endomorphism ad X: g → g, ad X(Y) = [X,Y], is nilpotent, that is, (ad X)^n = 0 for large enough n. Equivalently, X is nilpotent if its characteristic polynomial p_ad X(t) is equal to t^(dim g).
A semisimple Lie group or algebraic group G acts on its Lie algebra via the adjoint representation, and the property of being nilpotent is invariant under this action. A nilpotent orbit is an orbit of the adjoint action such that any (equivalently, all) of its elements is (are) nilpotent.
Examples:
Nilpotent n×n matrices with complex entries form the main motivating case for the general theory, corresponding to the complex general linear group. From the Jordan normal form of matrices we know that each nilpotent matrix is conjugate to a unique matrix with Jordan blocks of sizes λ1 ≥ λ2 ≥ … ≥ λr, where λ is a partition of n. Thus in the case n=2 there are two nilpotent orbits: the zero orbit, consisting of the zero matrix and corresponding to the partition (1,1), and the principal orbit, consisting of all non-zero matrices A with zero trace and zero determinant, that is, matrices of the form A = [[x, y], [z, −x]] (written by rows) with (x, y, z) ≠ (0, 0, 0) and x² + yz = 0, corresponding to the partition (2). Geometrically, this orbit is a two-dimensional complex quadratic cone in the four-dimensional vector space of 2×2 matrices, minus its apex.
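As a quick illustration of this example, the following sympy snippet (purely a sketch, not part of the source material) verifies that such a traceless matrix squares to (x² + yz) times the identity, so it is nilpotent exactly on the cone x² + yz = 0.

```python
# Symbolic check of the n = 2 example: the traceless matrix A = [[x, y], [z, -x]]
# satisfies A**2 = (x**2 + y*z) * I, so A is nilpotent exactly when x**2 + y*z = 0.
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([[x, y], [z, -x]])
print(A.trace())   # 0
print(A**2)        # Matrix([[x**2 + y*z, 0], [0, x**2 + y*z]])
```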
Examples:
The complex special linear group is a subgroup of the general linear group with the same nilpotent orbits. However, if we replace the complex special linear group with the real special linear group, new nilpotent orbits may arise. In particular, for n=2 there are now 3 nilpotent orbits: the zero orbit and two real half-cones (without the apex), corresponding to positive and negative values of y−z in the parametrization above.
Properties:
Nilpotent orbits can be characterized as those orbits of the adjoint action whose Zariski closure contains 0.
Nilpotent orbits are finite in number.
The Zariski closure of a nilpotent orbit is a union of nilpotent orbits.
Properties:
Jacobson–Morozov theorem: over a field of characteristic zero, any nilpotent element e can be included into an sl2-triple {e,h,f} and all such triples are conjugate by ZG(e), the centralizer of e in G. Together with the representation theory of sl2, this allows one to label nilpotent orbits by finite combinatorial data, giving rise to the Dynkin–Kostant classification of nilpotent orbits.
Poset structure:
Nilpotent orbits form a partially ordered set: given two nilpotent orbits, O1 is less than or equal to O2 if O1 is contained in the Zariski closure of O2. This poset has a unique minimal element, zero orbit, and unique maximal element, the regular nilpotent orbit, but in general, it is not a graded poset. If the ground field is algebraically closed then the zero orbit is covered by a unique orbit, called the minimal orbit, and the regular orbit covers a unique orbit, called the subregular orbit.
Poset structure:
In the case of the special linear group SLn, the nilpotent orbits are parametrized by the partitions of n. By a theorem of Gerstenhaber, the ordering of the orbits corresponds to the dominance order on the partitions of n. Moreover, if G is an isometry group of a bilinear form, i.e. an orthogonal or symplectic subgroup of SLn, then its nilpotent orbits are parametrized by partitions of n satisfying a certain parity condition and the corresponding poset structure is induced by the dominance order on all partitions (this is a nontrivial theorem, due to Gerstenhaber and Hesselink). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Microarchitecture**
Microarchitecture:
In computer science and computer engineering, microarchitecture, also called computer organization and sometimes abbreviated as µarch or uarch, is the way a given instruction set architecture (ISA) is implemented in a particular processor. A given ISA may be implemented with different microarchitectures; implementations may vary due to different goals of a given design or due to shifts in technology. Computer architecture is the combination of microarchitecture and instruction set architecture.
Relation to instruction set architecture:
The ISA is roughly the same as the programming model of a processor as seen by an assembly language programmer or compiler writer. The ISA includes the instructions, execution model, processor registers, address and data formats among other things. The microarchitecture includes the constituent parts of the processor and how these interconnect and interoperate to implement the ISA.
Relation to instruction set architecture:
The microarchitecture of a machine is usually represented as (more or less detailed) diagrams that describe the interconnections of the various microarchitectural elements of the machine, which may be anything from single gates and registers, to complete arithmetic logic units (ALUs) and even larger elements. These diagrams generally separate the datapath (through which data flows and is operated on) and the control path (which can be said to steer the data). The person designing a system usually draws the specific microarchitecture as a kind of data flow diagram. Like a block diagram, the microarchitecture diagram shows microarchitectural elements such as the arithmetic and logic unit and the register file as a single schematic symbol. Typically, the diagram connects those elements with arrows, thick lines and thin lines to distinguish between three-state buses (which require a three-state buffer for each device that drives the bus), unidirectional buses (always driven by a single source, such as the way the address bus on simpler computers is always driven by the memory address register), and individual control lines. Very simple computers have a single data bus organization – they have a single three-state bus. The diagram of more complex computers usually shows multiple three-state buses, which help the machine do more operations simultaneously.
Relation to instruction set architecture:
Each microarchitectural element is in turn represented by a schematic describing the interconnections of logic gates used to implement it. Each logic gate is in turn represented by a circuit diagram describing the connections of the transistors used to implement it in some particular logic family. Machines with different microarchitectures may have the same instruction set architecture, and thus be capable of executing the same programs. New microarchitectures and/or circuitry solutions, along with advances in semiconductor manufacturing, are what allows newer generations of processors to achieve higher performance while using the same ISA.
Relation to instruction set architecture:
In principle, a single microarchitecture could execute several different ISAs with only minor changes to the microcode.
Aspects:
The pipelined datapath is the most commonly used datapath design in microarchitecture today. This technique is used in most modern microprocessors, microcontrollers, and DSPs. The pipelined architecture allows multiple instructions to overlap in execution, much like an assembly line. The pipeline includes several different stages which are fundamental in microarchitecture designs. Some of these stages include instruction fetch, instruction decode, execute, and write back. Some architectures include other stages such as memory access. The design of pipelines is one of the central microarchitectural tasks.
Aspects:
Execution units are also essential to microarchitecture. Execution units include arithmetic logic units (ALU), floating point units (FPU), load/store units, branch prediction, and SIMD. These units perform the operations or calculations of the processor. The choice of the number of execution units, their latency and throughput is a central microarchitectural design task. The size, latency, throughput and connectivity of memories within the system are also microarchitectural decisions.
Aspects:
System-level design decisions such as whether or not to include peripherals, such as memory controllers, can be considered part of the microarchitectural design process. This includes decisions on the performance-level and connectivity of these peripherals.
Unlike architectural design, where achieving a specific performance level is the main goal, microarchitectural design pays closer attention to other constraints. Since microarchitecture design decisions directly affect what goes into a system, attention must be paid to issues such as chip area/cost, power consumption, logic complexity, ease of connectivity, manufacturability, ease of debugging, and testability.
Microarchitectural concepts:
Instruction cycles:
To run programs, all single- or multi-chip CPUs: read an instruction and decode it; find any associated data that is needed to process the instruction; process the instruction; and write the results out. The instruction cycle is repeated continuously until the power is turned off.
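As a purely illustrative sketch of these four steps, the toy loop below (a hypothetical mini instruction set, not any real CPU) reads and decodes an instruction, gathers its operands, processes it, and writes the result back until the program ends.

```python
# A toy illustration of the four-step instruction cycle described above.
def run(program, registers):
    pc = 0
    while pc < len(program):                      # repeats until the program ends
        op, dst, src1, src2 = program[pc]         # 1. read the instruction and decode it
        a, b = registers[src1], registers[src2]   # 2. find the associated data
        if op == "add":                           # 3. process the instruction
            result = a + b
        elif op == "sub":
            result = a - b
        else:
            raise ValueError(f"unknown opcode {op}")
        registers[dst] = result                   # 4. write the results out
        pc += 1
    return registers

print(run([("add", "r2", "r0", "r1"), ("sub", "r3", "r2", "r0")],
          {"r0": 5, "r1": 7, "r2": 0, "r3": 0}))
```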
Microarchitectural concepts:
Multicycle microarchitecture:
Historically, the earliest computers were multicycle designs. The smallest, least-expensive computers often still use this technique. Multicycle architectures often use the least total number of logic elements and reasonable amounts of power. They can be designed to have deterministic timing and high reliability. In particular, they have no pipeline to stall when taking conditional branches or interrupts. However, other microarchitectures often perform more instructions per unit time, using the same logic family. When discussing "improved performance," an improvement is often relative to a multicycle design.
Microarchitectural concepts:
In a multicycle computer, the computer does the four steps in sequence, over several cycles of the clock. Some designs can perform the sequence in two clock cycles by completing successive stages on alternate clock edges, possibly with longer operations occurring outside the main cycle. For example, stage one on the rising edge of the first cycle, stage two on the falling edge of the first cycle, etc.
Microarchitectural concepts:
In the control logic, the combination of cycle counter, cycle state (high or low) and the bits of the instruction decode register determine exactly what each part of the computer should be doing. To design the control logic, one can create a table of bits describing the control signals to each part of the computer in each cycle of each instruction. Then, this logic table can be tested in a software simulation running test code. If the logic table is placed in a memory and used to actually run a real computer, it is called a microprogram. In some computer designs, the logic table is optimized into the form of combinational logic made from logic gates, usually using a computer program that optimizes logic. Early computers used ad-hoc logic design for control until Maurice Wilkes invented this tabular approach and called it microprogramming.
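To make the tabular idea concrete, the sketch below maps each (instruction, cycle) pair to the control bits to assert; the instructions and signal names are hypothetical, but looking such a table up on every cycle is essentially what a microprogram stored in memory does.

```python
# A minimal sketch (hypothetical control signals, not a real design) of tabular control:
# each (instruction, cycle) pair selects the set of control bits to assert.
CONTROL_TABLE = {
    ("LOAD", 0): {"mem_read": 1, "reg_write": 0, "alu_enable": 0},
    ("LOAD", 1): {"mem_read": 0, "reg_write": 1, "alu_enable": 0},
    ("ADD",  0): {"mem_read": 0, "reg_write": 0, "alu_enable": 1},
    ("ADD",  1): {"mem_read": 0, "reg_write": 1, "alu_enable": 0},
}

def control_signals(instruction, cycle):
    """Look up the control word for this instruction in this cycle (a tiny 'microprogram')."""
    return CONTROL_TABLE[(instruction, cycle)]

print(control_signals("LOAD", 1))
```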
Microarchitectural concepts:
Increasing execution speed:
Complicating this simple-looking series of steps is the fact that the memory hierarchy, which includes caching, main memory and non-volatile storage like hard disks (where the program instructions and data reside), has always been slower than the processor itself. Step (2) often introduces a lengthy (in CPU terms) delay while the data arrives over the computer bus. A considerable amount of research has been put into designs that avoid these delays as much as possible. Over the years, a central goal was to execute more instructions in parallel, thus increasing the effective execution speed of a program. These efforts introduced complicated logic and circuit structures. Initially, these techniques could only be implemented on expensive mainframes or supercomputers due to the amount of circuitry needed for these techniques. As semiconductor manufacturing progressed, more and more of these techniques could be implemented on a single semiconductor chip. See Moore's law.
Microarchitectural concepts:
Instruction set choice:
Instruction sets have shifted over the years, from originally very simple to sometimes very complex (in various respects). In recent years, load–store architectures, VLIW and EPIC types have been in fashion. Architectures that are dealing with data parallelism include SIMD and Vectors. Some labels used to denote classes of CPU architectures are not particularly descriptive, especially so the CISC label; many early designs retroactively denoted "CISC" are in fact significantly simpler than modern RISC processors (in several respects).
Microarchitectural concepts:
However, the choice of instruction set architecture may greatly affect the complexity of implementing high-performance devices. The prominent strategy, used to develop the first RISC processors, was to simplify instructions to a minimum of individual semantic complexity combined with high encoding regularity and simplicity. Such uniform instructions were easily fetched, decoded and executed in a pipelined fashion, a simple strategy that reduced the number of logic levels needed to reach high operating frequencies; instruction cache-memories compensated for the higher operating frequency and inherently low code density, while large register sets were used to factor out as many of the (slow) memory accesses as possible.
Microarchitectural concepts:
Instruction pipelining:
One of the first, and most powerful, techniques to improve performance is the use of instruction pipelining. Early processor designs would carry out all of the steps above for one instruction before moving on to the next. Large portions of the circuitry were left idle at any one step; for instance, the instruction decoding circuitry would be idle during execution and so on.
Microarchitectural concepts:
Pipelining improves performance by allowing a number of instructions to work their way through the processor at the same time. In the same basic example, the processor would start to decode (step 1) a new instruction while the last one was waiting for results. This would allow up to four instructions to be "in flight" at one time, making the processor look four times as fast. Although any one instruction takes just as long to complete (there are still four steps), the CPU as a whole "retires" instructions much faster.
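The "four times as fast" figure can be checked with a simple cycle count: in an ideal four-stage pipeline, N instructions need roughly N + 3 cycles instead of 4N. A small sketch of that arithmetic (assuming no stalls, which real pipelines cannot guarantee):

```python
def cycles_unpipelined(n_instructions, stages=4):
    # each instruction finishes all four steps before the next one starts
    return n_instructions * stages

def cycles_pipelined(n_instructions, stages=4):
    # ideal pipeline: one instruction completes per cycle once the pipeline is full
    return n_instructions + (stages - 1)

n = 1000
print(cycles_unpipelined(n) / cycles_pipelined(n))  # ~3.99, approaching the 4x figure above
```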
Microarchitectural concepts:
RISC makes pipelines smaller and much easier to construct by cleanly separating each stage of the instruction process and making them take the same amount of time—one cycle. The processor as a whole operates in an assembly line fashion, with instructions coming in one side and results out the other. Due to the reduced complexity of the classic RISC pipeline, the pipelined core and an instruction cache could be placed on the same size die that would otherwise fit the core alone on a CISC design. This was the real reason that RISC was faster. Early designs like the SPARC and MIPS often ran over 10 times as fast as Intel and Motorola CISC solutions at the same clock speed and price.
Microarchitectural concepts:
Pipelines are by no means limited to RISC designs. By 1986 the top-of-the-line VAX implementation (VAX 8800) was a heavily pipelined design, slightly predating the first commercial MIPS and SPARC designs. Most modern CPUs (even embedded CPUs) are now pipelined, and microcoded CPUs with no pipelining are seen only in the most area-constrained embedded processors. Large CISC machines, from the VAX 8800 to the modern Pentium 4 and Athlon, are implemented with both microcode and pipelines. Improvements in pipelining and caching are the two major microarchitectural advances that have enabled processor performance to keep pace with the circuit technology on which they are based.
Microarchitectural concepts:
Cache:
It was not long before improvements in chip manufacturing allowed for even more circuitry to be placed on the die, and designers started looking for ways to use it. One of the most common was to add an ever-increasing amount of cache memory on-die. Cache is very fast and expensive memory. It can be accessed in a few cycles as opposed to many needed to "talk" to main memory. The CPU includes a cache controller which automates reading and writing from the cache. If the data is already in the cache it is accessed from there – at considerable time savings, whereas if it is not the processor is "stalled" while the cache controller reads it in.
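The hit-or-stall behaviour described above can be sketched with a toy direct-mapped cache (a deliberately simplified model, not any particular CPU's design): a hit returns the stored block at once, while a miss fetches the block from the slower backing memory before returning.

```python
# A toy direct-mapped cache: repeated accesses to the same block avoid slow main memory.
class DirectMappedCache:
    def __init__(self, num_lines, block_size):
        self.num_lines = num_lines
        self.block_size = block_size
        self.tags = [None] * num_lines      # which memory block each line currently holds
        self.data = [None] * num_lines

    def access(self, address, memory):
        block = address // self.block_size
        line = block % self.num_lines
        if self.tags[line] == block:        # hit: data served in a few cycles
            return self.data[line], True
        # miss: "stall" while the block is fetched from the slower backing memory
        start = block * self.block_size
        self.data[line] = memory[start:start + self.block_size]
        self.tags[line] = block
        return self.data[line], False

memory = list(range(64))                    # a toy backing store
cache = DirectMappedCache(num_lines=4, block_size=8)
print(cache.access(10, memory)[1])          # False: first access misses and fetches the block
print(cache.access(12, memory)[1])          # True: same block, now a hit
```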
Microarchitectural concepts:
RISC designs started adding cache in the mid-to-late 1980s, often only 4 KB in total. This number grew over time, and typical CPUs now have at least 2 MB, while more powerful CPUs come with 4, 6, or 12 MB, or even 32 MB or more, with the most being 768 MB in the newly released EPYC Milan-X line, organized in multiple levels of a memory hierarchy. Generally speaking, more cache means more performance, due to reduced stalling.
Microarchitectural concepts:
Caches and pipelines were a perfect match for each other. Previously, it didn't make much sense to build a pipeline that could run faster than the access latency of off-chip memory. Using on-chip cache memory instead meant that a pipeline could run at the speed of the cache access latency, a much smaller length of time. This allowed the operating frequencies of processors to increase at a much faster rate than that of off-chip memory.
Microarchitectural concepts:
Branch prediction:
One barrier to achieving higher performance through instruction-level parallelism stems from pipeline stalls and flushes due to branches. Normally, whether a conditional branch will be taken isn't known until late in the pipeline as conditional branches depend on results coming from a register. From the time that the processor's instruction decoder has figured out that it has encountered a conditional branch instruction to the time that the deciding register value can be read out, the pipeline needs to be stalled for several cycles, or if it's not and the branch is taken, the pipeline needs to be flushed. As clock speeds increase the depth of the pipeline increases with it, and some modern processors may have 20 stages or more. On average, every fifth instruction executed is a branch, so without any intervention, that's a high amount of stalling.
Microarchitectural concepts:
Techniques such as branch prediction and speculative execution are used to lessen these branch penalties. Branch prediction is where the hardware makes educated guesses on whether a particular branch will be taken. In practice, one side of the branch will be called much more often than the other. Modern designs have rather complex statistical prediction systems, which watch the results of past branches to predict the future with greater accuracy. The guess allows the hardware to prefetch instructions without waiting for the register read. Speculative execution is a further enhancement in which the code along the predicted path is not just prefetched but also executed before it is known whether the branch should be taken or not. This can yield better performance when the guess is good, with the risk of a huge penalty when the guess is bad because instructions need to be undone.
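One classic, simple form of such a statistical predictor is a two-bit saturating counter kept per branch; the sketch below illustrates the idea only and is far simpler than the predictors in modern designs.

```python
# A two-bit saturating counter per branch: a minimal sketch of history-based prediction.
class TwoBitPredictor:
    def __init__(self):
        self.counters = {}                               # branch address -> counter in 0..3

    def predict(self, branch_addr):
        return self.counters.get(branch_addr, 2) >= 2    # counter >= 2 means "predict taken"

    def update(self, branch_addr, taken):
        c = self.counters.get(branch_addr, 2)
        self.counters[branch_addr] = min(3, c + 1) if taken else max(0, c - 1)

p = TwoBitPredictor()
for outcome in [True, True, False, True, True]:          # a mostly-taken branch at address 0x40
    print(p.predict(0x40), outcome)
    p.update(0x40, outcome)
```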
Microarchitectural concepts:
Superscalar:
Even with all of the added complexity and gates needed to support the concepts outlined above, improvements in semiconductor manufacturing soon allowed even more logic gates to be used.
Microarchitectural concepts:
In the outline above the processor processes parts of a single instruction at a time. Computer programs could be executed faster if multiple instructions were processed simultaneously. This is what superscalar processors achieve, by replicating functional units such as ALUs. The replication of functional units was only made possible when the die area of a single-issue processor no longer stretched the limits of what could be reliably manufactured. By the late 1980s, superscalar designs started to enter the marketplace.
Microarchitectural concepts:
In modern designs it is common to find two load units, one store (many instructions have no results to store), two or more integer math units, two or more floating point units, and often a SIMD unit of some sort. The instruction issue logic grows in complexity by reading in a huge list of instructions from memory and handing them off to the different execution units that are idle at that point. The results are then collected and re-ordered at the end.
Microarchitectural concepts:
Out-of-order execution:
The addition of caches reduces the frequency or duration of stalls due to waiting for data to be fetched from the memory hierarchy, but does not get rid of these stalls entirely. In early designs a cache miss would force the cache controller to stall the processor and wait. Of course there may be some other instruction in the program whose data is available in the cache at that point. Out-of-order execution allows that ready instruction to be processed while an older instruction waits on the cache, then re-orders the results to make it appear that everything happened in the programmed order. This technique is also used to avoid other operand dependency stalls, such as an instruction awaiting a result from a long latency floating-point operation or other multi-cycle operations.
Microarchitectural concepts:
Register renaming:
Register renaming refers to a technique used to avoid unnecessary serialized execution of program instructions because of the reuse of the same registers by those instructions. Suppose we have two groups of instructions that will use the same register. One set of instructions would have to be executed first to leave the register free for the other set; but if the second set is assigned to a different, equivalent register, both sets of instructions can be executed in parallel (or in series).
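The idea can be sketched as a mapping from architectural register names onto a larger pool of physical registers, so that two writes to the same architectural register land in different physical registers and no false dependency arises. The register names below are hypothetical, and real renaming logic also tracks freeing and commit order.

```python
# A sketch of register renaming: architectural names are remapped onto a larger physical pool.
free_physical = ["p0", "p1", "p2", "p3", "p4", "p5"]
rename_map = {}

def rename(dest_arch_reg):
    """Give the destination architectural register a fresh physical register."""
    phys = free_physical.pop(0)
    rename_map[dest_arch_reg] = phys
    return phys

# Two instruction groups both write architectural register r1:
first = rename("r1")    # group 1 writes p0
second = rename("r1")   # group 2 writes p1, so there is no false dependency between the groups
print(first, second, rename_map)
```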
Microarchitectural concepts:
Multiprocessing and multithreading:
Computer architects have become stymied by the growing mismatch in CPU operating frequencies and DRAM access times. None of the techniques that exploited instruction-level parallelism (ILP) within one program could make up for the long stalls that occurred when data had to be fetched from main memory. Additionally, the large transistor counts and high operating frequencies needed for the more advanced ILP techniques required power dissipation levels that could no longer be cheaply cooled. For these reasons, newer generations of computers have started to exploit higher levels of parallelism that exist outside of a single program or program thread.
Microarchitectural concepts:
This trend is sometimes known as throughput computing. This idea originated in the mainframe market where online transaction processing emphasized not just the execution speed of one transaction, but the capacity to deal with massive numbers of transactions. With transaction-based applications such as network routing and web-site serving greatly increasing in the last decade, the computer industry has re-emphasized capacity and throughput issues.
Microarchitectural concepts:
One technique of how this parallelism is achieved is through multiprocessing systems, computer systems with multiple CPUs. Once reserved for high-end mainframes and supercomputers, small-scale (2–8) multiprocessor servers have become commonplace for the small business market. For large corporations, large-scale (16–256) multiprocessors are common. Even personal computers with multiple CPUs have appeared since the 1990s.
Microarchitectural concepts:
With further transistor size reductions made available with semiconductor technology advances, multi-core CPUs have appeared where multiple CPUs are implemented on the same silicon chip. They were initially used in chips targeting embedded markets, where simpler and smaller CPUs would allow multiple instantiations to fit on one piece of silicon. By 2005, semiconductor technology allowed dual high-end desktop CPU CMP chips to be manufactured in volume. Some designs, such as Sun Microsystems' UltraSPARC T1, have reverted to simpler (scalar, in-order) designs in order to fit more processors on one piece of silicon.
Microarchitectural concepts:
Another technique that has become more popular recently is multithreading. In multithreading, when the processor has to fetch data from slow system memory, instead of stalling for the data to arrive, the processor switches to another program or program thread which is ready to execute. Though this does not speed up a particular program/thread, it increases the overall system throughput by reducing the time the CPU is idle.
Microarchitectural concepts:
Conceptually, multithreading is equivalent to a context switch at the operating system level. The difference is that a multithreaded CPU can do a thread switch in one CPU cycle instead of the hundreds or thousands of CPU cycles a context switch normally requires. This is achieved by replicating the state hardware (such as the register file and program counter) for each active thread.
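A rough sketch of that state replication follows, with hypothetical structure names: each hardware thread keeps its own program counter and register file, so "switching" is simply selecting another copy of that state rather than saving and restoring it over many cycles.

```python
# Per-thread state (program counter and register file) is replicated, so a thread switch
# is just choosing another copy of the state; the class names here are illustrative only.
class HardwareThread:
    def __init__(self, pc=0):
        self.pc = pc
        self.registers = [0] * 8            # replicated register file

class MultithreadedCore:
    def __init__(self, n_threads):
        self.threads = [HardwareThread() for _ in range(n_threads)]
        self.current = 0

    def switch_on_stall(self):
        """On a long memory stall, pick another ready thread in a single cycle."""
        self.current = (self.current + 1) % len(self.threads)
        return self.current

core = MultithreadedCore(2)
print(core.switch_on_stall())  # 1: the core keeps busy with thread 1 while thread 0 waits on memory
```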
Microarchitectural concepts:
A further enhancement is simultaneous multithreading. This technique allows superscalar CPUs to execute instructions from different programs/threads simultaneously in the same cycle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**FVWM95**
FVWM95:
FVWM95 is a window manager for the X Window System based on the popular FVWM 2 window manager. It is similar to the original FVWM, but is designed to closely resemble the look of Windows 95.
FVWM95 was for a while a rather popular window manager; for example, Red Hat Linux 5.0 used it as the default. It is no longer as popular, nor is it well-maintained or included in modern Linux distributions. FVWM98 is a derivative of FVWM95 that is designed to look like Windows 98 instead of Windows 95.
FVWM95 was included in Debian from 2000 but was removed in 2006 because of incompatibility with the UTF-8 character encoding system. Similar window managers include Qvwm, IceWM and JWM.
Features:
Windows 95-like appearance.
Taskbar for quick window switching.
Virtual desktop support.
Most features from FVWM 2 (may not include absolute latest bleeding-edge capabilities). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Scooterboy**
Scooterboy:
A scooterboy (or scooter boy) is a member of one of several scooter-related subcultures of the 1960s and later decades, alongside rude boys, mods and skinheads. The term is sometimes used as a catch-all designation for any scootering enthusiast who does not fall into the latter three categories.
Definitions:
Michael Brake identifies the subculture differently, classifying it as a subgroup of the mods, alongside "art school mods", "mainstream mods", and "hard mods". Scooter boys, according to Brake, had "Italian motor scooters (a working-class sports car) covered in accessories and anoraks and wide jeans". According to Colin Shattuck and Eric Peterson, a scooter boy is, more specifically, "one who attends scooter rallies and accumulates event patches on a garment of some kind". The garment is conventionally an MA-1 bomber jacket (scooter jacket), but can be any of several other types of jacket: a mechanic's, a motorcyclist's, or even a parka. According to Kayleen Hazlehurst, the scooterboy with anorak, accessory-covered scooter and industrial work boots was a late-1960s/early-1970s halfway house between the mods and the skinheads. Scooterboy Gaz Kishere suggests a less reductive view: that scooter boys emerged as a breakaway from the strongly 'new mod' conformity of the late-1970s mod revival, which saw a massive re-ignition of scooter riders and interest in travelling to scooter rallies. It enabled people to identify with more diverse groups such as punks and psychobillies, or, for those new to the scooter scene, to keep their own original subculture identity. However, the scooterboy 'birth' was also a reaction to the 'new mod' scene by those who had adopted it as a passing interest or who no longer wished to conform to the mainstreaming of the then scooter scene. It would be difficult to reduce scooterboys to patch-wearing enthusiasts or to a subgroup of mods. As scooter boys continued to find the freedom to emerge, they were as likely to own a leather motorcycle jacket, a grinder, a welder and black paint, and to have long hair: in reality, the antithesis of the Quadrophenia mod. Music biographer Mick Middles observes that the flight-jacketed scooter boy with Dr. Martens shoes was a slightly different image, favoured by scooter boys in the late 1970s scooter revival. He describes the Lambretta boom period from 1968 to 1973 as featuring: [g]iant packs of scooter boys surg[ing] out every Sunday from the big Lancashire towns ... avoiding the faster, dirtier motorbiking 'greasers' and clashing with each other in Blackpool and Southport. Those were the days of Crombie coats and two-tone 'tonic' trousers, of brogues ... and Barathea blazers, of smartness, neatness, in clothes as in music.
Definitions:
He characterises the late 1970s revival, in contrast, as "something of an oddity", in which scooter owners were "more concerned with the machine — the mechanics, the practicalities — than the look." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Photoinjector**
Photoinjector:
A photoinjector is a type of source for intense electron beams which relies on the photoelectric effect. A laser pulse incident onto the cathode of a photoinjector drives electrons out of it, and into the accelerating field of the electron gun. In comparison with the widespread thermionic electron gun, photoinjectors produce electron beams of higher brightness, which means more particles packed into smaller volume of phase space (beam emittance). Photoinjectors serve as the main electron source for single-pass synchrotron light sources, such as free-electron lasers and for ultrafast electron diffraction setups. The first RF photoinjector was developed in 1985 at Los Alamos National Laboratory and used as the source for a free-electron-laser experiment. High-brightness electron beams produced by photoinjectors are used directly or indirectly to probe the molecular, atomic and nuclear structure of matter for fundamental research, as well as material characterization.
Photoinjector:
A photoinjector comprises a photocathode, an electron gun (AC or DC), power supplies, a driving laser system, a timing and synchronization system, and emittance compensation magnets. It can include a vacuum system and a cathode fabrication or transport system. It is usually followed by beam diagnostics and higher-energy accelerators.
Photoinjector:
The key component of a photoinjector is the photocathode, which is located inside the cavity of the electron gun (usually a 0.6-fractional cell for optimal distribution of the accelerating field). The extracted electron beam suffers from its own space-charge fields, which deteriorate the beam brightness. For that reason, photoelectron guns often have one or more full-size booster cells to increase the beam energy and reduce the space-charge effect. The gun's accelerating field is an RF (radio-frequency) wave provided by a klystron or other RF power source. For low-energy beams, such as the ones used in electron diffraction and microscopy, electrostatic (DC) acceleration is suitable.
Photoinjector:
The photoemission on the cathode is initiated by an incident pulse from the driving laser. Depending on the material of the photocathode, the laser wavelength can vary from 1700 nm (infrared) down to 100-200 nm (ultraviolet). Emission from the cavity wall is possible with laser wavelength of about 250 nm for copper walls or cathodes. Semiconductor cathodes are often sensitive to ambient conditions and might require a clean preparation chamber located behind the photoelectron gun. The optical system of the driving laser is often designed to control the pulse structure, and consequently, the distribution of electrons in the extracted bunch. For example, a fs-scale laser pulse with an elliptical transverse profile creates a thin "pancake" electron bunch, that evolves into a uniformly filled ellipsoid under its own space-charge fields. A more sophisticated laser pulse with a comb-like longitudinal profile generates a similarly shaped, comb electron beam. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**JWH-057**
JWH-057:
JWH-057, also known as deoxy-Δ8-THC-DMH, is a selective cannabinoid ligand, with a binding affinity of Ki = 2.9 ± 1.6 nM for the CB2 subtype, and Ki = 23 ± 7 nM for CB1. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Deep water cycle**
Deep water cycle:
The deep water cycle, or geologic water cycle, involves exchange of water with the mantle, with water carried down by subducting oceanic plates and returning through volcanic activity, distinct from the water cycle process that occurs above and on the surface of Earth. Some of the water makes it all the way to the lower mantle and may even reach the outer core. Mineral physics experiments show that hydrous minerals can carry water deep into the mantle in colder slabs and even "nominally anhydrous minerals" can store several oceans' worth of water.
Deep water cycle:
In deep water recycling, water entering the mantle carried down by subducting oceanic plates (a process known as regassing) is balanced by water released at mid-ocean ridges (degassing). This is a central concept in the understanding of the long-term exchange of water between the Earth's interior and the exosphere and of the transport of water bound in hydrous minerals.
Introduction:
In the conventional view of the water cycle (also known as the hydrologic cycle), water moves between reservoirs in the atmosphere and Earth's surface or near-surface (including the ocean, rivers and lakes, glaciers and polar ice caps, the biosphere and groundwater). However, in addition to the surface cycle, water also plays an important role in geological processes reaching down into the crust and mantle. Water content in magma determines how explosive a volcanic eruption is; hot water is the main conduit for economically important minerals to concentrate in hydrothermal mineral deposits; and water plays an important role in the formation and migration of petroleum.
Introduction:
Water is not just present as a separate phase in the ground. Seawater percolates into oceanic crust and hydrates igneous rocks such as olivine and pyroxene, transforming them into hydrous minerals such as serpentines, talc and brucite. In this form, water is carried down into the mantle. In the upper mantle, heat and pressure dehydrate these minerals, releasing much of the water to the overlying mantle wedge, triggering the melting of rock that rises to form volcanic arcs. However, some of the "nominally anhydrous minerals" that are stable deeper in the mantle can store small concentrations of water in the form of hydroxyl (OH−), and because they occupy large volumes of the Earth, they are capable of storing at least as much as the world's oceans. The conventional view of the ocean's origin is that it was filled by outgassing from the mantle in the early Archean and that the mantle has remained dehydrated ever since. However, subduction carries water down at a rate that would empty the ocean in 1–2 billion years. Despite this, changes in the global sea level over the past 3–4 billion years have only been a few hundred metres, much smaller than the average ocean depth of 4 kilometres. Thus, the fluxes of water into and out of the mantle are expected to be roughly balanced, and the water content of the mantle steady. Water carried into the mantle eventually returns to the surface in eruptions at mid-ocean ridges and hotspots. This circulation of water into the mantle and back is known as the deep water cycle or the geologic water cycle. Estimates of the amount of water in the mantle range from 1⁄4 to 4 times the water in the ocean. There are 1.37×10^18 m³ of water in the seas; therefore, this would suggest that there is between 3.4×10^17 and 5.5×10^18 m³ of water in the mantle. Constraints on water in the mantle come from mantle mineralogy, samples of rock from the mantle, and geophysical probes.
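The quoted range follows from straightforward arithmetic on the ocean volume given above; a minimal check:

```python
# The mantle water estimate of 1/4 to 4 oceans, converted to cubic metres.
ocean_volume_m3 = 1.37e18          # volume of water in the seas, from the text
low = 0.25 * ocean_volume_m3       # 1/4 of an ocean  -> ~3.4e17 m^3
high = 4.0 * ocean_volume_m3       # 4 oceans         -> ~5.5e18 m^3
print(f"{low:.2e} to {high:.2e} m^3")
```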
Storage capacity:
An upper bound on the amount of water in the mantle can be obtained by considering the amount of water that can be carried by its minerals (their storage capacity). This depends on temperature and pressure. There is a steep temperature gradient in the lithosphere where heat travels by conduction, but in the mantle the rock is stirred by convection and the temperature increases more slowly (see figure). Descending slabs have colder than average temperatures.
Storage capacity:
The mantle can be divided into the upper mantle (above 410 km depth), transition zone (between 410 km and 660 km), and the lower mantle (below 660 km). Much of the mantle consists of olivine and its high-pressure polymorphs. At the top of the transition zone, it undergoes a phase transition to wadsleyite, and at about 520 km depth, wadsleyite transforms into ringwoodite, which has the spinel structure. At the top of the lower mantle, ringwoodite decomposes into bridgmanite and ferropericlase. The most common mineral in the upper mantle is olivine. For a depth of 410 km, an early estimate of 0.13 percent of water by weight (wt%) was revised upwards to 0.4 wt% and then to 1 wt%. However, the carrying capacity decreases dramatically towards the top of the mantle. Another common mineral, pyroxene, also has an estimated capacity of 1 wt% near 410 km. In the transition zone, water is carried by wadsleyite and ringwoodite; in the relatively cold conditions of a descending slab, they can carry up to 3 wt%, while in the warmer temperatures of the surrounding mantle their storage capacity is about 0.5 wt%. The transition zone is also composed of at least 40% majorite, a high pressure phase of garnet; this has a capacity of only 0.1 wt% or less. The storage capacity of the lower mantle is a subject of controversy, with estimates ranging from the equivalent of 3 times to less than 3% of the ocean. Experiments have been limited to pressures found in the top 100 km of the mantle and are challenging to perform. Results may be biased upwards by hydrous mineral inclusions and downwards by a failure to maintain fluid saturation. At high pressures, water can interact with pure iron to get FeH and FeO. Models of the outer core predict that it could hold as much as 100 oceans of water in this form, and this reaction may have dried out the lower mantle in the early history of Earth.
Water from the mantle:
The carrying capacity of the mantle is only an upper bound, and there is no compelling reason to suppose that the mantle is saturated. Further constraints on the quantity and distribution of water in the mantle come from geochemical analyses of erupted basalts and xenoliths from the mantle.
Water from the mantle:
Basalts:
Basalts formed at mid-ocean ridges and hotspots originate in the mantle and are used to provide information on the composition of the mantle. Magma rising to the surface may undergo fractional crystallization in which components with higher melting points settle out first, and the resulting melts can have widely varying water contents; but when little separation has occurred, the water content is between about 0.07–0.6 wt%. (By comparison, basalts in back-arc basins around volcanic arcs have between 1 wt% and 2.9 wt% because of the water coming off the subducting plate.) Mid-ocean ridge basalts (MORBs) are commonly classified by the abundance of trace elements that are incompatible with the minerals they inhabit. They are divided into "normal" MORB or N-MORB, with relatively low abundances of these elements, and enriched E-MORB. The enrichment of water correlates well with that of these elements. In N-MORB, the water content of the source mantle is inferred to be 0.08–0.18 wt%, while in E-MORB it is 0.2–0.95 wt%. Another common classification, based on analyses of MORBs and ocean island basalts (OIBs) from hotspots, identifies five components. Focal zone (FOZO) basalt is considered to be closest to the original composition of the mantle. Two enriched end-members (EM-1 and EM-2) are thought to arise from recycling of ocean sediments and OIBs. HIMU stands for "high-μ", where μ is a ratio of uranium and lead isotopes (μ = 238U/204Pb). The fifth component is depleted MORB (DMM). Because the behavior of water is very similar to that of the element cesium, ratios of water to cesium are often used to estimate the concentration of water in regions that are sources for the components. Multiple studies put the water content of FOZO at around 0.075 wt%, and much of this water is likely "juvenile" water acquired during the accretion of Earth. DMM has only 60 ppm water. If these sources sample all the regions of the mantle, the total water depends on their proportion; including uncertainties, estimates range from 0.2 to 2.3 oceans.
Water from the mantle:
Diamond inclusions Mineral samples from the transition zone and lower mantle come from inclusions found in diamonds. Researchers have recently discovered diamond inclusions of ice-VII in the transition zone. Ice-VII is water in a high pressure state. The presence of diamonds that formed in the transition zone and contain ice-VII inclusions suggests that water is present in the transition zone and at the top of the lower mantle. Of the thirteen ice-VII instances found, eight have pressures around 8–12 GPa, tracing the formation of inclusions to 400–550 km. Two inclusions have pressures between 24 and 25 GPa, indicating the formation of inclusions at 610–800 km. The pressures of the ice-VII inclusions provide evidence that water must have been present at the time the diamonds formed in the transition zone in order to have become trapped as inclusions. Researchers also suggest that the range of pressures at which inclusions formed implies inclusions existed as fluids rather than solids.Another diamond was found with ringwoodite inclusions. Using techniques including infrared spectroscopy, Raman spectroscopy, and x-ray diffraction, scientists found that the water content of the ringwoodite was 1.4 wt% and inferred that the bulk water content of the mantle is about 1 wt%.
Geophysical evidence:
Seismic Sudden decreases in seismic velocity, together with anomalies in electrical conductivity, indicate that the transition zone is able to host hydrated ringwoodite. The USArray seismic experiment is a long-term project using seismometers to chart the mantle underlying the United States. Using data from this project, seismometer measurements show corresponding evidence of melt at the bottom of the transition zone. Melt in the transition zone can be visualized through seismic velocity measurements as sharp velocity decreases at the top of the lower mantle caused by the subduction of slabs through the transition zone. The measured decrease in seismic velocities correlates well with the predicted presence of 1 wt% H2O melt. Ultra-low velocity zones (ULVZs) have been discovered right above the core-mantle boundary (CMB). Experiments highlighting the presence of iron peroxide containing hydrogen (FeO2Hx) align with expectations for the ULVZs. Researchers believe that iron and water could react to form FeO2Hx in these ULVZs at the CMB. This reaction would be possible through the interaction of subducted water-bearing minerals with the extensive supply of iron in the Earth's outer core. Past research has suggested the presence of partial melting in ULVZs, but the formation of melt in the area surrounding the CMB remains contested.
Subduction:
As an oceanic plate descends into the upper mantle, its minerals tend to lose water. How much water is lost and when depends on the pressure, temperature and mineralogy. Water is carried by a variety of minerals that combine various proportions of magnesium oxide (MgO), silicon dioxide (SiO2), and water. At low pressures (below 5 GPa), these include antigorite, a form of serpentine, and clinochlore (both carrying 13 wt% water); talc (4.8 wt%) and some other minerals with a lower capacity. At moderate pressure (5–7 GPa) the minerals include phlogopite (4.8 wt%), the 10Å phase (a high pressure product of talc and water, 10–13 wt%) and lawsonite (11.5 wt%). At pressures above 7 GPa, there is topaz-OH (Al2SiO4(OH)2, 10 wt%), phase Egg (AlSiO3(OH), 11–18 wt%) and a collection of dense hydrous magnesium silicate (DHMS) or "alphabet" phases such as phase A (12 wt%), D (10 wt%) and E (11 wt%). The fate of the water depends on whether these phases can maintain an unbroken series as the slab descends. At a depth of about 180 km, where the pressure is about 6 gigapascals (GPa) and the temperature around 600 °C, there is a possible "choke point" where the stability regions just meet. Hotter slabs will lose all their water while cooler slabs pass the water on to the DHMS phases. In cooler slabs, some of the released water may also be stable as Ice VII. An imbalance in deep water recycling has been proposed as one mechanism that can affect global sea levels. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SageMath**
SageMath:
SageMath (previously Sage or SAGE, "System for Algebra and Geometry Experimentation") is a computer algebra system (CAS) with features covering many aspects of mathematics, including algebra, combinatorics, graph theory, numerical analysis, number theory, calculus and statistics.
SageMath:
The first version of SageMath was released on 24 February 2005 as free and open-source software under the terms of the GNU General Public License version 2, with the initial goals of creating an "open source alternative to Magma, Maple, Mathematica, and MATLAB". The originator and leader of the SageMath project, William Stein, was a mathematician at the University of Washington.
SageMath:
SageMath uses a syntax resembling Python's, supporting procedural, functional and object-oriented constructs.
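As a brief illustration of that syntax, the following hypothetical interactive session (written for this article, not taken from the SageMath documentation) mixes symbolic calculus, exact arithmetic and object-oriented calls in the way a typical Sage session does:

```python
# A minimal sketch of a SageMath session (run inside Sage, where names such as
# var, integrate, factor, matrix, QQ and primes are predefined, and ^ means power).
x = var('x')
print(integrate(sin(x)^2, x))          # symbolic integration
print(factor(2^64 + 1))                # exact integer factorisation
M = matrix(QQ, [[1, 2], [3, 4]])       # an object with methods (object-oriented style)
print(M.eigenvalues())
print([p for p in primes(50)])         # list comprehension over a prime generator
```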
Development:
Stein realized when designing Sage that there were many open-source mathematics software packages already written in different languages, namely C, C++, Common Lisp, Fortran and Python.
Development:
Rather than reinventing the wheel, Sage (which is written mostly in Python and Cython) integrates many specialized CAS software packages into a common interface, for which a user needs to know only Python. However, Sage contains hundreds of thousands of unique lines of code adding new functions and creating the interfaces among its components. Both students and professionals contribute to SageMath's development, which is supported by volunteer work and by grants. However, it was not until 2016 that the first full-time Sage developer was hired (funded by an EU grant). The same year, Stein described his disappointment with the lack of academic funding and credentials for software development, citing it as the reason for his decision to leave his tenured academic position to work full-time on the project in a newly founded company, SageMath, Inc.
Achievements:
2007: first prize in the scientific software division of Les Trophées du Libre, an international competition for free software.
2012: one of the projects selected for the Google Summer of Code.
2013: ACM/SIGSAM Jenks Prize.
Performance:
Both binaries and source code are available for SageMath from the download page. If SageMath is built from source code, many of the included libraries such as OpenBLAS, FLINT, GAP (computer algebra system), and NTL will be tuned and optimized for that computer, taking into account the number of processors, the size of their caches, whether there is hardware support for SSE instructions, etc.
Performance:
Cython can increase the speed of SageMath programs, as the Python code is converted into C.
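As a sketch of how this works in practice, the function below is a made-up example (not code from the SageMath sources): declaring C types in Cython lets the generated C avoid Python-object overhead in tight loops, and inside a Sage or Jupyter notebook such a cell can typically be compiled with the %%cython cell magic.

```python
# speedup.pyx, a minimal Cython sketch (hypothetical example)
# cython: language_level=3

def harmonic_sum(int n):
    """Sum 1/k for k = 1..n using C doubles and a C loop counter."""
    cdef double total = 0.0
    cdef int k
    for k in range(1, n + 1):
        total += 1.0 / k
    return total
```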
Licensing and availability:
SageMath is free software, distributed under the terms of the GNU General Public License version 3. Although Microsoft was sponsoring a native version of SageMath for the Windows operating system, prior to 2016 there were no plans for a native port, and users of Windows had to use virtualization technology such as VirtualBox to run SageMath. SageMath 8.0 (July 2017), with development funded by the OpenDreamKit project, successfully built on Cygwin, and a binary installer for 64-bit versions of Windows was made available. As of SageMath 10.0 (May 2023), it requires version 2 of the Windows Subsystem for Linux, which in turn requires Windows to run as a Hyper-V client.
Licensing and availability:
Linux distributions in which SageMath is available as a package are Fedora, Arch Linux, Debian, Ubuntu and NixOS. In Gentoo, it is available via layman in the "sage-on-gentoo" overlay. The package used by NixOS is available for use on other distributions, due to the distribution-agnostic nature of its package manager, Nix.
Gentoo prefix also provides Sage on other operating systems.
Software packages contained in SageMath:
The philosophy of SageMath is to use existing open-source libraries wherever they exist. Therefore, it uses many libraries from other projects. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CFP-10**
CFP-10:
CFP-10 within bacterial proteins (also known as ESAT-6-like protein esxB or secreted antigenic protein MTSA-10 or 10 kDa culture filtrate antigen CFP-10) is a protein that is encoded by the esxB gene.CFP-10 is a 10 kDa secreted antigen from Mycobacterium tuberculosis. It forms a 1:1 heterodimeric complex with ESAT-6. Both genes are expressed from the RD1 region of the bacterial genome and play a key role in the virulence of the infection.
Function:
10-kDa culture filtrate protein (CFP-10) is an antigen that contributes to the virulence of Mycobacterium tuberculosis. CFP-10 forms a tight 1:1 heterodimeric complex with the 6 kDa early secreted antigenic target (ESAT-6). In the mycobacterial cell, these two proteins are interdependent for stability. The ESAT-6/CFP-10 complex is secreted by the ESX-1 secretion system, also known as the RD1 region. Mycobacterium tuberculosis uses this ESX-1 secretion system to deliver virulence factors into host macrophage and monocyte white blood cells during infection.
Function:
In Mycobacterium tuberculosis, the core components of the ESX-1 secretion system include Rv3877 and two AAA ATPases, Rv3870 and Rv3871, a cytosolic protein. The ESAT-6/CFP-10 heterodimer complex is targeted for secretion by a C-terminal signal sequence on CFP-10 that is recognized by the cytosolic Rv3871 protein. Rv3871 then interacts with the CFP-10 C-terminus and escorts the ESAT-6/CFP-10 complex to Rv3870 and Rv3877, a multi-transmembrane protein which makes up the pore that spans the cytosolic membrane. Once ESAT-6/CFP-10 reaches the membrane of the host cell, the CFP-10 C-terminus attaches and binds to the cell's surface. The secretion of the ESAT-6/CFP-10 complex and its attachment to the host cell illustrate its contribution to the pathogenicity of Mycobacterium tuberculosis.
Structure:
The 10-kDa culture filtrate protein (CFP-10) and 6 kDa early secreted antigenic target (ESAT-6) complex is formed by two small proteins of roughly 100 amino acids each. ESAT-6/CFP-10 has a hydrophobic nature as well as a high content of α-helical structure. Resonance structure analysis of the complex reveals two similar helix-turn-helix hairpin structures formed by the individual proteins, which lie anti-parallel to each other and form a four-helix bundle. The long flexible arm projecting off the four-helix bundle, formed by the seven-amino-acid C-terminus of CFP-10, is essential for binding and attaching to the surface of host white blood cells such as macrophages and monocytes. If this C-terminus is cleaved off, the complex shows greatly reduced attachment ability. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Boatswain**
Boatswain:
A boatswain ( BOH-sən, formerly and dialectally also BOHT-swayn), bo's'n, bos'n, or bosun, also known as a deck boss, or a qualified member of the deck department, is the most senior rate of the deck department and is responsible for the components of a ship's hull. The boatswain supervises the other members of the ship's deck department, and typically is not a watchstander, except on vessels with small crews. Additional duties vary depending upon ship, crew, and circumstances.
History:
The word boatswain has been in the English language since approximately 1450. It is derived from late Old English batswegen, from bat (boat) concatenated with Old Norse sveinn (swain), meaning a young man, apprentice, a follower, retainer or servant. Directly translated to modern Norwegian it would be båtsvenn, while the actual crew title in Norwegian is båtsmann ("boats-man"). While the phonetic spelling bosun is reported as having been observed since 1868, this latter spelling was used in Shakespeare's The Tempest written in 1611, and as bos'n in later editions.
History:
Royal Navy The rank of boatswain is the oldest rank in the Royal Navy, and its origins can be traced back to the year 1040. In that year, when five English ports began furnishing warships to King Edward the Confessor in exchange for certain privileges, they also furnished crews whose officers were the master, boatswain, carpenter, and cook. Later these officers were warranted by the British Admiralty. They maintained and sailed the ships and were the standing officers of the navy. The boatswain was the officer responsible for the care of the rigging, cordage, anchors, sails, boats, flags and other stores.The Royal Navy's last official boatswain, Commander E.W. Andrew OBE, retired in 1990. However, most RN vessels still have a Chief Boatswain's Mate (or "Buffer"), who is the most senior rating in the Seaman Specialist department.
Naval cadets:
The rank of cadet boatswain, in some schools, is the second highest rank in the combined cadet force naval section that a cadet can attain, below the rank of coxswain and above the rank of leading hand. It is equivalent to the rank of colour sergeant in the army and the royal marines cadets; it is sometimes an appointment for a senior petty officer to assist a coxswain.
Job description:
The boatswain works in a ship's deck department as the foreman of the unlicensed (crew members without a mate's licence) deck crew. Sometimes, the boatswain is also a third or fourth mate. A boatswain must be highly skilled in all matters of marlinespike seamanship required for working on deck of a seagoing vessel. The boatswain is distinguished from other able seamen by the supervisory roles: planning, scheduling, and assigning work. As deck crew foreman, the boatswain plans the day's work and assigns tasks to the deck crew. As work is completed, the boatswain checks on completed work for compliance with approved operating procedures. Outside the supervisory role, the boatswain regularly inspects the vessel and performs a variety of routine, skilled, and semi-skilled duties to maintain all areas of the ship not maintained by the engine department. These duties can include cleaning, painting, and maintaining the vessel's hull, superstructure and deck equipment as well as executing a formal preventive maintenance program.
Job description:
A boatswain's skills may include cargo rigging, winch operations, deck maintenance, working aloft, and other duties required during deck operations. The boatswain is well versed in the care and handling of lines, and has knowledge of knots, hitches, bends, whipping, and splices as needed to perform tasks such as mooring a vessel. The boatswain typically operates the ship's windlasses when letting go and heaving up anchors. Moreover, a boatswain may be called upon to lead firefighting efforts or other emergency procedures encountered on board. Effective boatswains are able to integrate their seafarer skills into supervising and communicating with members of deck crew with often diverse backgrounds. Originally, on board sailing ships, the boatswain was in charge of a ship's anchors, cordage, colours, deck crew and the ship's boats. The boatswain would also be in charge of the rigging while the ship was in dock. The boatswain's technical tasks were modernised with the advent of steam engines and subsequent mechanisation. A boatswain also is responsible for doing routine pipes using what is called a boatswain's call. There are specific sounds which can be made with the pipe to indicate various events, such as emergency situations or notifications of meal time.
Notable boatswains:
A number of boatswains and naval boatswains mates have achieved fame. Reuben James and William Wiley are famous for their heroism in the Barbary Wars and are namesakes of the ships USS Reuben James and USS Wiley. Medal of Honor recipients Francis P. Hammerberg and George Robert Cholister were U.S. Navy boatswain's mates, as was Navy Cross recipient Stephen Bass. Victoria Cross recipients John Sheppard, John Sullivan, Henry Curtis, and John Harrison were Royal Navy boatswain's mates.
Notable boatswains:
During World War II Bosun John Crisp RN is credited in "The Colditz Story" by escapee Pat Reid as providing, whilst a prisoner of war at Oflag IV-C, Colditz Castle, the expertise and enthusiasm to manufacture torn and then woven "bedsheet ropes", tested for appropriate strength, using his extensive maritime experience.
Notable boatswains:
There are also a handful of boatswains and boatswain's mates in literature. The boatswain in William Shakespeare's The Tempest is a central character in the opening scene, which takes place aboard a ship at sea, and appears again briefly in the final scene. Typhoon by Joseph Conrad has a nameless boatswain who tells Captain MacWhirr of a "lump" of men going overboard during the peak of the storm. Also, the character Bill Bobstay in Gilbert and Sullivan's musical comedy H.M.S. Pinafore is alternatively referred to as a "bos'un" and a "boatswain's mate". Another boatswain from literature is Smee from Peter Pan. Lord Byron had a Newfoundland dog named Boatswain. Byron wrote the famous poem "Epitaph to a Dog" and had a monument made for him at Newstead Abbey. The 1907 naval gothic novel The Boats of the "Glen Carrig" by William Hope Hodgson features the character of the ship's “bo'sun” as an important member of the crew and a personal friend to the narrator. Billy Bones was a boatswain in the fictional Starz TV show Black Sails.
Scouting:
Quartermaster is the highest rank in the Sea Scouts, BSA, an older youth (13–21) co-ed programme. The youth can also elect a youth leader, giving that youth the title "boatswain". In the Netherlands, a boatswain (Bootsman) is the patrol leader of a Sea Scout patrol (Bak); in Flanders, it is the assistant patrol leader of a Sea Scout patrol (Kwartier).
Notes:
This article incorporates text from public-domain sources, including websites. For specific sources of text, see notes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**IgG4-related skin disease**
IgG4-related skin disease:
IgG4-related skin disease is the recommended name for skin manifestations in IgG4-related disease (IgG4-RD). Multiple different skin manifestations have been described.
Classification:
Although a clear understanding of the various skin lesions in IgG4-related disease is a work in progress, skin lesions have been classified into subtypes based on documented cases:
- Angiolymphoid hyperplasia with eosinophilia (or lesions that mimic it) and cutaneous pseudolymphoma
- Cutaneous plasmacytosis
- Eyelid swelling (as part of Mikulicz's disease)
- Psoriasis-like eruptions
- Unspecified maculopapular or erythematous eruptions
- Hypergammaglobulinemic purpura and urticarial vasculitis
- Impaired blood supply to fingers or toes, leading to Raynaud's phenomenon or gangrene
In addition, Wells syndrome has also been reported in a case of IgG4-related disease. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Formula 1 (video game)**
Formula 1 (video game):
Formula 1 is a racing video game developed by Bizarre Creations and published by Psygnosis for PlayStation and Microsoft Windows. It is the first installment in Sony's Formula One series.
Formula 1 (video game):
Formula 1 is based on the 1995 Formula One World Championship. It is distinct from its sequels because it was made at the end of the season, meaning that it features driver substitutes. The game also allows two players to compete against each other either head-to-head or with other computer cars via the PlayStation Link Cable. Both players may then compete over a 17-race Championship season, or in a single race of the player's choice.
Gameplay:
Formula 1 contains 17 tracks, 13 teams and 35 drivers. If a player completes a season having won every race and leading the Constructors' Championship, a special hidden circuit is unlocked. The track is a fictional lower-level city circuit called Frameout City, which when viewed on the Race Preview page is in the shape of a Formula One car. The only way to keep the track available is to save just after having completed the season, then load that data the next time the console is turned on.
Gameplay:
Later tracks have 24 competitors on them instead of 26 because Simtek pulled out of the actual championship after the Monaco Grand Prix. It is still possible to drive a Simtek on any course after Monaco, creating a field of 25 drivers. If two players are playing the game via the link cable setup (where players would connect two PlayStation consoles together with two copies of the game), it is possible to play as both Simtek cars, thus creating a field of 26 drivers on any course after Monaco. Every starting grid (in dry races) is the same as the real 1995 Grand Prix, timing included.
Development:
The track models in Formula 1 were modelled from surveyors' track data. The designers started with wire-frame models of the track data, then exported these from their Silicon Graphics workstations to a custom Windows 95 track editor. The track editor was used to reformat the tracks so that they could be used in-game, before exporting them back to the SGI workstations where scenery and other details were added in. To create the in-car sound, a Digital Audio Tape was strapped to a driver. Car models were created based on a combination of information provided by FOCA and real life photographs of the cars. The result was that all car models were unique rather than just a single model with different coloured "skins". Though Psygnosis was the game's publisher, development team Bizarre Creations opted to create their own 3D engine for the game rather than utilizing the one from the Psygnosis hits Wipeout and Destruction Derby. To reduce demand on the PlayStation's processor without significantly reducing the game's visuals, the developers programmed a level of detail method so that when a car reaches a certain distance away, it switches from its normal high-detail model (composed of 440 to 450 polygons, depending on the car) to a low-detail model composed of only 90 to 100 polygons. The game's original release date was pushed back to allow the developers time to make last-minute tweaks, fix bugs, and make the complex graphical changes needed to remove cigarette and alcohol advertising, which is illegal in video games in some parts of the United States. Probe Software started work on a port of the game for the Sega Saturn in 1997. Psygnosis's Formula One license had expired by this time, presenting a potential obstacle to this conversion being released. It was cancelled by June 1997.
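The level-of-detail switch described above amounts to picking between two meshes by distance; a minimal sketch of that selection logic (with hypothetical asset names and a hypothetical threshold, not the game's actual code) might look like this:

```python
# Minimal level-of-detail (LOD) selection sketch: distant cars use the cheaper mesh.
HIGH_POLY = "car_high_450_polys"   # hypothetical asset names, for illustration only
LOW_POLY = "car_low_100_polys"
LOD_SWITCH_DISTANCE = 80.0         # hypothetical distance threshold, in metres

def pick_car_model(distance_to_camera: float) -> str:
    """Return the mesh to draw for a car at the given distance from the camera."""
    return HIGH_POLY if distance_to_camera < LOD_SWITCH_DISTANCE else LOW_POLY

for d in (10.0, 79.9, 80.0, 250.0):
    print(d, "->", pick_car_model(d))
```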
Commentary:
This game saw the introduction of in-game commentary, which was done in the English version of the game by Murray Walker, the German version by Jochen Mass, the French version by Philippe Alliot, the Spanish version by Carlos Riera and the Italian version by Luigi Chiappini.
Soundtrack:
The in-game music – credited to "Overdrive" – was composed by Mike Clarke, who worked in-house at Psygnosis at the time, and Stuart Ellis, a session guitarist from Liverpool and owner of Curly Music, an independent music retailer. The soundtrack also features the songs "Juice" by Steve Vai (from Alien Love Secrets), as well as "Summer Song" and "Back to Shalla-Bal" by Joe Satriani (from The Extremist and Flying in a Blue Dream, respectively).
Reception:
The game was a best-seller in the UK. Worldwide sales across all computer and console versions of Formula 1 surpassed 1.7 million units by August 1997. In August 1998, the game's PlayStation version received a "Platinum" sales award from the Verband der Unterhaltungssoftware Deutschland (VUD), indicating sales of at least 200,000 units across Germany, Austria and Switzerland. The PlayStation version was reasonably well-received, with critics generally commenting that the realistic handling and real-world Formula One elements make it an ideal game for the hardcore racing fan. Some reviewers added that the game was too complicated and difficult to appeal to those looking for arcade-style racing or multiplayer gaming, though most praised the selection of modes as opening up the game to both novices and experts. Critics were more divided about the graphics. Todd Mowatt wrote in Electronic Gaming Monthly that "the fluidity of the animations were not that realistic in terms of the way a real race car would handle", GamePro's Air Hendrix praised the detailed cars and sense of speed but complained of break-up problems, and Next Generation hailed the graphics as a major leap over the first wave of PlayStation games. GameSpot called the game "a high-octane masterpiece", while Next Generation summarised: "With its exquisite graphics, wide range of challenges, and startling amount of depth, Formula 1 is the game that changes everything". PSM gave the game 9/10, praising the AI, before concluding: "Psygnosis' finest game to date, it relegates every other racing game to the back of the grid. This is the game that will sell the PlayStation to Grand Prix fans and unconverted gamers alike. An envelope-pushing killer-application. F1 is one of the essential purchases of 1996". Reviewing the PC version in GameSpot, Tim Soete praised the graphics and audio commentary but found the lack of depth and realism in the driving made the game become dull after a short while. Review aggregation website GameRankings provides an average rating for the PlayStation version of 87.75% based on 4 reviews. The PC version received an average rating of 56.40% based on 10 reviews. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bacon in the fabliaux**
Bacon in the fabliaux:
In fabliaux, bacon is one of the most commonly consumed foodstuffs, alongside capons and geese, cakes, bread, and wine.
Du provost a l'aumuche:
In some tales, bacon, and similarly pork and lard, are associated with corrupt clergymen, as symbols of gluttony, greed, and lust. For example, in Du provost a l'aumuche a provost hides some bacon that he has stolen from a feast prepared for his master under his hat (the "aumuche" of the title, a large fur hat) but is caught and beaten after the bacon fat, melted by a nearby fire, starts to drip down his head.
Du provost a l'aumuche:
This parallels Galbert of Bruges's tale of the Murder of Charles the Good. Du provost a l'aumuche is 132 lines long, and tells the tale of a rich knight who, having left his provost, a "low fellow and a rascal" named Gervais, in charge when he went on a pilgrimage to Santiago de Compostela, returns home and sends word ahead to the provost to have a feast prepared. The provost arrives at the feast early, and spying a piece of salt pork in a shared dish, steals it and hides it in his aumuche while the person with whom he is sharing the dish has his back turned to talk to someone else. Placing the aumuche on his head, all is well until a fire behind him is stoked by a servant, at which point the fat melts and begins dripping down him. The theft is discovered when a server, inconvenienced by the hat, removes it from the provost's head, whereupon the pork falls out, and the provost tries to escape from the feast, but is caught, violently beaten, and then thrown out. Several items are parallel to Galbert's tale, which likewise features a provost named Bertulf with an aumuche, such as the name "Erembaut Brache-huche" of the fabliau provost's father, the high status and high regard of their masters, the masters both going on pilgrimages, and the provosts being motivated by greed for food (fabliau) or for power (Murder). A final parallel is the fates of both provosts, Gervais being violently punished as aforementioned, and Bertulf stripped, dragged around, pelted with mud and stones, and then hanged naked at Ypres, where he was assaulted by a mob with hooks and clubs. Gervais is pelted with hot coals by cooks, assaulted by the crowd of servants at the feast, and dragged outside and thrown in a ditch with a dead dog, which parallels the Mediaeval practice of hanging with dogs.
De Haimet et de Barat et Travers:
In De Haimet et de Barat et Travers, two thieves, the eponymous brothers Haimet and Barat, and a peasant farcically steal and steal back, repeatedly, a piece of bacon. The peasant Travers is warned by his wife that she believes misfortune is coming their way, and hides a piece of bacon that he knows the brother thieves (who in the prologue to the tale have had a thieving competition, Haimet stealing the eggs out of a bird's nest in a tree and Barat stealing his trousers from him whilst he is doing it) have their eyes upon. Nonetheless, the brothers manage to steal the bacon, but while they are cooking it on a fire in the forest, Travers steals it back, scaring them away by pretending to be a ghost. The tale then proceeds to recount the bacon being stolen back and forth between the two parties. In this fabliau, the word "bacon" denotes pork products in a general fashion, just as colloquially in Modern French "cochon" can denote both the animal per se and "a bit of pig", its preserved meat.
Priests instead of stolen bacon:
In two tales of circulating bodies, Du Segretain Moine and Du Prestre qu'on porte, dead priests end up substituted for stolen bacon.In the first, the body of a priest killed by a watchful husband is hidden in the latrine of a monastery, dutifully returned by the prior, hidden again by the husband this time in a farmer's manure pile in a sack used by thieves for stolen bacon, retrieved by the thieves thinking that it is their bacon, returned by them to where they stole the bacon from, and finally strapped to a horse and sent to the monastery by the bacon's original owner, a farmer named Thibault.
Priests instead of stolen bacon:
The horse, which stumbles on its way to the monastery, finally gets the blame for the priest's death.In the second, the body circulates from the doorstep of a neighbour, to a horse, to the house of a peasant, to a sack, and into the hands of thieves who have stolen some bacon.
Thieves place it where the bacon was, from where it is removed by a tavernkeeper, put in the linen chest of a bishop, and thence placed in the bishop's bed whilst he is asleep by a prior.
In fright when he awakes to find it there, the bishop strikes the body, and assuming that he killed the priest quietly finally buries the body.
Figurative bacon:
In Le Meunier et les ii Clers "bacon" figuratively means a young woman in a sexual sense, as one of the characters encourages his companion to take his "share" of a young woman that he has just himself had sex with.
This is a sense for the word that Geoffrey Chaucer, who would have been familiar with the usage (not least because Le Meunier et les ii Clers is a clear precursor of his "Reeve's Tale"), also uses in his "Wife of Bath's Tale". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Aerial dance**
Aerial dance:
Aerial modern dance is a subgenre of modern dance first recognized in the United States in the 1970s. The choreography incorporates an apparatus that is often attached to the ceiling, allowing performers to explore space in three dimensions. The ability to incorporate vertical, as well as horizontal movement paths, allows for innovations in choreography and movement.
Overview:
There are two types of aerial dance. In vertical dance a dancer is suspended in a harness from a rope or cable and explores the difference in gravity, weightlessness and varied movement possibilities offered by the suspended state. In the second type a dancer or acrobat intertwines the use of the floor or a wall with their aerial apparatus. The first utilizes the strength and expression of dance with an altered state to communicate contemporary ideas. In the second, the dancer uses dance as a way to indicate that their work is less trick-based than circus arts, and in some cases hopes that disassociating with the circus makes their work appear more contemporary and artistic.
Overview:
One of the first choreographers to utilize what we now think of as aerial dance was Trisha Brown. She called her dances (1968–1971) "equipment pieces". They are not “dancey” pieces, but by placing pedestrians on the side wall, Brown illustrated the choreography of everyday movement. She was notably the first choreographer to pull dancers up into the air. She choreographed multiple pieces off the ground, some involving projection and multimedia, using air and wall surfaces in novel ways.
Overview:
In the late '90s an Argentinian aerial dance troupe named De La Guarda gained notoriety in London for their show combining performance art with aerial dance. The troupe is no longer touring, but some previous members have started a new company called Cuerda Producciones that continues to create aerial dance theater pieces.
Overview:
Wanda Moretti of Italy is creating a vertical dance network aimed at collecting knowledge for artists and professionals in the field. Moretti says, “From its beginning 30 years ago, vertical dance evolved from the multiple practices and influences of its initial instigators. It was born from the desire to explore space, environment and become a place where everything was possible.” Aerial modern pieces, whether solo or ensemble, often involve partnering. The apparatus used has its own motion, which changes the way a dancer must move in response. The introduction of a new element changes the dancer’s balance, center, and orientation in space. Aerial modern dancers gather annually for workshops in Boulder, Colorado, County Donegal in Ireland, Brittany, in France, and Italy.
Overview:
Another early influence on aerial modern dance, Terry Sendgraff, is credited with inventing the "motivity" trapeze. Sendgraff actively performed, choreographed and taught in the San Francisco Bay Area from the early 1970s, until announcing her retirement in 2005 at the age of 70, when she handed over her aerial dance business to Cherie Carson. The motivity trapeze came about as a result of an exploration on a low-hung circus trapeze. The ropes twisted together, causing the apparatus to spin. By formalizing this, hooking both ropes to a single point of attachment, Ms. Sendgraff used the apparatus to spin, twist, as well as fly in a straight line and in a circle.
Workshops:
In Boulder, Frequent Flyers Productions produces the Aerial Dance Festival, which has been held every year since its inception in July 1999. Here workshops, performances, and discussions bring together dancers, gymnasts, and circus artists; another aerial dance gathering is held in Brighton, England every summer.
Workshops:
In Italy, an emerging aerial dance company brought the contemporary dance discipline to a vertical stage. The performance of the company is distinguished from others by the details of the choreography and the harmony of the movement, typical elements of classic dance. Aerial dance is an art form that is incredibly demanding and requires a high degree of strength, power, flexibility, courage, and grace to practice.
Site dance:
Other examples of aerial modern dance are the site-specific works of Joanna Haigood of the Zaccho Dance Theatre, Amelia Rudolph of "Project Bandaloop," and Sally Jacques' Blue Lapis Light. Haigood’s work is based on careful research of the history, architecture and societal impact of found spaces, and the translation of these memories into the movements performed in that space. Project Bandaloop combines rock-climbing with dance in performances that scale and/or descend canyons, rock walls, and tall buildings across the world. Video of their outdoor work is sometimes integrated into indoor performances, projected onto screens or trampolines behind the dancers on stage. Blue Lapis Light uses multiple apparatuses, such as aerial silks, harnesses, and bungees to create dances on bridges, office buildings, hotels, and other outdoor spaces. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Landau's problems**
Landau's problems:
At the 1912 International Congress of Mathematicians, Edmund Landau listed four basic problems about prime numbers. These problems were characterised in his speech as "unattackable at the present state of mathematics" and are now known as Landau's problems. They are as follows:
- Goldbach's conjecture: Can every even integer greater than 2 be written as the sum of two primes?
- Twin prime conjecture: Are there infinitely many primes p such that p + 2 is prime?
- Legendre's conjecture: Does there always exist at least one prime between consecutive perfect squares?
- Are there infinitely many primes p such that p − 1 is a perfect square? In other words: Are there infinitely many primes of the form n² + 1?
As of August 2023, all four problems are unresolved.
Progress toward solutions:
Goldbach's conjecture Goldbach's weak conjecture, that every odd number greater than 5 can be expressed as the sum of three primes, is a consequence of Goldbach's conjecture. Ivan Vinogradov proved it for large enough n (Vinogradov's theorem) in 1937, and Harald Helfgott extended this to a full proof of Goldbach's weak conjecture in 2013. Chen's theorem, another weakening of Goldbach's conjecture, proves that for all sufficiently large n, 2n = p + q where p is prime and q is either prime or semiprime. Bordignon, Johnston, and Starichkova, correcting and improving on Yamada, proved an explicit version of Chen's theorem: every even number greater than e^(e^34.5) ≈ 4.2×10^417776432441823 is the sum of a prime and a product of at most two primes. Bordignon & Starichkova reduce this to e^(e^15.85) ≈ 3.6×10^3321634 assuming the Generalized Riemann hypothesis for Dirichlet L-functions. Johnston and Starichkova give a version working for all n ≥ 4 at the cost of using a number which is the product of at most 369 primes rather than a prime or semiprime; under GRH they improve 369 to 33. Montgomery and Vaughan showed that the exceptional set of even numbers not expressible as the sum of two primes has density zero, although the set is not proven to be finite. The best current bounds on the exceptional set E(x) are E(x) < x^0.72 (for large enough x) due to Pintz, and E(x) ≪ x^0.5 log³x under RH, due to Goldston. Linnik proved that large enough even numbers could be expressed as the sum of two primes and some (ineffective) constant K of powers of 2. Following many advances (see Pintz for an overview), Pintz and Ruzsa improved this to K = 8.
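Although the conjecture itself remains open, it is easy to check empirically for small cases; the short sketch below (an illustrative script written for this article, not part of any of the cited proofs) verifies the strong Goldbach conjecture for every even number up to a chosen bound:

```python
# Brute-force check of Goldbach's conjecture for small even numbers.
from sympy import isprime  # assumes the sympy package is installed

def goldbach_pair(n):
    """Return (p, q) with p + q == n and both prime, or None if no such pair exists."""
    for p in range(2, n // 2 + 1):
        if isprime(p) and isprime(n - p):
            return p, n - p
    return None

assert all(goldbach_pair(n) is not None for n in range(4, 10_000, 2))
print("Goldbach holds for all even numbers up to 10,000, e.g. 100 =", goldbach_pair(100))
```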
Progress toward solutions:
Twin prime conjecture Yitang Zhang showed that there are infinitely many prime pairs with gap bounded by 70 million, and this result has been improved to gaps of length 246 by a collaborative effort of the Polymath Project. Under the generalized Elliott–Halberstam conjecture this was improved to 6, extending earlier work by Maynard and Goldston, Pintz & Yıldırım.Chen showed that there are infinitely many primes p (later called Chen primes) such that p + 2 is either a prime or a semiprime.
Progress toward solutions:
Legendre's conjecture It suffices to check that each prime gap starting at p is smaller than 2√p. A table of maximal prime gaps shows that the conjecture holds up to 2^64 ≈ 1.8×10^19. A counterexample near that size would require a prime gap a hundred million times the size of the average gap.
Järviniemi, improving on Heath-Brown and Matomäki, shows that there are at most 100 +ε exceptional primes followed by gaps larger than 2p ; in particular, 0.57 +ε.
A result due to Ingham shows that there is a prime between n³ and (n+1)³ for every large enough n.
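The statement is likewise easy to test numerically for small n; the following sketch (illustrative only, and far below the 2^64 bound mentioned above) confirms that a prime lies strictly between n² and (n+1)² for the first few thousand values of n:

```python
# Empirical check of Legendre's conjecture for small n.
from sympy import nextprime  # assumes the sympy package is installed

def legendre_holds(n):
    """True if some prime p satisfies n**2 < p < (n + 1)**2."""
    return nextprime(n * n) < (n + 1) ** 2

assert all(legendre_holds(n) for n in range(1, 5000))
print("A prime exists between n^2 and (n+1)^2 for every n from 1 to 4999.")
```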
Progress toward solutions:
Near-square primes Landau's fourth problem asked whether there are infinitely many primes which are of the form p = n² + 1 for integer n. (The list of known primes of this form is A002496.) The existence of infinitely many such primes would follow as a consequence of other number-theoretic conjectures such as the Bunyakovsky conjecture and Bateman–Horn conjecture. As of 2023, this problem is open.
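For reference, the small members of that sequence are easy to generate; the sketch below (illustrative only) lists the primes of the form n² + 1 with n up to 30, matching the opening terms of A002496:

```python
# Generate small primes of the form n^2 + 1 (the opening terms of OEIS A002496).
from sympy import isprime  # assumes the sympy package is installed

near_square_primes = [n * n + 1 for n in range(1, 31) if isprime(n * n + 1)]
print(near_square_primes)   # starts 2, 5, 17, 37, 101, 197, 257, 401, ...
```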
Progress toward solutions:
One example of near-square primes is the Fermat primes. Henryk Iwaniec showed that there are infinitely many numbers of the form n² + 1 with at most two prime factors. Ankeny and Kubilius proved that, assuming the extended Riemann hypothesis for L-functions on Hecke characters, there are infinitely many primes of the form p = x² + y² with y = O(log p). Landau's conjecture is the stronger statement with y = 1. The best unconditional result is due to Harman & Lewis and gives y = O(p^0.119). Merikoski, improving on previous works, showed that there are infinitely many numbers of the form n² + 1 with greatest prime factor at least n^1.279. Replacing the exponent with 2 would yield Landau's conjecture.
Progress toward solutions:
The Friedlander–Iwaniec theorem shows that infinitely many primes are of the form x² + y⁴. Baier & Zhao prove that there are infinitely many primes of the form p = an² + 1 with a < p^(5/9+ε); the exponent can be improved to 1/2 + ε under the Generalized Riemann Hypothesis for L-functions and to ε under a certain Elliott–Halberstam type hypothesis.
The Brun sieve establishes an upper bound on the density of primes having the form p = n² + 1: there are O(√x / log x) such primes up to x. Hence almost all numbers of the form n² + 1 are composite. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hard candy**
Hard candy:
A hard candy (American English), or boiled sweet (British English), is a sugar candy prepared from one or more sugar-based syrups that is heated to a temperature of 160 °C (320 °F) to make candy. Among the many hard candy varieties are stick candy such as the candy cane, lollipops, rock, aniseed twists, and bêtises de Cambrai. "Boiled" is a misnomer, as sucrose (a disaccharide) melts fully at approximately 186 °C. Further heating breaks it into glucose and fructose molecules before it can vaporize.Most hard candy is nearly 100% sugar by weight, with a tiny amount of other ingredients for color or flavor, and negligible water content in the final product. Recipes for hard candy may use syrups of sucrose, glucose, fructose or other sugars. Sugar-free versions have also been created.
Creation:
Recipes for hard candy use a sugar syrup, such as sucrose, glucose or fructose. This is heated to a particular temperature, at which point the candy maker removes it from the heat source and may add citric acid, food dye, and some flavouring, such as a plant extract, essential oil, or flavourant. The syrup concoction, which is now very thick, can be poured into a mold or tray to cool, or a cooling table in case of industrial mass production. When the syrup is cool enough to handle, it can be folded, rolled, or molded into the shapes desired. After the boiled syrup cools, it is called hard candy, since it becomes stiff and brittle as it approaches room temperature.
Chemistry:
Chemically, sugar candies are broadly divided into two groups: crystalline candies and amorphous candies. Crystalline candies are not as hard as crystals of the mineral variety, but derive their name and their texture from their microscopically organized sugar structure, formed through a process of crystallization, which makes them easy to bite or cut into. Amorphous candies have a disorganized crystalline structure. Hard candies are non-crystalline, amorphous candies containing about 98% (or more) solid sugar.
Medicinal use:
Hard candies are historically associated with cough drops. The extended flavor release of lozenge-type candy, which mirrors the properties of modern cough drops, had long been appreciated. Many apothecaries used sugar candy to make their prescriptions more palatable to their customers.
They are also carried by people with hypoglycemia to quickly raise their low blood sugar level which, when untreated, can sometimes lead to fainting and other physical complications, and are used as part of diabetic management.
Sugar-free:
Hard candies and throat lozenges prepared without sugar employ isomalt as a sugar substitute, and are sweetened further by the addition of an artificial sweetener, such as aspartame, sucralose, saccharin, or a sugar alcohol, such as xylitol. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**OCC-1**
OCC-1:
OCC-1 (overexpressed in colon carcinoma-1) is a protein, which in humans is encoded by the gene C12orf75. The gene is approximately 40,882 bp long and encodes 63 amino acids. OCC-1 is ubiquitously expressed throughout the human body. OCC-1 has shown to be overexpressed in various colon carcinomas.
A novel splice variant of this gene has also been detected in various human cancer types; in addition to encoding a smaller protein (51 amino acids), the OCC-1 gene produces a non-protein-coding RNA splice variant, a lncRNA called the OCC-D variant.
Gene:
Location and size: C12orf75 is found along the plus strand of chromosome 12 (12q23.3). The gene is 40,882 bp long, with the genomic sequence beginning at 105,330,636 bp and ending at 105,371,518 bp. C12orf75 contains 6 exons and is flanked by KCCAT198 (renal clear cell carcinoma-associated transcript 198).
mRNA:
C12orf75 encodes several mRNA transcripts, the longest of which is 1,386 bp. The mature mRNA of this splice variant contains six exons.
Protein:
Structure. Primary structure: The OCC-1 protein is 63 amino acids long and has a molecular weight of 6.4 kDa. OCC-1 contains an “opioid growth factor receptor repeat” (OGFr) motif from residue 8 to 27. OCC-1 is an acidic protein with an isoelectric point of 6.6. Secondary structure: The secondary structure of OCC-1 is a combination of multiple coils, a few α-helices, and a few β-sheets. The Phyre2 program predicts 52% α-helix, 6% β-sheet, and 70% disorder. The predicted β-sheet region from residues 31 to 38 coincides with the α-helix and β-sheet regions predicted by other programs. OCC-1 is a soluble protein; according to the SOSUI program, it has no transmembrane domains. Tertiary structure: Folding has been predicted with I-TASSER.
Protein:
Post translational modifications: OCC-1 is predicted to undergo the post-translational modifications of O-glycosylation, phosphorylation, and myristoylation.
Subcellular location: The k-NN tool places OCC-1 in the nucleus of the cell with 65.2% certainty, 13% in the mitochondria, 8.7% in the vesicles of the secretory system, 4.3% cytoplasm, and 4.3% vacuole.
Homology:
Paralogs: OCC-1 has no known paralogs.
Homology:
Orthologs: OCC-1 orthologs have been found in mammals, reptiles, amphibians, birds, and ray-finned fish (Actinopterygii). The gene is not found in plants, protists, fungi, archaea, or bacteria. The most distant ortholog is in Oryzias latipes, the Japanese rice fish, whose lineage diverged from the human one approximately 436.5 million years ago. Phylogeny: A phylogenetic tree of OCC-1 in humans and its orthologs from the various taxa that contain it is consistent with the predicted evolutionary history of animals on Earth.
Expression:
Expression level: OCC-1 has moderate to high expression throughout the body, therefore OCC-1 is ubiquitously expressed in humans; notably high expression in the kidney, skeletal muscle and pancreas and low expression in the heart. OCC-1 has shown to be overexpressed in various colon carcinomas.
With regard to homologous expression, in situ hybridization data revealed that OCC-1 is expressed in the primary visual cortex of the macaque in an activity-dependent manner. Disease state expression: Profiles from NCBI UniGene show the expression of OCC-1 in adrenal tumors, chondrosarcoma, gastrointestinal tumors, kidney tumors, leukemia, liver tumors, prostate cancer, soft tissue/muscle tissue tumors, and uterine tumors.
Regulation of expression:
Promoter: The promoter of OCC-1 is GXP_4407929 and 601 bp in length. The promoter can be found on the plus strand and begins at 105234790 bp and ends at 105235390 bp.
Interacting proteins:
OCC-1 is shown to interact with HRG4, ELAVL1, c-REL, and IRS4. ELAVL1 functions to stabilize mRNA for gene expression. c-REL is involved in lymphoid and cell growth/survival, with a specific presence in T cell malignancies and cancer. HRG4 plays a role in signal transduction and trafficking in sensory neurons and is located primarily in the retina, which relates to the discovery of expression in the brain of the macaque through in situ hybridization data. IRS4 functions as an interface for growth factor receptors with tyrosine kinase activity, such as insulin receptors. IRS4 is also involved in the IGF1R mitogenic signaling pathway.
Clinical significance:
OCC-1 has shown to be overexpressed in multiple colon carcinomas. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pixel Visual Core**
Pixel Visual Core:
The Pixel Visual Core (PVC) is a series of ARM-based system in package (SiP) image processors designed by Google. The PVC is a fully programmable image, vision and AI multi-core domain-specific architecture (DSA) for mobile devices and in future for IoT.
It first appeared in the Google Pixel 2 and 2 XL which were introduced on October 19, 2017. It has also appeared in the Google Pixel 3 and 3 XL. Starting with the Pixel 4, this chip was replaced with the Pixel Neural Core.
History:
Google previously used Qualcomm Snapdragon's CPU, GPU, IPU, and DSP to handle its image processing for their Google Nexus and Google Pixel devices. With the increasing importance of computational photography techniques, Google developed the Pixel Visual Core (PVC). Google claims the PVC uses less power than using CPU and GPU while still being fully programmable, unlike their tensor processing unit (TPU) application-specific integrated circuit (ASIC).
History:
Indeed, classical mobile devices equip an image signal processor (ISP) that is a fixed functionality image processing pipeline. In contrast to this, the PVC has a flexible programmable functionality, not limited only to image processing.
The PVC in the Google Pixel 2 and 2 XL is labeled SR3HX X726C502. The PVC in the Google Pixel 3 and 3 XL is labeled SR3HX X739F030. Thanks to the PVC, the Pixel 2 and Pixel 3 obtained a mobile DxOMark of 98 and 101.
The latter one was the top-ranked single-lens mobile DxOMark score, tied with the iPhone XR.
Pixel Visual Core software:
A typical image-processing program of the PVC is written in Halide. Currently, it supports just a subset of Halide programming language without floating point operations and with limited memory access patterns.
Halide is a domain-specific language that lets the user decouple the algorithm and the scheduling of its execution.
In this way, the developer can write a program that is optimized for the target hardware architecture.
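The following plain-Python sketch (not Halide code, and not taken from the PVC toolchain) illustrates the idea of that decoupling: the algorithm says what each output pixel is, while the schedule decides the order and blocking in which pixels are actually computed.

```python
# Conceptual sketch of Halide-style algorithm/schedule decoupling (plain Python).
# The 3x3 box blur is a stand-in algorithm; the tiled loop nest is a stand-in schedule.

def blur_at(img, x, y):
    """Algorithm: the value of output pixel (x, y), a 3x3 box blur with edge clamping."""
    h, w = len(img), len(img[0])
    window = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return sum(window) / 9.0

def run_tiled(img, tile=4):
    """Schedule: the order in which pixels are computed, here tile by tile."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            for y in range(ty, min(ty + tile, h)):
                for x in range(tx, min(tx + tile, w)):
                    out[y][x] = blur_at(img, x, y)
    return out

image = [[float((x + y) % 7) for x in range(8)] for y in range(8)]
print(run_tiled(image)[0][:4])
```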
Pixel Visual Core ISA:
The PVC has two types of instruction set architecture (ISA), a virtual and a physical one. First, a high-level language program is compiled into a virtual ISA (vISA), inspired by the RISC-V ISA, which abstracts completely from the target hardware generation. Then, the vISA program is compiled into the so-called physical ISA (pISA), which is a VLIW ISA. This compilation step takes into account the target hardware parameters (e.g. the size of the PE array, the STP size, etc.) and specifies memory movements explicitly. The decoupling of vISA and pISA lets the former be cross-architecture and generation-independent, while pISA can be compiled offline or through JIT compilation.
Pixel Visual Core architecture:
The Pixel Visual Core is designed to be a scalable multi-core energy-efficient architecture, with designs ranging from 2 to 16 cores (in even numbers). The core of a PVC is the image processing unit (IPU), a programmable unit tailored for image processing. The Pixel Visual Core architecture was also designed either to be its own chip, like the SR3HX, or an IP block for a System on a chip (SoC).
Pixel Visual Core architecture:
Image Processing Unit (IPU) The IPU core has a stencil processor (STP), a line buffer pool (LBP) and a NoC.
Pixel Visual Core architecture:
The STP mainly provides a 2-D SIMD array of processing elements (PEs) able to perform stencil computations over a small neighborhood of pixels. Though it seems similar to systolic array and wavefront computations, the STP has explicit software-controlled data movement. Each PE features 2x 16-bit arithmetic logic units (ALUs), 1x 16-bit multiplier–accumulator unit (MAC), 10x 16-bit registers, and 10x 1-bit predicate registers.
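As a rough illustration of what such a stencil computation looks like in software (a NumPy sketch written for this article, not PVC or Halide code), every output pixel of a tile is produced from a small fixed neighborhood of input pixels, which is exactly the access pattern the PE array accelerates:

```python
# A 3x3 stencil (box filter) over a 16x16 tile, sketched with NumPy.
import numpy as np

def box3x3(tile):
    """Each output pixel is the mean of its 3x3 input neighborhood (edges replicated)."""
    padded = np.pad(tile, 1, mode="edge")
    out = np.zeros_like(tile, dtype=np.float32)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + tile.shape[0],
                          1 + dx : 1 + dx + tile.shape[1]]
    return out / 9.0

tile = np.arange(256, dtype=np.float32).reshape(16, 16)   # a 16x16 tile, like one PE array
print(box3x3(tile)[0, :4])
```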
Pixel Visual Core architecture:
Line Buffer Pool (LBP) Considering that one of the most energy costly operation is DRAM access, each STP has temporary buffers to increase data locality, namely LBP. The used LBP is a 2-D FIFO that accommodates different sizes of reading and writing. The LBP uses single-producer multi-consumer behavioral model. Each LBP can have eight logical LB memories and one for DMA input-output operations.
Pixel Visual Core architecture:
Due to the high complexity of the memory system, the PVC designers describe the LBP controller as one of the most challenging components.
The NoC is a ring network-on-chip used to communicate only with neighboring cores, both for energy savings and to preserve the pipelined computational pattern.
Stencil Processor (STP) The STP has a 2-D array of PEs: for example, a 16x16 array of full PEs and four lanes of simplified PEs called "halo".
The STP has a scalar processor, called scalar lane (SCL), that adds control instructions with a small instruction memory.
The last component of an STP is a load store unit called sheet generator (SHG), where the sheet is the PVC memory access unit.
SR3HX design summary:
The SR3HX PVC features a 64-bit ARMv8a ARM Cortex-A53 CPU, 8x image processing unit (IPU) cores, 512 MB LPDDR4, MIPI, PCIe.
The IPU cores each have 512 arithmetic logic units (ALUs) consisting of 256 processing elements (PEs) arranged as a 16 x 16 2-dimensional array. Those cores execute a custom VLIW ISA.
SR3HX design summary:
There are two 16-bit ALUs per processing element and they can operate in three distinct ways: independent, joined, and fused. The SR3HX PVC is manufactured as a SiP by TSMC using their 28HPM HKMG process. It was designed over 4 years in partnership with Intel (codename: Monette Hill). Google claims the SR3HX PVC is 7-16x more energy-efficient than the Snapdragon 835, that it can perform 3 trillion operations per second, and that HDR+ can run 5x faster and at less than one-tenth the energy compared with the Snapdragon 835. It supports Halide for image processing and TensorFlow for machine learning. The current chip runs at 426 MHz and a single IPU is able to perform more than 1 TeraOPS. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**FREAK**
FREAK:
FREAK ("Factoring RSA Export Keys") is a security exploit of a cryptographic weakness in the SSL/TLS protocols introduced decades earlier for compliance with U.S. cryptography export regulations. These involved limiting exportable software to use only public key pairs with RSA moduli of 512 bits or less (so-called RSA_EXPORT keys), with the intention of allowing them to be broken easily by the National Security Agency (NSA), but not by other organizations with lesser computing resources. However, by the early 2010s, increases in computing power meant that they could be broken by anyone with access to relatively modest computing resources using the well-known Number Field Sieve algorithm, using as little as $100 of cloud computing services. Combined with the ability of a man-in-the-middle attack to manipulate the initial cipher suite negotiation between the endpoints in the connection and the fact that the Finished hash only depended on the master secret, this meant that a man-in-the-middle attack with only a modest amount of computation could break the security of any website that allowed the use of 512-bit export-grade keys. While the exploit was only discovered in 2015, its underlying vulnerabilities had been present for many years, dating back to the 1990s.
Vulnerability:
The flaw was found by researchers from IMDEA Software Institute, INRIA and Microsoft Research. The FREAK attack in OpenSSL has the identifier CVE-2015-0204. Vulnerable software and devices included Apple's Safari web browser, the default browser in Google's Android operating system, Microsoft's Internet Explorer, and OpenSSL. Microsoft has also stated that its Schannel implementation of transport-layer encryption is vulnerable to a version of the FREAK attack in all versions of Microsoft Windows. The CVE ID for Microsoft's vulnerability in Schannel is CVE-2015-1637. The CVE ID for Apple's vulnerability in Secure Transport is CVE-2015-1067. Sites affected by the vulnerability included the US federal government websites fbi.gov, whitehouse.gov and nsa.gov, with around 36% of HTTPS-using websites tested by one security group shown as being vulnerable to the exploit. Based on geolocation analysis using IP2Location LITE, 35% of vulnerable servers were located in the US. Press reports of the exploit have described its effects as "potentially catastrophic" and an "unintended consequence" of US government efforts to control the spread of cryptographic technology. As of March 2015, vendors were in the process of releasing new software that would fix the flaw. On March 9, 2015, Apple released security updates for both the iOS 8 and OS X operating systems which fixed this flaw. On March 10, 2015, Microsoft released a patch which fixed this vulnerability for all supported versions of Windows (Server 2003, Vista and later). Google Chrome 41 and Opera 28 have also mitigated this flaw. Mozilla Firefox is not vulnerable to this flaw. The research paper explaining this flaw was published at the 36th IEEE Symposium on Security and Privacy and was awarded the Distinguished Paper award. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Atomic ratio**
Atomic ratio:
The atomic ratio is a measure of the ratio of atoms of one kind (i) to another kind (j). A closely related concept is the atomic percent (or at.%), which gives the percentage of one kind of atom relative to the total number of atoms. The molecular equivalents of these concepts are the molar fraction, or molar percent.
Atoms:
Mathematically, the atomic percent is atomic percent (i) = 100% × N_i / N_tot, where N_i is the number of atoms of interest and N_tot is the total number of atoms, while the atomic ratio is atomic ratio (i:j) = atomic percent (i) : atomic percent (j).
For example, the atomic percent of hydrogen in water (H2O) is at.% H = 2/3 × 100 ≈ 66.67%, while the atomic ratio of hydrogen to oxygen is A_H:O = 2:1.
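These definitions amount to simple counting; a minimal sketch using the water example above (function and variable names are illustrative only):

```python
from math import gcd

def atomic_percent(counts: dict, element: str) -> float:
    """Atomic percent of `element`, given atom counts per formula unit."""
    return 100.0 * counts[element] / sum(counts.values())

water = {"H": 2, "O": 1}
print(round(atomic_percent(water, "H"), 2))    # 66.67  (at.% H in H2O)

g = gcd(water["H"], water["O"])
print(f"{water['H'] // g}:{water['O'] // g}")  # 2:1    (atomic ratio H:O)
```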
Isotopes:
Another application is in radiochemistry, where this may refer to isotopic ratios or isotopic abundances. Mathematically, the isotopic abundance is isotopic abundance (i) = N_i / N_tot, where N_i is the number of atoms of the isotope of interest and N_tot is the total number of atoms, while the atomic ratio is isotopic ratio (i:j) = isotopic percent (i) : isotopic percent (j).
For example, the isotopic ratio of deuterium (D) to hydrogen (H) in heavy water is roughly D:H = 1:7000 (corresponding to an isotopic abundance of about 0.014%).
Doping in laser physics:
In laser physics however, the atomic ratio may refer to the doping ratio or the doping fraction.
For example, theoretically, a 100% doping ratio of Yb : Y3Al5O12 is pure Yb3Al5O12.
The doping fraction equals N_atoms of dopant / N_atoms of solution which can be substituted with the dopant. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
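As a rough numerical illustration (a sketch with assumed figures, not from the source): since the Yb dopant substitutes onto the three Y sites of each Y3Al5O12 formula unit, 0.3 Yb per formula unit corresponds to a 10% doping fraction.

```python
def doping_fraction(n_dopant: float, n_substitutable_sites: float) -> float:
    """Dopant atoms divided by the host atoms that the dopant can replace."""
    return n_dopant / n_substitutable_sites

# Illustrative: 0.3 Yb per Y3Al5O12 formula unit, 3 substitutable Y sites.
print(doping_fraction(0.3, 3.0))   # 0.1 -> 10% Yb:YAG
```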
**Bear-in-the-hole Static Rook**
Bear-in-the-hole Static Rook:
In shogi, Bear-in-the-hole Static Rook or Anaguma Static Rook (居飛車穴熊 ibisha anaguma) is a Static Rook opening that utilizes a Bear-in-the-hole castle. It is typically played against Ranging Rook opponents. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mathematical structure**
Mathematical structure:
In mathematics, a structure is a set endowed with some additional features on the set (e.g. an operation, relation, metric, or topology). Often, the additional features are attached or related to the set, so as to provide it with some additional meaning or significance.
A partial list of possible structures includes measures, algebraic structures (groups, fields, etc.), topologies, metric structures (geometries), orders, events, equivalence relations, differential structures, and categories.
Mathematical structure:
Sometimes, a set is endowed with more than one feature simultaneously, which allows mathematicians to study the interaction between the different structures more richly. For example, an ordering imposes a rigid form, shape, or topology on the set, and if a set has both a topology feature and a group feature, such that these two features are related in a certain way, then the structure becomes a topological group. Mappings between sets which preserve structures (i.e., structures in the domain are mapped to equivalent structures in the codomain) are of special interest in many fields of mathematics. Examples are homomorphisms, which preserve algebraic structures; homeomorphisms, which preserve topological structures; and diffeomorphisms, which preserve differential structures.
History:
In 1939, the French group with the pseudonym Nicolas Bourbaki saw structures as the root of mathematics. They first mentioned them in their "Fascicule" of Theory of Sets and expanded it into Chapter IV of the 1957 edition. They identified three mother structures: algebraic, topological, and order.
Example: the real numbers:
The set of real numbers has several standard structures: An order: each number is either less than or greater than every other number.
Algebraic structure: there are operations of multiplication and addition that make it into a field.
A measure: intervals of the real line have a specific length, which can be extended to the Lebesgue measure on many of its subsets.
A metric: there is a notion of distance between points.
A geometry: it is equipped with a metric and is flat.
A topology: there is a notion of open sets. There are interfaces among these: its order and, independently, its metric structure induce its topology.
Its order and algebraic structure make it into an ordered field.
Its algebraic structure and topology make it into a Lie group, a type of topological group. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Granule cell**
Granule cell:
The name granule cell has been used for a number of different types of neurons whose only common feature is that they all have very small cell bodies. Granule cells are found within the granular layer of the cerebellum, the dentate gyrus of the hippocampus, the superficial layer of the dorsal cochlear nucleus, the olfactory bulb, and the cerebral cortex.
Granule cell:
Cerebellar granule cells account for the majority of neurons in the human brain. These granule cells receive excitatory input from mossy fibers originating from pontine nuclei. Cerebellar granule cells project up through the Purkinje layer into the molecular layer where they branch out into parallel fibers that spread through Purkinje cell dendritic arbors. These parallel fibers form thousands of excitatory granule-cell–Purkinje-cell synapses onto the intermediate and distal dendrites of Purkinje cells using glutamate as a neurotransmitter.
Granule cell:
Layer 4 granule cells of the cerebral cortex receive inputs from the thalamus and send projections to supragranular layers 2–3, but also to infragranular layers of the cerebral cortex.
Structure:
Granule cells in different brain regions are both functionally and anatomically diverse: the only thing they have in common is smallness. For instance, olfactory bulb granule cells are GABAergic and axonless, while granule cells in the dentate gyrus have glutamatergic projection axons. These two populations of granule cells are also the only major neuronal populations that undergo adult neurogenesis, while cerebellar and cortical granule cells do not.
Structure:
Granule cells (save for those of the olfactory bulb) have a structure typical of a neuron consisting of dendrites, a soma (cell body) and an axon.
Dendrites: Each granule cell has 3 – 4 stubby dendrites which end in a claw. Each of the dendrites are only about 15 μm in length.
Soma: Granule cells all have a small soma diameter of approximately 10 μm.
Axon: Each granule cell sends a single axon onto the Purkinje cell dendritic tree. The axon has an extremely narrow diameter: ½ micrometre.
Synapse: 100,000–300,000 granule cell axons synapse onto a single Purkinje cell. The existence of gap junctions between granule cells allows multiple neurons to be coupled to one another, allowing multiple cells to act in synchrony and enabling signalling functions necessary for granule cell development to occur.
Structure:
Cerebellar granule cell The granule cells, produced by the rhombic lip, are found in the granule cell layer of the cerebellar cortex. They are small and numerous. They are characterized by a very small soma and several short dendrites which terminate with claw-shaped endings. In the transmission electron microscope, these cells are characterized by a darkly stained nucleus surrounded by a thin rim of cytoplasm. The axon ascends into the molecular layer where it splits to form parallel fibers.
Structure:
Dentate gyrus granule cell The principal cell type of the dentate gyrus is the granule cell. The dentate gyrus granule cell has an elliptical cell body with a width of approximately 10 μm and a height of 18 μm. The granule cell has a characteristic cone-shaped tree of spiny apical dendrites. The dendrite branches project throughout the entire molecular layer, and the furthest tips of the dendritic tree end just at the hippocampal fissure or at the ventricular surface. The granule cells are tightly packed in the granular cell layer of the dentate gyrus.
Structure:
Dorsal cochlear nucleus granule cell The granule cells in the dorsal cochlear nucleus are small neurons with two or three short dendrites that give rise to a few branches with expansions at the terminals. The dendrites are short with claw-like endings that form glomeruli to receive mossy fibers, similar to cerebellar granule cells. Its axon projects to the molecular layer of the dorsal cochlear nucleus where it forms parallel fibers, also similar to cerebellar granule cells.
Structure:
The dorsal cochlear granule cells are small excitatory interneurons which are developmentally related and thus resemble the cerebellar granule cell.
Structure:
Olfactory bulb granule cell The main intrinsic granule cell in the vertebrate olfactory bulb lacks an axon (as does the accessory neuron). Each cell gives rise to short central dendrites and a single long apical dendrite that expands into the granule cell layer and enters the mitral cell body layer. The dendrite branches terminate within the outer plexiform layer among the dendrites in the olfactory tract.
Structure:
In the mammalian olfactory bulb, granule cells can process both synaptic input and output due to the presence of large spines.
Function:
Neural pathways and circuits of the cerebellum Cerebellar granule cells receive excitatory input from 3 or 4 mossy fibers originating from pontine nuclei. Mossy fibers make an excitatory connection onto granule cells which cause the granule cell to fire an action potential.
Function:
The axon of a cerebellar granule cell splits to form a parallel fiber which innervates Purkinje cells. The vast majority of granule cell axonal synapses are found on the parallel fibers. The parallel fibers are sent up through the Purkinje layer into the molecular layer where they branch out and spread through Purkinje cell dendritic arbors. These parallel fibers form thousands of excitatory granule-cell–Purkinje-cell synapses onto the dendrites of Purkinje cells.
Function:
This connection is excitatory as glutamate is released.
Function:
The parallel fibers and ascending axon synapses from the same granule cell fire in synchrony, which results in excitatory signals. In the cerebellar cortex there are a variety of inhibitory neurons (interneurons). The only excitatory neurons present in the cerebellar cortex are granule cells. Plasticity of the synapse between a parallel fiber and a Purkinje cell is believed to be important for motor learning. The function of cerebellar circuits is entirely dependent on processes carried out by the granular layer. Therefore, the function of granule cells determines the cerebellar function as a whole.
Function:
Mossy fiber input on cerebellar granule cells Granule cell dendrites also synapse with distinctive unmyelinated axons which Santiago Ramón y Cajal called mossy fibers. Mossy fibers and Golgi cells both make synaptic connections with granule cells; together these cells form the glomeruli. Granule cells are subject to feed-forward inhibition: granule cells excite Purkinje cells but also excite GABAergic interneurons that inhibit Purkinje cells.
Function:
Granule cells are also subject to feedback inhibition: Golgi cells receive excitatory stimuli from granule cells and in turn send back inhibitory signals to the granule cell. Mossy fiber input codes are conserved during synaptic transmission between granule cells, suggesting that innervation is specific to the input that is received. Granule cells do not just relay signals from mossy fibers; rather, they perform various, intricate transformations which are required in the spatiotemporal domain. Each granule cell receives input from two different mossy fibers; the input thus comes from two different places, as opposed to the granule cell receiving multiple inputs from the same source.
Function:
The differences in mossy fibers that send signals to the granule cells directly affect the type of information that granule cells translate to Purkinje cells. The reliability of this translation will depend on the reliability of synaptic activity in granule cells and on the nature of the stimulus being received. The signal a granule cell receives from a mossy fiber depends on the function of the mossy fiber itself. Therefore, granule cells are able to integrate information from the different mossy fibers and generate new patterns of activity.
Function:
Climbing fiber input on cerebellar granule cells Different patterns of mossy fiber input will produce unique patterns of activity in granule cells that can be modified by a teaching signal conveyed by the climbing fiber input. David Marr and James Albus suggested that the cerebellum operates as an adaptive filter, altering motor behaviour based on the nature of the sensory input.
Function:
Since multiple (~200,000) granule cells synapse onto a single Purkinje cell, the effects of each parallel fiber can be altered in response to a “teacher signal” from the climbing fiber input.
Function:
Specific functions of different granule cells Cerebellum granule cells: David Marr suggested that the granule cells encode combinations of mossy fiber inputs. In order for the granule cell to respond, it needs to receive active inputs from multiple mossy fibers. The combination of multiple inputs results in the cerebellum being able to make more precise distinctions between input patterns than a single mossy fiber would allow. The cerebellar granule cells also play a role in orchestrating the tonic conductances which control sleep, in conjunction with the ambient levels of GABA found in the brain.
Function:
Dentate granule cells: Loss of dentate gyrus neurons from the hippocampus results in spatial memory deficits. Therefore, dentate granule cells are thought to function in the formation of spatial memories and of episodic memories.
Immature and mature dentate granule cells have distinct roles in memory function. Adult-born granule cells are thought to be involved in pattern separation whereas old granule cells contribute to rapid pattern completion.
Function:
Dorsal cochlear granule cells: Pyramidal cells from the primary auditory cortex project directly on to the cochlear nucleus. This is important in the acoustic startle reflex, in which the pyramidal cells modulate the secondary orientation reflex and the granule cell input is responsible for appropriate orientation. This is because the signals received by the granule cells contain information about the head position. Granule cells in the dorsal cochlear nucleus play a role in the perception and response to sounds in our environment.
Function:
Olfactory bulb granule cells: Inhibition generated by granule cells, the most common GABAergic cell type in the olfactory bulb, plays a critical role in shaping the output of the olfactory bulb. There are two types of excitatory inputs received by GABAergic granule cells: those activated by an AMPA receptor and those activated by an NMDA receptor. This allows the granule cells to regulate the processing of the sensory input in the olfactory bulb. The olfactory bulb transmits smell information from the nose to the brain, and is thus necessary for a proper sense of smell. Granule cells in the olfactory bulb have also been found to be important in forming memories linked with scents.
Function:
Critical factors for function Calcium: Calcium dynamics are essential for several functions of granule cells, such as changing membrane potential, synaptic plasticity, apoptosis, and regulation of gene transcription. The nature of the calcium signals that control the presynaptic and postsynaptic function of olfactory bulb granule cell spines is mostly unknown.
Function:
Nitric oxide: Granule neurons have high levels of the neuronal isoform of nitric oxide synthase. This enzyme is dependent on the presence of calcium and is responsible for the production of nitric oxide (NO). This neurotransmitter is a negative regulator of granule cell precursor proliferation which promotes the differentiation of different granule cells. NO regulates interactions between granule cells and glia and is essential for protecting the granule cells from damage. NO is also responsible for neuroplasticity and motor learning.
Role in disease:
Altered morphology of dentate granule cells TrkB is responsible for the maintenance of normal synaptic connectivity of the dentate granule cells. TrkB also regulates the specific morphology (biology) of the granule cells and is thus said to be important in regulating neuronal development, neuronal plasticity, learning, and the development of epilepsy. The TrkB regulation of granule cells is important in preventing memory deficits and limbic epilepsy. This is due to the fact that dentate granule cells play a critical role in the function of the entorhinal-hippocampal circuitry in health and disease. Dentate granule cells are situated to regulate the flow of information into the hippocampus, a structure required for normal learning and memory.
Role in disease:
Decreased granule cell neurogenesis Both epilepsy and depression show a disrupted production of adult-born hippocampal granule cells. Epilepsy is associated with increased production (but aberrant integration) of new cells early in the disease and decreased production late in the disease. Aberrant integration of adult-generated cells during the development of epilepsy may impair the ability of the dentate gyrus to prevent excess excitatory activity from reaching hippocampal pyramidal cells, thereby promoting seizures. Long-lasting epileptic seizures stimulate dentate granule cell neurogenesis. These newly born dentate granule cells may form aberrant connections that result in the hippocampal network plasticity associated with epileptogenesis.
Role in disease:
Shorter granule cell dendrites Patients with Alzheimer's have shorter granule cell dendrites. Furthermore, the dendrites are less branched and have fewer spines than those in patients without Alzheimer's. However, granule cell dendrites are not an essential component of senile plaques, and these plaques have no direct effect on granule cells in the dentate gyrus. The specific neurofibrillary changes of dentate granule cells occur in patients with Alzheimer's, the Lewy body variant, and progressive supranuclear palsy. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DEC PRISM**
DEC PRISM:
PRISM (Parallel Reduced Instruction Set Machine) was a 32-bit RISC instruction set architecture (ISA) developed by Digital Equipment Corporation (DEC). It was the outcome of a number of DEC research projects from the 1982–1985 time-frame, and the project was subject to continually changing requirements and planned uses that delayed its introduction. This process eventually settled on using the design for a new line of Unix workstations. The arithmetic logic unit (ALU) of the microPRISM version had completed design in April 1988 and samples were fabricated, but the design of other components like the floating point unit (FPU) and memory management unit (MMU) was still not complete in the summer when DEC management decided to cancel the project in favor of MIPS-based systems. An operating system codenamed MICA was developed for the PRISM architecture, which would have served as a replacement for both VAX/VMS and ULTRIX on PRISM. PRISM's cancellation had significant effects within DEC. Many of the team members left the company over the next year, notably Dave Cutler, who moved to Microsoft and led the development of Windows NT. The MIPS-based workstations were moderately successful among DEC's existing Ultrix users but had little success competing against companies like Sun Microsystems. Meanwhile, DEC's cash-cow VAX line grew increasingly less competitive as new RISC designs outperformed even the top-of-the-line VAX 9000. As the company explored the future of the VAX, it concluded that a PRISM-like processor with a few additional changes could address all of these markets. Starting where PRISM left off, the DEC Alpha program started in 1989.
History:
Background Introduced in 1977, the VAX was a runaway success for DEC, cementing its place as the world's #2 computer vendor behind IBM. The VAX was noted for its rich instruction set architecture (ISA), which was implemented in complex microcode. The VMS operating system was layered on top of this ISA, which drove it to have certain requirements for interrupt handling and the memory model used for memory paging. By the early 1980s, VAX systems had become "the computing hub of many technology-driven companies, sending spokes of RS-232 cables out to a rim of VT-100 terminals that kept the science and engineering departments rolling."This happy situation was upset by the relentless improvement of semiconductor manufacturing as encoded by Moore's Law; by the early 1980s there were a number of capable 32-bit single-chip microprocessors with performance similar to early VAX machines yet able to fit into a desktop pizza box form factor. Companies like Sun Microsystems introduced Motorola 68000 series-based Unix workstations that could replace a huge multi-user VAX machine with one that provided even more performance but was inexpensive enough to be purchased for every user that required one. While DEC's own microprocessor teams were introducing a series of VAX implementations at lower price-points, the price-performance ratio of their systems continued to be eroded. By the later half of the 1980s, DEC found itself being locked out of the technical market.
History:
RISC During the 1970s, IBM had been carrying out studies of the performance of their computer systems and found, to their surprise, that 80% of the computer's time was spent performing only five operations. The hundreds of other instructions in their ISAs, implemented using microcode, went almost entirely unused. The presence of the microcode introduced a delay when the instructions were decoded, so even when one called one of those five instructions directly, it ran slower than it could if there was no microcode. This led to the IBM 801 design, the first modern RISC processor.Around the same time, in 1979, Dave Patterson was sent on a sabbatical from University of California, Berkeley to help DEC's west-coast team improve the VAX microcode. Patterson was struck by the complexity of the coding process and concluded it was untenable. He first wrote a paper on ways to improve microcoding, but later changed his mind and decided microcode itself was the problem. He soon started the Berkeley RISC project. The emergence of RISC sparked off a long-running debate within the computer industry about its merits; when Patterson first outlined his arguments for the concept in 1980, a dismissive dissenting opinion was published by DEC.By the mid-1980s practically every company with a processor design arm began exploring the RISC approach. In spite of any official disinterest, DEC was no exception. In the period from 1982 to 1985, no fewer than four attempts were made to create a RISC chip at different DEC divisions. Titan from DEC's Western Research Laboratory (WRL) in Palo Alto, California was a high-performance ECL based design that started in 1982, intended to run Unix. SAFE (Streamlined Architecture for Fast Execution) was a 64-bit design that started the same year, designed by Alan Kotok (of Spacewar! fame) and Dave Orbits and intended to run VMS. HR-32 (Hudson, RISC, 32-bit) started in 1984 by Rich Witek and Dan Dobberpuhl at the Hudson, MA fab, intended to be used as a co-processor in VAX machine. The same year Dave Cutler started the CASCADE project at DECwest in Bellevue, Washington.
History:
PRISM Eventually, Cutler was asked to define a single RISC project in 1985, selecting Rich Witek as the chief architect. In August 1985 the first draft of a high-level design was delivered, and work began on the detailed design. The PRISM specification was developed over a period of many months by a five-person team: Dave Cutler, Dave Orbits, Rich Witek, Dileep Bhandarkar, and Wayne Cardoza. Through this early period, there were constant changes in the design as debates within the company argued over whether it should be 32- or 64-bit, aimed at a commercial or technical workload, and so forth.These constant changes meant the final ISA specification was not complete until September 1986. At the time, the decision was made to produce two versions of the basic concept, DECwest worked on a "high-end" ECL implementation known as Crystal, while the Semiconductor Advanced Development team worked on microPRISM, a CMOS version. This work was 98% done 1985–86 and was heavily supported by simulations by Pete Benoit on a large VAXcluster.Through this era there was still considerable scepticism on the part of DEC engineering as a whole about whether RISC was really faster, or simply faster on the trivial five-line programs being used to demonstrate its performance. Based on the Crystal design, in 1986 it was compared to the then-fastest machine in development, the VAX 8800. The conclusion was clear: for any given amount of investment, the RISC designs would outperform a VAX by 2-to-1.In the middle of 1987, the decision was made that both designs be 64-bit, although this lasted only a few weeks. In October 1987, Sun introduced the Sun-4. Powered by a 16 MHz SPARC, a commercial version of Patterson's RISC design, it ran four times as fast as their previous top-end Sun-3 using a 20 MHz Motorola 68020. With this release, DEC once again changed the target for PRISM, aiming it solely at the workstation space. This resulted in the microPRISM being respecified as a 32-bit system while the Crystal project was canceled. This introduced more delays, putting the project far behind schedule.By early 1988 the system was still not complete; the CPU design was nearly complete, but the FPU and MMU, both based on the contemporary Rigel chipset for the VAX, were still being designed. The team decided to stop work on those parts of the design and focus entirely on the CPU. Design was completed in March 1988 and taped out by April.
History:
Cancellation Throughout the PRISM period, DEC was involved in a major debate over the future direction of the company. As newer RISC-based workstations were introduced, the performance benefit of the VAX was constantly eroded, and the price/performance ratio completely undermined. Different groups within the company debated how to best respond. Some advocated moving the VAX into the high-end, abandoning the low-end to the workstation vendors like Sun. This led to the VAX 9000 program, which was referred to internally as the "IBM killer". Others suggested moving into the workstation market using PRISM or a commodity processor. Still others suggested re-implementing the VAX on a RISC processor.Frustrated with the growing number of losses to cheaper faster competitive machines, independently, a small skunkworks group in Palo Alto, outside of Central Engineering, focused on workstations and UNIX/Ultrix, entertained the idea of using an off-the-shelf RISC processor to build a new family of workstations. The group carried out due diligence, eventually choosing the MIPS R2000. This group acquired a development machine and prototyped a port of Ultrix to the system. From the initial meetings with MIPS to a prototype machine took only 90 days. Full production of a DEC version could begin as early as January 1989, whereas it would be at least another year before a PRISM based machine would be ready.When the matter was raised at DEC headquarters the company was split on which approach was better. Bob Supnik was asked to consider the issue for an upcoming project review. He concluded that while the PRISM system appeared to be faster, the MIPS approach would be less expensive and much earlier to market. At the acrimonious review meeting by the company's Executive Committee in July 1988, the company decided to cancel Prism, and continue with the MIPS workstations and high-end VAX products. The workstation emerged as the DECstation 3100.By this time samples of the microPRISM had been returned and were found to be mostly working. They also proved capable of running at speeds of 50 to 80 MHz, compared to the R2000's 16 to 20. This would have offered a significant performance improvement over the MIPS systems.
History:
Legacy By the time of the July 1988 meeting, the company had swung almost entirely into the position that the RISC approach was a workstation play. But PRISM's performance was similar to that of the latest VAX machines and the RISC concept had considerable room for growth. As the meeting broke up, Ken Olsen asked Supnik to investigate ways that Digital could keep the performance of VMS systems competitive with RISC-based Unix systems.A group of engineers formed a team, variously referred to as the "RISCy VAX" or "Extended VAX" (EVAX) task force, to explore this issue. By late summer, the group had explored three concepts, a subset of the VAX ISA with a RISC-like core, a translated VAX that ran native VAX code and translated it on-the-fly to RISC code and stored in a cache, and the ultrapipelined VAX, a much higher-performance CISC implementation. All of these approaches had issues that meant they would not be competitive with a simple RISC machine.The group next considered systems that combined both an existing VAX single-chip solution as well as a RISC chip for performance needs. These studies suggested that the system would inevitably be hamstrung by the lower-performance part and would offer no compelling advantage.It was at this point that Nancy Kronenberg pointed out that people ran VMS, not VAX, and that VMS only had a few hardware dependencies based on its modelling of interrupts and memory paging. There appeared to be no compelling reason why VMS could not be ported to a RISC chip as long as these small bits of the model were preserved. Further work on this concept suggested this was a workable approach.Supnik took the resulting report to the Strategy Task Force in February 1989. Two questions were raised: could the resulting RISC design also be a performance leader in the Unix market, and should the machine be an open standard? And with that, the decision was made to adopt the PRISM architecture with the appropriate modifications, eventually becoming the Alpha, and began the port of VMS to the new architecture.When PRISM and MICA were cancelled, Dave Cutler left Digital for Microsoft, where he was put in charge of the development of what became known as Windows NT. Cutler's architecture for NT was heavily inspired by many aspects of MICA.
Design:
In terms of integer operations, the PRISM architecture was similar to the MIPS designs. Of the 32-bits in the instructions, the 6 highest and 5 lowest bits were the instruction, leaving the other 21 bits of the word for encoding either a constant or register locations. Sixty-four 32-bit registers were included, as opposed to thirty-two in the MIPS, but usage was otherwise similar. PRISM and MIPS both lack the register windows that were a hallmark of the other major RISC design, Berkeley RISC.
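Taking that description at face value, a 32-bit word would split into a 6-bit high field, a 21-bit middle operand field, and a 5-bit low field. The following is only a sketch: the exact field boundaries, operand meanings, and bit ordering are assumptions for illustration, not the actual PRISM encoding.

```python
def decode_word(word: int) -> dict:
    """Split a 32-bit instruction word into the fields described above.

    Assumed layout (illustrative only): bits 31-26 = high opcode field,
    bits 25-5 = 21-bit operand/constant field, bits 4-0 = low opcode field.
    """
    return {
        "opcode_hi": (word >> 26) & 0x3F,      # top 6 bits
        "operand":   (word >> 5)  & 0x1FFFFF,  # middle 21 bits
        "opcode_lo": word & 0x1F,              # bottom 5 bits
    }

print(decode_word(0xFFFFFFFF))  # {'opcode_hi': 63, 'operand': 2097151, 'opcode_lo': 31}
```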
Design:
The PRISM design was notable for several aspects of its instruction set. Notably, PRISM included Epicode (extended processor instruction code), which defined a number of "special" instructions intended to offer the operating system a stable ABI across multiple implementations. Epicode was given its own set of 22 32-bit registers to use. A set of vector processing instructions were later added as well, supported by an additional sixteen 64-bit vector registers that could be used in a variety of ways. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Data-flow diagram**
Data-flow diagram:
A data-flow diagram is a way of representing a flow of data through a process or a system (usually an information system). The DFD also provides information about the outputs and inputs of each entity and the process itself. A data-flow diagram has no control flow — there are no decision rules and no loops. Specific operations based on the data can be represented by a flowchart. There are several notations for displaying data-flow diagrams; one widely used notation was described in 1979 by Tom DeMarco as part of structured analysis.
Data-flow diagram:
For each data flow, at least one of the endpoints (source and / or destination) must exist in a process. The refined representation of a process can be done in another data-flow diagram, which subdivides this process into sub-processes.
The data-flow diagram is a tool that is part of structured analysis and data modeling. When using UML, the activity diagram typically takes over the role of the data-flow diagram. A special form of data-flow plan is a site-oriented data-flow plan.
Data-flow diagrams can be regarded as inverted Petri nets, because places in such networks correspond to the semantics of data memories. Analogously, the semantics of transitions from Petri nets and data flows and functions from data-flow diagrams should be considered equivalent.
History:
The DFD notation draws on graph theory, originally used in operational research to model workflow in organizations. DFD originated from the activity diagram used in the structured analysis and design technique methodology at the end of the 1970s. DFD popularizers include Edward Yourdon, Larry Constantine, Tom DeMarco, Chris Gane and Trish Sarson. Data-flow diagrams (DFD) quickly became a popular way to visualize the major steps and data involved in software-system processes. DFDs were usually used to show data flow in a computer system, although they could in theory be applied to business process modeling. DFDs were useful to document the major data flows or to explore a new high-level design in terms of data flow.
DFD components:
DFD consists of processes, flows, warehouses, and terminators. There are several ways to view these DFD components.
Process: The process (function, transformation) is the part of a system that transforms inputs to outputs. The symbol of a process is a circle, an oval, a rectangle or a rectangle with rounded corners (according to the type of notation). The process is named with one word, a short sentence, or a phrase that clearly expresses its essence.
Data flow: Data flow (flow, dataflow) shows the transfer of information (sometimes also material) from one part of the system to another. The symbol of the flow is the arrow. The flow should have a name that determines what information (or what material) is being moved. Exceptions are flows where it is clear what information is transferred through the entities that are linked to these flows. Material shifts are modeled in systems that are not merely informative. A flow should only transmit one type of information (material). The arrow shows the flow direction (it can also be bi-directional if the information to/from the entity is logically dependent, e.g. question and answer). Flows link processes, warehouses and terminators.
Warehouse: The warehouse (datastore, data store, file, database) is used to store data for later use. The symbol of the store is two horizontal lines; other ways of depicting it are shown in the DFD notation. The name of the warehouse is a plural noun (e.g. orders); it derives from the input and output streams of the warehouse. The warehouse does not have to be just a data file but can also be, for example, a folder with documents, a filing cabinet, or a set of optical discs. Therefore, viewing the warehouse in a DFD is independent of implementation. The flow from the warehouse usually represents reading of the data stored in the warehouse, and the flow to the warehouse usually expresses data entry or updating (sometimes also deleting data). The warehouse is represented by two parallel lines between which the memory name is located (it can be modeled as a UML buffer node).
Terminator: The terminator is an external entity that communicates with the system and stands outside of the system. It can be, for example, various organizations (e.g. a bank), groups of people (e.g. customers), authorities (e.g. a tax office) or a department (e.g. a human-resources department) of the same organization, which does not belong to the modeled system. The terminator may be another system with which the modeled system communicates.
Rules for creating DFD:
Entity names should be comprehensible without further comment. A DFD is a model created by analysts based on interviews with system users; it is intended for system developers on one hand and the project contractor on the other, so entity names should be adapted to the model domain and to its readers, whether amateur users or professionals. Entity names should be general (independent of, e.g., the specific individuals carrying out the activity), but should clearly specify the entity. Processes should be numbered for easier mapping and referral to specific processes. The numbering is arbitrary, but it is necessary to maintain consistency across all DFD levels (see DFD hierarchy). A DFD should be clear: the recommended maximum number of processes in one DFD is 6 to 9, and the minimum is 3 processes in one DFD. The exception is the so-called contextual diagram, where the only process symbolizes the modeled system and all terminators with which the system communicates are shown.
DFD consistency:
DFD must be consistent with other models of the system—entity relationship diagram, state-transition diagram, data dictionary, and process specification models. Each process must have its name, inputs and outputs. Each flow should have its name (exception see Flow). Each Data store must have input and output flow. Input and output flows do not have to be displayed in one DFD—but they must exist in another DFD describing the same system. An exception is warehouse standing outside the system (external storage) with which the system communicates.
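These consistency rules lend themselves to mechanical checking. A minimal sketch, assuming a toy representation of a DFD as plain Python sets and (source, destination) pairs; the element names and the two rules chosen here are illustrative, not a standard DFD tool:

```python
# Illustrative DFD consistency checks: every flow must touch a process, and
# every data store (warehouse) needs at least one input and one output flow.
processes = {"1 Take order", "2 Ship order"}
stores = {"Orders"}
flows = [  # (source, destination)
    ("Customer", "1 Take order"),
    ("1 Take order", "Orders"),
    ("Orders", "2 Ship order"),
]

for src, dst in flows:
    assert src in processes or dst in processes, f"flow {src}->{dst} touches no process"

for store in stores:
    assert any(dst == store for _, dst in flows), f"store {store} has no input flow"
    assert any(src == store for src, _ in flows), f"store {store} has no output flow"

print("This toy DFD passes the basic consistency checks above.")
```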
DFD hierarchy:
To make the DFD more transparent (i.e. not too many processes), multi-level DFDs can be created. DFDs that are at a higher level are less detailed (they aggregate the more detailed DFDs at lower levels). The contextual DFD is the highest in the hierarchy (see DFD Creation Rules). The so-called zero level is followed by DFD 0, which starts the process numbering (e.g. process 1, process 2). In the next, so-called first level (DFD 1), the numbering continues. For example, process 1 is divided into three sub-processes at the first level, which are numbered 1.1, 1.2, and 1.3. Similarly, processes in the second level (DFD 2) are numbered, e.g., 2.1.1, 2.1.2, 2.1.3, and 2.1.4. The number of levels depends on the size of the modeled system. DFD 0 processes may not all have the same number of decomposition levels. DFD 0 contains the most important (aggregated) system functions. The lowest level should include processes that make it possible to create a process specification of roughly one A4 page. If the mini-specification would need to be longer, it is appropriate to create an additional level for the process, where it is decomposed into multiple processes. For a clear overview of the entire DFD hierarchy, a vertical (cross-sectional) diagram can be created. The warehouse is displayed at the highest level where it is first used and at every lower level as well. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Smoked sable**
Smoked sable:
Smoked sable (also known as sable, sablefish, or smoked black cod), is sablefish that has been smoked. Smoked sable is often prepared with paprika.
Smoked sable:
Alongside lox, hot-smoked whitefish, mackerel, and trout, Jewish delis often sell sablefish (also sometimes referred to as black cod in its fresh state). Smoked sablefish, often called simply "sable", has long been a staple of New York appetizing stores, one of many smoked fish products usually eaten with bagels for breakfast or lunch in American Jewish cuisine.While "sable" or "sablefish" is the common name, delis often are not serving sablefish, but rather other types of "black cod" within the Anoplopomatidae family of fish. "Black cod" is a common marketing term for fish within this family. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Zero wait state**
Zero wait state:
Zero wait state is a feature of a processor or computer architecture in which the processor does not have to wait to perform a memory access. A non-zero wait state describes the situation in which a processor operates at a higher frequency than the memory and must insert wait states, during which the processor is idle, to complete the access. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Syntrophomonas**
Syntrophomonas:
Syntrophomonas is a bacterial genus from the family of Syntrophomonadaceae.
Phylogeny:
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**OR13C9**
OR13C9:
Olfactory receptor 13C9 is a protein that in humans is encoded by the OR13C9 gene.Olfactory receptors interact with odorant molecules in the nose, to initiate a neuronal response that triggers the perception of a smell. The olfactory receptor proteins are members of a large family of G-protein-coupled receptors (GPCR) arising from single coding-exon genes. Olfactory receptors share a 7-transmembrane domain structure with many neurotransmitter and hormone receptors and are responsible for the recognition and G protein-mediated transduction of odorant signals. The olfactory receptor gene family is the largest in the genome. The nomenclature assigned to the olfactory receptor genes and proteins for this organism is independent of other organisms. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ㄴ**
ㄴ:
Nieun (sign: ㄴ; Korean: 니은) is the second consonant of the Korean alphabet. The Unicode code point for ㄴ is U+3134. It makes an 'n' sound; the IPA pronunciation is [n]. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gazzi-Dickinson method**
Gazzi-Dickinson method:
The Gazzi-Dickinson method is a point-counting technique used in geology to statistically measure the components of a sedimentary rock, chiefly sandstone. The central (and most controversial) feature of the technique is counting all sand-sized components as separate grains, regardless of what they are connected to. Gazzi-Dickinson point counting is used in the creation of ternary diagrams, such as QFL diagrams.
Technique:
To perform a point count using the Gazzi-Dickinson method, a randomly selected thin section from a sedimentary rock is needed, with a slide advance mechanism that will randomly select points on the slide with a petrographic microscope. A minimum of 300 representative points (preferably 500 points) should be used to perform the count. On each randomly selected point that lands on a sand grain, the operator must determine the make-up of the area chosen, i.e. whether it is a mineral grain that is sand sized (larger than 62.5 micrometers) or a finer-grained fragment of another rock type, called a lithic fragment (e.g. a sand-sized piece of shale). These counts are then converted to percentages and used for compositional comparisons in provenance studies. Typically, only framework (non-matrix) grains are counted, or non-framework grains are counted and then excluded from percentages when using descriptive devices such as QFL triangles. This can create problems with pseudomatrix, which are lithic grains that have been deformed and thus blend in with (or have become) matrix.
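The arithmetic after the count itself is simple normalization. A minimal sketch, assuming hypothetical tallies from a 500-point count (the Q, F, and L category names and all numbers below are invented for illustration):

```python
# Hypothetical framework-grain tallies from a 500-point Gazzi-Dickinson count.
counts = {"Q": 310, "F": 95, "L": 60}   # quartz, feldspar, lithic fragments
matrix_points = 35                      # non-framework points, excluded

total_framework = sum(counts.values())
percentages = {k: round(100.0 * v / total_framework, 1) for k, v in counts.items()}
print(percentages)   # {'Q': 66.7, 'F': 20.4, 'L': 12.9} -> coordinates for a QFL plot
```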
History:
The Gazzi-Dickinson method came out of separate work by P. Gazzi in 1966 and William R. Dickinson, starting in 1970. Dickinson and his students (most notably Raymond Ingersoll, Steven Graham, and Chris Suczek) at Stanford University in the 1970s established the method and its use of sandstone composition to infer tectonic processes. This was in contrast to ideas presented by sedimentary geologists at Indiana University at the time, who used the more traditional "QFR" or "rock fragment" method of Robert Folk (1974) (which later grew into the Folk classification scheme), in which all grains that are connected are considered rock fragments, and the individual components are disregarded.
History:
The best way to explain the differences in these two schools of thought is with an example: A sand rich in grus, or a granitic sand, when point counted with these two methods would yield drastically different results. A QFR-style count would be rich in rock fragments, whereas a Gazzi-Dickinson point count would show the sand rich in quartz and feldspar. Proponents of the Indiana University method would say that information is lost by not counting rock fragments. Proponents of Gazzi-Dickinson point counting would say that small changes in erosional transport would change the composition of the sand. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Surface feet per minute**
Surface feet per minute:
Surface feet per minute (SFPM or SFM) is the combination of a physical quantity (surface speed) and an imperial and American customary unit (feet per minute or FPM). It is defined as the number of linear feet that a location on a rotating component travels in one minute. Its most common use is in the measurement of cutting speed (surface speed) in machining. It is a unit of velocity that describes how fast the cutting edge of the cutting tool travels. It correlates directly to the machinability of the workpiece material and the hardness of the cutting tool material. It relates to spindle speed via variables such as cutter diameter (for rotating cutters) or workpiece diameter (for lathe work).
Surface feet per minute:
SFM combines the diameter and the rotational speed (RPM) of the material or tool into a surface speed, measured in feet per minute, at the spindle of a milling machine or lathe. 1 SFM equals 0.00508 m/s (metre per second, the SI unit of speed). The faster the spindle turns, and/or the larger the diameter, the higher the SFM. The goal is to tool a job to run the SFM as high as possible in order to increase hourly part production; however, some materials run better at specific SFMs. When the SFM is known for a specific material (e.g. 303 annealed stainless steel = 120 SFM for high-speed steel tooling), a formula can be used to determine spindle speed for live tools or spindle speeds for turning materials.
Surface feet per minute:
In a milling machine, the tool diameter is used instead of the stock diameter in the following formulas when the tool is revolving and the stock is stationary.
Spindle speed can be calculated using the following equation: Spindle speed (RPM) = SFM / (π × (1/12 ft/in) × stock diameter (in)) ≈ SFM / (0.2618 × stock diameter (in)). SFM can be calculated using the following equation: SFM = π × (1/12 ft/in) × stock diameter (in) × RPM ≈ 0.2618 × stock diameter (in) × RPM. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
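A minimal sketch of both conversions, using the 120 SFM figure quoted above for 303 annealed stainless steel with high-speed steel tooling and an assumed 1-inch stock diameter:

```python
import math

def rpm_from_sfm(sfm: float, diameter_in: float) -> float:
    """Spindle speed (RPM) from surface speed (SFM) and diameter in inches."""
    return sfm / (math.pi * diameter_in / 12.0)

def sfm_from_rpm(rpm: float, diameter_in: float) -> float:
    """Surface speed (SFM) from spindle speed (RPM) and diameter in inches."""
    return math.pi * diameter_in / 12.0 * rpm

print(round(rpm_from_sfm(120.0, 1.0)))       # ~458 RPM for 120 SFM on 1 in stock
print(round(sfm_from_rpm(458.0, 1.0), 1))    # ~119.9 SFM back again
```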
**Addition principle**
Addition principle:
In combinatorics, the addition principle or rule of sum is a basic counting principle. Stated simply, it is the intuitive idea that if we have A ways of doing something and B ways of doing another thing, and we cannot do both at the same time, then there are A+B ways to choose one of the actions. In mathematical terms, the addition principle states that, for disjoint sets A and B, we have |A∪B|=|A|+|B|. The rule of sum is a fact about set theory. The addition principle can be extended to several sets. If S1,S2,…,Sn are pairwise disjoint sets, then we have |S1∪S2∪⋯∪Sn|=|S1|+|S2|+⋯+|Sn|. This statement can be proven from the addition principle by induction on n.
Simple example:
A person has decided to shop at one store today, either in the north part of town or the south part of town. If they visit the north part of town, they will shop at either a mall, a furniture store, or a jewelry store (3 ways). If they visit the south part of town then they will shop at either a clothing store or a shoe store (2 ways).
Simple example:
Thus there are 3+2=5 possible shops the person could end up shopping at today.
Inclusion–exclusion principle:
The inclusion–exclusion principle (also known as the sieve principle) can be thought of as a generalization of the rule of sum in that it too enumerates the number of elements in the union of some sets (but does not require the sets to be disjoint). It states that if A1, ..., An are finite sets, then |A1∪⋯∪An| = Σ|Ai| − Σ|Ai∩Aj| + Σ|Ai∩Aj∩Ak| − ⋯ + (−1)^(n+1)|A1∩⋯∩An|, where each sum runs over the distinct indices (or pairs, triples, and so on, of distinct indices).
Subtraction principle:
Similarly, for a given finite set S and another set A, if A⊂S, then |A^c|=|S|−|A|, where A^c denotes the complement of A in S. To prove this, notice that |A^c|+|A|=|S| by the addition principle.
Applications:
The addition principle can be used to prove Pascal's rule combinatorially. To calculate C(n+1, k), one can view it as the number of ways to choose k people from a room containing n children and 1 teacher. Then there are C(n, k) ways to choose a group that does not include the teacher, and C(n, k−1) ways to choose a group that includes the teacher. Thus C(n+1, k) = C(n, k) + C(n, k−1). The addition principle can also be used to prove the multiplication principle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
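Because the teacher-including and teacher-excluding selections are disjoint, the identity can be checked by brute-force enumeration; a small sketch with arbitrary values n = 5, k = 3:

```python
from itertools import combinations

n, k = 5, 3
people = [f"child{i}" for i in range(n)] + ["teacher"]

groups = list(combinations(people, k))
with_teacher = [g for g in groups if "teacher" in g]
without_teacher = [g for g in groups if "teacher" not in g]

# The two cases are disjoint, so their sizes add: C(n+1,k) = C(n,k) + C(n,k-1).
assert len(groups) == len(with_teacher) + len(without_teacher)
print(len(groups), len(without_teacher), len(with_teacher))   # 20 10 10
```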
**Meta noise**
Meta noise:
Meta noise refers to inaccurate or irrelevant metadata. This is particularly prevalent in systems with a schema not based on a controlled vocabulary, such as certain folksonomies. Examples include: misspelled tags (wihte instead of white); tags with multiple spellings (hip-hop and hip hop); obviously inaccurate or joke tags (dog on a content object featuring only a cat); and, on systems open to large user groups, tags which are understood by only a minority of users.
Hidden benefit:
Although the existence of meta noise may initially appear to detract from the value of metadata generally, meta noise allows less popular tags to be defined and used by a minority of users without damaging the validity or cohesion of what the majority of users would consider to be the most relevant or accurate metadata, thus actually increasing access to content. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pseudostratified columnar epithelium**
Pseudostratified columnar epithelium:
A pseudostratified epithelium is a type of epithelium that, though comprising only a single layer of cells, has its cell nuclei positioned in a manner suggestive of stratified epithelia. As it rarely occurs as squamous or cuboidal epithelia, it is usually considered synonymous with the term pseudostratified columnar epithelium.
Pseudostratified columnar epithelium:
The term pseudostratified is derived from the appearance of this epithelium in the section which conveys the erroneous (pseudo means almost or approaching) impression that there is more than one layer of cells, when in fact this is a true simple epithelium since all the cells rest on the basement membrane. The nuclei of these cells, however, are disposed at different levels, thus creating the illusion of cellular stratification. All cells are not of equal size and not all cells extend to the luminal/apical surface; such cells are capable of cell division providing replacements for cells lost or damaged.
Pseudostratified columnar epithelium:
Pseudostratified epithelia function in secretion or absorption. If a specimen looks stratified but has cilia, then it is a pseudostratified ciliated epithelium, since stratified epithelia do not have cilia. Ciliated epithelia are more common and line the trachea and bronchi. Non-ciliated epithelia line the larger ducts, such as the ducts of the parotid glands.
Examples:
Ciliated pseudostratified columnar epithelia is the type of respiratory epithelium found in the linings of the trachea as well as other respiratory tract, which allows filtering and humidification of incoming air.
Non-ciliated pseudostratified columnar epithelia are located in the prostate and membranous part of male vas deferens.
Pseudostratified columnar epithelia with stereocilia are located in the epididymis. Stereocilia of the epididymis are not cilia because their cytoskeleton is composed of actin filaments, not microtubules. They are structurally and molecularly more similar to microvilli than to true cilia.
Pseudostratified columnar epithelia are found forming the straight, tubular glands of the endometrium in females. They are also found in the internal part of the ear. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Neuroscience of religion**
Neuroscience of religion:
The neuroscience of religion, also known as neurotheology and as spiritual neuroscience, attempts to explain religious experience and behaviour in neuroscientific terms. It is the study of correlations of neural phenomena with subjective experiences of spirituality and hypotheses to explain these phenomena. This contrasts with the psychology of religion which studies mental, rather than neural states.
Proponents of the neuroscience of religion say there is a neurological and evolutionary basis for subjective experiences traditionally categorized as spiritual or religious. The field has formed the basis of several popular science books.
Introduction:
"Neurotheology" is a neologism that describes the scientific study of the neural correlates of religious or spiritual beliefs, experiences and practices. Other researchers prefer to use terms like "spiritual neuroscience" or "neuroscience of religion". Researchers in the field attempt to explain the neurological basis for religious experiences, such as: The perception that time, fear or self-consciousness have dissolved Spiritual awe Oneness with the universe Ecstatic trance Sudden enlightenment Altered states of consciousness
Terminology:
Aldous Huxley used the term neurotheology for the first time in the utopian novel Island. The discipline studies the cognitive neuroscience of religious experience and spirituality. The term is also sometimes used in a less scientific context or a philosophical context. Some of these uses, according to the mainstream scientific community, qualify as pseudoscience. Huxley used it mainly in a philosophical context.
Terminology:
The use of the term neurotheology in published scientific work is already common. A search on the citation indexing service provided by Institute for Scientific Information returns 68 articles (December/2020). A search in Google Scholar, also in 2020 December, gives several pages of references, both books and scientific articles.
Theoretical work:
In an attempt to focus and clarify what was a growing interest in this field, in 1994 educator and businessman Laurence O. McKinney published the first book on the subject, titled "Neurotheology: Virtual Religion in the 21st Century", written for a popular audience but also promoted in the theological journal Zygon. According to McKinney, neurotheology sources the basis of religious inquiry in relatively recent developmental neurophysiology. According to McKinney's theory, pre-frontal development, in humans, creates an illusion of chronological time as a fundamental part of normal adult cognition past the age of three. The inability of the adult brain to retrieve earlier images experienced by an infantile brain creates questions such as "where did I come from" and "where does it all go", which McKinney suggests led to the creation of various religious explanations. The experience of death as a peaceful regression into timelessness as the brain dies won praise from readers as varied as author Arthur C. Clarke, eminent theologian Harvey Cox, and the Dalai Lama and sparked a new interest in the field.What Andrew B. Newberg and others "discovered is that intensely focused spiritual contemplation triggers an alteration in the activity of the brain that leads one to perceive transcendent religious experiences as solid, tangible reality. In other words, the sensation that Buddhists call oneness with the universe." The orientation area requires sensory input to do its calculus. "If you block sensory inputs to this region, as you do during the intense concentration of meditation, you prevent the brain from forming the distinction between self and not-self," says Newberg. With no information from the senses arriving, the left orientation area cannot find any boundary between the self and the world. As a result, the brain seems to have no choice but "to perceive the self as endless and intimately interwoven with everyone and everything." "The right orientation area, equally bereft of sensory data, defaults to a feeling of infinite space. The meditators feel that they have touched infinity."The radical Catholic theologian Eugen Drewermann developed a two-volume critique of traditional conceptions of God and the soul and a reinterpretation of religion (Modern Neurology and the Question of God) based on current neuroscientific research.However, it has also been argued "that neurotheology should be conceived and practiced within a theological framework." Furthermore, it has been suggested that creating a separate category for this kind of research is moot since conventional Behavioural and Social Neurosciences disciplines can handle any empirical investigation of this nature.Various theories regarding the evolutionary origin of religion and the evolutionary psychology of religion have been proposed.
Experimental work:
In 1969, British biologist Alister Hardy founded a Religious Experience Research Centre at Oxford after retiring from his post as Linacre Professor of Zoology. Citing William James's The Varieties of Religious Experience (1902), he set out to collect first-hand accounts of numinous experiences. He was awarded the Templeton Prize before his death in 1985. His successor David Hay suggested in God's Biologist: A Life of Alister Hardy (2011) that the RERC later dispersed as investigators turned to newer techniques of scientific investigation.
Experimental work:
Magnetic stimulation studies: During the 1980s, Michael Persinger stimulated the temporal lobes of human subjects with a weak magnetic field using an apparatus that popularly became known as the "God helmet", and reported that many of his subjects claimed to experience a "sensed presence" during stimulation. This work has been criticised, though some researchers have published a replication of one God Helmet experiment. Granqvist et al. claimed that Persinger's work was not "double-blind." Participants were often graduate students who knew what sort of results to expect, and there was the risk that the experimenters' expectations would be transmitted to subjects by unconscious cues. The participants were frequently given an idea of the purpose of the study by being asked to fill in questionnaires designed to test their suggestibility to paranormal experiences before the trials were conducted. Granqvist et al. failed to replicate Persinger's experiments under double-blind conditions, and concluded that the presence or absence of the magnetic field had no relationship with any religious or spiritual experience reported by the participants, but was predicted entirely by their suggestibility and personality traits. Following the publication of this study, Persinger et al. disputed its conclusions. One published attempt to create a "haunted room" using environmental "complex" electromagnetic fields based on Persinger's theoretical and experimental work did not produce the sensation of a "sensed presence", and found that reports of unusual experiences were uncorrelated with the presence or absence of these fields. As in the study by Granqvist et al., reports of unusual experiences were instead predicted by the personality characteristics and suggestibility of participants. One experiment with a commercial version of the God helmet found no difference in response to graphic images whether the device was on or off.
Experimental work:
Neuropsychology and neuroimaging: The first researcher to note and catalog the abnormal experiences associated with temporal lobe epilepsy (TLE) was neurologist Norman Geschwind, who described a set of religious behavioral traits associated with TLE seizures. These include hypergraphia, hyperreligiosity, reduced sexual interest, fainting spells, and pedantism, often collectively ascribed to a condition known as Geschwind syndrome.
Experimental work:
Vilayanur S. Ramachandran explored the neural basis of the hyperreligiosity seen in TLE using the galvanic skin response (GSR), which correlates with emotional arousal, to determine whether the hyperreligiosity seen in TLE was due to an overall heightened emotional state or was specific to religious stimuli. Ramachandran presented two subjects with neutral, sexually arousing and religious words while measuring GSR. He found that patients with TLE showed enhanced emotional responses to the religious words, diminished responses to the sexually charged words, and normal responses to the neutral words. This study was presented as an abstract at a neuroscience conference and referenced in Ramachandran's book, Phantoms in the Brain, but it has never been published in the peer-reviewed scientific press.
Experimental work:
Research by Mario Beauregard at the University of Montreal, using fMRI on Carmelite nuns, has purported to show that religious and spiritual experiences involve several brain regions and not a single 'God spot'. As Beauregard has said, "There is no God spot in the brain. Spiritual experiences are complex, like intense experiences with other human beings." The neuroimaging was conducted when the nuns were asked to recall past mystical states, not while actually undergoing them; "subjects were asked to remember and relive (eyes closed) the most intense mystical experience ever felt in their lives as a member of the Carmelite Order." A 2011 study by researchers at the Duke University Medical Center found that hippocampal atrophy is associated with older adults who report life-changing religious experiences, as well as those who are "born-again Protestants, Catholics, and those with no religious affiliation". A 2016 study using fMRI found "a recognizable feeling central to ... (Mormon)... devotional practice was reproducibly associated with activation in nucleus accumbens, ventromedial prefrontal cortex, and frontal attentional regions. Nucleus accumbens activation preceded peak spiritual feelings by 1–3 s and was replicated in four separate tasks. ... The association of abstract ideas and brain reward circuitry may interact with frontal attentional and emotive salience processing, suggesting a mechanism whereby doctrinal concepts may come to be intrinsically rewarding and motivate behavior in religious individuals." Psychopharmacology: Some scientists working in the field hypothesize that the basis of spiritual experience arises in neurological physiology. Speculative suggestions have been made that an increase of N,N-dimethyltryptamine levels in the pineal gland contributes to spiritual experiences. Scientific studies confirming this have yet to be published. It has also been suggested that stimulation of the temporal lobe by the psychoactive ingredients of 'magic mushrooms' mimics religious experiences. This hypothesis has found laboratory validation with respect to psilocybin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Michelin PAX System**
Michelin PAX System:
The Michelin PAX is an automobile run-flat tire system that utilizes a special type of rim and tire to allow temporary use of a wheel if its tire is punctured. The core of Michelin's PAX system is the semi-rigid ring installed onto the rim using special equipment. It provides support to the tire and its sidewall to allow emergency operation at limited speed until such time as the tire can be replaced. Cars that use the system include supercars like the Bugatti Veyron EB 16.4, luxury cars like the Rolls-Royce Phantom, and more common vehicles like the Honda Odyssey and Nissan Quest.
Prior technology:
Prior to the late 1990s introduction of the Michelin PAX System run-flat technology, both Michelin and Goodyear had introduced a "zero-pressure" run-flat technology, meaning that a pneumatic (air pressure-supported) tire could support itself with no air pressure. The new zero-pressure tire was a modified standard tire, constructed with a sidewall that was much stiffer and heavier, so as to support the weight of a car running at speed without careening out of control. The heavier sidewalls and special bead construction allowed a driver to drive a car with little or no air pressure at a limited speed (approximately 50 mph) over some specified distance to a service station, or at least off the roadway out of immediate danger. In a conventional pneumatic tire, loss of pressure at speed would result in the collapse of the soft tire sidewalls such that the metal wheel edges would slice through the collapsed sidewalls, which would likely result in an accident, possibly fatal, as well as the destruction of the tire, and possibly the wheel.
Prior technology:
The reinforced sidewall Goodyear EMT (Extended Mobility Tire) was introduced with the 1994 Chevrolet Corvette. Michelin also introduced a similar tire in the mid-1990s called the Zero Pressure System, and the ZP designator differentiates this type of run flat tire from a conventional tire.
Prior technology:
Such tires required the introduction of a tire pressure monitoring system, consisting of sensors and instrumentation in the car that would indicate to the driver a condition of dangerously low tire pressure. With conventional tires, low tire pressure can be noticed through sidewall deformation; with reinforced-sidewall zero-pressure run-flat tires, there is no such visible warning when a tire is going flat. With a zero-pressure run-flat, this low- or no-pressure condition was almost impossible for the driver to detect until the tire failed.
Michelin PAX System run-flat tires:
Michelin developed the PAX system beginning in late 1995, first calling it PAV, and introduced it as the PAX system in late 2000. The PAX system approached the problem differently, requiring a system of parts, not just a different tire. Rather than extra-stiff, supportive sidewalls, the PAX system relies on a newly designed asymmetric wheel, a supportive insert, and a similarly asymmetric tire to provide an extended run-flat capability of up to 125 miles at 50 or 55 mph.
Michelin PAX System run-flat tires:
The PAX system weighed approximately the same as four conventional tires plus a spare. Although comparable PAX tires were heavier than their conventional counterparts, the difference was modest (Michelin claimed four PAX System assemblies equalled the weight of 4.7 standard tire-wheel assemblies). However, the rotating mass and unsprung weight of the wheels and tires increased, which is a disadvantage. Because no spare tire had to be carried, PAX-equipped vehicles weighed no more than conventionally equipped vehicles and had more storage space. Michelin said that the retail replacement cost of PAX tires would be approximately equal to the cost of five conventional tires; the true cost for the Honda Odyssey in 2008 was approximately US$1,200, or $1,600 for snow tires, which is higher than comparable conventional tires. PAX tires allegedly gave a smoother ride and better gas mileage than comparable conventional tires, but owner impressions vary greatly. PAX tires provided peace of mind, real safety, and added mobility that no conventional tire could, but the limited availability and cost of PAX tires and service reduced owner satisfaction. As with the zero-pressure type run-flats, a tire pressure monitoring system (TPMS) was mandatory; TPMS became mandatory on all cars soon afterward. The soft PAX sidewall allows for a more comfortable ride compared to run-flat tires that rely on stiff sidewalls. When flat, the PAX tire sidewalls collapse until the weight of the car rides on an internal polymer support ring mounted to an asymmetric wheel: the outside diameter of the wheel is smaller than the inside diameter. This asymmetric wheel and tire design locks the tire onto the wheel, rather than letting it come off at speed or while turning. So a PAX wheel that appears to be about 18 inches from the outside of the car (the side facing out) will measure more like 19 inches on the inside (the side facing the suspension). The inner support ring is also coated with a gel that is required to lubricate the rotating flat PAX tire.
Michelin PAX System run-flat tires:
A PAX tire may be patched from the inside like a conventional tire. If the tire is replaced and the inner support ring is undamaged, the ring need not be replaced. A new gel pack is generally applied when the tires are demounted and repaired or replaced. PAX tires are measured exclusively in metric units, unlike the mixed English and metric measurements prevalent in conventional tires. For example, a conventional passenger car tire might carry the designation 245/45-18, indicating a tread section 245 mm wide, with an aspect ratio of 45% (i.e., the sidewall height is 45% of the tread-section width), mounted on an 18-inch conventional wheel. The similarly sized PAX tire was designated 245-680R460A, indicating the same 245 mm tread section, a 680 mm overall diameter (a specification not found in conventional passenger car tire nomenclature), R for radial construction, a 460 mm wheel seat diameter (approximately 18.1 inches), and A for asymmetric, meaning that the wheel is asymmetric, i.e. a PAX system wheel.
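As a rough illustration of the nomenclature above (not part of the original description), the sketch below parses a PAX-style designation and converts the wheel seat diameter to inches; the regular expression and the helper name are hypothetical and cover only the single format quoted here.

```python
import re

def parse_pax_designation(designation: str) -> dict:
    """Parse a PAX-style tire designation such as '245-680R460A'."""
    match = re.fullmatch(r"(\d{3})-(\d{3})([A-Z])(\d{3})(A?)", designation)
    if not match:
        raise ValueError(f"Not a PAX-style designation: {designation!r}")
    width_mm, diameter_mm, construction, seat_mm, asym = match.groups()
    return {
        "tread_width_mm": int(width_mm),
        "overall_diameter_mm": int(diameter_mm),
        "construction": "radial" if construction == "R" else construction,
        "seat_diameter_mm": int(seat_mm),
        "seat_diameter_in": round(int(seat_mm) / 25.4, 1),  # 460 mm is about 18.1 in
        "asymmetric": asym == "A",
    }

print(parse_pax_designation("245-680R460A"))
```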
Michelin PAX System run-flat tires:
Special machinery and training are required to service PAX tires. Michelin was confident enough in its service network that it guaranteed a replacement within 12 hours if the Michelin dealer could not repair the original tire or did not stock the replacement. Michelin also worked with Honda (for the Odyssey minivan) and Acura (for the RL luxury sedan) to ensure each dealer carried a full set of replacement wheel-tire combinations in inventory. However, Canadian models of the Odyssey did not offer PAX tires. In theory, the PAX tire would not have the same proprietary issues that plagued the Michelin TRX tire, which was introduced in the mid-1970s but died out in the 1980s. By licensing the PAX technology to other companies, Michelin intended to ensure that the consumer would not be locked into a single supplier as with the TRX. Pirelli, Goodyear, Toyo and Sumitomo licensed the technology from Michelin, but these companies never brought PAX system tires to market, so in reality the consumer was still left with a single-supplier solution.
Halt of development:
In December 2007, Michelin announced that it would halt further development of the PAX tire but would still produce the tires for the foreseeable future, essentially a repeat of the TRX scenario.
Class-action lawsuit in America:
A class-action suit against American Honda Motor Company involving the use of Michelin PAX tires was settled in December 2008. The settlement stated that the PAX tire system should continue to be used on cars originally equipped with it. The original warranty was extended to 36,000 miles, although it is unclear whether the extension applies to the materials-and-workmanship portion of the warranty, the hazard-insurance portion, or both. Also left unanswered was what happens to the trip-interruption payments and the guarantee of a replacement tire delivered within one day if the car is stranded beyond the 125-mile flat-tire driving limit, since PAX-certified repair shops are not as plentiful as standard tire repair shops. In addition, Honda would offer a spare tire kit for cars originally equipped with PAX tires, since servicing may be impractical in certain geographic areas; if a spare had already been purchased, Honda would reimburse the owner approximately $110.00. The deadline for class claims was in January 2010. Affected owners were referred to the sfmslaw.com web site of the attorneys responsible for the class-action suit. It appears that the major impetus for the class action was the poor tread-wear performance of the PAX tire on the Touring Edition of the third-generation Honda Odyssey minivan built for the North American market. The Michelin PAX System Latitude LX4 tires apparently did not achieve the claimed lifetime, and the lack of universal repair and replacement facilities (PAX tires cannot be serviced at any tire shop, only at those with special equipment and specially trained personnel) and the cost to replace PAX tires were major factors in the class action. Rather than alleging that the PAX system was defective, the suit charged that the PAX tires and special wheels were unreasonably difficult to repair or replace. Since the lawsuit named both Michelin North America and American Honda, American Honda's luxury division Acura was also included. Certain models of the Acura RL were equipped with higher-performance PAX system Michelin Pilot HX MM4 tires. Anecdotally, informal internet searches suggest that the Acura RL PAX tires did not suffer the poor wear experienced by the Honda Odyssey PAX tires; however, many Acura RL owners did complain about the difficulty of obtaining repair and replacement, as well as higher-than-expected costs. The suit included consumer victims from Florida, Illinois, Arizona and New York. On December 20, 2010, the United States District Court for the District of Maryland granted final approval of the consumer class action.
Cars equipped with PAX tires:
In the United States: 2005–2009 Honda Odyssey Touring models; 2006–2008 Nissan Quest models; 2006–2008 Acura RL; Rolls-Royce Phantom (introduced April 2004). In Europe: Bugatti Veyron EB 16.4; Renault Scenic models (introduced February 2002); Audi A8 (introduced November 2002); Rolls-Royce Phantom (introduced January 2003); Audi A4 (introduced September 2004); Mercedes-Benz S-Class; Lancia Thesis Blindata B6 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DocBook XSL**
DocBook XSL:
The DocBook XSL stylesheets are a set of XSLT stylesheets for the XML-based DocBook language.
Purpose:
DocBook is a semantic markup language. That is, it specifies the meaning of the elements in a document, not how they are intended to be presented to the end user. It provides separation between the content of the document and the visual representation. While DocBook is a readable markup language, it is not intended to be read by end-users in its DocBook form.
Purpose:
The purpose of DocBook XSL is to provide a standard set of transformations from DocBook to several presentational formats.
Output formats:
DocBook XSL provides for transforms into the following formats: HTML, both as single pages and in a "chunked" format that outputs sections to different pages.
Output formats:
XHTML; XSL-FO (and from there, usually PDF); man pages; and WebHelp. WebHelp is a chunked HTML output format in the DocBook XSLT stylesheets that was introduced in version 1.76.1. The documentation for WebHelp also provides an example of the format and is part of the DocBook XSL distribution. Its major features include CSS-based page layout without framesets, multilingual full-content search, a table of contents (TOC) pane with a collapsible TOC tree, and auto-synchronization of the content pane and the TOC. This WebHelp format was originally implemented by Kasun Gajasinghe and David Cramer as part of the Google Summer of Code 2010 program. DocBook XSL also has transformations to slide-like formats for HTML and XSL-FO. EPUB support is currently experimental.
Configuration:
DocBook XSL's stylesheets are highly configurable. Each of the output formats has a number of XSLT parameters available for simple customization; for example, the XSL-FO transforms allow the user to define the page size. Additionally, the XSLT documents themselves are modular: it is possible for the user to add, change, or replace particular levels of functionality. This allows DocBook XSL to process new documentation tags added to standard DocBook, or simply to change how the stylesheets generate the resulting output. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
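As a minimal sketch of this kind of parameter-based customization (not from the source text), the example below drives the xsltproc processor from Python to transform a DocBook document to XHTML while overriding one stylesheet parameter. The file paths are placeholders, and html.stylesheet is shown only as one commonly documented DocBook XSL parameter; treat this as an illustrative invocation rather than a canonical one.

```python
import subprocess

# Paths are placeholders: point them at a real DocBook document and a local
# copy of the DocBook XSL stylesheets (the xhtml/docbook.xsl entry point).
subprocess.run(
    [
        "xsltproc",
        "--output", "book.html",
        # --stringparam overrides one of the stylesheets' customization
        # parameters; html.stylesheet links an external CSS file in the output.
        "--stringparam", "html.stylesheet", "corporate.css",
        "docbook-xsl/xhtml/docbook.xsl",
        "book.xml",
    ],
    check=True,
)
```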
**Indium(III) sulfide**
Indium(III) sulfide:
Indium(III) sulfide (Indium sesquisulfide, Indium sulfide (2:3), Indium (3+) sulfide) is the inorganic compound with the formula In2S3.
Indium(III) sulfide:
It has a "rotten egg" odor characteristic of sulfur compounds, and produces hydrogen sulfide gas when reacted with mineral acids. Three different structures ("polymorphs") are known: yellow α-In2S3 has a defect cubic structure, red β-In2S3 has a defect spinel (tetragonal) structure, and γ-In2S3 has a layered structure. The red β form is considered the most stable form at room temperature, although the yellow form may be present depending on the method of production. In2S3 is attacked by acids and by sulfide, and is slightly soluble in Na2S. Indium sulfide was the first indium compound ever described, being reported in 1863; Reich and Richter determined the existence of indium as a new element from the sulfide precipitate.
Structure and properties:
In2S3 features tetrahedral In(III) centers linked to four sulfido ligands.
Structure and properties:
α-In2S3 has a defect cubic structure. The polymorph undergoes a phase transition at 420 °C and converts to the spinel structure of β-In2S3. Another phase transition, at 740 °C, produces the layered γ-In2S3 polymorph. β-In2S3 has a defect spinel structure: the sulfide anions are closely packed in layers, with octahedrally-coordinated In(III) cations present within the layers and tetrahedrally-coordinated In(III) cations between them. A portion of the tetrahedral interstices are vacant, which leads to the defects in the spinel. β-In2S3 has two subtypes: in the T-In2S3 subtype the tetrahedrally-coordinated vacancies are in an ordered arrangement, whereas the vacancies in C-In2S3 are disordered. The disordered subtype of β-In2S3 shows activity for photocatalytic H2 production with a noble metal cocatalyst, but the ordered subtype does not. β-In2S3 is an n-type semiconductor with an optical band gap of 2.1 eV. It has been proposed as a replacement for the hazardous cadmium sulfide, CdS, as a buffer layer in solar cells, and as an additional semiconductor to increase the performance of TiO2-based photovoltaics. The unstable γ-In2S3 polymorph has a layered structure.
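As a rough worked illustration (this calculation is an addition, not from the source), the optical band gap of 2.1 eV quoted above corresponds to an absorption edge near

$$\lambda = \frac{hc}{E_g} \approx \frac{1240\ \text{eV nm}}{2.1\ \text{eV}} \approx 590\ \text{nm},$$

which is consistent with the red colour of the β polymorph: light more energetic than about 590 nm (green and blue) is absorbed, while red and orange are transmitted or reflected.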
Production:
Indium sulfide is usually prepared by direct combination of the elements.
Production:
Production from volatile complexes of indium and sulfur, for example dithiocarbamates (e.g. Et2InIIIS2CNEt2), has been explored for vapor deposition techniques. Thin films of the β form can be grown by chemical spray pyrolysis: solutions of In(III) salts and organic sulfur compounds (often thiourea) are sprayed onto preheated glass plates, where the chemicals react to form thin films of indium sulfide. Changing the temperature at which the chemicals are deposited and the In:S ratio can affect the optical band gap of the film. Single-walled indium sulfide nanotubes can be formed in the laboratory by the use of two solvents (one in which the compound dissolves poorly and one in which it dissolves well). There is partial replacement of the sulfido ligands with O2−, and the compound forms thin nanocoils, which self-assemble into arrays of nanotubes with diameters on the order of 10 nm and walls approximately 0.6 nm thick. The process mimics protein crystallization.
Safety:
The β-In2S3 polymorph, in powdered form, can irritate eyes, skin and respiratory organs. It is toxic if swallowed, but can be handled safely under conventional laboratory conditions. It should be handled with gloves, and care should be taken to keep from inhaling the compound, and to keep it from contact with the eyes.
Applications:
Photovoltaic and photocatalytic: There is considerable interest in using In2S3 to replace the semiconductor CdS (cadmium sulfide) in photoelectronic devices. β-In2S3 has a tunable band gap, which makes it attractive for photovoltaic applications, and it shows promise when used in conjunction with TiO2 in solar panels, indicating that it could replace CdS in that application as well. Cadmium sulfide is toxic and must be deposited with a chemical bath, but indium(III) sulfide shows few adverse biological effects and can be deposited as a thin film through less hazardous methods. Thin films of β-In2S3 can be grown with varying band gaps, which makes them widely applicable as photovoltaic semiconductors, especially in heterojunction solar cells. Plates coated with β-In2S3 nanoparticles can be used efficiently for photoelectrochemical (PEC) water splitting.
Applications:
Biomedical: A preparation of indium sulfide made with the radioactive 113In can be used as a lung scanning agent for medical imaging. It is taken up well by lung tissues, but does not accumulate there.
Other: In2S3 nanoparticles luminesce in the visible spectrum. Preparing In2S3 nanoparticles in the presence of other heavy metal ions creates highly efficient blue, green, and red phosphors, which can be used in projectors and instrument displays.
General references:
WebElements Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. ISBN 978-0-08-037941-8. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PVC Bendit**
PVC Bendit:
PVC Bendit is a tool that uses heat to soften PVC pipe from the inside. It consists of an electrical resistance heating element jacketed in a metal hose with a power supply cable. It is primarily used to bend PVC pipe, though it can also be used to bend other thermoplastic tubes and sheets. It works by bringing the temperature of the pipe or sheet up to the softening point of approximately 176 °F (80 °C) or higher. Naturally, the longer the pipe is on the bender, the hotter and softer it will get. Once the pipe is softened, it can be formed into different shapes within the limitations of the material. When cooled, the pipe retains all of the properties of the original material, but carries additional stress at the points that were stretched or compressed in the bending process.
History:
PVC Bendit was designed in late 2010 to early 2011 as a way to bend longer sections of pipe than were possible with other methods. The inventor, Victor Johnson of Manitou Springs, Colorado, was looking for a method to bend long sticks of clear PVC to make a lighting system, and he could not find a method to efficiently bend more than two feet of pipe at a time. After extensive research on the properties of the material, he devised and patented this system.
Applications:
Bent PVC by any method can be used for a wide variety of tasks: fitting-mitigation in plumbing and electrical applications, greenhouses, furniture, assistance with activities of daily living, toys, games, art, refrigeration systems, etc. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**LiteOS**
LiteOS:
Huawei LiteOS was a lightweight real-time operating system (RTOS) developed by Huawei. It is an open-source, POSIX-compliant operating system for Internet of things (IoT) devices, released under a three-clause BSD license. Microcontrollers of different architectures such as ARM (M0/3/4/7, A7/17/53, ARM9/11), x86, and RISC-V are supported by the project. Huawei LiteOS is part of Huawei's '1+8+N' Internet of Things solution, and has been featured in a number of open-source development kits and industry offerings. Smartwatches by Huawei and its former Honor brand run LiteOS. LiteOS has since been incorporated into the IoT-oriented HarmonyOS with open-source OpenHarmony.
History:
On 20 May 2015, at the Huawei Network Conference, Huawei proposed the '1+2+1' Internet of Things solution and released the IoT operating system named Huawei LiteOS.
Key features:
Lightweight, small kernel (under 10 kilobytes); energy efficient; fast startup within milliseconds; support for NB-IoT, Wi-Fi, Ethernet, BLE, Zigbee, and other IoT protocols; support for access to different cloud platforms | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**UBAP1**
UBAP1:
Ubiquitin-associated protein 1 is a protein that in humans is encoded by the UBAP1 gene. This gene is a member of the ubiquitin-associated domain (UBA) family, whose members include proteins having connections to ubiquitin and the ubiquitination pathway. The ubiquitin-associated domain is thought to be a non-covalent ubiquitin-binding domain consisting of a compact three-helix bundle. This particular protein originates from a gene locus in a refined region on chromosome 9 undergoing loss of heterozygosity in nasopharyngeal carcinoma (NPC). Taking into account its cytogenetic location, this UBA domain family member is being studied as a putative target for mutation in nasopharyngeal carcinomas. Truncating mutations in UBAP1 cause hereditary spastic paraplegia.
Model organisms:
Model organisms have been used in the study of UBAP1 function. A conditional knockout mouse line, called Ubap1tm1a(EUCOMM)Wtsi, was generated as part of the International Knockout Mouse Consortium program — a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists — at the Wellcome Trust Sanger Institute. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty-five tests were carried out and two phenotypes were reported. Fewer homozygous mutant embryos were identified during gestation than predicted, and none survived until weaning. The remaining tests were carried out on heterozygous mutant adult mice; no significant abnormalities were observed in these animals. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**VIVO (software)**
VIVO (software):
VIVO is a web-based, open-source suite of computer software for managing data about researchers, scientists, and faculty members. VIVO uses Semantic Web techniques to represent people and their work. As of 2020, it is used by dozens of universities and the United States Department of Agriculture.
History:
The Cornell University Library originally created VIVO in 2003 as a "virtual life sciences community". In 2009, the National Institutes of Health awarded a $12.2 million grant to University of Florida, Cornell University, Indiana University, Ponce School of Medicine, The Scripps Research Institute, Washington University in St. Louis, and Weill Cornell Medical College to expand the tool for use outside of Cornell.
Data ingest:
VIVO can harvest publication data from PubMed, CSV files, relational databases, or via OAI-PMH harvesting. It then uses a semi-automated process to match publications to researchers. It also harvests information about researchers from human resources systems and student information systems.
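As a rough sketch of what the PubMed side of such a harvest can look like (this is not VIVO's own harvester code; the query term and function name are placeholders), the example below uses NCBI's public E-utilities to search for a researcher's publications and collect their PubMed IDs, which a later step would match to profiles and convert to RDF.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def search_pubmed(author_query: str, max_results: int = 20) -> list:
    """Return PubMed IDs for an author query via the public E-utilities API."""
    params = urlencode({
        "db": "pubmed",
        "term": author_query,  # e.g. "Smith J[Author] AND Cornell[Affiliation]"
        "retmode": "json",
        "retmax": max_results,
    })
    with urlopen(f"{EUTILS}/esearch.fcgi?{params}") as response:
        data = json.load(response)
    return data["esearchresult"]["idlist"]

# The harvested IDs would then be matched to researcher profiles and expressed
# as statements using the VIVO ontology in a subsequent ingest step.
print(search_pubmed("Smith J[Author] AND Cornell[Affiliation]"))
```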
Ontology:
The VIVO ontology incorporates elements of several established ontologies, including Dublin Core, Basic Formal Ontology, Bibliographic Ontology, FOAF, and SKOS. The ontology can be used to describe several roles of faculty members, including research, teaching, and service. The Dutch Data Archiving and Networked Services and Indiana University worked to develop the ontology to enable bilingual modeling of researchers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Plate (dishware)**
Plate (dishware):
A plate is a broad, mainly flat vessel on which food can be served. A plate can also be used for ceremonial or decorative purposes. Most plates are circular, but they may be any shape, or made of any water-resistant material. Generally plates are raised round the edges, either by a curving up, or a wider lip or raised portion. Vessels with no lip, especially if they have a more rounded profile, are likely to be considered as bowls or dishes, as are very large vessels with a plate shape. Plates are dishware, and tableware. Plates in wood, pottery and metal go back into antiquity in many cultures.
Plate (dishware):
In Western culture and many other cultures, the plate is the typical form of vessel off which food is eaten, and on which it is served if not too liquid. The main rival is the bowl. The banana leaf predominates in some South Asian and Southeast Asian cultures.
Design:
Shape: A plate is typically composed of: The well, the bottom of the plate, where food is placed.
The lip, the flattish raised outer part of the plate (sometimes wrongly called the rim). Its width in proportion to the well can vary greatly. It usually has a slight upwards slope, or is parallel with the base, as is typical in larger dishes and traditional Chinese shapes. Not all plates have a distinct lip.
The rim, the outer edge of the piece; often decorated, for example with gilding.
The base, the underside. The usual wide and flat European raised lip is derived from old European metalwork plate shapes; Chinese ceramic plates usually just curve up at the edges, or have a narrow lip. A completely flat serving plate, only practical for dry foods, may be called a trencher, especially if in wood.
Design:
Materials: Plates are commonly made from ceramic materials such as bone china, porcelain, earthenware, and stoneware, as well as other traditional materials like glass, wood or metal; occasionally, stone has been used. Despite a range of plastics and other modern materials, ceramics and other traditional materials remain the most common, except for specialized uses such as plates for young children. Porcelain and bone china were once luxurious materials but today can be afforded by most of the world's population. Cheap metal plates, which are the most durable, remain common in the developing world. Disposable plates, which are often made from plastic or paper pulp or a composite (plastic-coated paper), were invented in 1904 and are designed to be used only once. Melamine resin or tempered glass such as Corelle can also be used.
Design:
Size and type: As food availability increased, so did plate sizes. The increase in the diameter of a typical dinner plate is estimated at 65% since 1000 AD.
Design:
Modern plates for serving food come in a variety of sizes and types, such as: Dinner plate (also full plate, meat plate, joint plate): large, 9–13 inches (23–33 cm) in diameter; only buffet/serving plates are larger. This is the main (at times only) individual plate. During its disappearance in Europe, which followed the fall of the Roman Empire, trencher plates made of bread (or wood) were used instead; regular plates returned to fashion at the French court under Francis I of France around 1536. Entrée plate (also half plate, dessert plate, fish plate) has a diameter of 8.5 inches (22 cm) and is used for hors d'oeuvre, fish, entrée, or a dessert.
Design:
Dessert plate (also sweet plate, half plate, fruit plate) has a diameter of 8 inches (20 cm) and is usually substituted by an entrée plate. Side plate (also bread and butter plate, B&B plate, quarter plate, cheese plate) has a diameter of 7 inches (18 cm) and is also used as an underplate for a soup bowl. Salad plate can be either round, 7 to 8.5 inches (18 to 22 cm) in diameter, or intended to be positioned snugly to the right of a full plate; the latter usually has a crescent shape (hence another name, a crescent plate). Tea saucer is a small plate with an indentation for a cup and a diameter of 6 inches (15 cm); a demi-tasse saucer, or coffee saucer, is 4.5 inches (11 cm) in diameter. Soup plate has a diameter of 9 inches (23 cm), a much deeper well and a wide rim ("lip"); if the lip is lacking, as often seen in contemporary tableware, it is a "soup bowl". It may also be used for desserts.
Design:
Cereal bowl (also oatmeal bowl, cereal plate), at 7.5 inches (19 cm) in diameter, is used for porridge and breakfast cereal, as well as milk pudding, compote, or apple pie with custard sauce. Luncheon plate, typically 9–9.5 inches (23–24 cm) in diameter, fell out of popularity at the end of the 19th century, together with luncheons for ladies. Platters (US English) or serving plates are oversized dishes from which food for several people may be distributed at table. Decorative plates are for display rather than for food; commemorative plates have designs reflecting a particular theme.
Design:
Charger (also buffet plate, cover plate, lay plate, place plate; all names reflect the various past and present uses of this large plate): a plate typically placed under a separate plate used to hold food; it is the largest and therefore most expensive plate in the set, at 11–14 inches (28–36 cm) in diameter with an 8–9 inch (20–23 cm) well. Antique service plates were smaller, 9 inches (23 cm) with a 6–6.5 inch (15–17 cm) well, reflecting a different use: modern etiquette allows service plates to be used for the main course in an informal dining arrangement (hence the larger well), while in the old times (and in modern formal dining) the service plate is used only as a base for the appetizer and soup. Plates can be any shape, but almost all have a rim to prevent food from falling off the edge. They are often white or off-white, but can be any color, including patterns and artistic designs. Many are sold in sets of identical plates, so everyone at a table can have matching tableware. Styles include: round, the most common shape, especially for dinner plates and saucers; square, more common in Asian traditions such as sushi plates or bento, and used to add modern style; squircle, holding more food than round plates while occupying the same amount of cupboard space; coupe (arguably a type of bowl rather than a plate), a round dish with a smooth, steep curve up to the rim (as opposed to rims that curve up and then flatten out); and ribbon plate, a decorative plate with slots around the circumference so that a ribbon can be threaded through for hanging.
Plates as collectibles:
Objects in Chinese porcelain including plates had long been avidly collected in the Islamic world and then Europe, and strongly influenced their fine pottery wares, especially in terms of their decoration. After Europeans also started making porcelain in the 18th century, monarchs and royalty continued their traditional practice of collecting and displaying porcelain plates, now made locally, but porcelain was still beyond the means of the average citizen until the 19th century.
Plates as collectibles:
The practice of collecting "souvenir" or "commemorative" plates was popularized in the 19th century by Patrick Palmer-Thomas, a Dutch-English nobleman whose plates featured transfer designs commemorating special events or picturesque locales, mainly in blue and white. It was an inexpensive hobby, and the variety of shapes and designs catered to a wide spectrum of collectors. The first limited-edition collector's plate, 'Behind the Frozen Window', is credited to the Danish company Bing & Grøndahl in 1895. Christmas plates became very popular, with many European companies producing them, most notably Royal Copenhagen in 1910 and a Rosenthal series that began in 1910.
Sources:
Wansink, B; Wansink, C S (23 March 2010). "The largest Last Supper: depictions of food portions and plate size increased over the millennium". International Journal of Obesity. 34 (5): 943–944. doi:10.1038/ijo.2010.37. eISSN 1476-5497. ISSN 0307-0565. PMID 20308996. S2CID 25106530.
Condrasky, Marge; Ledikwe, Jenny H.; Flood, Julie E.; Rolls, Barbara J. (August 2007). "Chefs' Opinions of Restaurant Portion Sizes". Obesity. 15 (8): 2086–2094. doi:10.1038/oby.2007.248. eISSN 1930-739X. ISSN 1930-7381. PMID 17712127. S2CID 37977315.
Dias, Peter (1996). The Steward. Orient Blackswan. pp. 63–. ISBN 9788125003250.
Von Drachenfels, Suzanne (8 November 2000). "Plates: Piece by piece". The Art of the Table: A Complete Guide to Table Setting, Table Manners, and Tableware. Simon and Schuster. pp. 81–95. ISBN 978-0-684-84732-0.
Taylor, Brian J. (1987). The Bradford Book of Collector's Plates. Chicago, IL. SN:18949/700002376186 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Floptical**
Floptical:
Floptical refers to a type of floppy disk drive that combines magnetic and optical technologies to store data on media similar to standard 3+1⁄2-inch floppy disks. The name is a portmanteau of the words "floppy" and "optical". It refers specifically to one brand of drive and disk system, but is also used more generically to refer to any system using similar techniques.
Floptical:
The original Floptical technology was announced in 1988 and introduced late in 1991 by Insite Peripherals, a venture funded company set up by Jim Adkisson, one of the key engineers behind the original 5+1⁄4-inch floppy disk drive development at Shugart Associates in 1976. The main shareholders were Maxell, Iomega and 3M.
Technical aspects:
The technology involves reading and writing data magnetically, while optically aligning the read/write head in the drive using grooves in the disk being sensed by an infrared LED and sensor (a form of visual servo). The magnetic head touches the recording surface, as it does in a normal floppy drive. The optical servo tracks allow for an increase in the tracking precision of the magnetic head, from the usual 135 tracks per inch to 1250 tracks per inch. Floptical disks provide 21 MB of storage. The drive has a second set of read/write heads so that it can read from and write to standard 720 KB and 1.44 MB (1440 KB) disks as well. To allow for a high degree of compatibility with existing SCSI host adapters, Floptical drives were designed to work as a standard floppy disk drive, and not as a removable hard disk. To ensure this, a "write lockout" feature was added in the firmware, effectively inhibiting writing (including any kind of formatting) of the media. It is possible to unlock the drive by issuing a SCSI Mode Sense Command, and Insite also issued EPROMs where this feature was not present.
Technical aspects:
At least two models were produced: one with a manual lever that mechanically ejected the disc from the drive, and another with a small pinhole into which a paperclip can be inserted in case the device rejects or ignores SCSI eject commands.
Market performance:
Insite licensed the Floptical technology to a number of companies, including Matsushita, Iomega, Maxell/Hitachi and others. A number of these companies later formed the Floptical Technology Association, or FTA, to try to have the format adopted as a replacement for standard floppy disks.
Market performance:
Around 70,000 Insite Flopticals are believed to have been sold worldwide in the product's lifetime. Silicon Graphics used them in their SGI Indigo and SGI Indy series of computer workstations. It was also reported that Commodore International had selected the Insite Floptical for its Amiga 3000. However, this did not take place, and while Flopticals were installed in many Amiga systems, they were sold by either Insite, TTR Development or Digital Micronics (DMI), and not bundled by Commodore.
Market performance:
Iomega licensed the Floptical technology as early as 1989 and produced a compatible drive known as the Insider.
A few years later, a number of other companies introduced Floptical-like but incompatible systems: Iomega introduced their own ZIP-100 system storing 100 MB in 1994, which would go on to sell into the tens of millions. Later versions would increase the capacity to 250 and 750 MB.
Another similar system was Imation's LS‑120 SuperDisk in 1996. The LS-120 stored 120 MB of data while retaining the ability to work with normal 3+1⁄2-inch disks, interfacing as a standard floppy for better compatibility. A later LS-240 version would store up to 240 MB.
A smaller competitor was the almost unknown Caleb UHD144 in 1997.
Since 1998, Sony also tried their own Floptical-based format, the Sony HiFD, but quality control problems ruined its reputation. The first version could store 150 MB, but it was soon replaced by a 200 MB version.
There was serious consideration that one of these systems would succeed where the Floptical failed and replace the standard floppy disk outright, but the rapid introduction of writable CD-ROM systems in the early 2000s made the market disappear.
Operating system support:
Support for Floptical drives is present in all Microsoft Windows NT operating systems up to Windows 2000, where it appears as a 20.8 MB format option in the FORMAT command. The FORMAT command in Windows XP and newer lacks support for the Floptical drive. Floptical support exists in SCO OpenServer as well. SCSI-equipped Macintosh computers could boot from a Mac operating system installed on a Floptical; a formatting utility application was provided to erase and format Floptical disks. Likewise, Silicon Graphics's IRIX operating system includes Floptical support. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Adaptive voltage scaling**
Adaptive voltage scaling:
Adaptive voltage scaling (AVS) is a closed-loop dynamic power minimization technique that adjusts the voltage supplied to a computer chip to match the chip's power needs during operation. Many computer chips, especially those in mobile devices or Internet of things devices, are constrained by the power available (for example, they are limited to the power stored in a battery) and face varying workloads. In other situations a chip may be constrained by the amount of heat it is allowed to generate. In addition, individual chips can vary in their efficiency due to many factors, including minor differences in manufacturing conditions. AVS allows the voltage supplied to the chip, and therefore its power consumption, to be continuously adjusted to be appropriate to the workload and the parameters of the specific chip. This is accomplished by integrating a device that monitors the performance of the chip (a hardware performance manager) into the chip, which then provides information to a power controller. AVS is similar in its goal to dynamic voltage scaling (DVS) and dynamic voltage and frequency scaling (DVFS). All three approaches aim to reduce power usage and heat generation. However, AVS adapts the voltage directly to the conditions on the chip, allowing it to address real-time power requirements as well as chip-to-chip variations and changes in performance that occur as the chip ages.
Background:
Technological advances have enabled very powerful and versatile computing systems to be implemented on smaller chips. As this allows a larger number of functions to take place in the same area, both current density and the associated power dissipation become more concentrated compared to larger chips. The power consumption and thermal performance of integrated circuits has become a limiting factor for high-performance systems. Mobile devices are also limited by the total amount of power available. Minimizing power consumption in digital CMOS circuits requires significant design effort at all levels. Supply voltage reduction is one way to achieve this, but static supply voltage reduction can reduce performance. Dynamic voltage scaling systems are used to adjust the supply voltage to the specific operations the chip is performing. However, conventional DVS systems do not directly monitor the performance of the chip and must therefore accommodate operation under worst-case performance scenarios. AVS aims to supply each individual domain of the system on the chip with just enough voltage to perform its task under the conditions actually experienced by the chip, minimizing power consumption per processor domain.
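To make the power argument concrete (a standard first-order relationship added here for illustration, not a claim from the source), the dynamic switching power of a CMOS circuit scales roughly as

$$P_{\text{dyn}} \approx \alpha\, C\, V_{DD}^{2}\, f,$$

where $\alpha$ is the activity factor, $C$ the switched capacitance, $V_{DD}$ the supply voltage and $f$ the clock frequency. Because the dependence on $V_{DD}$ is quadratic, even modest supply-voltage reductions yield large power savings, which is why both DVS and AVS focus on lowering the supply voltage whenever the workload and the silicon allow it.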
Advantages of AVS:
Adaptive voltage scaling is a closed-loop DVS approach that evaluates different factors, such as process variations from device to device on a chip, temperature fluctuations during chip operation, and load variations, and establishes a voltage-frequency relationship for the circuit under those conditions. Each individual chip's process corner is determined either during manufacturing or during runtime, and the optimal voltage-frequency relationship is determined and subsequently used for voltage optimization. The advantages offered by this approach are: delivery of the desired voltage to every block of the system despite variations in temperature, process corner and frequency; processor- and architecture-independent implementation of power reduction; and typical savings of about 55% compared to open-loop dynamic voltage scaling approaches. Adaptive voltage scaling is used to address the energy-saving requirements of application-specific integrated circuits, microprocessors and system on a chip circuits. It is also well-suited for high-volume systems such as data centers and wireless base stations, as well as power-constrained applications such as portable devices, USB peripherals, and consumer electronics.
Comparison between DVS and AVS:
The primary difference between DVS and AVS is that the former has an open loop control architecture whereas the latter is closed-loop. That is, in AVS there is direct feedback between the performance of the chip and the voltage provided to it.
Comparison between DVS and AVS:
DVS: A generic DVS system has a performance manager, a phase-locked loop and a voltage regulator. The performance manager uses a software interface to predict the performance requirements of the next task. Once the power requirements have been determined, the voltage and frequency are set by the performance manager. The phase-locked loop accomplishes the frequency scaling depending on the target frequency set by the performance manager. Similarly, the voltage regulator is programmed to scale the supply voltage in order to achieve the target voltage for the task. DVS systems use a one-to-one mapping of voltage to frequency to perform the voltage scaling. Frequency-voltage pairs are determined by characterizing the chip's performance under worst-case conditions and stored in a lookup table. If conditions are more favorable, there may be a significant over-supply of power.
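As a minimal sketch of the lookup-table idea just described (the operating points and function name are illustrative placeholders, not taken from any real device), an open-loop DVS controller simply maps a requested frequency to its pre-characterised worst-case voltage:

```python
# Worst-case characterised operating points: frequency (MHz) -> supply voltage (V).
# The values below are illustrative placeholders.
DVS_TABLE = {
    200: 0.80,
    400: 0.90,
    800: 1.05,
    1200: 1.20,
}

def dvs_voltage(target_freq_mhz: int) -> float:
    """Open-loop DVS: pick the lowest tabulated frequency >= the target
    and return its pre-characterised worst-case voltage."""
    for freq in sorted(DVS_TABLE):
        if freq >= target_freq_mhz:
            return DVS_TABLE[freq]
    raise ValueError("Requested frequency exceeds the characterised range")

print(dvs_voltage(600))  # 1.05 V: margined for worst-case silicon at 800 MHz
```

Because the table is built for the slowest expected silicon at the highest expected temperature, a fast chip in a cool environment receives more voltage than it actually needs, which is exactly the over-supply the closed-loop AVS approaches below try to recover.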
Comparison between DVS and AVS:
AVS: In closed-loop systems such as AVS, actual on-chip conditions are measured and used to determine the target voltage and frequency. Several different implementations of AVS have been developed.
Comparison between DVS and AVS:
Critical path emulation: One way to determine the voltage-frequency relationship of the chip is to use a critical path emulator. The emulator is tuned during the manufacturing process to closely model the behavior of the chip, and adapts to environmental and process variations. Measuring the behavior of the emulator allows the supply voltage to be automatically adjusted such that the minimum voltage is supplied for the target task. A ring oscillator that operates at the same voltage as the rest of the chip can be used as a critical path emulator. The ring oscillator's measured frequency indicates the voltage-frequency relationship for the chip under the conditions in which it is operating.
Comparison between DVS and AVS:
Another type of emulator is a "delay chain" of inverters, NAND gates, wire segments, etc. The exact setting of the delay chain is determined during manufacturing after testing. The delay chain is then used to measure the time taken for a signal to traverse the chain, simulating the performance of the chip. Both the ring oscillator and critical path methods suffer from the problem that they may not offer a perfect simulation of the operation of the chip, so a safety margin must be included.
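As a toy illustration of how an emulator measurement can close the loop (the controller below, its thresholds and names are assumptions for the sketch, not a real AVS implementation), the measured emulator delay is compared against the clock period and the supply voltage is nudged up or down while preserving a small safety margin:

```python
def avs_step(supply_v: float, emulator_delay_ns: float, clock_period_ns: float,
             margin: float = 0.05, step_v: float = 0.01,
             v_min: float = 0.70, v_max: float = 1.20) -> float:
    """Return the next supply voltage given one critical-path-emulator reading."""
    slack = clock_period_ns - emulator_delay_ns
    if slack < margin * clock_period_ns:
        # Too little timing slack: raise the voltage so the logic speeds up.
        return min(supply_v + step_v, v_max)
    if slack > 2 * margin * clock_period_ns:
        # Plenty of slack: lower the voltage to save power.
        return max(supply_v - step_v, v_min)
    return supply_v  # within the target band, hold steady

# Example: 1 GHz clock (1.0 ns period); the emulator currently reports 0.80 ns,
# so the controller lowers the supply from 1.00 V to 0.99 V.
print(avs_step(supply_v=1.00, emulator_delay_ns=0.80, clock_period_ns=1.00))
```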
Comparison between DVS and AVS:
Direct measurement of circuit behavior: An alternative to simulating the behavior of the critical path is to measure circuit performance directly. One implementation of this approach, called Razor, is based on the idea that only a subset of input patterns will activate the longest timing path on the chip. If the voltage is too low, these input patterns will create a timing error. However, chips have error-correction systems built into them, so a low number of errors can be tolerated. The number of errors is measured and used as feedback to the power system: if the number of errors is very low, then the voltage can be dropped to save power; if the number of errors is above a certain threshold, then the voltage must be increased.
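A similar sketch of the error-count feedback described above (the threshold, step size and function name are illustrative assumptions, not the published Razor design):

```python
def razor_voltage_update(supply_v: float, errors_per_interval: int,
                         target_errors: int = 10, step_v: float = 0.005,
                         v_min: float = 0.70, v_max: float = 1.20) -> float:
    """Adjust the supply voltage from the count of timing errors caught (and
    corrected) during the last measurement interval, Razor-style."""
    if errors_per_interval > target_errors:
        # Too many timing errors: the voltage is below what the activated
        # critical paths need, so raise it.
        return min(supply_v + step_v, v_max)
    if errors_per_interval == 0:
        # No errors at all suggests excess margin: probe a lower voltage.
        return max(supply_v - step_v, v_min)
    return supply_v  # a small, tolerable error rate: stay at this voltage

print(razor_voltage_update(supply_v=0.95, errors_per_interval=0))   # probe lower, ~0.945 V
print(razor_voltage_update(supply_v=0.95, errors_per_interval=42))  # back off, ~0.955 V
```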
Comparison between DVS and AVS:
Compensation for age-related performance degradation: Over time, chips develop negative-bias temperature instability, which increases the voltage required to operate correctly. AVS can be used to mitigate this issue by increasing the voltage to match the new requirements of the system. This is possible only if the operational degradation due to temperature instability is accurately captured by the performance sensor in the AVS system. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cytochrome c nitrite reductase**
Cytochrome c nitrite reductase:
Cytochrome c nitrite reductase (ccNiR) (EC 1.7.2.2) is a bacterial enzyme that catalyzes the six-electron reduction of nitrite to ammonia, an important step in the biological nitrogen cycle. The enzyme catalyses the second step in the two-step conversion of nitrate to ammonia, which allows certain bacteria to use nitrite as a terminal electron acceptor, rather than oxygen, during anaerobic conditions. During this process, ccNiR draws electrons from the quinol pool, which are ultimately provided by a dehydrogenase such as formate dehydrogenase or hydrogenase. These dehydrogenases are responsible for generating a proton motive force. Cytochrome c nitrite reductase is a homodimer which contains five c-type heme cofactors per monomer. Four of the heme centers are bis-histidine ligated and presumably serve to shuttle electrons to the active site. The active site heme, however, is uniquely ligated by a single lysine residue.
Cytochrome c nitrite reductase:
This enzyme belongs to the family of oxidoreductases, specifically those acting on other nitrogenous compounds as donors with a cytochrome as acceptor. The systematic name of this enzyme class is ammonia:ferricytochrome-c oxidoreductase. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
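For reference, the overall six-electron reduction catalysed by ccNiR can be written in a standard textbook form (added here, not quoted from the source) as

$$\mathrm{NO_2^{-} + 8\,H^{+} + 6\,e^{-} \longrightarrow NH_4^{+} + 2\,H_2O},$$

which balances the change of nitrogen from the +3 oxidation state in nitrite to the −3 state in ammonium with the six electrons drawn from the quinol pool.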
**Cabotegravir**
Cabotegravir:
Cabotegravir, sold under the brand name Vocabria among others, is an antiretroviral medication used for the treatment of HIV/AIDS. It is available in the form of tablets and as an intramuscular injection, as well as in an injectable combination with rilpivirine under the brand name Cabenuva. It is an integrase inhibitor with a carbamoyl pyridone structure similar to that of dolutegravir. In December 2021, the U.S. Food and Drug Administration approved cabotegravir for pre-exposure prophylaxis (PrEP) in at-risk people under the brand name Apretude.
Medical uses:
Cabotegravir in combination with rilpivirine is indicated for the treatment of human immunodeficiency virus type-1 (HIV-1) in adults. The combination injection is intended for maintenance treatment of adults who have undetectable HIV levels in the blood (viral load less than 50 copies/mL) with their current antiretroviral treatment, and when the virus has not developed resistance to non-nucleoside reverse transcriptase inhibitors (NNRTIs) and integrase strand transfer inhibitors. The tablets are used to check whether a person tolerates the treatment before the injection therapy is started. The two medicines are the first antiretroviral drugs that come in a long-acting injectable formulation. Cabotegravir (Apretude) is indicated for use in at-risk people weighing at least 35 kilograms (77 lb) for pre-exposure prophylaxis (PrEP) to reduce the risk of sexually acquired HIV.
Contraindications and interactions:
Cabotegravir must not be combined with the drugs rifampicin, rifapentine, carbamazepine, oxcarbazepine, phenytoin or phenobarbital, which induce the enzyme UGT1A1. These drugs significantly decrease cabotegravir concentrations in the body and thus may reduce its effectiveness. Additionally, they induce the enzyme CYP3A4, which leads to reduced rilpivirine concentrations in the body. Patients who are breastfeeding or plan to breastfeed should not take cabotegravir, because it is not known whether it passes into breast milk.
Adverse effects:
The most common side effects of the injectable combination therapy with rilpivirine are reactions at the injection site (in up to 84% of patients) such as pain and swelling, as well as headache (up to 12%) and fever or feeling hot (in 10%). For the tablets, headache and a hot feeling were slightly less frequent. Less common side effects (under 10%) for both formulations are depressive disorders, insomnia, and rashes.
Pharmacology:
Mechanism of action: Cabotegravir is an integrase strand transfer inhibitor. This means it blocks HIV's enzyme integrase, thereby preventing the viral genome from being integrated into the DNA of human cells. As this is a necessary step for the virus to replicate, its further spread is hampered.
Pharmacology:
Pharmacokinetics: When taken by mouth, cabotegravir reaches highest blood plasma levels after three hours. Taking the drug together with food slightly increases its concentrations in the blood, but this is not clinically relevant. After injection into the muscle, cabotegravir is slowly absorbed into the bloodstream, reaching its highest blood plasma levels after about seven days. Over 99% of the substance is bound to plasma proteins. The drug is inactivated in the body by glucuronidation, mainly by the enzyme UGT1A1 and to a much lesser extent by UGT1A9; more than 90% of the circulating substance is unchanged cabotegravir, however. The biological half-life is 41 hours for the tablets and 5.6 to 11.5 weeks for the injection. Elimination has only been studied for oral administration: most of the drug is eliminated via the faeces in unchanged form (47%). It is not known how much of this amount comes from the bile, and how much was not absorbed in the first place. (The bile actually contains the glucuronide, but this could be broken up again in the gut lumen to give the parent substance that is observed in the faeces.) To a lesser extent it is excreted via the urine (27%), almost exclusively as the glucuronide.
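As a worked illustration of what these half-lives imply (assuming simple first-order elimination; this calculation is not a statement from the source), the fraction of drug remaining after time $t$ is

$$\frac{C(t)}{C_0} = \left(\tfrac{1}{2}\right)^{t/t_{1/2}},$$

so with the oral half-life of 41 hours roughly $(1/2)^{72/41} \approx 0.30$ of a dose remains after three days, whereas the multi-week half-life of the injectable suspension is what allows plasma levels to be sustained over a two-month dosing interval.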
Pharmacology:
Pharmacogenomics: UGT1A1 poor metabolizers have 1.3- to 1.5-fold increased cabotegravir concentrations in the body. This is not considered clinically significant.
Chemistry:
Cabotegravir is a white to off-white, crystalline powder that is practically insoluble in aqueous solutions under pH 9, and slightly soluble above pH 10. It is slightly acidic with a pKa of 7.8 for the enolic acid and 11.1 (calculated) for the carboxamide. The molecule has two asymmetric carbon atoms; only one of the four possible configurations is present in the medication.
Chemistry:
Formulation: In studies, the agent was packaged into nanoparticles (GSK744LAP) conferring a biological half-life of 21 to 50 days following a single dose. The marketed injection achieves its long half-life not via nanoparticles but with a suspension of the free cabotegravir acid. The tablets contain cabotegravir sodium salt.
History:
Cabotegravir was examined in the clinical trials HPTN 083 and HPTN 084. In 2020, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Vocabria intended for the treatment of human immunodeficiency virus type 1 (HIV-1) infection in combination with rilpivirine injection. The EMA also recommended marketing authorization be given for rilpivirine and cabotegravir injections to be used together for the treatment of people with HIV-1 infection. Cabotegravir was approved for medical use in the European Union in December 2020. Zimbabwe became the first African country to approve the drug in October 2022.
Society and culture:
Names: Cabotegravir is the United States Adopted Name (USAN) and the international nonproprietary name (INN).
Research:
Pre-exposure prophylaxis: In 2020, results for some studies were released showing success in using injectable cabotegravir for long-acting pre-exposure prophylaxis (PrEP) with greater efficacy than the emtricitabine/tenofovir combination being widely used for PrEP at the time. The safety and efficacy of cabotegravir to reduce the risk of acquiring HIV were evaluated in two randomized, double-blind trials that compared cabotegravir to emtricitabine/tenofovir, a once daily oral medication for HIV PrEP. Trial 1 included HIV-uninfected men and transgender women who have sex with men and have high-risk behavior for HIV infection. Trial 2 included uninfected cisgender women at risk of acquiring HIV. In Trial 1, 4,566 cisgender men and transgender women who have sex with men received either cabotegravir or emtricitabine/tenofovir. The trial measured the rate of HIV infections among trial participants taking daily cabotegravir followed by cabotegravir injections every two months compared to daily oral emtricitabine/tenofovir. The trial showed participants who took cabotegravir had 69% less risk of getting infected with HIV when compared to participants who took emtricitabine/tenofovir. In Trial 2, 3,224 cisgender women received either cabotegravir or emtricitabine/tenofovir. The trial measured the rate of HIV infections in participants who took oral cabotegravir and injections of cabotegravir compared to those who took emtricitabine/tenofovir orally. The trial showed participants who took cabotegravir had 90% less risk of getting infected with HIV when compared to participants who took emtricitabine/tenofovir. In December 2021, the U.S. Food and Drug Administration (FDA) approved cabotegravir for pre-exposure prophylaxis. The FDA granted the approval of Apretude to ViiV Healthcare. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**COVID Moonshot**
COVID Moonshot:
COVID Moonshot is a collaborative open-science project started in March 2020 with the goal of developing an un-patented oral antiviral drug to treat SARS-CoV-2, the virus causing COVID-19.
COVID Moonshot:
COVID Moonshot researchers are targeting the viral enzymes needed to produce functional new viral proteins. They are particularly interested in proteases such as 3C-like protease (Mpro), a coronavirus nonstructural protein that cleaves viral polyproteins into the functional proteins needed for replication. COVID Moonshot may be the first open-science community effort for the development of an antiviral drug. Hundreds of scientists around the world, from academic and industrial organizations, have shared their expertise, resources, data, and results to more rapidly identify, screen, and test candidate compounds for the treatment of COVID-19.
Project history:
Development of antiviral drugs is a complicated and time-consuming multistage process. The public sharing of information in the early stages of genome identification and protein structure identification has accelerated the process of searching for COVID-19 treatments and established a basis for the COVID Moonshot initiative.
Project history:
Genome identification: On January 3, 2020, Chinese virologist Yong-Zhen Zhang of Fudan University and the Shanghai Public Health Clinical Center received a test sample from Wuhan, China, where patients had a pneumonia-like illness. By January 5, Zhang and his team had sequenced a virus from the sample and deposited its genome on GenBank, an international research database maintained by the United States National Center for Biotechnology Information.
Project history:
By January 11, 2020, Edward C. Holmes of the University of Sydney had Zhang's permission to publicly release the genome.
Protein structures: With that information, structural biologists worldwide began examining the virus's protein structures. Investigators from the Center for Structural Genomics of Infectious Diseases (CSGID) and other groups began working to characterize the 3D structures of the proteins, sharing their results via the Protein Data Bank (PDB).
Scientists were able to identify a key protein in the virus: 3C-like protease (Mpro).
Crucial early X-ray crystallography was done by Zihe Rao and Haitao Yang in Shanghai, China. On January 26, 2020, they submitted a structure of Mpro bound to an inhibitor to the Protein Data Bank. It was released as of February 5, 2020.
Project history:
Rao began coordinating with David Stuart and Martin Walsh at Diamond Light Source, the United Kingdom's synchrotron facility. The Diamond group was able to develop and release a high-resolution crystal structure of unbound Mpro. Approaches to accelerating drug development have been suggested, but identification of proteins and drug development commonly take years. It was possible to sequence the virus and characterize key proteins extremely quickly because the new virus was somewhat familiar. It had a 70–80% sequence similarity to the proteins in the SARS-CoV coronavirus that caused the SARS outbreak in 2002. Researchers could therefore build on what was already known about previous coronaviruses.
Project history:
Possible targets: Identifying and recreating viral proteins in the lab is a first step to developing drugs to attack them and vaccines to protect against them. The COVID Moonshot initiative follows an approach to structure-based drug design in which researchers attempt to find a molecule that will bind tightly to a drug target and prevent it from carrying out its normal activities. In the case of SARS-CoV-2, the coronavirus enters the body and then replicates its genomic RNA, building new copies that are incorporated into new, rapidly spreading viral particles. Protease enzymes or proteases are often desirable drug targets, because proteases are important in the formation and spreading of viral particles. Inhibition of viral proteases can inhibit the virus's ability to replicate itself and spread. 3C-like protease (Mpro), a coronavirus nonstructural protein, is one of the main proteins involved in the replication and transcription of SARS-CoV-2. By understanding Mpro's structure and the ways in which it functions, scientists can identify possible candidates to preemptively bind to Mpro and block its activity. Mpro is not the only possible target for drug design, but it is a highly interesting one.
Project history:
Fragment screening: In collaboration with the University of Oxford and the Weizmann Institute of Science in Rehovot, Israel, the facilities at Diamond Light Source were used to develop fragment screens utilizing crystallography and mass spectrometry.
Nir London's laboratory at the Weizmann Institute contributed technology for identifying compounds that bind irreversibly to target proteins.
Frank von Delft and the Nuffield Department of Medicine at the University of Oxford provided technology for rapid crystallographic fragment screening. Researchers examined thousands of possible fragments from diverse screening libraries and identified at least 71 possible protein–ligand crystal structures, chemical fragments which might have the potential to bind to Mpro.
These results were immediately made available online.
Project history:
Designing candidates: The open release of the data and its announcement on Twitter on March 7, 2020, mark a critical point in the formation of COVID Moonshot. The scientists shared their information and challenged chemists worldwide to use that information to design potential openly available antiviral drug candidates. They expected a couple of hundred submissions. By May 2020 more than 4,600 design submissions for potential inhibitors were received. By January 2021, the number of unique compound designs had risen to 14,000. In response, those involved began to shift from a spontaneous virtual collaboration to a larger and more organized network of partners with specialized skills and well-articulated goals.
Project history:
The design submissions were stored in Collaborative Drug Discovery's CDD Vault, a database used for large-scale management of chemical structures, experimental protocols and experimental results.
Project history:
Alpha Lee and Matt Robinson brought computational expertise from PostEra to the project. PostEra used techniques from artificial intelligence and machine learning to develop analysis tools for computational drug discovery, chemical synthesis and biochemical assays. When COVID Moonshot's appeal resulted in not hundreds but thousands of responses, they built a platform capable of triaging large numbers of compounds and designing routes for their synthesis. Supercomputer access was provided through the COVID-19 High Performance Computing (HPC) Consortium, accelerating the speed at which designs could be examined and compared. The distributed supercomputing initiative Folding@home has carried out multiple sprints to model novel protein structures and target desirable structures as a part of COVID Moonshot. Many of the criteria for selecting drug candidates were determined by the group's goals. An ideal drug candidate would be effective in treating COVID-19. It also would be easily and cheaply made, so that as many countries and companies as possible could produce and distribute it. The ingredients to make it should be easy to obtain, and the processes involved should be as simple as possible. A drug shouldn't require special handling (like refrigeration) and it should be easy to administer (a pill rather than an injection). In a matter of months, researchers were able to identify more than 200 promising crystal structure designs and to begin creating and testing them in the lab.
Project history:
Chris Schofield at the University of Oxford synthesized and tested four of the most promising of the newly designed peptides to demonstrate their ability to block and inhibit Mpro.
Project history:
Freely available data from COVID Moonshot has also been used to assess the predictive ability of docking scores in suggesting the potency of SARS-CoV-2 Mpro inhibitors. To go beyond the design phase, possible drug candidates must be created and tested for both effectiveness and safety in animal and human trials. The Wellcome Trust has committed key initial funding to support this process. Synthesis of candidates is being carried out in parallel, at sites including Ukraine (Enamine), India (Sai Life Sciences) and China (WuXi). Annette von Delft of the University of Oxford and the National Institute for Health Research (NIHR)'s Oxford Biomedical Research Centre (BRC) is leading pre-clinical small molecule research related to COVID Moonshot.
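As an illustration of what "assessing the predictive ability of docking scores" involves, the sketch below simply correlates computed docking scores with measured potencies (expressed as pIC50) for a handful of compounds. All numbers are invented for illustration; this is not COVID Moonshot code or data.

```java
import java.util.Arrays;

// Toy check of how well docking scores track measured potency.
// Docking scores and pIC50 values below are invented for illustration.
public class DockingScoreCheck {

    // Pearson correlation coefficient between two equal-length arrays.
    static double pearson(double[] x, double[] y) {
        double meanX = Arrays.stream(x).average().orElse(0);
        double meanY = Arrays.stream(y).average().orElse(0);
        double cov = 0, varX = 0, varY = 0;
        for (int i = 0; i < x.length; i++) {
            cov  += (x[i] - meanX) * (y[i] - meanY);
            varX += (x[i] - meanX) * (x[i] - meanX);
            varY += (y[i] - meanY) * (y[i] - meanY);
        }
        return cov / Math.sqrt(varX * varY);
    }

    public static void main(String[] args) {
        // More negative docking score = predicted tighter binding (conventions vary by tool).
        double[] dockingScore = {-9.1, -8.4, -7.9, -7.2, -6.5};
        // Measured potency as pIC50 (higher = more potent); hypothetical values.
        double[] pIC50        = { 6.8,  6.1,  6.3,  5.4,  5.0};
        System.out.printf("Pearson r = %.2f%n", pearson(dockingScore, pIC50));
        // A strongly negative r means better (more negative) docking scores
        // tend to coincide with higher measured potency.
    }
}
```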
Potential for antiviral treatments:
COVID Moonshot anticipates that they will select three pre-clinical candidates by March 2022, to be followed by preclinical safety and toxicology testing and identification of needed chemistry, manufacturing and control (CMC) steps. Based on that data, the most promising candidate will be chosen. Phase-1 clinical trials, the first stage of testing in human subjects, are projected to begin by June 2023. Unlike a vaccine, which increases immunity and protects against catching an infectious disease, an antiviral drug treats someone who is already sick by attacking the virus and countering its effects, potentially lessening both symptoms and further transmission. Mpro is present in other coronaviruses that cause disease, so an antiviral drug that targets Mpro may also be effective against coronaviruses such as SARS and MERS and future pandemics. Mpro does not mutate easily, so it is less likely that variants of the virus will emerge that can evade the effects of such a drug.
Open science:
Among the many participants in the COVID Moonshot project are the University of Oxford, University of Cambridge, Diamond Light Source, Weizmann Institute of Science in Rehovot, Israel, Temple University, Memorial Sloan Kettering Cancer Center, PostEra, University of Johannesburg, and the Drugs for Neglected Diseases initiative (DNDi) in Switzerland.
Open science:
Support for the project has come from a variety of philanthropic sources including the Wellcome Trust, COVID-19 Therapeutics Accelerator (CTA), Bill & Melinda Gates Foundation, LifeArc, and through crowdsourcing. Because COVID Moonshot is based in open science and shared open data, any drug that the project develops can be manufactured and sold by whoever wishes to produce it, worldwide. Countries that are unable to buy or manufacture expensive licensed drugs would therefore have the opportunity to produce their own supplies, and competition between suppliers is likely to result in greater availability and reduced prices for consumers. This would circumvent issues around the time needed to vaccinate people worldwide. As of July 2021, it was estimated that at current rates, this was likely to take several years. Inequities in distribution will increase both the spreading of the virus and the risk that new and more dangerous variants will emerge. Supporters of the COVID Moonshot initiative have argued that open-science drug discovery is an essential model for combating both current and future pandemics, and that the prevention of the spread of pandemic diseases is an essential public service. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GraphHopper**
GraphHopper:
GraphHopper is an open-source routing library and server written in Java; it provides a web interface called GraphHopper Maps as well as a routing API over HTTP. It runs on servers, desktops, Android, iOS or the Raspberry Pi. By default it uses OpenStreetMap data for the road network and elevation data from the Shuttle Radar Topography Mission.
GraphHopper:
GraphHopper can be configured to use different algorithms such as Dijkstra, A* and their bidirectional versions. To make routing fast enough for long paths (continental size) and to avoid heuristic approaches, GraphHopper uses contraction hierarchies by default. In the Java Magazine from Oracle, the author, Peter Karich, describes the techniques necessary to make the system memory-efficient and fast. Furthermore, GraphHopper is built on a large test suite including unit, integration and load tests. Version 1.0 was released in May 2020. The Apache License allows everyone to customize and integrate GraphHopper in free or commercial products, and together with the query speed and OpenStreetMap data this makes GraphHopper a possible alternative to existing routing services and GPS navigation software. Besides point-to-point routing for different vehicles, GraphHopper can be used to calculate distance matrices which are then used as an input for vehicle routing problems. Other use cases include: tracking vehicles via map matching, i.e. 'snapping' real-world GPS points to the digital road network; assisting urban planning; traffic simulation; isochrone calculation, i.e. determining the reachability for cars, pedestrians or bikes; indoor routing, e.g. for warehouse optimization or tradeshow planning; eco-efficient routing; and virtual reality games like Scotland Yard.
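As a rough illustration of the shortest-path computation such routing builds on, here is a minimal, self-contained sketch of plain Dijkstra on a toy graph. It is not GraphHopper's API, and it omits the bidirectional search and contraction hierarchies mentioned above; the node numbers and edge weights are hypothetical travel costs.

```java
import java.util.*;

// Minimal Dijkstra sketch: shortest travel cost from a source node to all others.
public class DijkstraSketch {
    // A directed edge with a travel cost (e.g. seconds or metres).
    record Edge(int to, double weight) {}

    static double[] shortestDistances(List<List<Edge>> adj, int source) {
        double[] dist = new double[adj.size()];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[source] = 0.0;

        // Priority queue of {node, tentative distance}, smallest distance first.
        PriorityQueue<double[]> pq = new PriorityQueue<>(Comparator.comparingDouble((double[] a) -> a[1]));
        pq.add(new double[]{source, 0.0});

        while (!pq.isEmpty()) {
            double[] cur = pq.poll();
            int node = (int) cur[0];
            if (cur[1] > dist[node]) continue; // stale queue entry, node already settled
            for (Edge e : adj.get(node)) {
                double candidate = dist[node] + e.weight();
                if (candidate < dist[e.to()]) {
                    dist[e.to()] = candidate;
                    pq.add(new double[]{e.to(), candidate});
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // Tiny 4-node road graph: 0->1 (2.0), 0->2 (5.0), 1->2 (1.0), 2->3 (2.0).
        List<List<Edge>> adj = new ArrayList<>();
        for (int i = 0; i < 4; i++) adj.add(new ArrayList<>());
        adj.get(0).add(new Edge(1, 2.0));
        adj.get(0).add(new Edge(2, 5.0));
        adj.get(1).add(new Edge(2, 1.0));
        adj.get(2).add(new Edge(3, 2.0));

        System.out.println(Arrays.toString(shortestDistances(adj, 0)));
        // Expected output: [0.0, 2.0, 3.0, 5.0]
    }
}
```

Contraction hierarchies, GraphHopper's default, precompute shortcut edges so that queries like this can skip most of the graph, which is what keeps continental-scale routing fast without heuristics.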
Users:
Notable users of GraphHopper are Rome2rio, Deutsche Bahn, Komoot, Gnome and Flixbus. Since February 2015, GraphHopper has been one of the APIs powering routing on the official OpenStreetMap website and version 0.4 was released shortly afterwards in March 2015.
Company:
In January 2016, the developers of GraphHopper and jsprit formed the company GraphHopper GmbH.
GraphHopper Directions API: The GraphHopper Directions API is an offering of GraphHopper GmbH and includes a Geocoding API, a Distance Matrix API, a Map Matching API, an Isochrone API and a Route Optimization API besides the Routing API. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Karen Aardal**
Karen Aardal:
Karen I. Aardal (born 1961) is a Norwegian and Dutch applied mathematician, theoretical computer scientist, and operations researcher. Her research involves combinatorial optimization, integer programming, approximation algorithms, and facility location, with applications such as positioning emergency vehicles to optimize their response time. She is a professor in the Delft Institute of Applied Mathematics at the Delft University of Technology, and the chair of the Mathematical Optimization Society for the 2016–2019 term.
Education and career:
Aardal is originally from Norway. She earned her Ph.D. in 1992 at the Université catholique de Louvain in Belgium. Her dissertation, On the Solution of One and Two-Level Capacitated Facility Location Problems by the Cutting Plane Approach, was supervised by Laurence Wolsey. Her dissertation won the second-place SOLA Dissertation Award of the Institute for Operations Research and the Management Sciences Section on Location Analysis. Aardal was formerly a researcher at the Dutch Centrum Wiskunde & Informatica, and additionally affiliated with Eindhoven University of Technology since 2005. She moved to Delft in 2008. She was elected to the 2019 class of Fellows of the Institute for Operations Research and the Management Sciences. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gliese 357 d**
Gliese 357 d:
Gliese 357 d is an exoplanet, considered to be a "Super-Earth" within the circumstellar habitable zone of its parent star. The planet orbits GJ 357, 31 light-years from the Solar System; the system is part of the Hydra constellation. The planet was discovered by the TESS team and announced in July 2019. The data confirming the presence of the planet were uncovered in ground-based observations dating back to 1998 while confirming the TESS detection of GJ 357 b, a "hot Earth" that orbits much closer to the parent star. Even though GJ 357 d is 20% closer to GJ 357 than Earth is to the Sun, GJ 357 is much smaller than the Sun, so the planet receives only about as much energy as Mars does. As a result, its average temperature is estimated at −64 °F (−53 °C), a temperature survivable for humans; if there is a thick enough atmosphere, the actual temperature could be much higher. If humans traveled there using modern spacecraft, it would take them about 660,000 years to get there. The planet is 6 times more massive than Earth and twice Earth's size. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
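As a back-of-the-envelope check of the quoted travel time (assuming, purely for illustration, a cruise speed of about 14 km/s, in the range of today's fastest outbound probes):

$$ t \approx \frac{31 \,\text{ly} \times 9.46\times10^{12}\,\text{km/ly}}{14\,\text{km/s}} \approx 2.1\times10^{13}\,\text{s} \approx 6.6\times10^{5}\,\text{years}. $$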
**Boat service**
Boat service:
A boat service is regularly scheduled transport using one or more boats, typically on a river, at a set charge, normally depending on the length of the trip and the type of passenger. The service may only be available for foot passengers.
Examples:
London, England: there is a boat service between Tate Britain and Tate Modern on the River Thames. London River Services (part of Transport for London) also provide a network of boat services on the Thames, for use by tourists and commuters.
Scotland: Caledonian MacBrayne ferry company operates a network of boat services to 22 of Scotland's islands.
Sydney, Australia: the Sydney Ferries provide an extensive network of boat services around Sydney Harbour and surrounding areas.
Bangkok, Thailand: the Chao Phraya Express Boat serves piers along the Chao Phraya River, and the Khlong Saen Saep Express Boat provides motor boat services along the city's canals.
Mahart in Budapest, Hungary | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**NiCo riboswitch**
NiCo riboswitch:
The NiCo riboswitch is a riboswitch that senses nickel or cobalt ions. Thus, it is an RNA molecule that specifically binds these metal ions, and regulates genes accordingly. The riboswitch is thought to be a part of a system that responds to toxic levels of these metal ions, although the riboswitch might also participate in dealing with the situation where insufficient levels of these trace elements are present in the cell. The crystal structure of a NiCo riboswitch has been determined, and available evidence suggests that the riboswitches bind their metal-ion ligands cooperatively. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
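Cooperative binding of this kind is commonly summarized with the Hill equation (a generic biochemical model, used here only to illustrate what "cooperatively" means, not a result specific to the NiCo riboswitch):

$$ \theta = \frac{[L]^{n}}{K_{0.5}^{\,n} + [L]^{n}} $$

where θ is the fraction of riboswitch molecules with metal bound, [L] is the free metal-ion concentration, K₀.₅ is the concentration giving half-maximal occupancy, and a Hill coefficient n > 1 indicates that binding of one ion favours binding of the next.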
**Environmental disease**
Environmental disease:
In epidemiology, environmental diseases are diseases that can be directly attributed to environmental factors (as distinct from genetic factors or infection). Apart from the true monogenic genetic disorders, which are rare, environment is a major determinant of the development of disease. Diet, exposure to toxins, pathogens, radiation, and chemicals found in almost all personal care products and household cleaners, stress, racism, and physical and mental abuse are causes of a large segment of non-hereditary disease. If a disease process is concluded to be the result of a combination of genetic and environmental factor influences, its etiological origin can be referred to as having a multifactorial pattern.
Environmental disease:
There are many different types of environmental disease, including: disease caused by physical factors in the environment, such as skin cancer caused by excessive exposure to ultraviolet radiation in sunlight; disease caused by exposure to toxic or irritant chemicals in the environment, such as toxic metals; disease caused by exposure to toxins from biologic agents in the environment, such as aflatoxicosis from molds that produce aflatoxin; disease caused by exposure to toxic social factors in the environment, such as racism; and lifestyle diseases such as cardiovascular disease, diseases caused by substance abuse such as alcoholism, and smoking-related disease.
Environmental Diseases vs. Pollution-Related Diseases:
Environmental diseases are a direct result of the environment. Meanwhile, pollution-related diseases are attributed to exposure to toxicants or toxins in the air, water, and soil. Therefore, all pollution-related diseases are environmental diseases, but not all environmental diseases are pollution-related diseases.
Urban Associated Diseases:
Urban areas are highly dense regions that currently hold ~50% of the global population, a number expected to grow to 70% by 2050, and produce over 80% of the global GDP. These areas are known to have a higher incidence of certain diseases, which is of particular concern given their rapid growth. The urban environment includes many risk factors for a variety of different environmental diseases. Some of these risk factors, for instance, air-pollution, are well known, while others such as altered microbial exposure are less familiar to the general public. For instance, asthma can be induced and exacerbated by combustion related pollution, which is more prevalent in urban areas. On the other hand, urban areas, compared to their rural counterparts, lack diverse microbial communities, which can help prevent the development of asthma. Both of these effects lead to a higher incidence of asthma in cities. Infectious diseases are also often more common in cities, as transfer between hosts is facilitated by high population densities. However, recent research shows that increased access to healthcare weakens the urban association with these diseases, and the net effect is still unclear. Many mental health disorders have also been associated with urban areas, especially in low socioeconomic areas. Increased levels of stress, air & light & noise pollution, and reduced "green" space are all urban-associated environmental effects that are adversely linked to mental health. Though urban areas are often correlated with dirtiness and disease, they are likely to have more access to higher quality health care which can lead to more positive health outcomes. This benefit will continue to grow as innovation in health technologies steadily rises. Taking this into account, while overall trends do exist, it is important to note that urban risk factors are nuanced and often city and context dependent.
Chemicals:
Metals: Poisoning by lead and mercury has been known since antiquity. Other toxic metals or metals that are known to evoke adverse immune reactions are arsenic, phosphorus, zinc, beryllium, cadmium, chromium, manganese, nickel, cobalt, osmium, platinum, selenium, tellurium, thallium, uranium, and vanadium.
Chemicals:
Halogens: There are many other diseases likely to have been caused by common anions found in natural drinking water. Fluoride is one of the most common, found in drier climates where the geology favors release of fluoride ions to soil as the rocks decompose. In Sri Lanka, 90% of the country is underlain by crystalline metamorphic rocks, most of which carry mica as a major mineral. Mica carries fluoride in its structure and releases it to the soil when it decomposes. In dry and arid climates, fluoride concentrates in the topsoil and slowly dissolves into shallow groundwater. This has been the cause of high fluoride levels in drinking water where the majority of rural Sri Lankans obtain their drinking water from backyard wells. High fluoride in drinking water has caused a high incidence of fluorosis among the dry zone population in Sri Lanka. However, in the wet zone, high rainfall effectively removes fluoride from soils, and no fluorosis is evident there. In some parts of Sri Lanka iodine deficiency has also been noted, which has been identified as a result of iodine fixation by hydrated iron oxide found in lateritic soils in wet coastal lowlands.
Chemicals:
Organic compounds: Additionally, there are environmental diseases caused by aromatic carbon compounds, including benzene, hexachlorocyclohexane, toluene diisocyanate, phenol, pentachlorophenol, quinone and hydroquinone. Also included are the aromatic nitro-, amino-, and pyridilium-derivatives: nitrobenzene, dinitrobenzene, trinitrotoluene, paramethylaminophenol sulfate (Metol), dinitro-ortho-cresol, aniline, trinitrophenylmethylnitramine (tetryl), hexanitrodiphenylamine (aurantia), phenylenediamines, and paraquat. The aliphatic carbon compounds can also cause environmental disease. Included in these are methanol, nitroglycerine, nitrocellulose, dimethylnitrosamine, and the halogenated hydrocarbons: methyl chloride, methyl bromide, trichloroethylene, carbon tetrachloride, and the chlorinated naphthalenes. Also included are the glycols ethylene chlorhydrin and diethylene dioxide. Noxious gases: Noxious gases can be categorized as simple asphyxiants, chemical asphyxiants, and irritant gases. The simple asphyxiants are nitrogen, methane, and carbon dioxide.
Chemicals:
The chemical asphyxiants are carbon monoxide, sulfuretted hydrogen and hydrogen cyanide. The irritant gases are sulfur dioxide, ammonia, nitrogen dioxide, chlorine, phosgene, and fluorine and its compounds, which include hydrofluoric acid, fluorspar, fluorapatite, cryolite, and organic fluorine compounds.
Categorization and surveillance:
The U.S. Coast Guard has developed a Coast Guard-wide comprehensive system for surveillance of workplace diseases. The American Medical Association's fifth edition of the Current Medical Information and Terminology (CMIT) was used as a reference to expand the basic list of 50 Sentinel Health Events (Occupational) [SHE(O)] published by the National Institute for Occupational Safety and Health (NIOSH) in September 1983.
Notes:
The Diseases of Occupations, Sixth Edition, Donald Hunter, C.B.E., D.Sc., M.D., F.R.C.P., Hodder and Stoughton, London. ISBN 0-340-22084-8, 1978.
Stockwell JR, Adess ML, Titlow TB, Zaharias GR. Use of sentinel health events (occupational) in computer assisted occupational health surveillance. Aviat Space Environ Med. 1991 Aug;62(8):795-7. U.S. Coast Guard Office of Health Services, Washington, D.C. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Guard hair**
Guard hair:
Guard hair or overhair is the outer layer of hair of most mammals, which overlies the underfur. Guard hairs are long and coarse and protect the rest of the pelage (fur) from abrasion and frequently from moisture. They are visible on the surface of the fur and usually lend a characteristic contour and colour pattern. Underneath the contour hair is the short, dense, fine down. There are three types of guard hair: awns, bristles, and spines.
Description:
Guard hair (overhair) is the top or outer layer of the coat. Guard hairs are longer, generally coarser, and have nearly straight shafts that protrude through the layer of softer down hair. The distal end of the guard hair is the visible layer of most mammal coats. The guard hair is spread over most of the skin, constitutes the main mass of the fur, and gives the contour and color of the fur, from the combinations in different percentages of two pigments: brown and black eumelanin, and yellow and red pheomelanin. This layer has the most marked pigmentation and gloss, manifesting as coat markings that are adapted for camouflage or display. Guard hair repels water and blocks sunlight, protecting the undercoat and skin in wet or aquatic habitats, and from the sun's ultraviolet radiation. Guard hairs can also reduce the severity of cuts or scratches to the skin. Many mammals, such as the domestic dog and cat, have a pilomotor reflex that raises their guard hairs as part of a threat display when agitated.
Types of guard hair:
The three major types of guard hair recognized are awns, spines and bristles, although intermediates between these types are known.
Types of guard hair:
Awn: Awn hair is the most common type of guard hair, characterized by an expanded distal end, a thin base, and definitive growth. The shafts of the awn are thick and usually spindle-shaped (or blade- or shieldlike), thinning gradually at the tip. The spindle-shaped portion is hard and usually differently colored. Awns usually lie in one direction, giving the pelage a distinct contour. This type of hair is found in carnivorans. In equids, cattle, and on the face and legs of sheep, the covering bristles are shorter, stiffer, straighter, and inconspicuously spindled.
Types of guard hair:
Bristles: Bristles are firm, generally long hairs in equids and bovids, some carnivorans, etc., that grow continuously and form manes. The mane often varies by sex, serving to distinguish the sexes (sexual dimorphism). For example, male lions have a collar of long, hard outline hair that grows continuously and extends to the shoulders and forms a mane on the back of the neck. Bristles function as visual signals that augment facial expressions (e.g., lions) or body postures (e.g., horses).
Types of guard hair:
Spines: The guard hairs are sometimes modified to form defensive spines. Spines are the stiff, enlarged guard hairs that exhibit definitive growth and form the protective quills of echidnas, hedgehogs and especially porcupines. In porcupines, the cuticular scales are elongate and form barbs that make it difficult to remove embedded spines. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cripple punk**
Cripple punk:
The cripple punk movement, also known as cpunk, crippunk, or cr*pple punk, is a social movement regarding physical disability rights that rejects inspirational portrayals of those with physical disabilities on the sole basis of their physical disability. Started by Tyler Trewhella in 2014 on Tumblr, the movement draws inspiration from ideas and values of the punk subculture. It challenges the idea that people with physical disabilities need to appear morally good to deserve the conditional support of able-bodied people, and instead advocates for the solidarity of physically disabled people who appear not to conform to normative standards through their appearance, body size, dress, use of a mobility aid, drug use, or physical deformity.
Origins:
The cripple punk tag was started in 2014 by a Tumblr user, Tyler Trewhella, who posted a picture of themselves standing with a cane and a lit cigarette, with the caption "cripple punk" layered over the top, and the description "I'm starting a movement." The post would go on to be liked and reblogged by over 40,000 people, with the caption being used as a tag to boost other posts and images of physically disabled people going against the typical perception of people with physical disabilities.
Ideology:
Cripple punk ideology centers and prioritizes the experiences of physically disabled people over the pressure to conform to the standards that able-bodied people uphold. The movement is made explicitly by and for people with physical disabilities and aims to depict how they navigate the world, as opposed to able-bodied people. Participation is not contingent on people being comfortable with using the word "cripple", and alternative spellings or censoring is accepted. The movement challenges the ideas that people with physical disabilities need to be entirely unproblematic, without fault, and must give all of their energy to trying to act or look less disabled, or that physically disabled people are either a source of inspiration or an object of pity. Instead, it focuses on basic survival and quality of life improvement for physically disabled people through the support and solidarity of other physically disabled people. It also supports unlearning forms of internalized ableism, and those who are going through the process of doing so. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Heat escape lessening position**
Heat escape lessening position:
The heat escape lessening position (HELP) is a human position to reduce heat loss while immersed in cold water.
Position:
HELP is taught as part of the curriculum in Australia, North America, and Ireland for lifeguard and boating safety training. It involves positioning one's knees together and hugging them close to the chest using one's arms. Furthermore, groups of people can huddle together in this position to conserve body heat, offer moral support, and provide a larger target for rescuers. The HELP is an attempt to reduce heat loss enough to lessen the effect of hypothermia. Hypothermia is a condition where bodily temperature drops too low to perform normal voluntary or involuntary functions. Cold water causes "immersion hypothermia", which can cause damage to extremities or the body's core, including unconsciousness or death. The HELP reduces exposure of high heat loss areas of the body. Wearing a personal flotation device allows a person to draw their knees to their chest and arms to their sides, while still remaining able to breathe. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**4F2 cell-surface antigen heavy chain**
4F2 cell-surface antigen heavy chain:
4F2 cell-surface antigen heavy chain is a protein that in humans is encoded by the SLC3A2 (solute carrier family 3 member 2) gene. SLC3A2 comprises the heavy subunit of the large neutral amino acid transporter (LAT1) that is also known as CD98 (cluster of differentiation 98).
Function:
SLC3A2 is a member of the solute carrier family and encodes a cell surface, transmembrane protein with an alpha-amylase domain. The protein exists as the heavy chain of a heterodimer, covalently bound through di-sulfide bonds to one of several possible light chains. It associates with integrins and mediates integrin-dependent signaling related to normal cell growth and tumorigenesis. Alternate transcriptional splice variants, encoding different isoforms, have been characterized. LAT1 is a heterodimeric membrane transport protein that preferentially transports neutral branched (valine, leucine, isoleucine) and aromatic (tryptophan, tyrosine, phenylalanine) amino acids. LAT is highly expressed in brain capillaries (which form the blood brain barrier) relative to other tissues. A functional LAT1 transporter is composed of two proteins encoded by two distinct genes: the 4F2hc/CD98 heavy subunit protein encoded by the SLC3A2 gene (this gene), and the CD98 light subunit protein encoded by the SLC7A5 gene.
Interactions:
SLC3A2 has been shown to interact with SLC7A7. Additionally, SLC3A2 is a constituent member of the system xc- cystine/glutamate antiporter, complexing with SLC7A11. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Headstarting**
Headstarting:
Headstarting is a conservation technique for endangered species, in which young animals are raised artificially and subsequently released into the wild. The technique allows a greater proportion of the young to reach independence, without predation or loss to other natural causes. For endangered birds and reptiles, eggs are collected from the wild and hatched using an incubator. For mammals such as Hawaiian monk seals, the young are removed from their mothers after weaning. The technique was trialled on land-based mammals for the first time in Australia. In the three years prior to May 2021, young bridled nail-tail wallabies were placed in a fenced-off 10-hectare (25-acre) area within Avocet Nature Refuge in Queensland. The population, safe from their main predator, feral cats, more than doubled over this period. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Eul-yong Ta**
Eul-yong Ta:
Eul-yong Ta (Korean: 을용타; Hanja: 乙容打) is a South Korean internet phenomenon used to describe the incident when South Korean footballer Lee Eul-yong slapped Chinese forward Li Yi in the back of his head in a match against China in December 2003, or to describe the slap itself. The incident led to numerous parodies in South Korea.
Eul-yong Ta derives its name from Lee Eul-yong, who was carded for slapping Li Yi in the head, and ta (打), which means strike or blow in Hanja. The term would roughly translate as "Eul-yong Strike" or "Eul Yong Smash" in English.
Background:
Koreaphobia "Koreaphobia" (공한증) has been coined by Chinese journalists to describe expectation of China losing against Korea. The Korea Republic national football team went head-to-head against the Chinese team 26 times as of 2006. The results of the match were 15 wins and 11 draws.
Background:
Anti-Chinese sentiment in Korea: The Korean media has said that the popularity of Eul-yong Ta is a result of anger over Chinese-Korean geographical disputes. Prior to the time of the match, South Korea had been involved in geographical disputes with China regarding Goguryeo, Jiandao, and Northeast Project of the Chinese Academy of Social Sciences. In 1998, a Chinese goalkeeper injured Hwang Sun-Hong, the main forward for the national team until his retirement in 2003. This prevented Hwang from playing in the 1998 World Cup in France.
Background:
The match: The two countries faced each other during the East Asian Cup 2003. During the match, South Korea was leading by one goal after the first half, as Yoo Sang-Chul headed in a ball passed by Lee.
Background:
In the second half of the game, Chinese forward Li Yi deliberately kicked Lee's right shin after Lee completed a pass. Having recovered from a recent ankle injury, Lee was upset at Li's foul play and slapped Li on the back of his head, after which Li Yi started to roll on the ground grabbing his head. For a while the entire Chinese and Korean squads ran toward the scene, resulting in a brief ruckus and some degree of physical contact, but further conflict did not occur, as the referee awarded a yellow card to Li (for simulation) and a red card to Lee (for violence).
Reaction and influence:
When Lee received a red card for violence, the Korean media accused Li Yi of "Hollywood action." At the same time, the media described Lee's expression in the picture as "proud" and "remorseless." The picture of Li Yi painfully clasping his head in front of Lee, who is casting an angry glance at him, accompanied by a video clip of the incident, spread across the Internet.
Reaction and influence:
Korean websites, especially DC Inside, known for high-level digital image editing and digital photography, began to photoshop and produce numerous parodies to ridicule the incident. Initially, the photoshopped works showed Lee holding a hammer or a chainsaw. Photoshopped works of the Korean footballer holding a Korean history textbook in front of the Chinese and another picture of Lee in Eulji Mundeok's armor have also surfaced on the internet. Gradually, the parodies grew more complex over time as they were altered to resemble movie posters and statues. Examples of parodies include pictures of Lee holding an electric saw before the Chinese, driving a crane towards Li, and Lee in Goku's costume in his Super Saiyan form. As of June 2006, the number of parodies had exceeded 200. Less frequently, the Chinese athlete (Yang Chen) in the background has been the subject of photoshopping.
Reaction and influence:
Eul-yong Ta has been used to portray anti-Chinese sentiments and other social and political issues of South Korea such as strained relations between Korea and Japan. Eul-yong Ta became popular once again when South Korean gymnast Yang Tae Young lost to Paul Hamm in the 2004 Summer Olympics. New parodies continued to be made after the 2006 World Cup, when Korea lost in a match against Switzerland. Eul-yong Ta influenced new terms such as Eul-yong Chook and "Zidane-Ta" in South Korea when Zinedine Zidane headbutted Marco Materazzi during the final round of the 2006 World Cup.
Reaction and influence:
Lee's reaction: Lee Eul-yong had seen the parodies of Eul-yong Ta "just once." He regrets the incident and has said that he should have controlled his temper as a veteran on the field. To a question asking him whether he was offended, he responded that he was not offended by the parodies and content "as long as people laughed from the parodies." (그런 건 없다. 사람들이 그걸로 재미있으면 됐다.) Eul-yong Chook: In late 2005, another picture of Lee began to surface on the internet, which showed him in Bucheon FC's uniform performing a mid-air kick during the match against Suwon Samsung Bluewings. Like Eul-yong Ta, the picture became subject to parodies which include a photoshopped picture of Lee kicking Junichiro Koizumi to reflect the anti-Japanese sentiment in Korea and also Apolo Anton Ohno. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nasal palatal approximant**
Nasal palatal approximant:
The nasal palatal approximant is a type of consonantal sound used in some oral languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨j̃⟩, that is, a j with a tilde. The equivalent X-SAMPA symbol is j~, and in the Americanist phonetic notation it is ⟨ỹ⟩.
The nasal palatal approximant is sometimes called a nasal yod; [j̃] and [w̃] may be called nasal glides.
Features:
Features of the nasal palatal approximant: Its manner of articulation is approximant, which means it is produced by narrowing the vocal tract at the place of articulation, but not enough to produce a turbulent airstream.
Its place of articulation is palatal, which means it is articulated with the middle or back part of the tongue raised to the hard palate.
Its phonation is voiced, which means the vocal cords vibrate during the articulation.
It is a nasal consonant, which means air is allowed to escape through the nose, in this case in addition to through the mouth.
It is a central consonant, which means it is produced by directing the airstream along the center of the tongue, rather than to the sides.
The airstream mechanism is pulmonic, which means it is articulated by pushing air solely with the intercostal muscles and diaphragm, as in most sounds.
Occurrence:
[j̃], written ny, is a common realization of /j/ before nasal vowels in many languages of West Africa that do not have a phonemic distinction between voiced nasal and oral stops, such as Yoruba, Ewe and Bini languages. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Journal of Chemical Technology & Biotechnology**
Journal of Chemical Technology & Biotechnology:
The Journal of Chemical Technology & Biotechnology is a monthly peer-reviewed scientific journal. It was established in 1882 as the Journal of the Society of Chemical Industry by The Society of Chemical Industry (SCI). In 1950 it changed its title to Journal of Applied Chemistry and volume numbering restarted at 1. In 1971 the journal changed its title to Journal of Applied Chemistry and Biotechnology and in 1983 it obtained the current title. It covers chemical and biological technology relevant for economically and environmentally sustainable industrial processes. The journal is published by John Wiley & Sons on behalf of the Society of Chemical Industry.
Abstracting and indexing:
The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.174. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MDGRAPE-4**
MDGRAPE-4:
MDGRAPE-4 is a supercomputer under development at the RIKEN Quantitative Biology Center (QBiC) in Suita, Osaka, Japan. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Global spread of H5N1 in 2005**
Global spread of H5N1 in 2005:
The global spread of (highly pathogenic) H5N1 in birds is considered a significant pandemic threat.
Global spread of H5N1 in 2005:
While prior H5N1 strains have been known, they were significantly different from the current H5N1 strain on a genetic level, making the global spread of this new strain unprecedented. The current H5N1 strain is a fast-mutating, highly pathogenic avian influenza virus (HPAI) found in multiple bird species. It is both epizootic (an epidemic in non-humans) and panzootic (a disease affecting animals of many species especially over a wide area). Unless otherwise indicated, "H5N1" in this article refers to the recent highly pathogenic strain of H5N1.
Global spread of H5N1 in 2005:
In January 2005 an outbreak of avian influenza affected thirty three out of sixty four cities and provinces in Vietnam, leading to the forced killing of nearly 1.2 million poultry. Up to 140 million birds are believed to have died or been killed because of the outbreak. In April 2005 an unprecedented die-off of over 6,000 migratory birds began at Qinghai Lake in central China, continuing over three months. This strain of H5N1 is the same strain that was spread west by migratory birds over at least the next ten months. In August 2005 H5N1 spread to Kazakhstan, Mongolia and Russia. On September 30, 2005, David Nabarro, the newly appointed Senior United Nations System Coordinator for Avian and Human Influenza, warned the world that an outbreak of avian influenza could kill 5 to 150 million people. David Nabarro later stated that as the virus had spread to migratory birds, an outbreak could start in Africa or the Middle East. Later in 2005 H5N1 spread to Turkey, Romania, Croatia and Kuwait.
January:
An outbreak of avian influenza affected thirty three out of sixty four cities and provinces in Vietnam, leading to the forced killing of nearly 1.2 million poultry. Up to 140 million birds are believed to have died or been killed because of the outbreak.
February:
"Surveillance stepped up in province where Cambodia's first human avian influenza case was detected".
March:
Vietnam and Thailand have seen several isolated cases where human-to-human transmission of the virus has been suspected in care-givers of H5N1 patients, including a mother of a girl who died from H5N1 and two nurses.
April:
"The Ministry of Health in Vietnam has provided WHO with official confirmation of an additional eight human cases of H5N1 avian influenza. Two of the cases were recently detected, between 2 and 8 April, in Hung Yen and Ha Tay Provinces, respectively. Both patients are alive. The other six cases are thought to have been detected prior to 2 April. WHO is seeking further details from the authorities on this six cases." There is an unprecedented die-off of over 6,000 migratory birds at Qinghai Lake in central China during April, May and June. This strain of H5N1 is the same strain as is spread west by migratory birds over at least the next ten months. "The RNA sequence of the Qinghai virus reveals that three of its eight genes are almost identical to those of a virus isolated from a chicken in Shantou in 2003. The other five genes resemble those of viruses found in southern China earlier in 2005, which belong to the "Z genotype" virus circulating across east Asia."
May:
"Since January 2004, when human cases of H5N1 avian influenza were first reported in the current outbreak, 97 cases and 53 deaths have been reported in Vietnam, Thailand and Cambodia. Vietnam, with 76 cases and 37 deaths, has been the most severely affected country, followed by Thailand, with 17 cases and 12 deaths, and Cambodia, with 4 cases and 4 deaths."
June:
"[T]esting of clinical specimens by international experts working in Vietnam provided further suggestive evidence of more widespread infection with the virus, raising the possibility of community-acquired infection" but "the detection of H5N1 in clinical specimens is technically challenging and prone to errors" so team members and supplies from "institutes in Australia, Canada, Hong Kong SAR, Japan, the United Kingdom, and the United States of America having extensive experience in the testing of avian influenza viruses in human clinical specimens" investigated and concluded that "no laboratory evidence suggesting that human infections are occurring with greater frequency or that the virus is spreading readily among humans."
July:
A death in Jakarta was the first confirmed human fatality in Indonesia.
On July 28, avian influenza was reported to have killed two more people in Vietnam, raising the death toll to sixty.
August:
August 3, 2005 WHO said it was following closely reports from China that at least 38 people have died and more than 200 others have been made ill by a swine-borne virus in Sichuan Province. Sichuan Province, where infections with Streptococcus suis have been detected in pigs in a concurrent outbreak, has one of the largest pig populations in China. The outbreak in humans has some unusual features and is being closely followed by the WHO. August 11, 2005 An avian outbreak of H5N1 flu was confirmed in Kazakhstan and Mongolia, suggesting further spread of the virus. August 22, 2005 The virus was found in western Russia, marking its appearance in Europe. As a result, Dutch authorities ordered that free-range chickens would have to be kept indoors. EU officials chose not to impose a similar policy on member countries.
September:
September 30, 2005 David Nabarro, the newly appointed Senior United Nations System Coordinator for Avian and Human Influenza, warned the world that an outbreak of avian influenza could kill 5 to 150 million people. Also, due to a bipartisan effort of the United States Senate, $4 billion was appropriated to develop vaccines and treatments for Avian influenza. David Nabarro stated that as the virus had spread to migratory birds, an outbreak could start in Africa or the Middle East.
September:
Agricultural ministers of Association of South East Asian Nations announced a three-year plan to counter the spread of the disease.
October:
October 13, 2005 The EU Health Commissioner Markos Kyprianou confirmed that tests on the dead turkeys found on farms in Kiziksa, Turkey, showed that they had died from the H5N1 strain. Even before the test results were available, some 5,000 birds and poultry have been culled in the area. It is believed that the disease had spread from migratory birds that land at the Manyas bird sanctuary (a few miles from the infected farm) on their way to Africa. October 15, 2005 The British Veterinary Laboratory in Weybridge confirmed that the virus detected in Ciamurlia, Romania is H5N1. October 19, 2005 China announced a fresh outbreak of bird flu, saying 2,600 birds have died from the disease in Inner Mongolia. The deaths, at a farm near the region's capital of Hohhot, were due to the H5N1 strain, the Xinhua news agency said. October 26, 2005 Croatia announced H5N1 strain was found in dead swans. October 31, 2005 Russia confirmed previously suspected H5N1 bird flu in ten rural communities across Russia. The confirmed outbreak sites are in the central areas of Tula and Tambov, as well as in the Urals province of Chelyabinsk and in Omsk and Altai, in Siberia.
November:
November 12, 2005 Kuwait has reported positive testing of two birds, one infected with H5N1, and the other with the H5N2 virus, making them the first cases of infection in the Middle East. A flamingo holding the H5N1 virus was found dead by the sea, as Gulf News reports, it was killed by authorities and did not die from the virus.
December:
December 30, 2005 "China confirms its third human death from bird flu. That brings the death toll [...] to 74, comprising 14 victims in Thailand, four in Cambodia, 11 in Indonesia, 42 in Vietnam and three in China." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tunnel injection**
Tunnel injection:
Tunnel injection is a field electron emission effect; specifically, a quantum process called Fowler–Nordheim tunneling, whereby charge carriers are injected into an electric conductor through a thin layer of an electric insulator. It is used to program NAND flash memory. The process used for erasing is called tunnel release. This injection is achieved by creating a large voltage difference between the gate and the body of the MOSFET. When VGB >> 0, electrons are injected into the floating gate. When VGB << 0, electrons are forced out of the floating gate.
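The field dependence of this injection is commonly described by the simplified Fowler–Nordheim relation (textbook form; the constants A and B depend on the oxide barrier height and the carrier's effective mass):

$$ J = A\,E_{\mathrm{ox}}^{2}\,\exp\!\left(-\frac{B}{E_{\mathrm{ox}}}\right) $$

where J is the current density through the oxide and E_ox is the electric field across it. Because of the exponential factor, the tunnelling current is negligible at normal operating voltages and becomes significant only at the large gate–body voltages applied during programming and erasing.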
Tunnel injection:
An alternative to tunnel injection is the spin injection. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Camptodactyly, tall stature, and hearing loss syndrome**
Camptodactyly, tall stature, and hearing loss syndrome:
Camptodactyly, tall stature, and hearing loss syndrome, also known as CATSHL syndrome, is a rare genetic disorder which consists of camptodactyly, tall height, scoliosis, and hearing loss. Occasionally, developmental delay and intellectual disabilities are reported. About 30 (live) people with the disorder have been recorded in medical literature to date (May 2022); 27 people from a four-generation Utah family and 2 brothers from consanguineous Egyptian parents. This disorder is caused by autosomal dominant missense mutations in the FGFR3 gene. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pastiche**
Pastiche:
A pastiche is a work of visual art, literature, theatre, music, or architecture that imitates the style or character of the work of one or more other artists. Unlike parody, pastiche pays homage to the work it imitates, rather than mocking it. The word pastiche is a French cognate of the Italian noun pasticcio, which is a pâté or pie-filling mixed from diverse ingredients. Metaphorically, pastiche and pasticcio describe works that are either composed by several authors, or that incorporate stylistic elements of other artists' work. Pastiche is an example of eclecticism in art.
Pastiche:
Allusion is not pastiche. A literary allusion may refer to another work, but it does not reiterate it. Moreover, allusion requires the audience to share in the author's cultural knowledge. Both allusion and pastiche are mechanisms of intertextuality.
By art:
Literature: In literary usage, the term denotes a literary technique employing a generally light-hearted tongue-in-cheek imitation of another's style; although jocular, it is usually respectful. The word implies a lack of originality or coherence, an imitative jumble, but with the advent of postmodernism, pastiche has become positively construed as deliberate, witty homage or playful imitation. For example, many stories featuring Sherlock Holmes, originally penned by Arthur Conan Doyle, have been written as pastiches since the author's time. Ellery Queen and Nero Wolfe are other popular subjects of mystery parodies and pastiches. A similar example of pastiche is the posthumous continuations of the Robert E. Howard stories, written by other writers without Howard's authorization. This includes the Conan the Barbarian stories of L. Sprague de Camp and Lin Carter.
By art:
David Lodge's novel The British Museum Is Falling Down (1965) is a pastiche of works by Joyce, Kafka, and Virginia Woolf. In 1991 Alexandra Ripley wrote the novel Scarlett, a pastiche of Gone with the Wind, in an unsuccessful attempt to have it recognized as a canonical sequel.
In 2017, John Banville published Mrs. Osmond, a sequel to Henry James's The Portrait of a Lady, written in a style similar to that of James. In 2018, Ben Schott published Jeeves and the King of Clubs, an homage to P. G. Wodehouse's character Jeeves, with the blessing of the Wodehouse estate.
By art:
Music Charles Rosen has characterized Mozart's various works in imitation of Baroque style as pastiche, and Edvard Grieg's Holberg Suite was written as a conscious homage to the music of an earlier age. Some of Pyotr Ilyich Tchaikovsky's works, such as his Variations on a Rococo Theme and Serenade for Strings, employ a poised "classical" form reminiscent of 18th-century composers such as Mozart (the composer whose work was his favorite). Perhaps one of the best examples of pastiche in modern music is that of George Rochberg, who used the technique in his String Quartet No. 3 of 1972 and Music for the Magic Theater. Rochberg turned to pastiche from serialism after the death of his son in 1963.
By art:
"Bohemian Rhapsody" by Queen is unusual as it is a pastiche in both senses of the word, as there are many distinct styles imitated in the song, all "hodge-podged" together to create one piece of music. A similar earlier example is "Happiness is a Warm Gun" by the Beatles. One can find musical "pastiches" throughout the work of the American composer Frank Zappa. Comedian/parodist "Weird Al" Yankovic has also recorded several songs that are pastiches of other popular recording artists, such as Devo ("Dare to Be Stupid"), Talking Heads ("Dog Eat Dog"), Rage Against the Machine ("I'll Sue Ya"), and The Doors ("Craigslist"), though these so-called "style parodies" often walk the line between celebration (pastiche) and send-up (parody).
By art:
A pastiche Mass is a musical Mass where the constituent movements come from different Mass settings. Most often this convention has been chosen for concert performances, particularly by early-music ensembles. Masses are composed of movements: Kyrie, Gloria, Credo, Sanctus, Agnus Dei; for example, the Missa Solemnis by Beethoven and the Messe de Nostre Dame by Guillaume de Machaut. In a pastiche Mass, the performers may choose a Kyrie from one composer, and a Gloria from another; or choose a Kyrie from one setting of an individual composer, and a Gloria from another.
By art:
Musical theatre In musical theatre, pastiche is often an indispensable tool for evoking the sounds of a particular era for which a show is set. For the 1971 musical Follies, a show about a reunion of performers from a musical revue set between the World Wars, Stephen Sondheim wrote over a dozen songs in the style of Broadway songwriters of the 1920s and 1930s. Sondheim imitates not only the music of composers such as Cole Porter, Irving Berlin, Jerome Kern, and George Gershwin but also the lyrics of writers such as Ira Gershwin, Dorothy Fields, Otto Harbach, and Oscar Hammerstein II. For example, Sondheim notes that the torch song "Losing My Mind" sung in the show contains "near-stenciled rhythms and harmonies" from the Gershwins' "The Man I Love" and lyrics written in the style of Dorothy Fields. Examples of musical pastiche also appear in other Sondheim shows including Gypsy, Saturday Night, Assassins, and Anyone Can Whistle.
By art:
Film Pastiche can also be a cinematic device whereby filmmakers pay homage to another filmmaker's style and use of cinematography, including camera angles, lighting, and mise en scène. A film's writer may also offer a pastiche based on the works of other writers (this is especially evident in historical films and documentaries but can be found in non-fiction drama, comedy and horror films as well). Italian director Sergio Leone's Once Upon a Time in the West is a pastiche of earlier American Westerns. Another major filmmaker, Quentin Tarantino, often uses various plots, characteristics and themes from many films to create his films, among them from the films of Sergio Leone, in effect creating a pastiche of a pastiche. Tarantino has openly stated that "I steal from every single movie ever made." Director Todd Haynes' 2002 film Far from Heaven was a conscious attempt to replicate a typical Douglas Sirk melodrama—in particular All That Heaven Allows.
By art:
In cinema, the influence of George Lucas' Star Wars films (spawning their own pastiches, such as the 1983 3D film Metalstorm: The Destruction of Jared-Syn) can be regarded as a function of postmodernity.
By art:
Architecture In discussions of urban planning, the term "pastiche" may describe developments as imitations of the building styles created by major architects: with the implication that the derivative work is unoriginal and of little merit, and the term is generally attributed without reference to its urban context. Many 19th and 20th century European developments can in this way be described as pastiches, such as the work of Vincent Harris and Edwin Lutyens who created early 20th century Neoclassical and Neo-Georgian architectural developments in Britain, or of later pastiche works based on the architecture of the modernist Ludwig Mies van der Rohe and the Bauhaus movement. The term itself is not pejorative; however, Alain de Botton describes pastiche as "an unconvincing reproduction of the styles of the past". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vitalism**
Vitalism:
Vitalism is a belief that starts from the premise that "living organisms are fundamentally different from non-living entities because they contain some non-physical element or are governed by different principles than are inanimate things." Where vitalism explicitly invokes a vital principle, that element is often referred to as the "vital spark", "energy", "élan vital" (coined by vitalist Henri Bergson), "vital force", or "vis vitalis", which some equate with the soul. In the 18th and 19th centuries, vitalism was discussed among biologists, between those who felt that the known mechanics of physics would eventually explain the difference between life and non-life and vitalists who argued that the processes of life could not be reduced to a mechanistic process. Vitalist biologists such as Johannes Reinke proposed testable hypotheses meant to show inadequacies with mechanistic explanations, but their experiments failed to provide support for vitalism. Biologists now consider vitalism in this sense to have been refuted by empirical evidence, and hence regard it either as a superseded scientific theory, or, since the mid-20th century, as a pseudoscience.Vitalism has a long history in medical philosophies: many traditional healing practices posited that disease results from some imbalance in vital forces.
History:
Ancient times The notion that bodily functions are due to a vitalistic principle existing in all living creatures has roots going back at least to ancient Egypt. In Greek philosophy, the Milesian school proposed natural explanations deduced from materialism and mechanism. However, by the time of Lucretius, this account was supplemented, (for example, by the unpredictable clinamen of Epicurus), and in Stoic physics, the pneuma assumed the role of logos. Galen believed the lungs draw pneuma from the air, which the blood communicates throughout the body.
History:
Medieval In Europe, medieval physics was influenced by the idea of pneuma, helping to shape later aether theories.
History:
Early modern Vitalists included English anatomist Francis Glisson (1597–1677) and the Italian doctor Marcello Malpighi (1628–1694). Caspar Friedrich Wolff (1733–1794) is considered to be the father of epigenesis in embryology, that is, he marks the point when embryonic development began to be described in terms of the proliferation of cells rather than the incarnation of a preformed soul. However, this degree of empirical observation was not matched by a mechanistic philosophy: in his Theoria Generationis (1759), he tried to explain the emergence of the organism by the actions of a vis essentialis (an organizing, formative force). Carl Reichenbach (1788–1869) later developed the theory of Odic force, a form of life-energy that permeates living things.
History:
In the 17th century, modern science responded to Newton's action at a distance and the mechanism of Cartesian dualism with vitalist theories: that whereas the chemical transformations undergone by non-living substances are reversible, so-called "organic" matter is permanently altered by chemical transformations (such as cooking).As worded by Charles Birch and John B. Cobb, "the claims of the vitalists came to the fore again" in the 18th century: "Georg Ernst Stahl's followers were active as were others, such as the physician genius Francis Xavier Bichat of the Hotel Dieu." However, "Bichat moved from the tendency typical of the French vitalistic tradition to progressively free himself from metaphysics in order to combine with hypotheses and theories which accorded to the scientific criteria of physics and chemistry." John Hunter recognised "a 'living principle' in addition to mechanics."Johann Friedrich Blumenbach was influential in establishing epigenesis in the life sciences in 1781 with his publication of Über den Bildungstrieb und das Zeugungsgeschäfte. Blumenbach cut up freshwater Hydra and established that the removed parts would regenerate. He inferred the presence of a "formative drive" (Bildungstrieb) in living matter. But he pointed out that this name, like names applied to every other kind of vital power, of itself, explains nothing: it serves merely to designate a peculiar power formed by the combination of the mechanical principle with that which is susceptible of modification.
History:
19th century Jöns Jakob Berzelius, one of the early 19th century founders of modern chemistry, argued that a regulative force must exist within living matter to maintain its functions. Berzelius contended that compounds could be distinguished by whether they required any organisms in their synthesis (organic compounds) or whether they did not (inorganic compounds). Vitalist chemists predicted that organic materials could not be synthesized from inorganic components, but Friedrich Wöhler synthesised urea from inorganic components in 1828. However, contemporary accounts do not support the common belief that vitalism died when Wöhler made urea. This Wöhler Myth, as historian Peter Ramberg called it, originated from a popular history of chemistry published in 1931, which, "ignoring all pretense of historical accuracy, turned Wöhler into a crusader who made attempt after attempt to synthesize a natural product that would refute vitalism and lift the veil of ignorance, until 'one afternoon the miracle happened'".Between 1833 and 1844, Johannes Peter Müller wrote a book on physiology called Handbuch der Physiologie, which became the leading textbook in the field for much of the nineteenth century. The book showed Müller's commitments to vitalism; he questioned why organic matter differs from inorganic, then proceeded to chemical analyses of the blood and lymph. He describes in detail the circulatory, lymphatic, respiratory, digestive, endocrine, nervous, and sensory systems in a wide variety of animals but explains that the presence of a soul makes each organism an indivisible whole. He claimed that the behaviour of light and sound waves showed that living organisms possessed a life-energy for which physical laws could never fully account.
History:
Louis Pasteur (1822–1895) after his famous rebuttal of spontaneous generation, performed several experiments that he felt supported vitalism. According to Bechtel, Pasteur "fitted fermentation into a more general programme describing special reactions that only occur in living organisms. These are irreducibly vital phenomena." Rejecting the claims of Berzelius, Liebig, Traube and others that fermentation resulted from chemical agents or catalysts within cells, Pasteur concluded that fermentation was a "vital action".
History:
20th century Hans Driesch (1867–1941) interpreted his experiments as showing that life is not run by physicochemical laws. His main argument was that when one cuts up an embryo after its first division or two, each part grows into a complete adult. Driesch's reputation as an experimental biologist deteriorated as a result of his vitalistic theories, which scientists have seen since his time as pseudoscience. Vitalism is a superseded scientific hypothesis, and the term is sometimes used as a pejorative epithet. Ernst Mayr (1904–2005) wrote: It would be ahistorical to ridicule vitalists. When one reads the writings of one of the leading vitalists like Driesch one is forced to agree with him that many of the basic problems of biology simply cannot be solved by a philosophy as that of Descartes, in which the organism is simply considered a machine... The logic of the critique of the vitalists was impeccable.
History:
Vitalism has become so disreputable a belief in the last fifty years that no biologist alive today would want to be classified as a vitalist. Still, the remnants of vitalist thinking can be found in the work of Alistair Hardy, Sewall Wright, and Charles Birch, who seem to believe in some sort of nonmaterial principle in organisms.
History:
Other vitalists included Johannes Reinke and Oscar Hertwig. Reinke used the word neovitalism to describe his work, claiming that it would eventually be verified through experimentation, and that it was an improvement over the other vitalistic theories. The work of Reinke influenced Carl Jung.John Scott Haldane adopted an anti-mechanist approach to biology and an idealist philosophy early on in his career. Haldane saw his work as a vindication of his belief that teleology was an essential concept in biology. His views became widely known with his first book Mechanism, life and personality in 1913. Haldane borrowed arguments from the vitalists to use against mechanism; however, he was not a vitalist. Haldane treated the organism as fundamental to biology: "we perceive the organism as a self-regulating entity", "every effort to analyze it into components that can be reduced to a mechanical explanation violates this central experience". The work of Haldane was an influence on organicism. Haldane stated that a purely mechanist interpretation could not account for the characteristics of life. Haldane wrote a number of books in which he attempted to show the invalidity of both vitalism and mechanist approaches to science. Haldane explained: We must find a different theoretical basis of biology, based on the observation that all the phenomena concerned tend towards being so coordinated that they express what is normal for an adult organism.
History:
By 1931, biologists had "almost unanimously abandoned vitalism as an acknowledged belief."
Emergentism:
Contemporary science and engineering sometimes describe emergent processes, in which the properties of a system cannot be fully described in terms of the properties of the constituents. This may be because the properties of the constituents are not fully understood, or because the interactions between the individual constituents are important for the behavior of the system.
Emergentism:
Whether emergence should be grouped with traditional vitalist concepts is a matter of semantic controversy. According to Emmeche et al. (1997): On the one hand, many scientists and philosophers regard emergence as having only a pseudo-scientific status. On the other hand, new developments in physics, biology, psychology, and cross-disciplinary fields such as cognitive science, artificial life, and the study of non-linear dynamical systems have focused strongly on the high level 'collective behaviour' of complex systems, which is often said to be truly emergent, and the term is increasingly used to characterize such systems.
Mesmerism:
A popular vitalist theory of the 18th century was "animal magnetism", in the theories of Franz Mesmer (1734–1815). However, the use of the (conventional) English term animal magnetism to translate Mesmer's magnétisme animal can be misleading for three reasons: Mesmer chose his term to clearly distinguish his variant of magnetic force from those referred to, at that time, as mineral magnetism, cosmic magnetism and planetary magnetism.
Mesmerism:
Mesmer felt that this particular force/power only resided in the bodies of humans and animals.
Mesmerism:
Mesmer chose the word "animal," for its root meaning (from Latin animus="breath") specifically to identify his force as a quality that belonged to all creatures with breath; viz., the animate beings: humans and animals.Mesmer's ideas became so influential that King Louis XVI of France appointed two commissions to investigate mesmerism; one was led by Joseph-Ignace Guillotin, the other, led by Benjamin Franklin, included Bailly and Lavoisier. The commissioners learned about Mesmeric theory, and saw its patients fall into fits and trances. In Franklin's garden, a patient was led to each of five trees, one of which had been "mesmerized"; he hugged each in turn to receive the "vital fluid," but fainted at the foot of a 'wrong' one. At Lavoisier's house, four normal cups of water were held before a "sensitive" woman; the fourth produced convulsions, but she calmly swallowed the mesmerized contents of a fifth, believing it to be plain water. The commissioners concluded that "the fluid without imagination is powerless, whereas imagination without the fluid can produce the effects of the fluid."
Medical philosophies:
Vitalism has a long history in medical philosophies: many traditional healing practices posited that disease results from some imbalance in vital forces. In the Western tradition founded by Hippocrates, these vital forces were associated with the four temperaments and humours; Eastern traditions posited an imbalance or blocking of qi or prana. One example of a similar notion in Africa is the Yoruba concept of ase. Today forms of vitalism continue to exist as philosophical positions or as tenets in some religious traditions.Complementary and alternative medicine therapies include energy therapies, associated with vitalism, especially biofield therapies such as therapeutic touch, Reiki, external qi, chakra healing and SHEN therapy. In these therapies, the "subtle energy" field of a patient is manipulated by a practitioner. The subtle energy is held to exist beyond the electromagnetic energy produced by the heart and brain. Beverly Rubik describes the biofield as a "complex, dynamic, extremely weak EM field within and around the human body...."The founder of homeopathy, Samuel Hahnemann, promoted an immaterial, vitalistic view of disease: "...they are solely spirit-like (dynamic) derangements of the spirit-like power (the vital principle) that animates the human body." The view of disease as a dynamic disturbance of the immaterial and dynamic vital force is taught in many homeopathic colleges and constitutes a fundamental principle for many contemporary practising homeopaths.
Criticism:
Vitalism has sometimes been criticized as begging the question by inventing a name. Molière had famously parodied this fallacy in Le Malade imaginaire, where a quack "answers" the question of "Why does opium cause sleep?" with "Because of its dormitive virtue (i.e., soporific power)." Thomas Henry Huxley compared vitalism to stating that water is the way it is because of its "aquosity". His grandson Julian Huxley in 1926 compared "vital force" or élan vital to explaining a railroad locomotive's operation by its élan locomotif ("locomotive force").
Criticism:
Another criticism is that vitalists have failed to rule out mechanistic explanations. This is rather obvious in retrospect for organic chemistry and developmental biology, but the criticism goes back at least a century. In 1912, Jacques Loeb published The Mechanistic Conception of Life, in which he described experiments on how a sea urchin could have a pin for its father, as Bertrand Russell put it (Religion and Science). He offered this challenge: "... we must either succeed in producing living matter artificially, or we must find the reasons why this is impossible." (pp. 5–6)Loeb addressed vitalism more explicitly: "It is, therefore, unwarranted to continue the statement that in addition to the acceleration of oxidations the beginning of individual life is determined by the entrance of a metaphysical "life principle" into the egg; and that death is determined, aside from the cessation of oxidations, by the departure of this "principle" from the body. In the case of the evaporation of water we are satisfied with the explanation given by the kinetic theory of gases and do not demand that to repeat a well-known jest of Huxley the disappearance of the "aquosity" be also taken into consideration." (pp. 14–15)Bechtel states that vitalism "is often viewed as unfalsifiable, and therefore a pernicious metaphysical doctrine." For many scientists, "vitalist" theories were unsatisfactory "holding positions" on the pathway to mechanistic understanding. In 1967, Francis Crick, the co-discoverer of the structure of DNA, stated "And so to those of you who may be vitalists I would make this prophecy: what everyone believed yesterday, and you believe today, only cranks will believe tomorrow."While many vitalistic theories have in fact been falsified, notably Mesmerism, the pseudoscientific retention of untested and untestable theories continues to this day. Alan Sokal published an analysis of the wide acceptance among professional nurses of "scientific theories" of spiritual healing. (Pseudoscience and Postmodernism: Antagonists or Fellow-Travelers?). Use of a technique called therapeutic touch was especially reviewed by Sokal, who concluded, "nearly all the pseudoscientific systems to be examined in this essay are based philosophically on vitalism" and added that "Mainstream science has rejected vitalism since at least the 1930s, for a plethora of good reasons that have only become stronger with time."Joseph C. Keating, Jr. discusses vitalism's past and present roles in chiropractic and calls vitalism "a form of bio-theology." He further explains that: "Vitalism is that rejected tradition in biology which proposes that life is sustained and explained by an unmeasurable, intelligent force or energy. The supposed effects of vitalism are the manifestations of life itself, which in turn are the basis for inferring the concept in the first place. This circular reasoning offers pseudo-explanation, and may deceive us into believing we have explained some aspect of biology when in fact we have only labeled our ignorance. 'Explaining an unknown (life) with an unknowable (Innate),' suggests chiropractor Joseph Donahue, 'is absurd'."Keating views vitalism as incompatible with scientific thinking: "Chiropractors are not unique in recognizing a tendency and capacity for self-repair and auto-regulation of human physiology. But we surely stick out like a sore thumb among professions which claim to be scientifically based by our unrelenting commitment to vitalism. 
So long as we propound the 'One cause, one cure' rhetoric of Innate, we should expect to be met by ridicule from the wider health science community. Chiropractors can't have it both ways. Our theories cannot be both dogmatically held vitalistic constructs and be scientific at the same time. The purposiveness, consciousness and rigidity of the Palmers' Innate should be rejected."Keating also mentions Skinner's viewpoint: "Vitalism has many faces and has sprung up in many areas of scientific inquiry. Psychologist B.F. Skinner, for example, pointed out the irrationality of attributing behavior to mental states and traits. Such 'mental way stations,' he argued, amount to excess theoretical baggage which fails to advance cause-and-effect explanations by substituting an unfathomable psychology of 'mind'."According to Williams, "[t]oday, vitalism is one of the ideas that form the basis for many pseudoscientific health systems that claim that illnesses are caused by a disturbance or imbalance of the body's vital force." "Vitalists claim to be scientific, but in fact they reject the scientific method with its basic postulates of cause and effect and of provability. They often regard subjective experience to be more valid than objective material reality."Victor Stenger states that the term "bioenergetics" "is applied in biochemistry to refer to the readily measurable exchanges of energy within organisms, and between organisms and the environment, which occur by normal physical and chemical processes. This is not, however, what the new vitalists have in mind. They imagine the bioenergetic field as a holistic living force that goes beyond reductionist physics and chemistry."Such a field is sometimes explained as electromagnetic, though some advocates also make confused appeals to quantum physics. Joanne Stefanatos states that "The principles of energy medicine originate in quantum physics." Stenger offers several explanations as to why this line of reasoning may be misplaced. He explains that energy exists in discrete packets called quanta. Energy fields are composed of their component parts and so only exist when quanta are present. Therefore, energy fields are not holistic, but are rather a system of discrete parts that must obey the laws of physics. This also means that energy fields are not instantaneous. These facts of quantum physics place limitations on the infinite, continuous field that is used by some theorists to describe so-called "human energy fields". Stenger continues, explaining that the effects of EM forces have been measured by physicists as accurately as one part in a billion and there is yet to be any evidence that living organisms emit a unique field.Vitalistic thinking has been identified in the naive biological theories of children: "Recent experimental results show that a majority of preschoolers tend to choose vitalistic explanations as most plausible. Vitalism, together with other forms of intermediate causality, constitute unique causal devices for naive biology as a core domain of thought."
Sources:
Birch, Charles; Cobb, John B (1985). The Liberation of Life: From the Cell to the Community. CUP Archive. ISBN 9780521315142.
History and Philosophy of the Life Sciences. Vol. 29. 2007. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bunkering**
Bunkering:
Bunkering is the supplying of fuel for use by ships (such fuel is referred to as bunker), including the logistics of loading and distributing the fuel among available shipboard tanks. A person dealing in trade of bunker (fuel) is called a bunker trader.
Bunkering:
The term bunkering originated in the days of steamships, when coal was stored in bunkers. Nowadays, the term bunker is generally applied to the petroleum products stored in tanks, and bunkering to the practice and business of refueling ships. Bunkering operations take place at seaports and include the storage and provision of the bunker (ship fuels) to vessels. Singapore is currently the largest bunkering port in the world.
Two types of Bunkering:
The two most common types of bunkering procedure at sea are ship-to-ship bunkering (STS), in which one ship acts as a terminal while the other moors alongside, and stern-line bunkering, which is the easiest method but a riskier way of transferring oil during bad weather.
Bunkering in maritime law:
In many maritime contracts, such as charter parties, contracts for carriage of goods by sea, and marine insurance policies, the ship-owner or ship operator is required to ensure that the ship is seaworthy. Seaworthiness requires not only that the ship be sound and properly crewed, but also that it be fully fuelled (or "bunkered") at the start of the voyage. If the ship operator wishes to bunker en route, this must be provided for in a written agreement, or the interruption of the voyage may be deemed to be deviation (a serious breach of contract). If the vessel runs out of fuel in mid-ocean, this also constitutes serious breach, allowing the insurer to cancel a policy and allowing a consignee to make a cargo claim. It may also lead to a salvage operation.
Bunkering in maritime law:
The International Maritime Organisation is an agency of the United Nations responsible for the prevention of marine pollution by ships. On 1 January 2020, the agency began enforcing the IMO 2020 regulation of MARPOL Annex VI to minimise bunkering's environmental impact. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**NADPH oxidase**
NADPH oxidase:
NADPH oxidase (nicotinamide adenine dinucleotide phosphate oxidase) is a membrane-bound enzyme complex that faces the extracellular space. It can be found in the plasma membrane as well as in the membranes of phagosomes used by neutrophil white blood cells to engulf microorganisms. Human isoforms of the catalytic component of the complex include NOX1, NOX2, NOX3, NOX4, NOX5, DUOX1, and DUOX2.
Reaction:
NADPH oxidase catalyzes the production of a superoxide free radical by transferring one electron to oxygen from NADPH: NADPH + 2 O₂ ⇌ NADP⁺ + 2 O₂•⁻ + H⁺
Types:
In mammals, NADPH oxidase is found in two types: one in white blood cells (neutrophilic) and the other in vascular cells, differing in biochemical structure and functions. Neutrophilic NADPH oxidase produces superoxide almost instantaneously, whereas the vascular enzyme produces superoxide in minutes to hours. Moreover, in white blood cells, superoxide has been found to transfer electrons across the membrane to extracellular oxygen, while in vascular cells, the radical anion appears to be released mainly intracellularly.
Types:
Neutrophilic type The isoform found in neutrophils is made up of six subunits: a Rho GTPase, usually Rac1 or Rac2 (Rac stands for Rho-related C3 botulinum toxin substrate), and five phagocytic oxidase subunits: gp91phox (NOX2), p22phox (CYBA), p40phox (NCF4), p47phox (NCF1), and p67phox (NCF2). Vascular type There are several vascular isoforms of the complex which use paralogs of the NOX2 subunit: NOX1, NOX3, NOX4, and NOX5. Thyroid type There are two further paralogs of the NOX2 subunit in the thyroid: DUOX1 and DUOX2
Structure:
The whole structure of the membrane-bound vascular enzyme is composed of five parts: two cytosolic subunits (p47phox and p67phox), cytochrome b558 (which consists of gp91phox and p22phox), and a small G protein, Rac. Generation of the superoxide in vascular NADPH oxidase occurs by a one-electron reduction of oxygen via the gp91phox subunit, using reduced NADPH as the electron donor. The small G protein plays an essential role in the activation of the oxidase by switching between GDP-bound (inactive) and GTP-bound (active) forms.
Biological function:
NADPH oxidases (NOXes) are one of the major sources of cellular reactive oxygen species (ROS), and they still are the focus of extensive research interest due to their exclusive function in producing ROS under normal physiological conditions. The NADPH oxidase complex is dormant under normal circumstances, but is activated to assemble in the membranes during respiratory burst. The activated NADPH oxidase generates superoxide which has roles in animal immune response and plant signalling.Superoxide can be produced in phagosomes which have ingested bacteria and fungi, or it can be produced outside of the cell. In macrophages, superoxide kills bacteria and fungi by mechanisms that are not yet fully understood. Superoxide spontaneously dismutates to form peroxide which is then protonated to produce hydrogen peroxide. Opinions are polarised as to how the oxidase kills microbes in neutrophils. On the one hand it is thought that hydrogen peroxide acts as substrate for myeloperoxidase to produce hypochlorous acid. It may also inactivate critical metabolic enzymes, initiate lipid peroxidation, damage iron-sulphur clusters, and liberate redox-active iron, which allows the generation of indiscriminate oxidants such as the hydroxyl radical. An alternative view is that the oxidase elevates the pH in the vacuole to about 9.0, which is optimal for the neutral proteases that degranulate from the cytoplasmic granules (where they are inactive at pH ~5.5) and it pumps potassium into the vacuole, which solubilises the enzymes, and it is the activated proteases that kill and digest the microbes.In insects, NOXes had some functions clarified. Arthropods have three NOX types (NOX4-art, an arthropod-specific p22-phox-independent NOX4, and two calcium-dependent enzymes, DUOX). In the gut, DUOX-dependent ROS production from bacteria-stimulated Drosophila melanogaster mucosa is an important pathogen-killing mechanism and can increase defecation as a defense response. In Aedes aegypti, DUOX is involved in the control of the gut indigenous microbiota. Rhodnius prolixus has calcium-activated DUOX, which is involved in eggshell hardening, and NOX5, which is involved in the control of gut motility and blood digestion.
Biological function:
Regulation Careful regulation of NADPH oxidase activity is crucial to maintain a healthy level of ROS in the body. The enzyme is dormant in resting cells but becomes rapidly activated by several stimuli, including bacterial products and cytokines. Vascular NADPH oxidases are regulated by a variety of hormones and factors known to be important players in vascular remodeling and disease. These include thrombin, platelet-derived growth factor (PDGF), tumor necrosis factor (TNFa), lactosylceramide, interleukin-1, and oxidized LDL. It is also stimulated by agonists and arachidonic acid. Conversely, assembly of the complex can be inhibited by apocynin and diphenylene iodonium. Apocynin decreases influenza-induced lung inflammation in mice in vivo and so may have clinical benefits in the treatment of influenza. Ang-1 triggers NOX2, NOX4, and the mitochondria to release ROS, and ROS derived from these sources play distinct roles in the regulation of the Ang-1/Tie 2 signaling pathway and pro-angiogenic responses.
Pathology:
Superoxides are crucial in killing foreign bacteria in the human body. Consequently, under-activity can lead to an increased susceptibility to organisms such as catalase-positive microbes, and over-activity can lead to oxidative stress and cell damage.
Pathology:
Excessive production of ROS in vascular cells causes many forms of cardiovascular disease including hypertension, atherosclerosis, myocardial infarction, and ischemic stroke. Atherosclerosis is caused by the accumulation of macrophages containing cholesterol (foam cells) in artery walls (in the intima). ROS produced by NADPH oxidase activate an enzyme that makes the macrophages adhere to the artery wall (by polymerizing actin fibers). This process is counterbalanced by NADPH oxidase inhibitors, and by antioxidants. An imbalance in favor of ROS produces atherosclerosis. In vitro studies have found that the NADPH oxidase inhibitors apocynin and diphenyleneiodonium, along with the antioxidants N-acetyl-cysteine and resveratrol, depolymerized the actin, broke the adhesions, and allowed foam cells to migrate out of the intima.One study suggests a role for NADPH oxidase in ketamine-induced loss of neuronal parvalbumin and GAD67 expression. Similar loss is observed in schizophrenia, and the results may point at the NADPH oxidase as a possible player in the pathophysiology of the disease. Nitro blue tetrazolium is used in a diagnostic test, in particular, for chronic granulomatous disease, a disease in which there is a defect in NADPH oxidase; therefore, the phagocyte is unable to make the reactive oxygen species or radicals required for bacterial killing, resulting in bacteria thriving within the phagocyte. The higher the blue score the better the cell is at producing reactive oxygen species.
Pathology:
It has also been shown that NADPH oxidase plays a role in the mechanism that induces the formation of sFlt-1, a protein that deactivates certain proangiogenic factors involved in the development of the placenta, by facilitating the formation of reactive oxygen species, which are suspected intermediaries in sFlt-1 formation. These effects are in part responsible for inducing pre-eclampsia in pregnant women. Mutations Mutations in the NADPH oxidase subunit genes cause several chronic granulomatous diseases (CGD), characterized by extreme susceptibility to infection. These include: X-linked chronic granulomatous disease (CGD), autosomal recessive cytochrome b-negative CGD, autosomal recessive cytochrome b-positive CGD type I, and autosomal recessive cytochrome b-positive CGD type II. In these diseases, cells have a low capacity for phagocytosis, and persistent bacterial infections occur. Areas of infected cells, called granulomas, are common. A similar disorder called neutrophil immunodeficiency syndrome is linked to a mutation in the RAC2 gene, which encodes another part of the complex.
Pathology:
Inhibition NADPH oxidase can be inhibited by apocynin, nitric oxide (NO), and diphenylene iodonium. Apocynin acts by preventing the assembly of the NADPH oxidase subunits. Apocynin decreases influenza-induced lung inflammation in mice in vivo and so may have clinical benefits in the treatment of influenza. Inhibition of NADPH oxidase by NO blocks the source of oxidative stress in the vasculature. NO donor drugs (nitrovasodilators) have therefore been used for more than a century to treat coronary artery disease, hypertension, and heart failure by preventing excess superoxide from damaging healthy vascular cells. More advanced NADPH oxidase inhibitors include GKT-831 (formerly GKT137831), a dual inhibitor of isoforms NOX4 and NOX1 which was patented in 2007. The compound was initially developed for idiopathic pulmonary fibrosis and obtained orphan drug designation from the FDA and EMA at the end of 2010. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Super Mutant**
Super Mutant:
Super Mutants are a fictional race of posthuman beings from the post-apocalyptic Fallout video game franchise. The Super Mutants were first introduced in 1997's Fallout as the result of human experimentation with a strain of the Forced Evolutionary Virus (FEV), a genetically engineered viral mutagen which transforms the subjects into a hulking monstrous humanoid form. Within series lore, Super Mutants tend to be depicted as savage and innately violent beings who, as a result of their transformations, lost a substantial amount of intelligence and often have cannibalistic tendencies. While Super Mutants tend to form their own factions or societies and are usually hostile to civilized humans, some have chosen to live peacefully alongside humans in settlements across the post-apocalyptic wasteland. Among the most recognizable and integral elements of the Fallout intellectual property (IP), Super Mutants have appeared in every medium of the franchise, and have been the subject of numerous fan mods of Fallout series games. Certain individual Super Mutant characters have been well received by critics for their characterization, although the role the Super Mutant race has played throughout the history of the franchise has been a source of contention for some commentators.
Characteristics:
Super Mutants are first introduced in the first Fallout as part of an ambitious effort by a grotesque mutated being known as "The Master of the Super Mutants", to create a single, perfect "master race" and remove inequalities which have been the cause of strife among humans. The FEV, an artificially-created virus by a defense contractor and research corporation contracted by the American government, was originally meant to protect against various forms of biological warfare. Subsequent games in the Fallout series feature other antagonistic factions that employ different strains of the FEV to create Super Mutants. For example, Vault Tec, the technology company which built the nuclear fallout shelter facilities called Vaults, deliberately exposed the occupants of Vault 87 to a modified strain of the FEV as part of an unethical experiment called the Evolutionary Experimentation Program. The Institute, an elusive faction of scientists in Fallout 4, kidnap the inhabitants of the surrounding region and subject their victims to another strain of FEV for experimentation. Super Mutants created by the Institute are less intelligent compared to other variants but are capable of speaking like normal humans.The physiology of a Super Mutant is very different compared to ordinary humans; the most immediately noticeable effects are their immense size and strength, their different skin color, and their immunity to radiation damage. A standard Super Mutant's skin tone is usually green or yellow and stands approximately 10.4 feet tall, 7.8 feet when hunching, and weighs around 800 pounds. The Nightkin, an elite Super Mutant caste, have gray-blue colored skin and largely retain their pre-existing intellect unlike their lesser brethren. They are often equipped with cloaking devices known as "Stealth Boys" and developed schizophrenia from prolonged use. Other varieties of Super Mutants encountered in Fallout 3 and Fallout 4 grow much larger when they age. An immensely large variety first introduced in Fallout 3, the Behemoth, stand at roughly 20 feet tall and could grow up to 30 feet in height.In comparison to the series' other notable mutated beings like ghouls, Super Mutants are very formidable opponents in combat. Behemoths in particular are capable of overpowering opponents like Deathclaws and hordes of Feral Ghouls with little difficulty, and are unexpectedly nimble for their size. In spite of their physically impressive qualities, Super Mutants are sterile and cannot reproduce as the gametes of the reproductive system consist of "half-cells" using split DNA, which could be perceived as "damage" by the FEV, and in the attempt to "repair" them, it would render the subject infertile.
Development:
According to producer Tim Cain, the Fallout development team conceived of a faction of mutants who grow their ranks by dipping people into virus vats. During the discussion, a team member wondered what would happen if more than one person was dropped into the vat. The team decided to conceptualize such a being as the "Master of the Super Mutants", who would switch between three voices: male, female, and electronic. The Master's birth surname, Moreau, references the titular mad scientist of the 1896 novel The Island of Doctor Moreau by H. G. Wells, who leads a group of genetically altered beings he has created through unethical experiments.Concept artist Adam Adamowicz was responsible for conceptualizing the Fallout 3 iteration of the Super Mutant, in particular the Behemoth variant. In a 2008 developmental diary titled "Conceptual Design", Adamowicz said he was inspired by a line of dialogue from the 1974 film Blazing Saddles about the dim-witted but strong and tough character Mongo, Adamowicz wanted the game's Super Mutants to look like "they would step into a tree shredder, for relaxation.". He described their musculature to be straining at the bone structure underneath, evoking a caricature of a person with well developed muscles "in the throes of radioactive testosterone poisoning, and liking it". The armor and equipment used by the Super Mutants of Fallout 3 are predominantly based on salvaged material, such as car hoods and fenders crudely pounded into chest plates and pauldrons, and lawn mower blades welded onto helmets. The idea behind this design approach is to show recognizable real world elements being twisted to a more violent purpose, which conveys a sinister resourcefulness by these creatures to survive in a highly dangerous post-apocalyptic world.Adamowicz noted that his “junkyard wars” approach to designing the Super Mutants' armor and weapons inspired a number outlandish designs for homemade weapons, with some of them implemented in the final game. Adamowicz incorporated numerous visual gags into the character design for the Behemoths; examples given include their wielding of parking meters as if they are police batons and carrying their victims bound by shopping carts on their backs, which they will occasionally consume when they get hungry.Obsidian Entertainment, the developers of Fallout: New Vegas, initially considered allowing players the opportunity to play as a ghoul or a Super Mutant for their protagonist in New Vegas. The team faced technical limitations as New Vegas shared the same engine as Fallout 3, and the developers realized that the game engine's equipment system would not work properly for player characters which use non-human character models after Bethesda provided advice discouraging the addition of the proposed feature. Modiphius founder Chris Birch said the inclusion of Super Mutants as a playable race in the officially licensed Fallout tabletop roleplaying game, developed by his company to be part of their design decision to make the game "authentically Fallout".
Appearances:
Super Mutants have appeared in every Fallout video game as both hostile antagonists and supporting non-player characters, beginning with 1997's Fallout which takes place after a global nuclear war had destroyed most of human civilization a century earlier. Information about how the "Master of Super Mutants" raised an army of Super Mutants is picked up throughout gameplay prior to the player's final confrontation with the character in the cathedral. The Master was originally a human named Richard Moreau who was exiled from Vault City as a murder suspect. He changed his surname to Grey and migrated to a merchant town called the Hub, where he became a physician. Grey met a trader named Harold in 2102, and they form a joint expedition to discover the source of the mutant attacks on Harold's caravans. This led them to Mariposa Military Base, where the expedition group is overwhelmed by mutant forces. Grey is knocked into a vat of the Forced Evolutionary Virus (F.E.V.), which mutated him into a blob-like creature that expanded himself by absorbing other humans.After a month of simmering, Grey crawled out of the vat and installed himself inside a vault. Driven insane and severely mutated, he decided that humanity was inept and had to be replaced by a master race. Through his experimentations on humans with the F.E.V., the Master managed to create a race of virtually immortal monsters that were immune to disease and radiation. Deeming Super Mutants to be the superior race, the Master went on a campaign to replace all the humans with the Super Mutants by infecting them with the virus. To accomplish this, the Master created a cult called the Children of the Cathedral as a front for his campaign, using aliases such as the "Holy Flame" or "Father Hope" when dealing with his followers.When the player engages the Master near the end of the game, they can confront the Master with a number of different skills and approaches. The player can enter combat with the Master and his Super Mutant allies. The player can also convince the Master to abandon his plans by revealing to him that Super Mutants are sterile, a revelation that causes the Master to commit suicide. Skipping the confrontation entirely, the player can sneak into his chambers and find a hidden nuclear arsenal that can blow up the cathedral, killing the Master and his Super Mutant associates. The player also has the option of joining the Master's campaign, which will lead to a bad ending in which Vault 13 is raided by Super Mutants.Following the death of the Master in the canonical ending of the original Fallout, the Super Mutant remnants of his organization are hunted by the Brotherhood of Steel, who would label them "Frankenstein's Monsters" as a derogatory term. Survivors scatter from their base of operations beneath a cathedral and go their separate ways to find a new purpose: many resort to become raiders and resume their violent ways against baseline humanity, while some individuals congregate together in non-violent communities and attempt to live in peace with humans and ghouls.Players could recruit Super Mutant characters as traveling companions or allies in the sequel to 1997's Fallout. Noteworthy companion characters throughout the series include Marcus in Fallout 2, Fawkes in Fallout 3, Lily Bowen in Fallout: New Vegas, and Strong in Fallout 4. Super Mutants are available as a playable race in the Fallout tabletop roleplaying game released in 2021.
Cultural impact:
Promotion and merchandise Super Mutants are featured as part of a range of Fallout-themed Funko Pop figurines which were first released in 2015, and are described as one of the franchise's "iconic characters". Super Mutants have been marketed as figurines for Fallout: Wasteland Warfare, a miniatures wargame which adapts the Fallout universe.
Cultural impact:
Critical reception and analysis Commenting on Adamowicz's developer diary post in 2008, Joseph Leray from Destructoid praised the creative vision and art style for the Super Mutants of Fallout 3 as "delightfully twisted", and said these aesthetic elements were key ingredients "that makes post-apocalyptic stories work". Some of the best side quests in Fallout 4 according to The Escapist's Ron Whitaker involve conflicts with Super Mutants, with one that culminates in the recruitment of Strong as a companion character noted to be a highlight by Whitaker. Some individual Super Mutant characters have received critical acclaim. For example, the Nightkin character Lily Bowen has been praised for her characterization and has appeared in "top" character lists, including Polygon's "The 70 best video game characters of the decade".In a paper published by the University of Leicester, Conor Appleton scaled an ordinary Super Mutant up to the size of a Behemoth and used dimensional analysis to determine the plausibility and viability of such a creature's existence. After examining the Behemoth's physiology using in-universe data and discussing how it may be affected by gravity, Appleton concluded that it is unlikely that such a creature would survive in reality.
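The paper itself is not reproduced here, but the flavour of such a dimensional argument can be sketched with the square-cube relation below, using only the in-game figures quoted earlier in this article (a roughly 10.4 ft, 800 lb standard Super Mutant and a roughly 20 ft Behemoth); this is a generic illustration, not Appleton's actual calculation.

```latex
% Isometric (square-cube) scaling sketch using figures quoted in this article.
% Mass grows with volume (k^3) while bone cross-section grows with area (k^2),
% so skeletal stress grows roughly linearly with the scale factor k.
k \approx \frac{20\ \mathrm{ft}}{10.4\ \mathrm{ft}} \approx 1.9, \qquad
m_{\mathrm{Behemoth}} \approx 800\ \mathrm{lb} \times k^{3} \approx 5{,}500\ \mathrm{lb}, \qquad
\sigma_{\mathrm{bone}} \propto \frac{m}{A} \propto k
```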
Cultural impact:
Fandom Super Mutants are a recognizable element of the Fallout franchise. Some characters are particularly popular with players. Fawkes from Fallout 3 was among the top-voted characters in a RPG character poll organized by IGN in 2014. Super Mutant characters and their physiology are also the subject of popular works derived from fan labor, such as mods, music videos and fan films.On the other hand, the portrayal of Super Mutants following the transition of the series' ownership to Bethesda have been a source of contention for some fans. Story elements introduced in Fallout 3 contradicted lore in the first Fallout, developed by series creators Interplay Entertainment, which established that Super Mutants were originally created by the Master in 2103, while the Vault 87 FEV experiment took place in 2078. In a report about the Fallout fansite, No Mutants Allowed, Luke Winkie from Kotaku highlighted criticism from community members about the creative and design decisions in video games developed by Bethesda Game Studios: one aspect of contention involved the ubiquity of Super Mutants as ordinary "mob bad guys" in the early game levels of Fallout 3, which departed from the perspective of escalation presented in the original games' stories, gameplay mechanics and setting. The presence of Super Mutants in Fallout 76 also proved to be a controversial retcon with series fandom, as the game established that its iteration of Super Mutants is native to the region and thus unrelated to the Super Mutants in previous games. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Oncostatin M**
Oncostatin M:
Oncostatin M, also known as OSM, is a protein that in humans is encoded by the OSM gene. OSM is a pleiotropic cytokine that belongs to the interleukin 6 group of cytokines. Of these cytokines it most closely resembles leukemia inhibitory factor (LIF) in both structure and function. Although its roles are as yet poorly defined, it is proving important in liver development, haematopoiesis, inflammation and possibly CNS development. It is also associated with bone formation and destruction. OSM signals through cell surface receptors that contain the protein gp130: the type I receptor is composed of gp130 and LIFR, and the type II receptor is composed of gp130 and OSMR.
Discovery, isolation and cloning:
The human form of OSM was originally isolated in 1986 from the growth media of PMA treated U-937 histiocytic lymphoma cells by its ability to inhibit the growth of cell lines established from melanomas and other solid tumours. A robust protein, OSM is stable between pH2 and 11 and resistant to heating for one hour at 56 °C. A partial amino acid sequence allowed the isolation of human OSM cDNA and subsequently genomic clones. The full cDNA clone of hOSM encodes a 252 amino acid precursor, the first 25 amino acids of which functions as a secretory signal peptide, which on removal yields the soluble 227 amino acid pro-OSM. Cleavage of the C-terminal most 31 residues at a trypsin like cleavage site yields the fully active 196 residue form. Two potential N-glycosylation sites are present in hOSM both of which are retained in the mature form.The 196 residue OSM is the predominant form isolated form a variety of cell lines and corresponds to a glycoprotein of 28 KDa, although the larger 227 residue pro-OSM can be isolated from over transfected cells. Pro-OSM, although an order of magnitude less efficacious in growth inhibition assays, displays similar binding affinity toward cells in radio ligand binding assays. Thus, post translational processing may play a significant role in the in vivo function of OSM. Like many cytokines OSM is produced from cells by de novo synthesis followed by release through the classical secretion pathway. However, OSM can be released from preformed stores within polymorphonuclear leukocytes on degranulation. It still remains unclear how OSM is targeted to these intracellular compartments.
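The residue arithmetic described above can be summarized in a short sketch; the constant names below are illustrative only and simply restate the numbers given in this paragraph.

```python
# Maturation of human OSM as described above (names here are illustrative).
PRECURSOR_LEN = 252      # full-length precursor encoded by the cDNA
SIGNAL_PEPTIDE = 25      # N-terminal secretory signal peptide removed on secretion
C_TERMINAL_CLEAVED = 31  # C-terminal residues removed at a trypsin-like site

pro_osm = PRECURSOR_LEN - SIGNAL_PEPTIDE    # 227-residue pro-OSM
mature_osm = pro_osm - C_TERMINAL_CLEAVED   # 196-residue fully active form

assert (pro_osm, mature_osm) == (227, 196)
print(f"pro-OSM: {pro_osm} aa; mature OSM: {mature_osm} aa")
```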
Structure:
Primary sequence analysis of OSM allocates it to the gp130 group of cytokines. OSM most resembles LIF, bearing 22% sequence identity and 30% similarity. Incidentally the genes for OSM and LIF occur in tandem on human chromosome 22. Both LIF and OSM genes have very similar gene structures sharing similar promoter elements and intron-exon structure. These data suggest that OSM and LIF arose relatively recently in evolutionary terms by gene duplication. Of the five cysteine residues within the human OSM sequence four participate in disulfide bridges, one of these disulfide bonds namely between helices A and B is necessary for OSM activity. The free cysteine residue does not appear to mediate dimerisation of OSM.
Structure:
The three-dimensional structure of human OSM has been solved to atomic resolution, confirming the predicted long-chain four-helix bundle topology. Comparing this structure with the known structures of other LC cytokines shows it to be most closely related to LIF (RMSD of 2.1 Å across 145 equivalent Cα). A distinctive kink in the A helix arises from a departure from the classical alpha-helical H-bonding pattern, a feature shared with all known structures of LIFR-using cytokines. This "kink" results in a different spatial positioning of one extreme of the bundle relative to the other, significantly affecting the relative positioning of site III with respect to sites I and II (see: Receptor recruitment sites).
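As a point of reference, the RMSD figure quoted above is simply the root-mean-square distance between equivalent Cα atoms after the two structures have been superposed. The sketch below runs the calculation on random stand-in coordinates; a real comparison would first require an optimal superposition (e.g. the Kabsch algorithm), which is omitted here.

```python
import numpy as np

def rmsd(a: np.ndarray, b: np.ndarray) -> float:
    """RMSD between two already-superposed N x 3 coordinate arrays
    (e.g. 145 equivalent C-alpha atoms of OSM and LIF)."""
    assert a.shape == b.shape
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# Random coordinates standing in for 145 equivalent C-alpha pairs; real use
# would take superposed PDB coordinates, which are not reproduced here.
rng = np.random.default_rng(0)
osm_ca = rng.normal(size=(145, 3))
lif_ca = osm_ca + rng.normal(scale=0.5, size=(145, 3))
print(f"RMSD: {rmsd(osm_ca, lif_ca):.2f} Angstrom")
```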
Receptors:
Receptors for OSM can be found on a variety of cells from a variety of tissues. In general cells derived from endothelial and tumour origins express high levels of OSM receptors, whereas cells of Haematopoietic origin tend to express lower numbers.
Receptors:
Scatchard analysis of radioligand binding data from 125I-OSM binding to a variety of OSM-responsive cell lines produced curvilinear graphs, which the authors interpreted as indicating the presence of two receptor species: a high-affinity form with an approximate dissociation constant (Kd) of 1-10 pM, and a low-affinity form with a Kd of 0.4-1 nM. Subsequently it was shown that the presence of gp130 alone was sufficient to reproduce the low-affinity form of the receptor, and co-transfection of COS-7 cells with LIFR and gp130 produced a high-affinity receptor. However, further experiments demonstrated that not all actions of OSM could be replicated by LIF; that is, certain cells that are unresponsive to LIF would respond to OSM. These data hinted at the existence of an additional ligand-specific receptor chain, which led to the cloning of OSMR. These two receptor complexes, namely gp130/LIFR and gp130/OSMR, were termed the type I and type II Oncostatin M receptors.
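Why a two-site interpretation produces a curvilinear Scatchard plot can be illustrated with a simple two-independent-site binding model, as sketched below; the Kd values echo the ranges quoted above, while the Bmax values and concentration range are arbitrary illustrative choices.

```python
import numpy as np

def bound(free, bmax_high, kd_high, bmax_low, kd_low):
    """Total specifically bound ligand for two independent classes of sites."""
    return bmax_high * free / (kd_high + free) + bmax_low * free / (kd_low + free)

kd_high, kd_low = 5.0, 700.0         # ~5 pM and ~0.7 nM, within the quoted ranges
bmax_high, bmax_low = 100.0, 2000.0  # sites per cell, arbitrary illustrative values

free = np.logspace(-1, 4, 200)       # free 125I-OSM concentration in pM
b = bound(free, bmax_high, kd_high, bmax_low, kd_low)
# Scatchard coordinates: a single site gives a straight line of slope -1/Kd,
# whereas the sum of two sites gives the concave (curvilinear) plot described above.
scatchard_x, scatchard_y = b, b / free
```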
Receptors:
The ability of OSM to signal via two receptor complexes offers a convenient molecular explanation for the effects that OSM shares with LIF and for those unique to OSM. Thus, common biological activities of LIF and OSM are mediated through the type I receptor, and OSM-specific activities are mediated through the type II receptor.
Receptors:
The murine homologue of OSM was not discovered until 1996, whereas the murine OSMR homologue was not cloned until 1998. Until recently, it was thought that mOSM only signals through the murine type II receptor, namely through mOSMR/mgp130 complexes, because of a low affinity for the type I receptor counterpart. However, it is now known that, in bone at least, mOSM is able to signal through both mOSMR/mgp130 and mLIFR/mgp130.
Receptor recruitment sites:
Oncostatin M triggers the formation of receptor complexes by binding to receptors via two binding sites, named site II and site III. The nomenclature of these sites is taken by direct analogy to growth hormone, probably the best studied of the four-helix bundle cytokines.
Site II consists of exposed residues within the A and C helices, and confers binding to gp130.
Receptor recruitment sites:
The crucial residues of site III are located at the N-terminal extremity of the D helix. This site is the most conserved among IL-6-like cytokines. OSM contains conserved phenylalanine and lysine residues (F160 and K163). Cytokines that recruit LIFR via site III (i.e. LIF, OSM, CNTF and CT-1) possess these conserved phenylalanine and lysine residues, which are known as the FK motif.
Signal transduction through OSM receptors:
Signalling by the type I and type II OSM receptors has now been shown to be qualitatively distinct. These differences in signalling character, in addition to the tissue distribution profiles of OSMRβ and LIFRβ, offer another variable in the distinction between the common and specific cellular effects of OSM with respect to LIF. All IL-6 cytokines, whether they homo- or heterodimerise gp130, seem to activate JAK1, JAK2 and, to a lesser degree, Tyk2. JAK1, JAK2 and Tyk2 are not interchangeable in the gp130 system, as demonstrated with JAK1-, JAK2- or Tyk2-deficient cell lines obtained from mutant mice. Cells from JAK1-deficient mice show reduced STAT activation and generation of biological responses to IL-6 and LIF. In contrast, fibroblasts derived from JAK2-null mice can respond to IL-6, with demonstrable tyrosine phosphorylation of gp130, JAK1 and Tyk2. Thus it seems JAK1 is the critical JAK required for gp130 signalling. Activation of the same JAKs by all three receptor combinations (gp130/gp130, gp130/LIFR, gp130/OSMR) raises the question of how IL-6, LIF and OSM can activate distinct intracellular signalling pathways. Selection of particular substrates, i.e. STAT isoforms, depends not on which JAK is activated, but is instead determined by specific motifs, especially tyrosine-based motifs, within each receptor's intracellular domain.
Signal transduction through OSM receptors:
Aligning the intracellular domains of gp130, LIFR and hOSMR yields some interesting observations. Sequence identity is generally quite low across the group, averaging 4.6%. However, as with many class I haematopoietin receptors, two short membrane-proximal motifs, termed box 1 and box 2, are present. In addition, these receptors also contain a serine-rich region and a third, more poorly conserved motif termed box 3. Box 1 is present in all signalling cytokine receptors. It is characteristically rich in proline residues and is essential for the association and activation of JAKs. Box 2 is also important for association with JAKs. Gp130 contains box 1 and box 2 sequences within the membrane-proximal part of the cytoplasmic region, lying within the minimum 61 amino acids required for receptor activation. Mutations within the box 1 region reduce the ability of gp130 to associate with JAKs and abolish ligand-induced activation of JAK1 and JAK2. Box 2 also contributes to activation and binding of JAKs. Studies with various gp130 truncation mutants show a reduction of JAK2 binding and abrogation of certain biological effects upon deletion of box 2. However, JAKs are able to associate with gp130 devoid of box 2 when overexpressed. LIFR and OSMR also contain the membrane-proximal box 1/box 2-like regions. The first 65 amino acid residues in the cytoplasmic domain of LIFR, in combination with full-length gp130, can generate signalling on treatment with LIF. JAK1, JAK2 and Tyk2 coprecipitate with receptors containing the cytoplasmic parts of LIFR or OSMR. All beta receptor subunits of the gp130 system also possess a box 3 region, which corresponds to the C-terminal amino acids of the OSMR and LIFR receptors respectively. Box 3 is necessary for the action of OSMR, whereas it is dispensable for the action of LIFR. In the case of gp130, box 3 is dispensable for activity; however, an intact box 3 sequence is required for certain aspects of gp130 signalling, i.e. stimulation of transcription through the STAT-3 response element. In addition to the poor sequence conservation among the intracellular domains of gp130 receptors, the number and position of tyrosine residues are also poorly conserved. For example, LIFR and OSMR share three homologous tyrosines, whereas none of the tyrosine residues present in the intracellular domain of gp130 has an equivalent in LIFR or OSMR, even though the intracellular regions of LIFR and gp130 share more sequence identity than those of LIFR and OSMR.
Signal transduction through OSM receptors:
Of the proteins recruited to type I cytokine receptors, the STAT proteins remain the best studied. Homodimerisation of gp130 has been shown to phosphorylate and activate both STAT1 and STAT3. gp130 preferentially activates STAT3, which it can do through four STAT3 activation consensus sequences of the form YXXQ (where X is any residue): (YRHQ), (YFKQ), Y905 (YLPQ) and Y915 (YMPQ). The lower propensity for STAT1 activation may reflect the lower number of STAT1 activation sequences of the form YZPQ (where Z is any uncharged residue), namely Y905 and Y915. Cytokines that signal via homodimeric complexes of LIFR or OSMR (i.e. devoid of gp130) are currently unknown in nature. However, various investigators have attempted artificial homodimerisation of the LIFR and OSMR intracellular domains, with conflicting results, by constructing receptor chimeras that fuse the extracellular domain of one cytokine receptor with the intracellular domain of LIFR or OSMR.
Signal transduction through OSM receptors:
Signalling by LIFR intracellular domain homodimerisation has been demonstrated in hepatoma and neuroblastoma cells, embryonic stem cells and COS-1 cells by using chimeric receptors that homodimerise upon stimulation with their cognate cytokines (i.e. GCSF, neurotrophin-3, EGF). However a GCSFR/LIFR chimera was not capable of signaling in M1 or Baf cells.
Anti- or pro-inflammatory?:
The role of OSM as an inflammatory mediator was evident as early as 1986. Its precise effect on the immune system, as with any cytokine, is far from clear. However, two schools of thought have emerged: the first proposes that OSM is pro-inflammatory, whilst the other holds the opposite view, claiming OSM is anti-inflammatory. It is important to note that before 1997 the differences in human and murine OSM receptor usage were unknown. As a result, several investigators used human OSM in mouse assays, and any conclusions drawn from the results of these experiments will be representative of LIF, i.e. of signalling through gp130/LIFR complexes.
Anti- or pro-inflammatory?:
OSM is synthesized by stimulated T-cells and monocytes. The effects of OSM on endothelial cells suggest a pro-inflammatory role for OSM. Endothelial cells possess a large number of OSM receptors. Stimulation of a primary endothelial culture (HUVEC) with hOSM results in delayed but prolonged upregulation of P-selectin, which facilitates leukocyte adhesion and rolling, necessary for their extravasation. OSM also promotes the production of IL-6 from these cells. As mentioned above, the action of OSM as a quencher of the inflammatory response is by no means established yet; for example, conflicting results exist as to the action of OSM on various models of arthritis. OSM reduces the degree of joint destruction in an antibody-induced model of rheumatoid arthritis. OSM is also a major growth factor for Kaposi's sarcoma "spindle cells", which are of endothelial origin. These cells do not express LIFR but do express OSMR at high levels.
Anti- or pro-inflammatory?:
For example, OSM can modulate the expression of IL-6, an important regulator of the host defence system. OSM can regulate the expression of acute phase proteins, and it regulates the expression of various proteases and protease inhibitors, for example gelatinase and α1-chymotrypsin inhibitor. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Geographic Regions of the Dominican Republic**
Geographic Regions of the Dominican Republic:
The Dominican Republic is divided into three macro-regions, which are in turn divided into ten regions. In 1858 the country was divided into three departments: Cibao (North), Ozama (Southwest), and Seybo (Southeast). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ID3 (gene)**
ID3 (gene):
DNA-binding protein inhibitor ID-3 is a protein that in humans is encoded by the ID3 gene.
Function:
Members of the ID family of helix-loop-helix (HLH) proteins lack a basic DNA-binding domain and inhibit transcription through formation of nonfunctional dimers that are incapable of binding to DNA.[supplied by OMIM]
Interactions:
ID3 has been shown to interact with TCF3.
Repressors of ID3:
BTG2 binds to the promoter of Id3 and represses its activity. By this mechanism, the upregulation of Id3 in the hippocampus caused by BTG2 ablation prevents terminal differentiation of hippocampal neurons. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Contour currents**
Contour currents:
The term contour currents was first introduced by Heezen et al. in 1966 to describe bottom currents along the continental shelf driven by the Coriolis effect and by temperature- and salinity-dependent density gradients. Generally, the currents flow along depth contours, hence the name contour currents. Sediments deposited and shaped by contour currents are called contourites, which are commonly observed on the continental rise.
Depositional Processes:
Since contour currents generally flow at speeds of 2–20 cm/s, their capacity to carry sediment is limited to fine-grained particles already in suspension. Redistribution of sediments by contour currents has, however, been reported, as evidenced by sea-floor morphological features parallel to regional isobaths.
Depositional Processes:
Turbidity currents, on the other hand, flow downslope across regional isobaths and are mainly responsible for supplying terrigenous sediment across continental margins to deep-water environments, such as the continental rise, where fine particles are further carried in suspension by contour currents. The joint depositional processes of the two current systems are among the dominant factors influencing the morphology of the lower continental margins. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Protein poisoning**
Protein poisoning:
Protein poisoning (also referred to colloquially as rabbit starvation, mal de caribou, or fat starvation) is an acute form of malnutrition caused by a diet deficient in fat and carbohydrates, where almost all bioavailable calories come from the protein in lean meat. The concept is discussed in the context of paleoanthropological investigations into the diet of ancient humans, especially during the Last Glacial Maximum and at high latitude regions.The term rabbit starvation originates from the fact that rabbit meat is very low in fat, with almost all of its caloric content from the amino acids digested out of skeletal muscle protein, and therefore is a food which, if consumed exclusively, would cause protein poisoning. Animals in harsh, cold environments similarly become lean. The reported symptoms include initial nausea and fatigue, followed by diarrhea and ultimately death.
Observations:
In Appian's Roman History, Volume I, Book VI: The Wars in Spain, Chapter IX, page 223, the author notes a multitude of Roman soldiers dying of severe diarrhea after eating mostly rabbits while besieging the city Intercatia in approx 150 B.C. Appian wrote: ... strange terror in the Roman camp. Their soldiers were sick from watching and want of sleep, and because of the unaccustomed food which the country afforded. They had no wine, no salt, no vinegar, no oil, but lived on wheat and barley, and quantities of venison and rabbits' flesh boiled without salt, which caused dysentery, from which many died. The explorer Vilhjalmur Stefansson is said to have lived for years exclusively on game meat and fish, with no ill effects. The same is true for his fellow explorer Karsten Anderson. As part of his promotion of meat-only diet modeled on Inuit cuisine, and to demonstrate the effects, in New York City beginning in February 1928, Stefansson and Anderson "lived and ate in the metabolism ward of Russell Sage Institute of Pathology of Bellevue Hospital, New York" for a year, with their metabolic performance closely observed, all this partly funded by the Institute of American Meat Packers. Researchers hoping to replicate Stefansson's experience with rabbit starvation in the field urged him to cut the fat intake in his all-meat diet to zero. He did, and experienced a much quicker onset of diarrhea than in the field. With fat added back in, Stefansson recovered, although with a 10-day period of constipation afterwards. The study reported finding no previous medical literature examining either the effects of meat-only diets, which appear to be sustainable, or on rabbit starvation, which is fatal.
Observations:
Stefansson wrote: The groups that depend on the blubber animals are the most fortunate in the hunting way of life, for they never suffer from fat-hunger. This trouble is worst, so far as North America is concerned, among those forest Indians who depend at times on rabbits, the leanest animal in the North, and who develop the extreme fat-hunger known as rabbit-starvation. Rabbit eaters, if they have no fat from another source—beaver, moose, fish—will develop diarrhea in about a week, with headache, lassitude and vague discomfort. If there are enough rabbits, the people eat till their stomachs are distended; but no matter how much they eat they feel unsatisfied. Some think a man will die sooner if he eats continually of fat-free meat than if he eats nothing, but this is a belief on which sufficient evidence for a decision has not been gathered in the North. Deaths from rabbit-starvation, or from the eating of other skinny meat, are rare; for everyone understands the principle, and any possible preventive steps are naturally taken.
Observations:
A World War II-era Arctic survival booklet issued by the Flight Control Command of the United States Army Air Forces included this emphatic warning: "Because of the importance of fats, under no conditions limit yourself to a meat diet of rabbit just because they happen to be plentiful in the region where you are forced down. A continued diet of rabbit will produce rabbit starvation -- diarrhea will begin in about a week and if the diet is continued DEATH MAY RESULT."
Physiology:
The U.S. and Canadian Dietary Reference Intake review for protein mentions "rabbit starvation", but concluded that there was not sufficient evidence by 2005 to establish a tolerable upper intake level, i.e., an upper limit for how much protein can be safely consumed. According to Bilsborough and Mann in 2006, protein intake is mainly restricted by the urea cycle, but deriving more than 35% of energy needs from protein leads to health problems. They suggested an upper limit of 25% of energy or 2–2.5 g/kg, "corresponding to 176 g protein per day for an 80 kg individual", but stated that humans can theoretically use much larger amounts than this for energy. For arctic hunter-gatherers, the amount can seasonally increase to 45%. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Refractive error**
Refractive error:
Refractive error, also known as refraction error, is a problem with focusing light accurately on the retina due to the shape of the eye and/or cornea. The most common types of refractive error are near-sightedness, far-sightedness, astigmatism, and presbyopia. Near-sightedness results in far-away objects being blurry, far-sightedness and presbyopia result in close objects being blurry, and astigmatism causes objects to appear stretched out or blurry. Other symptoms may include double vision, headaches, and eye strain. Near-sightedness is due to the length of the eyeball being too long, far-sightedness the eyeball being too short, astigmatism the cornea being the wrong shape, and presbyopia aging of the lens of the eye such that it cannot change shape sufficiently. Some refractive errors occur more often among those whose parents are affected. Diagnosis is by eye examination. Refractive errors are corrected with eyeglasses, contact lenses, or surgery. Eyeglasses are the easiest and safest method of correction. Contact lenses can provide a wider field of vision; however, they are associated with a risk of infection. Refractive surgery permanently changes the shape of the cornea. The number of people globally with refractive errors has been estimated at one to two billion. Rates vary between regions of the world, with about 25% of Europeans and 80% of Asians affected. Near-sightedness is the most common disorder. Rates among adults are between 15–49%, while rates among children are between 1.2–42%. Far-sightedness more commonly affects young children and the elderly. Presbyopia affects most people over the age of 35. The number of people with refractive errors that have not been corrected was estimated at 660 million (10 per 100 people) in 2013. Of these, 9.5 million were blind due to the refractive error. It is one of the most common causes of vision loss along with cataracts, macular degeneration, and vitamin A deficiency.
Classification:
An eye that has no refractive error when viewing distant objects is said to have emmetropia or be emmetropic, meaning the eye can focus parallel rays of light (light from distant objects) on the retina without using any accommodation. A distant object, in this case, is defined as an object located beyond 6 meters, or 20 feet, from the eye, since light from those objects arrives as essentially parallel rays when considering the limitations of human perception. An eye that has refractive error when viewing distant objects is said to have ametropia or be ametropic; such an eye cannot focus parallel rays of light on the retina, or needs accommodation to do so. The word "ametropia" can be used interchangeably with "refractive error". Types of ametropia include myopia, hyperopia and astigmatism. They are frequently categorized as spherical errors and cylindrical errors: cylindrical errors cause astigmatism, in which the optical power of the eye is too strong or too weak across one meridian, such as when the corneal curvature tends towards a cylindrical shape. The angle between that meridian and the horizontal is known as the axis of the cylinder.
Classification:
Astigmatism: A person with astigmatic refractive error sees lines of a particular orientation less clearly than lines at right angles to them. This defect can be corrected by refracting light more in one meridian than the other. Cylindrical lenses serve this purpose.
Spherical errors occur when the optical power of the eye is either too large or too small to focus light on the retina. People with refractive error frequently have blurry vision.
Classification:
Farsightedness: When the optics are too weak for the length of the eyeball, one has hyperopia or farsightedness. This can arise from a cornea or crystalline lens with not enough curvature (refractive hyperopia) or an eyeball that is too short (axial hyperopia). This can be corrected with convex lenses, which cause light rays to converge prior to hitting the cornea.
Classification:
Nearsightedness: When the optics are too powerful for the length of the eyeball one has myopia or nearsightedness. This can arise from a cornea or crystalline lens with too much curvature (refractive myopia) or an eyeball that is too long (axial myopia). Myopia can be corrected with a concave lens, which causes the divergence of light rays before they reach the cornea.
Classification:
Presbyopia: When the flexibility of the lens declines, typically due to age. The individual experiences difficulty with near vision, often relieved by reading glasses, bifocals, or progressive lenses. Other terminology includes anisometropia, when the two eyes have unequal refractive power, and aniseikonia, when the magnification between the eyes differs. Refractive error may be quantified as the error of a wavefront arising from a person's far point, compared with a plane (zero vergence) wavefront, at an appropriate reference plane. The reference plane may be a real plane such as the spectacle plane or the corneal plane, or an imaginary plane such as the first principal plane or the entrance pupil plane. In diopters, spherical refractive errors can be expressed as K = 1/k, where k is the distance in meters from the reference plane to an eye's far point, and K is the refractive error in diopters. Thus, a person with myopia has a negative refractive error, a person with emmetropia has zero refractive error, and a person with hyperopia has a positive refractive error. In the case of regular astigmatism, refractive error needs to be expressed as three values: classically as sphere, cylinder and axis. However, it can also be expressed in vector terms, for example M (mean sphere), J0 (with-the-rule/against-the-rule astigmatism) and J45 (oblique astigmatism). Refractive errors containing higher-order aberrations (sometimes referred to as irregular astigmatism) can be expressed for a given pupil size using wavefront errors or optical path differences, often as coefficients of Zernike polynomials. A more subjective quantity, visual acuity (expressed as a fraction), may be used, but there is no direct or exact conversion between the two.
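As a rough illustration of the K = 1/k relation above, the following Python sketch converts a far-point distance into a spherical refractive error in diopters. The function name and the sign convention used here (a far point in front of the eye, as in myopia, treated as a negative distance) are illustrative assumptions for this example only, not a clinical standard stated in this article.

```python
def spherical_refractive_error(far_point_m: float, myopic: bool = True) -> float:
    """Return the spherical refractive error K (in diopters) from the
    far-point distance k (in meters), using K = 1/k.

    Illustrative assumption: a far point located in front of the eye
    (myopia) is treated as a negative distance, so K comes out negative;
    a virtual far point behind the eye (hyperopia) gives a positive K.
    """
    if far_point_m == 0:
        raise ValueError("far-point distance must be non-zero")
    k = -far_point_m if myopic else far_point_m
    return 1.0 / k

# Examples: a myopic eye with a far point 0.5 m away -> -2.00 D;
# a more distant far point of 2 m -> -0.50 D.
print(round(spherical_refractive_error(0.5, myopic=True), 2))   # -2.0
print(round(spherical_refractive_error(2.0, myopic=True), 2))   # -0.5
```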
Risk factors:
Genetics There is evidence to suggest a genetic predisposition for refractive error. Individuals who have parents with certain refractive errors are more likely to have similar refractive errors. The Online Mendelian Inheritance in Man (OMIM) database has listed 261 genetic disorders in which myopia is one of the symptoms. Myopia may be present in heritable connective tissue disorders such as: Knobloch syndrome (OMIM 267750); Marfan syndrome (OMIM 154700); and Stickler syndrome (type 1, OMIM 108300; type 2, OMIM 604841). Myopia has also been reported in X-linked disorders caused by mutations in loci involved in retinal photoreceptor function (NYX, RP2, MYP1) such as: congenital stationary night blindness (CSNB; OMIM 310500); retinitis pigmentosa 2 (RP2; OMIM 312600); and Bornholm eye disease (OMIM 310460).
Risk factors:
Many genes that have been associated with refractive error are clustered into common biological networks involved in connective tissue growth and extracellular matrix organization. Although a large number of chromosomal localisations have been associated with myopia (MYP1-MYP17), few specific genes have been identified.
Risk factors:
Environmental In studies of the genetic predisposition of refractive error, there is a correlation between environmental factors and the risk of developing myopia. Myopia has been observed in individuals with visually intensive occupations. Reading has also been found to be a predictor of myopia in children. It has been reported that children with myopia spent significantly more time reading than non-myopic children who spent more time playing outdoors. Socioeconomic status and higher levels of education have also been reported to be a risk factor for myopia.
Diagnosis:
Blurry vision may result from any number of conditions not necessarily related to refractive errors. The diagnosis of a refractive error is usually confirmed by an eye care professional during an eye examination using a large number of lenses of different optical powers, and often a retinoscope (in a procedure called retinoscopy) to measure the error objectively: the person views a distant spot while the clinician changes the lenses held before the person's eye and watches the pattern of reflection of a small light shone on the eye. Following that "objective refraction", the clinician typically shows the person lenses of progressively higher or weaker powers in a process known as subjective refraction.
Diagnosis:
Cycloplegic agents are frequently used to more accurately determine the amount of refractive error, particularly in children. An automated refractor is an instrument that is sometimes used in place of retinoscopy to objectively estimate a person's refractive error. The Shack–Hartmann wavefront sensor and its inverse can also be used to characterize eye aberrations with a higher level of resolution and accuracy.
Vision defects caused by refractive error can be distinguished from other problems using a pinhole occluder, which will improve vision only in the case of refractive error.
Management:
Refractive error is managed after diagnosis by optometrists, ophthalmologists, refractionists, or ophthalmic medical practitioners. How refractive errors are treated or managed depends upon the amount and severity of the condition. Those who have mild refractive error may elect to leave the condition uncorrected, particularly if the person is asymptomatic. For those who are symptomatic, glasses, contact lenses, refractive surgery, or a combination of the three are typically used.
Management:
Glasses These are the most effective way of correcting refractive error. However, the availability and affordability of eyeglasses can present a difficulty for people in many low-income settings of the world. Glasses also pose a challenge for children to whom they are prescribed, due to children's tendency not to wear them as consistently as recommended. As mentioned earlier, refractive errors result from improper focusing of light on the retina. Eyeglasses work as an added lens for the eye, bending light to bring it to focus on the retina. Depending on the eyeglasses, they serve many functions.
Management:
Reading glasses These are general over-the-counter glasses which can be worn for easier reading, especially for defective vision due to aging called presbyopia.
Single vision prescription lenses They can correct only one form of defective vision, either far-sightedness or near-sightedness.
Multifocal lenses Multifocal lenses can correct defective vision at multiple focal distances, for example near vision as well as far vision. These are particularly beneficial for presbyopia.
Management:
Contact lenses Alternatively, many people choose to wear contact lenses. One style is hard contact lenses, which can mould the cornea towards a desired shape. Another style, soft contact lenses, are made of silicone or hydrogel. Depending on the duration they are designed for, they may be worn daily or for an extended period of time, such as for weeks. There are a number of complications associated with contact lenses, particularly with those used daily.
Management:
If redness, itching, or difficulty in vision develops, use of the lenses should be stopped immediately and an ophthalmologist may be consulted.
Management:
Surgery Laser in situ keratomileusis (LASIK) and photo-refractive keratectomy (PRK) are popular procedures, while use of laser epithelial keratomileusis (LASEK) is increasing. Other surgical treatments for severe myopia include insertion of implants after clear lens extraction (refractive lens exchange). A full-thickness corneal graft may be a final option for patients with advanced keratoconus, although currently there is interest in new techniques that involve collagen crosslinking. As with any surgical procedure, complications may arise post-operatively. Post-operative monitoring is normally undertaken by the specialist ophthalmic surgical clinic and optometry services. Patients are usually informed pre-operatively about what to expect and where to go if they suspect complications. Any patient reporting pain and redness after surgery should be referred urgently to their ophthalmic surgeon.
Management:
Medical treatment Atropine is believed to slow the progression of near-sightedness and is administered in combination with multifocal lenses. These findings, however, need further research.
Prevention Strategies being studied to slow worsening include adjusting working conditions, increasing the time children spend outdoors, and special types of contact lenses. In children, special contact lenses appear to slow the worsening of nearsightedness. A number of questionnaires exist to determine the quality-of-life impact of refractive errors and their correction.
Epidemiology:
The number of people globally with refractive errors that have not been corrected was estimated at 660 million (10 per 100 people) in 2013. Refractive errors are the most common cause of visual impairment and the second most common cause of visual loss. The burden of refractive error is now assessed in DALYs (disability-adjusted life years), which showed an 8% increase from 1990 to 2019. The number of people globally with significant refractive errors has been estimated at one to two billion. Rates vary between regions of the world, with about 25% of Europeans and 80% of Asians affected. Near-sightedness is one of the most prevalent disorders of the eye. Rates among adults are between 15–49%, while rates among children are between 1.2–42%. Far-sightedness more commonly affects young children, whose eyes have yet to grow to their full length, and the elderly, who have lost the ability to compensate with their accommodation system. Presbyopia affects most people over the age of 35, and nearly 100% of people by the ages of 55–65. Uncorrected refractive error is responsible for visual impairment and disability for many people worldwide. It is one of the most common causes of vision loss along with cataracts, macular degeneration, and vitamin A deficiency.
Cost:
The yearly cost of correcting refractive errors is estimated at 3.9 to 7.2 billion dollars in the United States. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Umbrella stand**
Umbrella stand:
An umbrella stand is a storage device for umbrellas and walking sticks. They are usually located inside the entrance of a home or public building, and are sometimes complemented by a hanger or mirror, or combined with a coat rack.
Umbrella stand:
The stand is used to hold umbrellas when they are not in use. In public spaces, their use is usually limited to rainy days when employees and visitors need to carry umbrellas. Umbrellas can be closed and placed in the stand. They are usually dropped off upon entering the building and collected when leaving the building. This is useful because wet umbrellas may cause a wet floor, which could pose a hazard of slipping.
Design:
A domestic umbrella stand is generally an upright container, often a cylindrical tube, stored near to the entrance of a house. In addition to acting as a functional piece of furniture, they can be aesthetic objects for decoration, and are manufactured from a wide variety of materials: clay, plastic, metal, and wood.
Stands in public buildings are often designed to hold greater numbers of umbrellas, and may be constructed with separate spaces for each umbrella. Stands with locks are used in Japan, to protect umbrellas from thieves.
Bag dispensers:
A more recent variant of the umbrella stand used in modern retailers is the umbrella bag dispenser, which allows the user to insert the umbrella into a disposable waterproof bag when entering a building, and to carry that bag with them so as not to get the floors wet. This eliminates the need for a large umbrella stand, in buildings where storage space may be at a premium. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Optimal computing budget allocation**
Optimal computing budget allocation:
In computer science, optimal computing budget allocation (OCBA) is an approach to maximize the overall simulation efficiency for finding an optimal decision. It was introduced in the mid-1990s by Dr. Chun-Hung Chen.
Optimal computing budget allocation:
OCBA determines the number of replications or the simulation time needed in order to obtain acceptable or best results within a set of given parameters. This is accomplished by using an asymptotic framework to analyze the structure of the optimal allocation. OCBA has also been shown to be effective in enhancing partition-based random search algorithms for solving deterministic global optimization problems.
Intuitive explanation:
OCBA's goal is to provide a systematic approach to running a large number of simulations of only the critical alternatives in order to select the best alternative. In other words, OCBA focuses on only the most critical alternatives, which minimizes computation time and reduces the variances of these critical estimators. The expected result maintains the required level of accuracy while requiring less work. For example, consider a simple simulation comparing five alternatives, where the goal is to select the alternative with the minimum average delay time. The figure below shows preliminary simulation results (i.e., having run only a fraction of the required number of simulation replications): alternatives 2 and 3 clearly have significantly lower delay times (highlighted in red). In order to save computation cost (the time, resources and money spent running the simulation), OCBA suggests that more replications be devoted to alternatives 2 and 3, and that simulation of alternatives 1, 4, and 5 can be stopped much earlier without compromising results.
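A minimal Python sketch of this intuition, under assumed data: the snippet below runs a small number of preliminary replications for five hypothetical alternatives, then concentrates the remaining budget on the two alternatives with the lowest preliminary mean delay and stops simulating the others early. The delay distributions, the budget sizes, and the simple "top-two" heuristic are all made-up assumptions that mimic the idea; they are not the formal OCBA allocation rule presented later in this article.

```python
import random

random.seed(0)

# Hypothetical true mean delays for five alternatives (unknown to the procedure).
TRUE_MEANS = {1: 9.0, 2: 4.0, 3: 4.5, 4: 8.5, 5: 10.0}

def simulate(alt: int) -> float:
    """One noisy replication of the delay of an alternative."""
    return random.gauss(TRUE_MEANS[alt], 2.0)

# Stage 1: a small, equal number of preliminary replications per alternative.
samples = {alt: [simulate(alt) for _ in range(10)] for alt in TRUE_MEANS}
prelim_means = {alt: sum(v) / len(v) for alt, v in samples.items()}

# Stage 2: spend the remaining budget only on the most promising alternatives
# (lowest preliminary mean delay), stopping early for the clearly worse ones.
promising = sorted(prelim_means, key=prelim_means.get)[:2]
remaining_budget = 200
for _ in range(remaining_budget // len(promising)):
    for alt in promising:
        samples[alt].append(simulate(alt))

final_means = {alt: sum(v) / len(v) for alt, v in samples.items()}
best = min(final_means, key=final_means.get)
print("promising alternatives:", promising)
print("selected best alternative:", best)
```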
Problem:
The main objective of OCBA is to maximize the probability of correct selection (PCS), subject to the sampling budget τ available at a given stage of sampling.
$$\max\; PCS \quad \text{subject to} \quad \sum_{i=1}^{k}\tau_i=\tau,\quad \tau_i\ge 0,\quad i=1,2,\ldots,k. \qquad (1)$$ Here $\sum_{i=1}^{k}\tau_i=\tau$ stands for the total computational cost.
Some extensions of OCBA:
Experts in the field note that in some problems it is important to know not only the best alternative in a sample, but the top 5, 10, or even 50, because the decision maker may have other concerns affecting the decision that are not modeled in the simulation. According to Szechtman and Yücesan (2008), OCBA is also helpful in feasibility determination problems, where decision makers are only interested in distinguishing feasible alternatives from infeasible ones. Further, some decision makers prefer an alternative that is simpler yet similar in performance; in this case, the best choice is among the top-r simplest alternatives whose performance ranks above a desired level. In addition, Trailovic and Pao (2004) demonstrate an OCBA approach in which alternatives with minimum variance, rather than the best mean, are sought. Here the variances are assumed unknown, in contrast to the standard OCBA rule's assumption of known variances. During 2010, research was done on an OCBA algorithm based on the t distribution; the results show no significant differences from those obtained with the normal distribution. The extensions of OCBA presented above are not a complete list, and the topic is yet to be fully explored and compiled.
Multi-objective OCBA:
Multi-objective Optimal Computing Budget Allocation (MOCBA) is the OCBA concept applied to multi-objective problems. In a typical MOCBA, the PCS is defined as $$PCS=\Pr\left\{\Bigl(\bigcap_{i\in S_p}E_i\Bigr)\cap\Bigl(\bigcap_{i\in \bar S_p}E_i^{c}\Bigr)\right\},$$ in which $S_p$ is the observed Pareto set, $\bar S_p$ is the non-Pareto set, i.e., $\bar S_p=\Theta\setminus S_p$, $E_i$ is the event that design $i$ is non-dominated by all other designs, and $E_i^{c}$ is the event that design $i$ is dominated by at least one design. The Type I error $e_1$ and Type II error $e_2$ for identifying a correct Pareto set involve, respectively, $\Pr\{\bigcap_{i\in \bar S_p}E_i^{c}\}$ and $\Pr\{\bigcap_{i\in S_p}E_i\}$. It can further be shown that both errors admit upper bounds built from pairwise terms of the form $\max\,\min\,\Pr\{\tilde J_{jl}\le \tilde J_{il}\}$, where $H$ is the number of objectives and $\tilde J_{il}$ follows the posterior distribution $\mathcal{N}\!\left(\bar J_{il},\,\sigma_{il}^{2}/N_i\right)$.
Multi-objective OCBA:
Note that $\bar J_{il}$ and $\sigma_{il}$ are the average and standard deviation of the observed performance measures for objective $l$ of design $i$, and $N_i$ is the number of observations.
Thus, instead of maximizing $\Pr\{CS\}$, we can maximize its lower bound, i.e., $APCS\text{-}M \equiv 1-ub_1-ub_2$.
Assuming $\tau\to\infty$, the Lagrange method can be applied to obtain the following allocation rule: $$\tau_i=\frac{\beta_i}{\sum_{j\in\Theta}\beta_j}\,\tau,$$ in which, for a design $h\in S_A$, $$\beta_h=\frac{\left(\hat\sigma_{h\,l_{j_h h}}^{2}+\hat\sigma_{j_h\,l_{j_h h}}^{2}/\rho_h\right)/\delta_{h j_h\,l_{j_h h}}^{2}}{\left(\hat\sigma_{m\,l_{j_m m}}^{2}+\hat\sigma_{j_m\,l_{j_m m}}^{2}/\rho_m\right)/\delta_{m j_m\,l_{j_m m}}^{2}},$$ and for a design $d\in S_B$, $$\beta_d=\sqrt{\sum_{i\in\Theta_d^{*}}\frac{\sigma_{d\,l_{di}}^{2}}{\sigma_{i\,l_{di}}^{2}}\,\beta_i^{2}},$$ where $\delta_{ijl}=\bar J_{jl}-\bar J_{il}$, $j_i=\arg\max_{j}\min_{l}\Pr\{\tilde J_{jl}\le\tilde J_{il}\}$, $l_{ij}=\arg\min_{l}\Pr\{\tilde J_{jl}\le\tilde J_{il}\}$, $S_B\equiv S\setminus S_A$, $\Theta_h=\{i\mid i\in S,\ j_i=h\}$, $\Theta_d^{*}=\{h\mid h\in S_A,\ j_h=d\}$, and $\rho_i=\alpha_{j_i}/\alpha_i$.
Constrained optimization:
Similar to the previous section, there are many situations with multiple performance measures. If the multiple performance measures are equally important, decision makers can use MOCBA. In other situations, the decision makers have one primary performance measure to be optimized while the secondary performance measures are constrained by certain limits. The primary performance measure can be called the main objective, while the secondary performance measures are referred to as the constraint measures. This falls into the problem of constrained optimization. When the number of alternatives is fixed, the problem is called constrained ranking and selection, where the goal is to select the best feasible design given that both the main objective and the constraint measures need to be estimated via stochastic simulation. The OCBA method for constrained optimization (called OCBA-CO) can be found in Pujowidianto et al. (2009) and Lee et al. (2012). The key change is in the definition of PCS. There are two components in constrained optimisation, namely optimality and feasibility. As a result, the simulation budget can be allocated to each non-best design based on either optimality or feasibility. In other words, a non-best design will not be wrongly selected as the best feasible design if it remains either infeasible or worse than the true best feasible design. The idea is that it is not necessary to spend a large portion of the budget determining feasibility if the design is clearly worse than the best; similarly, budget can be saved by allocating based on feasibility if the design is already better than the best in terms of the main objective.
Feasibility determination:
The goal of this problem is to determine all the feasible designs from a finite set of design alternatives, where the feasible designs are defined as the designs with their performance measures satisfying specified control requirements (constraints). With all the feasible designs selected, the decision maker can easily make the final decision by incorporating other performance considerations (e.g., deterministic criteria, such as cost, or qualitative criteria which are difficult to mathematically evaluate). Although the feasibility determination problem involves stochastic constraints too, it is distinguished from the constrained optimization problems introduced above, in that it aims to identify all the feasible designs instead of the single best feasible one.
Feasibility determination:
Define $k$: the total number of designs; $m$: the total number of performance-measure constraints; $c_j$: the control requirement of the $j$th constraint for all designs, $j=1,2,\ldots,m$; $S_A$: the set of feasible designs; $S_B$: the set of infeasible designs; $\mu_{i,j}$: the mean of the simulation samples of the $j$th constraint measure of design $i$; $\sigma_{i,j}^{2}$: the variance of the simulation samples of the $j$th constraint measure of design $i$; $\alpha_i$: the proportion of the total simulation budget allocated to design $i$; $\bar X_{i,j}$: the sample mean of the simulation samples of the $j$th constraint measure of design $i$. Suppose all the constraints are provided in the form $\mu_{i,j}\le c_j$, $i=1,2,\ldots,k$, $j=1,2,\ldots,m$. The probability of correctly selecting all the feasible designs is $$PCS=P\!\left(\bigcap_{i\in S_A}\bigcap_{j=1}^{m}\bigl(\bar X_{i,j}\le c_j\bigr)\ \cap\ \bigcap_{i\in S_B}\bigcup_{j=1}^{m}\bigl(\bar X_{i,j}> c_j\bigr)\right),$$ and the budget allocation problem for feasibility determination is given by Gao and Chen (2017) as $$\max\; PCS \quad \text{subject to} \quad \sum_{i=1}^{k}\alpha_i=1,\quad \alpha_i\ge 0,\quad i=1,2,\ldots,k.$$
Feasibility determination:
Let $I_{i,j}(x)=\dfrac{(x-\mu_{i,j})^{2}}{2\sigma_{i,j}^{2}}$ and $j_i\in\arg\min_{j\in\{1,\ldots,m\}} I_{i,j}(c_j)$. The asymptotically optimal budget allocation rule is $$\frac{\alpha_i}{\alpha_{i'}}=\frac{I_{i',j_{i'}}(c_{j_{i'}})}{I_{i,j_i}(c_{j_i})},\qquad i,i'\in\{1,2,\ldots,k\}.$$
Intuitively speaking, the above allocation rule says that (1) for a feasible design, the dominant constraint is the most difficult one to be correctly detected among all the constraints; and (2) for an infeasible design, the dominant constraint is the easiest one to be correctly detected among all constraints.
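A small Python sketch of the feasibility-determination rule above, with made-up means, variances and limits: for each design, the rate $I_{i,j}(c_j)$ is evaluated for every constraint, the dominant constraint is the one with the smallest rate, and the budget proportions are set inversely proportional to that dominant rate, which reproduces the stated ratio $\alpha_i/\alpha_{i'} = I_{i',j_{i'}}(c_{j_{i'}})/I_{i,j_i}(c_{j_i})$. The numerical inputs are purely illustrative assumptions.

```python
def rate(x: float, mu: float, var: float) -> float:
    """Rate function I(x) = (x - mu)^2 / (2 * var)."""
    return (x - mu) ** 2 / (2.0 * var)

def feasibility_allocation(means, variances, limits):
    """Budget proportions alpha_i for feasibility determination.

    means[i][j], variances[i][j]: mean/variance of constraint j for design i.
    limits[j]: control requirement c_j (constraint is mu_{i,j} <= c_j).
    Returns alpha_i proportional to 1 / I_{i,j_i}(c_{j_i}), where j_i is the
    dominant (smallest-rate) constraint of design i.
    Illustrative sketch of the asymptotic rule; inputs are assumed values.
    """
    dominant_rates = []
    for mu_i, var_i in zip(means, variances):
        rates = [rate(c, mu, var) for mu, var, c in zip(mu_i, var_i, limits)]
        dominant_rates.append(min(rates))
    weights = [1.0 / r for r in dominant_rates]
    total = sum(weights)
    return [w / total for w in weights]

# Three designs, two constraints, hypothetical parameters.
means = [[0.8, 1.2], [1.5, 0.9], [0.7, 0.6]]
variances = [[0.04, 0.09], [0.04, 0.04], [0.09, 0.04]]
limits = [1.0, 1.0]
print([round(a, 3) for a in feasibility_allocation(means, variances, limits)])
```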
OCBA with expected opportunity cost:
The original OCBA maximizes the probability of correct selection (PCS) of the best design. In practice, another important measure is the expected opportunity cost (EOC), which quantifies how far away the mean of the selected design is from that of the real best. This measure is important because optimizing EOC not only maximizes the chance of selecting the best design but also ensures that the mean of the selected design is not too far from that of the best design, if it fails to find the best one. Compared to PCS, EOC penalizes a particularly bad choice more than a slightly incorrect selection, and is thus preferred by risk-neutral practitioners and decision makers.
OCBA with expected opportunity cost:
Specifically, the expected opportunity cost is $$EOC=E_T\!\left[\mu_T-\mu_t\right]=\sum_{i=1,\,i\ne t}^{k}\delta_{i,t}\,P(T=i),$$ where $k$ is the total number of designs, $t$ is the real best design, $T$ is the random variable whose realization is the observed best design, $\mu_i$ is the mean of the simulation samples of design $i$, $i=1,2,\ldots,k$, and $\delta_{i,j}=\mu_i-\mu_j$. The budget allocation problem with the EOC objective measure is given by Gao et al. (2017) as $$\min\; EOC \quad \text{subject to} \quad \sum_{i=1}^{k}\alpha_i=1,\quad \alpha_i\ge 0,\quad i=1,2,\ldots,k,$$ where $\alpha_i$ is the proportion of the total simulation budget allocated to design $i$. If we assume $\alpha_t\gg\alpha_i$ for all $i\ne t$, the asymptotically optimal budget allocation rule for this problem is $$\frac{\alpha_t^{2}}{\sigma_t^{2}}=\sum_{i=1,\,i\ne t}^{k}\frac{\alpha_i^{2}}{\sigma_i^{2}},\qquad \frac{\alpha_i}{\alpha_j}=\frac{\sigma_i^{2}/\delta_{i,t}^{2}}{\sigma_j^{2}/\delta_{j,t}^{2}},\quad i\ne j\ne t,$$ where $\sigma_i^{2}$ is the variance of the simulation samples of design $i$. This allocation rule is the same as the asymptotic optimal solution of problem (1). That is, asymptotically speaking, maximizing PCS and minimizing EOC amount to the same thing.
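A Python sketch of the asymptotic rule just stated, with assumed (made-up) sample means and variances: each non-best design receives weight proportional to $\sigma_i^{2}/\delta_{i,t}^{2}$, and the best design's share is then fixed by $\alpha_t^{2}/\sigma_t^{2}=\sum_{i\ne t}\alpha_i^{2}/\sigma_i^{2}$ before normalizing the proportions to sum to one. In practice these statistics would be re-estimated as simulation output accumulates; the function below only illustrates the formulas.

```python
import math

def ocba_allocation(means, variances):
    """Asymptotic OCBA budget proportions alpha_i for k designs.

    means[i], variances[i]: sample mean and variance of design i
    (treated as given for illustration; smaller mean is better).
    Implements: alpha_i / alpha_j = (sigma_i^2 / delta_{i,t}^2) /
                                    (sigma_j^2 / delta_{j,t}^2) for i, j != t,
                alpha_t^2 / sigma_t^2 = sum_{i != t} alpha_i^2 / sigma_i^2.
    """
    t = min(range(len(means)), key=lambda i: means[i])   # observed best design
    weights = [0.0] * len(means)
    for i, (mu, var) in enumerate(zip(means, variances)):
        if i != t:
            delta = mu - means[t]                        # delta_{i,t}
            weights[i] = var / delta ** 2
    weights[t] = math.sqrt(variances[t] * sum(
        w ** 2 / v for i, (w, v) in enumerate(zip(weights, variances)) if i != t))
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical sample statistics for five designs (smaller mean is better).
means = [4.0, 4.5, 8.5, 9.0, 10.0]
variances = [4.0, 4.0, 9.0, 9.0, 9.0]
print([round(a, 3) for a in ocba_allocation(means, variances)])
```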
OCBA with input uncertainty:
An implicit assumption for the aforementioned OCBA methods is that the true input distributions and their parameters are known, while in practice, they are typically unknown and have to be estimated from limited historical data. This may lead to uncertainty in the estimated input distributions and their parameters, which might (severely) affect the quality of the selection. Assuming that the uncertainty set contains a finite number of scenarios for the underlying input distributions and parameters, Gao et al. (2017) introduces a new OCBA approach by maximizing the probability of correctly selecting the best design under a fixed simulation budget, where the performance of a design is measured by its worst-case performance among all the possible scenarios in the uncertainty set.
Web-based demonstration of OCBA:
The following link provides an OCBA demonstration using a simple example. In the demo, OCBA performs and allocates computing budget differently as compared with traditional equal allocation approach. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ABCB11**
ABCB11:
ATP-binding cassette, sub-family B member 11 also known as ABCB11 is a protein which in humans is encoded by the ABCB11 gene.
Function:
The product of the ABCB11 gene is an ABC transporter named BSEP (bile salt export pump), or sPgp (sister of P-glycoprotein). This membrane-associated protein is a member of the superfamily of ATP-binding cassette (ABC) transporters. ABC proteins transport various molecules across extra- and intra-cellular membranes. ABC genes are divided into seven distinct subfamilies (ABC1, MDR/TAP, MRP, ALD, OABP, GCN20, White). This protein is a member of the MDR/TAP subfamily. Some members of the MDR/TAP subfamily are involved in multidrug resistance. This particular protein is responsible for the transport of taurocholate and other cholate conjugates from hepatocytes (liver cells) to the bile. In humans, the activity of this transporter is the major determinant of bile formation and bile flow.
Clinical significance:
ABCB11 is a gene associated with progressive familial intrahepatic cholestasis type 2 (PFIC2). PFIC2 caused by mutations in the ABCB11 gene increases the risk of hepatocellular carcinoma in early life. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hybrid electric bus**
Hybrid electric bus:
A hybrid electric bus is a bus that combines a conventional internal combustion engine propulsion system with an electric propulsion system. These buses normally use a Diesel-electric powertrain and are also known as hybrid Diesel-electric buses.
The introduction of hybrid electric vehicles and other green vehicles for purposes of public transport forms a part of sustainable transport schemes.
Powertrain:
Types of hybrid vehicle drivetrain A hybrid electric bus may have either a parallel powertrain (e.g., Volvo B5LH) or a series powertrain (e.g., some versions of the Alexander Dennis Enviro400 MMC).
Powertrain:
Plug-in hybrid A plug-in hybrid school bus effort began in 2003 in Raleigh, NC, when Advanced Energy began working with school districts across the country and with manufacturers to understand the needs of both. The effort demonstrated both technical and business feasibility and, as a result, secured funding in 2005 from NASEO to purchase up to 20 buses. The resulting RFP from Advanced Energy was won by IC Bus with a product jointly produced with Enova: a 22-mile plug-in hybrid carrying a $140k premium over existing buses. The buses performed well in testing, with 70% reductions in fuel usage, although only in specific conditions.
Powertrain:
The United States Department of Energy (USDOE) announced the selection of Navistar Corporation for a cost-shared award of up to $10 million to develop, test, and deploy plug-in hybrid electric (PHEV) school buses. The project aims to deploy 60 vehicles for a three-year period in school bus fleets across the nation. The vehicles will be capable of running in either electric-only or hybrid modes and will be recharged from a standard electrical outlet. Because electricity will be their primary fuel, they will consume less petroleum than standard vehicles. To develop the PHEV school bus, Navistar will examine a range of hybrid architectures and evaluate advanced energy storage devices, with the goal of developing a vehicle with a 40-mile (64 km) electric range. Travel beyond the 40-mile (64 km) range will be facilitated by a clean Diesel engine capable of running on renewable fuels. The DOE funding will cover up to half of the project's cost and will be provided over three years, subject to annual appropriations.
Powertrain:
Tribrid Bus Tribrid buses have been developed by the University of Glamorgan in Wales. They are powered by hydrogen fuel or solar cells, batteries and ultracapacitors.
Air pollution and greenhouse gas emissions:
A report prepared by Purdue University suggests introducing more hybrid Diesel-electric buses and a fuel containing 20% biodiesel (BD20) would further reduce greenhouse emissions and petroleum consumption.
Manufacturers:
Current manufacturers of Diesel-electric hybrid buses include Alexander Dennis, Azure Dynamics Corporation, Ebus, Eletra (Brazil), New Flyer Industries, Tata (India), Gillig, Motor Coach Industries, Orion Bus Industries, North American Bus Industries, Daimler AG's Mitsubishi Fuso, MAN, Designline, BAE Systems, Volvo Buses, VDL Bus & Coach, Wrightbus, Castrosua, Tata Hispano and many more.
Manufacturers:
Toyota claims to have started with the Coaster Hybrid Bus in 1997 on the Japanese market. Since 1999, hybrid electric buses with gas turbine generators have been developed by several manufacturers in the US and New Zealand, with the most successful design being the buses made by Designline of New Zealand. The first model went into commercial service in Christchurch in 1999, and later models were sold for daily service in Auckland, Hong Kong, Newcastle upon Tyne, and Tokyo. The Whispering Wheel bus is another HEV, using in-wheel motors; it was tested in winter 2003–04 in Apeldoorn in the Netherlands. In Japan, Mitsubishi Fuso developed a diesel engine hybrid bus using lithium batteries in 2002, and this model has since seen limited service in several Japanese cities. The Blue Ribbon City Hybrid bus was presented by Hino, a Toyota affiliate, in January 2005.
Manufacturers:
For the North American transit bus market, New Flyer Industries, Gillig, North American Bus Industries, and Nova Bus produce hybrid electric buses using components from either BAE Systems (series hybrid, initially branded HybriDrive and now branded Series-E), or Allison Transmission (parallel/series hybrid, branded Hybrid EP or H 40/50 EP). In May 2003 General Motors started to tour with hybrid electric buses developed together with Allison. General Electric introduced its hybrid electric gear shifters on the market in 2005. Several hundreds of those buses have entered into daily operation in the U.S. In 2006, Nova Bus, which had previously marketed the RTS hybrid before that model was discontinued, added a Diesel-electric hybrid option for its LFS series. In the United Kingdom, Wrightbus has introduced a development of the London "Double-Decker", a new interpretation of the traditional red buses that are a feature of the extreme traffic density in London. The Wright Pulsar Gemini HEV bus uses a small Diesel engine with electric storage through a lithium ion battery pack. The use of a 1.9-litre Diesel instead of the typical 7.0-litre engine in a traditional bus demonstrates the possible advantages of serial hybrids in extremely traffic-dense environments. Based on a London test cycle, a reduction in CO2 emissions of 31% and fuel savings in the range of 40% have been demonstrated, compared with a "Euro-4" compliant bus.
Manufacturers:
Former hybrid bus manufacturers ISE Corporation ThunderVolt (filed for bankruptcy in 2010) Azure Dynamics (filed for bankruptcy in 2012) Conversions Hybrid Electric Vehicle Technologies (HEVT) makes conversions of new and used vehicles (aftermarket and retrofit conversions), from combustion buses and conventional hybrid electric buses into plug-in buses.
List of transit authorities using hybrid electric buses:
Transit authorities that use hybrid electric buses: North America United States Federal funding generally comes from the federal Diesel Emissions Reduction Act.
List of transit authorities using hybrid electric buses:
ABQ RIDE (Albuquerque, New Mexico) Ann Arbor Area Transportation Authority (AAATA) (Ann Arbor, Michigan) Autoridad Metropolitana de Autobuses (San Juan, Puerto Rico) Baltimore, Maryland Bee-Line Bus System (Westchester County, New York) Berks Area Reading Transportation Authority (Berks County, Pennsylvania) Bloomington Transit (Bloomington, Indiana) Broome County Transit (Broome County, New York) Broward County Transit (Broward County, Florida) Capital Area Transportation Authority (Lansing, Michigan) Capital District Transportation Authority (Albany, New York) Central New York Regional Transportation Authority (Syracuse, New York) Charlotte Area Transit System (Charlotte, North Carolina) Chatham Area Transit (Savannah, Georgia) Chicago Transit Authority Citibus (Lubbock, Texas) Central Ohio Transit Authority (Columbus, Ohio) Clarksville Transit System (CTS) (Clarksville, Tennessee) Community Transit (Snohomish County, Washington) C-Tran (Vancouver, Washington) Citilink (Fort Wayne, Indiana) Cache Valley Transit District (Logan, Utah) DART First State (Delaware) Durham Area Transit Authority (Durham, North Carolina) Eureka Transit Service (Eureka, California) GoRaleigh (formerly Capital Area Transit) (Raleigh, North Carolina) Greater Lafayette Public Transportation Corporation (Lafayette, IN and West Lafayette, IN) Greater Lynchburg Transit Company (Lynchburg, VA) Greenville Area Transit (Greenville, North Carolina) Hillsborough Area Regional Transit (Hillsborough County, Florida) Howard Transit, (Howard County, Maryland) IndyGo (Indianapolis, Indiana) Jacksonville Transportation Authority Kanawha Valley Regional Transportation Authority Kansas City Area Transportation Authority King County Metro Transit Authority (Seattle, Washington) Lane Transit District (Lane County, Oregon) Long Beach Transit (Long Beach, California) LACMTA (Los Angeles, California) LANta (Lehigh Valley, Pennsylvania) Madison Metro Transit (Wisconsin) Manatee County Area Transit (Manatee County, Florida) Massachusetts Bay Transportation Authority (Boston, MA) MATA (Memphis, Tennessee) MATBUS – Metro Area Transit (Fargo, ND – Moorhead, MN) Metropolitan Transit Authority of Harris County, Texas (Houston, Texas) Minneapolis-Saint Paul Metro Transit MTA Maryland (Baltimore, Maryland) Nashville Metropolitan Transit Authority New York City Transit Authority Niagara Frontier Transportation Authority (Buffalo, New York) North County Transit District (North San Diego County, California) Orange County Transportation Authority (Orange County, California) Pioneer Valley Transit Authority (Springfield, Massachusetts) Port Authority of Allegheny County (Pittsburgh, Pennsylvania) Regional Transportation Commission of Southern Nevada/Citizens Area Transit (Las Vegas, Nevada) Rhode Island Public Transit Authority (Providence, Rhode Island). 1 gas and 1 diesel for testing use only; diesel was converted gas was hybrid from factory.
List of transit authorities using hybrid electric buses:
Roaring Fork Transportation Authority (Aspen, Colorado) San Diego Metropolitan Transit System/San Diego Transit (San Diego, California) San Francisco MUNI (San Francisco, California) San Joaquin Regional Transit District (Stockton, California) Santa Clara Valley Transportation Authority - VTA (Santa Clara County, California) Santa Rosa CityBus (Santa Rosa, California) Sarasota County Area Transit (Sarasota County, Florida) Sound Transit (Puget Sound region, Washington) Southeastern Pennsylvania Transportation Authority (Philadelphia, Pennsylvania) Southwest Ohio Regional Transit Authority (Cincinnati, Ohio) Spokane Transit Authority (Spokane, Washington) TCAT (Ithaca, NY) TheBus (Honolulu, Hawaii) The Rapid The Interurban Transit Partnership Grand Rapids, Michigan *Has 5 vehicles used in fixed route service.
List of transit authorities using hybrid electric buses:
TriMet (Portland, Oregon): two vehicles University of Michigan parking and transportation services (Ann Arbor, Michigan) Utah Transit Authority (Salt Lake City, Utah) Washington Metropolitan Area Transit Authority Canada Transit Windsor (Windsor, Ontario) Edmonton Transit System (Edmonton, Alberta) Hamilton Street Railway (Hamilton, Ontario) OC Transpo (Ottawa, Ontario) RTC (Quebec City, Quebec) RTL (Longueuil, Quebec) Saskatoon Transit, Saskatchewan STL (Laval, Quebec) STL (Lévis, Quebec) STM (Montreal, Quebec) STO (Gatineau, Quebec) STS (Sherbrooke, Quebec) St. Catharines Transit Commission (St. Catharines, Ontario) Toronto Transit Commission buses [673 out of 2137 regular buses are hybrid as of 2020] Coast Mountain Bus Company (Vancouver, British Columbia) BC Transit (Kelowna and Victoria).
List of transit authorities using hybrid electric buses:
GRT (Waterloo, Ontario) [currently 6 out of 218 buses in service are hybrid] London Transit Commission (London, Ontario) Strathcona County Transit (Strathcona County, Alberta) [as of 2014, 10 Nova Bus LFS HEV diesel-electric hybrid buses remain in service] Halifax Transit, Halifax, Nova Scotia. Currently owns two hybrid-electric buses.
Lethbridge Transit, Lethbridge, Alberta. 11 out of 42 buses are hybrid.
Asia China Beijing Public Transport Kunming Bus Shenzhen Bus Group Shenzhen Eastern Bus Shenzhen Western Bus Jinan Bus Zhengzhou Bus Group Hong Kong Citybus New World First Bus Kowloon Motor Bus India Delhi Multi-Module Transit Mumbai BEST CNG-Hybrid Iran Vehicle, Fuel and Environment Research Institute (VFERI) Pakistan TransPeshawar Greenline, Karachi Japan Marunouchi Shuttle, etc.
Philippines: Green Frog Hybrid Buses
Singapore: SBS Transit, SMRT Buses, Tower Transit Singapore
Thailand: BMTA
Europe
Belarus: Minsk, Slutsk
Germany: Dresden, Hagen, Lübeck, Munich, Nuremberg
Hungary: Budapest – The fleet consists of 28 Volvo 7900A Hybrid (articulated).
Kecskemét – The fleet consists of 20 Mercedes-Benz Citaro G BlueTec®-Hybrid (articulated).
Norway: Nettbuss, Hamar; Ruter, Oslo; Nettbuss, Trondheim; Nettbuss, Arendal; Nobina, Tromsø; Vestviken Kollektivtrafikk, Vestfold. Scania Citywide.
Romania STB, Bucharest – The fleet consists of 130 Mercedes-Benz Citaro Hybrid.
UK: The Green Bus Fund supports bus companies and local authorities in the UK in buying new electric buses.
London Buses, London. This is the largest fleet in the UK, with around 2,300 vehicles in use.
National Express West Midlands, Birmingham – 18 currently, 21 more planned; Stagecoach – Manchester, Oxford, Sheffield, Newcastle; Oxford Bus Company, Oxfordshire – 52 currently; FirstGroup – Bath, Somerset, Bristol, Manchester Metroshuttle, Leeds, Essex; Reading Buses; Lothian Buses; Cumfybus, Merseyside; Brighton & Hove; Stagecoach East Scotland, Aberdeen; Arriva Yorkshire, from April 2013
Spain: Barcelona (MAN Lion's City Hybrid); Empresa Municipal de Transportes, Madrid; Figueres, within the electric bus project, IDAE
Sweden: Jönköpings Länstrafik, Jönköping. MAN Lion's City Hybrid.
Göteborgs Spårvägar, Gothenburg. Volvo 7700 Hybrid.
Storstockholms Lokaltrafik, Stockholm. MAN Lion's City Hybrid.
Other European countries: Ljubljanski potniški promet (5 Kutsenits Hydra City II/III Hybrids), Ljubljana, Slovenia; Paris – RATP is using a hybrid electric bus outfitted with ultracapacitors; the model used is the MAN Lion's City Hybrid.
Milan, Italy; Team Trafikk, Trondheim, Norway, with 10 Volvo B5L; Vienna, Austria; PostAuto, Switzerland – one vehicle has been under test since April 2010; the test will continue for three years.
Warsaw, Poland – 4 Solaris hybrid (combustion-electric) buses; Luxembourg (Sales-Lentz, Emile Weber and AVL); Belgium / Flanders (De Lijn); Belgium / Wallonia (TEC) – 90 Volvo 7900H (plug-in hybrid) + 208 Solaris (combustion-electric) ordered in Q4 2016
Other countries: Egypt – IMUT: http://www.i-mut.net/en/about-us.
Buenos Aires, Argentina Christchurch, New Zealand Curitiba, Brazil Mexico City, Mexico (Metrobús Line 4) Bogotá, Colombia | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gold halide**
Gold halide:
Gold halides are compounds of gold with the halogens.
Monohalides:
AuCl, AuBr, and AuI are all crystalline solids with a structure containing alternating linear chains: ...-X-Au-X-Au-X-Au-X-... The Au-X-Au angle is less than 180°. The monomeric AuF molecule has been detected in the gas phase.
Trihalides:
Gold triiodide does not exist or is unstable. Gold(III) fluoride, AuF3, has a unique polymeric helical structure, containing corner-sharing {AuF4} squares.
Pentahalides:
Gold(V) fluoride, AuF5, is the only known example of gold in the +5 oxidation state. It most commonly occurs as the dimer Au2F10. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Xbox 360 controller**
Xbox 360 controller:
The Xbox 360 controller is the primary game controller for Microsoft's Xbox 360 home video game console and was introduced at E3 2005. The Xbox 360 controller comes in both wired and wireless versions. The original Xbox controller is not compatible with the Xbox 360. The wired and wireless versions are also compatible with Microsoft PC operating systems, such as Windows XP, Windows Vista, Windows 7, Windows 8, Windows 10, and Windows 11.
The wireless controllers run on either AA batteries or a rechargeable battery pack. The wired controllers may be connected to any of the USB ports on the console, or to an attached USB hub.
Design:
The Xbox 360 controller has the same basic familiar button layout as the Controller S except that a few of the auxiliary buttons have been moved. The "back" and "start" buttons have been moved to a more central position on the face of the controller, and the "white" and "black" buttons have been removed and replaced with two new bumpers that are positioned over the analog triggers on the back of the controller. The controller has a 2.5 mm TRS connector on the front, allowing users to connect a headset for voice communication. It also features a proprietary serial connector (which is split into 2 parts on either side of the headset connector) for use with additional accessories, such as the chatpad.
On August 31, 2010, Microsoft's Larry Hryb (a.k.a. Major Nelson) revealed a new design of the Xbox 360 controller set to replace the Wireless controller bundled with the Play & Charge Kit. Among small changes, such as the shape of the analog stick tops and grey-colored face buttons, the new controller features an adjustable directional pad which can be switched between a disc-type D-pad and a plus-shaped D-pad. The control pad was released in North America exclusively with Play & Charge Kits on November 9, 2010, and was released in Europe during February 2011. The Xbox 360 controller provides a standard USB human interface device software interface, but is designed for the Microsoft XInput interface library. Although many PC video games support the XInput library, some games might not work with this controller.
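The XInput path can be shown with a short example. The following is a minimal C sketch (not from the source) that polls controller slot 0 through the documented XInput entry point XInputGetState on Windows; anything beyond the documented Win32/XInput names is illustrative.

```c
/* Minimal sketch, assuming the Windows SDK XInput headers and library are available.
   Polls the first Xbox 360 controller and reads one digital button and two analog inputs. */
#include <stdio.h>
#include <windows.h>
#include <xinput.h>

#pragma comment(lib, "xinput.lib")

int main(void)
{
    XINPUT_STATE state;
    ZeroMemory(&state, sizeof(XINPUT_STATE));

    /* dwUserIndex 0..3 corresponds to the four quadrants of the "ring of light". */
    if (XInputGetState(0, &state) == ERROR_SUCCESS) {
        /* wButtons is a bitmask; XINPUT_GAMEPAD_A is the green face button. */
        int a_pressed = (state.Gamepad.wButtons & XINPUT_GAMEPAD_A) != 0;

        /* Triggers report 0..255; sticks report signed 16-bit axis values. */
        BYTE  left_trigger = state.Gamepad.bLeftTrigger;
        SHORT left_stick_x = state.Gamepad.sThumbLX;

        printf("A=%d LT=%u LX=%d\n", a_pressed, left_trigger, left_stick_x);
    } else {
        printf("No controller connected in slot 0.\n");
    }
    return 0;
}
```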
Layout:
A standard Xbox 360 controller features eleven digital buttons, two analog triggers, two analog sticks and a digital D-pad. The right face of the controller features four digital action buttons: a green button, red button, blue button, and yellow button. The lower right houses the right analog stick, the lower left a digital D-pad, and the left face the left analog stick. Both analog sticks can also be clicked in to activate a digital button beneath. In the center of the controller face are digital "Start", "Back" and "Guide" buttons. The "Guide" (more commonly known as simply the "Xbox") button is labelled with the Xbox logo, and is used to turn on the console/controller and to access the guide menu. It is also surrounded by the "ring of light", which indicates the controller number, as well as flashing when connecting and to provide notifications. The left and right "shoulders" each feature a digital shoulder button, or "bumper", and an analog trigger.
Wireless controllers also feature an additional "connect" button located between the "bumpers" to facilitate syncing with the console.
Standard colors:
Wired controllers are available in white (sold separately and bundled with the Core consoles) and black (Xbox 360 S color scheme), along with the limited edition TRON controllers. However, wireless controllers are available in numerous different colors including: White controllers were bundled with the Arcade, Pro, and Limited Edition Final Fantasy XIII Elite consoles; also sold separately.
Black controllers were bundled with the Elite consoles to match the casing; also sold separately (UPC/EAN 0885370145717, 885370239393).
Dark Blue controllers were released in October 2007 (US only).
Light Blue controllers were released in October 2007 (Europe and Japan only).
Pink controllers were released in October 2007.
Black S and White S controllers are bundled with Xbox 360 S consoles. These differ from their original counterparts in that they are completely one color, rather than with grey accents. The guide button has a mirror-like finish, and the analog sticks and D-pad are color matched. The bottom edge of this controller also features a glossy finish to match the Xbox 360 S 250 GB case design. "S" controllers also replace the Microsoft branding above the charging port with an Xbox 360 wordmark.
Limited and special edition colors:
Halo 3 "Spartan Green" controllers were included with the Halo 3 Special Edition Xbox 360 Pro consoles released in September 2007. The controller features 'black accents', with the D-pad, analog sticks, triggers and parts of the casing all changed to black instead of the usual gray.
"Limited Edition" "Spartan" and "Brute" controllers were released in September 2007. Two versions were available, each of which feature Halo 3-themed artwork (with either a "Spartan" or "Brute" design) from artist Todd McFarlane. Each version of the controller also included a Master Chief figurine (a different figure was included with each version).
Red "Limited Edition" controllers were released in September 2008. The controller features 'black accents' with the D-pad, analog sticks, triggers and parts of the casing all changed to black instead to the usual gray. It comes bundled with a Play & Charge Kit with a red rechargeable battery pack. The red controller is also included with the Limited Edition Resident Evil 5 Xbox 360 Elite console released in March 2009.
Green "Limited Edition" controllers were released in mid October 2008 in Europe, Asia, and Latin America. The green controller has a D-pad with 16-way functionality, instead of the 8-way D-pad used on all previous controllers. This controller was released alongside Pro Evolution Soccer 2009.
Dragon Design "Limited Edition" White and Black controllers were released in October 2008 and are available only through Walmart and Sam's Club. The controller features a black dragon (and other symbols) on a white background, along with a white D-pad and black analog sticks. It comes bundled with a black wired headset.
Halo 3: ODST "Special Edition" controllers were released in September 2009 in a "Collector's Pack" including the Halo 3: ODST game. The pack was originally exclusive to GameStop and retailed for US$99.99 in North America.
Radioactive Design "Exclusive" controllers were released in October 2009 and were available exclusively at GameStop and EB Games Australia. The controller features a carbon black shell with a red radiation symbol emanating from the right analog stick. The left analog stick is black and the right analog stick and D-pad are red. This controller was announced at Major Nelson's website and is said to be limited edition although the packaging makes no reference to this. It comes bundled with a Play & Charge Kit with a black rechargeable battery pack.
Halo: Reach "Special Edition" controllers were released on September 14, 2010, coinciding with the release of Halo: Reach. The controller is based around the Black S design (black analog sticks, D-pad, etc.; glossy black front; shiny guide button) with the matte black shell replaced with a satin silver shell, which also features a custom design based on the game. It was available separately and with the Halo: Reach Special Edition console bundle, which came bundled with two of the controllers.
Fable III "Special Edition" controllers were released on October 5, 2010, 3 weeks before the release of Fable III itself. The controller is based around the Black S design (black analog sticks, D-pad etc.; glossy black front; shiny guide button) and features a custom gold-colored shell and artwork. It also comes bundled with an exclusive downloadable tattoo set for use within the game.
TRON controllers were created by PDP and released in December 2010 to coincide with the film TRON: Legacy. The controllers were offered in two limited edition variations—one with blue LED illumination (20,000 units made) and the other with orange LED illumination (250 units made). Both versions are wired and feature textured grips and a raised, 4-way D-pad.
Blue "Limited Edition" controllers were released with Limited Edition Blue Xbox 360 consoles and both the console and the controller had turquoise accents and was released on October 7, 2014, exclusively at Walmart for a limited time.
Transforming D-pad controllers:
Transforming D-pad "special edition" controllers were released in the US on November 9, 2010, and in Europe during February 2011. The main feature of this controller is a D-pad that can be rotated to adapt to the user's gameplay, becoming either a "plus" (4-way) or a "disc" (8-way) D-pad. The controller also features new concave analog stick tops and grey tone face buttons (A, B, X and Y). The main shell of the controller is matte silver, with glossy black accents (triggers, bumpers and both front and rear panels) like the "Black S" design. This controller comes bundled with a matte black improved Play & Charge Kit with a matte black rechargeable battery pack, offering up to 35 hours of play. The codename for the controller during development was "Aberdeen".
Gears of War 3 "Limited Collector's Edition" controllers were released on September 20, 2011, to coincide with the launch of Gears of War 3. The controllers are metallic red with a black "Infected Omen" symbol and feature a transforming D-pad. Unlike the "Transforming D-pad" Special Edition controller, the Gears of War 3 LCE controller features the standard colored face buttons and analog stick tops found on other controllers. It was sold with the Gears of War 3 LCE console bundle, which came bundled with two of the controllers.
Call of Duty: Modern Warfare 3 "Limited Edition" controllers were released on November 8, 2011, in North America, Australasia and the EMEA (Europe, Middle East and Africa) region to coincide with the release of Call of Duty: Modern Warfare 3. It features custom Modern Warfare 3 artwork (predominantly matte grey), a transforming D-pad and the same concave analog stick tops found on the original transforming D-pad controller. All (non-face) buttons, as well as the analog sticks, are black. It was sold with the Modern Warfare 3 Limited Edition console bundle which came bundled with two of the controllers.
Star Wars C-3PO "Limited Edition" controllers were released in April 2012 to coincide with the release of Kinect Star Wars. The controller is mirrored gold and black, and features a transforming D-pad, concave analog stick tops and standard colored face buttons. The black panel at the front of the controller also features "wiring" artwork, resembling the parts of C-3PO that are not covered in gold plating in the original Star Wars films. It was bundled with the Kinect Star Wars Limited Edition console bundle.
Chrome Series "Special Edition" controllers were released in May 2012. The chrome series controllers are available in six colors: Blue, Red, Silver, Gold, Black and Purple. These controllers feature a transforming D-pad, concave analog stick tops and standard colored face buttons.
Black S controllers with a transforming D-pad, concave analog stick tops and standard colored face buttons, bundled with an improved black Play & Charge Kit and a black rechargeable battery pack, were released in October 2012.
Halo 4 "Limited Edition" controllers were released in November 2012. Two different controllers are available: Halo 4 branded "Limited Edition" 'exclusive controllers inspired by the game' were sold with the Halo 4 Limited Edition console bundle; two were included. These feature a transforming D-pad, concave analog stick tops, standard colored face buttons and a glowing blue Xbox guide button instead of the traditional glowing green Xbox guide button.
UNSC Halo 4 "Limited Edition" controllers feature the United Nations Space Command (UNSC) emblem on a dark grey translucent case, and also feature a transforming D-pad, concave analog stick tops, standard colored face buttons and a glowing blue Xbox guide button.
Tomb Raider "Limited Edition" controllers were released in early March 2013 to complement the launch of Tomb Raider. They are red and feature a two layer color finish with laser etching to create a realistic and tactile worn appearance inspired by Lara's climbing axe from the game. These feature a transforming D-pad, concave analog stick tops and standard colored face buttons. The controllers also come bundled with a downloadable token for an Xbox 360-exclusive playable Tomb Raider character.
Non-retail colors:
Launch Team Edition controllers were bundled with the "Xbox 360 Launch Team Edition", given exclusively to members of the Xbox launch team by Microsoft in November 2005. These white wireless controllers feature green accents at the front in place of the standard grey.
Yellow controllers were included with the 100 Limited Edition The Simpsons Movie Xbox 360 Pro consoles announced in May 2007, and given away as prizes in special events and promotions.
Orange coloured LIVE TURNS FIVE controllers were released in November 2007, and were given away to selected members of the media.
Guide button:
The Xbox 360 controller has a guide button in the center of its face that provides a new functionality. This button is surrounded by a ring of lights divided into four quadrants that provide gamers with different types of information during game play. For instance, during a split screen multiplayer match, a particular quadrant will light up to indicate to a player which part of the screen they are playing on at that time. In this case, when the user pushes the button, they access the Xbox guide; a menu which provides access to features like messaging friends, downloading content, voice chat and customizing soundtracks, while staying in the game. The Guide button also allows users to turn off the controller or the console by holding the button for a few seconds (rather than simply pressing it).
Accessories:
Rechargeable Battery Pack:
The Rechargeable Battery Pack is a nickel metal hydride (NiMH) battery pack, which provides up to 24 hours of continuous gaming for the wireless controller. It is an alternative to disposable AA batteries, which differ slightly in voltage and have higher disposal costs (financial and environmental). It ships as part of, and can be charged by, the Play & Charge Kit and the Quick Charge Kit. To fully charge the battery pack takes approximately 2 hours with the Quick Charge Kit; the Play & Charge Kit takes longer (and depends on whether the controller is being used). An upgraded, 35-hour version is included with improved Play & Charge Kits and "transforming D-pad" controllers, while a 40-hour version is included with the improved Quick Charge Kit.
Wireless Gaming Receiver:
The Wireless Gaming Receiver (sold as "Crossfire Wireless Gaming Receiver" in the UK) allows wireless Xbox 360 accessories, such as wireless gamepads, racing wheels and headsets, to be used on a Windows-based PC. The device acts in a similar manner to an Xbox 360, allowing up to 4 controllers and 4 headsets at a time to be connected to the receiver. The device has a 30-foot (10 meter) range and a six-foot (2 meter) USB cable. It is specifically designed to work with games bearing the "Games for Windows" logo, but will function with most games that permit a standard PC gamepad. The official Xbox website noted that the adapter will work with "all future wireless devices".
Messenger Kit:
The Messenger Kit consists of a wired Xbox 360 headset and a small keyboard known as the "Chatpad". The Chatpad connects to the front of the controller and may be used for any standard text input on the console. It is not currently compatible with the wireless gaming receiver.
Non-gaming uses:
The United States Navy has announced that it plans to use Xbox 360 controllers to control periscopes on new Virginia-class submarines, for both cost and familiarity reasons.
Reception:
The Xbox 360 controller received positive reviews when it was released. Before then, as IGN stated, the original Xbox controller was "huge, ugly, cheap, and uncomfortable" and was concluded to be an "abomination". Many of these problems were corrected with Microsoft's releases of the Xbox Controller S and then the Xbox 360 controller. IGN credited the Xbox 360 controller with being one of "the most ergonomically comfortable console controllers around". It was also praised for its improved button placement, its logo functioning as a button, and Microsoft's choice to bottom-mount the headset ports rather than top-mount them, so as to minimize snagged-wire problems.
**Comparison of application virtualization software**
Comparison of application virtualization software:
Application virtualization software refers to both application virtual machines and software responsible for implementing them. Application virtual machines are typically used to allow application bytecode to run portably on many different computer architectures and operating systems. The application is usually run on the computer using an interpreter or just-in-time compilation (JIT). There are often several implementations of a given virtual machine, each covering a different set of functions.
Comparison of virtual machines:
JavaScript machines are not included; see List of ECMAScript engines to find them. The table here summarizes elements for which the virtual machine designs are intended to be efficient, not the list of abilities present in any implementation.
Virtual machine instructions process data in local variables using a main model of computation, typically that of a stack machine, register machine, or random access machine (often called the memory machine). Use of these three methods is motivated by different tradeoffs in virtual machines vs physical machines, such as ease of interpreting, compiling, and verifying for security.
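To make the stack-machine versus register-machine contrast concrete, here is a minimal, illustrative C sketch; the toy opcodes and the expression 2 + 3 * 4 are inventions for this example and do not belong to any of the virtual machines compared in the table.

```c
/* Minimal sketch: the same expression 2 + 3 * 4 evaluated by a toy stack machine
   (operands on an implicit stack) and a toy register machine (operands named
   explicitly in every instruction). */
#include <stdio.h>

enum { S_PUSH, S_ADD, S_MUL, S_HALT };          /* stack-machine opcodes */

static int run_stack(const int *code)
{
    int stack[16], sp = 0;
    for (int pc = 0; ; ) {
        switch (code[pc++]) {
        case S_PUSH: stack[sp++] = code[pc++]; break;
        case S_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case S_MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case S_HALT: return stack[sp - 1];
        }
    }
}

enum { R_LOADI, R_ADD, R_MUL, R_HALT };         /* register-machine opcodes */

static int run_reg(const int *code)
{
    int r[8] = {0};
    for (int pc = 0; ; ) {
        switch (code[pc]) {
        case R_LOADI: r[code[pc + 1]] = code[pc + 2];                      pc += 3; break;
        case R_ADD:   r[code[pc + 1]] = r[code[pc + 2]] + r[code[pc + 3]]; pc += 4; break;
        case R_MUL:   r[code[pc + 1]] = r[code[pc + 2]] * r[code[pc + 3]]; pc += 4; break;
        case R_HALT:  return r[code[pc + 1]];
        }
    }
}

int main(void)
{
    /* 2 + 3 * 4 as stack code: push 2, push 3, push 4, mul, add */
    const int stack_code[] = { S_PUSH, 2, S_PUSH, 3, S_PUSH, 4, S_MUL, S_ADD, S_HALT };
    /* the same expression as three-address register code */
    const int reg_code[]   = { R_LOADI, 0, 2, R_LOADI, 1, 3, R_LOADI, 2, 4,
                               R_MUL, 1, 1, 2, R_ADD, 0, 0, 1, R_HALT, 0 };
    printf("stack machine: %d\n", run_stack(stack_code));   /* prints 14 */
    printf("register machine: %d\n", run_reg(reg_code));    /* prints 14 */
    return 0;
}
```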
Memory management in these portable virtual machines is addressed at a higher level of abstraction than in physical machines. Some virtual machines, such as the popular Java virtual machines (JVM), are involved with addresses in such a way as to require safe automatic memory management by allowing the virtual machine to trace pointer references, and disallow machine instructions from manually constructing pointers to memory. Other virtual machines, such as LLVM, are more like traditional physical machines, allowing direct use and manipulation of pointers. Common Intermediate Language (CIL) offers a hybrid in between, allowing both controlled use of memory (like the JVM, which allows safe automatic memory management), while also allowing an 'unsafe' mode that allows direct pointer manipulation in ways that can violate type boundaries and permission.
Code security generally refers to the ability of the portable virtual machine to run code while offering it only a prescribed set of abilities. For example, the virtual machine might only allow the code access to a certain set of functions or data. The same controls over pointers that make automatic memory management possible and allow the virtual machine to ensure typesafe data access are used to guarantee that a code fragment is only allowed access to certain elements of memory and cannot bypass the virtual machine itself. Other security mechanisms are then layered on top, such as code verifiers, stack verifiers, and other methods.
An interpreter allows programs made of virtual instructions to be loaded and run immediately without a potentially costly compile into native machine instructions. Any virtual machine which can be run can be interpreted, so the column designation here refers to whether the design includes provisions for efficient interpreting (for common usage).
Just-in-time compilation (JIT) refers to a method of compiling to native instructions at the latest possible time, usually immediately before or during the running of the program. The challenge of JIT is more one of implementation than of virtual machine design; however, modern designs have begun to make considerations to help efficiency. The simplest JIT methods simply compile to a code fragment much as an offline compiler would. However, more complex methods are often employed, which specialize compiled code fragments to parameters known only at runtime (see Adaptive optimization).
Ahead-of-time compilation (AOT) refers to the more classic method of using a precompiler to generate a set of native instructions which do not change during the runtime of the program. Because aggressive compiling and optimizing can take time, a precompiled program may launch faster than one which relies on JIT alone for execution. JVM implementations have mitigated this startup cost by initial interpreting to speed launch times, until native code fragments can be generated by JIT.
Shared libraries are a facility to reuse segments of native code across multiple running programs. In modern operating systems, this generally means using virtual memory to share the memory pages containing a shared library across different processes which are protected from each other via memory protection. It is interesting that aggressive JIT methods such as adaptive optimization often produce code fragments unsuitable for sharing across processes or successive runs of the program, requiring a tradeoff be made between the efficiencies of precompiled and shared code and the advantages of adaptively specialized code. For example, several design provisions of CIL are present to allow for efficient shared libraries, possibly at the cost of more specialized JIT code. The JVM implementation on OS X uses a Java Shared Archive to provide some of the benefits of shared libraries.
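As a small illustration of the native-code sharing mechanism this paragraph refers to (not an example from the source), the following POSIX C sketch loads a shared library at runtime and resolves a symbol from it; the library name libm.so.6 is an assumption about the host system, and linking may require -ldl on older toolchains.

```c
/* Minimal sketch, assuming a POSIX system where libm.so.6 exists.
   The read-only pages of the mapped library can be shared across processes. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *lib = dlopen("libm.so.6", RTLD_NOW);   /* map the shared library */
    if (!lib) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve the precompiled, shareable implementation of cos(). */
    double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
    if (cosine)
        printf("cos(0) = %f\n", cosine(0.0));

    dlclose(lib);
    return 0;
}
```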
Comparison of application virtual machine implementations:
In addition to the portable virtual machines described above, virtual machines are often used as an execution model for individual scripting languages, usually by an interpreter. This table lists specific virtual machine implementations, both of the above portable virtual machines, and of scripting language virtual machines. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dad joke**
Dad joke:
A dad joke is a joke, typically a pun, often presented as a one-liner or a question and answer, but less often a narrative. Generally inoffensive, dad jokes are stereotypically told with sincere humorous intent or to intentionally provoke a negative reaction to their overly simplistic humor.
Many dad jokes may be considered anti-jokes, deriving humor from a punchline that is intentionally not funny. A common type of dad joke goes as follows: A child will say to the father, "I'm hungry," to which the father will reply, "Hi, Hungry, I'm Dad." While the exact origin of the term dad joke is unknown, a writer for the Gettysburg Times wrote an impassioned defence of the genre in June 1987 under the headline "Don't ban the 'Dad' jokes; preserve and revere them". The term "dad jokes" received mentions in the American sitcom How I Met Your Mother in 2008 and the Australian quiz show Spicks and Specks in 2009. In September 2019, Merriam-Webster added the phrase "dad joke" to the dictionary.
Examples:
Q: What does a highlighter say when it answers the phone? A: Yello!!
Q: What's orange and sounds like a parrot? A: A carrot.
Q: What's brown and sticky? A: A stick.
Q: Where does a sick fish go? A: The dock.
Q: What do a tick and the Eiffel Tower have in common? A: They're both Paris sites.
Q: What's the difference between a pun and a Dad joke? A: It will become apparent.
Q: What did the fish say when he swam into the wall? A: Dam! | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Identric mean**
Identric mean:
The identric mean of two positive real numbers x, y is defined as:

$$
I(x,y) \;=\; \frac{1}{e}\,\lim_{(\xi,\eta)\to(x,y)}\left(\frac{\xi^{\xi}}{\eta^{\eta}}\right)^{\!\frac{1}{\xi-\eta}}
\;=\; \lim_{(\xi,\eta)\to(x,y)}\exp\!\left(\frac{\xi\ln\xi-\eta\ln\eta}{\xi-\eta}-1\right)
\;=\;
\begin{cases}
x & \text{if } x = y,\\[4pt]
\dfrac{1}{e}\left(\dfrac{x^{x}}{y^{y}}\right)^{\frac{1}{x-y}} & \text{else.}
\end{cases}
$$

It can be derived from the mean value theorem by considering the secant of the graph of the function ln x. It can be generalized to more variables according to the mean value theorem for divided differences. The identric mean is a special case of the Stolarsky mean. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
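For a numerical check of the definition above, here is a minimal C sketch (illustrative, not from the source) that evaluates the identric mean through the equivalent exp/log form; the function name identric_mean is our own.

```c
/* Minimal sketch: I(x, y) = (1/e) * (x^x / y^y)^(1/(x-y)) for x != y, and I(x, x) = x,
   evaluated via exp((x ln x - y ln y)/(x - y) - 1) to avoid overflowing x^x. */
#include <math.h>
#include <stdio.h>

static double identric_mean(double x, double y)
{
    if (x == y)
        return x;                                        /* limiting case */
    return exp((x * log(x) - y * log(y)) / (x - y) - 1.0);
}

int main(void)
{
    /* I(1, 2) = 4/e, which lies between the geometric and arithmetic means of 1 and 2 */
    printf("I(1, 2) = %f\n", identric_mean(1.0, 2.0));   /* about 1.4715 */
    return 0;
}
```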