**Probabilistic method**
Probabilistic method:
In mathematics, the probabilistic method is a nonconstructive method, primarily used in combinatorics and pioneered by Paul Erdős, for proving the existence of a prescribed kind of mathematical object. It works by showing that if one randomly chooses objects from a specified class, the probability that the result is of the prescribed kind is strictly greater than zero. Although the proof uses probability, the final conclusion is determined for certain, without any possible error.
This method has now been applied to other areas of mathematics such as number theory, linear algebra, and real analysis, as well as in computer science (e.g. randomized rounding), and information theory.
Introduction:
If every object in a collection of objects fails to have a certain property, then the probability that a random object chosen from the collection has that property is zero.
Similarly, showing that the probability is (strictly) less than 1 can be used to prove the existence of an object that does not satisfy the prescribed properties.
Another way to use the probabilistic method is by calculating the expected value of some random variable. If it can be shown that the random variable can take on a value less than the expected value, this proves that the random variable can also take on some value greater than the expected value. Alternatively, the probabilistic method can also be used to guarantee the existence of a desired element in a sample space with a value that is greater than or equal to the calculated expected value, since the non-existence of such element would imply every element in the sample space is less than the expected value, a contradiction.
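The averaging argument can be checked numerically. The sketch below (illustrative values, not from the text) confirms that in any finite sample space some outcome is at least the expected value and some outcome is at most it:

```python
import random
import statistics

# Draw a toy sample space of outcomes (hypothetical data).
random.seed(0)
values = [random.randint(0, 100) for _ in range(1000)]
mean = statistics.fmean(values)  # the "expected value" over this space

witness_high = max(values)  # an element with value >= the mean must exist
witness_low = min(values)   # likewise an element with value <= the mean
assert witness_low <= mean <= witness_high
print(round(mean, 2), witness_low, witness_high)
```

If every value were strictly below the mean, the average itself would be below the mean, a contradiction; the `max` call simply exhibits the witness the argument guarantees.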
Common tools used in the probabilistic method include Markov's inequality, the Chernoff bound, and the Lovász local lemma.
Two examples due to Erdős:
Although others before him proved theorems via the probabilistic method (for example, Szele's 1943 result that there exist tournaments containing a large number of Hamiltonian cycles), many of the most well known proofs using this method are due to Erdős. The first example below describes one such result from 1947 that gives a proof of a lower bound for the Ramsey number R(r, r).
First example Suppose we have a complete graph on n vertices. We wish to show (for small enough values of n) that it is possible to color the edges of the graph in two colors (say red and blue) so that there is no complete subgraph on r vertices which is monochromatic (every edge colored the same color).
To do so, we color the graph randomly. Color each edge independently, with probability 1/2 of being red and 1/2 of being blue. We calculate the expected number of monochromatic subgraphs on r vertices as follows: for any set $S_r$ of r vertices from our graph, define the variable $X(S_r)$ to be 1 if every edge amongst the r vertices is the same color, and 0 otherwise. Note that the number of monochromatic r-subgraphs is the sum of $X(S_r)$ over all possible subsets $S_r$. For any individual set $S_r^i$, the expected value of $X(S_r^i)$ is simply the probability that all of the $\binom{r}{2}$ edges in $S_r^i$ are the same color: $E[X(S_r^i)] = 2 \cdot 2^{-\binom{r}{2}}$ (the factor of 2 comes because there are two possible colors).
This holds true for any of the $\binom{n}{r}$ possible subsets we could have chosen, i.e. $i$ ranges from 1 to $\binom{n}{r}$. So the sum of $E[X(S_r^i)]$ over all $S_r^i$ is $$\sum_{i=1}^{\binom{n}{r}} E[X(S_r^i)] = \binom{n}{r} 2^{1-\binom{r}{2}}.$$
The sum of expectations is the expectation of the sum (regardless of whether the variables are independent), so the expected number of all monochromatic r-subgraphs is $$E\left[\sum_i X(S_r^i)\right] = \binom{n}{r} 2^{1-\binom{r}{2}}.$$
Consider what happens if this value is less than 1. Since the expected number of monochromatic r-subgraphs is strictly less than 1, there must exist a coloring in which the number of monochromatic r-subgraphs is strictly less than 1. The number of monochromatic r-subgraphs in such a coloring is a non-negative integer, hence it must be 0 (0 is the only non-negative integer less than 1). It follows that if $\binom{n}{r} 2^{1-\binom{r}{2}} < 1$ (which holds, for example, for n = 5 and r = 4), there must exist a coloring in which there are no monochromatic r-subgraphs. By definition of the Ramsey number, this implies that R(r, r) must be bigger than n. In particular, R(r, r) must grow at least exponentially with r.
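As a sanity check, the expectation bound can be evaluated directly, and a witness coloring found by random search. This is an illustrative sketch, not part of the original argument; the helper names are ours:

```python
from math import comb
from itertools import combinations
import random

def expected_mono(n, r):
    # E[number of monochromatic K_r] = C(n, r) * 2^(1 - C(r, 2))
    return comb(n, r) * 2 ** (1 - comb(r, 2))

def has_mono_subgraph(coloring, n, r):
    # True if some r-subset has all its edges in one color
    return any(
        len({coloring[e] for e in combinations(S, 2)}) == 1
        for S in combinations(range(n), r)
    )

n, r = 5, 4
assert expected_mono(n, r) < 1  # 5/32, so a good coloring must exist

# The proof is nonconstructive, but for K_5 random search finds a witness fast.
random.seed(1)
edges = list(combinations(range(n), 2))
coloring = {e: random.randrange(2) for e in edges}
while has_mono_subgraph(coloring, n, r):
    coloring = {e: random.randrange(2) for e in edges}
print("found a 2-coloring of K_5 with no monochromatic K_4")
```

Since the expected number of monochromatic $K_4$'s is only 5/32, most random colorings already work, which is why the search terminates quickly.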
A weakness of this argument is that it is entirely nonconstructive. Even though it proves (for example) that almost every coloring of the complete graph on $(1.1)^r$ vertices contains no monochromatic r-subgraph, it gives no explicit example of such a coloring. The problem of finding such a coloring has been open for more than 50 years.
Second example A 1959 paper of Erdős addressed the following problem in graph theory: given positive integers g and k, does there exist a graph G containing only cycles of length at least g, such that the chromatic number of G is at least k? It can be shown that such a graph exists for any g and k, and the proof is reasonably simple. Let n be very large and consider a random graph G on n vertices, where every edge in G exists with probability $p = n^{1/g - 1}$. We show that with positive probability, G satisfies the following two properties:
Property 1. G contains at most n/2 cycles of length less than g.
Proof. Let X be the number of cycles of length less than g. The number of cycles of length i in the complete graph on n vertices is $$\frac{n!}{2 \cdot i \cdot (n-i)!} \le \frac{n^i}{2},$$ and each of them is present in G with probability $p^i$. Hence, by Markov's inequality, $$\Pr\left(X > \tfrac{n}{2}\right) \le \frac{2}{n} E[X] \le \frac{1}{n} \sum_{i=3}^{g-1} p^i n^i = \frac{1}{n} \sum_{i=3}^{g-1} n^{i/g} \le \frac{g}{n} n^{(g-1)/g} = g n^{-1/g} = o(1).$$
Thus for sufficiently large n, property 1 holds with a probability of more than 1/2.
Property 2. G contains no independent set of size $\lceil \tfrac{n}{2k} \rceil$.
Proof. Let Y be the size of the largest independent set in G. Clearly, we have $$\Pr(Y \ge y) \le \binom{n}{y} (1-p)^{\binom{y}{2}} < \left(n e^{-p(y-1)/2}\right)^y = o(1),$$ when $y = \lceil \tfrac{n}{2k} \rceil$.
Thus, for sufficiently large n, property 2 holds with a probability of more than 1/2. For sufficiently large n, the probability that a graph from the distribution has both properties is then positive: each property fails with probability less than 1/2, so the probability that at least one fails is less than 1.
Here comes the trick: since G has these two properties, we can remove at most n/2 vertices from G (one from each short cycle) to obtain a new graph G′ on $n' \ge n/2$ vertices that contains only cycles of length at least g. Since G has no independent set of size $\lceil \tfrac{n}{2k} \rceil$ and $n' \ge n/2$, the new graph has no independent set of size $\lceil \tfrac{n'}{k} \rceil$. Any proper coloring of G′ partitions its vertices into independent sets, each of size less than $\lceil \tfrac{n'}{k} \rceil$; hence at least k such sets are needed, and G′ has chromatic number at least k.
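The two tail bounds in the proof can be checked numerically. The sketch below uses illustrative parameters (g = 4, k = 2, and n = 10^12 are our choices for the demo, not from the text): it evaluates the short-cycle bound g·n^(−1/g) and the logarithm of the inner factor n·e^(−p(y−1)/2) from property 2, which must be negative for the bound to vanish.

```python
from math import ceil, log

def bounds(n, g, k):
    # Numeric sanity check of the two tail bounds in the proof sketch.
    p = n ** (1 / g - 1)           # edge probability p = n^(1/g - 1)
    y = ceil(n / (2 * k))          # candidate independent-set size
    cycle_bound = g * n ** (-1 / g)         # P(X > n/2) <= g * n^(-1/g)
    # P(Y >= y) <= (n * e^(-p(y-1)/2))^y; work with the log of the
    # inner factor to avoid astronomically small floats
    inner_log = log(n) - p * (y - 1) / 2
    return cycle_bound, inner_log

cycle_bound, inner_log = bounds(10**12, 4, 2)
assert cycle_bound < 0.01  # short-cycle bound is already tiny
assert inner_log < 0       # so (n * e^(-p(y-1)/2))^y -> 0
print(cycle_bound, inner_log)
```

Note that n must be genuinely large before the second bound kicks in; for small n the term log(n) dominates and `inner_log` is still positive.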
This result gives a hint as to why the computation of the chromatic number of a graph is so difficult: even when there are no local reasons (such as small cycles) for a graph to require many colors the chromatic number can still be arbitrarily large.
Additional resources:
Probabilistic Methods in Combinatorics, MIT OpenCourseWare
**Short Code (computer language)**
Short Code (computer language):
Short Code was one of the first higher-level languages developed for an electronic computer. Unlike machine code, Short Code statements represented mathematical expressions rather than machine instructions. In an approach known at the time as automatic programming, the source code was not compiled but executed through an interpreter; this simplified the programming process at the cost of much slower execution.
History:
Short Code was proposed by John Mauchly in 1949 and originally known as Brief Code. William Schmitt implemented a version of Brief Code in 1949 for the BINAC computer, though it was never debugged and tested. The following year Schmitt implemented a new version of Brief Code for the UNIVAC I, where it was now known as Short Code (also Short Order Code). A revised version of Short Code was developed in 1952 for the Univac II by A. B. Tonik and J. R. Logan.
While Short Code represented expressions, the representation itself was not direct and required a process of manual conversion. Elements of an expression were represented by two-character codes and then divided into 6-code groups in order to conform to the 12-byte words used by BINAC and Univac computers. For example, the expression a = (b + c) / b * c was converted to Short Code by a sequence of substitutions and a final regrouping:

X3 = ( X1 + Y1 ) / X1 * Y1   (substitute variables)
X3 03 09 X1 07 Y1 02 04 X1 Y1   (substitute operators and parentheses; note that multiplication is represented by juxtaposition)
0000X30309X1 07Y10204X1Y1   (group into 12-byte words)

Along with basic arithmetic, Short Code allowed for branching and calls to a library of functions. The language was interpreted and ran about 50 times slower than machine code.
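The manual conversion described above can be mimicked in modern code. The following Python sketch is purely illustrative (Short Code itself had no such tool, and the code table is limited to the operators appearing in this one example): it substitutes the two-character operator codes, drops the multiplication sign (juxtaposition), and regroups into 12-character words with zero padding.

```python
# Two-character codes for the operators used in the example above;
# multiplication has no code and is expressed by juxtaposition.
CODES = {"=": "03", "(": "09", "+": "07", ")": "02", "/": "04"}

def encode(tokens):
    # substitute operator codes; variable names (X1, Y1, ...) pass through
    codes = [CODES.get(t, t) for t in tokens if t != "*"]
    # group into words of six codes (12 characters), zero-padding on the left
    pad = (-len(codes)) % 6
    codes = ["00"] * pad + codes
    return ["".join(codes[i:i + 6]) for i in range(0, len(codes), 6)]

# a = (b + c) / b * c, after the variables are renamed X3, X1, Y1:
tokens = "X3 = ( X1 + Y1 ) / X1 * Y1".split()
print(encode(tokens))  # ['0000X30309X1', '07Y10204X1Y1']
```

The output reproduces the two machine words shown in the worked example.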
**HPGD**
HPGD:
Hydroxyprostaglandin dehydrogenase 15-(NAD) (HUGO-approved official symbol HPGD; HGNC ID HGNC:5154), also called 15-hydroxyprostaglandin dehydrogenase [NAD+], is an enzyme that in humans is encoded by the HPGD gene. In melanocytic cells, HPGD gene expression may be regulated by MITF.
Functions:
15-hydroxyprostaglandin dehydrogenase (HPGD) is an enzyme belonging to the family of oxidoreductases, specifically short-chain dehydrogenase/reductase family 36C member 1. This protein-coding gene encodes a member of the short-chain alcohol dehydrogenase protein family. HPGD catalyzes the first step in the catabolic pathway of prostaglandins and is therefore responsible for their metabolic inactivation. This inactivation oxidizes the 15-hydroxyl group of prostaglandins to yield the corresponding 15-keto (oxo) metabolite. Prostaglandins have a critical role in the signaling pathways involved in reproduction (establishment of pregnancy, maintenance of pregnancy, and initiation of labor), blood pressure homeostasis (vasoconstriction and vasodilation), sexual dimorphism, and the immune system (inflammation). HPGD thus has a critical role in the regulation of prostaglandin expression.
Expression:
HPGD RNA-seq was performed on tissue samples from 95 human individuals representing 27 different tissues to determine the tissue-specificity of all protein-coding genes. HPGD was expressed in the adrenal gland, appendix, bone marrow, brain, colon, duodenum, endometrium, esophagus, fat, gall bladder, heart, kidney, liver, lung, lymph node, ovary, pancreas, placenta, prostate, salivary gland, skin, small intestine, spleen, stomach, testis, thyroid, and urinary bladder.
Defects in 15-HPGD:
15-HPGD has an underappreciated role in the maintenance of pregnancy. In mice, 15-HPGD has been shown to have essential roles in the prevention of early termination of pregnancy and of maternal morbidity. In 15-HPGD knockout (KO) mice, early pregnancy termination was detected: KO mice that were able to establish pregnancy lost it by gestation day ~8.5. At the time of pregnancy loss, 15-HPGD KO mice have normal levels of PGE2, increased levels of PGF2α, and decreasing levels of serum progesterone. A hypomorphic mutation of 15-HPGD causes mice to enter labor about a full day earlier than their wild-type littermates, due to elevated circulating PGF2α concentrations. From this it was concluded that 15-HPGD has a critical role in determining the timing of labor.
**Héctor Chang**
Héctor Chang:
Héctor Chang Lara is a Venezuelan mathematician working at CIMAT, Guanajuato unit, in Mexico. Chang received his BA in Mathematics from Simon Bolivar University in Venezuela, his MS from the University of New Mexico and his PhD in mathematics from the University of Texas at Austin, advised by Luis Caffarelli. Chang works in partial differential equations, specializing in elliptic and parabolic differential equations as well as integro-differential equations and free boundary problems.
Publications:
Further Time Regularity for Non-Local, Fully Non-Linear Parabolic Equations. (CPAM) Further time regularity for fully non-linear parabolic equations. (Math. Research Letters).
Estimates for concave, non-local parabolic equations with critical drift. (Journal of Integral Equations and Applications) Hölder estimates for non-local parabolic equations with critical drift. (Journal of Differential Equations).
Shape Theorems for Poisson Hail on a Bivariate Ground. (Journal of Advances in Applied Probability).
Boundaries on Two-Dimensional Cones. (Journal of Geometric Analysis).
Regularity for solutions of nonlocal parabolic equations II. (Journal of Differential Equations).
Regularity for solutions of non local parabolic equations. (Calculus of Variations and Partial Differential Equations).
Regularity for solutions of nonlocal, non symmetric equations. (Ann. Inst. H. Poincaré Anal. Non Linéaire).
**2-Deoxy-scyllo-inosamine dehydrogenase**
2-Deoxy-scyllo-inosamine dehydrogenase:
2-deoxy-scyllo-inosamine dehydrogenase (EC 1.1.1.329, neoA (gene name), kanK (gene name)) is an enzyme with systematic name 2-deoxy-scyllo-inosamine:NAD(P)+ 1-oxidoreductase. This enzyme catalyses the following chemical reaction:

2-deoxy-scyllo-inosamine + NAD(P)+ ⇌ 3-amino-2,3-dideoxy-scyllo-inosose + NAD(P)H + H+

This enzyme requires zinc.
**Stress (mechanics)**
Stress (mechanics):
In continuum mechanics, stress is a physical quantity that describes forces present during deformation. An object being pulled apart, such as a stretched elastic band, is subject to tensile stress and may undergo elongation. An object being pushed together, such as a crumpled sponge, is subject to compressive stress and may undergo shortening. The greater the force and the smaller the cross-sectional area of the body on which it acts, the greater the stress. Stress has units of force per area, such as newtons per square meter (N/m2) or pascal (Pa).
Stress expresses the internal forces that neighbouring particles of a continuous material exert on each other, while strain is the measure of the deformation of the material. For example, when a solid vertical bar is supporting an overhead weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a closed container under pressure, each particle gets pushed against by all the surrounding particles. The container walls and the pressure-inducing surface (such as a piston) push against them in (Newtonian) reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Stress is frequently represented by a lowercase Greek letter sigma (σ).
Stress inside a material may arise by various mechanisms, such as stress applied by external forces to the bulk material (like gravity) or to its surface (like contact forces, external pressure, or friction). Any strain (deformation) of a solid material generates an internal elastic stress, analogous to the reaction force of a spring, that tends to restore the material to its original non-deformed state. In liquids and gases, only deformations that change the volume generate persistent elastic stress. If the deformation changes gradually with time, even in fluids there will usually be some viscous stress opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress.
Significant stress may exist even when deformation is negligible or non-existent (a common assumption when modeling the flow of water). Stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition, or by external electromagnetic fields (as in piezoelectric and magnetostrictive materials).
The relation between mechanical stress, deformation, and the rate of change of deformation can be quite complicated, although a linear approximation may be adequate in practice if the quantities are sufficiently small. Stress that exceeds certain strength limits of the material will result in permanent deformation (such as plastic flow, fracture, cavitation) or even change its crystal structure and chemical composition.
History:
Humans have known about stress inside materials since ancient times. Until the 17th century, this understanding was largely intuitive and empirical, though this did not prevent the development of relatively advanced technologies like the composite bow and glass blowing.Over several millennia, architects and builders in particular, learned how to put together carefully shaped wood beams and stone blocks to withstand, transmit, and distribute stress in the most effective manner, with ingenious devices such as the capitals, arches, cupolas, trusses and the flying buttresses of Gothic cathedrals.
Ancient and medieval architects did develop some geometrical methods and simple formulas to compute the proper sizes of pillars and beams, but the scientific understanding of stress became possible only after the necessary tools were invented in the 17th and 18th centuries: Galileo Galilei's rigorous experimental method, René Descartes's coordinates and analytic geometry, and Newton's laws of motion and equilibrium and calculus of infinitesimals. With those tools, Augustin-Louis Cauchy was able to give the first rigorous and general mathematical model of a deformed elastic body by introducing the notions of stress and strain. Cauchy observed that the force across an imaginary surface was a linear function of its normal vector; and, moreover, that it must be a symmetric function (with zero total momentum).
The understanding of stress in liquids started with Newton, who provided a differential formula for friction forces (shear stress) in parallel laminar flow.
Definition:
Stress is defined as the force across a small boundary per unit area of that boundary, for all orientations of the boundary. Derived from a fundamental physical quantity (force) and a purely geometrical quantity (area), stress is also a fundamental quantity, like velocity, torque or energy, that can be quantified and analyzed without explicit consideration of the nature of the material or of its physical causes.
Following the basic premises of continuum mechanics, stress is a macroscopic concept. Namely, the particles considered in its definition and analysis should be just small enough to be treated as homogeneous in composition and state, but still large enough to ignore quantum effects and the detailed motions of molecules. Thus, the force between two particles is actually the average of a very large number of atomic forces between their molecules; and physical quantities like mass, velocity, and forces that act through the bulk of three-dimensional bodies, like gravity, are assumed to be smoothly distributed over them. Depending on the context, one may also assume that the particles are large enough to allow the averaging out of other microscopic features, like the grains of a metal rod or the fibers of a piece of wood.
Quantitatively, the stress is expressed by the Cauchy traction vector T defined as the traction force F between adjacent parts of the material across an imaginary separating surface S, divided by the area of S. In a fluid at rest the force is perpendicular to the surface, and is the familiar pressure. In a solid, or in a flow of viscous liquid, the force F may not be perpendicular to S; hence the stress across a surface must be regarded as a vector quantity, not a scalar. Moreover, the direction and magnitude generally depend on the orientation of S. Thus the stress state of the material must be described by a tensor, called the (Cauchy) stress tensor; which is a linear function that relates the normal vector n of a surface S to the traction vector T across S. With respect to any chosen coordinate system, the Cauchy stress tensor can be represented as a symmetric matrix of 3×3 real numbers. Even within a homogeneous body, the stress tensor may vary from place to place, and may change over time; therefore, the stress within a material is, in general, a time-varying tensor field.
Normal and shear In general, the stress T that a particle P applies on another particle Q across a surface S can have any direction relative to S. The vector T may be regarded as the sum of two components: the normal stress (compression or tension) perpendicular to the surface, and the shear stress that is parallel to the surface.
If the normal unit vector n of the surface (pointing from Q towards P) is assumed fixed, the normal component can be expressed by a single number, the dot product T · n. This number will be positive if P is "pulling" on Q (tensile stress), and negative if P is "pushing" against Q (compressive stress). The shear component is then the vector T − (T · n)n.
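The decomposition into normal and shear components can be illustrated with a short numerical sketch; the vectors below are made-up values, not from the text:

```python
def dot(a, b):
    # plain dot product of two 3-vectors
    return sum(x * y for x, y in zip(a, b))

n = [0.0, 0.0, 1.0]   # unit normal of the surface S (pointing from Q toward P)
T = [3.0, 0.0, 4.0]   # traction vector across S (illustrative values)

sigma_n = dot(T, n)                               # normal component: T . n
shear = [t - sigma_n * c for t, c in zip(T, n)]   # shear component: T - (T . n) n

assert sigma_n == 4.0              # positive: P is "pulling" on Q (tensile)
assert shear == [3.0, 0.0, 0.0]
assert abs(dot(shear, n)) < 1e-12  # shear component lies in the surface
print(sigma_n, shear)
```

The two asserts confirm the defining property of the split: the normal part carries all of T along n, and the remainder is orthogonal to n.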
Units:
The dimension of stress is that of pressure, and therefore its coordinates are measured in the same units as pressure: namely, pascals (Pa, that is, newtons per square metre) in the International System, or pounds per square inch (psi) in the Imperial system. Because mechanical stresses easily exceed a million pascals, the megapascal (MPa) is a common unit of stress.
Causes and effects:
Stress in a material body may be due to multiple physical causes, including external influences and internal physical processes. Some of these agents (like gravity, changes in temperature and phase, and electromagnetic fields) act on the bulk of the material, varying continuously with position and time. Other agents (like external loads and friction, ambient pressure, and contact forces) may create stresses and forces that are concentrated on certain surfaces, lines or points; and possibly also on very short time intervals (as in the impulses due to collisions). In active matter, self-propulsion of microscopic particles generates macroscopic stress profiles. In general, the stress distribution in a body is expressed as a piecewise continuous function of space and time.
Conversely, stress is usually correlated with various effects on the material, possibly including changes in physical properties like birefringence, polarization, and permeability. The imposition of stress by an external agent usually creates some strain (deformation) in the material, even if it is too small to be detected. In a solid material, such strain will in turn generate an internal elastic stress, analogous to the reaction force of a stretched spring, tending to restore the material to its original undeformed state. Fluid materials (liquids, gases and plasmas) by definition can only oppose deformations that would change their volume. If the deformation changes with time, even in fluids there will usually be some viscous stress, opposing that change. Such stresses can be either shear or normal in nature. The molecular origin of shear stresses in fluids is given in the article on viscosity; the same for normal viscous stresses can be found in Sharma (2019). The relation between stress and its effects and causes, including deformation and rate of change of deformation, can be quite complicated (although a linear approximation may be adequate in practice if the quantities are small enough). Stress that exceeds certain strength limits of the material will result in permanent deformation (such as plastic flow, fracture, cavitation) or even change its crystal structure and chemical composition.
Simple types:
In some situations, the stress within a body may adequately be described by a single number, or by a single vector (a number and a direction). Three such simple stress situations, that are often encountered in engineering design, are the uniaxial normal stress, the simple shear stress, and the isotropic normal stress.
Uniaxial normal A common situation with a simple stress pattern is when a straight rod, with uniform material and cross section, is subjected to tension by opposite forces of magnitude F along its axis. If the system is in equilibrium and not changing with time, and the weight of the bar can be neglected, then through each transversal section of the bar the top part must pull on the bottom part with the same force F, with continuity through the full cross-sectional area A. Therefore, the stress throughout the bar, across any horizontal surface, can be expressed simply by the single number σ = F/A, where F is the magnitude of those forces and A the cross-sectional area. On the other hand, if one imagines the bar being cut along its length, parallel to the axis, there will be no force (hence no stress) between the two halves across the cut.
This type of stress may be called (simple) normal stress or uniaxial stress; specifically, (uniaxial, simple, etc.) tensile stress. If the load is compression on the bar, rather than stretching it, the analysis is the same except that the force F and the stress σ change sign, and the stress is called compressive stress.
This analysis assumes the stress is evenly distributed over the entire cross-section. In practice, depending on how the bar is attached at the ends and how it was manufactured, this assumption may not be valid. In that case, the value σ = F/A will be only the average stress, called engineering stress or nominal stress. If the bar's length L is many times its diameter D, and it has no gross defects or built-in stress, then the stress can be assumed to be uniformly distributed over any cross-section that is more than a few times D from both ends. (This observation is known as Saint-Venant's principle.)
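A quick numeric illustration of the nominal stress σ = F/A; the load and bar dimensions below are invented for the example:

```python
from math import pi

# Engineering (nominal) stress for an axially loaded round bar: sigma = F / A.
F = 10_000.0            # axial force in newtons (illustrative)
D = 0.02                # bar diameter in metres (illustrative)
A = pi * (D / 2) ** 2   # cross-sectional area in m^2

sigma = F / A           # average normal stress in pascals
print(f"{sigma / 1e6:.1f} MPa")  # ~31.8 MPa
```

Note how readily an everyday load lands in the megapascal range, which is why MPa is the customary engineering unit.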
Normal stress occurs in many other situations besides axial tension and compression. If an elastic bar with uniform and symmetric cross-section is bent in one of its planes of symmetry, the resulting bending stress will still be normal (perpendicular to the cross-section), but will vary over the cross section: the outer part will be under tensile stress, while the inner part will be compressed. Another variant of normal stress is the hoop stress that occurs on the walls of a cylindrical pipe or vessel filled with pressurized fluid.
Shear Another simple type of stress occurs when a uniformly thick layer of elastic material like glue or rubber is firmly attached to two stiff bodies that are pulled in opposite directions by forces parallel to the layer; or a section of a soft metal bar that is being cut by the jaws of a scissors-like tool. Let F be the magnitude of those forces, and M be the midplane of that layer. Just as in the normal stress case, the part of the layer on one side of M must pull the other part with the same force F. Assuming that the direction of the forces is known, the stress across M can be expressed simply by the single number τ = F/A, where F is the magnitude of those forces and A the cross-sectional area of the layer. Unlike normal stress, this simple shear stress is directed parallel to the cross-section considered, rather than perpendicular to it. For any plane S that is perpendicular to the layer, the net internal force across S, and hence the stress, will be zero.
As in the case of an axially loaded bar, in practice the shear stress may not be uniformly distributed over the layer; so, as before, the ratio F/A will only be an average ("nominal", "engineering") stress. That average is often sufficient for practical purposes. Shear stress is observed also when a cylindrical bar such as a shaft is subjected to opposite torques at its ends. In that case, the shear stress on each cross-section is parallel to the cross-section, but oriented tangentially relative to the axis, and increases with distance from the axis. Significant shear stress occurs in the middle plate (the "web") of I-beams under bending loads, due to the web constraining the end plates ("flanges").
Isotropic Another simple type of stress occurs when the material body is under equal compression or tension in all directions. This is the case, for example, in a portion of liquid or gas at rest, whether enclosed in some container or as part of a larger mass of fluid; or inside a cube of elastic material that is being pressed or pulled on all six faces by equal perpendicular forces — provided, in both cases, that the material is homogeneous, without built-in stress, and that the effect of gravity and other external forces can be neglected.
In these situations, the stress across any imaginary internal surface turns out to be equal in magnitude and always directed perpendicularly to the surface, independently of the surface's orientation. This type of stress may be called isotropic normal or just isotropic; if it is compressive, it is called hydrostatic pressure or just pressure. Gases by definition cannot withstand tensile stresses, but some liquids may withstand very large amounts of isotropic tensile stress under some circumstances (see Z-tube).
Cylinder Parts with rotational symmetry, such as wheels, axles, pipes, and pillars, are very common in engineering. Often the stress patterns that occur in such parts have rotational or even cylindrical symmetry. The analysis of such cylinder stresses can take advantage of the symmetry to reduce the dimension of the domain and/or of the stress tensor.
General types:
Often, mechanical bodies experience more than one type of stress at the same time; this is called combined stress. In normal and shear stress, the magnitude of the stress is maximum for surfaces that are perpendicular to a certain direction d, and zero across any surfaces that are parallel to d. When the shear stress is zero only across surfaces that are perpendicular to one particular direction, the stress is called biaxial, and can be viewed as the sum of two normal or shear stresses. In the most general case, called triaxial stress, the stress is nonzero across every surface element.
Cauchy tensor:
Combined stresses cannot be described by a single vector. Even if the material is stressed in the same way throughout the volume of the body, the stress across any imaginary surface will depend on the orientation of that surface, in a non-trivial way.
Cauchy observed that the stress vector T across a surface will always be a linear function of the surface's normal vector n, the unit-length vector that is perpendicular to it. That is, $T = \sigma(n)$, where the function σ satisfies $\sigma(\alpha u + \beta v) = \alpha \sigma(u) + \beta \sigma(v)$ for any vectors u, v and any real numbers α, β. The function σ, now called the (Cauchy) stress tensor, completely describes the stress state of a uniformly stressed body. (Today, any linear connection between two physical vector quantities is called a tensor, reflecting Cauchy's original use to describe the "tensions" (stresses) in a material.) In tensor calculus, σ is classified as a second-order tensor of type (0,2).
Like any linear map between vectors, the stress tensor can be represented in any chosen Cartesian coordinate system by a 3×3 matrix of real numbers. Depending on whether the coordinates are numbered $x_1, x_2, x_3$ or named $x, y, z$, the matrix may be written as
$$\sigma = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix} \quad \text{or} \quad \begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{bmatrix}.$$
The stress vector $T = \sigma(n)$ across a surface with normal vector n (a covariant "row" vector) with coordinates $n_1, n_2, n_3$ is then the matrix product $T = n \cdot \sigma$ (see Cauchy stress tensor), that is, componentwise,
$$T_j = \sum_{i=1}^{3} n_i \, \sigma_{ij}.$$
The linear relation between T and n follows from the fundamental laws of conservation of linear momentum and static equilibrium of forces, and is therefore mathematically exact, for any material and any stress situation. The components of the Cauchy stress tensor at every point in a material satisfy the equilibrium equations (Cauchy's equations of motion for zero acceleration). Moreover, the principle of conservation of angular momentum implies that the stress tensor is symmetric, that is $\sigma_{12} = \sigma_{21}$, $\sigma_{13} = \sigma_{31}$, and $\sigma_{23} = \sigma_{32}$. Therefore, the stress state of the medium at any point and instant can be specified by only six independent parameters, rather than nine. These may be written
$$\begin{bmatrix} \sigma_x & \tau_{xy} & \tau_{xz} \\ \tau_{xy} & \sigma_y & \tau_{yz} \\ \tau_{xz} & \tau_{yz} & \sigma_z \end{bmatrix},$$
where the elements $\sigma_x, \sigma_y, \sigma_z$ are called the orthogonal normal stresses (relative to the chosen coordinate system), and $\tau_{xy}, \tau_{xz}, \tau_{yz}$ the orthogonal shear stresses.
Change of coordinates:
The Cauchy stress tensor obeys the tensor transformation law under a change in the system of coordinates. A graphical representation of this transformation law is Mohr's circle of stress distribution.
As a symmetric 3×3 real matrix, the stress tensor σ has three mutually orthogonal unit-length eigenvectors e1, e2, e3 and three real eigenvalues λ1, λ2, λ3, such that σei = λiei. Therefore, in a coordinate system with axes e1, e2, e3, the stress tensor is a diagonal matrix, and has only the three normal components λ1, λ2, λ3, the principal stresses. If the three eigenvalues are equal, the stress is an isotropic compression or tension, always perpendicular to any surface, there is no shear stress, and the tensor is a diagonal matrix in any coordinate frame.
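The eigendecomposition described here can be sketched numerically. The example below uses a made-up symmetric stress tensor; `numpy.linalg.eigh`, intended for symmetric matrices, returns real eigenvalues (the principal stresses, in ascending order) and orthonormal eigenvectors (the principal directions), in whose frame the tensor is diagonal.

```python
import numpy as np

# Hypothetical symmetric stress tensor (units arbitrary).
sigma = np.array([
    [5.0, 2.0, 0.0],
    [2.0, 1.0, 0.0],
    [0.0, 0.0, 4.0],
])

# eigh handles symmetric matrices: real eigenvalues, orthonormal eigenvectors.
principal_stresses, principal_dirs = np.linalg.eigh(sigma)

# Rotating into the principal frame diagonalizes the tensor:
diag = principal_dirs.T @ sigma @ principal_dirs
print(np.round(diag, 10))  # diagonal matrix of principal stresses
```

For this matrix the principal stresses work out to 3 − 2√2, 4, and 3 + 2√2, and the off-diagonal (shear) entries vanish in the principal frame, as the text states.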
Tensor field:
In general, stress is not uniformly distributed over a material body, and may vary with time. Therefore, the stress tensor must be defined for each point and each moment, by considering an infinitesimal particle of the medium surrounding that point, and taking the average stresses in that particle as being the stresses at the point.
Thin plates:
Human-made objects are often made from stock plates of various materials by operations that do not change their essentially two-dimensional character, like cutting, drilling, gentle bending and welding along the edges. The description of stress in such bodies can be simplified by modeling those parts as two-dimensional surfaces rather than three-dimensional bodies.
In that view, one redefines a "particle" as being an infinitesimal patch of the plate's surface, so that the boundary between adjacent particles becomes an infinitesimal line element; both are implicitly extended in the third dimension, normal to (straight through) the plate. "Stress" is then redefined as being a measure of the internal forces between two adjacent "particles" across their common line element, divided by the length of that line. Some components of the stress tensor can be ignored, but since particles are not infinitesimal in the third dimension one can no longer ignore the torque that a particle applies on its neighbors. That torque is modeled as a bending stress that tends to change the curvature of the plate. These simplifications may not hold at welds, at sharp bends and creases (where the radius of curvature is comparable to the thickness of the plate).
Thin beams:
The analysis of stress can also be considerably simplified for thin bars, beams or wires of uniform (or smoothly varying) composition and cross-section that are subjected to moderate bending and twisting. For those bodies, one may consider only cross-sections that are perpendicular to the bar's axis, and redefine a "particle" as being a piece of wire with infinitesimal length between two such cross-sections. The ordinary stress is then reduced to a scalar (tension or compression of the bar), but one must also take into account a bending stress (that tries to change the bar's curvature, in some direction perpendicular to the axis) and a torsional stress (that tries to twist or un-twist it about its axis).
Analysis:
Stress analysis is a branch of applied physics that covers the determination of the distribution of internal forces in solid objects. It is an essential tool in engineering for the study and design of structures such as tunnels, dams, mechanical parts, and structural frames, under prescribed or expected loads. It is also important in many other disciplines; for example, in geology, to study phenomena like plate tectonics, volcanism and avalanches; and in biology, to understand the anatomy of living beings.
Goals and assumptions:
Stress analysis is generally concerned with objects and structures that can be assumed to be in macroscopic static equilibrium. By Newton's laws of motion, any external forces applied to such a system must be balanced by internal reaction forces, which are almost always surface contact forces between adjacent particles — that is, stress. Since every particle needs to be in equilibrium, this reaction stress will generally propagate from particle to particle, creating a stress distribution throughout the body.
The typical problem in stress analysis is to determine these internal stresses, given the external forces that are acting on the system. The latter may be body forces (such as gravity or magnetic attraction), which act throughout the volume of a material, or concentrated loads (such as friction between an axle and a bearing, or the weight of a train wheel on a rail), which are imagined to act over a two-dimensional area, or along a line, or at a single point.
In stress analysis one normally disregards the physical causes of the forces or the precise nature of the materials. Instead, one assumes that the stresses are related to deformation (and, in non-static problems, to the rate of deformation) of the material by known constitutive equations.
Methods:
Stress analysis may be carried out experimentally, by applying loads to the actual artifact or to a scale model and measuring the resulting stresses, by any of several available methods. This approach is often used for safety certification and monitoring. However, most stress analysis is done by mathematical methods, especially during design.
The basic stress analysis problem can be formulated by Euler's equations of motion for continuous bodies (which are consequences of Newton's laws for conservation of linear momentum and angular momentum) and the Euler-Cauchy stress principle, together with the appropriate constitutive equations. Thus one obtains a system of partial differential equations involving the stress tensor field and the strain tensor field, as unknown functions to be determined. The external body forces appear as the independent ("right-hand side") term in the differential equations, while the concentrated forces appear as boundary conditions. The basic stress analysis problem is therefore a boundary-value problem.
Stress analysis for elastic structures is based on the theory of elasticity and infinitesimal strain theory. When the applied loads cause permanent deformation, one must use more complicated constitutive equations that can account for the physical processes involved (plastic flow, fracture, phase change, etc.). Engineered structures are usually designed so that the maximum expected stresses are well within the range of linear elasticity (the generalization of Hooke's law for continuous media); that is, the deformations caused by internal stresses are linearly related to them. In this case the differential equations that define the stress tensor are linear, and the problem becomes much easier. For one thing, the stress at any point will be a linear function of the loads, too. For small enough stresses, even non-linear systems can usually be assumed to be linear.
Stress analysis is simplified when the physical dimensions and the distribution of loads allow the structure to be treated as one- or two-dimensional. In the analysis of trusses, for example, the stress field may be assumed to be uniform and uniaxial over each member. Then the differential equations reduce to a finite set of equations (usually linear) with finitely many unknowns. In other contexts one may be able to reduce the three-dimensional problem to a two-dimensional one, and/or replace the general stress and strain tensors by simpler models like uniaxial tension/compression, simple shear, etc.
Still, for two- or three-dimensional cases one must solve a partial differential equation problem.
Analytical or closed-form solutions to the differential equations can be obtained when the geometry, constitutive relations, and boundary conditions are simple enough. Otherwise one must generally resort to numerical approximations such as the finite element method, the finite difference method, and the boundary element method.
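As a toy illustration of the numerical route, the sketch below (with hypothetical material constants and load, not taken from the text) solves the one-dimensional equilibrium equation for a uniform elastic bar fixed at both ends under a constant axial body load, using a central finite-difference scheme, and compares the result with the closed-form solution. For this quadratic solution the scheme is exact up to roundoff.

```python
import numpy as np

# Assumed setup: uniform bar of length L fixed at both ends, constant axial
# body load f. Equilibrium reduces to the boundary-value problem
#     E*A * u''(x) = -f,   u(0) = u(L) = 0.
E, A, L, f = 200e9, 1e-4, 2.0, 1e3   # hypothetical steel bar, SI units
n = 50                                # number of interior grid points
h = L / (n + 1)
x = np.linspace(h, L - h, n)

# Central differences give the tridiagonal system
#     u_{i-1} - 2 u_i + u_{i+1} = -h^2 f / (E*A),  with u_0 = u_{n+1} = 0.
K = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
b = np.full(n, -h**2 * f / (E * A))
u = np.linalg.solve(K, b)

# Closed-form solution for this load case, for comparison.
u_exact = f / (2 * E * A) * x * (L - x)
print(np.max(np.abs(u - u_exact)))   # ~0: the scheme is exact for quadratics
```

Real stress-analysis codes (finite element packages) assemble much larger sparse systems of exactly this kind from the governing partial differential equations.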
Measurement:
Other useful stress measures include the first and second Piola–Kirchhoff stress tensors, the Biot stress tensor, and the Kirchhoff stress tensor.
Piola–Kirchhoff tensor:
In the case of finite deformations, the Piola–Kirchhoff stress tensors express the stress relative to the reference configuration. This is in contrast to the Cauchy stress tensor, which expresses the stress relative to the present configuration. For infinitesimal deformations and rotations, the Cauchy and Piola–Kirchhoff tensors are identical.
Whereas the Cauchy stress tensor σ relates stresses in the current configuration, the deformation gradient and strain tensors are described by relating the motion to the reference configuration; thus not all tensors describing the state of the material are in either the reference or the current configuration. Describing the stress, strain and deformation either in the reference or in the current configuration would make it easier to define constitutive models (for example, the Cauchy stress tensor varies under a pure rotation, while the deformation strain tensor is invariant; this creates problems in defining a constitutive model that relates a varying tensor to an invariant one, since by definition constitutive models have to be invariant to pure rotations). The first Piola–Kirchhoff stress tensor, P, is one possible solution to this problem. It defines a family of tensors which describe the configuration of the body in either the current or the reference state.
The first Piola–Kirchhoff stress tensor, P, relates forces in the present ("spatial") configuration with areas in the reference ("material") configuration.
P = JσF−T, where F is the deformation gradient and J = det F is the Jacobian determinant.
In terms of components with respect to an orthonormal basis, the first Piola–Kirchhoff stress is given by

    P_iL = J σ_ik (F^−1)_Lk = J σ_ik ∂X_L/∂x_k

Because it relates different coordinate systems, the first Piola–Kirchhoff stress is a two-point tensor. In general, it is not symmetric. The first Piola–Kirchhoff stress is the 3D generalization of the 1D concept of engineering stress.
If the material rotates without a change in stress state (rigid rotation), the components of the first Piola–Kirchhoff stress tensor will vary with material orientation.
The first Piola–Kirchhoff stress is energy conjugate to the deformation gradient.
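The definition P = JσF−T, and the lack of symmetry just noted, can be checked numerically. The numbers below are purely illustrative: a simple-shear deformation gradient F and a hypothetical Cauchy stress σ in the current configuration.

```python
import numpy as np

# Illustrative simple-shear deformation gradient (volume preserving, so J = 1).
gamma = 0.3
F = np.array([
    [1.0, gamma, 0.0],
    [0.0, 1.0,   0.0],
    [0.0, 0.0,   1.0],
])
J = np.linalg.det(F)

# Hypothetical (symmetric) Cauchy stress in the current configuration.
sigma = np.array([
    [1.0, 0.4, 0.0],
    [0.4, 2.0, 0.0],
    [0.0, 0.0, 0.5],
])

# First Piola–Kirchhoff stress: P = J * sigma * F^{-T}
P = J * sigma @ np.linalg.inv(F).T
print(np.allclose(P, P.T))   # False: P is generally not symmetric
```

Even though σ is symmetric, P picks up the asymmetry of F, consistent with its two-point character.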
Whereas the first Piola–Kirchhoff stress relates forces in the current configuration to areas in the reference configuration, the second Piola–Kirchhoff stress tensor S relates forces in the reference configuration to areas in the reference configuration. The force in the reference configuration is obtained via a mapping that preserves the relative relationship between the force direction and the area normal in the reference configuration.
S=JF−1⋅σ⋅F−T.
In index notation with respect to an orthonormal basis,

    S_IL = J (F^−1)_Ik (F^−1)_Lm σ_km = J (∂X_I/∂x_k)(∂X_L/∂x_m) σ_km

This tensor, a one-point tensor, is symmetric.
If the material rotates without a change in stress state (rigid rotation), the components of the second Piola–Kirchhoff stress tensor remain constant, irrespective of material orientation.
The second Piola–Kirchhoff stress tensor is energy conjugate to the Green–Lagrange finite strain tensor.
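A companion numerical sketch for the second Piola–Kirchhoff tensor, again with made-up numbers, verifies the two properties stated above: S = JF−1·σ·F−T is symmetric, and it is unchanged when the current configuration undergoes a rigid rotation (F → RF, σ → RσR^T).

```python
import numpy as np

# Illustrative simple-shear state (hypothetical numbers, self-contained).
gamma = 0.3
F = np.array([[1.0, gamma, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
sigma = np.array([[1.0, 0.4, 0.0], [0.4, 2.0, 0.0], [0.0, 0.0, 0.5]])
J = np.linalg.det(F)
Finv = np.linalg.inv(F)

# Second Piola–Kirchhoff stress: S = J * F^{-1} . sigma . F^{-T}
S = J * Finv @ sigma @ Finv.T
print(np.allclose(S, S.T))   # True: S is symmetric

# Rigidly rotate the current configuration: F -> R F, sigma -> R sigma R^T.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
F2 = R @ F
sigma2 = R @ sigma @ R.T
S2 = np.linalg.det(F2) * np.linalg.inv(F2) @ sigma2 @ np.linalg.inv(F2).T
print(np.allclose(S, S2))    # True: S is invariant under the rotation
```

Algebraically, the rotation cancels: (RF)^−1 R σ R^T (RF)^−T = F^−1 σ F^−T, which is why S is a convenient quantity for constitutive models.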
**Microearthquake**
Microearthquake:
A microearthquake (or microquake) is a very low-intensity earthquake of magnitude 2.0 or less. They are very rarely felt beyond 8 km (5 mi) from their epicenter. In addition to having natural tectonic causes, they may also occur as a result of underground nuclear testing or even large detonations of conventional explosives for producing excavations. They normally cause no damage to life or property, and are very rarely felt by people. Microquakes occur often near volcanoes as they approach an eruption, and frequently in certain regions exploited for geothermal energy, such as near Geyserville in Northern California. These occur so continuously that the current USGS event map for that location usually shows a substantial number of small earthquakes there.
**Archaeogenetics**
Archaeogenetics:
Archaeogenetics is the study of ancient DNA using various molecular genetic methods and DNA resources. This form of genetic analysis can be applied to human, animal, and plant specimens. Ancient DNA can be extracted from various fossilized specimens including bones, eggshells, and artificially preserved tissues in human and animal specimens. In plants, ancient DNA can be extracted from seeds and tissue. Archaeogenetics provides us with genetic evidence of ancient population group migrations, domestication events, and plant and animal evolution. Cross-referencing ancient DNA with the DNA of related modern genetic populations allows researchers to run comparison studies that provide a more complete analysis when ancient DNA is compromised. Archaeogenetics receives its name from the Greek word arkhaios, meaning "ancient", and the term genetics, meaning "the study of heredity". The term archaeogenetics was conceived by the archaeologist Colin Renfrew. In February 2021, scientists reported that the oldest DNA ever sequenced had been successfully retrieved from a mammoth dating back over a million years.
Early work:
Ludwik Hirszfeld (1884–1954):
Ludwik Hirszfeld was a Polish microbiologist and serologist who was the President of the Blood Group Section of the Second International Congress of Blood Transfusion. He founded the study of blood group inheritance with Erich von Dungern in 1910, and contributed to it greatly throughout his life. He studied ABO blood groups. In one of his studies in 1919, Hirszfeld documented the ABO blood groups and hair color of people at the Macedonian front, leading to his discovery that hair color and blood type had no correlation. In addition, he observed that there was a decrease of blood group A from western Europe to India, and the opposite for blood group B. He hypothesized that the east-to-west blood group ratio stemmed from two blood groups, consisting of mainly A or B, mutating from blood group O and mixing through migration or intermingling. A majority of his work was researching the links of blood types to sex, disease, climate, age, social class, and race. His work led him to discover that peptic ulcer was more dominant in blood group O, and that AB blood type mothers had a high male-to-female birth ratio.
Arthur Mourant (1904–1994):
Arthur Mourant was a British hematologist and chemist. He received many awards, most notably Fellowship of the Royal Society. His work included organizing the existing data on blood group gene frequencies, and largely contributing to the genetic map of the world through his investigation of blood groups in many populations. Mourant discovered the new blood group antigens of the Lewis, Henshaw, Kell, and Rhesus systems, and analyzed the association of blood groups with various other diseases. He also focused on the biological significance of polymorphisms. His work provided the foundation for archaeogenetics because it facilitated the separation of genetic evidence for biological relationships between people. This genetic evidence was previously used for that purpose. It also provided material that could be used to appraise the theories of population genetics.
William Boyd (1903–1983):
William Boyd was an American immunochemist and biochemist who became famous for his research on the genetics of race in the 1950s. During the 1940s, Boyd and Karl O. Renkonen independently discovered that lectins react differently to various blood types, after finding that crude extracts of the lima bean and tufted vetch agglutinated the red blood cells from blood type A but not blood types B or O. This ultimately led to the discovery of thousands of plants that contained these proteins. In order to examine racial differences and the distribution and migration patterns of various racial groups, Boyd systematically collected and classified blood samples from around the world, leading to his discovery that blood groups are not influenced by the environment and are inherited. In his book Genetics and the Races of Man (1950), Boyd categorized the world population into 13 distinct races, based on their different blood type profiles and his idea that human races are populations with differing alleles. The study of blood groups remains one of the most abundant information sources regarding inheritable traits linked to race.
Methods:
Fossil DNA preservation:
Fossil retrieval starts with selecting an excavation site. Potential excavation sites are usually identified with the mineralogy of the location and visual detection of bones in the area. However, there are more ways to discover excavation zones using technology, such as field portable X-ray fluorescence and Dense Stereo Reconstruction. Tools used include knives, brushes, and pointed trowels, which assist in the removal of fossils from the earth. To avoid contaminating the ancient DNA, specimens are handled with gloves and stored at −20 °C immediately after being unearthed. Ensuring that the fossil sample is analyzed in a lab that has not been used for other DNA analysis can prevent contamination as well. Bones are milled to a powder and treated with a solution before the polymerase chain reaction (PCR) process. Samples for DNA amplification need not be fossil bones; preserved skin, salt-preserved or air-dried, can also be used in certain situations. DNA preservation is difficult because bone degrades during fossilisation and the DNA is chemically modified, usually by bacteria and fungi in the soil. The best time to extract DNA from a fossil is when it is freshly out of the ground, as it then contains six times the DNA of stored bones. The temperature of the extraction site also affects the amount of obtainable DNA, evident in a decreased success rate for DNA amplification when the fossil is found in warmer regions. A drastic change in a fossil's environment also affects DNA preservation: since excavation causes an abrupt change in the fossil's environment, it may lead to physiochemical change in the DNA molecule. DNA preservation is also affected by other factors, such as the treatment of the unearthed fossil (e.g. washing, brushing and sun drying), pH, irradiation, the chemical composition of bone and soil, and hydrology. There are three diagenetic phases of preservation.
The first phase is bacterial putrefaction, which is estimated to cause a 15-fold degradation of DNA. Phase 2 is when bone chemically degrades, mostly by depurination. The third diagenetic phase occurs after the fossil is excavated and stored, in which bone DNA degradation occurs most rapidly.
Methods of DNA extraction:
Once a specimen is collected from an archaeological site, DNA can be extracted through a series of processes. One of the more common methods utilizes silica and takes advantage of polymerase chain reactions in order to collect ancient DNA from bone samples. There are several challenges that add to the difficulty when attempting to extract ancient DNA from fossils and prepare it for analysis. DNA is continuously being split up: while the organism is alive these splits are repaired, but once an organism has died the DNA will deteriorate without repair. This results in samples having strands of DNA measuring around 100 base pairs in length. Contamination is another significant challenge at multiple steps throughout the process. Often other DNA, such as bacterial DNA, will be present in the original sample. To avoid contamination it is necessary to take many precautions, such as separate ventilation systems and workspaces for ancient DNA extraction work. The best samples to use are fresh fossils, as careless washing can lead to mold growth. DNA coming from fossils also occasionally contains a compound that inhibits DNA replication. Coming to a consensus on which methods best mitigate these challenges is also difficult, due to the lack of repeatability caused by the uniqueness of specimens. Silica-based DNA extraction is a method used as a purification step to extract DNA from archaeological bone artifacts and yield DNA that can be amplified using polymerase chain reaction (PCR) techniques. This process works by using silica as a means to bind DNA and separate it from other components of the fossil that inhibit PCR amplification. However, silica itself is also a strong PCR inhibitor, so careful measures must be taken to ensure that silica is removed from the DNA after extraction.
The general process for extracting DNA using the silica-based method is outlined by the following:

1. The bone specimen is cleaned and the outer layer is scraped off.
2. A sample is collected, preferably from a compact section.
3. The sample is ground to a fine powder and added to an extraction solution to release the DNA.
4. A silica solution is added and centrifuged to facilitate DNA binding.
5. The binding solution is removed and a buffer is added to release the DNA from the silica.

One of the main advantages of silica-based DNA extraction is that it is relatively quick and efficient, requiring only a basic laboratory setup and chemicals. It is also independent of sample size, as the process can be scaled to accommodate larger or smaller quantities. Another benefit is that the process can be executed at room temperature. However, this method does have some drawbacks. Mainly, silica-based DNA extraction can only be applied to bone and teeth samples; it cannot be used on soft tissue. While it works well with a variety of different fossils, it may be less effective for fossils that are not fresh (e.g. fossils treated for museums). Also, contamination poses a risk for all DNA replication in general, and this method may produce misleading results if applied to contaminated material.

Polymerase chain reaction is a process that can amplify segments of DNA and is often used on extracted ancient DNA. It has three main steps: denaturation, annealing, and extension. Denaturation splits the DNA into two single strands at high temperatures. Annealing involves attaching primer strands of DNA to the single strands, allowing Taq polymerase to attach to the DNA. Extension occurs when Taq polymerase is added to the sample and matches base pairs to turn the two single strands into two complete double strands. This process is repeated many times, and is usually repeated a greater number of times when used with ancient DNA.
Some issues with PCR are that it requires overlapping primer pairs for ancient DNA due to the short sequences, and that "jumping PCR" can cause recombination during the PCR process, which can make analyzing the DNA more difficult in inhomogeneous samples.
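The exponential character of the cycling described above is simple arithmetic: each cycle at best doubles the number of template copies. The sketch below uses made-up numbers; the `efficiency` parameter is a common modeling convention (not from the text) for imperfect annealing or extension, which is typical for degraded ancient DNA.

```python
# Minimal sketch of PCR amplification arithmetic (illustrative numbers only).
# Each denaturation/annealing/extension cycle at best doubles the number of
# template copies; efficiency below 1.0 models imperfect cycles.

def pcr_copies(initial_copies: float, cycles: int, efficiency: float = 1.0) -> float:
    """Expected copy number after `cycles` rounds of PCR."""
    return initial_copies * (1.0 + efficiency) ** cycles

# Perfect doubling: 10 template molecules, 30 cycles -> 10 * 2**30 copies.
print(pcr_copies(10, 30))

# Degraded ancient DNA often amplifies less efficiently (assumed 70% here),
# which is one reason more cycles are run for ancient samples.
print(pcr_copies(10, 30, efficiency=0.7))
```

This is why even a handful of surviving template molecules can yield enough material to sequence after a few dozen cycles.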
Methods of DNA analysis:
DNA extracted from fossil remains is primarily sequenced using massively parallel sequencing, which allows simultaneous amplification and sequencing of all DNA segments in a sample, even when it is highly fragmented and of low concentration. It involves attaching a generic sequence to every single strand that generic primers can bond to, so that all of the DNA present is amplified. This is generally more costly and time-intensive than PCR, but due to the difficulties involved in ancient DNA amplification it ends up cheaper and more efficient. One method of massively parallel sequencing, developed by Margulies et al., employs bead-based emulsion PCR and pyrosequencing, and was found to be powerful in analyses of aDNA because it avoids potential loss of sample, substrate competition for templates, and error propagation in replication. The most common way to analyze an aDNA sequence is to compare it with a known sequence from other sources, and this can be done in different ways for different purposes.
The identity of the fossil remains can be uncovered by comparing their DNA sequence with those of known species using software such as BLASTN. This archaeogenetic approach is especially helpful when the morphology of the fossil is ambiguous. Apart from that, species identification can also be done by finding specific genetic markers in an aDNA sequence. For example, the American indigenous population is characterized by specific mitochondrial RFLPs and deletions defined by Wallace et al. Comparative aDNA studies can also reveal the evolutionary relationship between two species. The number of base differences between the DNA of an ancient species and that of a closely related extant species can be used to estimate the divergence time of those two species from their last common ancestor. The phylogeny of some extinct species, such as Australian marsupial wolves and American ground sloths, has been constructed by this method. Mitochondrial DNA in animals and chloroplast DNA in plants are usually used for this purpose because they have hundreds of copies per cell and thus are more easily accessible in ancient fossils. Another method to investigate the relationship between two species is through DNA hybridization. Single-stranded DNA segments of both species are allowed to form complementary pair bonding with each other. More closely related species have a more similar genetic makeup, and thus a stronger hybridization signal. Scholz et al. conducted Southern blot hybridization on Neanderthal aDNA (extracted from fossil remains W-NW and Krapina). The results showed weak ancient human–Neanderthal hybridization and strong ancient human–modern human hybridization. The human–chimpanzee and Neanderthal–chimpanzee hybridizations are of similarly weak strength.
This suggests that humans and Neanderthals are not as closely related as two individuals of the same species are, but that they are more related to each other than to chimpanzees. There have also been attempts to decipher aDNA to provide valuable phenotypic information about ancient species. This is typically done by mapping an aDNA sequence onto the karyotype of a well-studied, closely related species that shares many similar phenotypic traits. For example, Green et al. compared the aDNA sequence from the Neanderthal Vi-80 fossil with modern human X and Y chromosome sequences, and found similarities of 2.18 and 1.62 bases per 10,000 respectively, suggesting the Vi-80 sample was from a male individual. Other similar studies include the finding of a mutation associated with dwarfism in Arabidopsis in ancient Nubian cotton, and investigation of the bitter taste perception locus in Neanderthals.
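The base-difference comparison used for divergence estimates can be sketched in a few lines. Everything below is illustrative: the sequences are made up, and the substitution rate `mu` is a hypothetical constant (real analyses use calibrated rates and correction models rather than this naive proportion).

```python
# Sketch of the base-difference comparison: the proportion of differing
# sites (p-distance) between two aligned sequences, and a naive
# divergence-time estimate under an assumed constant substitution rate.

def p_distance(seq_a: str, seq_b: str) -> float:
    """Proportion of aligned sites at which the two sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / len(seq_a)

ancient = "ACGTACGTACGTACGTACGT"
modern  = "ACGTACCTACGTACGAACGT"   # 2 differences out of 20 aligned sites

d = p_distance(ancient, modern)
print(d)  # 0.1

# With an assumed rate mu (substitutions per site per million years) on each
# lineage, divergence time is roughly t = d / (2 * mu), since both lineages
# accumulate changes since the last common ancestor.
mu = 0.01  # hypothetical rate
print(d / (2 * mu), "million years")
```

The factor of two reflects that both the ancient and the extant lineage have been accumulating substitutions independently since their split.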
Applications:
Human archaeology:
Africa:
Modern humans are thought to have evolved in Africa at least 200 kya (thousand years ago), with some evidence suggesting a date of over 300 kya. Examination of mitochondrial DNA (mtDNA), Y-chromosome DNA, and X-chromosome DNA indicates that the earliest population to leave Africa consisted of approximately 1500 males and females. It has been suggested by various studies that populations were geographically “structured” to some degree prior to the expansion out of Africa; this is suggested by the antiquity of shared mtDNA lineages. One study of 121 populations from various places throughout the continent found 14 genetic and linguistic “clusters,” suggesting an ancient geographic structure to African populations. In general, genotypic and phenotypic analyses have shown that African populations were “large and subdivided throughout much of their evolutionary history.” Genetic analysis has supported archaeological hypotheses of a large-scale migration of Bantu speakers into Southern Africa approximately 5 kya. Microsatellite DNA, single nucleotide polymorphisms (SNPs), and insertion/deletion polymorphisms (INDELs) have shown that Nilo-Saharan speaking populations originate from Sudan. Furthermore, there is genetic evidence that Chadic-speaking descendants of Nilo-Saharan speakers migrated from Sudan to Lake Chad about 8 kya. Genetic evidence has also indicated that non-African populations made significant contributions to the African gene pool. For example, the Saharan African Beja people have high levels of Middle Eastern as well as East African Cushitic DNA.
Europe:
Analysis of mtDNA shows that modern humans occupied Eurasia in a single migratory event between 60 and 70 kya. Genetic evidence shows that occupation of the Near East and Europe happened no earlier than 50 kya. Studying haplogroup U has shown separate dispersals from the Near East both into Europe and into North Africa. Much of the work done in archaeogenetics focuses on the Neolithic transition in Europe. Cavalli-Sforza's analysis of genetic-geographic patterns led him to conclude that there was a massive influx of Near Eastern populations into Europe at the start of the Neolithic. This view led him “to strongly emphasize the expanding early farmers at the expense of the indigenous Mesolithic foraging populations.” mtDNA analysis in the 1990s, however, contradicted this view. M.B. Richards estimated that 10–22% of extant European mtDNAs had come from Near Eastern populations during the Neolithic. Most mtDNAs were “already established” among existing Mesolithic and Paleolithic groups. Most “control-region lineages” of modern European mtDNA are traced to a founder event of reoccupying northern Europe towards the end of the Last Glacial Maximum (LGM). One study of extant European mtDNAs suggests this reoccupation occurred after the end of the LGM, although another suggests it occurred before. Analysis of haplogroups V, H, and U5 supports a “pioneer colonization” model of European occupation, with incorporation of foraging populations into arriving Neolithic populations. Furthermore, analysis of ancient DNA, not just extant DNA, is shedding light on some issues. For instance, comparison of Neolithic and Mesolithic DNA has indicated that the development of dairying preceded widespread lactose tolerance.
South Asia:
South Asia has served as the major early corridor for the geographical dispersal of modern humans out of Africa. Based on studies of mtDNA lineage M, some have suggested that the first occupants of India were Austro-Asiatic speakers who entered about 45–60 kya. The Indian gene pool has contributions from the earliest settlers, as well as from West Asian and Central Asian populations from migrations no earlier than 8 kya. The lack of variation in mtDNA lineages compared to the Y-chromosome lineages indicates that primarily males partook in these migrations. The discovery of two subbranches, U2i and U2e, of the U mtDNA lineage, which arose in Central Asia, has “modulated” views of a large migration from Central Asia into India, as the two branches diverged 50 kya. Furthermore, U2e is found in large percentages in Europe but not India, and vice versa for U2i, implying U2i is native to India.
East Asia:
Analysis of mtDNA and NRY (non-recombining region of the Y chromosome) sequences has indicated that the first major dispersal out of Africa went through Saudi Arabia and the Indian coast 50–100 kya, and that a second major dispersal occurred 15–50 kya north of the Himalayas. Much work has been done to discover the extent of north-to-south and south-to-north migrations within East Asia. Comparing the genetic diversity of northeastern groups with southeastern groups has allowed archaeologists to conclude that many of the northeast Asian groups came from the southeast. The Pan-Asian SNP (single nucleotide polymorphism) study found “a strong and highly significant correlation between haplotype diversity and latitude,” which, when coupled with demographic analysis, supports the case for a primarily south-to-north occupation of East Asia. Archaeogenetics has also been used to study hunter-gatherer populations in the region, such as the Ainu from Japan and Negrito groups in the Philippines. For example, the Pan-Asian SNP study found that Negrito populations in Malaysia and Negrito populations in the Philippines were more closely related to non-Negrito local populations than to each other, suggesting Negrito and non-Negrito populations are linked by one entry event into East Asia; although other Negrito groups do share affinities, including with Indigenous Australians. A possible explanation is recent admixture of some Negrito groups with their local populations.
Applications:
Americas: Archaeogenetics has been used to better understand the populating of the Americas from Asia. Native American mtDNA haplogroups have been estimated to be between 15 and 20 kya old, although there is some variation in these estimates. Genetic data have been used to propose various theories regarding how the Americas were colonized. Although the most widely held theory suggests “three waves” of migration after the LGM through the Bering Strait, genetic data have given rise to alternative hypotheses. For example, one hypothesis proposes a migration from Siberia to South America 20–15 kya and a second migration that occurred after glacial recession. Y-chromosome data have led some to hold that there was a single migration starting from the Altai Mountains of Siberia between 17.2 and 10.1 kya, after the LGM. Analysis of both mtDNA and Y-chromosome DNA reveals evidence of “small, founding populations.” Studying haplogroups has led some scientists to conclude that a southern migration into the Americas from one small population was impossible, although separate analysis has found that such a model is feasible if the migration happened along the coasts.
Applications:
Australia and New Guinea: Finally, archaeogenetics has been used to study the occupation of Australia and New Guinea. The Indigenous people of Australia and New Guinea are phenotypically very similar, but mtDNA has shown that this is due to convergence from living in similar conditions. Non-coding regions of mtDNA have shown “no similarities” between the aboriginal populations of Australia and New Guinea. Furthermore, no major NRY lineages are shared between the two populations. The high frequency of a single NRY lineage unique to Australia, coupled with “low diversity of lineage-associated Y-chromosomal short tandem repeat (Y-STR) haplotypes,” provides evidence for a “recent founder or bottleneck” event in Australia. But there is relatively large variation in mtDNA, which would imply that the bottleneck effect impacted males primarily. Together, NRY and mtDNA studies show that the splitting event between the two groups was over 50 kya, casting doubt on recent common ancestry.
Applications:
Plants and animals: Archaeogenetics has been used to understand how plants and animals were domesticated.
Applications:
Domestication of plants: The combination of genetics and archaeological findings has been used to trace the earliest signs of plant domestication around the world. However, because the nuclear, mitochondrial, and chloroplast genomes used to trace domestication's moment of origin have evolved at different rates, their use to trace genealogy has been somewhat problematic. Nuclear DNA in particular is used over mitochondrial and chloroplast DNA because of its faster mutation rate and its intraspecific variation, owing to a higher consistency of polymorphic genetic markers. Findings in crop ‘domestication genes’ (traits that were specifically selected for or against) include:
- tb1 (teosinte branched1) – affecting apical dominance in maize
- tga1 (teosinte glume architecture1) – making maize kernels more convenient for humans
- te1 (Terminal ear1) – affecting the weight of kernels
- fw2.2 – affecting fruit weight in tomatoes
- BoCal – affecting the inflorescence of broccoli and cauliflower

Through the study of archaeogenetics in plant domestication, signs of the first global economy can also be uncovered. The geographical distribution of new crops highly selected in one region but found in another, where they would not originally have been introduced, serves as evidence of a trading network for the production and consumption of readily available resources.
Applications:
Domestication of animals: Archaeogenetics has been used to study the domestication of animals. By analyzing genetic diversity in domesticated animal populations, researchers can search for genetic markers in DNA to give valuable insight into possible traits of progenitor species. These traits are then used to help distinguish archaeological remains of wild and domesticated specimens. Genetic studies can also lead to the identification of ancestors of domesticated animals, and the information gained from genetic studies of current populations helps guide the archaeologist's search for documenting these ancestors.

Archaeogenetics has been used to trace the domestication of pigs throughout the Old World. These studies also reveal evidence about the details of early farmers. Methods of archaeogenetics have also been used to further understand the development of the domestication of dogs. Genetic studies have shown that all dogs are descendants of the gray wolf; however, it is currently unknown when, where, and how many times dogs were domesticated. Some genetic studies have indicated multiple domestications while others have not. Archaeological findings help clarify this complicated past by providing solid evidence about the progression of dog domestication. As early humans domesticated dogs, the archaeological remains of buried dogs became increasingly abundant. Not only does this provide more opportunities for archaeologists to study the remains, it also provides clues about early human culture.
**Porphyritic**
Porphyritic:
Porphyritic is an adjective used in geology to describe igneous rocks with a distinct difference in the size of mineral crystals, the larger crystals being known as phenocrysts. Both extrusive and intrusive rocks can be porphyritic, meaning all types of igneous rocks can display some degree of porphyritic texture. Most porphyritic rocks have bimodal size ranges, meaning the rock is composed of two distinct sizes of crystal.

In extrusive rocks, the phenocrysts are surrounded by a fine-grained (aphanitic) matrix or groundmass of volcanic glass or non-visible crystals, as commonly seen in porphyritic basalt. Porphyritic intrusive rocks have a matrix whose individual crystals are easily distinguished with the eye, but with one group of crystals clearly much bigger than the rest, as in a porphyritic granite.
Porphyritic:
The term comes from the Ancient Greek πορφύρα (porphyra), meaning "purple". Purple was the color of royalty, and the "imperial porphyry" was a deep purple igneous rock with large crystals of plagioclase, prized for monuments and building projects due to its hardness. Subsequently, the name was adapted to describe any igneous rocks with a similar texture.
Formation:
Porphyritic rocks are a product of igneous differentiation and are generally formed when a column of rising magma cools in two stages. In the first stage, the magma cools slowly deep in the crust, creating the large crystal grains, with a diameter of 2 mm or more. In the final stage, the magma cools rapidly at relatively shallow depth or as it erupts from a volcano, creating small grains that are usually invisible to the unaided eye, typically referred to as the matrix or groundmass.

The formation of large phenocrysts is due to fractional crystallization. As the melt cools, it begins by crystallizing the highest-melting-point minerals closest to the overall composition. This forms large, well-shaped euhedral phenocrysts. If these phenocrysts differ in density from the remaining melt, they usually settle out of solution, eventually creating cumulates. However, when this is interrupted by a sudden eruption of the melt as lava, or when the density of the crystals and the remaining melt remains similar, they become entrapped in the final rock.

This can also occur when the chemical composition of the remaining melt is close to the eutectic point as it cools, resulting in multiple different minerals solidifying at once and filling the remaining space simultaneously, which limits their size and shape.
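The crystal settling described above is often approximated by Stokes' law for a small sphere sinking in a viscous fluid, v = 2gr²(ρ_crystal − ρ_melt)/(9η). A sketch with assumed, order-of-magnitude values (the densities and viscosity below are illustrative, not measurements):

```python
def stokes_settling_velocity(radius_m, rho_crystal, rho_melt, viscosity_pa_s, g=9.81):
    """Stokes' law terminal velocity for a small sphere in a viscous melt:
    v = 2 g r^2 (rho_crystal - rho_melt) / (9 * viscosity)."""
    return 2 * g * radius_m**2 * (rho_crystal - rho_melt) / (9 * viscosity_pa_s)

# Assumed values: a 1 mm crystal (~3300 kg/m^3, e.g. olivine) sinking in a
# basaltic melt (~2700 kg/m^3, viscosity ~100 Pa s).
v = stokes_settling_velocity(1e-3, 3300, 2700, 100)
print(f"{v * 86400:.2f} m/day")  # roughly a metre per day
```

At roughly a metre per day, dense phenocrysts can plausibly segregate into cumulates over geological timescales unless an eruption freezes them in place first.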
**Coordination isomerism**
Coordination isomerism:
Coordination isomerism is a form of structural isomerism in which the composition of the coordination complex ion varies. In a coordination isomer the total ratio of ligand to metal remains the same, but the ligands attached to a specific metal ion change. Examples of a complete series of coordination isomers require at least two metal ions and sometimes more.
For example, a solution containing [Co(NH3)6]3+ and [Cr(CN)6]3− is a coordination isomer of a solution containing [Cr(NH3)6]3+ and [Co(CN)6]3−.
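The full series between those two end-members can be enumerated mechanically by redistributing the six NH3 and six CN− ligands while keeping each metal six-coordinate. A sketch (the helper functions and plain-text formula notation are invented for illustration):

```python
def _fmt(metal, nh3, cn):
    # Plain-text complex formula; zero-count ligands are omitted.
    parts = [f"[{metal}"]
    if nh3:
        parts.append(f"(NH3){nh3}")
    if cn:
        parts.append(f"(CN){cn}")
    return "".join(parts) + "]"

def coordination_isomers(metal_a="Co", metal_b="Cr", n=6):
    """Enumerate the series of coordination isomers for two n-coordinate
    metals sharing n NH3 and n CN- ligands. k counts the CN- ligands
    placed on metal_a; metal_b receives the complement."""
    return [(_fmt(metal_a, n - k, k), _fmt(metal_b, k, n - k))
            for k in range(n + 1)]

for a, b in coordination_isomers():
    print(a, b)  # from [Co(NH3)6] [Cr(CN)6] down to [Co(CN)6] [Cr(NH3)6]
```

Note the overall ligand-to-metal ratio is identical in every member of the series, which is exactly what makes them coordination isomers rather than different compounds.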
**Bodystorming**
Bodystorming:
Bodystorming is a technique sometimes used in interaction design or as a creativity technique. It has also been cited as catalyzing scientific research when used as a modeling tool. The idea is to imagine what it would be like if the product existed, and to act as though it exists, ideally in the place where it would be used: working through an idea with improvised artifacts and physical activities to envision a solution. This User Experience Design (UXD) technique is ideal for designing physical spaces (e.g. the interior design of a shop) but can also be used to design physical products or software.
Use in Scientific Research:
American dance company Black Label Movement's artistic director Carl Flink created a bodystorming system with University of Minnesota biomedical engineer David Odde in 2009 as part of their Moving Cell Project, funded by the university's Institute for Advanced Study. The system initially brought dance artists and scientists together, including Dance Your PhD founder John Bohannon, who first applied the term "bodystorming" to this method, in order to rapidly prototype research hypotheses in biomedical engineering using choreographic rules for participants to follow. As a technique for scientists and dancers to model scientific theories, it has been credited with catalyzing scientific research and with giving participants the “psychological sense of what it would be like to be a molecule". Bodystorming sessions have been held at the 2018 Neuro-Oncology Symposium as well as the PSON Annual Investigators Meeting (2019), allowing scientists to use the bodystorming system to model their current research. It also “offers new opportunities to learn, teach, and drive new discoveries across disciplinary boundaries.” Subsequently, research scientists have found that the method not only “builds awareness of science” but also treats the body as “not just a site of knowledge but also a medium of communication.” A typical bodystorming session poses scientific questions, then “provides visual information on why a model works or fails and streamlines the process of selecting a successful model.”
Opinions on this method:
Proponents of this method like to point out that you get up and move, trying things out with your own body, rather than just sitting around a table and discussing the idea while having to imagine it in the abstract (as in the case of brainstorming). It is a proper user-centered design method, since it can be carried out by the designers as well as by the users of the final product.
**ISO 860**
ISO 860:
ISO 860 Terminology work – Harmonization of concepts and terms is an ISO standard that deals with the principles on which concept systems can be harmonized and with the development of harmonized terminologies, in order to improve efficiency in interlinguistic communication.
This standard specifies a methodology for the harmonization of concepts, definitions, terms, concept systems, and term systems. It is a natural extension of ISO 704.
ISO 860:
The standard addresses two types of harmonization: concept harmonization and term harmonization. Concept harmonization means the reduction or elimination of minor differences between two or more closely related concepts. Concept harmonization is not the transfer of a concept system to another language. It involves the comparison and matching of concepts and concept systems in one or more languages or subject fields.
ISO 860:
Term harmonization refers to the designation of a single concept (in different languages) by terms that reflect similar characteristics or similar forms. Term harmonization is possible only when the concepts the terms represent are almost exactly the same. The standard contains a flow chart for the harmonization process and a description of the procedures for performing it.
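As a loose illustration of the comparison step, one can quantify how closely the characteristic sets of two closely related concepts overlap. Note the caveats: ISO 860 prescribes a procedure and flow chart, not a numeric metric, and the function, metric, and example characteristics below are all invented:

```python
def characteristic_overlap(concept_a, concept_b):
    """Jaccard overlap of two concepts' characteristic sets -- an
    illustrative stand-in for the comparison-and-matching step;
    not part of ISO 860 itself."""
    a, b = set(concept_a), set(concept_b)
    return len(a & b) / len(a | b)

# Hypothetical characteristics of two closely related concepts drawn
# from two subject fields:
concept_1 = {"threaded", "fastener", "external thread", "driven by torque"}
concept_2 = {"threaded", "fastener", "external thread", "pointed tip"}
print(characteristic_overlap(concept_1, concept_2))  # 0.6
```

A high overlap with only minor differing characteristics is the situation where concept harmonization (reducing or eliminating those minor differences) becomes worthwhile.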
Amendments:
ISO 860:2007 specifies a methodological approach to the harmonization of concepts, concept systems, definitions and terms. It applies to the development of harmonized terminologies, at either the national or international level, in either a monolingual or a multilingual context. It replaces: ISO 860:1996
**Mirapoint Email Appliance**
Mirapoint Email Appliance:
Mirapoint Email Appliance is a Unix-like standards-compliant black-box e-mail server, with built-in anti-spam, anti-virus, webmail, POP, IMAP, calendar, and LDAP routing options available.
System configuration and maintenance is done through a web interface, or through SSH or telnet access to a command line interpreter (CLI). Full access to the Unix-like Messaging Operating System (MOS) is not available. Depending on the model and configuration the appliances can be used as email routers, user mail servers, or as an all-in-one server.
Appliance Generations:
The first generation of Mirapoint Email Appliances was introduced in December 1998. The two models were the M100 and M1000, which were identical in hardware; their differences were in user count, with licensing done in software. Both platforms consisted of a 3U CPU unit tied to a 3U RAID array and a 3U rack-mounted UPS (uninterruptible power supply). Both platforms initially ran Mirapoint MOS version 1.3, the first version of MOS to be released. In the initial release it was not possible to run without the UPS; later releases allowed a no-UPS run state, enabled by a license key.
Appliance Generations:
The second generation of Mirapoint products included both email servers and edge mail appliances. The email servers were introduced in August 1999 to replace the low-end M100 and shipped with MOS version 2.0. All the models consisted of 3U appliances with internal RAID storage. This generation featured a UPS built into the CPU head unit. The email server models were numbered SP270 (Service Provider), ES210, ES220 and ES230 (Enterprise Server). The edge mail appliances followed shortly after in December 1999 and were numbered as the MR200 (Mail Router) and the MS200 (Mail Switch).
Appliance Generations:
By late 2000, a minor revision was done to the 200 series, and the email servers were renamed M200 Message Servers, dropping the SP and ES designations. A high-end version of the second generation chassis was introduced, the M2000, replacing the M1000 series. The M2000 followed the M1000 specs, in offering a large external RAID array and external UPS. The new twist was clustered failover, allowing two M2000 heads to connect to a single RAID array with redundant controllers.
Acquisitions:
IceWarp Inc. acquired the Mirapoint software business from Synchronoss Technologies, Inc. on 30 January 2017.
**Adult album alternative**
Adult album alternative:
Adult album alternative (also triple-A, AAA, or adult alternative) is a radio format. Its roots trace to both the "classic album stations of the ’70s as well as the alternative rock format that developed in the ’80s."
Format:
The format has a broader, more diverse playlist than most other formats. Musical selection tends to be on the fringe of mainstream pop and rock. It also includes many other music genres such as indie rock, Americana, pop rock, classic rock, alternative rock, new wave, alternative country, jazz, folk, world music, jam band and blues. The musical selections tend to avoid hard rock and rap music. Music selection also includes tracks from albums that are not singles, which leads to the enhanced and larger playlist. Some AAA outlets focus more on classic rock artists, folk and blues while others focus on more contemporary artists and modern/indie rock.
Popularity:
Some of the songs that first air on the Triple-A format have later found additional popularity on the Adult Top 40, modern rock, or adult contemporary charts months after their initial Triple-A chart runs. The format is often seen as a "test market" for emerging artists.

The format has gone off and on in the Los Angeles radio market. Currently KCSN and simulcast partner KSBR broadcast a Triple A format in the Los Angeles and Orange County areas, respectively. The format still exists in New York City (WFUV); Chicago (WXRT); Philadelphia (WXPN); Seattle (KPNW); Minneapolis (KCMP); Boston (WXRV, WERS, and Americana leaning WUMB-FM); Baltimore (WTMD); Aspen, Colorado (KSPN-FM); Denver (KBCO and KVOQ); Fort Collins (KJAC); Portland, Oregon (KINK); Portland, Maine (WCLZ); Indianapolis (WTTS); Nashville (WRLT, WNXP, and Americana leaning WMOT); Conway, New Hampshire (WMWV); Burlington, Vermont (WNCS); Turners Falls-Northampton, Massachusetts (WRSI); Bozeman, Montana (KMMS-FM); Woodstock, New York (WDST); Austin (KGSR-HD2, KUTX, and KTSN); and Dallas (KKXT).

On July 10, 2008, Billboard began a Triple-A chart (using information from sister publication Radio and Records, a news magazine devoted to the radio and music industries that has since ceased publication). Rival Mediabase 24/7 also compiles a Triple A chart. As of mid-2009, Radio and Records publications and accompanying charts were discontinued. As of 2010, Billboard publishes Triple A charts in the magazine and for its premium members on its website. Mediabase also publishes Triple A charts on a daily basis.
Popularity:
Additional Triple-A charts are published by CMJ and FMQB. FMQB also produces the annual Triple A Conference in Boulder, Colorado, USA, an event that grew out of the Gavin Report's Triple A Summit, first held in 1993. FMQB took over production of the event, rebranding it as the Triple A Conference, after the closing of Radio & Records in 2009.
Popularity:
At the end of 2019, FMQB closed and all Triple A services were absorbed by Jack Barton Entertainment, LLC (JBE), helmed by Jack Barton, former VP/Triple A at FMQB. JBE has rebranded the Boulder convention as the Triple A SummitFest and continues to publish weekly Triple A charts, including a Non-Commercial album chart, as well as a weekly newsletter (JBE Triple A Report) covering Triple A radio and the music it plays.
**Autoload**
Autoload:
In computer programming, autoloading is the capability of loading and linking portions of a program from mass storage automatically when needed, so that the programmer is not required to define or include those portions of the program explicitly. Many high-level programming languages include autoload capabilities, which sacrifice some run-time speed for ease of coding and speed of initial compilation/linking.
Autoload:
Typical autoload systems intercept procedure calls to undefined subroutines. The autoloader searches through a path of directories in the computer's file system, to find a file containing source or object code that defines the subroutine. The autoloader then loads and links the file, and hands control back to the main program so that the subroutine gets executed as if it had already been defined and linked before the call.
Autoload:
Many interactive and high-level languages operate in this way. For example, IDL includes a primitive path searcher, and Perl allows individual modules to determine how and whether autoloading should occur. The Unix shell may be said to consist almost entirely of an autoloader, as its main job is to search a path of directories to load and execute command files. In PHP 5, autoload functionality is triggered when referencing an undefined class. One or more autoload functions—implemented as the __autoload magic function or any function registered to the SPL autoload stack—is called and given the opportunity to define the class, usually by loading the file it is defined in.
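The mechanism described above can be sketched in Python. The `Autoloader` class, its directory layout, and the one-function-per-file convention below are invented for illustration, not any real runtime's API: the loader intercepts lookups of undefined names, searches a path of directories for a matching file, loads and links it, and hands the subroutine back to the caller.

```python
import importlib.util
import os

class Autoloader:
    """Minimal autoloader sketch: resolves undefined names by searching
    a path of directories for "<name>.py" and loading it on demand."""

    def __init__(self, search_path):
        self.search_path = search_path   # directories to search, in order
        self.cache = {}                  # already-loaded subroutines

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails, i.e. for
        # subroutines that have not been loaded yet.
        if name in self.cache:
            return self.cache[name]
        for directory in self.search_path:
            candidate = os.path.join(directory, name + ".py")
            if os.path.exists(candidate):
                spec = importlib.util.spec_from_file_location(name, candidate)
                module = importlib.util.module_from_spec(spec)
                spec.loader.exec_module(module)       # load and link the file
                func = getattr(module, name)          # same-named subroutine
                self.cache[name] = func
                return func
        raise AttributeError(f"no autoloadable definition found for {name!r}")
```

With a file `greet.py` on the search path defining `def greet(): ...`, calling `loader.greet()` loads the file on first use and runs it as if it had been defined up front; subsequent calls hit the cache.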
**Tropospheric scatter**
Tropospheric scatter:
Tropospheric scatter, also known as troposcatter, is a method of communicating with microwave radio signals over considerable distances – often up to 500 kilometres (310 mi) and farther depending on frequency of operation, equipment type, terrain, and climate factors. This method of propagation uses the tropospheric scatter phenomenon, in which radio waves at UHF and SHF frequencies are randomly scattered as they pass through the upper layers of the troposphere. Radio signals are transmitted in a narrow beam aimed just above the horizon in the direction of the receiver station. As the signals pass through the troposphere, some of the energy is scattered back toward the Earth, allowing the receiver station to pick up the signal.

Normally, signals in the microwave frequency range travel in straight lines, and so are limited to line-of-sight applications, in which the receiver can be 'seen' by the transmitter. Communication distances are limited by the visual horizon to around 48–64 kilometres (30–40 mi). Troposcatter allows microwave communication beyond the horizon. It was developed in the 1950s and used for military communications until communications satellites largely replaced it in the 1970s.
Tropospheric scatter:
Because the troposphere is turbulent and has a high proportion of moisture, the tropospheric scatter radio signals are refracted and consequently only a tiny proportion of the transmitted radio energy is collected by the receiving antennas. Frequencies of transmission around 2 GHz are best suited for tropospheric scatter systems as at this frequency the wavelength of the signal interacts well with the moist, turbulent areas of the troposphere, improving signal-to-noise ratios.
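For a concrete sense of scale, the carrier wavelength at the 2 GHz figure quoted above follows directly from λ = c/f:

```python
c = 299_792_458      # speed of light in vacuum, m/s
f = 2e9              # 2 GHz carrier frequency, Hz
wavelength = c / f   # metres
print(f"wavelength at 2 GHz: {wavelength * 100:.1f} cm")  # about 15 cm
```

So the signals being scattered are roughly 15 cm waves, which fixes the physical scale of the tropospheric irregularities the link depends on.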
Overview:
Discovery: Prior to World War II, prevailing radio physics theory predicted a relationship between frequency and diffraction suggesting that radio signals would follow the curvature of the Earth, but that the strength of the effect would fall off rapidly, especially at higher frequencies. During the war, however, there were numerous incidents in which high-frequency radar signals detected targets at ranges far beyond the theoretical calculations. In spite of these repeated instances of anomalous range, the matter was never seriously studied.

In the immediate post-war era, the limitation on television construction was lifted in the United States and millions of sets were sold. This drove an equally rapid expansion of new television stations. Based on the same calculations used during the war, the Federal Communications Commission (FCC) arranged frequency allocations for the new VHF and UHF channels to avoid interference between stations. To everyone's surprise, interference was common, even between widely separated stations. As a result, licenses for new stations were put on hold in what is known as the "television freeze" of 1948.

Bell Labs was among the many organizations that began studying this effect, and concluded it was a previously unknown type of reflection off the tropopause. This was limited to higher frequencies, in the UHF and microwave bands, which is why it had not been seen prior to the war, when these frequencies were beyond the ability of existing electronics. Although the vast majority of the signal went through the troposphere and on to space, the tiny amount that was reflected was useful if combined with powerful transmitters and very sensitive receivers. In 1952, Bell began experiments with Lincoln Labs, the MIT-affiliated radar research lab. Using Lincoln's powerful microwave transmitters and Bell's sensitive receivers, they built several experimental systems to test a variety of frequencies and weather effects.
When Bell Canada heard of the system, they felt it might be useful for a new communications network in Labrador and took one of the systems there for cold-weather testing. In 1954, the results from both test series were complete and construction began on the first troposcatter system, the Pole Vault system, which linked Pinetree Line radar stations along the coast of Labrador. Using troposcatter reduced the number of stations from 50 microwave relays scattered through the wilderness to only 10, all located at the radar stations. In spite of their higher unit costs, the new network cost half as much to build as a relay system. Pole Vault was quickly followed by similar systems like White Alice, relays on the Mid-Canada Line and the DEW Line, and, during the 1960s, links across the Atlantic Ocean and Europe as part of NATO's ACE High system.
Overview:
Use: The propagation losses are very high; only about one trillionth (10^−12) of the transmitted power is available at the receiver. This demands the use of antennas with extremely large antenna gain. The original Pole Vault system used large parabolic reflector dish antennas, but these were soon replaced by billboard antennas, which were somewhat more robust, an important quality given that these systems were often found in harsh locales. Paths were established at distances over 1,000 kilometres (620 mi). They required antennas ranging from 9 to 36 metres (30 to 118 ft) and amplifiers ranging from 1 kW to 50 kW. These were analogue systems which were capable of transmitting a few voice channels.
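To make the quoted loss concrete: a path delivering only one trillionth of the transmitted power represents 120 dB of net loss, so even a powerful transmitter yields only nanowatts at the receiver. A rough link-budget sketch (the 10 kW figure is an assumption picked from the 1 kW–50 kW amplifier range cited above):

```python
import math

def to_dbm(watts):
    """Convert power in watts to dBm (decibels relative to 1 mW)."""
    return 10 * math.log10(watts * 1000)

tx_power_w = 10_000        # assumed 10 kW transmitter, within the cited range
path_fraction = 1e-12      # about one trillionth of the power survives the path
rx_power_w = tx_power_w * path_fraction

print(f"net path loss: {10 * math.log10(path_fraction):.0f} dB")  # -120 dB
print(f"received power: {rx_power_w:.1e} W ({to_dbm(rx_power_w):.0f} dBm)")
```

The result, on the order of 10 nW (−50 dBm), is why the systems demanded enormous antenna gains and very sensitive receivers.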
Overview:
Troposcatter systems have evolved over the years. With communication satellites used for long-distance communication links, current troposcatter systems are employed over shorter distances than previous systems, use smaller antennas and amplifiers, and have much higher bandwidth capabilities. Typical distances are between 50 and 250 kilometres (31 and 155 mi), though greater distances can be achieved depending on the climate, terrain, and data rate required. Typical antenna sizes range from 1.2 to 12 metres (3 ft 11 in to 39 ft 4 in) while typical amplifier sizes range from 10 W to 2 kW. Data rates over 20 Mbit/s can be achieved with today's technology.
Overview:
Tropospheric scatter is a fairly secure method of propagation as dish alignment is critical, making it extremely difficult to intercept the signals, especially if transmitted across open water, making them highly attractive to military users. Military systems have tended to be ‘thin-line’ tropo – so called because only a narrow bandwidth ‘information’ channel was carried on the tropo system; generally up to 32 analogue (4 kHz bandwidth) channels. Modern military systems are "wideband" as they operate 4-16 Mbit/s digital data channels.
Overview:
Civilian troposcatter systems, such as the British Telecom (BT) North Sea oil communications network, required higher capacity ‘information’ channels than were available using HF (high frequency – 3 MHz to 30 MHz) radio signals, before satellite technology was available. The BT systems, based at Scousburgh in the Shetland Islands, Mormond Hill in Aberdeenshire and Row Brow near Scarborough, were capable of transmitting and receiving 156 analogue (4 kHz bandwidth) channels of data and telephony to / from North Sea oil production platforms, using frequency-division multiplexing (FDMX) to combine the channels.
Overview:
Because of the nature of the turbulence in the troposphere, quadruple diversity propagation paths were used to ensure 99.98% reliability of the service, equating to about 3 minutes of downtime due to propagation drop out per month. The quadruple space and polarisation diversity systems needed two separate dish antennas (spaced several metres apart) and two differently polarised feed horns – one using vertical polarisation, the other using horizontal polarisation. This ensured that at least one signal path was open at any one time. The signals from the four different paths were recombined in the receiver where a phase corrector removed the phase differences of each signal. Phase differences were caused by the different path lengths of each signal from transmitter to receiver. Once phase corrected, the four signals could be combined additively.
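The "at least one signal path open at any one time" logic above can be quantified under the idealised assumption that the four diversity paths fade independently; real paths are partially correlated, so this is only a sketch with an invented per-path outage probability:

```python
def combined_outage(per_path_outage, n_paths=4):
    """Outage probability of an n-way diversity system, assuming
    (idealised) independent fading on each path: service drops only
    when every path fails at the same time."""
    return per_path_outage ** n_paths

p = 0.05  # illustrative single-path outage probability (an assumption)
outage = combined_outage(p)
print(f"quadruple-diversity outage: {outage:.2e}")
print(f"availability: {(1 - outage) * 100:.4f}%")
```

Even a mediocre single path becomes a highly available combined link: the outage shrinks as the fourth power of the per-path outage, which is the engineering rationale for quadruple space and polarisation diversity.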
Tropospheric scatter communications networks:
The tropospheric scatter phenomenon has been used to build both civilian and military communication links in a number of parts of the world, including:

- Allied Command Europe Highband (ACE High), NATO – NATO military radiocommunication and early warning system throughout Europe, from the Norwegian–Soviet border to the Turkish–Soviet border.
- BT (British Telecom), United Kingdom – Shetland to Mormond Hill.
- Fernmeldeturm Berlin, West Germany – Torfhaus–Berlin and Clenze–Berlin links during the Cold War.
- Portugal Telecom, Portugal – Serra de Nogueira (northeastern Portugal) to Artzamendi (southwestern France).
- CNCP Telecommunications, Canada – Tsiigehtchic to Galena Hill, Keno City; Hay River – Port Radium – Lady Franklin Point.
- Cuba – Florida – Guanabo to Florida City.
- Project Offices – AT&T Corporation, United States – Project Offices is the name sometimes used to refer to several structurally dependable facilities maintained by the AT&T Corporation in the Mid-Atlantic states since the mid-20th century to house an ongoing, non-public company project. AT&T began constructing Project Offices in the 1960s. Since the inception of the program, the company has chosen not to disclose the exact nature of business conducted at Project Offices, describing them only as central facilities. Locations: Pittsboro, North Carolina; Buckingham, Virginia; Charlottesville, Virginia; Leesburg, Virginia; Hagerstown, Maryland.
- Texas Towers – air defence radars, United States Air Force – a set of three radar facilities off the eastern seaboard of the United States used for surveillance by the United States Air Force during the Cold War. Modeled on the offshore oil drilling platforms first employed off the Texas coast, they were in operation from 1958 to 1963.
- Mid Canada Line, Canada – a series of five stations (070, 060, 050, 415, 410) in Ontario and Quebec around the lower Hudson Bay, and a series of six stations built in Labrador and Quebec between Goose Bay and Sept-Îles between 1957 and 1958.
- Pinetree Line, Pole Vault, Canada – Pole Vault was a series of fourteen stations providing communications for the eastern-seaboard radar stations of the US/Canadian Pinetree Line, running from N-31 Frobisher Bay, Baffin Island to N-22 St. John's, Newfoundland.
- White Alice/DEW Line/DEW Training (Cold War era), United States/Canada – a former military and civil communications network with eighty stations stretching up the western seaboard from Port Hardy, Vancouver Island north to Barter Island (BAR), west to Shemya, Alaska (SYA) in the Aleutian Islands (just a few hundred miles from the Soviet Union), and east across arctic Canada to Greenland. Not all stations were troposcatter, but many were. It also included a training facility for the White Alice/DEW Line troposcatter network located between Pecatonica, Illinois, and Streator, Illinois.
- DEW Line (post-Cold War era), United States/Canada – several troposcatter networks providing communications for the extensive air-defence radar chain in the far north of Canada and the US.
- North Atlantic Radio System (NARS), NATO – NATO air-defence network stretching from RAF Fylingdales, via Mormond Hill, UK, Sornfelli (Faroe Islands) and Höfn, Iceland, to Keflavik DYE-5, Rockville.
- European Tropospheric Scatter – Army (ET-A), United States Army – a US Army network from RAF Fylingdales to a network in Germany and a single station in France (Maison Fort). The network became active in 1966.
- 486L Mediterranean Communications System (MEDCOM), United States Air Force – a network covering the European coast of the Mediterranean Sea from San Pablo, Spain in the west to Incirlik Air Base, Turkey in the east, with headquarters at Ringstead in Dorset, England. Commissioned by the US Air Force in 1966; included the Spanish Communications Region.
- Royal Air Force, UK – communications to British Forces Germany, running from Swingate in Kent to Lammersdorf in Germany.
- Troposphären-Nachrichtensystem Bars, Warsaw Pact – a Warsaw Pact troposcatter network stretching from near Rostock in the DDR (Deutsche Demokratische Republik) through Czechoslovakia, Hungary, Poland, Byelorussia (USSR), Ukraine (USSR) and Bulgaria.
- TRRL SEVER, Soviet Union – a Soviet network stretching across the USSR.
- India – Soviet Union – a single section from Srinagar, Kashmir, India to Dangara, Tajikistan, USSR.
- Indian Air Force, India – part of an air defence network covering major air bases, radar installations and missile sites in northern and central India. The network is being phased out, to be replaced with more modern fiber-optic based communication systems.
- Peace Ruby, Spellout, Peace Net, Imperial State of Iran – an air-defence network set up by the United States prior to the 1979 Islamic Revolution. Spellout built a radar and communications network in the north of Iran, Peace Ruby built another air-defence network in the south, and Peace Net integrated the two networks.
- Bahrain – UAE – a troposcatter system linking Al Manamah, Bahrain to Dubai, United Arab Emirates.
- Royal Air Force of Oman, Oman – a troposcatter communications system providing military communications to the former SOAF (Sultan of Oman's Air Force, now RAFO, Royal Air Force of Oman) across the Sultanate of Oman.
- Royal Saudi Air Force, Saudi Arabia – a Royal Saudi Air Force troposcatter network linking major airbases and population centres in Saudi Arabia.
- Yemen – a single system linking Sana'a with Sa'dah.
- BACK PORCH and Integrated Wideband Communications System (IWCS), United States – two networks run by the United States linking military bases in Thailand and South Vietnam. Stations were located at Bangkok, Ubon Royal Thai Air Force Base, Pleiku, Nha Trang, Vung Chua mountain (13.742°N 109.196°E) near Quy Nhon, Monkey Mountain Facility, Da Nang, Phu Bai Combat Base, Pr Line (11.868°N 108.547°E) near Da Lat, Hon Cong mountain, An Khê, Phu Lam (10.752°N 106.627°E), Saigon, VC Hill (10.36°N 107.068°E), Vũng Tàu and Cần Thơ.
- Phil-Tai-Oki, Taiwan – a system linking Taiwan with the Philippines and Okinawa.
- Cable & Wireless Caribbean network – a troposcatter link established by Cable & Wireless in 1960, linking Barbados with Port of Spain, Trinidad. The network was extended further south to Georgetown, Guyana in 1965.
- Japanese Troposcatter Networks, Japan – two networks linking Japanese islands from north to south.
Tactical Troposcatter Communication systems:
As well as the permanent networks detailed above, many tactical transportable systems have been produced by several countries:
- Soviet/Russian troposcatter systems: MNIRTI R-423-1 Brig-1 / R-423-2A Brig-2A / R-423-1KF; MNIRTI R-444 Eshelon / R-444-7,5 Eshelon D; MNIRTI R-420 Atlet-D; NIRTI R-417 Baget / R-417S Baget S; NPP Radiosvyaz R-412 A/B/F/S TORF; MNIRTI R-410 / R-410-5,5 / R-410-7,5 Atlet / Albatros; MNIRTI R-408 / R-408M Baklan.
- People's Republic of China (PRC), People's Liberation Army (PLA) troposcatter systems: CETC TS-504 Troposcatter Communication System; CETC TS-510/GS-510 Troposcatter Communication System.
- Western troposcatter systems: AN/TRC-97 Troposcatter Communication System; AN/TRC-170 Tropospheric Scatter Microwave Radio Terminal; AN/GRC-201 Troposcatter Communication System.
The U.S. Army and Air Force use tactical tropospheric scatter systems developed by Raytheon for long-haul communications. The systems come in two configurations: the original "heavy tropo" and a newer "light tropo". They provide four multiplexed group channels, trunk encryption, and 16 or 32 local analog phone extensions. The U.S. Marine Corps uses the same equipment, albeit an older version.
**Edwards equation**
Edwards equation:
The Edwards equation in organic chemistry is a two-parameter equation for correlating nucleophilic reactivity, as defined by relative rate constants, with the basicity of the nucleophile (relative to protons) and its polarizability. This equation was first developed by John O. Edwards in 1954 and later revised based on additional work in 1956. The general idea is that most nucleophiles are also good bases because the concentration of negatively charged electron density that defines a nucleophile will strongly attract positively charged protons, which is the definition of a base according to Brønsted–Lowry acid-base theory. Additionally, highly polarizable nucleophiles will have greater nucleophilic character than suggested by their basicity because their electron density can be shifted with relative ease to concentrate in one area.
History:
Prior to Edwards developing his equation, other scientists were also working to define nucleophilicity quantitatively. Brønsted and Pedersen first discovered the relationship between basicity, with respect to protons, and nucleophilicity in 1924:

log k_B = β_N · pK_b + C

where k_B is the rate constant for nitramide decomposition by a base B, and β_N is a parameter of the equation.
History:
Swain and Scott later tried to define a more specific and quantitative relationship by correlating nucleophilic data with a single-parameter equation derived in 1953:

log10(k/k0) = s·n

This equation relates the rate constant k of a reaction, normalized to that of a standard reaction with water as the nucleophile (k0), to a nucleophilic constant n for a given nucleophile and a substrate constant s that depends on the sensitivity of a substrate to nucleophilic attack (defined as 1 for methyl bromide). This equation was modeled after the Hammett equation.
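As a numerical illustration, the Swain–Scott relation can be evaluated directly. The nucleophilic constants below are commonly tabulated values (water is the reference with n = 0, and s = 1 for the methyl bromide standard substrate); they are quoted here only to show how the equation is used, not as part of the original derivation.

```python
# Swain–Scott relation: log10(k / k0) = s * n
# k0 is the rate constant for the standard reaction with water.

def rate_ratio(s: float, n: float) -> float:
    """Return the rate enhancement k / k0 predicted by the Swain–Scott equation."""
    return 10 ** (s * n)

# Commonly tabulated nucleophilic constants n (water is the reference, n = 0):
nucleophiles = {"H2O": 0.00, "Cl-": 3.04, "HO-": 4.20, "I-": 5.04}

s_methyl_bromide = 1.00  # substrate constant, defined as 1 for CH3Br
for name, n in nucleophiles.items():
    print(f"{name:>4}: k/k0 = {rate_ratio(s_methyl_bromide, n):.3g}")
```

Note how the logarithmic form means each unit of n corresponds to a tenfold rate enhancement over water for the reference substrate.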
History:
However, both the Swain–Scott equation and the Brønsted relationship make the rather inaccurate assumption that all nucleophiles have the same reactivity with respect to a specific reaction site. There are several different categories of nucleophiles with different attacking atoms (e.g. oxygen, carbon, nitrogen) and each of these atoms has different nucleophilic characteristics. The Edwards equation attempts to account for this additional parameter by introducing a polarizability term.
Edwards equations:
The first generation of the Edwards equation was

log(k/k0) = α·E_n + β·H

where k and k0 are the rate constants for a nucleophile and for a standard (H2O). H is a measure of the basicity of the nucleophile relative to protons, as defined by the equation

H = pK_a + 1.74

where the pK_a is that of the conjugate acid of the nucleophile and the constant 1.74 is the correction for the pK_a of H3O+.
Edwards equations:
E_n is the term Edwards introduced to account for the polarizability of the nucleophile. It is related to the oxidation potential E0 of the reaction

2X− ⇌ X2 + 2e−

(oxidative dimerization of the nucleophile) by the equation

E_n = E0 + 2.60

where 2.60 is the correction for the oxidative dimerization of water, obtained from a least-squares correlation of the data in Edwards' first paper on the subject. α and β are then parameters unique to a specific substrate that express its sensitivity to the polarizability and basicity factors respectively.
Edwards equations:
However, because some values of β appeared to be negative as defined by the first generation of the Edwards equation, which theoretically should not occur, Edwards adjusted his equation. The term E_n was found to have some dependence on the basicity relative to protons (H), because some factors that affect basicity also influence the electrochemical properties of the nucleophile. To account for this, E_n was redefined in terms of basicity and polarizability (given as the molar refractivity, R_N):

E_n = aP + bH, where P = log(R_N / R_H2O)

The values of a and b, obtained by the method of least squares, are 3.60 and 0.0624 respectively. With this new definition of E_n, the Edwards equation can be rearranged:

log(k/k0) = AP + BH, where A = αa and B = β + αb

However, because the second generation of the equation was also the final one, it is sometimes written as log(k/k0) = αP + βH, especially since it was republished in that form in a later paper of Edwards', leading to confusion over which parameters are being defined.
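To make the relationship between the two forms concrete, the sketch below evaluates both and checks that they agree. The constants a = 3.60 and b = 0.0624 are those quoted above; the values of α, β, P and H are purely hypothetical, chosen only for illustration.

```python
# Edwards equation, second generation, in its two equivalent forms:
#   log(k/k0) = alpha * En + beta * H,  with En = a*P + b*H
#   log(k/k0) = A * P + B * H,          with A = alpha*a, B = beta + alpha*b

A_CONST, B_CONST = 3.60, 0.0624  # least-squares constants a and b from the text

def edwards_form1(alpha: float, beta: float, P: float, H: float) -> float:
    """Evaluate log(k/k0) via the En-based form."""
    En = A_CONST * P + B_CONST * H
    return alpha * En + beta * H

def edwards_form2(alpha: float, beta: float, P: float, H: float) -> float:
    """Evaluate log(k/k0) via the rearranged P/H form."""
    A = alpha * A_CONST
    B = beta + alpha * B_CONST
    return A * P + B * H

# Hypothetical substrate sensitivities and nucleophile parameters,
# used only to demonstrate that the two forms are algebraically identical:
alpha, beta, P, H = 2.50, 0.30, 0.40, 10.9
assert abs(edwards_form1(alpha, beta, P, H) - edwards_form2(alpha, beta, P, H)) < 1e-12
print(edwards_form1(alpha, beta, P, H))
```

The identity A = αa, B = β + αb is just a regrouping of terms, which is why confusion can arise when the final equation is printed with α and β reused as the coefficient names.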
Significance:
A later paper by Edwards and Pearson, following research done by Jencks and Carriuolo in 1960, led to the discovery of an additional factor in nucleophilic reactivity, which Edwards and Pearson called the alpha effect: nucleophiles with a lone pair of electrons on an atom adjacent to the nucleophilic center show enhanced reactivity. The alpha effect, basicity, and polarizability are still accepted as the main factors in determining nucleophilic reactivity. As such, the Edwards equation is applied in a qualitative sense much more frequently than in a quantitative one.
Significance:
In studying nucleophilic reactions, Edwards and Pearson noticed that for certain classes of nucleophiles most of the contribution of nucleophilic character originated from their basicity, resulting in large β values. For other nucleophiles, most of the nucleophilic character came from their high polarizability, with little contribution from basicity, resulting in large α values. This observation led Pearson to develop his hard-soft acid-base theory, which is arguably the most important contribution that the Edwards equation has made to current understanding of organic and inorganic chemistry. Nucleophiles, or bases, that were polarizable, with large α values, were categorized as “soft”, and nucleophiles that were non-polarizable, with large β and small α values, were categorized as “hard”.
Significance:
The Edwards equation parameters have since been used to help categorize acids and bases as hard or soft, due to the approach's simplicity.
**Eating your own dog food**
Eating your own dog food:
Eating your own dog food or "dogfooding" is the practice of using one's own products or services. This can be a way for an organization to test its products in real-world usage using product management techniques. Hence dogfooding can act as quality control, and eventually a kind of testimonial advertising. Once in the market, dogfooding can demonstrate developers' confidence in their own products.
Origin of the term:
In 2006, Warren Harrison, the Editor-in-Chief of IEEE Software, recounted that in 1970s television advertisements for Alpo dog food, Lorne Greene pointed out that he fed Alpo to his own dogs. Another possible origin he recalled is the president of Kal Kan Pet Food, who was said to eat a can of his company's dog food at shareholders' meetings. In 1988, Microsoft manager Paul Maritz sent Brian Valentine, test manager for Microsoft LAN Manager, an email titled "Eating our own Dogfood", challenging him to increase internal usage of the company's product. From there, the usage of the term spread through the company.
Real world usage:
InfoWorld commented that this needs to be a transparent and honest process: "watered-down examples, such as auto dealers' policy of making salespeople drive the brands they sell, or Coca-Cola allowing no Pepsi products in corporate offices ... are irrelevant." In this sense, a corporate culture of not supporting the competitor is not the same as a philosophy of "eating your own dog food". The latter focuses on the functional aspects of the company's own product.
Real world usage:
Dogfooding allows employees to test their company's products in real-life situations, giving management a sense of how the product might be used before launch to consumers; whether this amounts to an advantage beyond marketing remains debated. In software development, dogfooding can occur in multiple stages: first, a stable version of the software is used with just a single new feature added; then multiple new features can be combined into a single version of the software and tested together. This allows several rounds of validation before the software is released. The practice enables proactive resolution of potential inconsistency and dependency issues, especially when several developers or teams work on the same product. The risks of public dogfooding, specifically that a company may have difficulties using its own products, may reduce how often dogfooding is publicized.
Examples:
In February 1980, Apple Computer president Michael Scott wrote a memo announcing "Effective Immediately!! No more typewriters are to be purchased, leased etc., etc. ... We believe the typewriter is obsolete. Let's prove it inside before we try and convince our customers." He set a goal to remove all typewriters from the company by 1 January 1981. By 1987, Atari Corp. was in the process of using the Atari ST throughout the company. The development of Windows NT at Microsoft involved over 200 developers in small teams, and it was held together by Dave Cutler's February 1991 insistence on dogfooding: Microsoft developed the operating system on computers running NT daily builds. The software was initially crash-prone, but the immediate feedback of code breaking the build, the loss of pride, and the knowledge of impeding the work of others were all powerful motivators. Windows developers would typically dogfood or self-host Windows starting from the early (alpha) builds, while the rest of the employees would start from the more stable beta builds that were also available to MSDN subscribers. In 2005, InfoWorld reported that a tour of Microsoft's network operations center "showed pretty much beyond a reasonable doubt that Microsoft does run its 20,000-plus node, international network on 99 percent Windows technology, including servers, workstations, and edge security". InfoWorld argued that "Microsoft's use of Windows for its high-traffic operations tipped many doubters over to Windows' side of the fence." In the mid-1990s, Microsoft's internal email system initially ran on Unix; when asked why, the company publicly moved to using Microsoft Exchange. In 1997, an email storm known as the Bedlam DL3 incident prompted Microsoft to build more robust features into Microsoft Exchange Server to avoid lost and duplicate emails and network and server downtime, although dogfooding is rarely so dramatic. A second email storm in 2006 was handled perfectly by the system.
Examples:
In 1999, Hewlett-Packard staff referred to a project using HP's own products as "Project Alpo" (referring to a brand of dog food). Around the same time, Mozilla also practised dogfooding under that exact name. Government green public procurement that allows testing of proposed environmental policies has been compared to dogfooding. On 1 June 2011, YouTube added a license feature to its video uploading service allowing users to choose between a standard or Creative Commons license. The license label was followed by the message "(Shh! – Internal Dogfood)", which appeared on all YouTube videos lacking commercial licensing; a YouTube employee confirmed that this referred to products that are tested internally. Oracle Corporation stated that as of October 2016 it "runs Oracle Linux with more than 42,000 servers [to] support more than 4 million external users and 84,000 internal users. More than 20,000 developers at Oracle use Oracle Linux".
Criticisms and support:
Forcing those who design products to actually use and rely on them is sometimes thought to improve quality and usability, but software developers may be blind to usability problems and may possess knowledge an end user lacks about how to make the software work. Microsoft's chief information officer noted in 2008 that, previously, "We tended not to go through the actual customer experience. We were always upgrading from a beta, not from production disk to production disk." Dogfooding may happen too early to be viable, and those forced to use the products may assume that someone else has reported the problem, or they may get used to applying workarounds. Dogfooding may be unrealistic, as customers will always have a choice of different companies' products to use together, and the product may not be used as intended. The process can lead to a loss of productivity and demoralisation, or, at its extreme, to "Not Invented Here" syndrome, i.e. only using internal products.
Criticisms and support:
In 1989, Donald Knuth published a paper recounting lessons from the development of his TeX typesetting software, in which he described the benefits of the approach:

"Thus, I came to the conclusion that the designer of a new system must not only be the implementor and the first large-scale user; the designer should also write the first user manual. The separation of any of these four components would have hurt TeX significantly. If I had not participated fully in all these activities, literally hundreds of improvements would never have been made, because I would never have thought of them or perceived why they were important."
Alternative terms:
In 2007, Jo Hoppe, the CIO of Pegasystems, said that she uses the alternative phrase "drinking our own champagne". Novell's head of public relations Bruce Lowry, commenting on his company's use of Linux and OpenOffice.org, said that he also prefers this phrase. In 2009, the new CIO of Microsoft, Tony Scott, argued that the phrase "dogfooding" was unappealing and should be replaced by "icecreaming", with the aim of developing products as "ice cream that our customers want to consume". A less controversial and common alternative term used in some contexts is self-hosting, where developers' workstations would, for instance, get updated automatically overnight to the latest daily build of the software or operating system on which they work. Developers of IBM's mainframe operating systems have long used the term "eating our own cooking".
**Proceedings of the Combustion Institute**
Proceedings of the Combustion Institute:
The Proceedings of the Combustion Institute are the proceedings of the biennial Combustion Symposium organized by The Combustion Institute. The publication contains the most significant contributions in fundamental and applied research on combustion phenomena. Research papers and invited topical reviews cover reaction kinetics; soot, PAH and other large molecules; diagnostics; laminar flames; turbulent flames; heterogeneous combustion; spray and droplet combustion; detonations, explosions and supersonic combustion; fire research; stationary combustion systems; internal combustion engine and gas turbine combustion; and new technology concepts. The editors-in-chief are Daniel C. Haworth (Pennsylvania State University) and Terese Løvås (Norwegian University of Science and Technology).
History:
The need for development of automotive engines, fuels, and aviation formed the basis for the organization which became The Combustion Institute. The first three symposiums were held in 1928, 1937, and 1948. Since 1952, symposiums have been held every second year. The first combustion symposium with published proceedings was in 1948.
Abstracting and indexing:
The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2015 impact factor of 4.120.
**Indoor venues in Sweden**
Indoor venues in Sweden:
These venues are used for multiple indoor sports, such as ice hockey, bandy, handball and floorball. Concerts and exhibitions are held when the venues are not in sporting use. Some of the venues are used in the national league of the respective sport.
== Venues ==
**Retarded position**
Retarded position:
Einstein's equations admit gravitational wave-like solutions. In the case of a moving point-like mass, and in the linearized limit of the weak-gravity approximation, these solutions of the Einstein equations are known as the Liénard–Wiechert gravitational potentials. Wave-like variations in the gravitational field at any point of space at some instant of time t are generated by the mass taken at the preceding (or retarded) instant of time s < t on its world line, at the vertex of the null cone connecting the mass and the field point. The position of the mass that generates the field is called the retarded position, and the Liénard–Wiechert potentials are called the retarded potentials. Gravitational waves caused by the acceleration of a mass appear to come from the position and direction of the mass at the time it was accelerated (the retarded time and position). The retarded time and the retarded position of the mass are a direct consequence of the finite value of the speed of gravity, the speed with which gravitational waves propagate in space.
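The retarded time is defined implicitly by the light-cone condition c(t − s) = |r_field − r_source(s)|. A minimal sketch of solving this condition numerically by fixed-point iteration, with an assumed illustrative trajectory and units in which c = 1, might look like:

```python
# The retarded time s satisfies the implicit light-cone condition
#   c * (t - s) = |r_field - r_source(s)|
# For a slowly moving source this can be solved by fixed-point iteration.

import math

C = 1.0  # propagation speed of gravity (units chosen so that c = 1)

def retarded_time(t, r_field, trajectory, tol=1e-12):
    """Solve s = t - |r_field - trajectory(s)| / C by iteration."""
    s = t
    for _ in range(200):
        dist = math.dist(r_field, trajectory(s))
        s_new = t - dist / C
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s

# Illustrative example: a mass drifting slowly along the x-axis, x(s) = 0.1 * s.
traj = lambda s: (0.1 * s, 0.0, 0.0)
s = retarded_time(t=10.0, r_field=(5.0, 0.0, 0.0), trajectory=traj)
# For a source at rest at distance d the result reduces to s = t - d/c.
```

The iteration converges quickly here because the source speed is well below c, so each update shrinks the error by roughly the ratio of the two speeds.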
Retarded position:
As in the case of the Liénard–Wiechert potentials for electromagnetic effects and waves, the static potentials from a moving gravitational mass (i.e., its simple gravitational field, also known as gravitostatic field) are "updated," so that they point to the mass's actual position at constant velocity, with no retardation effects. This happens also for static electric and magnetic effects and is required by Lorentz symmetry, since any mass or charge moving with constant velocity at a great distance, could be replaced by a moving observer at the same distance, with the object now at "rest." In this latter case, the static gravitational field seen by the observer would be required to point to the same position, which is the non-retarded position of the object (mass). Only gravitational waves, caused by acceleration of a mass, and which cannot be removed by a change in a distant observer's inertial frame, must be subject to aberration, and thus originate from a retarded position and direction, due to their finite velocity of travel from their source. Such waves correspond to electromagnetic waves radiated from an accelerated charge.
Retarded position:
Note that for gravitational masses moving past each other in straight lines (or, for that matter, for electromagnetically charged objects), there is little or no retardation effect on the forces between them, which are mediated by the "static" components of the fields. So long as no radiation is emitted, conservation of momentum requires that the forces between objects (whether electromagnetic or gravitational) point at the objects' instantaneous, up-to-date positions, and not in the direction of their speed-of-light-delayed (retarded) positions. However, since no information can be transmitted by such an interaction, these influences (which seem to propagate faster than light) cannot be used to violate the principles of relativity.
**Devil's torture chamber**
Devil's torture chamber:
The Devil's Torture Chamber is a stage illusion of the classic type, involving a magician's female assistant in a large box, and is probably best categorised as a penetration- or restoration-type illusion.
Description:
The magician presents an upright cabinet that is just big enough to hold a person, with a little space above their head. The magician then presents a rack of metal spikes. Spectators are offered the chance to tap the spikes with a metal implement to prove they are real and solid. The spikes are fitted into the top of the cabinet pointing downwards. An assistant is introduced and steps into the cabinet. The door is closed and the spikes are forced downwards using handles that protrude through slots in the side of the cabinet. The implication is that the assistant must have been impaled by the spikes; however, the door is opened to reveal the assistant alive and unharmed. There are several slight variations. Sometimes the assistant carries a string of inflated balloons when stepping into the cabinet; these are burst as the spikes descend to give an added audible dimension to the illusion. Another description has a small door opened to show the assistant alive when the spikes are at the bottom of the cabinet; the small door is then closed again and the spikes lifted to the top before the assistant is finally fully revealed. French magician Don José de Murcia performs a version of this illusion under the title "La Herse Infernal".
History:
The illusion is thought to be the creation of Floyd Thayer, founder of the Thayer Magic Company. Blueprints for it appeared in the Thayer catalog #7 supplement, which dates it to the early 1930s.
**Clinical Genetics (journal)**
Clinical Genetics (journal):
Clinical Genetics is a monthly peer-reviewed medical journal covering medical genetics. It was established in 1970 and is published by Wiley-Blackwell. The editor-in-chief is Reiner A. Veitia (University of Paris).
Abstracting and indexing:
The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, its 2018 impact factor is 4.104.
**Timeline of science fiction**
Timeline of science fiction:
This is a timeline of science fiction as a literary tradition. While the date of the start of science fiction is debated, this list includes a range of Ancient, Medieval, and Renaissance-era precursors and proto-science fiction as well, as long as these examples include typical science fiction themes and topoi such as travel to outer space and encounter with alien life-forms.
**Goldwork (embroidery)**
Goldwork (embroidery):
Goldwork is the art of embroidery using metal threads. It is particularly prized for the way light plays on it. The term "goldwork" is used even when the threads are imitation gold, silver, or copper. The metal wires used to make the threads have never been entirely gold; they have always been gold-coated silver or cheaper metals, and even then the "gold" often contains a very low percentage of real gold. Most metal threads are available in silver and sometimes copper as well as gold; some are available in colors as well.
Goldwork (embroidery):
Goldwork is always surface embroidery and free embroidery; the vast majority is a form of laid work or couching; that is, the gold threads are held onto the surface of the fabric by a second thread, usually of fine silk. The ends of the thread, depending on type, are simply cut off, or are pulled through to the back of the embroidery and carefully secured with the couching thread. A tool called a mellore or a stiletto is used to help position the threads and create the holes needed to pull them through. The threads most often have metal or gold leaf wound around a textile thread, or are threads treated with an adhesive and rolled in powdered gold or other metal. Goldwork was originally developed in Asia, and has been used for at least 2000 years. Its use reached a remarkable level of skill in the Middle Ages, when a style called Opus Anglicanum was developed in England and used extensively in church vestments and hangings. After this period it was also used frequently in the clothing and furnishings of the royalty and nobility throughout Europe, and still later on military and other regalia. The same silver and gold threads were also used heavily in the most expensive tapestries, especially during the Renaissance. Goldwork is currently a fairly uncommon skill, even among embroiderers who work in other free embroidery styles; it is now most commonly used for the highest-quality church vestments and art embroidery. It has always been reserved for occasional and special use only, due both to the expense of the materials and the time needed to create the embroidery, and because the threads, no matter how expertly applied, will not hold up to frequent laundering of any kind.
Goldwork (embroidery):
Embroidered goldwork is not to be confused with the even more luxurious cloth of gold, where similar gold threads are woven through the whole piece of textile.
History:
Goldwork was originally developed in Asia, and has been used for at least 2000 years. In China, it possibly dates back to the Shang dynasty (c. 1570 BC – c. 1045 BC) according to archaeological studies, but was certainly in use by the Eastern Han dynasty (25 to 220 AD). It had reached ancient Rome soon after 189 BC, initially made in Pergamum (modern Bergama in western Turkey). King Attalus I probably established large state workshops there, and the gold-embroidered cloth was known as "Attalic" cloth. Pliny the Elder credited Attalus I with inventing the technique, but this is most unlikely. The toga picta, worn by generals in their Roman triumphs, and later by consuls and emperors, was dyed solid purple, decorated with imagery in gold thread, and worn over a similarly decorated tunica palmata.
History:
After the fall of the Roman empire, goldwork was generally reserved for garments of the nobility and for church hangings and vestments, and as a luxury technique it survived from ancient times into the Middle Ages. It featured significantly in Byzantine dress and church textiles, and was sometimes worn by musicians and servants in uniform. When illiteracy was common and written materials thus had less impact, "images and the visual realm [had] more power over the senses and the mind.... The pomp and circumstance created by the awe-inspiring use of metal threads in church work was observed keenly by kings and emulated where possible."
China:
In China, gold embroidery is a traditional Chinese craft with a long history, originally used in the imperial palaces and temples. Chinese goldwork, including the application of gold leaf, gold powder and gold thread (as embroidery or as woven textile, with the exception of Nasīj) in clothing and textiles, as well as its silver-work version, originated in ancient China and was used at least since the Eastern Han dynasty (25 to 220 AD) or earlier, with possible usage in the Shang dynasty (c. 1570 BC – c. 1045 BC). Since the Zhou dynasty (c. 1046 to 256 BC), Chinese embroidery had been used as a social class marker. In China, embroidery in gold was found on imperial and ceremonial dress, on religious dress, and on other textile objects. Chinese goldwork often used red silk threads for couching, adding a warmer tone to the embroidery. One of the two important branches of Chinese gold embroidery is Chao embroidery, developed in Chaozhou, Guangdong province, since the Tang dynasty (618 to 907 AD); the other is the gold- and silver-coloured embroidery of Ningbo, which mostly uses gold and silver metallic threads. Chinese goldwork embroidery was also introduced in India, where it underwent significant development and innovation, and in Europe through the Silk Road.
History:
Europe:
The record of gold embroidery extends far back in English history. Thomas of Ely noted that the Abbess of Ely, St. Etheldrada, who died in 679, was adept at goldwork embroidery and made St. Cuthbert a stole and maniple richly embroidered in gold and adorned with gems. Embroidery was thought to be a fitting activity for noblewomen, both those within and outside of convents. Its use reached a remarkable level of skill in the Middle Ages, particularly between 1250 and 1350, when the style called Opus Anglicanum ("English work") reached its peak in England. It was used extensively in church vestments and hangings, and was quite costly because of the metal threads, gems, and pearls that were used. It is conjectured that some of the artwork used in this embroidery originated in illuminated manuscripts, and may have been designed by the same artists. The decline in the quality of this style of embroidery is thought to be a result of the losses during the Black Death.
History:
Gold thread technology was also adopted by Italian weavers. Italian centres of silk production (Lucca, Venice, Florence, and Milan) producing cloth of gold began appearing after the Crusades. Even though gold thread had by then been produced in Europe for a long time, it was still associated with its origins in China. The production of gold cloth became common in Europe, for example in France and Italy, by the 16th century.
History:
After this period it was also used frequently in the clothing and furnishings of royalty and the nobility throughout Europe, and still later on military and other regalia. The Imperial and Ecclesiastical Treasury in Vienna displays vestments decorated with accomplished Or nué in the form of saints. Or nué is a special technique invented in the 15th century, in which many threads of passing or Japan thread are laid down parallel and touching; by varying the spacing and color of the couching stitches, elaborate, gleaming images can be created. This is commonly used to depict the garments of saints in church embroidery.
History:
India:
The use of goldwork in India predates the arrival of the Greeks in 365–323 BC. Indian metal thread embroidery uses precious and semiprecious stones and wire in distinctive ways. Gold and silver embroidery, known as zari, was certainly in use in India by the 15th century. Gold thread made of beaten metal strips wrapped around a silk core was introduced to India from Singapore.
Contemporary goldwork:
Goldwork is currently a fairly uncommon skill, even among embroiderers who work in other free embroidery styles; it is now most commonly used for the highest-quality church vestments and art embroidery. It has always been reserved for occasional and special use, both because of the expense of the materials and the time needed to create the embroidery, and because the threads usually will not hold up to frequent laundering of any kind.
Contemporary goldwork:
Goldwork styles and techniques have evolved thanks to the availability of plastic sequin waste, metallic leather and other new materials. Goldwork embroiderer and textiles artist Kathleen Laurel Sage regularly uses sequin waste in her designs to create a style that is not found in traditional goldwork.
Types of metal thread:
A variety of threads exists, in order to create differing textures.
Types of metal thread:
Passing is the most basic and common thread used in goldwork; it consists of a thin strip of metal wound around a core of cotton or silk. For gold thread the core is typically yellow, or in older examples orange; for silver, white or gray. This is always attached by couching, either one or two threads at a time, and pulled through to the back to secure it. When multiple threads must be laid next to each other, a technique called bricking is used: the position of the couching stitches is offset between rows, producing an appearance similar to a brick wall. This same type of thread is used in making cloth of gold.
Types of metal thread:
Japan thread, sometimes called jap, is a cheaper replacement for passing, and is far more commonly used in modern goldwork. It appears nearly identical, but rather than a strip of metal, a strip of foil paper is wrapped around the core.
Bullion or Purl is structurally a very long spring, hollow at the core; it can be stretched apart slightly and couched between the wraps of wire, or cut into short lengths and applied like beads. This thread comes in both shiny and matte versions.
Types of metal thread:
Jaceron or Pearl purl is similar to bullion, but with a much wider piece of metal which has been shaped (rounded) prior to purling it, such that it looks like a string of pearl-like beads when couched down between the wraps of metal. Lizerine is a similar thread that has a flat appearance having not been shaped prior to purling.
Types of metal thread:
Frieze or Check purl is again similar, but the metal used is shaped differently, producing a faceted, sparkly look.
Faconnee or Crimped purl is almost identical to bullion, but has been crimped at intervals.
Roccoco and the similar Crinkle cordonnet are made of wire tightly wrapped around a cotton core, with a wavy or kinked appearance.
Milliary wire is a stretched pearl purl laced to a base of passing thread.
Broad Plate is a strip of metal about 2 millimeters wide; often it is used to fill small shapes by folding it back and forth, hiding the couching stitches under the folds. It is also available as 11's plate, which is 1 mm wide, and as whipped plate, where the broad plate has a fine wire wrapped around it.
Flat Worm or simply Oval thread is a thin plate wrapped around a yarn core and flattened slightly. This is used like plate, but is considerably easier to work with.
Types of metal thread:
Twists or Torsade are threads made of multiple strands of metal twisted together; some, such as Soutache, have different colored metals or colored non-metal threads twisted together. These are either couched like passing, with the couching thread visible, or with the thread angled with the twist to make it invisible. In addition, paillettes or spangles (sequins of real metal), small pieces of appliquéd rich fabric or kid leather, pearls, and real or imitation gems are commonly used as accents, and felt or string padding may be used to create raised areas or texture. Silk thread work in satin stitch or other stitches is often combined with goldwork, and in some periods goldwork was combined with blackwork embroidery as well.
**Clomegestone acetate**
Clomegestone acetate:
Clomegestone acetate (USAN) (developmental code name SH-741), or clomagestone acetate, also known as 6-chloro-17α-acetoxy-16α-methylpregna-4,6-diene-3,20-dione, is a steroidal progestin of the 17α-hydroxyprogesterone group which was developed as an oral contraceptive but was never marketed. It is the acetate ester of clomegestone, which, similarly to clomegestone acetate, was never marketed. Clomegestone acetate is also the 17-desoxy congener of clometherone, and is somewhat more potent in comparison. Similarly to cyproterone acetate, clomegestone acetate has been found to alter insulin receptor concentrations in adipose tissue, and this may indicate the presence of glucocorticoid activity.
**Medium access control**
Medium access control:
In IEEE 802 LAN/MAN standards, the medium access control (MAC, also called media access control) sublayer is the layer that controls the hardware responsible for interaction with the wired, optical or wireless transmission medium. The MAC sublayer and the logical link control (LLC) sublayer together make up the data link layer. The LLC provides flow control and multiplexing for the logical link (e.g. EtherType, 802.1Q VLAN tag), while the MAC provides flow control and multiplexing for the transmission medium.
Medium access control:
These two sublayers together correspond to layer 2 of the OSI model. For compatibility reasons, LLC is optional for implementations of IEEE 802.3 (the frames are then "raw"), but compulsory for implementations of other IEEE 802 physical layer standards. Within the hierarchy of the OSI model and IEEE 802 standards, the MAC sublayer provides a control abstraction of the physical layer such that the complexities of physical link control are invisible to the LLC and upper layers of the network stack. Thus any LLC sublayer (and higher layers) may be used with any MAC. In turn, the medium access control block is formally connected to the PHY via a media-independent interface. Although the MAC block is today typically integrated with the PHY within the same device package, historically any MAC could be used with any PHY, independent of the transmission medium.
Medium access control:
When sending data to another device on the network, the MAC sublayer encapsulates higher-level frames into frames appropriate for the transmission medium (i.e. the MAC adds a syncword preamble and also padding if necessary), adds a frame check sequence to identify transmission errors, and then forwards the data to the physical layer as soon as the appropriate channel access method permits it. For topologies with a collision domain (bus, ring, mesh, point-to-multipoint topologies), controlling when data is sent and when to wait is necessary to avoid collisions. Additionally, the MAC is also responsible for compensating for collisions by initiating retransmission if a jam signal is detected. When receiving data from the physical layer, the MAC block ensures data integrity by verifying the sender's frame check sequences, and strips off the sender's preamble and padding before passing the data up to the higher layers.
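The frame check sequence step can be illustrated concretely. Ethernet's FCS is a CRC-32, and Python's standard `zlib.crc32` uses the same polynomial, so a minimal sketch is possible (the helper names are ours, and byte-ordering details of real Ethernet hardware are simplified):

```python
import zlib

def append_fcs(payload: bytes) -> bytes:
    """Append a CRC-32 frame check sequence (as Ethernet does) to a frame."""
    fcs = zlib.crc32(payload) & 0xFFFFFFFF
    # Ethernet transmits the FCS least-significant byte first.
    return payload + fcs.to_bytes(4, "little")

def fcs_ok(frame: bytes) -> bool:
    """Verify the trailing FCS of a received frame."""
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "little") == fcs

frame = append_fcs(b"\x00\x01\x02\x03 some higher-layer payload")
assert fcs_ok(frame)                               # intact frame passes
corrupted = bytes([frame[0] ^ 0xFF]) + frame[1:]
assert not fcs_ok(corrupted)                       # a flipped byte is detected
```

A receiver performing the check described above would discard the corrupted frame rather than pass it to the LLC.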
Functions performed in the MAC sublayer:
According to IEEE Std 802-2001 section 6.2.3 "MAC sublayer", the primary functions performed by the MAC layer are:
- Frame delimiting and recognition
- Addressing of destination stations (both as individual stations and as groups of stations)
- Conveyance of source-station addressing information
- Transparent data transfer of LLC PDUs, or of equivalent information in the Ethernet sublayer
- Protection against errors, generally by means of generating and checking frame check sequences
- Control of access to the physical transmission medium
In the case of Ethernet, the functions required of a MAC are:
- receive/transmit normal frames
- half-duplex retransmission and backoff functions
- append/check FCS (frame check sequence)
- interframe gap enforcement
- discard malformed frames
- prepend(tx)/remove(rx) preamble, SFD (start frame delimiter), and padding
- half-duplex compatibility: append(tx)/remove(rx) MAC address
Addressing mechanism:
The local network addresses used in IEEE 802 networks and FDDI networks are called media access control addresses; they are based on the addressing scheme that was used in early Ethernet implementations. A MAC address is intended as a unique serial number. MAC addresses are typically assigned to network interface hardware at the time of manufacture. The most significant part of the address identifies the manufacturer, who assigns the remainder of the address, thus providing a potentially unique address. This makes it possible for frames to be delivered on a network link that interconnects hosts by some combination of repeaters, hubs, bridges and switches, but not by network layer routers. Thus, for example, when an IP packet reaches its destination (sub)network, the destination IP address (a layer 3 or network layer concept) is resolved with the Address Resolution Protocol for IPv4, or by Neighbor Discovery Protocol (IPv6) into the MAC address (a layer 2 concept) of the destination host.
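The split between the manufacturer-assigned part (the OUI, in the first three octets) and the device-specific remainder can be sketched as follows (the helper name `parse_mac` is ours, not a standard API):

```python
def parse_mac(mac: str) -> dict:
    """Split a 48-bit MAC address into its OUI (manufacturer) and
    device-specific parts, and decode the two flag bits of the first octet."""
    octets = bytes(int(part, 16) for part in mac.split(":"))
    assert len(octets) == 6
    return {
        "oui": octets[:3].hex(":"),           # assigned to the manufacturer
        "device": octets[3:].hex(":"),        # assigned by the manufacturer
        "multicast": bool(octets[0] & 0x01),  # I/G bit: group address?
        "locally_administered": bool(octets[0] & 0x02),  # U/L bit
    }

info = parse_mac("01:00:5e:7f:00:01")  # an IPv4 multicast MAC address
assert info["multicast"] and info["oui"] == "01:00:5e"
```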
Addressing mechanism:
Examples of physical networks are Ethernet networks and Wi-Fi networks, both of which are IEEE 802 networks and use IEEE 802 48-bit MAC addresses.
A MAC layer is not required in full-duplex point-to-point communication, but address fields are included in some point-to-point protocols for compatibility reasons.
Channel access control mechanism:
The channel access control mechanisms provided by the MAC layer are also known as a multiple access method. This makes it possible for several stations connected to the same physical medium to share it. Examples of shared physical media are bus networks, ring networks, hub networks, wireless networks and half-duplex point-to-point links. The multiple access method may detect or avoid data packet collisions if a packet-mode contention-based channel access method is used, or reserve resources to establish a logical channel if a circuit-switched or channelization-based channel access method is used. The channel access control mechanism relies on a physical layer multiplex scheme.
Channel access control mechanism:
The most widespread multiple access method is the contention-based CSMA/CD used in Ethernet networks. This mechanism is only utilized within a network collision domain, for example an Ethernet bus network or a hub-based star topology network. An Ethernet network may be divided into several collision domains, interconnected by bridges and switches.
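The collision-recovery rule used by CSMA/CD is truncated binary exponential backoff. A sketch of the scheduling decision, with the attempt limit of classic Ethernet (the function name is ours):

```python
import random

MAX_ATTEMPTS = 16   # classic Ethernet drops the frame after 16 collisions

def backoff_slots(collisions: int, rng=random) -> int:
    """Truncated binary exponential backoff: after n collisions, wait a
    uniformly random number of slot times drawn from [0, 2**min(n, 10) - 1]."""
    if collisions >= MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame dropped")
    return rng.randrange(2 ** min(collisions, 10))

rng = random.Random(42)
waits = [backoff_slots(n, rng) for n in range(1, MAX_ATTEMPTS)]
assert all(0 <= w < 2 ** min(n, 10) for n, w in enumerate(waits, start=1))
```

Doubling the contention window after each collision spreads retransmissions out over time, which is what lets many stations share one collision domain.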
A multiple access method is not required in a switched full-duplex network, such as today's switched Ethernet networks, but is often available in the equipment for compatibility reasons.
Channel access control mechanism for concurrent transmission:
Use of directional antennas and millimeter-wave communication in a wireless personal area network increases the probability of concurrent scheduling of non-interfering transmissions in a localized area, which results in an immense increase in network throughput. However, the optimal scheduling of concurrent transmissions is an NP-hard problem.
Cellular networks:
Cellular networks, such as GSM, UMTS or LTE networks, also use a MAC layer. The MAC protocol in cellular networks is designed to maximize the utilization of the expensive licensed spectrum. The air interface of a cellular network is at layers 1 and 2 of the OSI model; at layer 2, it is divided into multiple protocol layers. In UMTS and LTE, those protocols are the Packet Data Convergence Protocol (PDCP), the Radio Link Control (RLC) protocol, and the MAC protocol. The base station has absolute control over the air interface and schedules the downlink access as well as the uplink access of all devices. The MAC protocol is specified by 3GPP in TS 25.321 for UMTS, TS 36.321 for LTE and TS 38.321 for 5G.
**Canon EF 200-400mm lens**
Canon EF 200-400mm lens:
The EF 200–400 mm f/4L IS USM Extender 1.4× is an EF mount super telephoto zoom lens produced by Canon. It is part of the professional L-series and functions with the Canon EOS line of cameras. The EF 200–400 mm lens features an ultrasonic motor, image stabilization and weather sealing. It is the first and only EF lens with a built-in extender.
History:
The EF 200–400 mm f/4L was first announced to be in development in February 2011, and a prototype was showcased at the 2011 CP+ trade show. In November 2011, Canon announced that availability of the lens would be postponed to an unspecified later date. The lens was eventually released in May 2013, more than a year later.
Built-in 1.4× Extender:
Although not the first lens to be equipped with a built-in teleconverter, an honor that belongs to the older Canon FD mount 1200 mm f/5.6L 1.4× lens, the EF 200–400 is the first for the EF mount, and the first to enter mass production. The extender, when engaged, increases the lens's focal length by a factor of 1.4× while decreasing the aperture by a single stop, turning it into a 280–560 mm f/5.6 lens. The extender is activated and deactivated by a mechanical switch close to the lens mount.
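The teleconverter arithmetic works out as follows: a 1.4× factor multiplies both the focal length and the f-number by 1.4, and multiplying the f-number by √2 ≈ 1.4 costs exactly one stop of light. A small illustrative helper (ours, not Canon code):

```python
def with_extender(focal_mm: float, f_number: float, factor: float = 1.4):
    """Apply a teleconverter: the focal length scales by the factor, and the
    f-number scales by the same factor (one stop lost for a 1.4x extender)."""
    return focal_mm * factor, f_number * factor

focal, fnum = with_extender(200, 4.0)   # wide end of the zoom range
assert round(focal) == 280 and round(fnum, 1) == 5.6
focal, fnum = with_extender(400, 4.0)   # long end of the zoom range
assert round(focal) == 560 and round(fnum, 1) == 5.6
```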
**The History of Mathematical Tables**
The History of Mathematical Tables:
The History of Mathematical Tables: from Sumer to Spreadsheets is an edited volume in the history of mathematics on mathematical tables. It was edited by Martin Campbell-Kelly, Mary Croarken, Raymond Flood, and Eleanor Robson, developed out of the presentations at a conference on the subject organised in 2001 by the British Society for the History of Mathematics, and published in 2003 by the Oxford University Press.
Topics:
An introductory chapter classifies tables broadly according to whether they are intended as aids to calculation (based on mathematical formulas) or as analyses and records of data, and further subdivides them according to how they were compiled. Following this, the contributions to the book include articles on the following topics:
- Tables of data in Babylonian mathematics, administration, and astronomy, by Eleanor Robson
- Early tables of logarithms, by Graham Jagger
- Life tables in actuarial science, by Christopher Lewin and Margaret de Valois
- The work of Gaspard de Prony constructing mathematical tables in revolutionary France, by Ivor Grattan-Guinness
- Difference engines, by Michael Williams
- The uses and advantages of machines in table-making, and error correction in mechanical tables, by Doron Swade
- Astronomical tables, by Arthur Norberg
- The data processing and statistical analyses used to produce tables of census data from punched cards, by Edward Higgs
- British table-making committees, and the transition from calculators to computers, by Mary Croarken
- The Mathematical Tables Project of the Works Progress Administration, in New York during the Great Depression of the 1930s and early 1940s, by David Alan Grier
- The work of the British Nautical Almanac Office, by George A. Wilkins
- Spreadsheets, by Martin Campbell-Kelly.
The work is presented on VIII + 361 pages in a unified format with illustrations throughout, and with the historical and biographical context of the material set aside in separate text boxes.
Audience and reception:
Reviewer Paul J. Campbell finds it ironic that, unlike the works it discusses, "there are no tables in the back of the book". Reviewer Sandy L. Zabell calls the book "interesting and highly readable". Both Peggy A. Kidwell and Fernando Q. Gouvêa note several topics that would have been worthwhile to include, including tables in mathematics in medieval Islam or other non-Western cultures, the book printing industry that provided inexpensive books of tables in the 19th century, and the development of mathematical tables in Germany. As Kidwell writes, "like most good books, this one not only tells good stories, but leaves the reader hoping to learn more". Gouvêa evaluates the book as being useful in its coverage of a topic often missed in broader surveys of the history of mathematics, of interest both to historians of mathematics and to a more general audience interested in the development of these topics, and "a must-have for libraries".
**Thyrotropin receptor**
Thyrotropin receptor:
The thyrotropin receptor (or TSH receptor) is a receptor (and associated protein) that responds to thyroid-stimulating hormone (also known as "thyrotropin") and stimulates the production of thyroxine (T4) and triiodothyronine (T3). The TSH receptor is a member of the G protein-coupled receptor superfamily of integral membrane proteins and is coupled to the Gs protein. It is primarily found on the surface of the thyroid epithelial cells, but is also found on adipose tissue and fibroblasts; the latter explains the myxedema seen in Graves' disease. In addition, it has also been found to be expressed in the anterior pituitary gland, hypothalamus and kidneys. Its presence in the anterior pituitary gland may be involved in mediating the paracrine feedback inhibition of thyrotropin along the hypothalamus–pituitary–thyroid axis.
Function:
Upon binding circulating TSH, a G-protein signal cascade activates adenylyl cyclase and intracellular levels of cAMP rise. cAMP activates all functional aspects of the thyroid cell, including iodine pumping; thyroglobulin synthesis, iodination, endocytosis, and proteolysis; thyroid peroxidase activity; and hormone release. TSHR is involved in regulating seasonal reproduction in vertebrates.
**Quantum cellular automaton**
Quantum cellular automaton:
A quantum cellular automaton (QCA) is an abstract model of quantum computation, devised in analogy to conventional models of cellular automata introduced by John von Neumann. The same name may also refer to quantum dot cellular automata, which are a proposed physical implementation of "classical" cellular automata by exploiting quantum mechanical phenomena. QCA has attracted a lot of attention as a result of its extremely small feature size (at the molecular or even atomic scale) and its ultra-low power consumption, making it one candidate for replacing CMOS technology.
Usage of the term:
In the context of models of computation or of physical systems, quantum cellular automaton refers to the merger of elements of both (1) the study of cellular automata in conventional computer science and (2) the study of quantum information processing. In particular, the following are features of models of quantum cellular automata: The computation is considered to come about by parallel operation of multiple computing devices, or cells. The cells are usually taken to be identical, finite-dimensional quantum systems (e.g. each cell is a qubit).
Usage of the term:
Each cell has a neighborhood of other cells. Altogether these form a network of cells, which is usually taken to be regular (e.g. the cells are arranged as a lattice with or without periodic boundary conditions).
The evolution of all of the cells has a number of physics-like symmetries. Locality is one: the next state of a cell depends only on its current state and that of its neighbours. Homogeneity is another: the evolution acts the same everywhere, and is independent of time.
Usage of the term:
The state space of the cells, and the operations performed on them, should be motivated by principles of quantum mechanics. Another feature that is often considered important for a model of quantum cellular automata is that it should be universal for quantum computation (i.e. that it can efficiently simulate quantum Turing machines, some arbitrary quantum circuit, or simply all other quantum cellular automata).
Usage of the term:
Models which have been proposed recently impose further conditions, e.g. that quantum cellular automata should be reversible and/or locally unitary, and have an easily determined global transition function from the rule for updating individual cells. Recent results show that these properties can be derived axiomatically, from the symmetries of the global evolution.
Models:
Early proposals:
In 1982, Richard Feynman suggested an initial approach to quantizing a model of cellular automata. In 1985, David Deutsch presented a formal development of the subject. Later, Gerhard Grössing and Anton Zeilinger introduced the term "quantum cellular automata" to refer to a model they defined in 1988, although their model had very little in common with the concepts developed by Deutsch and so has not been developed significantly as a model of computation.
Models:
Models of universal quantum computation:
The first formal model of quantum cellular automata to be researched in depth was that introduced by John Watrous. This model was developed further by Wim van Dam, as well as Christoph Dürr, Huong LêThanh, and Miklos Santha, Jozef Gruska, and Pablo Arrighi. However, it was later realised that this definition was too loose, in the sense that some instances of it allow superluminal signalling. A second wave of models includes those of Susanne Richter and Reinhard Werner, of Benjamin Schumacher and Reinhard Werner, of Carlos Pérez-Delgado and Donny Cheung, and of Pablo Arrighi, Vincent Nesme and Reinhard Werner. These are all closely related and do not suffer any such locality issue. In the end one can say that they all agree to picture quantum cellular automata as just some large quantum circuit, infinitely repeating across time and space. Recent reviews of the topic are available in the literature.
Models:
Models of physical systems:
Models of quantum cellular automata have been proposed by David Meyer, Bruce Boghosian and Washington Taylor, and Peter Love and Bruce Boghosian as a means of simulating quantum lattice gases, motivated by the use of "classical" cellular automata to model classical physical phenomena such as gas dispersion. Criteria determining when a quantum cellular automaton (QCA) can be described as a quantum lattice gas automaton (QLGA) were given by Asif Shakeel and Peter Love.
Models:
Quantum dot cellular automata:
A proposal for implementing classical cellular automata with systems of quantum dots was put forward under the name "quantum cellular automata" by Doug Tougaw and Craig Lent, as a replacement for classical computation using CMOS technology. In order to better differentiate between this proposal and models of cellular automata which perform quantum computation, many authors working on this subject now refer to it as a quantum dot cellular automaton.
**Heredofamilial amyloidosis**
Heredofamilial amyloidosis:
Heredofamilial amyloidosis is an inherited condition that may be characterized by systemic or localized deposition of amyloid in body tissues.
**Envelope (mathematics)**
Envelope (mathematics):
In geometry, an envelope of a planar family of curves is a curve that is tangent to each member of the family at some point, and these points of tangency together form the whole envelope. Classically, a point on the envelope can be thought of as the intersection of two "infinitesimally adjacent" curves, meaning the limit of intersections of nearby curves. This idea can be generalized to an envelope of surfaces in space, and so on to higher dimensions.
Envelope (mathematics):
To have an envelope, it is necessary that the individual members of the family of curves are differentiable curves as the concept of tangency does not apply otherwise, and there has to be a smooth transition proceeding through the members. But these conditions are not sufficient – a given family may fail to have an envelope. A simple example of this is given by a family of concentric circles of expanding radius.
Envelope of a family of curves:
Let each curve Ct in the family be given as the solution of an equation ft(x, y)=0 (see implicit curve), where t is a parameter. Write F(t, x, y)=ft(x, y) and assume F is differentiable.
Envelope of a family of curves:
The envelope of the family Ct is then defined as the set D of points (x, y) for which, simultaneously,
F(t, x, y) = 0 and ∂F/∂t(t, x, y) = 0
for some value of t, where ∂F/∂t is the partial derivative of F with respect to t.
If t and u, t ≠ u, are two values of the parameter, then the intersection of the curves Ct and Cu is given by
F(t, x, y) = F(u, x, y) = 0,
or, equivalently,
F(t, x, y) = 0 and (F(u, x, y) − F(t, x, y))/(u − t) = 0.
Envelope of a family of curves:
Letting u → t gives the definition above.
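The two-condition recipe above can be carried out mechanically in a computer algebra system. A minimal sketch using Python's sympy (assumed available; the helper name `envelope` is ours, and it only works when sympy can solve ∂F/∂t = 0 for t in closed form):

```python
import sympy as sp

def envelope(F, t):
    """Eliminate the parameter: solve dF/dt = 0 for t and substitute
    each solution back into F, giving the envelope condition(s) F = 0."""
    conditions = []
    for sol in sp.solve(sp.diff(F, t), t):
        conditions.append(sp.simplify(F.subs(t, sol)))
    return conditions

t, x, y = sp.symbols("t x y")
# Family of unit circles centred at (t, 0); the envelope is the pair of
# horizontal lines y = 1 and y = -1.
F = (x - t)**2 + y**2 - 1
assert envelope(F, t) == [y**2 - 1]   # y**2 - 1 = 0, i.e. y = ±1
```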
Envelope of a family of curves:
An important special case is when F(t, x, y) is a polynomial in t. This includes, by clearing denominators, the case where F(t, x, y) is a rational function in t. In this case, the definition amounts to t being a double root of F(t, x, y), so the equation of the envelope can be found by setting the discriminant of F to 0 (the definition demands F = 0 at some t with the first derivative ∂F/∂t = 0 there, i.e. F has value 0 and attains a min/max at that t).
Envelope of a family of curves:
For example, let Ct be the line whose x and y intercepts are t and 11 − t; this is shown in the animation above. The equation of Ct is
x/t + y/(11 − t) = 1
or, clearing fractions,
x(11 − t) + yt − t(11 − t) = 0,
that is,
t² + (y − x − 11)t + 11x = 0.
The equation of the envelope is then
(y − x − 11)² − 44x = 0, or (x − y)² − 22(x + y) + 121 = 0.
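As a sanity check on this discriminant computation, a short symbolic verification (a sketch assuming the sympy library is available):

```python
import sympy as sp

t, x, y = sp.symbols("t x y")

# Family of lines with x- and y-intercepts t and 11 - t, fractions cleared:
F = t**2 + (y - x - 11)*t + 11*x

# The envelope is where F has a double root in t: discriminant = 0.
env = sp.discriminant(F, t)
assert sp.expand(env - ((y - x - 11)**2 - 44*x)) == 0
assert sp.expand(env - ((x - y)**2 - 22*(x + y) + 121)) == 0
```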
Envelope of a family of curves:
Often when F is not a rational function of the parameter it may be reduced to this case by an appropriate substitution. For example, if the family is given by Cθ with an equation of the form u(x, y)cos θ + v(x, y)sin θ = w(x, y), then putting t = e^(iθ), cos θ = (t + 1/t)/2, sin θ = (t − 1/t)/2i changes the equation of the curve to
u·(1/2)(t + 1/t) + v·(1/2i)(t − 1/t) = w
or
(u − iv)t² − 2wt + (u + iv) = 0.
Envelope of a family of curves:
The equation of the envelope is then given by setting the discriminant to 0:
(u − iv)(u + iv) − w² = 0, or u² + v² = w².
Alternative definitions:
The envelope E₁ is the limit of intersections of nearby curves Ct.
The envelope E₂ is a curve tangent to all of the Ct.
The envelope E₃ is the boundary of the region filled by the curves Ct.
Then E₁ ⊆ D, E₂ ⊆ D and E₃ ⊆ D, where D is the set of points defined at the beginning of this subsection's parent section.
Examples:
Example 1:
These definitions E₁, E₂, and E₃ of the envelope may be different sets. Consider for instance the curve y = x³ parametrised by γ : ℝ → ℝ², where γ(t) = (t, t³). The one-parameter family of curves will be given by the tangent lines to γ.
First we calculate the discriminant D. The generating function is
F(t, (x, y)) = 3t²x − y − 2t³.
Examples:
Calculating the partial derivative gives Ft = 6t(x − t). It follows that either x = t or t = 0. First assume that x = t and t ≠ 0. Substituting into F:
F(t, (t, y)) = t³ − y,
and so, assuming that t ≠ 0, it follows that F = Ft = 0 if and only if (x, y) = (t, t³). Next, assuming that t = 0 and substituting into F gives F(0, (x, y)) = −y. So, assuming t = 0, it follows that F = Ft = 0 if and only if y = 0. Thus the discriminant is the original curve and its tangent line at γ(0):
D = {(x, y) ∈ ℝ² : y = x³} ∪ {(x, y) ∈ ℝ² : y = 0}.
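This case analysis can be checked symbolically; a small sketch assuming the sympy library is available:

```python
import sympy as sp

t, x, y = sp.symbols("t x y")
F = 3*t**2*x - y - 2*t**3      # tangent line to y = x^3 at the point (t, t^3)
Ft = sp.diff(F, t)             # equals 6*t*(x - t)

# Branch x = t: both conditions hold exactly on the curve y = x^3.
assert sp.simplify(F.subs({x: t, y: t**3})) == 0
assert sp.simplify(Ft.subs(x, t)) == 0
# Branch t = 0: both conditions hold on the tangent line y = 0.
assert F.subs({t: 0, y: 0}) == 0
assert Ft.subs(t, 0) == 0
```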
Examples:
Next we calculate E₁. One curve is given by F(t, (x, y)) = 0 and a nearby curve is given by F(t + ε, (x, y)), where ε is some very small number. The intersection point comes from looking at the limit of F(t, (x, y)) = F(t + ε, (x, y)) as ε tends to zero. Notice that F(t, (x, y)) = F(t + ε, (x, y)) if and only if
L := F(t, (x, y)) − F(t + ε, (x, y)) = 0.
Examples:
If t ≠ 0 then L has only a single factor of ε. Assuming that t ≠ 0, the intersection is given by
lim_(ε→0) (1/ε)L = 6t(t − x).
Examples:
Since t ≠ 0 it follows that x = t. The y value is calculated by knowing that this point must lie on a tangent line to the original curve γ: that F(t, (x, y)) = 0. Substituting and solving gives y = t³. When t = 0, L is divisible by ε². Assuming that t = 0, the intersection is given by
lim_(ε→0) (1/ε²)L = −3x.
Examples:
It follows that x = 0, and knowing that F(t, (x, y)) = 0 gives y = 0. It follows that
E₁ = {(x, y) ∈ ℝ² : y = x³}.
Next we calculate E₂. The curve itself is the curve that is tangent to all of its own tangent lines. It follows that
E₂ = {(x, y) ∈ ℝ² : y = x³}.
Examples:
Finally we calculate E₃. Every point in the plane has at least one tangent line to γ passing through it, and so the region filled by the tangent lines is the whole plane. The boundary E₃ is therefore the empty set. Indeed, consider a point in the plane, say (x₀, y₀). This point lies on a tangent line if and only if there exists a t such that
F(t, (x₀, y₀)) = 3t²x₀ − y₀ − 2t³ = 0.
Examples:
This is a cubic in t and as such has at least one real solution. It follows that at least one tangent line to γ must pass through any given point in the plane. If y > x³ and y > 0 then each point (x, y) has exactly one tangent line to γ passing through it. The same is true if y < x³ and y < 0. If y < x³ and y > 0 then each point (x, y) has exactly three distinct tangent lines to γ passing through it. The same is true if y > x³ and y < 0. If y = x³ and y ≠ 0 then each point (x, y) has exactly two tangent lines to γ passing through it (this corresponds to the cubic having one ordinary root and one repeated root). The same is true if y ≠ x³ and y = 0. If y = x³ and x = 0, i.e., x = y = 0, then this point has a single tangent line to γ passing through it (this corresponds to the cubic having one real root of multiplicity 3). It follows that E₃ = ∅.
Examples:
Example 2:
In string art it is common to cross-connect two lines of equally spaced pins. What curve is formed? For simplicity, set the pins on the x- and y-axes; a non-orthogonal layout is a rotation and scaling away. A general straight-line thread connects the two points (0, k − t) and (t, 0), where k is an arbitrary scaling constant, and the family of lines is generated by varying the parameter t. From simple geometry, the equation of this straight line is y = −(k − t)x/t + k − t. Rearranging and casting in the form F(x, y, t) = 0 gives:
(1)  F(x, y, t) = t² + (y − x − k)t + kx = 0.
Now differentiate F(x, y, t) with respect to t and set the result equal to zero, to get
(2)  ∂F/∂t = 2t + (y − x − k) = 0.
These two equations jointly define the equation of the envelope. Substituting (2) into (1) gives t² = kx, that is,
t = √(kx).
Substituting this value of t into (2) and simplifying gives an equation for the envelope:
(3)  y = (√k − √x)²,
or, rearranging into a more elegant form that shows the symmetry between x and y:
(4)  √x + √y = √k.
We can take a rotation of the axes where the b axis is the line y = x oriented northeast and the a axis is the line y = −x oriented southeast. These new axes are related to the original x–y axes by x = (b + a)/√2 and y = (b − a)/√2. We obtain, after substitution into (4) and expansion and simplification,
(5)  b = a²/(√2 k) + k/(2√2),
which is apparently the equation for a parabola with axis along a = 0, i.e. the line y = x.
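The claimed envelope of the string-art family can be verified symbolically. A sketch with sympy (assumed available), using the family F = t² + (y − x − k)t + kx from above with the parameter written as the symbol T:

```python
import sympy as sp

x, k, T = sp.symbols("x k T", positive=True)

y_env = (sp.sqrt(k) - sp.sqrt(x))**2   # candidate envelope: sqrt(x) + sqrt(y) = sqrt(k)
t_touch = sp.sqrt(k*x)                 # parameter of the thread touching at (x, y_env)

F = T**2 + (y_env - x - k)*T + k*x     # the family of threads, F(x, y, t) = 0

assert sp.simplify(F.subs(T, t_touch)) == 0                 # the point lies on the thread
assert sp.simplify(sp.diff(F, T).subs(T, t_touch)) == 0     # the thread is stationary there
```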
Examples:
Example 3:
Let I ⊂ ℝ be an open interval and let γ : I → ℝ² be a smooth plane curve parametrised by arc length. Consider the one-parameter family of normal lines to γ(I). A line is normal to γ at γ(t) if it passes through γ(t) and is perpendicular to the tangent vector to γ at γ(t). Let T denote the unit tangent vector to γ and let N denote the unit normal vector. Using a dot to denote the dot product, the generating family for the one-parameter family of normal lines is given by F : I × ℝ² → ℝ, where
F(t, x) = (x − γ(t)) · T(t).
Examples:
Clearly (x − γ)·T = 0 if and only if x − γ is perpendicular to T, or equivalently, if and only if x − γ is parallel to N, or equivalently, if and only if x = γ + λN for some λ ∈ R. It follows that the set {x ∈ R² : F(t₀, x) = 0} is exactly the normal line to γ at γ(t₀). To find the discriminant of F we need to compute its partial derivative with respect to t:

∂F/∂t(t, x) = κ(t) (x − γ(t)) · N(t) − 1,

where κ is the plane curve curvature of γ. It has been seen that F = 0 if and only if x − γ = λN for some λ ∈ R. Assuming that F = 0 gives

∂F/∂t = λκ(t) − 1.
Examples:
Assuming that κ ≠ 0 it follows that λ = 1/κ and so

D = γ(t) + (1/κ(t)) N(t).
This is exactly the evolute of the curve γ.
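The formula D = γ + (1/κ)N can be spot-checked numerically. The sketch below is an added illustration: it uses an ellipse with a general (non-arc-length) parametrisation, for which κ = (x′y″ − y′x″)/(x′² + y′²)^(3/2), and compares the result with the classical closed form of the ellipse's evolute:

```python
import math

# Evolute of the ellipse (a cos t, b sin t) computed as gamma + (1/kappa) N,
# compared against the classical closed form
# ((a^2 - b^2)/a cos^3 t, (b^2 - a^2)/b sin^3 t).
a, b = 3.0, 2.0

def evolute(t):
    xp, yp = -a*math.sin(t), b*math.cos(t)        # gamma'
    xpp, ypp = -a*math.cos(t), -b*math.sin(t)     # gamma''
    speed = math.hypot(xp, yp)
    kappa = (xp*ypp - yp*xpp) / speed**3          # signed curvature
    nx, ny = -yp/speed, xp/speed                  # unit normal
    return (a*math.cos(t) + nx/kappa, b*math.sin(t) + ny/kappa)

t = 0.7
ex, ey = evolute(t)
cx = (a*a - b*b)/a * math.cos(t)**3               # closed-form evolute
cy = (b*b - a*a)/b * math.sin(t)**3
print(abs(ex - cx), abs(ey - cy))  # both ~0
```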
Example 4 The following example shows that in some cases the envelope of a family of curves may be seen as the topological boundary of a union of sets, whose boundaries are the curves of the envelope. For s > 0 and t > 0 consider the (open) right triangle in a Cartesian plane with vertices (0,0), (s,0) and (0,t):

T_{s,t} := {(x,y) ∈ R₊² : x/s + y/t < 1}.
Fix an exponent α > 0, and consider the union of all the triangles T_{s,t} subject to the constraint s^α + t^α = 1; that is, the open set

Δ_α := ⋃_{s^α + t^α = 1} T_{s,t}.
Examples:
To write a Cartesian representation for Δ_α, start with any s > 0, t > 0 satisfying s^α + t^α = 1 and any (x,y) ∈ R₊². The Hölder inequality in R² with respect to the conjugate exponents p := 1 + 1/α and q := 1 + α gives:

x^(α/(α+1)) + y^(α/(α+1)) ≤ (x/s + y/t)^(α/(α+1)) (s^α + t^α)^(1/(α+1)) = (x/s + y/t)^(α/(α+1)),

with equality if and only if s : t = x^(1/(1+α)) : y^(1/(1+α)). In terms of a union of sets the latter inequality reads: the point (x,y) ∈ R₊² belongs to the set Δ_α, that is, it belongs to some T_{s,t} with s^α + t^α = 1, if and only if it satisfies

x^(α/(α+1)) + y^(α/(α+1)) < 1.
Examples:
Moreover, the boundary in R₊² of the set Δ_α is the envelope of the corresponding family of line segments

{(x,y) ∈ R₊² : x/s + y/t = 1},  s^α + t^α = 1

(that is, the hypotenuses of the triangles), and has Cartesian equation

x^(α/(α+1)) + y^(α/(α+1)) = 1.
Notice that, in particular, the value α = 1 gives the arc of parabola of Example 2, and the value α = 2 (meaning that all hypotenuses are unit length segments) gives the astroid.
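A small numerical experiment (an added illustration for the case α = 2, where s² + t² = 1 makes each hypotenuse a unit segment) confirms that every point of every segment satisfies x^(2/3) + y^(2/3) ≤ 1, with equality exactly at the tangency point (s³, t³) on the astroid:

```python
import math

# For alpha = 2: every point of every hypotenuse x/s + y/t = 1 with
# s^2 + t^2 = 1 stays inside the astroid x^(2/3) + y^(2/3) <= 1, and the
# tangency point (s^3, t^3) lies exactly on the astroid boundary.
worst = 0.0
for i in range(1, 50):
    phi = math.pi/2 * i/50
    s, t = math.cos(phi), math.sin(phi)              # s^2 + t^2 = 1
    for j in range(1, 50):
        x = s * j/50                                 # point on the hypotenuse
        y = t * (1 - x/s)
        worst = max(worst, x**(2/3) + y**(2/3))
    bx, by = s**3, t**3                              # claimed tangency point
    assert abs(bx/s + by/t - 1) < 1e-12              # lies on the segment
    assert abs(bx**(2/3) + by**(2/3) - 1) < 1e-12    # lies on the astroid
print(worst)  # never exceeds 1 (up to rounding)
```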
Examples:
Example 5 We consider the following example of an envelope in motion. Suppose that, at initial height 0, one casts a projectile into the air with constant initial speed v but different elevation angles θ. Let x be the horizontal axis in the motion surface, and let y denote the vertical axis. Then the motion gives the following differential dynamical system:

d²y/dt² = −g,  d²x/dt² = 0,

which satisfies four initial conditions:

dx/dt(0) = v cos θ,  dy/dt(0) = v sin θ,  x(0) = y(0) = 0.
Examples:
Here t denotes motion time, θ is elevation angle, g denotes gravitational acceleration, and v is the constant initial speed (not velocity). The solution of the above system can take an implicit form:

F(x, y, θ) = x tan θ − (g x²)/(2 v² cos² θ) − y = 0.
To find its envelope equation, one may compute the desired derivative:

∂F/∂θ = x/cos²θ − (g x²/(v² cos²θ)) tan θ = 0,

which gives tan θ = v²/(gx).
By eliminating θ, one may reach the following envelope equation:

y = v²/(2g) − (g/(2v²)) x².
Clearly the resulting envelope is also a concave parabola.
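This "parabola of safety" can be checked numerically (an added illustration with arbitrary sample values for v and x): every trajectory y = x tan θ − g x²/(2v² cos² θ) stays below y = v²/(2g) − g x²/(2v²), and a fine sweep over launch angles approaches it:

```python
import math

# Every trajectory stays below the envelope parabola, and the supremum
# over launch angles theta approaches it.
v, g = 20.0, 9.81

def traj_height(x, theta):
    c = math.cos(theta)
    return x*math.tan(theta) - g*x*x/(2*v*v*c*c)

def envelope(x):
    return v*v/(2*g) - g*x*x/(2*v*v)

x = 15.0
best = max(traj_height(x, math.pi/2 * i/2000) for i in range(1, 2000))
assert best <= envelope(x) + 1e-9   # no trajectory exceeds the envelope
print(envelope(x) - best)           # tiny gap from finite angle sampling
```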
Envelope of a family of surfaces:
A one-parameter family of surfaces in three-dimensional Euclidean space is given by a set of equations F(x,y,z,a)=0 depending on a real parameter a. For example, the tangent planes to a surface along a curve in the surface form such a family.
Two surfaces corresponding to different values a and a′ intersect in a common curve defined by

F(x, y, z, a) = 0,  F(x, y, z, a′) = 0.
In the limit as a′ approaches a, this curve tends to a curve contained in the surface at a:

F(x, y, z, a) = 0,  ∂F/∂a(x, y, z, a) = 0.
This curve is called the characteristic of the family at a. As a varies the locus of these characteristic curves defines a surface called the envelope of the family of surfaces.
The envelope of a family of surfaces is tangent to each surface in the family along the characteristic curve in that surface.
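A concrete illustration (not from the article): for the family of unit spheres centered along the x-axis, F(x,y,z,a) = (x − a)² + y² + z² − 1, the characteristic at a is the unit circle in the plane x = a, and the locus of characteristics is the enveloping cylinder y² + z² = 1:

```python
import math

# Family of unit spheres centered at (a, 0, 0): eliminating a from
# F = 0 and dF/da = 0 gives the characteristic circle y^2 + z^2 = 1
# at x = a; the locus of characteristics is the cylinder y^2 + z^2 = 1.
def F(x, y, z, a):
    return (x - a)**2 + y*y + z*z - 1.0

def dF_da(x, y, z, a):
    return -2.0*(x - a)

for a in [0.0, 0.5, 2.0]:
    for k in range(8):                          # points on the characteristic
        th = 2*math.pi*k/8
        x, y, z = a, math.cos(th), math.sin(th)
        assert abs(F(x, y, z, a)) < 1e-12       # on the sphere
        assert abs(dF_da(x, y, z, a)) < 1e-12   # parameter-critical
        assert abs(y*y + z*z - 1.0) < 1e-12     # on the cylinder
print("ok")
```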
Generalisations:
The idea of an envelope of a family of smooth submanifolds follows naturally. In general, if we have a family of submanifolds with codimension c then we need to have at least a c-parameter family of such submanifolds. For example: a one-parameter family of curves in three-space (c = 2) does not, generically, have an envelope.
Applications:
Ordinary differential equations Envelopes are connected to the study of ordinary differential equations (ODEs), and in particular singular solutions of ODEs. Consider, for example, the one-parameter family of tangent lines to the parabola y = x². These are given by the generating family F(t,(x,y)) = t² − 2tx + y. The zero level set F(t₀,(x,y)) = 0 gives the equation of the tangent line to the parabola at the point (t₀, t₀²). The equation t² − 2tx + y = 0 can always be solved for y as a function of x and so, consider

y = 2tx − t².
Applications:
Substituting t = (dy/dx)/2 gives the ODE

(1/4)(dy/dx)² − x(dy/dx) + y = 0.
Not surprisingly y = 2tx − t² are all solutions to this ODE. However, the envelope of this one-parameter family of lines, which is the parabola y = x², is also a solution to this ODE. Another famous example is Clairaut's equation.
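Both claims are easy to verify numerically: substituting either a line of the family or the parabola into the ODE (1/4)(y′)² − x y′ + y = 0 gives a zero residual (an added illustrative check):

```python
# Residual of the ODE (1/4)(y')^2 - x y' + y = 0 for both the family of
# tangent lines y = 2 t x - t^2 (with y' = 2t) and the envelope y = x^2
# (with y' = 2x), the singular solution.
def ode_residual(x, y, yprime):
    return 0.25*yprime**2 - x*yprime + y

for t in [-1.0, 0.5, 2.0]:
    for x in [-2.0, 0.0, 1.5]:
        y, yp = 2*t*x - t*t, 2*t          # a line of the family
        assert abs(ode_residual(x, y, yp)) < 1e-12

for x in [-2.0, 0.0, 1.5]:
    y, yp = x*x, 2*x                      # the envelope (singular solution)
    assert abs(ode_residual(x, y, yp)) < 1e-12
print("ok")
```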
Applications:
Partial differential equations Envelopes can be used to construct more complicated solutions of first order partial differential equations (PDEs) from simpler ones. Let F(x,u,Du) = 0 be a first order PDE, where x is a variable with values in an open set Ω ⊂ Rⁿ, u is an unknown real-valued function, Du is the gradient of u, and F is a continuously differentiable function that is regular in Du. Suppose that u(x;a) is an m-parameter family of solutions: that is, for each fixed a ∈ A ⊂ Rᵐ, u(x;a) is a solution of the differential equation. A new solution of the differential equation can be constructed by first solving (if possible) D_a u(x;a) = 0 for a = φ(x) as a function of x. The envelope of the family of functions {u(·;a)}_{a∈A} is defined by v(x) = u(x;φ(x)), x ∈ Ω, and also solves the differential equation (provided that it exists as a continuously differentiable function).
Applications:
Geometrically, the graph of v(x) is everywhere tangent to the graph of some member of the family u(x;a). Since the differential equation is first order, it only puts a condition on the tangent plane to the graph, so that any function everywhere tangent to a solution must also be a solution. The same idea underlies the solution of a first order equation as an integral of the Monge cone. The Monge cone is a cone field in the Rn+1 of the (x,u) variables cut out by the envelope of the tangent spaces to the first order PDE at each point. A solution of the PDE is then an envelope of the cone field.
Applications:
In Riemannian geometry, if a smooth family of geodesics through a point P in a Riemannian manifold has an envelope, then P has a conjugate point where any geodesic of the family intersects the envelope. The same is true more generally in the calculus of variations: if a family of extremals to a functional through a given point P has an envelope, then a point where an extremal intersects the envelope is a conjugate point to P.
Applications:
Caustics In geometrical optics, a caustic is the envelope of a family of light rays. Consider, for example, light reflecting off an arc of a circle. The light rays are coming from a source at infinity, and so arrive parallel. When they hit the circular arc the light rays are scattered in different directions according to the law of reflection. When a light ray hits the arc at a point the light will be reflected as though it had been reflected by the arc's tangent line at that point. The reflected light rays give a one-parameter family of lines in the plane. The envelope of these lines is the reflective caustic. A reflective caustic will generically consist of smooth points and ordinary cusp points.
Applications:
From the point of view of the calculus of variations, Fermat's principle (in its modern form) implies that light rays are the extremals for the length functional

L[γ] = ∫_a^b |γ′(t)| dt

among smooth curves γ on [a,b] with fixed endpoints γ(a) and γ(b). The caustic determined by a given point P (in the example above the source is at infinity) is the set of conjugate points to P.
Applications:
Huygens's principle Light may pass through anisotropic inhomogeneous media at different rates depending on the direction and starting position of a light ray. The boundary of the set of points to which light can travel from a given point q after a time t is known as the wave front after time t, denoted here by Φq(t). It consists of precisely the points that can be reached from q in time t by travelling at the speed of light. Huygens's principle asserts that the wave front set Φq0(s + t) is the envelope of the family of wave fronts Φq(s) for q ∈ Φq0(t). More generally, the point q0 could be replaced by any curve, surface or closed set in space.
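For a homogeneous medium (constant speed c) the principle can be checked directly: wave fronts are circles, and the envelope of secondary fronts of radius cs centered on the time-t front is the time-(s + t) front. The sketch below is an added numerical illustration:

```python
import math

# Homogeneous medium, speed c: the wave front from the origin after time t
# is a circle of radius c*t. Secondary fronts of radius c*s centered on it
# all stay inside the circle of radius c*(s + t) and touch it, so their
# envelope is exactly the time-(s+t) front.
c, t, s = 1.0, 2.0, 0.5
R = c*(t + s)                                   # radius of time-(s+t) front
far = 0.0
for i in range(360):
    a = 2*math.pi*i/360
    qx, qy = c*t*math.cos(a), c*t*math.sin(a)   # q on the time-t front
    for j in range(360):
        b = 2*math.pi*j/360
        px, py = qx + c*s*math.cos(b), qy + c*s*math.sin(b)
        far = max(far, math.hypot(px, py))
        assert math.hypot(px, py) <= R + 1e-12  # secondary fronts stay inside
print(R - far)  # ~0: the farthest secondary points reach the big circle
```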
**Highly accelerated life test**
Highly accelerated life test:
A highly accelerated life test (HALT) is a stress testing methodology for enhancing product reliability in which prototypes are stressed to a much higher degree than expected from actual use in order to identify weaknesses in the design or manufacture of the product. Manufacturing and research and development organizations in the electronics, computer, medical, and military industries use HALT to improve product reliability.
Highly accelerated life test:
HALT can be used effectively multiple times over a product's lifetime. During product development, it can find design weaknesses early in the product lifecycle, when changes are much less costly to make. By finding weaknesses and making changes early, HALT can lower product development costs and compress time to market. When HALT is used at the time a product is being introduced into the market, it can expose problems caused by new manufacturing processes. When used after a product has been introduced into the market, HALT can audit product reliability following changes in components, manufacturing processes, suppliers, etc.
Overview:
Highly accelerated life testing (HALT) techniques are important in uncovering many of the weak links of a new product. These discovery tests rapidly find weaknesses using accelerated stress conditions. The goal of HALT is to proactively find weaknesses and fix them, thereby increasing product reliability. Because of its accelerated nature, HALT is typically faster and less expensive than traditional testing techniques.
Overview:
HALT is a test-to-fail technique, in which a product is tested until it fails. HALT does not determine or demonstrate a reliability value or failure probability in the field. By contrast, many accelerated life tests are test-to-pass, meaning they are used to demonstrate the product life or reliability.
It is highly recommended to perform HALT in the initial phases of product development to uncover weak links in a product, so that there is a better chance, and more time, to modify and improve the product.
HALT uses several stress factors (decided by a Reliability Test Engineer) and/or the combination of various factors. Commonly used stress factors are temperature, vibration, and humidity for electronics and mechanical products. Other factors can include voltage, current, power cycling and combinations of them.
Typical HALT procedures:
Environmental stresses are applied in a HALT procedure, eventually reaching a level significantly beyond that expected during use. The stresses used in HALT are typically hot and cold temperatures, temperature cycles, random vibration, power margining, and power cycling. The product under test is in operation during HALT and is continuously monitored for failures. As stress-induced failures occur, the cause should be determined, and if possible, the problem should be repaired so that the test can continue to find other weaknesses.
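A step-stress schedule of the kind described above can be sketched as follows. This is a hypothetical illustration only: the starting levels, step sizes, and limits are made-up example values, not figures from any HALT standard, and in practice they are chosen by the reliability engineer for the specific product:

```python
# Hypothetical step-stress schedule generator for a HALT run: stress is
# increased in discrete steps toward a chosen limit, with failures
# investigated (and, where possible, repaired) at each step.
def step_stress_schedule(start, step, limit):
    """Return stress levels from `start` toward `limit`, stepping until the
    next level would pass the limit (where destruct testing would stop)."""
    level, levels = start, []
    while (step > 0 and level <= limit) or (step < 0 and level >= limit):
        levels.append(level)
        level += step
    return levels

# Example: cold step stress from 20 C down to -60 C in 10 C steps,
# then hot step stress from 20 C up to 120 C in 10 C steps.
cold_steps = step_stress_schedule(20, -10, -60)
hot_steps = step_stress_schedule(20, 10, 120)
print(cold_steps)                     # [20, 10, 0, -10, -20, -30, -40, -50, -60]
print(len(hot_steps), hot_steps[-1])  # 11 120
```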
Typical HALT procedures:
Output of the HALT gives you:
Multiple failure modes in the product before it is subjected to demonstration testing
Operating limits of the product (upper and lower), which can be compared with a designer's margin or supplier specifications
Destruct limits of the product (the limits at which product functionality is lost and no recovery can be made)
Test chambers:
A specialized environmental chamber is required for HALT. A suitable chamber also has to be capable of applying pseudo-random vibration with a suitable profile in relation to frequency. The HALT chamber should be capable of applying random vibration energy from 2 to 10,000 Hz in 6 degrees of freedom and temperatures from -100 to +200°C. Sometimes HALT chambers are called repetitive shock chambers because pneumatic air hammers are used to produce vibration. The chamber should also be capable of rapid changes in temperature, 50°C per minute should be considered a minimum rate of change. Usually high power resistive heating elements are used for heating and liquid nitrogen (LN2) is used for cooling.
Fixtures:
Test fixtures must transmit vibration to the item under test. They must also be open in design or use air circulation to produce rapid temperature change to internal components. Test fixtures can use simple channels to attach the product to the chamber table or more complicated fixtures sometimes are fabricated.
Monitoring and failure analysis:
The equipment under test must be monitored so that if it fails under test, the failure is detected. Monitoring is typically performed with thermocouple sensors, vibration accelerometers, multimeters and data loggers. Common causes of failures during HALT are poor product design, poor workmanship, and poor manufacturing. Failures of individual components such as resistors, capacitors, diodes, and printed circuit boards occur because of these issues. Failure types found during HALT testing are associated with the infant mortality region of the bathtub curve.
Military application:
HALT is conducted before qualification testing. Because failures are caught early, flaws are found earlier in the acceptance process, eliminating repetitive later-stage reviews.
**Metallogels**
Metallogels:
Metallogels are one-dimensional nanostructured materials, which constitute a growing class in the field of supramolecular chemistry. Non-covalent interactions, such as hydrophobic interactions, π-π interactions, and hydrogen bonding, are among the forces responsible for the formation of these gels from small molecules. However, the main driving force for the formation of a metallogel is metal-ligand coordination. Once the structure has been established, it resists gravitational force when inverted.
Synthesis Method:
Since the properties of gels depend on the type of non-covalent interactions involved, the metal-ligand interaction provides not only thermodynamic stability but also kinetic lability. The general method for synthesizing gels is to heat a solution containing the metal ion under investigation, the ligand that will form the metallogel around it, and any other compounds needed to create appropriate reaction conditions, until all added solids (depending on the type of gel prepared) are dissolved in the solvent, and then to cool it down until the gels self-assemble and are properly formed. However, this method has not shown favorable results with the addition of several transition metals, along with the use of lanthanides, in an acetonitrile solution of the ligand. In these studies, the ligand used is a 2,6-bis(1′-alkylbenzimidazolyl)pyridine, owing to its commercial abundance and the wide variety of synthetic pathways that allow its functionalization, and therefore the chemical tuning of the metallogel. Therefore, under controlled heating and cooling conditions, the addition of a source of transition metals to the solution containing the ligand, along with lanthanide ions, yielded stable gels, which have passed the inversion test. Self-assembly occurs under the influence of non-covalent interactions, as depicted in Figure 1. These linear, self-assembling compounds can continue to self-assemble, forming columnar, helical structures that further aggregate into bundles of fibers. Another approach to form gels as functional nanomaterials is the bottom-up method used in subcomponent self-assembly. This method aims to save resources, shorten synthesis time, and offer a wider range of gels through the quick exchange of one of the reaction components.
Examples:
Although the synthesis method is generally the same, relying on the self-assembly of small molecules under appropriate conditions, metallogels differ mainly in the metal ion used, which directly influences their functions and their chemical, optical, and electronic properties. Among the numerous metal ions that are used, gold ions have been investigated for their wide variety of foreseen applications, as discussed in the applications section. They are further divided into two categories, based on the type of solvent used during the synthesis process. Gold organometallogelators are formed by Au(I) in trinuclear gold(I) pyrazolate complexes with long alkyl chains, which appear as a red-luminescent organogel. Gold hydrometallogelators are made of glutathione and Au(III), which appear as a transparent gel. Silver metal ions also show self-assembly properties, since they have a high affinity for binding nitrogen, which can act as the driving force to form stable supramolecular structures. Copper ions, however, have a promiscuous nature that allows them to bind to a variety of ligands, which readily form stable metallogels with tunable properties, widening the scope of their applications. Bipyridines were among the most important ligands, since the formation of those metallogels can lead to research on the coordination of copper ions to DNA base pairs. Oxalic acid dihydrate is another important ligand, which easily forms stable structures when copper salts are added; these can be used as proton conductors. Furthermore, a bile acid–picolinic acid conjugate can form gels in solvents that are 30%-50% organic. The increased water content renders this gel more bio-compatible, offering room for further investigation. Palladium ions were among the transition metals used to form catalytic and irreversible metallogels as well.
Applications:
Metallogels exhibit multi-responsive properties to a wide variety of environmental stimuli. In particular, metallogels made of transition metals and lanthanoids are thermo-responsive, mechano-responsive, chemo-responsive, and photo-responsive. A Co/La metallogel system shows an inverse gel-sol transition when heated to 100°C. When heated, the orange color of the solution remains unchanged, suggesting that only the La/ligand part of the system responds to the heat. Such behavior is classified as thermo-response. Metallogels are also mechano-responsive. A Zn/La system shows the formation of a gel-like material upon addition of CH3CN as solvent followed by a gentle shake. However, this material turns into a transparent liquid after sitting for 20 seconds. As an example of a chemo-response, adding a small amount of formic acid to Zn/Eu causes the breakdown of the gel-like material, as well as of its mechanical stability and light emission. Different systems of metals and lanthanoids show different emission bands in photoluminescence spectra. Co/Eu emits no band in the spectra due to the presence of a low-energy metal in the system. Zn/La shows a metal-bound ligand-based signal at 397 nm, while Zn/Eu shows lanthanide-metal signals at 581, 594, 616, and 652 nm and a ligand signal at 397 nm, indicating that the ligand is sensitive to metal binding.
Applications:
In addition to their multi-responsive properties, gold metallogels can prove to be useful in cosmetics, food processing, and lubrication. These gels are used in drug delivery, to trap active enzymes and bacteria inside them. Furthermore, certain technologies producing valves, clutches, and dampers rely on the multi-responsive nature of metallogels to electric and magnetic stimuli. A recent study on metal-organic gels involving cadmium and zinc ions shows promising results for absorbing dyes, which emulates the ability of natural systems to get rid of toxic materials that are difficult to decompose.
**3C-like protease**
3C-like protease:
The 3C-like protease (3CLpro) or main protease (Mpro), formally known as C30 endopeptidase or 3-chymotrypsin-like protease, is the main protease found in coronaviruses. It cleaves the coronavirus polyprotein at eleven conserved sites. It is a cysteine protease and a member of the PA clan of proteases. It has a cysteine-histidine catalytic dyad at its active site and cleaves a Gln–(Ser/Ala/Gly) peptide bond.
3C-like protease:
The Enzyme Commission refers to this family as SARS coronavirus main proteinase (Mpro; EC 3.4.22.69). The 3CL protease corresponds to coronavirus nonstructural protein 5 (nsp5). The "3C" in the common name refers to the 3C protease (3Cpro) which is a homologous protease found in picornaviruses.
Function:
The 3C-like protease is able to catalytically cleave a peptide bond between a glutamine at position P1 and a small amino acid (serine, alanine, or glycine) at position P1'. For instance, the SARS coronavirus 3CLpro can self-cleave the peptides TSAVLQ-SGFRK-NH2 and SGVTFQ-GKFKK, which correspond to its two self-cleavage sites. The protease is important in the processing of the coronavirus replicase polyprotein (P0C6U8). It is the main protease in coronaviruses and corresponds to nonstructural protein 5 (nsp5). It cleaves the coronavirus polyprotein at 11 conserved sites. The 3CL protease has a cysteine-histidine catalytic dyad at its active site. The sulfur of the cysteine acts as a nucleophile and the imidazole ring of the histidine as a general base.
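The specificity described above (Gln at P1 followed by Ser, Ala, or Gly at P1′) can be expressed as a simple sequence scan. The snippet below is an added illustration, applied to the two self-cleavage peptides quoted above (hyphens and the NH2 tag removed); real cleavage also depends on surrounding residues, so this scan only encodes the P1/P1′ rule:

```python
import re

# Locate candidate 3CL-protease cleavage positions: a Gln (Q) followed by
# Ser, Ala, or Gly. The scissile bond lies between seq[i] and seq[i+1].
def cleavage_sites(seq):
    return [m.start() for m in re.finditer(r"Q(?=[SAG])", seq)]

print(cleavage_sites("TSAVLQSGFRK"))  # [5] -> cut between Q and S
print(cleavage_sites("SGVTFQGKFKK"))  # [5] -> cut between Q and G
```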
Nomenclature:
Alternative names provided by the EC include 3CLpro, 3C-like protease, coronavirus 3C-like protease, Mpro, SARS 3C-like protease, SARS coronavirus 3CL protease, SARS coronavirus main peptidase, SARS coronavirus main protease, SARS-CoV 3CLpro enzyme, SARS-CoV main protease, SARS-CoV Mpro and severe acute respiratory syndrome coronavirus main protease.
As a treatment target:
The protease 3CLpro is used as a drug target for coronavirus infections due to its essential role in processing the polyproteins that are translated from the viral RNA. The X-ray structures of the unliganded SARS-CoV-2 protease 3CLpro and its complex with an α-ketoamide inhibitor provide a basis for the design of α-ketoamide inhibitors to treat SARS-CoV-2 infection. A number of protease inhibitors are being developed targeting 3CLpro and the homologous 3Cpro, including CLpro-1, GC376, rupintrivir, lufotrelvir, PF-07321332, and AG7404. The intravenously administered prodrug PF-07304814 (lufotrelvir) entered clinical trials in September 2020. After clinical trials, in December 2021, the oral medication nirmatrelvir (formerly PF-07321332) became commercially available under emergency regulatory authorizations, as part of the nirmatrelvir/ritonavir combination therapy (brand name Paxlovid).
As a treatment target:
In 2022, an ultralarge virtual screening campaign of 235 million molecules was able to identify a novel broad-spectrum inhibitor targeting the main protease of several coronaviruses.
Other 3C(-like) proteases:
3C-like proteases (3C(L)pro) are widely found in (+)ssRNA viruses. All of them are cysteine proteases with a chymotrypsin-like fold (PA clan), using a catalytic dyad or triad. They share some general similarities on substrate specificity and inhibitor effectiveness. They are divided into subfamilies by sequence similarity, corresponding to the family of viruses they are found in: This entry is the coronavirus 3CLpro.
Other 3C(-like) proteases:
Picornaviridae have a picornavirus 3Cpro (EC 3.4.22.28; InterPro: IPR000199; MEROPS C03). This is the earliest-studied family. Examples include the ones found in poliovirus and in rhinovirus (both are members of genus Enterovirus).
Caliciviridae have a 3CLpro (InterPro: IPR001665; MEROPS C37). Examples include the one found in Norwalk virus. Additional members are known from Potyviridae and non-Coronaviridae Nidovirales.
**Flip chip**
Flip chip:
Flip chip, also known as controlled collapse chip connection or its abbreviation, C4, is a method for interconnecting dies such as semiconductor devices, IC chips, integrated passive devices and microelectromechanical systems (MEMS), to external circuitry with solder bumps that have been deposited onto the chip pads. The technique was developed by General Electric's Light Military Electronics Department, Utica, New York. The solder bumps are deposited on the chip pads on the top side of the wafer during the final wafer processing step. In order to mount the chip to external circuitry (e.g., a circuit board or another chip or wafer), it is flipped over so that its top side faces down, and aligned so that its pads align with matching pads on the external circuit, and then the solder is reflowed to complete the interconnect. This is in contrast to wire bonding, in which the chip is mounted upright and fine wires are welded onto the chip pads and lead frame contacts to interconnect the chip pads to external circuitry.
Process steps:
Integrated circuits are created on the wafer.
Pads are metallized on the surface of the chips.
A solder ball is deposited on each of the pads, in a process called wafer bumping.
Chips are cut.
Chips are flipped and positioned so that the solder balls are facing the connectors on the external circuitry.
Solder balls are then remelted (typically using hot air reflow).
Mounted chip is "underfilled" using a (capillary, shown here) electrically-insulating adhesive.
Comparison of mounting technologies:
Wire bonding/thermosonic bonding In typical semiconductor fabrication systems, chips are built up in large numbers on a single large wafer of semiconductor material, typically silicon. The individual chips are patterned with small pads of metal near their edges that serve as the connections to an eventual mechanical carrier. The chips are then cut out of the wafer and attached to their carriers, typically via wire bonding such as thermosonic bonding. These wires eventually lead to pins on the outside of the carriers, which are attached to the rest of the circuitry making up the electronic system.
Comparison of mounting technologies:
Flip chip Processing a flip chip is similar to conventional IC fabrication, with a few additional steps. Near the end of the manufacturing process, the attachment pads are metalized to make them more receptive to solder. This typically consists of several treatments. A small dot of solder is then deposited on each metalized pad. The chips are then cut out of the wafer as normal.
Comparison of mounting technologies:
To attach the flip chip into a circuit, the chip is inverted to bring the solder dots down onto connectors on the underlying electronics or circuit board. The solder is then re-melted to produce an electrical connection, typically using a thermosonic bonding or alternatively reflow solder process.This also leaves a small space between the chip's circuitry and the underlying mounting. In many cases an electrically-insulating adhesive is then "underfilled" to provide a stronger mechanical connection, provide a heat bridge, and to ensure the solder joints are not stressed due to differential heating of the chip and the rest of the system.
Comparison of mounting technologies:
The underfill distributes the thermal expansion mismatch between the chip and the board, preventing stress concentration in the solder joints which would lead to premature failure. In 2008, high-speed mounting methods evolved through a cooperation between Reel Service Ltd. and Siemens AG in the development of a high-speed mounting tape known as 'MicroTape'. By adding a tape-and-reel process into the assembly methodology, placement at high speed is possible, achieving a 99.90% pick rate and a placement rate of 21,000 cph (components per hour), using standard PCB assembly equipment.
Comparison of mounting technologies:
Tape-automated bonding Tape-automated bonding (TAB) was developed for connecting dies, with thermocompression or thermosonic bonding, to a flexible substrate comprising one to three conductive layers. As with solder-based flip chip mounting, TAB makes it possible to connect all die pins at the same time. Originally TAB could produce finer-pitch interconnections than flip chip, but with the development of flip chip this advantage has diminished, and TAB has remained a specialized interconnection technique for display drivers and similar applications that require a TAB-compliant roll-to-roll (R2R, reel-to-reel) assembly system.
Comparison of mounting technologies:
Advantages The resulting completed flip chip assembly is much smaller than a traditional carrier-based system; the chip sits directly on the circuit board, and is much smaller than the carrier both in area and height. The short wires greatly reduce inductance, allowing higher-speed signals, and also conduct heat better.
Disadvantages Flip chips have several disadvantages.
The lack of a carrier means they are not suitable for easy replacement, or unaided manual installation. They also require very flat mounting surfaces, which is not always easy to arrange, or sometimes difficult to maintain as the boards heat and cool. This limits the maximum device size.
Also, the short connections are very stiff, so the thermal expansion of the chip must be matched to the supporting board or the connections can crack. The underfill material acts as an intermediate between the difference in CTE of the chip and board.
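The mismatch the underfill must mitigate can be estimated with a first-order formula: the relative displacement at a joint is roughly ΔCTE × ΔT × DNP (distance from the neutral point). The numbers below are hypothetical example values (typical orders of magnitude for silicon and FR-4), not figures from the article:

```python
# First-order estimate of the shear displacement a solder joint must
# absorb from CTE mismatch between die and board.
def cte_mismatch_displacement_um(cte_chip_ppm, cte_board_ppm, delta_t_c, dnp_mm):
    # (ppm/C difference) * 1e-6 gives strain per C; * C swing gives strain;
    # * DNP (mm) gives mm; * 1000 converts to micrometres
    return abs(cte_board_ppm - cte_chip_ppm) * 1e-6 * delta_t_c * dnp_mm * 1000.0

# Silicon ~3 ppm/C vs FR-4 ~17 ppm/C, 100 C swing, corner joint 5 mm
# from the neutral point (all illustrative values):
d = cte_mismatch_displacement_um(3.0, 17.0, 100.0, 5.0)
print(round(d, 2))  # 7.0 micrometres of relative displacement
```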
History:
The process was originally introduced commercially by IBM in the 1960s for individual transistors and diodes packaged for use in their mainframe systems.
Alternatives:
Since the flip chip's introduction a number of alternatives to the solder bumps have been introduced, including gold balls or molded studs, electrically conductive polymer and the "plated bump" process that removes an insulating plating by chemical means. Flip chips have recently gained popularity among manufacturers of cell phones and other small electronics where the size savings are valuable.
**SE scale**
SE scale:
SE scale is a designation used by some modellers to describe miniature (model) trains which run on either Gauge 1 (45 mm or 1.772 in) track or O gauge (32 mm or 1.26 in) track. In SE scale, 7/8 of an inch equals one foot, which is a ratio of 1:13.7. On 45 mm (1.772 in) gauge track this represents real life narrow gauge railways that are 2 ft (610 mm) gauge, while on 32 mm (1.26 in) gauge track this represents 18 in (457 mm) railways.
SE scale:
Modelling in a scale where 7/8 inch equals 1 foot is relatively new (within the last 20 years) and, as a result, the majority of modellers build from scratch.
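The figures quoted above follow from simple arithmetic, which can be checked directly:

```python
# 7/8 in per foot gives a ratio of 12 / (7/8) = 13.714..., quoted as
# 1:13.7; each model gauge implies prototype gauge = model gauge * ratio.
ratio = 12 / (7/8)
proto_g1_mm = 45 * ratio           # Gauge 1 (45 mm) track
proto_o_mm = 32 * ratio            # O gauge (32 mm) track
print(round(ratio, 1))             # 13.7
print(round(proto_g1_mm))          # 617 -> close to 2 ft (610 mm)
print(round(proto_o_mm))           # 439 -> near the quoted 18 in (457 mm)
```

The O-gauge figure comes out near 439 mm, so the 18 in (457 mm) correspondence quoted in the article is approximate.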
**Kerstin Perez**
Kerstin Perez:
Kerstin Perez is an Associate Professor of Particle Physics at the Massachusetts Institute of Technology. She is interested in physics beyond the standard model. She leads the silicon detector program for the General AntiParticle Spectrometer (GAPS) and the high-energy X-ray analysis community for the NuSTAR telescope array.
Early life and education:
Perez was born and raised in West Philadelphia. She studied physics and mathematics at Columbia University, earning an undergraduate degree magna cum laude in 2005. She moved to the California Institute of Technology for her graduate studies, earning a master's in 2008 and a PhD in 2011. She developed the ATLAS experiment pixel detector, and led the first ATLAS measurements of the inclusive cross-section for the production of hadronic jets. Perez returned to Columbia University as a National Science Foundation postdoctoral fellow, working in the NuSTAR Galactic Center. During her fellowship she developed outreach activities for the Columbia University Double Discovery Centre.
Research and career:
Perez joined Haverford College as an assistant professor in 2015, before moving to the Massachusetts Institute of Technology in 2016. Her research interests lie in physics beyond the standard model. She leads the detection program for the General AntiParticle Spectrometer (GAPS), the first experiment optimised to study low-energy antinuclei. Perez is interested in antideuterons, bound states of an antiproton and an antineutron which may provide evidence of the annihilation of weakly interacting massive particles, a candidate for dark matter. The GAPS detector design is particularly novel, including over 1,000 large-area, low-cost lithium-drifted silicon detectors developed by Perez, which can monitor exotic-atom capture and decay. The GAPS experiment uses long-duration balloons that reach the upper atmosphere. Alongside her work on GAPS, Perez leads the high-energy X-ray analysis group for the NuSTAR telescope array. NuSTAR has revealed how stellar remnant populations vary with distance from the Galactic Center. This also helps Perez search for sterile neutrinos, a dark matter candidate that could help to explain neutrino oscillation.
Outreach and advocacy:
Perez is an advocate for improved diversity in science, and supports students from underrepresented groups to study and research physics. She is concerned that women and people of colour often carry an unnecessary burden in the scientific workplace. She is involved with public engagement through the Massachusetts Institute of Technology, supporting their massive open online course in electricity and magnetism.
Publications:
Perez, Kerstin, "Striving Toward a Space for Equity and Inclusion in Physics Classrooms," Teaching and Learning Together in Higher Education: Iss. 18 (2016).
B. Roach, et al., "NuSTAR Tests of Sterile Neutrino Dark Matter: New Galactic Bulge Observations and Combined Impact," Phys. Rev. D 101, 103011 (2020).
F. Rogers, et al., "Large-area Si(Li) detectors for X-ray spectrometry and particle tracking in the GAPS experiment," JINST, 14, 10 (2019).
Awards and honors:
2017 Sloan Research Fellowship
2017 Heising-Simons Foundation Fellowship
2018 MIT School of Science Teaching Prize for Undergraduate Education
2018 MIT Buechner Special Teaching Award
2019 Research Corporation for Scientific Advancement Cottrell Scholar Award
**Settegast**
Settegast:
A Settegast is a standard medical x-ray projection that presents a tangential view of the patella.
To acquire such an image the patient is placed in a prone position with the knee flexed at least 90 degrees and the field of view centered on the patellofemoral joint space.
**Prelap**
Prelap:
Prelap is a screenwriting term meaning that dialogue from the next scene precedes the cut: the beginning of the dialogue is heard over the end of the outgoing scene. As an example:

ADRIAN (V.O., PRELAP)
Peter? Peter, where are you?

EXT. THE WOODS – DAY

Adrian is out looking for Peter. We see him wander around in the small forest.
Prelap:
ADRIAN
Peter? Hello? Are you there?

In this example, Adrian's voice precedes the scene out in the woods. The "V.O." means "Voice Over" and the "PRELAP" indicates that Adrian's dialogue should be heard before the next scene begins. Adrian, in this example, might not even be in the scene the other characters are in when the prelap occurs. ("O.S." or "Off Screen" would not be appropriate, as that term should only be used for characters unseen but on set.) Prelaps can be of sound or dialogue, or anything non-visual, since a visual would indicate a direct cut to a new scene.
**Ardha chandrasana**
Ardha chandrasana:
Ardha Chandrasana (Sanskrit: अर्धचन्द्रासन; IAST: ardha candrāsana) or Half Moon Pose is a standing asana in modern yoga as exercise.
Etymology and origins:
The name comes from the Sanskrit words अर्ध ardha meaning "half", चन्द्र candra meaning "moon", and आसन āsana meaning "posture" or "seat". The 19th century Sritattvanidhi uses the name Ardha Chandrasana for a different pose, Vrikshasana. Swami Yogesvarananda used the name in his 1970 First Steps to Higher Yoga for a pose similar to Kapotasana, Pigeon. The modern usage of the name is found in B. K. S. Iyengar's 1966 Light on Yoga.
Practice and benefits:
The pose is entered from Trikonasana (triangle pose), with one foot forward. The arm opposite the forward foot comes onto the hip. Stretching up with the rear leg and reaching out with the front hand so that only the fingertips remain on the ground, the hand on the hip can gradually reach up towards the ceiling. The gaze is directed at the upper hand. However, Iyengar describes the pose with the upper hand resting on the hip. The pose helps to strengthen the ankles and improve balance.
Variations:
Parivrtta Ardha Chandrasana (Revolved Half Moon Pose) has the body revolved towards the standing leg.
Baddha Parivrtta Ardha Chandrasana (Bound Revolved Half Moon Pose) has the body revolved towards the standing leg with arms bound around the standing leg.
Other 'half moon' poses:
In Sivananda Yoga and its derivative styles such as the Bihar School of Yoga, half moon pose is Anjaneyasana, an asana used in the moon salutation series (Chandra Namaskar). In Bikram Yoga, the name "half moon pose" is given to a two-legged standing side bend, elsewhere called Indudalasana.
**Uniformology**
Uniformology:
Uniformology is a branch of the auxiliary sciences of history which studies uniforms - especially military uniforms - through ages and civilizations.
**Fossil wood**
Fossil wood:
Fossil wood, also known as fossilized tree, is wood that is preserved in the fossil record. Wood is usually the part of a plant that is best preserved (and most easily found). Fossil wood may or may not be petrified, in which case it is known as petrified wood or petrified tree. The study of fossil wood is sometimes called palaeoxylology, with a "palaeoxylologist" being somebody who studies fossil wood.
Fossil wood:
The fossil wood may be the only part of the plant that has been preserved, with the rest of the plant completely unknown: therefore such wood may get a special kind of botanical name. This will usually include "xylon" and a term indicating its presumed affinity, such as Araucarioxylon (wood similar to that of extant Araucaria or some related genus like Agathis or Wollemia), Palmoxylon (wood similar to that of modern Arecaceae), or Castanoxylon (wood similar to that of modern chinkapin or chestnut tree). The fact that a fossil is so named does not mean that it originated from a plant undoubtedly related to the modern genus alluded to.
Types:
Petrified wood Petrified wood is fossil wood that has turned to stone through the process of permineralization: all organic material is replaced with minerals while the original structure of the wood is maintained.
The most notable example is the petrified forest in Arizona.
Types:
Mummified wood Mummified wood is fossil wood that has not been permineralized. It forms when trees are buried rapidly in dry cold or hot environments. Mummified wood is valued in paleobotany because it retains original cells and tissues that can be examined with the same techniques used with extant plants in dendrology. Notable examples include the mummified forests on Ellesmere Island and Axel Heiberg Island.
Types:
Submerged forests Submerged forests are remains of trees submerged by marine transgression. They are important in determining sea level rise since the last glacial period.
**Vereinfachte Ausgangsschrift**
Vereinfachte Ausgangsschrift:
The Vereinfachte Ausgangsschrift (VA, meaning "simplified initial script") is a simplified form of handwriting primarily based on the Lateinische Ausgangsschrift. It was developed in 1969 and has been tested since 1972. The letters were simplified and their shapes brought closer to block letters. In 10 of the 16 German federal states, it is available for schools to choose from, among other cursives.
Vereinfachte Ausgangsschrift:
The difficulties in learning the Latin script developed from the "Deutsche Normalschrift" prompted the development of a standardised cursive. The Vereinfachte Ausgangsschrift was intended to correct inconsistencies in the Latin source script and to provide a script that was easier to learn.
The Vereinfachte Ausgangsschrift is primarily based on the Lateinische Ausgangsschrift and also draws on the Druckschrift (DS, print script). During development, attention was paid to a consistent and logical flow of writing, analogy to printed letters, easy motor execution and the avoidance of unnecessary decorative elements.
Vereinfachte Ausgangsschrift compared to the Lateinische Ausgangsschrift:
In the Vereinfachte Ausgangsschrift (VAS), almost all lowercase letters begin and end on the upper center line.
Vereinfachte Ausgangsschrift compared to the Lateinische Ausgangsschrift:
This is very important for the flow of writing, since the termination point of each letter is always the starting point for the next one. In the Lateinische Ausgangsschrift (LAS), connecting the letters is considered easier in practice, but more difficult to learn because there are four different options. With VAS, almost all lowercase letters begin and end at the same place, so the letters are standardized in the simplified initial script, whereas in LAS there are several connecting forms for the same letter. VAS disrupts the flow of writing through "jerky" transitions, but enables textbook publishers to use a standardized typeface instead of costly handwritten sample texts.
Vereinfachte Ausgangsschrift compared to the Lateinische Ausgangsschrift:
A notable feature of VAS is the “tail”, e.g. on “b”, which, when letters are connected, only trails with the last letter of the word. However, this “tail” is an integral part of each letter, as it is intended to enable a fluid, uniform connection of the letters. The form of the lowercase "z" with sub-loop emphasizes that this script continues a centuries-old tradition in the lineage of Kurrentschrift and Fraktur.
Vereinfachte Ausgangsschrift compared to the Lateinische Ausgangsschrift:
The capital letters of Vereinfachte Ausgangsschrift differ from Lateinische Ausgangsschrift. The print script (Druckschrift) was used as a basis for the capitals in VAS because the students learn it first. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chitin: I**
Chitin: I:
Chitin: I is a science fiction microgame published by Metagaming Concepts in 1977 in which bands of intelligent insects vie for resources.
Description:
Chitin: I is a two-player board wargame in which both players control cities of Hymenoptera, intelligent insects, who are battling for enough food to survive.
Description:
Components
The game's plastic bag contains:
- 9" x 14" paper hex grid map
- 112 die-cut counters
- 17-page rulebook

Gameplay
Each insect army is divided into three castes: warriors, leaders, and workers, each of which has various capabilities. Players must use a combination of castes effectively in order to maximize their combat capabilities and food collection. Food chits on the board are what both sides covet, but the dead bodies of both sides can also be collected for food. Victory is dependent on how much food is collected. There are six scenarios included in the game, starting with a simple introductory scenario designed to teach the rules. Each subsequent scenario then grows in complexity and length. The game can either be played with Basic rules, or with the Advanced rules, which add two new types of insects. Turns are a simple "I go, You go" system, where one player moves and fights, then the other player moves and fights. Combat is typical for wargames of the time: the ratio of attacker to defender strength is determined, the corresponding column of a Combat Results Table (CRT) consulted, and a die rolled to determine the exact result: elimination of a unit, forced retreat, or disruption, which interferes with a unit's combat and movement abilities for the next turn.
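The combat-resolution procedure described above can be sketched generically. Note that the column breakpoints and result codes below are invented for illustration; they are not the actual Chitin: I Combat Results Table:

```python
# A generic odds-ratio Combat Results Table in the style described above.
# The breakpoints and results are INVENTED for illustration only.
CRT = {
    "1:2": ["AE", "AE", "AR", "DR", "DD", "DR"],
    "1:1": ["AE", "AR", "DR", "DR", "DD", "DE"],
    "2:1": ["AR", "DR", "DR", "DD", "DE", "DE"],
    "3:1": ["DR", "DD", "DD", "DE", "DE", "DE"],
}  # AE/DE = attacker/defender eliminated, AR/DR = retreat, DD = disrupted

def resolve(attack: int, defend: int, die: int) -> str:
    """Pick a CRT column from the attacker:defender strength ratio,
    then index into it with the die roll (1-6)."""
    if attack >= 3 * defend:
        col = "3:1"
    elif attack >= 2 * defend:
        col = "2:1"
    elif attack >= defend:
        col = "1:1"
    else:
        col = "1:2"
    return CRT[col][die - 1]

print(resolve(attack=6, defend=2, die=4))  # 3:1 odds, roll 4 -> "DE"
```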
Publication history:
In 1977, Metagaming Concepts pioneered a new type of small, fast and cheap wargame packaged in a ziplock bag. Ogre was the first of this MicroGame series, and Chitin: I was the second, designed by Howard Thompson, with artwork by Jennell Jaquays. The game sold well enough that a second edition, with new artwork by Jaquays, was published the following year. The game was still popular enough four years after its publication that Dragon, owned by rival game company TSR, published a two-page article on variant castes for Chitin: I.
Reception:
David James Ritchie reviewed Chitin: I in The Space Gamer No. 13, commenting that "For those who like their carnage in cardboard, Chitin: I is definitely an attractive brew." Ritchie also reviewed the game in Ares Magazine #1, rating it a 5 out of 9 with the comments "Fairly short and simple. Lots of fun. Those with a taste for the bizarre will appreciate their units' ability to eat friendly casualties." In Issue 27 of Simulacrum, Brian Train noted, "This is a good example of a 'minimalist' yet open ended design with a lot of replay value, making it typical of the early Metagaming microgames." In Issue 35 of Warning Order, Matt Irsik commented, "The gameplay was pretty good and this game is still well thought of after all these years." In a retrospective review of Chitin: I in Black Gate, John O'Neill said "Like Ogre before it, Chitin fired the imagination. Losing a war had pretty dire consequences for your insect colony. My most vivid memories of the game were in the hours after the board was put away, wondering what would happen to the losing side as summer ended and the hive retreated to face starvation and a long winter... would they survive to next summer? Or would a series of setbacks wipe out an entire culture? For a simple little game, Chitin packed an emotional punch."
**Significant figures**
Significant figures:
Significant figures (also known as the significant digits, precision or resolution) of a number in positional notation are digits in the number that are reliable and necessary to indicate the quantity of something.
If a number expressing the result of a measurement (e.g., length, pressure, volume, or mass) has more digits than the number of digits allowed by the measurement resolution, then only as many digits as allowed by the measurement resolution are reliable, and so only these can be significant figures.
Significant figures:
For example, if a length measurement gives 114.8 mm while the smallest interval between marks on the ruler used in the measurement is 1 mm, then the first three digits (1, 1, and 4, showing 114 mm) are certain and so they are significant figures. Digits which are uncertain but reliable are also considered significant figures. In this example, the last digit (8, which adds 0.8 mm) is also considered a significant figure even though there is uncertainty in it. Another example is a volume measurement of 2.98 L with an uncertainty of ± 0.05 L. The actual volume is somewhere between 2.93 L and 3.03 L. Even when some of the digits are not certain, as long as they are reliable, they are considered significant because they indicate the actual volume within the acceptable degree of uncertainty. In this example the actual volume might be 2.94 L or might instead be 3.02 L, and so all three are significant figures. The following digits are not significant figures.
Significant figures:
All leading zeros. For example, 013 kg has two significant figures, 1 and 3, and the leading zero is not significant since it is not necessary to indicate the mass; 013 kg = 13 kg so 0 is not necessary. In the case of 0.056 m there are two insignificant leading zeros since 0.056 m = 56 mm and so the leading zeros are not necessary to indicate the length.
Significant figures:
Trailing zeros when they are merely placeholders. For example, the trailing zeros in 1500 m as a length measurement are not significant if they are just placeholders for ones and tens places (supposing the measurement resolution is 100 m). In this case, 1500 m means the length to measure is close to 1500 m rather than saying that the length is exactly 1500 m.
Significant figures:
Spurious digits, introduced by calculations resulting in a number with a greater precision than the precision of the used data in the calculations, or in a measurement reported to a greater precision than the measurement resolution. Of the significant figures in a number, the most significant is the digit with the highest exponent value (simply the left-most significant figure), and the least significant is the digit with the lowest exponent value (simply the right-most significant figure). For example, in the number "123", the "1" is the most significant figure as it counts hundreds (10²), and "3" is the least significant figure as it counts ones (10⁰).
Significant figures:
Significance arithmetic is a set of approximate rules for roughly maintaining significance throughout a computation. The more sophisticated scientific rules are known as propagation of uncertainty.
Significant figures:
Numbers are often rounded to avoid reporting insignificant figures. For example, it would create false precision to express a measurement as 12.34525 kg if the scale was only measured to the nearest gram. In this case, the significant figures are the first 5 digits from the left-most digit (1, 2, 3, 4, and 5), and the number needs to be rounded to the significant figures so that it will be 12.345 kg as the reliable value. Numbers can also be rounded merely for simplicity rather than to indicate a precision of measurement, for example, in order to make the numbers faster to pronounce in news broadcasts.
Significant figures:
Radix 10 (base-10, decimal numbers) is assumed in the following.
Identifying significant figures:
Rules to identify significant figures in a number Note that identifying the significant figures in a number requires knowing which digits are reliable (e.g., by knowing the measurement or reporting resolution with which the number is obtained or processed) since only reliable digits can be significant; e.g., 3 and 4 in 0.00234 g are not significant if the measurable smallest weight is 0.001 g.
Identifying significant figures:
Non-zero digits within the given measurement or reporting resolution are significant.
91 has two significant figures (9 and 1) if they are measurement-allowed digits.
123.45 has five significant digits (1, 2, 3, 4 and 5) if they are within the measurement resolution. If the resolution is 0.1, then the last digit 5 is not significant.
Zeros between two significant non-zero digits are significant (significant trapped zeros).
101.12003 consists of eight significant figures if the resolution is to 0.00001.
125.340006 has seven significant figures if the resolution is to 0.0001: 1, 2, 5, 3, 4, 0, and 0.
Zeros to the left of the first non-zero digit (leading zeros) are not significant.
If a length measurement gives 0.052 km, then 0.052 km = 52 m, so only 5 and 2 are significant; the leading zeros appear or disappear depending on which unit is used, so they are not necessary to indicate the measurement scale.
0.00034 has 2 significant figures (3 and 4) if the resolution is 0.00001.
Zeros to the right of the last non-zero digit (trailing zeros) in a number with the decimal point are significant if they are within the measurement or reporting resolution.
1.200 has four significant figures (1, 2, 0, and 0) if they are allowed by the measurement resolution.
0.0980 has three significant digits (9, 8, and the last zero) if they are within the measurement resolution.
120.000 consists of six significant figures (1, 2, and the four subsequent zeroes) if, as before, they are within the measurement resolution.
Trailing zeros in an integer may or may not be significant, depending on the measurement or reporting resolution.
Identifying significant figures:
45,600 has 3, 4 or 5 significant figures depending on how the last zeros are used. For example, if the length of a road is reported as 45600 m without information about the reporting or measurement resolution, then it is not clear if the road length is precisely measured as 45600 m or if it is a rough estimate. If it is a rough estimate, then only the first three non-zero digits are significant, since the trailing zeros are neither reliable nor necessary; 45600 m can be expressed as 45.6 km or as 4.56 × 10⁴ m in scientific notation, and neither expression requires the trailing zeros.
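The identification rules above can be expressed as a short routine. The sketch below works on the written numeral, since significance depends on how a number is written rather than on its value; it treats trailing zeros in an integer without a decimal point as not significant, reflecting the ambiguity just discussed:

```python
def count_sig_figs(s: str) -> int:
    """Count significant figures in a decimal numeral, following the rules
    above. Trailing zeros in an integer without a decimal point are treated
    as NOT significant, since their status is ambiguous."""
    s = s.lstrip("+-")
    digits = s.replace(".", "")
    digits = digits.lstrip("0")        # leading zeros never count
    if "." not in s:
        digits = digits.rstrip("0")    # ambiguous trailing zeros in integers
    return len(digits)

print(count_sig_figs("0.00052"))    # -> 2
print(count_sig_figs("1.200"))      # -> 4 (trailing zeros after the point count)
print(count_sig_figs("45600"))      # -> 3 (trailing zeros treated as placeholders)
print(count_sig_figs("101.12003"))  # -> 8 (trapped zeros count)
```

The "1300." convention is also handled: the decimal point keeps the trailing zeros significant.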
Identifying significant figures:
An exact number has an infinite number of significant figures.
If the number of apples in a bag is 4 (exact number), then this number is 4.0000... (with infinite trailing zeros to the right of the decimal point). As a result, 4 does not impact the number of significant figures or digits in the result of calculations with it.
A mathematical or physical constant has significant figures to its known digits.
Identifying significant figures:
π is a specific real number with several equivalent definitions. All of the digits in its exact decimal expansion 3.14159265358979323... are significant. Although many properties of these digits are known — for example, they do not repeat, because π is irrational — not all of the digits are known. As of 19 August 2021, more than 62 trillion digits have been calculated. A 62 trillion-digit approximation has 62 trillion significant digits. In practical applications, far fewer digits are used. The everyday approximation 3.14 has three significant decimal digits and 7 correct binary digits. The approximation 22/7 has the same three correct decimal digits but has 10 correct binary digits. Most calculators and computer programs can handle the 16-digit expansion 3.141592653589793, which is sufficient for interplanetary navigation calculations.
Identifying significant figures:
The Planck constant is 6.62607015 × 10⁻³⁴ J⋅s and is defined as an exact value, so all of its written digits are significant.

Ways to denote significant figures in an integer with trailing zeros
The significance of trailing zeros in a number not containing a decimal point can be ambiguous. For example, it may not always be clear if the number 1300 is precise to the nearest unit (and just happens coincidentally to be an exact multiple of a hundred) or if it is only shown to the nearest hundred due to rounding or uncertainty. Many conventions exist to address this issue, but they are not universally used and are only effective if the reader is familiar with them:
- An overline, sometimes also called an overbar, or less accurately, a vinculum, may be placed over the last significant figure; any trailing zeros following this are insignificant. For example, 1300 with an overline over the first zero has three significant figures (and hence indicates that the number is precise to the nearest ten).
- Less often, using a closely related convention, the last significant figure of a number may be underlined; for example, 1300 with the 3 underlined has two significant figures.
- A decimal point may be placed after the number; for example "1300." indicates specifically that the trailing zeros are meant to be significant.
As the conventions above are not in general use, the following more widely recognized options are available for indicating the significance of a number with trailing zeros:
- Eliminate ambiguous or non-significant zeros by changing the unit prefix in a number with a unit of measurement. For example, the precision of a measurement specified as 1300 g is ambiguous, while if stated as 1.30 kg it is not. Likewise 0.0123 L can be rewritten as 12.3 mL.
- Eliminate ambiguous or non-significant zeros by using scientific notation: for example, 1300 with three significant figures becomes 1.30 × 10³. Likewise 0.0123 can be rewritten as 1.23 × 10⁻². The part of the representation that contains the significant figures (1.30 or 1.23) is known as the significand or mantissa. The digits in the base and exponent (10³ or 10⁻²) are considered exact numbers, so for these digits significant figures are irrelevant.
- Explicitly state the number of significant figures (the abbreviation s.f. is sometimes used): for example "20 000 to 2 s.f." or "20 000 (2 sf)".
- State the expected variability (precision) explicitly with a plus–minus sign, as in 20 000 ± 1%. This also allows specifying a range of precision in-between powers of ten.
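Scientific notation of this kind is easy to produce programmatically. For instance, Python's exponent format with sig_figs − 1 digits after the point renders exactly the desired number of significant figures, keeping trailing zeros unambiguous:

```python
def to_scientific(x: float, sig_figs: int) -> str:
    """Render x in scientific notation with exactly sig_figs significant
    digits, so trailing-zero significance is unambiguous."""
    return f"{x:.{sig_figs - 1}e}"

print(to_scientific(1300, 3))    # -> '1.30e+03', i.e. 1.30 x 10^3
print(to_scientific(0.0123, 3))  # -> '1.23e-02', i.e. 1.23 x 10^-2
```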
Rounding to significant figures:
Rounding to significant figures is a more general-purpose technique than rounding to n digits, since it handles numbers of different scales in a uniform way. For example, the population of a city might only be known to the nearest thousand and be stated as 52,000, while the population of a country might only be known to the nearest million and be stated as 52,000,000. The former might be in error by hundreds, and the latter might be in error by hundreds of thousands, but both have two significant figures (5 and 2). This reflects the fact that the significance of the error is the same in both cases, relative to the size of the quantity being measured.
Rounding to significant figures:
To round a number to n significant figures: if the (n + 1)th digit is greater than 5, or is 5 followed by other non-zero digits, add 1 to the nth digit. For example, if we want to round 1.2459 to 3 significant figures, then this step results in 1.25.
Rounding to significant figures:
If the (n + 1)th digit is 5 not followed by other digits, or followed only by zeros, then rounding requires a tie-breaking rule. For example, to round 1.25 to 2 significant figures: round half away from zero (also known as "5/4") rounds up to 1.3. This is the default rounding method implied in many disciplines if the required rounding method is not specified.
Rounding to significant figures:
Round half to even, which rounds to the nearest even number. With this method, 1.25 is rounded down to 1.2. If this method applies to 1.35, then it is rounded up to 1.4. This is the method preferred by many scientific disciplines, because, for example, it avoids skewing the average value of a long list of values upwards.
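Both tie-breaking rules are available in Python: the built-in round() uses round half to even, while the decimal module lets you select either rule explicitly (and, by working on decimal strings, avoids binary-float surprises in ties):

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

# Python's built-in round() uses round half to even:
print(round(1.25, 1))  # -> 1.2 (1.25 is exactly representable, so this tie is real)

# The decimal module makes the choice of tie-breaking rule explicit:
y = Decimal("1.25")
print(y.quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN))  # -> 1.2
print(y.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP))    # -> 1.3

x = Decimal("1.35")
print(x.quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN))  # -> 1.4 (4 is even)
print(x.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP))    # -> 1.4
```

Note that many decimal fractions (such as 1.35) have no exact binary representation, so applying round() to such floats can break ties unexpectedly; the decimal module is the reliable way to compare the two rules.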
Rounding to significant figures:
For an integer in rounding, replace the digits after the nth digit with zeros. For example, if 1254 is rounded to 2 significant figures, then 5 and 4 are replaced with 0 so that it becomes 1300. For a number with a decimal point, remove the digits after the nth digit. For example, if 14.895 is rounded to 3 significant figures, then the digits after 8 are removed so that it becomes 14.9. In financial calculations, a number is often rounded to a given number of places, for example, to two places after the decimal separator for many world currencies. This is done because greater precision is immaterial, and usually it is not possible to settle a debt of less than the smallest currency unit.
Rounding to significant figures:
In UK personal tax returns, income is rounded down to the nearest pound, whilst tax paid is calculated to the nearest penny.
As an illustration, the decimal quantity 12.345 can be expressed with various numbers of significant figures or decimal places. If insufficient precision is available then the number is rounded in some manner to fit the available precision. The following table shows the results for various total precision at two rounding ways (N/A stands for Not Applicable).
Rounding to significant figures:
Another example for 0.012345. (Remember that the leading zeros are not significant.)

The representation of a non-zero number x to a precision of p significant digits has a numerical value given by the formula

round(x / 10ⁿ) × 10ⁿ, where n = ⌊log₁₀(|x|)⌋ + 1 − p,

which may need to be written with a specific marking as detailed above to specify the number of significant trailing zeros.
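The formula translates directly into code. A minimal sketch, using Python's round() (which breaks ties to even, per the rule above) and subject to ordinary floating-point representation error:

```python
import math

def round_sig(x: float, p: int) -> float:
    """Round x to p significant digits via the formula above:
    round(x / 10**n) * 10**n with n = floor(log10(|x|)) + 1 - p."""
    if x == 0:
        return 0.0
    n = math.floor(math.log10(abs(x))) + 1 - p
    return round(x / 10.0 ** n) * 10.0 ** n

print(round_sig(1254, 2))      # -> 1300.0
print(round_sig(12.345, 3))    # approximately 12.3 (floating point)
print(round_sig(0.012345, 2))  # approximately 0.012
```

Because the scaling by 10ⁿ happens in binary floating point, results such as 12.3 are only correct to within rounding error of the last bit; exact decimal behaviour requires the decimal module instead.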
Writing uncertainty and implied uncertainty:
Significant figures in writing uncertainty It is recommended for a measurement result to include the measurement uncertainty, written as x_best ± σ_x, where x_best and σ_x are the best estimate of and uncertainty in the measurement respectively. x_best can be the average of measured values and σ_x can be the standard deviation or a multiple of the measurement deviation. The rules to write x_best ± σ_x are: σ_x has only one or two significant figures, as a more precise uncertainty has no meaning.
Writing uncertainty and implied uncertainty:
1.79 ± 0.06 (correct), 1.79 ± 0.96 (correct), 1.79 ± 1.96 (incorrect).
Writing uncertainty and implied uncertainty:
The digit positions of the last significant figures in x_best and σ_x are the same, otherwise consistency is lost. For example, in 1.79 ± 0.067 (incorrect), it does not make sense to have a more accurate uncertainty than the best estimate. 1.79 ± 0.9 (incorrect) also does not make sense, since the rounding guideline for addition and subtraction below tells that the edges of the true value range are 2.7 and 0.9, which are less accurate than the best estimate.
Writing uncertainty and implied uncertainty:
1.79 ± 0.06 (correct), 1.79 ± 0.96 (correct), 1.79 ± 0.067 (incorrect), 1.79 ± 0.9 (incorrect).
Writing uncertainty and implied uncertainty:
Implied uncertainty In chemistry (and perhaps other scientific branches), uncertainty may be implied by the last significant figure if it is not explicitly expressed. The implied uncertainty is ± half of the minimum scale at the last significant figure position. For example, if the volume of water in a bottle is reported as 3.78 L without mentioning uncertainty, then a measurement uncertainty of ± 0.005 L may be implied. If a weight of 2.97 ± 0.07 kg is measured, so that the actual weight is somewhere between 2.90 and 3.04 kg, and it is desired to report it with a single number, then 3.0 kg is the best number to report, since its implied uncertainty of ± 0.05 kg indicates a weight range of 2.95 to 3.05 kg, which is close to the measurement range. If the measurement is 2.97 ± 0.09 kg, then 3.0 kg is still the best, since if 3 kg were reported its implied uncertainty of ± 0.5 kg would indicate a range of 2.5 to 3.5 kg, which is too wide in comparison with the measurement range.
Writing uncertainty and implied uncertainty:
If there is a need to write the implied uncertainty of a number, then it can be written as x ± σ_x, stating that σ_x is the implied uncertainty (to prevent readers from mistaking it for the measurement uncertainty), where x and σ_x are the number with an extra zero digit (to follow the rules for writing uncertainty above) and its implied uncertainty respectively. For example, 6 kg with the implied uncertainty ± 0.5 kg can be stated as 6.0 ± 0.5 kg.
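The implied uncertainty, half the place value of the last written digit, can be computed from the numeral itself. A sketch using Python's decimal module; note it takes the value as a string, since the implied precision lives in how the number was written, and it does not handle the ambiguous trailing zeros of bare integers like "1300":

```python
from decimal import Decimal

def implied_uncertainty(reported: str) -> Decimal:
    """Half of the place value of the last written digit, as described above.
    Pass the value as its reported string, e.g. "3.78" or "6.0"."""
    exponent = Decimal(reported).as_tuple().exponent  # place of last digit
    return Decimal(10) ** exponent / 2

print(implied_uncertainty("3.78"))  # -> 0.005 (reported to hundredths)
print(implied_uncertainty("6.0"))   # -> 0.05  (reported to tenths)
print(implied_uncertainty("6"))     # -> 0.5   (reported to units)
```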
Arithmetic:
As there are rules to determine the significant figures in directly measured quantities, there are also guidelines (not rules) to determine the significant figures in quantities calculated from these measured quantities.
Arithmetic:
Significant figures in measured quantities are most important in the determination of significant figures in calculated quantities with them. A mathematical or physical constant (e.g., π in the formula for the area of a circle with radius r as πr²) has no effect on the determination of the significant figures in the result of a calculation with it if its known digits are equal to or more than the significant figures in the measured quantities used in the calculation. An exact number such as ½ in the formula for the kinetic energy of a mass m with velocity v as ½mv² has no bearing on the significant figures in the calculated kinetic energy, since its number of significant figures is infinite (0.500000...).
Arithmetic:
The guidelines described below are intended to avoid reporting a calculated result more precisely than the measured quantities, but they do not ensure that the resulting implied uncertainty is close to the measured uncertainties. This problem can be seen in unit conversion. If the guidelines yield an implied uncertainty too far from the measured one, it may be necessary to choose significant digits that give a comparable uncertainty.
Arithmetic:
Multiplication and division For quantities created from measured quantities via multiplication and division, the calculated result should have as many significant figures as the least number of significant figures among the measured quantities used in the calculation. For example,

1.234 × 2 = 2.468 ≈ 2
1.234 × 2.0 = 2.468 ≈ 2.5
0.01234 × 2 = 0.02468 ≈ 0.02

with one, two, and one significant figures respectively. (2 here is assumed not to be an exact number.) In the first example, the first factor has four significant figures and the second has one. The factor with the fewest significant figures is the second, with only one, so the final calculated result should also have one significant figure.
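A minimal sketch of this guideline in Python; the `sig_figs` helper is an illustrative stand-in for proper significance-arithmetic tooling:

```python
from math import floor, log10

def sig_figs(x: float, n: int) -> float:
    """Round x to n significant figures."""
    if x == 0:
        return 0.0
    return round(x, -int(floor(log10(abs(x)))) + (n - 1))

# keep the least number of significant figures among the factors
print(sig_figs(1.234 * 2, 1))    # 2.0
print(sig_figs(1.234 * 2.0, 2))  # 2.5
print(sig_figs(0.01234 * 2, 1))  # 0.02
```

The three calls reproduce the document's three worked examples.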
Arithmetic:
Exception For unit conversion, the implied uncertainty of the result can be unsatisfactorily larger than that in the original unit if this rounding guideline is followed. For example, 8 inches has an implied uncertainty of ± 0.5 inch = ± 1.27 cm. Converting to centimeters and following the multiplication/division rounding guideline gives 20.32 cm ≈ 20 cm, with an implied uncertainty of ± 5 cm. If this implied uncertainty is considered too overestimated, more appropriate significant digits for the conversion result may be 20.32 cm ≈ 20. cm, with an implied uncertainty of ± 0.5 cm.
Arithmetic:
Another exception to the above rounding guideline is multiplying a number by an integer, such as 1.234 × 9. Following the guideline would round the result as 1.234 × 9.000... = 11.106 ≈ 11.11. However, this multiplication is essentially adding 1.234 to itself 9 times (1.234 + 1.234 + … + 1.234), so the rounding guideline for addition and subtraction described below is the more appropriate approach. The final answer is then 1.234 + 1.234 + … + 1.234 = 11.106 (a gain of one significant digit).
Arithmetic:
Addition and subtraction For quantities created from measured quantities via addition and subtraction, the last significant figure position (e.g., hundreds, tens, ones, tenths, hundredths, and so forth) in the calculated result should be the same as the leftmost or largest digit position among the last significant figures of the measured quantities in the calculation. For example,

1.234 + 2 = 3.234 ≈ 3
1.234 + 2.0 = 3.234 ≈ 3.2
0.01234 + 2 = 2.01234 ≈ 2

with the last significant figures in the ones place, tenths place, and ones place respectively. (2 here is assumed not to be an exact number.) In the first example, the first term has its last significant figure in the thousandths place and the second term has its last significant figure in the ones place. The leftmost or largest digit position among these is the ones place, so the calculated result should also have its last significant figure in the ones place.
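The same rule can be sketched with Python's `decimal` module, which tracks digit positions directly; the helper names are illustrative:

```python
from decimal import Decimal

def last_place(q: str) -> int:
    # exponent of the last significant digit: '1.234' -> -3, '2' -> 0
    return Decimal(q).as_tuple().exponent

def add_with_sig_figs(*terms: str) -> Decimal:
    """Add terms, then round to the coarsest (leftmost) last-digit place."""
    place = max(last_place(t) for t in terms)
    total = sum(Decimal(t) for t in terms)
    return total.quantize(Decimal(1).scaleb(place))

print(add_with_sig_figs("1.234", "2"))    # 3
print(add_with_sig_figs("1.234", "2.0"))  # 3.2
```

Passing the quantities as strings preserves their written precision, which a bare float would lose.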
Arithmetic:
The rules for significant figures in multiplication and division are not the same as the rules for addition and subtraction. For multiplication and division, only the total number of significant figures in each of the factors matters; the digit position of the last significant figure in each factor is irrelevant. For addition and subtraction, only the digit position of the last significant figure in each of the terms matters; the total number of significant figures in each term is irrelevant. However, greater accuracy will often be obtained if some non-significant digits are maintained in intermediate results which are used in subsequent calculations.
Arithmetic:
Logarithm and antilogarithm The base-10 logarithm of a normalized number (i.e., a × 10^b with 1 ≤ a < 10 and b an integer) is rounded such that its decimal part (called the mantissa) has as many significant figures as the significant figures in the normalized number.
log10(3.000 × 10^4) = log10(10^4) + log10(3.000) = 4.000000... (exact number, so infinitely many significant digits) + 0.4771212547... = 4.4771212547 ≈ 4.4771. When taking the antilogarithm of a normalized number, the result is rounded to have as many significant figures as the decimal part of the number being antilogged.
10^4.4771 = 29998.5318119... ≈ 30000 = 3.000 × 10^4.
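Both rules can be checked numerically with a small sketch; the helper names are illustrative, and `round` is used for decimal-place and significant-figure rounding:

```python
from math import floor, log10

def log_sig_figs(a: float, b: int, n: int) -> float:
    """log10 of a x 10^b, where a (1 <= a < 10) has n significant figures:
    the characteristic b is exact, so only the mantissa is rounded."""
    return round(b + log10(a), n)

def antilog_sig_figs(x: float, n: int) -> float:
    """10^x rounded to n significant figures (n taken from x's decimal part)."""
    y = 10 ** x
    return round(y, -int(floor(log10(abs(y)))) + (n - 1))

print(log_sig_figs(3.000, 4, 4))    # 4.4771
print(antilog_sig_figs(4.4771, 4))  # 30000.0
```

The two calls reproduce the worked log and antilog examples above.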
Arithmetic:
Transcendental functions If a transcendental function f(x) (e.g., the exponential function, the logarithm, and the trigonometric functions) is differentiable at a domain element x, then the number of significant figures in f(x) is approximately related to the number of significant figures in x by

significant figures of f(x) ≈ significant figures of x − log10(|f′(x) · x / f(x)|),

where |f′(x) · x / f(x)| is the condition number. See the significance arithmetic article for the derivation.
Arithmetic:
Round only on the final calculation result When performing multiple stage calculations, do not round intermediate stage calculation results; keep as many digits as is practical (at least one more digit than the rounding rule allows per stage) until the end of all the calculations to avoid cumulative rounding errors while tracking or recording the significant figures in each intermediate result. Then, round the final result, for example, to the fewest number of significant figures (for multiplication or division) or leftmost last significant digit position (for addition or subtraction) among the inputs in the final calculation.
Arithmetic:
(2.3494 + 1.345) × 1.2 = 3.6944 × 1.2 = 4.43328 ≈ 4.4.
(2.3494 × 1.345) + 1.2 = 3.159943 + 1.2 = 4.359943 ≈ 4.4.
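The effect of premature rounding can be demonstrated with a toy case; the `sig_figs` helper is an illustrative stand-in for proper significance arithmetic:

```python
from math import floor, log10

def sig_figs(x: float, n: int) -> float:
    """Round x to n significant figures."""
    return round(x, -int(floor(log10(abs(x)))) + (n - 1))

# Keep full precision until the end:
late = sig_figs(1.249 * 2.0, 2)                 # 2.498 -> 2.5
# Rounding the intermediate value first loses a digit that mattered:
early = sig_figs(sig_figs(1.249, 2) * 2.0, 2)   # 1.2 * 2.0 -> 2.4
print(late, early)  # 2.5 2.4
```

The final digits disagree solely because of where the rounding was applied, which is exactly what the guideline above is meant to prevent.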
Estimating an extra digit:
When using a ruler, initially use the smallest mark as the first estimated digit. For example, if a ruler's smallest mark is 0.1 cm and 4.5 cm is read, the value is 4.5 cm ± 0.1 cm, i.e., between 4.4 cm and 4.6 cm to within the smallest mark interval. In practice, however, a measurement can usually be estimated by eye more closely than the interval between the ruler's smallest marks; in the above case it might be estimated as between 4.51 cm and 4.53 cm. It is also possible that the overall length of a ruler is not accurate to the degree of the smallest mark, and that the marks are imperfectly spaced within each unit. Assuming a normal good-quality ruler, however, it should be possible to estimate tenths between the nearest two marks to achieve an extra decimal place of accuracy. Failing to do so adds the error of reading the ruler to any error in the calibration of the ruler.
Estimation in statistic:
When estimating the proportion of individuals carrying some particular characteristic in a population, from a random sample of that population, the number of significant figures should not exceed the maximum precision allowed by that sample size.
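One heuristic for this, not a standard formula and with an illustrative cutoff, is to report only the digits that rise above one standard error of the estimated proportion:

```python
from math import sqrt, floor, log10

def proportion_sig_figs(successes: int, n: int):
    """Estimated proportion, its standard error, and a heuristic count of
    the significant figures that the sample size can support.
    Assumes 0 < successes < n."""
    p = successes / n
    se = sqrt(p * (1 - p) / n)  # standard error of a sample proportion
    digits = max(1, int(floor(log10(p / se))) + 1)
    return p, se, digits

p, se, digits = proportion_sig_figs(470, 1000)
print(digits)  # a sample of 1000 supports about 2 significant figures here
```

Under this heuristic, quoting 0.47 rather than 0.470 reflects what a sample of 1000 can actually resolve.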
Relationship to accuracy and precision in measurement:
Traditionally, in various technical fields, "accuracy" refers to the closeness of a given measurement to its true value; "precision" refers to the stability of that measurement when repeated many times. Thus, it is possible to be "precisely wrong". Hoping to reflect the way in which the term "accuracy" is actually used in the scientific community, there is a recent standard, ISO 5725, which keeps the same definition of precision but defines the term "trueness" as the closeness of a given measurement to its true value and uses the term "accuracy" as the combination of trueness and precision. (See the accuracy and precision article for a full discussion.) In either case, the number of significant figures roughly corresponds to precision, not to accuracy or the newer concept of trueness.
In computing:
Computer representations of floating-point numbers use a form of rounding to significant figures (while usually not keeping track of how many), in general with binary numbers. The number of correct significant figures is closely related to the notion of relative error (which has the advantage of being a more accurate measure of precision, and is independent of the radix, also known as the base, of the number system used).
**Surgical Endoscopy**
Surgical Endoscopy:
Surgical Endoscopy is a peer-reviewed medical journal published by Springer Science+Business Media. It is the official journal of the Society of American Gastrointestinal and Endoscopic Surgeons and the European Association for Endoscopic Surgery. Surgical Endoscopy covers the surgical aspects of interventional endoscopy, ultrasound, and other techniques in gastroenterology, obstetrics, gynecology, and urology. Also, the fields of gastroenterologic, thoracic, traumatic, orthopedic, and pediatric surgery are represented. The journal has a 2016 impact factor of 3.747. The editors-in-chief are George Hanna (St Mary's Hospital) and Mark Talamini (Stony Brook University). Editors emeriti include Alfred Cuschieri, Kimberly Forde and Bruce MacFadyen Jr.
**Raptor code**
Raptor code:
In computer science, Raptor codes (rapid tornado; see Tornado codes) are the first known class of fountain codes with linear time encoding and decoding. They were invented by Amin Shokrollahi in 2000/2001 and were first published in 2004 as an extended abstract. Raptor codes are a significant theoretical and practical improvement over LT codes, which were the first practical class of fountain codes.
Raptor code:
Raptor codes, as with fountain codes in general, encode a given source block of data consisting of a number k of equal-size source symbols into a potentially limitless sequence of encoding symbols, such that reception of any k or more encoding symbols allows the source block to be recovered with some non-zero probability. The probability that the source block can be recovered increases with the number of encoding symbols received beyond k, becoming very close to 1 once the number of received encoding symbols is only slightly larger than k. For example, with the latest generation of Raptor codes, the RaptorQ codes, the chance of decoding failure when k encoding symbols have been received is less than 1%, and the chance of decoding failure when k + 2 encoding symbols have been received is less than one in a million. (See the Recovery probability and overhead section below for more discussion.) A symbol can be any size, from a single byte to hundreds or thousands of bytes.
Raptor code:
Raptor codes may be systematic or non-systematic. In the systematic case, the symbols of the original source block, i.e. the source symbols, are included within the set of encoding symbols. Examples of systematic Raptor code use include the 3rd Generation Partnership Project's mobile cellular wireless broadcasting and multicasting, and the DVB-H standards for IP datacast to handheld devices (see external links). The Raptor code used in these standards is also defined in IETF RFC 5053.
Raptor code:
Online codes are an example of a non-systematic fountain code.
RaptorQ code:
The most advanced version of Raptor is the RaptorQ code defined in IETF RFC 6330. The RaptorQ code is a systematic code, can be implemented in a way that achieves linear time encoding and decoding performance, has near-optimal recovery properties (see the Recovery probability and overhead section below for more details), supports up to 56,403 source symbols, and can support an essentially unlimited number of encoding symbols. The RaptorQ code defined in IETF RFC 6330 is specified as a part of the Next Gen TV (ATSC 3.0) standard to enable high-quality broadcast video streaming (robust mobile TV) and efficient and reliable broadcast file delivery (datacasting). In particular, the RaptorQ code is specified in A/331: Signaling, Delivery, Synchronization, and Error Protection within ATSC 3.0 (see List of ATSC standards for a list of the ATSC 3.0 standard parts). Next Gen TV (ATSC 3.0) goes well beyond traditional TV to provide a broadcast internet enabling general data delivery services.
Overview:
Raptor codes are formed by the concatenation of two codes.
Overview:
A fixed rate erasure code, usually with a fairly high rate, is applied as a 'pre-code' or 'outer code'. This pre-code may itself be a concatenation of multiple codes, for example in the code standardized by 3GPP a high density parity check code derived from the binary Gray sequence is concatenated with a simple regular low density parity check code. Another possibility would be a concatenation of a Hamming code with a low density parity check code.
Overview:
The inner code takes the result of the pre-coding operation and generates a sequence of encoding symbols. The inner code is a form of LT codes. Each encoding symbol is the XOR of a pseudo-randomly chosen set of symbols from the pre-code output. The number of symbols which are XOR'ed together to form an output symbol is chosen pseudo-randomly for each output symbol according to a specific probability distribution.
Overview:
This distribution, as well as the mechanism for generating pseudo-random numbers for sampling this distribution and for choosing the symbols to be XOR'ed, must be known to both sender and receiver. In one approach, each symbol is accompanied with an identifier which can be used as a seed to a pseudo-random number generator to generate this information, with the same process being followed by both sender and receiver.
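The inner-code idea can be sketched as follows; the degree distribution here is a toy stand-in for the standardized one, and the function name is illustrative:

```python
import random

def encoding_symbol(precoded: list, symbol_id: int) -> bytes:
    """LT-style inner code sketch: a PRNG seeded with the symbol's
    identifier picks a degree and a neighbor set, and the output symbol
    is the XOR of those pre-coded symbols."""
    rng = random.Random(symbol_id)          # same seed at sender and receiver
    degree = rng.choice([1, 2, 2, 3, 4])    # toy degree distribution
    neighbors = rng.sample(range(len(precoded)), min(degree, len(precoded)))
    out = bytes(len(precoded[0]))
    for i in neighbors:
        out = bytes(a ^ b for a, b in zip(out, precoded[i]))
    return out

src = [b"ab", b"cd", b"ef", b"gh"]
# the receiver, knowing symbol_id 7, regenerates the same neighbor set
assert encoding_symbol(src, 7) == encoding_symbol(src, 7)
```

Seeding the PRNG with the symbol identifier is what lets sender and receiver agree on each symbol's neighbor set without transmitting it.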
Overview:
In the case of non-systematic Raptor codes, the source data to be encoded is used as the input to the pre-coding stage.
Overview:
In the case of systematic Raptor codes, the input to the pre-coding stage is obtained by first applying the inverse of the encoding operation that generates the first k output symbols to the source data. Thus, applying the normal encoding operation to the resulting symbols causes the original source symbols to be regenerated as the first k output symbols of the code. It is necessary to ensure that the pseudo-random processes which generate the first k output symbols generate an operation which is invertible.
Decoding:
Two approaches are possible for decoding Raptor codes. In a concatenated approach, the inner code is decoded first, using a belief propagation algorithm, as used for the LT codes. Decoding succeeds if this operation recovers a sufficient number of symbols, such that the outer code can recover the remaining symbols using the decoding algorithm appropriate for that code.
In a combined approach, the relationships between symbols defined by both the inner and outer codes are considered as a single combined set of simultaneous equations which can be solved by the usual means, typically by Gaussian elimination.
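The combined approach amounts to solving a system of XOR equations. A toy Gaussian elimination over GF(2), with small integers standing in for symbol contents (the API is illustrative):

```python
def gf2_solve(equations, k):
    """Solve XOR equations for k unknowns. Each equation is (indices, value):
    the XOR of the unknowns at `indices` equals `value`. Returns the list of
    unknowns, or None if the system is under-determined."""
    pivots = {}  # leading column -> (bitmask of columns, value)
    for idx, val in equations:
        mask = sum(1 << i for i in idx)
        # eliminate known pivot columns, highest first so cleared bits stay cleared
        for col in sorted(pivots, reverse=True):
            if mask >> col & 1:
                pmask, pval = pivots[col]
                mask ^= pmask
                val ^= pval
        if mask:
            pivots[mask.bit_length() - 1] = (mask, val)
    if len(pivots) < k:
        return None  # not enough independent encoding symbols received
    x = [0] * k
    for col in sorted(pivots):  # back-substitute, lowest column first
        mask, val = pivots[col]
        for j in range(col):
            if mask >> j & 1:
                val ^= x[j]
        x[col] = val
    return x

# three received "symbols" covering {0,1}, {1,2}, {2} recover all three sources
print(gf2_solve([([0, 1], 12), ([1, 2], 5), ([2], 12)], 3))  # [5, 9, 12]
```

In a real decoder the values are byte strings XORed componentwise and the equations come from both the inner and outer codes, but the linear-algebra core is the same.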
Computational complexity:
Raptor codes require O(symbol size) time to generate an encoding symbol from a source block, and require O(source block size) time to recover a source block from at least k encoding symbols.
Recovery probability and overhead:
The overhead is how many additional encoding symbols beyond the number k of source symbols in the original source block need to be received to completely recover the source block.
(Based on elementary information theory considerations, complete recovery of a source block with k source symbols is not possible if less than k encoding symbols are received.) The recovery probability is the probability that the source block is completely recovered upon receiving a given number of random encoding symbols generated from the source block.
The RaptorQ code specified in IETF RFC 6330 has the following trade-off between recovery probability and recovery overhead:

Greater than 99% recovery probability with overhead of 0 symbols (recovery from k received encoding symbols).
Greater than 99.99% recovery probability with overhead of 1 symbol (recovery from k+1 received encoding symbols).
Greater than 99.9999% recovery probability with overhead of 2 symbols (recovery from k+2 received encoding symbols).

These statements hold for the entire range of k supported in IETF RFC 6330, i.e., k = 1, ..., 56403. See IETF RFC 6330 for more details.
Legal status:
Qualcomm, Inc. has published an IPR statement for the Raptor code specified in IETF RFC 5053, and an IPR statement for the more advanced RaptorQ code specified in IETF RFC 6330. These statements mirror the licensing commitment Qualcomm, Inc. has made with respect to the MPEG DASH standard. The MPEG DASH standard has been deployed by a wide variety of companies, including DASH Industry Forum member companies.
**Dichlorine hexoxide**
Dichlorine hexoxide:
Dichlorine hexoxide is the chemical compound with the molecular formula Cl2O6, which is correct for its gaseous state. However, in liquid or solid form, this chlorine oxide ionizes into the dark red ionic compound chloryl perchlorate [ClO2]+[ClO4]−, which may be thought of as the mixed anhydride of chloric and perchloric acids.
It is produced by reaction between chlorine dioxide and excess ozone: 2 ClO2 + 2 O3 → 2 ClO3 + 2 O2 → Cl2O6 + 2 O2
Molecular structure:
It was originally reported to exist as monomeric chlorine trioxide, ClO3, in the gas phase, but was later shown to remain an oxygen-bridged dimer after evaporation and until thermal decomposition into chlorine perchlorate, Cl2O4, and oxygen. The compound ClO3 was later rediscovered. It is a dark red fuming liquid at room temperature that crystallizes as a red ionic compound, chloryl perchlorate, [ClO2]+[ClO4]−. The red color shows the presence of chloryl ions. Thus, chlorine's formal oxidation state in this compound remains a mixture of chlorine(V) and chlorine(VII) both in the gas phase and when condensed; however, by breaking one oxygen–chlorine bond, some electron density shifts towards the chlorine(VII).
Properties:
Cl2O6 is diamagnetic and is a very strong oxidizing agent. Although stable at room temperature, it explodes violently on contact with organic compounds and reacts with gold to produce the chloryl salt [ClO2]+[Au(ClO4)4]−. Many other reactions involving Cl2O6 reflect its ionic structure, [ClO2]+[ClO4]−, including the following:

NO2F + Cl2O6 → NO2ClO4 + ClO2F
NO + Cl2O6 → NOClO4 + ClO2
2 V2O5 + 12 Cl2O6 → 4 VO(ClO4)3 + 12 ClO2 + 3 O2
SnCl4 + 6 Cl2O6 → [ClO2]2[Sn(ClO4)6] + 4 ClO2 + 2 Cl2
2 Au + 6 Cl2O6 → 2 [ClO2]+[Au(ClO4)4]− + Cl2

Nevertheless, it can also react as a source of the ClO3 radical: 2 AsF5 + Cl2O6 → 2 ClO3AsF5
**Integral sliding mode**
Integral sliding mode:
In 1996, V. Utkin and J. Shi proposed an improved sliding control method named integral sliding mode control (ISMC). In contrast with conventional sliding mode control, the system motion under integral sliding mode has a dimension equal to that of the state space. In ISMC, the system trajectory always starts from the sliding surface.
Accordingly, the reaching phase is eliminated, and robustness throughout the whole state space is ensured.
Control scheme:
For a system ẋ = f(x) + B(x)(u + h(x, t)), x ∈ R^n, u ∈ R^m, rank B = m, where h(x, t) is a bounded uncertainty.
Mathews and DeCarlo [1] suggested selecting an integral sliding surface as σ(t) = Gx(t) − Gx(0) − ∫_0^t [G B u₀(τ) + G f(x(τ))] dτ, which satisfies σ(0) = Gx(0) − Gx(0) − 0 = 0 by construction. In this case there exists a unit or discontinuous sliding mode controller compensating the uncertainty h(x, t). Utkin and Shi [2] remarked that, if σ(0) = 0 is guaranteed, the reaching phase is eliminated.
In the case when unmatched uncertainties occur, G should be selected as G = B+, where B+ = (B^T B)^(−1) B^T is a pseudoinverse matrix [3-5].
**Empirical algorithmics**
Empirical algorithmics:
In computer science, empirical algorithmics (or experimental algorithmics) is the practice of using empirical methods to study the behavior of algorithms. The practice combines algorithm development and experimentation: algorithms are not just designed, but also implemented and tested in a variety of situations. In this process, an initial design of an algorithm is analyzed so that the algorithm may be developed in a stepwise manner.
Overview:
Methods from empirical algorithmics complement theoretical methods for the analysis of algorithms. Through the principled application of empirical methods, particularly from statistics, it is often possible to obtain insights into the behavior of algorithms, such as high-performance heuristic algorithms for hard combinatorial problems, that are (currently) inaccessible to theoretical analysis. Empirical methods can also be used to achieve substantial improvements in algorithmic efficiency. American computer scientist Catherine McGeoch identifies two main branches of empirical algorithmics: the first (known as empirical analysis) deals with the analysis and characterization of the behavior of algorithms, and the second (known as algorithm design or algorithm engineering) is focused on empirical methods for improving the performance of algorithms. The former often relies on techniques and tools from statistics, while the latter is based on approaches from statistics, machine learning and optimization. Dynamic analysis tools, typically performance profilers, are commonly used when applying empirical methods for the selection and refinement of algorithms of various types for use in various contexts. Research in empirical algorithmics is published in several journals, including the ACM Journal on Experimental Algorithmics (JEA) and the Journal of Artificial Intelligence Research (JAIR). Besides Catherine McGeoch, well-known researchers in empirical algorithmics include Bernard Moret, Giuseppe F. Italiano, Holger H. Hoos, David S. Johnson, and Roberto Battiti.
Performance profiling in the design of complex algorithms:
In the absence of empirical algorithmics, analyzing the complexity of an algorithm can involve various theoretical methods applicable to various situations in which the algorithm may be used. Memory and cache considerations are often significant factors to be considered in the theoretical choice of a complex algorithm, or the approach to its optimization, for a given purpose. Performance profiling is a dynamic program analysis technique typically used for finding and analyzing bottlenecks in an entire application's code or for analyzing an entire application to identify poorly performing code. A profiler can reveal the code most relevant to an application's performance issues. A profiler may help to determine when to choose one algorithm over another in a particular situation. When an individual algorithm is profiled, as with complexity analysis, memory and cache considerations are often more significant than instruction counts or clock cycles; however, the profiler's findings can be considered in light of how the algorithm accesses data rather than the number of instructions it uses. Profiling may provide intuitive insight into an algorithm's behavior by revealing performance findings as a visual representation. Performance profiling has been applied, for example, during the development of algorithms for matching wildcards. Early algorithms for matching wildcards, such as Rich Salz's wildmat algorithm, typically relied on recursion, a technique criticized on grounds of performance. The Krauss matching wildcards algorithm was developed based on an attempt to formulate a non-recursive alternative using test cases followed by optimizations suggested via performance profiling, resulting in a new algorithmic strategy conceived in light of the profiling along with other considerations.
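As a minimal illustration of the workflow, Python's built-in `cProfile` can rank the functions that dominate a run; the deliberately naive function below is an invented example:

```python
import cProfile
import io
import pstats

def slow_fib(n: int) -> int:
    # deliberately naive recursion, the kind of bottleneck a profiler surfaces
    return n if n < 2 else slow_fib(n - 1) + slow_fib(n - 2)

profiler = cProfile.Profile()
profiler.enable()
slow_fib(18)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)  # slow_fib dominates the cumulative-time ranking
```

A report like this is typically the starting point for the empirical refinement loop described above: measure, rewrite the hot spot, and measure again.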
Profilers that collect data at the level of basic blocks or that rely on hardware assistance provide results that can be accurate enough to assist software developers in optimizing algorithms for a particular computer or situation. Performance profiling can aid developer understanding of the characteristics of complex algorithms applied in complex situations, such as coevolutionary algorithms applied to arbitrary test-based problems, and may help lead to design improvements.
**LYRa11**
LYRa11:
LYRa11 is a SARS-like coronavirus (SL-CoV) which was identified in 2011 in samples of intermediate horseshoe bats in Baoshan, Yunnan, China. The genome of this virus strain is 29,805 nt long, and its similarity to the whole-genome sequence of the SARS-CoV that caused the SARS outbreak is 91%. It was published in 2014. Like SARS-CoV and SARS-CoV-2, the LYRa11 virus uses ACE2 as a receptor for infecting cells.
Phylogenetic:
A phylogenetic tree based on whole-genome sequences of SARS-CoV-1 and related coronaviruses is:
**Woodall number**
Woodall number:
In number theory, a Woodall number (Wn) is any natural number of the form Wn = n · 2^n − 1 for some natural number n. The first few Woodall numbers are: 1, 7, 23, 63, 159, 383, 895, … (sequence A003261 in the OEIS).
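The definition and the first few Woodall primes are easy to check directly; trial division keeps this sketch simple, so only small indices are tested:

```python
def woodall(n: int) -> int:
    """W_n = n * 2**n - 1."""
    return n * 2 ** n - 1

def is_prime(m: int) -> bool:
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

print([woodall(n) for n in range(1, 8)])                  # [1, 7, 23, 63, 159, 383, 895]
print([n for n in range(1, 31) if is_prime(woodall(n))])  # [2, 3, 6, 30]
```

The second list reproduces the start of OEIS A002234 (the next prime index, 75, yields a number too large for trial division to be practical here).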
History:
Woodall numbers were first studied by Allan J. C. Cunningham and H. J. Woodall in 1917, inspired by James Cullen's earlier study of the similarly defined Cullen numbers.
Woodall primes:
Woodall numbers that are also prime numbers are called Woodall primes; the first few exponents n for which the corresponding Woodall numbers Wn are prime are 2, 3, 6, 30, 75, 81, 115, 123, 249, 362, 384, ... (sequence A002234 in the OEIS); the Woodall primes themselves begin with 7, 23, 383, 32212254719, ... (sequence A050918 in the OEIS).
Woodall primes:
In 1976 Christopher Hooley showed that almost all Cullen numbers are composite. In October 1995, Wilfred Keller published a paper discussing several new Cullen primes and the efforts made to factorise other Cullen and Woodall numbers. Included in that paper is a personal communication to Keller from Hiromi Suyama, asserting that Hooley's method can be reformulated to show that it works for any sequence of numbers n · 2^(n + a) + b, where a and b are integers, and in particular, that almost all Woodall numbers are composite. It is an open problem whether there are infinitely many Woodall primes. As of October 2018, the largest known Woodall prime is 17016602 × 2^17016602 − 1. It has 5,122,515 digits and was found by Diego Bertolotti in March 2018 in the distributed computing project PrimeGrid.
Restrictions:
Starting with W4 = 63 and W5 = 159, every sixth Woodall number is divisible by 3; thus, in order for Wn to be prime, the index n cannot be congruent to 4 or 5 (modulo 6). Also, for a positive integer m, the Woodall number W(2^m) may be prime only if 2^m + m is prime. As of January 2019, the only known primes that are both Woodall primes and Mersenne primes are W2 = M3 = 7, and W512 = M521.
Divisibility properties:
Like Cullen numbers, Woodall numbers have many divisibility properties. For example, if p is a prime number, then p divides W(p + 1)/2 if the Jacobi symbol (2 | p) is +1, and W(3p − 1)/2 if the Jacobi symbol (2 | p) is −1.
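This property can be verified for small primes; for an odd prime p the Jacobi symbol (2 | p) reduces to a Legendre symbol computed via Euler's criterion (function names are illustrative):

```python
def woodall(n: int) -> int:
    return n * 2 ** n - 1

def legendre_2(p: int) -> int:
    """(2 | p) for an odd prime p, via Euler's criterion: 2^((p-1)/2) mod p."""
    return 1 if pow(2, (p - 1) // 2, p) == 1 else -1

for p in [3, 5, 7, 11, 13, 17, 19]:
    n = (p + 1) // 2 if legendre_2(p) == 1 else (3 * p - 1) // 2
    assert woodall(n) % p == 0, p
print("divisibility property holds for the primes tested")
```

For instance, (2 | 7) = +1 and 7 divides W4 = 63, while (2 | 5) = −1 and 5 divides W7 = 895.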
Generalization:
A generalized Woodall number base b is defined to be a number of the form n × b^n − 1, where n + 2 > b; if a prime can be written in this form, it is then called a generalized Woodall prime.
Generalization:
The smallest value of n such that n × b^n − 1 is prime for b = 1, 2, 3, ... are 3, 2, 1, 1, 8, 1, 2, 1, 10, 2, 2, 1, 2, 1, 2, 167, 2, 1, 12, 1, 2, 2, 29028, 1, 2, 3, 10, 2, 26850, 1, 8, 1, 42, 2, 6, 2, 24, 1, 2, 3, 2, 1, 2, 1, 2, 2, 140, 1, 2, 2, 22, 2, 8, 1, 2064, 2, 468, 6, 2, 1, 362, 1, 2, 2, 6, 3, 26, 1, 2, 3, 20, 1, 2, 1, 28, 2, 38, 5, 3024, 1, 2, 81, 858, 1, 2, 3, 2, 8, 60, 1, 2, 2, 10, 5, 2, 7, 182, 1, 17782, 3, ... (sequence A240235 in the OEIS). As of November 2021, the largest known generalized Woodall prime with base greater than 2 is 2740879 × 32^2740879 − 1.
**Aircraft vectoring**
Aircraft vectoring:
Aircraft vectoring is a navigation service provided to aircraft by air traffic control. The controller decides on a particular airfield traffic pattern for the aircraft to fly, composed of specific legs or vectors. The aircraft then follows this pattern when the controller instructs the pilot to fly specific headings at appropriate times.
Aircraft vectoring:
Vectoring is used to separate aircraft by a specified distance, to aid the navigation of flights, and to guide arriving aircraft to a position from which they can continue their final approach to land under the guidance of an approach procedure published by the FAA. Vectoring is the provision of navigational guidance to aircraft in the form of specific headings, based on the use of an ATS surveillance system.
Aircraft vectoring:
Aircraft may be vectored to:

apply ATS surveillance system separation
achieve an expeditious flow of aircraft
maximise use of available airspace
comply with noise abatement procedures
avoid areas of known hazardous weather or known severe turbulence
adjust the arrival sequence
establish the aircraft on the final approach track of a pilot-interpreted approach
maneuver an aircraft into a suitable position below the clouds near an aerodrome for a visual approach and landing.

The nature of terminal area operations means that vectoring plays a significant part in the way controllers process traffic.
**2,5-Dimethoxy-4-butylamphetamine**
2,5-Dimethoxy-4-butylamphetamine:
2,5-Dimethoxy-4-butylamphetamine (DOBU) is a lesser-known psychedelic drug and a substituted amphetamine. DOBU was first synthesized by Alexander Shulgin. In his book PiHKAL (Phenethylamines I Have Known and Loved), only low dosages of 2–3 mg were tested, with the duration simply listed as "very long". DOBU produces paresthesia and difficulty sleeping, but with few other effects. Compared to shorter-chain homologues such as DOM, DOET and DOPR, which are all potent hallucinogens, DOBU has an even stronger 5-HT2 binding affinity but fails to substitute for hallucinogens in animals or produce hallucinogenic effects in humans, suggesting it has low efficacy and is thus an antagonist or weak partial agonist at the 5-HT2A receptor.
Isomers:
Alternative isomers of DOBU can also be produced, where the 4-(n-butyl) group of DOBU is replaced with any of the three other butyl isomers, the iso-butyl, sec-butyl and tert-butyl compounds being called DOIB, DOSB and DOTB respectively. All are significantly less potent than DOBU, with DOIB being active at around 10–15 mg, and DOSB at 25–30 mg, and both being primarily stimulant in action with little or no psychedelic effects. The most highly branched isomer DOTB was completely inactive in both animal and human trials.
**Genetic resources**
Genetic resources:
Genetic resources are genetic material of actual or potential value, where genetic material means any material of plant, animal, microbial or other origin containing functional units of heredity.
Genetic resources is one of the three levels of biodiversity defined by the Convention on Biological Diversity in Rio, 1992.
Examples:
Animal genetic resources for food and agriculture
Forest genetic resources
Germplasm, genetic resources that are preserved for various purposes such as breeding, preservation, and research
Plant genetic resources
**7-Zip**
7-Zip:
7-Zip is a free and open-source file archiver, a utility used to place groups of files within compressed containers known as "archives". It is developed by Igor Pavlov and was first released in 1999. 7-Zip has its own archive format called 7z, but can read and write several others. The program can be used from a Windows graphical user interface that also features shell integration, from a Windows command-line interface as the command 7za or 7za.exe, and from POSIX systems as p7zip. Most of the 7-Zip source code is under the LGPL-2.1-or-later license; the unRAR code, however, is under the LGPL-2.1-or-later license with an "unRAR restriction", which states that developers are not permitted to use the code to reverse-engineer the RAR compression algorithm. Since version 21.01 alpha, preliminary Linux support has been added upstream instead of through the p7zip project.
Archive formats:
7z By default, 7-Zip creates 7z-format archives with a .7z file extension. Each archive can contain multiple directories and files. As a container format, security or size reduction is achieved by looking for similarities throughout the data using a stacked combination of filters. These can consist of pre-processors, compression algorithms, and encryption filters.
Archive formats:
The core 7z compression uses a variety of algorithms, the most common of which are bzip2, PPMd, LZMA2, and LZMA. Developed by Pavlov, LZMA is a relatively new system, making its debut as part of the 7z format. LZMA uses an LZ-based sliding dictionary of up to 3840 MB in size, backed by a range coder. The native 7z file format is open and modular. File names are stored as Unicode. In 2011, TopTenReviews found that the 7z compression was at least 17% better than ZIP, and 7-Zip's own site has since 2002 reported that while compression ratio results are very dependent upon the data used for the tests, "Usually, 7-Zip compresses to 7z format 30–70% better than to zip format, and 7-Zip compresses to zip format 2–10% better than most other zip-compatible programs." The 7z file format specification is distributed with the program's source code, in the "doc" sub-directory.
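As a rough illustration of the LZMA-versus-DEFLATE comparison above, Python's standard-library lzma module implements the same LZMA algorithm (and zlib implements DEFLATE, as used in zip). This is only a sketch: the sample data is invented, and as 7-Zip's own site notes, ratios are highly data-dependent.

```python
import lzma
import zlib

# Repetitive sample data; real-world ratios depend heavily on the input.
data = b"The quick brown fox jumps over the lazy dog. " * 2000

lzma_size = len(lzma.compress(data, preset=9))   # LZMA, as used in the 7z format
deflate_size = len(zlib.compress(data, 9))       # DEFLATE, as used in zip

# Decompression is lossless: the round trip reproduces the input exactly.
assert lzma.decompress(lzma.compress(data)) == data

print(f"original: {len(data)} bytes, LZMA: {lzma_size}, DEFLATE: {deflate_size}")
```

On inputs like this, LZMA's much larger sliding dictionary typically lets it find more redundancy than DEFLATE's 32 KB window.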
Archive formats:
Others 7-Zip supports a number of other compression and non-compression archive formats (both for packing and unpacking), including ZIP, gzip, bzip2, xz, tar, and WIM. The utility also supports unpacking APM, ar, ARJ, chm, cpio, deb, FLV, JAR, LHA/LZH, LZMA, MSLZ, Office Open XML, onepkg, RAR, RPM, smzip, SWF, XAR, and Z archives and cramfs, DMG, FAT, HFS, ISO, MBR, NTFS, SquashFS, UDF, and VHD disk images. 7-Zip supports the ZIPX format for unpacking only. It has had this support since at least version 9.20, which was released in late 2010.
Archive formats:
7-Zip can open some MSI files, allowing access to the meta-files within along with the main contents. Some Microsoft CAB (LZX compression) and NSIS (LZMA) installer formats can be opened. Similarly, some Microsoft executable programs (.EXEs) that are self-extracting archives or otherwise contain archived content (e.g., some setup files) may be opened as archives.
When compressing ZIP or gzip files, 7-Zip uses its own DEFLATE encoder, which may achieve higher compression, but at lower speed, than the more common zlib DEFLATE implementation. The 7-Zip deflate encoder implementation is available separately as part of the AdvanceCOMP suite of tools.
Archive formats:
The decompression engine for RAR archives was developed using freely available source code of the unRAR program, which has a licensing restriction against creation of a RAR compressor. 7-Zip v15.06 and later support extraction of files in the RAR5 format. Some backup systems use formats supported by archiving programs such as 7-Zip; e.g., some Android backups are in tar format, and can be extracted by archivers such as 7-Zip. 7-Zip ZS, a port of 7-Zip FM with Zstandard .zst (and other formats) support, is developed by Tino Reichardt. Modern7z, a Zstandard .zst (and other formats) plugin for 7-Zip FM, is developed by Denis Anisimov (TC4shell).
File manager:
7-Zip comes with a file manager along with the standard archiver tools. The file manager has a toolbar with options to create an archive, extract an archive, test an archive to detect errors, copy, move, and delete files, and open a file properties menu exclusive to 7-Zip. The file manager, by default, displays hidden files because it does not follow Windows Explorer's policies. The tabs show name, modification time, original and compressed sizes, attributes, and comments (4DOS descript.ion format).
File manager:
When going up one directory from the root, all drives, removable or internal, appear. Going up again shows a list with four options:
- Computer: loads the drives list
- Documents: loads the user's documents, usually at %UserProfile%\My Documents
- Network: loads a list of all connected network clients
- \\.: same as "Computer", except the drives are loaded with low-level filesystem access, which makes critical drive files, and deleted files still present on the drive, visible. (NOTE: As of November 2020, access to the active partition in low-level mode is not allowed, for currently unknown reasons.)
Features:
7-Zip supports:
- 32-bit and 64-bit x86, and ARM64, architectures
- A file manager
- Encryption via the 256-bit AES cipher, which can be enabled for both files and the 7z hierarchy. When the hierarchy is encrypted, users are required to supply a password to see the filenames contained within the archive. The WinZip-developed ZIP file AES encryption standard is also available in 7-Zip to encrypt ZIP archives with AES-256, but it does not offer filename encryption as in 7z archives.
Features:
- Volumes of dynamically variable sizes, allowing use for backups on removable media such as writable CDs and DVDs
- Usability as a basic orthodox file manager when used in dual-panel mode
- Multiple-core CPU threading
- Opening EXE files as archives, allowing the decompression of data from inside many "Setup", "Installer", or "Extract" type programs without having to launch them
- Unpacking archives with corrupted filenames, renaming the files as required
- Creating self-extracting single-volume archives
- A command-line interface
- A graphical user interface. The Windows version comes with its own GUI; however, p7zip uses the GUI of the Unix/Linux Archive Manager.
Features:
- Calculating checksums in the formats CRC-32, CRC-64, SHA-1, or SHA-256 for files on disk, available either via the command line or Explorer's context menu
- Availability in 87 languages
- The ability to optionally record creation dates (tc) and last access dates (ta) in archives, in addition to modification dates
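The CRC-32, SHA-1, and SHA-256 checksums 7-Zip can compute are standard algorithms; as an illustration, the same digests can be produced with Python's standard library (CRC-64 is not in the stdlib and would require a third-party package):

```python
import hashlib
import zlib

data = b"7-Zip checksum demo"  # arbitrary sample bytes

crc32 = zlib.crc32(data) & 0xFFFFFFFF       # CRC-32, as an unsigned 32-bit value
sha1 = hashlib.sha1(data).hexdigest()       # SHA-1, 40 hex digits
sha256 = hashlib.sha256(data).hexdigest()   # SHA-256, 64 hex digits

print(f"CRC-32:  {crc32:08X}")
print(f"SHA-1:   {sha1}")
print(f"SHA-256: {sha256}")
```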
Versions:
Two command-line versions are provided: 7z (7z.exe), using external libraries; and 7za (7za.exe), which is a standalone executable containing built-in modules, but with compression/decompression support limited to 7z, ZIP, gzip, bzip2, Z and tar formats. A 64-bit version is available, with support for large memory maps, leading to faster compression. All versions support multi-threading.
Forks p7zip is a fork for Unix-like operating systems (including Linux, FreeBSD, and macOS), FreeDOS, OpenVMS, AmigaOS 4, and MorphOS. It offers the 7za version only.
7-Zip ZS is a fork with Zstandard and various other compression algorithms added to the file format. p7zip-zstd (p7zip with zstd) is p7zip with the ZS additions.
NanaZip is a fork integrating changes from many sources, modernized for the Microsoft Store.
Plugins 7-Zip comes with a plug-in system for expansion. The official "Links" page points to many plugins written by TC4Shell, providing extra file support.
Software development kit:
7-Zip has a LZMA SDK which was originally dual-licensed under both the GNU LGPL and Common Public License, with an additional special exception for linked binaries. On 2 December 2008, the SDK was placed by Igor Pavlov in the public domain.
Security:
On older versions, self-extracting archives were vulnerable to arbitrary code execution through DLL hijacking: they load and run a DLL named UXTheme.dll, if it is in the same folder as the executable file. The 7-Zip 16.03 release notes say that the installer and SFX modules have added protection against the DLL preloading attack. Versions of 7-Zip prior to 18.05 contain an arbitrary code execution vulnerability in the module for extracting files from RAR archives (CVE-2018-10115), a vulnerability that was fixed on 30 April 2018.
Reception and usage:
Snapfiles.com in 2012 rated 7-Zip 4.5 stars out of 5, noting, "[its] interface and additional features are fairly basic, but the compression ratio is outstanding". On TechRepublic in 2009, Justin James found the detailed settings for Windows File Manager integration were "appreciated" and called the compression-decompression benchmark utility "neat". And though the archive dialog has settings that "will confound most users", he concluded: "7-Zip fits a nice niche in between the built-in Windows capabilities and the features of the paid products, and it is able to handle a large variety of file formats in the process." Between 2002 and 2016, 7-Zip was downloaded 410 million times from SourceForge alone. The software has also received awards: in 2007, SourceForge granted it community choice awards for "Technical Design" and for "Best Project". In 2013, Tom's Hardware conducted a compression speed test comparing 7-Zip, MagicRAR, WinRAR, and WinZip; they concluded that 7-Zip beat out all the others with regard to compression speed, ratio, and size, and awarded the software the 2013 Tom's Hardware Elite award.
**Aylin Yener**
Aylin Yener:
Aylin Yener holds the Roy and Lois Chope Chair in engineering at Ohio State University, and she is currently the President of the IEEE Information Theory Society. Dr. Yener is a Professor of Electrical and Computer Engineering, Professor of Integrated Systems Engineering, Professor of Computer Science and Engineering, Affiliated Faculty at the Sustainability Institute, and Affiliated Faculty at the Translational Data Analytics Institute, all at Ohio State University.
Education:
Yener received her dual B.Sc. degrees (1991) in Electrical and Electronics Engineering and Physics from Boğaziçi University, Istanbul, Turkey. She carried out her graduate studies at Rutgers University, New Brunswick, NJ, receiving her M.S. in 1994 and her Ph.D. in 2000 in Electrical and Computer Engineering while working in the Wireless Information Network Laboratory (WINLAB). In 2002, she joined the Electrical Engineering faculty of Pennsylvania State University in University Park, Pennsylvania. She became a full professor by 2010 and was named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2015 for her contributions to wireless communication theory and wireless information security. Yener was named Dean's Fellow in 2017 and made the Clarivate Analytics Highly Cited Researchers list later the same year. Yener was honored as a Pennsylvania State University Distinguished Professor in 2019. In 2020, Yener accepted a faculty position at Ohio State University, becoming the Electrical Engineering Department's first chaired female professor.
Research interest:
Yener is interested in fundamental performance limits of networked systems, communications, and information theory. The applications of these fields include, but are not limited to, information theoretic physical layer security, energy harvesting communication networks, and caching systems. She runs the INSPIRE Lab (Information and Networked Systems Powered by Innovation and Research in Engineering) at Ohio State University.
Awards:
IEEE Information Theory Society President (2020), IEEE Information Theory Society Vice President (2019), IEEE Guglielmo Marconi Best Paper Award (2014), Defense Advanced Research Projects Agency (DARPA) grant, "Rethinking Mobile Ad Hoc Networks: A Non-Equilibrium Information Theory" (2007).
**Origamic architecture**
Origamic architecture:
Origamic architecture is a form of kirigami that involves the three-dimensional reproduction of architecture and monuments, on various scales, using cut-out and folded paper, usually thin paperboard. Visually, these creations are comparable to intricate 'pop-ups', indeed, some works are deliberately engineered to possess 'pop-up'-like properties. However, origamic architecture tends to be cut out of a single sheet of paper, whereas most pop-ups involve two or more. To create the three-dimensional image out of the two-dimensional surface requires skill akin to that of an architect.
Origin:
The development of origamic architecture began with Professor Masahiro Chatani's (then a newly appointed professor at the Tokyo Institute of Technology) experiments with designing original and unique greeting cards. Japanese culture encourages the giving and receiving of cards for various special occasions and holidays, particularly Japanese New Year, and according to his own account, Professor Chatani personally felt that greeting cards were a significant form of connection and communication between people. He worried that in today's fast-paced modern world, the emotional connections called up and created by the exchange of greeting cards would become scarce. In the early 1980s, Professor Chatani began to experiment with cutting and folding paper to make unique and interesting pop-up cards. He used techniques of origami (Japanese paper folding) and kirigami (Japanese papercutting), as well as his experience in architectural design, to create intricate patterns which played with light and shadow. Many of his creations are made of stark white paper which emphasizes the shadowing effects of the cuts and folds. In the preface to one of his books, he called the shadows of the three-dimensional cutouts created a "dreamy scene" that invited the viewer into a "fantasy world". At first, Professor Chatani simply gave the cards to his friends and family. Over the next nearly thirty years, however, he published over fifty books on origamic architecture, many directed at children. He came to believe that origamic architecture could be a good way to teach architectural design and appreciation of architecture, as well as to inspire interest in mathematics, art, and design in young children. Professor Chatani also spent a good deal of time, even after his retirement, traveling to exhibit his work. He frequently collaborated on books and exhibits with Keiko Nakazawa and Takaaki Kihara.
Origin:
Masahiro Chatani Masahiro Chatani was a Japanese architect (certified, first class) and professor considered to be the creator of origamic architecture. From its development until his death in 2008, he was widely acknowledged to be the world's foremost origamic architect.
Origin:
Masahiro Chatani was born in Hiroshima, Japan in 1934. He grew up in Tokyo, and graduated from the Tokyo Institute of Technology in 1956. He became an assistant professor at the Tokyo Institute of Technology in 1969 and an associated assistant professor at Washington University in 1977, and was promoted to full professorship at the Tokyo Institute of Technology in 1980. It was around this time that he created what is now known as "origamic architecture". He became a professor emeritus fifteen years later, and continued to lecture at a number of institutions, including the Japan Architectural College, Hosei University, and the Shizuoka University of Art and Architecture. After his retirement from active professorship, he continued to travel around the world, giving exhibits, demonstrations, and seminars on architectural design and origamic architecture. Professor Chatani died on November 19, 2008, at the age of 74, from complications from larynx cancer.
Types of origamic architecture:
There are several different styles of origamic architecture. In one style, a folded paper is cut in such a way that when the paper is opened to form a 90-degree angle, a three-dimensional image can be created, similar to most pop-up books. A second style requires attaching a cut-out form to a base sheet of paper with thread.
Types of origamic architecture:
Takaaki Kihara frequently uses another technique in which the three-dimensional structure is "punched out" of the flat card. Designs created with this technique allow the viewer to see the empty cutouts, which can create interesting shadowing effects. Kihara also points out that this style of origamic architecture is easier to store than the other 180-degree form, as the cut-out three-dimensional forms can be re-flattened with ease. Less commonly, some designs require opening the paper and folding it completely in the opposite direction, making a 360-degree angle.
Types of origamic architecture:
Uses in Architectural Design Origamic architecture has become a tool many architects use to visualize the 2D as 3D in order to expand on and explore a design idea. 3D origami objects can be used in interior design, e.g. for decorating walls. There are ways of doing origamic architecture using CAD (computer-aided design); CAD uses laser cutting to speed the cutting process along, allowing precise forms to be made. AI design programs are still in development, as architects have been searching for solutions to their design struggles. EPFL architects Hani Buri and Yves Weinand researched ways to mass-produce complex folded plate structures using origami architecture. Different folding techniques, like the Yoshimura pattern (an inverted diamond pattern), the Miura Ori pattern (a repetition of reverse folds resulting in a diamond pattern), and the diagonal pattern (a series of parallelograms folded at a diagonal), were all very successful due to their origami diamond and herringbone patterns. As a result, Buri and Weinand were able to produce successful models and even a chapel in Lausanne, Switzerland.
Types of origamic architecture:
A study performed at the University of Pennsylvania in 2014, laid out the rules for folding and cutting a hexagonal lattice into a variety of three-dimensional shapes.
Leading practitioners:
Although origamic architecture was developed and first gained popularity in Japan, it is today practiced in countries all over the world. Some leading practitioners include Masahiro Chatani (Japan), Keiko Nakazawa (Japan), Takaaki Kihara (Japan), Ingrid Siliakus (Netherlands), María Victoria Garrido (Argentina), Giovanni Russo (Italy) and Marc Hagan-Guirey (UK).
**Paint roller**
Paint roller:
A paint roller is a paint application tool used for painting large flat surfaces rapidly and efficiently.
Paint roller:
The paint roller typically consists of two parts: a "roller frame" and a "roller cover". The roller cover absorbs the paint and transfers it to the painted surface; the roller frame attaches to the roller cover. A painter holds the roller by the handle section. The roller frame is reusable. It is possible to clean and reuse a roller cover, but it is typically disposed of after use.
Paint roller:
The roller cover is a cylindrical core with a pile fabric covering secured to it. Foam rubber rollers are also produced. Both foam and fabric roller covers are available individually (without a handle) to replace worn-out covers; once an old cover is removed, the new one can be fitted onto the handle section for use. One innovation of the cylindrical core allows it to hold paint inside: the cover absorbs paint from the inside and filters it through naturally by wicking, so that the paint is applied externally as the roller is rolled.
History:
Norman James Breakey In Canada, Norman James Breakey invented a paint roller in 1940, had it patented in Canada, and produced it in a home factory. After WW II, he sold at least 50,000 of the paint rollers under the name Koton Kotor and it was also sold as the TECO roller by Eaton's.
Richard Croxton Adams In the United States, Richard Croxton Adams produced a paint roller in a basement workshop in 1940 and patented it in the United States; the patent application was filed in 1942. However, a similar paint roller patent application had been filed in the United States two years earlier, in 1940, by inventor Fride E. Dahstrom (U.S. Patent 2,298,682).
**Gold fingerprinting**
Gold fingerprinting:
Gold fingerprinting is a method of identifying an item made of gold based on the impurities or trace elements it contains.
Importance:
This technique has been used to lay claim to stolen or relocated gold; even gold that has undergone salting can be identified by its multiple sources. Gold fingerprinting also assists in understanding the origins of gold artifacts. This method is used to characterize gold or a gold-containing item by its trace elements, i.e. to fingerprint the sample by its mineralizing event and trace it to the particular mine or bullion source. Elements that measure above the detection limits (Ag, Cu, Ti, Fe, Pt, Pd, Mn, Cr, Ni, Sn, Hg, Pb, As and Te) can be used for gold fingerprinting and geochemical characterization. In order for this technique to identify the origins of the gold in question, a database built from fingerprinted samples of gold from mines and bullion sources is required.
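As a purely hypothetical sketch of how such a database might be queried, each source can be represented as a vector of trace-element concentrations and a sample matched to the nearest reference profile. All source names and values below are invented for illustration; real fingerprinting uses many more elements and more careful statistics.

```python
import math

# Hypothetical reference database: trace-element concentrations (ppm) per source.
# These numbers are illustrative only, not real assay data.
REFERENCE_DB = {
    "Mine A": {"Ag": 520.0, "Cu": 110.0, "Pd": 0.8, "Pt": 0.3},
    "Mine B": {"Ag": 300.0, "Cu": 45.0, "Pd": 2.1, "Pt": 1.7},
    "Bullion X": {"Ag": 80.0, "Cu": 15.0, "Pd": 0.1, "Pt": 0.1},
}

def nearest_source(sample, db):
    """Return the reference source whose trace-element profile is closest
    (Euclidean distance over the sample's elements) to the sample."""
    def distance(profile):
        return math.sqrt(sum((sample[e] - profile.get(e, 0.0)) ** 2 for e in sample))
    return min(db, key=lambda name: distance(db[name]))

# An unknown sample whose profile closely resembles "Mine A".
sample = {"Ag": 510.0, "Cu": 105.0, "Pd": 0.9, "Pt": 0.4}
print(nearest_source(sample, REFERENCE_DB))  # -> Mine A
```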
Method:
Electron microprobe (EMP), Synchrotron micro-XRF (SR-M-XRF), Time-of-flight secondary ion mass spectrometry (TOF-SIMS), Laser induced breakdown spectroscopy (LIBS), Atomic emission spectrometry, x-ray fluorescence spectrometry with higher energy synchrotron radiation (SR-XFS) and Laser ablation-Inductively coupled plasma mass spectrometry (LA-ICP-MS) are all methods of gold fingerprinting.
Method:
The most common method is LA-ICP-MS primarily because it is quasi-nondestructive, allowing for the preservation of the samples and convenient as samples require little to no preparation. Laser ablation allows for high spatial resolution sampling while the inductively coupled plasma mass spectrometry provides high sensitivity to identify extremely small amounts of trace elements within the gold. This method can also be conducted outside of a lab with the assistance of a portable device that uses a diode pumped solid state laser and fiber-optics, making fingerprinting more convenient as it eliminates the need for transfer of gold to a specific lab.
Method:
Advantages of LA-ICP-MS include reduced sample preparation, no sample size requirements, reduced spectral interference and increased sample throughput. Over the past 32 years, LA-ICP-MS has been used for archaeological, biological and forensic purposes. For example, a group of gold foil fragments dating back to the 5th century B.C.E. were analyzed by LA-ICP-MS, uncovering information on their manufacturing process, function and relationship to one another.
Complications:
LA-ICP-MS functions optimally with gold particles greater than 60 μm in diameter, to avoid any contamination during measurements. Although LA-ICP-MS has a lower detection limit, its overall precision is lower than that of other trace-element analysis techniques such as field emission-electron probe microanalysis (FE-EPMA) and synchrotron micro X-ray fluorescence spectroscopy (SR-M-XRF). Due to the small size of gold particles (<5 μm–250 μm), small fragments of minerals need to be separated from the gold before analysis can occur. Gold fingerprinting has limitations, including elemental fractionation (the non-sample-related analyte), and calibration requires matrix-matched standards. A few other problems limit the actual sourcing or provenancing of gold in relation to manufactured art objects: the lack of an extensive database of elemental profiles in gold ores, the natural differences that coexist in ore geology, and the difficulty of accurately analyzing trace elements. Also, the trading, looting and re-melting of so-called "precious" metal objects add to the problem of sourcing.
**Wheel sizing**
Wheel sizing:
The wheel size for a motor vehicle or similar wheel has a number of parameters.
Units:
The millimetre is most commonly used to specify dimensions in modern production, but marketing of wheel sizes towards customers is still sometimes done with traditional systems. For example, wheels for road bicycles are often referred to as 700C, when they actually measure 622 mm. Wheel diameters and widths for cars are stated in inches, while car tire bead diameters are stated in inches and widths are in millimetres.
Wheel size:
The wheel size is the size designation of a wheel given by its diameter, width, and offset.
Wheel size:
The diameter of the wheel is the diameter of the cylindrical surface on which the tire bead rides. The width is the inside distance between the bead seat faces. The offset is the distance from the wheel's true centerline (half the width) to the wheel's mounting surface. Offset is covered in more detail below. A typical wheel size will be listed beginning with the diameter, then the width, and lastly the offset (+ or - for positive or negative). Although wheel sizes are marketed with measurements in inches, the Michelin TRX introduced in 1975 was marketed in millimeters.
Wheel size:
For example, 17 × 8.5 × +35 designates a diameter of 17 inches, width of 8.5 inches, and +35 mm positive offset (432 × 216 × +35 in fully metric numbers).
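A minimal parser for this diameter-width-offset designation, using a plain-ASCII "17x8.5+35" form of the notation above (the helper name is my own):

```python
import re

def parse_wheel_size(spec):
    """Parse a wheel designation like '17x8.5+35' into
    (diameter_in, width_in, offset_mm)."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)x(\d+(?:\.\d+)?)([+-]\d+)", spec)
    if not m:
        raise ValueError(f"unrecognized wheel size: {spec!r}")
    return float(m.group(1)), float(m.group(2)), int(m.group(3))

diameter_in, width_in, offset_mm = parse_wheel_size("17x8.5+35")
# Converting the inch dimensions to millimetres recovers the fully metric
# numbers quoted in the text (roughly 432 x 216, offset already in mm).
print(round(diameter_in * 25.4), round(width_in * 25.4), offset_mm)
```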
Wheel size:
Replacing the wheels on a car with larger ones can involve using tires with a lower profile. This is done to keep the overall diameter of the tire the same as stock, to ensure the same clearances are achieved. Larger wheels are typically desired for their appearance but could also offer more space for brake components. This can come at a cost to performance, though, as larger wheels can weigh more.
Wheel size:
Alternatively, smaller wheels are chosen to fit a specific style of vehicle. An example of this is the Lowrider Culture in which smaller wheels are largely desired.
Wheel size:
Wheels can be widened to allow for a wider tire to be used and to poke the wheel out to the fender of the vehicle. Running a wider tire allows for more of the vehicle's power to be put to the ground because there is a larger surface area making contact with the road. This will improve a vehicle's performance when it comes to acceleration, handling, and braking.
Wheel size:
Bolt pattern The bolt pattern determines the number and position of the bolt holes on the wheel to line up with your vehicle's studs on the mounting hubs. The bolt holes are spaced evenly about the bolt hole circle. Wheel studs are the bolts that are on your mounting hub and are used along with lug nuts to attach the wheel to the car. The bolt hole circle is the circle that the center of each bolt aligns with. The second number in a bolt pattern is the diameter of this circle. The bolt circle has the same center point as the mounting hub to ensure that the wheel will be concentric with the mounting hub. The bolt circle's measurement is called the bolt circle diameter (BCD), also called the pitch circle diameter (PCD). The bolt circle diameter is typically expressed in mm and accompanies the number of bolts in your vehicle's bolt pattern. One example of a common bolt pattern is 5x100 mm. This means there are 5 bolts evenly spaced about a 100 mm bolt circle.
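The geometry described above is simply points evenly spaced on the bolt circle; a small sketch (the helper name is my own) computes the bolt-hole centres for a 5x100 mm pattern:

```python
import math

def bolt_positions(num_bolts, bcd_mm):
    """Coordinates (x, y) in mm of each bolt-hole centre, evenly spaced on
    the bolt circle and measured from the hub centre, starting at the top."""
    radius = bcd_mm / 2
    positions = []
    for i in range(num_bolts):
        angle = math.pi / 2 + 2 * math.pi * i / num_bolts  # 360/n degrees apart
        positions.append((radius * math.cos(angle), radius * math.sin(angle)))
    return positions

# A 5x100 pattern: 5 holes on a 100 mm bolt circle, 72 degrees apart.
for x, y in bolt_positions(5, 100):
    print(f"({x:7.2f}, {y:7.2f})")
```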
Wheel size:
The picture to the right is an example of a 5×100 mm bolt pattern on a Subaru BRZ. The wheel has 5 lug nuts and utilizes a 100 mm bolt circle diameter.
Wheel size:
Some of the most common BCD values are 100 mm (≈3.94 inches), 112 mm (≈4.41 inches), and 114.3 mm (4.5 inches). Always check your owner's manual or call your local car dealership to confirm the bolt pattern on your vehicle. Over the years, over 30 different bolt patterns have been used by car manufacturers, with most of the different bolt patterns being incompatible with each other.
Wheel size:
Lug nuts and wheel studs vs. bolts On vehicles with wheel studs, wheels must be fitted with the correct type of lug nuts. On vehicles without wheel studs, wheels must be fitted with the correct type of lug bolts.
Wheel size:
Lug nuts (or bolts) will have either flat, tapered (conical), or ball (radius) seats. The type of seat a wheel requires will determine the appropriate lug nuts required to securely attach the wheel to the vehicle. A flat seat type has a flat end that puts pressure on the wheel and compresses it against the mounting hub. Similarly, tapered and ball seat types have a conical or semicircular end, respectively. One place to find the lug nut type is the OEM (Original Equipment Manufacturer) specifications if you have stock wheels; contact the wheel manufacturer if you have aftermarket wheels. Some aftermarket wheels will only fit smaller lug nuts, or not allow an ordinary lug nut to be properly torqued down because a socket will not fit into the lug hole. Tuner lug nuts were created to solve this problem by utilizing a special key to allow removal and installation with a standard lug wrench or socket. The design of tuner lug nuts can range from bit style to multisided or spline drive, and they are sometimes lightweight for performance purposes.
Wheel size:
Another variation of lug nut is the "locking wheel nut", which is used as a theft prevention method to keep thieves from stealing a vehicle's wheels. When utilizing locking wheel nuts, one standard lug nut on each wheel is replaced with a nut that requires a unique key (typically a computer-designed, rounded star shape) to fit and remove the nut. This ensures that at least one lug nut will remain attached and, in theory, should prevent theft. However, universal removal tools are available which grip the head of the locking nut using a hardened left-hand thread. The success of locking wheel nuts depends on the determination of the would-be thief and the tools that they have available to them.
Wheel size:
Offset The offset is the distance from the hub-mounting surface to the wheel's true centerline. It is quantified by an ET value (from the German Einpresstiefe, literally press-in depth) and measured in mm. A positive offset means the hub-mounting surface is closer to the outside edge of the wheel, while a negative offset means the hub-mounting surface is closer to the inside edge of the wheel.
Wheel size:
A wheel with too much positive offset will be closer to the edge of the fender. This can cause clearance issues between the tire and the fender. One that has too much negative offset will be closer to the suspension components and could cause the tire to rub on them. Wheel width, offset, and its accompanying tire size all determine how a particular wheel/tire combination will fit on a given vehicle. Offset also affects the scrub radius of the steering and it is advisable to stay within the limits allowed by the vehicle manufacturer. Because wheel offset changes the lever-arm length between the center of the tire and the centerline of the steering knuckle, the way bumps, road imperfections, and acceleration/braking forces turn into steering torques (bump-steer, torque-steer, etc.) and thus, will change the drivability of the vehicle depending on wheel offset. Likewise, the wheel bearings will see increased thrust loads if the wheel centerline is moved away from the bearing centerline.
Wheel size:
When choosing an offset for an aftermarket wheel, it is important to take note of the space between the outer edge of the wheel and the fender. Depending on the desired style, you may want to match the change in offset from stock wheels to the amount of space between the wheel face and the fender. For example, if there is 15 mm of space between the outer face of the wheel and the fender and you want a flush fitment, you would want to go from a +45 mm offset to a +30 mm offset. This will bring the mounting surface of the wheel further inward towards the vehicle from the true center point of the wheel, thus poking the wheel out by an extra 15 mm.
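The flush-fitment arithmetic above can be sketched as follows (assuming offset is measured as described, so reducing it moves the wheel face outward by the same amount; the function name is my own):

```python
def flush_offset(current_offset_mm, fender_gap_mm):
    """New offset that moves the wheel face outward by the measured gap
    between the wheel face and the fender.

    Reducing the offset moves the mounting surface inboard relative to the
    wheel centerline, poking the wheel face outward by the same amount.
    """
    return current_offset_mm - fender_gap_mm

# The example from the text: +45 mm offset with a 15 mm fender gap -> +30 mm.
print(flush_offset(45, 15))  # -> 30
```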
Wheel size:
Centerbore The centerbore of a wheel is the hole in the center of the wheel that centers it over the mounting hub of the car. Some factory wheels have a centerbore that matches exactly with the hub to reduce vibration by keeping the wheel centered. Wheels with the correct centerbore for the car they will be mounted on are known as hubcentric. Hubcentric wheels reduce the job of the lug nuts to center the wheel on the hub. Wheels that are not hubcentric are known as lugcentric, as the job of centering is done by the lug nuts, assuming they are properly torqued down. Another, more common, pair of terms is hub piloted and stud piloted wheels and hubs. The stud piloted (lugcentric) design is older, while the hub piloted design is more commonly in use today and can provide a more accurate connection.
Wheel size:
Centerbore on aftermarket wheels must be equal to or greater than that of the hub, otherwise the wheel cannot be mounted on the car. Many aftermarket wheels come with "hubcentric rings" that lock or slide into the back of the wheel to adapt a wheel with a larger centerbore to a smaller hub. These adapters are usually made of plastic, but also come in aluminum. Plastic rings provide only initial centering and are not strong enough to help support the wheel in the event of a high-speed pothole hit; steel rings are the strongest, and aluminum is in between. Brake caliper clearance The caliper clearance, also called the "X-factor", is the amount of clearance built into the wheel to clear the vehicle's caliper assembly.
Tire sizes:
Modern road tires have several measurements associated with their size as specified by tire codes like 225/70R14. The first number in the code (e.g., "225") represents the nominal tire width in millimeters. This is followed by the aspect ratio (e.g., "70"), which is the height of the sidewall expressed as a percentage of the nominal tire width. "R" stands for radial and relates to the tire construction. The final number in the code (e.g., "14") is the mating wheel diameter measured in inches. The overall circumference of the tire will increase by increasing any of the tire's specifications. For example, increasing the width of the tire will also increase its circumference, because the sidewall height is a proportional dimension. Increasing the aspect ratio will increase the height of the tire and hence the circumference.
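The relationship between these three numbers and the tire's overall size can be worked out in a few lines. The function below is an illustrative sketch (its name, and the assumption of the plain metric format, are ours):

```python
import math
import re

def tire_dimensions(code):
    """Overall diameter and circumference (both in mm) computed from a
    metric tire code such as '225/70R14'."""
    m = re.fullmatch(r"(\d+)/(\d+)R(\d+)", code)
    if not m:
        raise ValueError(f"unrecognized tire code: {code}")
    width_mm = int(m.group(1))              # nominal section width, mm
    aspect = int(m.group(2)) / 100.0        # sidewall height as fraction of width
    wheel_mm = int(m.group(3)) * 25.4       # wheel diameter, inches -> mm
    diameter_mm = wheel_mm + 2 * width_mm * aspect  # sidewall at top and bottom
    return diameter_mm, math.pi * diameter_mm

d, c = tire_dimensions("225/70R14")   # d ≈ 670.6 mm, c ≈ 2106.7 mm
```

Increasing either the width or the aspect ratio in the code grows `diameter_mm`, which is exactly the circumference effect described above.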
Tire sizes:
Off-roading tires may use a different measurement scheme: tread width × outside diameter, followed by wheel size (all in inches) – for example 31×10.50R15 (787 mm × 267 mm R380 in metric designation). The size of a wheel itself, however, is denoted width × diameter; for example, an 8.5 in × 20.0 in (220 mm × 510 mm) wheel is 8.5 in (220 mm) wide with a 20 in (510 mm) diameter.
Tire sizes:
Load capacity Load capacity is the amount of mass a wheel will carry. This number will vary depending on the number of lugs, the PCD, the material used and the type of axle the wheel is used on. A wheel used on a free-rolling trailer axle will carry more weight than that same wheel used on the drive or steering axle of a vehicle. All wheels will have the load capacity stamped on the back of the wheel. The Gross Vehicle Weight Rating (GVWR) is the maximum operating mass of a vehicle as specified by the manufacturer. In the United States this information is required to be on the vehicle's door placard. The combined load capacity of all wheels on the vehicle must meet or exceed the vehicle's gross vehicle weight rating.
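The combined-capacity requirement is a one-line check; this is a hypothetical sketch (the function name and the numbers are ours):

```python
def wheels_support_gvwr(per_wheel_capacity_kg, num_wheels, gvwr_kg):
    """True when the combined rated load capacity of all wheels meets or
    exceeds the vehicle's gross vehicle weight rating (GVWR)."""
    return per_wheel_capacity_kg * num_wheels >= gvwr_kg

# Hypothetical numbers: four wheels rated at 690 kg each on a vehicle
# with a 2600 kg GVWR -> 2760 kg combined, so the check passes.
wheels_support_gvwr(690, 4, 2600)   # → True
```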
Staggered wheel fitment:
Staggered wheel fitment usually appears on rear-wheel drive vehicles (and, in smaller numbers, some all-wheel drive cars), where the rear wheels are wider than the front wheels. Such a wheel setup may be found on the Ford Mustang, Infiniti G35, certain models of Mercedes and BMW, etc. A good example of such a wheel combination is 19 in × 8 in (480 mm × 200 mm) in front and 19 in × 9.5 in (480 mm × 240 mm) in the rear. Technically, wider wheels in the rear allow better grip with the road surface, a performance benefit for acceleration.
Staggered wheel fitment:
Advantages: better grip with the road for improved acceleration; better cornering ability. Disadvantages: the rear wheels cannot be rotated to the front and vice versa; the front and rear wheels will have different tire sizes; in case of improper installation the large rear wheel may rub suspension or wheel arches. Another setup option of staggered wheel fitment is called double staggered: smaller-diameter, narrower wheels in the front with larger-diameter, wider wheels in the back. For example, a vehicle may feature 18 in × 8 in (460 mm × 200 mm) wheels in front and 19 in × 10 in (480 mm × 250 mm) in the rear. Such setups are found on Chevrolet Corvettes, the first and second generations of the Acura NSX, and some others. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Supervisory program**
Supervisory program:
A supervisory program or supervisor is a computer program, usually part of an operating system, that controls the execution of other routines and regulates work scheduling, input/output operations, error actions, and similar functions, as well as the flow of work in a data processing system. It can also refer to a program that allocates computer component space and schedules computer events by task queuing and system interrupts. Control of the system is returned to the supervisory program frequently enough to ensure that demands on the system are met.
Supervisory program:
Historically, this term was essentially associated with IBM's line of mainframe operating systems starting with OS/360. In other operating systems, the supervisor is generally called the kernel.
In the 1970s, IBM further abstracted the supervisor state from the hardware, resulting in a hypervisor that enabled full virtualization, i.e. the capacity to run multiple operating systems on the same machine totally independently of each other. The first such system was accordingly called Virtual Machine, or VM.
**Discrete complementary JFETS**
Discrete complementary JFETS:
Discrete complementary JFETs are N-channel and P-channel JFETs that are built with a similar process technology and are designed to have similar or matching electrical characteristics. Discrete complementary JFETs come in separate P- and N-channel packages. Dual discrete complementary JFETs house two N-channel JFETs in one monolithic unit and two P-channel units in another monolithic unit.
Discrete complementary JFETS:
Because they are built on the same die, dual N-channel JFETs have nearly equivalent or matched electrical characteristics; the same is true of dual P-channel JFETs. Although complementary P- and N-channel devices are built with the same process technology, basic differences between the construction of P- and N-channel devices mean that electrical specifications such as mobility and transconductance differ slightly between the P- and N-channel JFETs. The complementary and matched nature of the dual N-channel and dual P-channel JFETs is fundamental to the building of many analog circuits, most notably amplifiers. Specifically, complementary amplifier topologies are based on a number of complementary matched JFET pairs. As one example, a fully complementary amplifier will use matched N- and P-channel JFETs for the differential amplifier front end.
Discrete complementary JFETS:
Other complementary and matched JFET blocks within complementary amplifiers include stacked and folded cascode blocks and level translators (level shifters). Source followers (buffers) are another complementary structure found in amplifiers. The complementary source follower, often used in the output stage of an amplifier, can be designed so that it adjusts the output offset voltage to zero, effectively eliminating the need for an AC coupling capacitor. A complementary source follower can also be paralleled to create power amplifiers. Nelson Pass built such a design based on Toshiba's 2SK170 and 2SJ74 single complementary JFETs, with over 1000 parallel JFETs. Today, since Toshiba has discontinued these parts, such an amplifier would have to be built with LSK170 and LSJ74 single complementary pairs or with LSK489 and LSJ689 JFETs. Because the LSK489 and LSJ689 have lower input capacitance than the Toshiba parts and are duals, the same kind of amplifier design would have lower noise levels and a smaller footprint. Amplifier bias networks will also incorporate JFET current sources, although not necessarily in matched arrangements. JFET current mirror designs, for use in amplifiers, have been patented using matched JFETs, such as the LSK389, and GaAs JFETs. Besides amplifiers, discrete JFET matched pairs are also used in the design of voltage-controlled resistors, voltage-controlled current sources, current-to-voltage converters, programmable gain circuits, voltmeters, phasers and a wide range of analog computational circuits such as absolute value circuits. These blocks are often designed with matched N-channel pairs, matched P-channel pairs, or complementary matched pairs.
Popularity:
The long-term popularity of discrete complementary JFETs is a result of the designer's ability to obtain better circuit performance at lower cost points than can be obtained with more modern, highly integrated devices. Well-known audio designers like John Curl, Nelson Pass and Erno Borbely have also proven to the marketplace that discrete complementary JFETs are one of the better ways to achieve high-quality, low-noise audio designs. Complementary JFET duals are also noted for their low equivalent noise voltage, high operating voltage, thermal tracking characteristics, low offset voltage, low pinch-off voltages, low input bias currents, and very high input impedance. All of these characteristics make these devices ideal for use in high-performance audio, sensor and measurement applications.
Future:
Improved JFET process technologies, discrete JFET devices and JFET topologies will continue to challenge highly integrated monolithic designs for sockets in high-quality electronic products. The primary reasons are cost and customization. It is too costly for integrated circuit companies to integrate customized high-performance designs for niche markets, and cost-performance trade-offs are met more easily with discrete devices.
Discrete designs based on hybrid topology breakthroughs will also continue to challenge highly integrated, highly commercialized monolithic designs in terms of performance and cost. Hybrid topologies that combine complementary JFETs, complementary MOS, complementary bipolar transistors, complementary SiC JFETs and complementary GaAs JFETs are easier and more cost-effective to build from discrete components than to integrate into highly advanced monolithic chips.
**Periodic inventory**
Periodic inventory:
Periodic inventory is a system of inventory in which updates are made on a periodic basis. This differs from perpetual inventory systems, in which records are updated continuously as each transaction occurs.
Periodic inventory:
In a periodic inventory system no effort is made to keep up-to-date records of either the inventory or the cost of goods sold. Instead, these amounts are determined only periodically - usually at the end of each year. This physical count determines the amount of inventory appearing in the balance sheet. The cost of goods sold for the entire year then is determined by a short computation.
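The "short computation" is the cost-of-goods-sold identity; a minimal sketch (the function name and figures are ours):

```python
def periodic_cogs(beginning_inventory, purchases, ending_inventory):
    """Cost of goods sold under a periodic system: goods available for
    sale (beginning inventory plus purchases) minus the ending inventory
    established by the year-end physical count."""
    goods_available = beginning_inventory + purchases
    return goods_available - ending_inventory

# Hypothetical figures: $10,000 opening inventory, $50,000 of purchases,
# and a physical count of $12,000 at year end.
periodic_cogs(10_000, 50_000, 12_000)   # → 48000
```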
**Peter Adriaens**
Peter Adriaens:
Peter Adriaens is a Professor of Engineering and Entrepreneurship at the University of Michigan. He is an expert in the field of clean technology development.
Academic career:
Adriaens earned a B.Sc. in 1984 and an M.S. in 1986 from Ghent University, followed by a PhD in Environmental Sciences at the University of California, Riverside. While completing his PhD, Adriaens was involved in experiments surrounding the biodegradation of PCBs via two strains of bacteria. His research was presented to the American Society for Microbiology in May 1989. He has also completed postdoctoral work at Stanford University. Adriaens is now a Professor of Civil and Environmental Engineering at the University of Michigan and a Professor of Entrepreneurship at the Ross Business School at the university. He also serves as the Cleantech Director of the university's Wolverine Venture Fund. He has been a Distinguished Professor of Entrepreneurship at Sichuan University in Chengdu. He is a past president of the Association of Environmental Science and Engineering Professors.
Clean technology development and finance:
Adriaens worked as a consultant on both the Exxon Valdez oil spill and the Gulf War oil spill, and has been interviewed about other oil spills in his capacity as an environmental engineer. He has also been interviewed on entrepreneurship issues, such as the valuation of newly public companies. At times he has been interviewed about the intersection between both engineering and entrepreneurship, in addition to the cleantech industry. Adriaens is a director of the Watershed Capital Group. Adriaens has made contributions to the field of Environmental Finance, especially in the field of cleantech clusters. He is the founder of Global CleanTech LLC, and a Director for CleanTech Acceleration Partners as well as for the Global CleanTech Cluster Association. In addition, he is appointed at Limnotech as academic in residence and director of Asian operations.
Publishing:
Adriaens has authored more than 100 peer-reviewed articles and book chapters. For example, Adriaens co-authored the chapter "Teaching Entrepreneurial business strategies in global markets: a comparison of cleantech venture assessment in the US and China" with Timothy Faley, in the 2011 book Entrepreneurship Education in Asia. In the chapter the authors describe China as the global leader in developing clean energy technology and the critical link that venture capital has played in promoting the success of the industry there. They also discuss the methodologies behind the university education of entrepreneurs in this region—specifically in the Chinese province of Sichuan, where Adriaens had previously taught. He has been published in journals including International Biodeterioration and Biodegradation, Federation of European Microbiological Societies (FEMS) Microbiology Ecology, Environmental Science & Technology, and the Journal of Microbiological Methods. Adriaens has also written op-eds for newspapers including the Financial Times. His published articles in the press include a December 2011 article in Forbes Magazine, in which he argues for a new approach (the KeyStone Compact) to assessing the valuation of tech companies in order to avoid a new tech bubble. In March 2011 he also argued for the need for a new business model for US-China competition, in which Chinese investors "flush with cash" could legitimately invest in U.S. R&D in a way that both sides find beneficial, in order to trigger positive trade flow from China to the US and help solve IP infringement issues.
**Parameter identification problem**
Parameter identification problem:
In economics and econometrics, the parameter identification problem arises when the value of one or more parameters in an economic model cannot be determined from observable variables. It is closely related to non-identifiability in statistics and econometrics, which occurs when a statistical model has more than one set of parameters that generate the same distribution of observations, meaning that multiple parameterizations are observationally equivalent.
Parameter identification problem:
For example, this problem can occur in the estimation of multiple-equation econometric models where the equations have variables in common.
In simultaneous equations models:
Standard example, with two equations Consider a linear model for the supply and demand of some specific good. The quantity demanded varies negatively with the price: a higher price decreases the quantity demanded. The quantity supplied varies directly with the price: a higher price increases the quantity supplied.
In simultaneous equations models:
Assume that, say for several years, we have data on both the price and the traded quantity of this good. Unfortunately this is not enough to identify the two equations (demand and supply) using regression analysis on observations of Q and P: one cannot estimate a downward slope and an upward slope with one linear regression line involving only two variables. Additional variables can make it possible to identify the individual relations.
In simultaneous equations models:
In the graph shown here, the supply curve (red line, upward sloping) shows the quantity supplied depending positively on the price, while the demand curve (black lines, downward sloping) shows quantity depending negatively on the price and also on some additional variable Z, which affects the location of the demand curve in quantity-price space. This Z might be consumers' income, with a rise in income shifting the demand curve outwards. This is symbolically indicated with the values 1, 2 and 3 for Z.
In simultaneous equations models:
With the quantities supplied and demanded being equal, the observations on quantity and price are the three white points in the graph: they reveal the supply curve. Hence the effect of Z on demand makes it possible to identify the (positive) slope of the supply equation. The (negative) slope parameter of the demand equation cannot be identified in this case. In other words, the parameters of an equation can be identified if it is known that some variable does not enter into the equation, while it does enter the other equation.
In simultaneous equations models:
A situation in which both the supply and the demand equation are identified arises if there is not only a variable Z entering the demand equation but not the supply equation, but also a variable X entering the supply equation but not the demand equation:

supply: Q = aS + bS·P + c·X
demand: Q = aD + bD·P + d·Z

with positive bS and negative bD. Here both equations are identified if c and d are nonzero.
In simultaneous equations models:
Note that this is the structural form of the model, showing the relations between the Q and P. The reduced form however can be identified easily.
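As a sketch with the same symbols as above, equating the two structural equations and solving for P, then substituting back, gives the reduced form explicitly:

```latex
% Reduced form: P and Q as functions of the exogenous X and Z only.
\begin{aligned}
P &= \frac{a_D - a_S}{b_S - b_D}
     - \frac{c}{b_S - b_D}\,X
     + \frac{d}{b_S - b_D}\,Z,\\[4pt]
Q &= \frac{a_D b_S - a_S b_D}{b_S - b_D}
     - \frac{c\,b_D}{b_S - b_D}\,X
     + \frac{d\,b_S}{b_S - b_D}\,Z.
\end{aligned}
```

Dividing the Z-coefficient of the Q equation by that of the P equation recovers b_S, and the ratio of the X-coefficients recovers b_D, which is precisely the identification argument made in the text.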
In simultaneous equations models:
Fisher points out that this problem is fundamental to the model, and not a matter of statistical estimation: It is important to note that the problem is not one of the appropriateness of a particular estimation technique. In the situation described [without the Z variable], there clearly exists no way using any technique whatsoever in which the true demand (or supply) curve can be estimated. Nor, indeed, is the problem here one of statistical inference—of separating out the effects of random disturbance. There is no disturbance in this model [...] It is the logic of the supply-demand equilibrium itself which leads to the difficulty. (Fisher 1966, p. 5) More equations More generally, consider a linear system of M equations, with M > 1.
In simultaneous equations models:
An equation cannot be identified from the data if less than M − 1 variables are excluded from that equation. This is a particular form of the order condition for identification. (The general form of the order condition deals also with restrictions other than exclusions.) The order condition is necessary but not sufficient for identification.
In simultaneous equations models:
The rank condition is a necessary and sufficient condition for identification. In the case of only exclusion restrictions, it must "be possible to form at least one nonvanishing determinant of order M − 1 from the columns of A corresponding to the variables excluded a priori from that equation" (Fisher 1966, p. 40), where A is the matrix of coefficients of the equations. This is the generalization in matrix algebra of the requirement "while it does enter the other equation" mentioned above (in the line above the formulas).
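For the two-equation example, both the order and rank conditions can be checked mechanically. The sketch below uses hypothetical coefficient values, and the function name is ours:

```python
import numpy as np

# Coefficient matrix A of the two-equation system, written (up to the
# intercepts) as A @ (Q, P, X, Z)^T = const.  Numeric values hypothetical.
#   supply: Q - bS*P - c*X = aS   ->  row [1, -bS, -c,  0]
#   demand: Q - bD*P - d*Z = aD   ->  row [1, -bD,  0, -d]
bS, bD, c, d = 0.8, -1.2, 0.5, 0.7
A = np.array([[1.0, -bS, -c, 0.0],
              [1.0, -bD, 0.0, -d]])

def exclusion_identified(A, eq, excluded_cols):
    """Order condition: at least M - 1 variables excluded from equation eq.
    Rank condition: a nonvanishing determinant of order M - 1 exists among
    the excluded columns, using the rows of the other equations."""
    M = A.shape[0]
    if len(excluded_cols) < M - 1:          # order condition fails
        return False
    other_rows = [i for i in range(M) if i != eq]
    sub = A[np.ix_(other_rows, excluded_cols)]
    return bool(np.linalg.matrix_rank(sub) == M - 1)

# Supply (eq 0) excludes Z (column 3); demand (eq 1) excludes X (column 2).
supply_ok = exclusion_identified(A, 0, [3])   # identified as long as d != 0
demand_ok = exclusion_identified(A, 1, [2])   # identified as long as c != 0
```

With M = 2 the excluded-column submatrix is 1×1, so the rank condition reduces to d ≠ 0 (supply) and c ≠ 0 (demand), matching the text.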
**Bloom (shader effect)**
Bloom (shader effect):
Bloom (sometimes referred to as light bloom or glow) is a computer graphics effect used in video games, demos, and high-dynamic-range rendering (HDRR) to reproduce an imaging artifact of real-world cameras. The effect produces fringes (or feathers) of light extending from the borders of bright areas in an image, contributing to the illusion of an extremely bright light overwhelming the camera or eye capturing the scene. It became widely used in video games after an article on the technique was published by the authors of Tron 2.0 in 2004.
Theory:
There are two recognized potential causes of bloom.
Theory:
Imperfect Focus One physical basis of bloom is that, in the real world, lenses can never focus perfectly. Even a perfect lens will convolve the incoming image with an Airy disk (the diffraction pattern produced by passing a point light source through a circular aperture). Under normal circumstances, these imperfections are not noticeable, but an intensely bright light source will cause the imperfections to become visible. As a result, the image of the bright light appears to bleed beyond its natural borders.
Theory:
The Airy disc function falls off very quickly but has very wide tails (actually, infinitely wide tails). As long as the brightness of adjacent parts of the image is roughly in the same range, the blurring caused by the Airy disc is not particularly noticeable; but in parts of the image where very bright parts are adjacent to relatively darker parts, the tails of the Airy disc become visible and can extend far beyond the extent of the bright part of the image.
Theory:
In HDRR images, the effect can be reproduced by convolving the image with a windowed kernel of an Airy disc (for very good lenses), or by applying Gaussian blur (to simulate the effect of a less perfect lens), before converting the image to fixed-range pixels. The effect cannot be fully reproduced in non-HDRR imaging systems, because the amount of bleed depends on how bright the bright part of the image is.
Theory:
As an example, when a picture is taken indoors, the brightness of outdoor objects seen through a window may be 70 or 80 times brighter than objects inside the room. If exposure levels are set for objects inside the room, the bright image of the windows will bleed past the window frames when convolved with the Airy disc of the camera being used to produce the image.
Theory:
CCD Sensor Saturation Bloom in digital cameras is caused by an overflow of charge in the photodiodes, which are the light-sensitive elements in the camera's image sensor. When a photodiode is exposed to a very bright light source, the accumulated charge can spill over into adjacent pixels, creating a halo effect. This is known as "charge bleeding." The bloom effect is more pronounced in cameras with smaller pixels, as there is less room for the charge to dissipate. It can also be exacerbated by high ISO settings, which increase the camera's sensitivity to light and can result in more charge accumulation.
Theory:
While the bloom effect can be distracting in some images, it can also be used creatively to add a dreamy or otherworldly quality to photos.
Practical implementation:
Current-generation gaming systems are able to render 3D graphics using floating-point frame buffers, in order to produce HDR images. To produce the bloom effect, the linear HDRR image in the frame buffer is convolved with a convolution kernel in a post-processing step, before converting to RGB space. The convolution step usually requires the use of a large Gaussian kernel that is not practical for realtime graphics, causing programmers to use approximation methods.
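The bright-pass-plus-blur pipeline can be sketched in a few lines of numpy. This is an illustrative approximation (a separable Gaussian applied to a single luminance channel, with parameter names and values of our choosing), not any engine's actual implementation:

```python
import numpy as np

def bloom(hdr, threshold=1.0, sigma=4.0, strength=0.6):
    """Minimal bloom post-process on a 2-D HDR luminance array:
    bright-pass, separable Gaussian blur, additive recombination.
    Real engines typically downsample first to approximate the
    large kernel cheaply."""
    bright = np.clip(hdr - threshold, 0.0, None)    # keep only over-bright light
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # Separable convolution: blur along rows, then along columns.
    blur = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, bright)
    blur = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, blur)
    return hdr + strength * blur                    # fringes bleed past borders

img = np.zeros((64, 64))
img[30:34, 30:34] = 8.0      # an over-bright "window" region in a dark scene
out = bloom(img)             # light now extends beyond the bright square
```

Because only values above `threshold` contribute, the bleed grows with the brightness of the source, reproducing the HDR-dependent behavior described in the Theory section.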
Use in games:
Some of the earliest games to use the bloom effect include the pre-rendered CGI game Riven (1997), the voxel game Outcast (1999), and the real-time 3D polygon games The Bouncer (2000) and Ico (2001). Bloom was later popularized within the game development community in 2004, when an article on the technique was published by the authors of Tron 2.0. Bloom lighting has been used in many games, modifications and game engines such as Quake Live, Cube 2: Sauerbraten and the Spring game engine. The effect was popular in 7th-generation games, which were released from 2005 through to the early 2010s. Several games from the period have received criticism for overuse of the technique. The heavy bloom lighting in RollerCoaster Tycoon 3 (2005) was described as "disgusting" at the time by GameSpot. Gaming Bolt described the trend as a gimmick that had died with the generation, and criticised the heavy use of the technique in major releases of the time such as The Elder Scrolls IV: Oblivion (2006), the Xbox 360 port of Burnout Revenge (2006), and Twilight Princess (2006). Syndicate (2012) has also been described as featuring "eye-melting" bloom.
**Ordered exponential**
Ordered exponential:
The ordered exponential, also called the path-ordered exponential, is a mathematical operation defined in non-commutative algebras, equivalent to the exponential of the integral in commutative algebras. In practice the ordered exponential is used in matrix and operator algebras.
Definition:
Let A be an algebra over a real or complex field K, and a(t) be a parameterized element of A. The parameter t in a(t) is often referred to as the time parameter in this context.
Definition:
The ordered exponential of a is denoted

OE[a](t) = T exp(∫₀ᵗ a(t′) dt′) = Σ_{n=0}^∞ (1/n!) ∫₀ᵗ ⋯ ∫₀ᵗ T[a(t₁) a(t₂) ⋯ a(tₙ)] dt₁ dt₂ ⋯ dtₙ,

where the term n = 0 is equal to 1 and where T is a higher-order operation that ensures the exponential is time-ordered: any product of a(t) that occurs in the expansion of the exponential must be ordered such that the value of t is increasing from right to left of the product; a schematic example:

T[a(t₁) a(t₂)] = a(t₂) a(t₁) for t₁ < t₂.

This restriction is necessary as products in the algebra are not necessarily commutative.
Definition:
The operation maps a parameterized element onto another parameterized element, or symbolically,

OE: a(t) ↦ OE[a](t).

There are various ways to define this integral more rigorously.
Definition:
Product of exponentials The ordered exponential can be defined as the left product integral of the infinitesimal exponentials, or equivalently, as an ordered product of exponentials in the limit as the number of terms grows to infinity:

OE[a](t) = lim_{N→∞} e^{a(t_N) Δt} e^{a(t_{N−1}) Δt} ⋯ e^{a(t₁) Δt} e^{a(t₀) Δt},

where the time moments {t₀, ..., t_N} are defined as tᵢ ≡ i Δt for i = 0, ..., N, and Δt ≡ t / N.
Definition:
The ordered exponential is in fact a geometric integral.
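As a quick numerical illustration of the product definition, the numpy sketch below (the function name is ours) approximates OE by multiplying short-time propagators, with later times on the left:

```python
import numpy as np

def ordered_exponential(a, t, n=2000):
    """Approximate OE[a](t) by the ordered product of short-time
    propagators (I + a(t_i)·Δt), later times multiplying on the left,
    as in the product-of-exponentials definition. Illustrative sketch."""
    dt = t / n
    dim = a(0.0).shape[0]
    u = np.eye(dim)
    for i in range(n):
        ti = (i + 0.5) * dt                 # midpoint of the i-th time slice
        u = (np.eye(dim) + a(ti) * dt) @ u  # new factor on the left
    return u

# When all a(t) commute, OE reduces to the ordinary exponential:
# with a(t) = t·N and N² = 0, OE[a](1) = I + (∫₀¹ t dt)·N = I + 0.5·N.
N = np.array([[0.0, 1.0], [0.0, 0.0]])
u = ordered_exponential(lambda t: t * N, 1.0)
```

For non-commuting a(t) the left-ordering of the factors is exactly what distinguishes this from exp(∫ a), which is the point of the time-ordering operation T.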
Solution to a differential equation The ordered exponential is the unique solution of the initial value problem

d/dt OE[a](t) = a(t) OE[a](t),   OE[a](0) = 1.

Solution to an integral equation The ordered exponential is the solution to the integral equation

OE[a](t) = 1 + ∫₀ᵗ a(t′) OE[a](t′) dt′.

This equation is equivalent to the previous initial value problem.
Infinite series expansion The ordered exponential can be defined as an infinite sum,

OE[a](t) = 1 + ∫₀ᵗ a(t₁) dt₁ + ∫₀ᵗ ∫₀^{t₁} a(t₁) a(t₂) dt₂ dt₁ + ⋯.

This can be derived by recursively substituting the integral equation into itself.
Example:
Given a manifold M where for an element e ∈ TM with group transformation g: e ↦ ge it holds at a point x ∈ M that

de(x) + J(x) e(x) = 0.
Example:
Here, d denotes exterior differentiation and J(x) is the connection operator (1-form field) acting on e(x). Integrating the above equation along a path from x to y, it holds (now with J(x) the connection operator expressed in a coordinate basis)

e(y) = P exp(−∫ₓʸ J(γ(t)) γ′(t) dt) e(x),

with the path-ordering operator P that orders factors in order of the path γ(t) ∈ M. For the special case that J(x) is an antisymmetric operator and γ is an infinitesimal rectangle with edge lengths |u|, |v| and corners at the points x, x+u, x+u+v, x+v, the above expression simplifies as follows:

OE e(x) = exp[−J(x+v)(−v)] exp[−J(x+u+v)(−u)] exp[−J(x+u)v] exp[−J(x)u] e(x) = [1 − J(x+v)(−v)][1 − J(x+u+v)(−u)][1 − J(x+u)v][1 − J(x)u] e(x).
Example:
Hence, it holds the group transformation identity OE[−gJg⁻¹ + (dg)g⁻¹] = g OE[−J] g⁻¹. If −J(x) is a smooth connection, expanding the above quantity to second order in the infinitesimal quantities |u|, |v| one obtains for the ordered exponential an identity with a correction term that is proportional to the curvature tensor dJ + J ∧ J.
**Harwell CADET**
Harwell CADET:
The Harwell CADET was the first fully transistorised computer in Europe, and may have been the first fully transistorised computer in the world.
Harwell CADET:
The electronics division of the Atomic Energy Research Establishment at Harwell, UK built the Harwell Dekatron Computer in 1951, which was an automatic calculator where the decimal arithmetic and memory were electronic, although other functions were performed by relays. By 1953, it was evident that this did not meet AERE's computing needs, and AERE director Sir John Cockcroft encouraged them to design and build a computer using transistors throughout.
Harwell CADET:
E. H. Cooke-Yarborough based the design around a 64-kilobyte (65,536 bytes) magnetic drum memory store with multiple moving heads that had been designed at the National Physical Laboratory, UK. By 1953 his team had transistor circuits operating to read and write on a smaller magnetic drum from the Royal Radar Establishment. The machine used a low clock speed of only 58 kHz to avoid having to use any valves to generate the clock waveforms. This slow speed was partially offset by the ability to add together eight numbers concurrently.The resulting machine was called CADET (Transistor Electronic Digital Automatic Computer – backward). It first ran a simple test program in February 1955. CADET used 324 point-contact transistors provided by the UK company Standard Telephones and Cables, which were the only ones available in sufficient quantity when the project started; 76 junction transistors were used for the first stage amplifiers for data read from the drum, since point-contact transistors were too noisy. CADET was built from a few standardised designs of circuit boards which never got mounted into the planned desktop unit, so it was left in its breadboard form. From August 1956 CADET was offering a regular computing service, during which it often executed continuous computing runs of 80 hours or more.Cooke-Yarborough described CADET as being "probably the second fully transistorised computer in the world to put to use", second to an unnamed IBM machine. Both the Manchester University Transistor Computer and the Bell Laboratories TRADIC were demonstrated incorporating transistors before CADET was operational, although both required some thermionic valves to supply their faster clock power, so they were not fully transistorised. 
In April 1955 IBM announced the IBM 608 transistor calculator, which they claim was "the first all solid-state computing machine commercially marketed" and "the first completely transistorized computer available for commercial installation", and which may have been demonstrated in October 1954, before the CADET.
Harwell CADET:
By 1956, Brian Flowers, head of the theoretical physics division at AERE, was convinced that the CADET provided insufficient computing power for the needs of his numerical analysts and ordered a Ferranti Mercury computer. In 1958, Mercury number 4 became operational at AERE to accompany the CADET for another two years before the CADET was retired after four years' operation.
**International Journal of Cardiology**
International Journal of Cardiology:
The International Journal of Cardiology is a peer-reviewed medical journal that publishes research articles about the study and management of cardiac diseases. The journal is affiliated with the International Society for Adult Congenital Cardiac Disease.
Abstracting and indexing:
The journal is abstracted and indexed in MEDLINE, Science Citation Index, Current Contents, EMBASE, and Scopus. According to the Journal Citation Reports, the journal had a 2020 impact factor of 4.164.
**Latin house**
Latin house:
Latin house is an electronic dance music genre that combines house and Latin American music, such as that of Puerto Rican, Cuban, and Dominican origin.
History:
Origins In the second half of the 1980s, some of the pioneers of house music of Latin-American descent gave birth to this genre by releasing house records in Spanish. Early examples include Jesse Velez's "Girls Out on the Floor", "Amor Puertorriqueño" by Raz on DJ International and "Break 4 Love" by Raze.
History:
1990s to present In the 1990s a new generation of producers and labels broke into the market. Nervous Records released "Quiero Saber" by the Latin Kings, produced by Todd Terry, as well as "Everything's All Right" by Arts of Rhythm and "Philly The Blunt" by Trinidad. Strictly Rhythm employed producer Armand van Helden, who released "Pirates of The Caribbean Vol. III". Songs from the same label include DJ Dero's "Sube", The Tribe's "Go-san-do", R.A.W.'s "Asuca" produced by Erick Morillo, Rare Arts' "Boricua Posse", Escandalo's "Mas Buena" and Fiasco's "Las Mujeres" produced by Norty Cotto, Latin Kaos' "El Bandolero" and "Muevete Mama" and "Sugar Cane" by Afro-Cube.
History:
During the same period (1991–1992), Chicago native Pizarro produced "The Five Tones", "New Perspective EP", "Plastica", "Caliente" and "Perdoname". Other producers like Ralphi Rosario and Masters at Work created Latin house classics, for instance Ralphie's production "Da-Me-Lo" and his remix of Albita's "No se parece a nada", as well as "Sul Chu Cha" by Rosabel, while Louie Vega and Kenny Gonzales remixed "Sume Sigh Say" by House of Gypsies (Todd Terry) and the remarkable Latin house hit "Robi Rob's Boriqua Anthem" by C+C Music Factory.
History:
In the meantime, hybrid experiments were put on the market by the likes of New York's Proyecto Uno, who combined house and merengue in their LPs "Todo el mundo" and "In Da House". Their female counterpart was Lisa M from Puerto Rico, who can be heard on the "No lo derrumbes" and "Flavor of the Latin" albums. Another merengue-house record worthy of mention is "Así mamacita" by Henry Rivera on Los Angeles' Aqua Boogie. The duo Sandy & Papo are known for their LPs "Sandy & Papo" and "Otra Vez".
History:
During the mid-1990s Cutting broke into the Latin house scene and became the most representative label of this genre. Cutting's DJ Norty Cotto was deemed the most representative producer of Latin house. Among the various hits are 2 In A Room's "Las Mujeres", "Carnival" and "Dar la vuelta", Fun City's "Padentro" and "Baila", Sancocho's "Tumba la Casa", "Alcen las manos" and "Que siga el party" (LP) and Los Compadres' "La Rumba". Norty Cotto's mixed compilations also became classics. Fulanito and their LP "El hombre mas famoso de la tierra" is a good combination of house and Latin-American rhythms from that time. El General's "Muevelo" was another of the tunes of that time remixed to the sounds of Latin house, by DJ and producer Pablo Pabanor Ortiz and Erick More Morillo, and on Latin House Party 2 in collaboration with producer Rafael Torres, Ray Abraxas and Davidson Ospina.
History:
Today many other Latino house artists have emerged to create many successful songs of this genre, and also remixes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chalconoid**
Chalconoid:
Chalconoids (from Greek: χαλκός khalkós, "copper", due to their color), also known as chalcones, are natural phenols related to chalcone. They form the central core for a variety of important biological compounds.
Chalconoid:
They show antibacterial, antifungal, antitumor and anti-inflammatory properties. Some chalconoids have demonstrated the ability to block voltage-dependent potassium channels. Chalcones are also natural aromatase inhibitors. Chalcones are aromatic ketones with two phenyl rings that are also intermediates in the synthesis of many biological compounds. Ring closure of hydroxy chalcones leads to the formation of the flavonoid structure. Flavonoids are substances in the plant's secondary metabolism with an array of biological activities.
Chalconoid:
Chalconoids are also intermediates in the Auwers synthesis of flavones.
Biosynthesis and metabolism:
Chalcone synthase is an enzyme responsible for the production of chalconoids in plants.
Chalcone isomerase is responsible for their conversion into flavanones and other flavonoids.
Naringenin-chalcone synthase uses malonyl-CoA and 4-coumaroyl-CoA to produce CoA, naringenin chalcone, and CO2.
In aurones, the chalcone-like structure closes into a 5-atom ring instead of the more typical 6-atom ring (C ring). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Omphalitis of newborn**
Omphalitis of newborn:
Omphalitis of newborn is the medical term for inflammation of the umbilical cord stump in the neonatal period, most commonly attributed to a bacterial infection. Typically, immediately after an infant is born, the umbilical cord is cut, with a small remnant (often referred to as the stump) left behind. Normally the stump separates from the skin within 3–45 days after birth. A small amount of pus-like material is commonly seen at the base of the stump and can be controlled by keeping the stump open to air to dry. Certain bacteria can grow and infect the stump during this process, and as a result significant redness and swelling may develop; in some cases the infection can then spread through the umbilical vessels to the rest of the body. While currently an uncommon anatomical location for infection in the newborn in the United States, it has caused significant morbidity and mortality both historically and in areas where health care is less readily available. In general, when this type of infection is suspected or diagnosed, antibiotic treatment is given, and in cases of serious complications surgical management may be appropriate.
Signs and symptoms:
Clinically, neonates with omphalitis present within the first two weeks of life with signs and symptoms of a skin infection (cellulitis) around the umbilical stump (redness, warmth, swelling, pain), pus from the umbilical stump, fever, fast heart rate (tachycardia), low blood pressure (hypotension), somnolence, poor feeding, and yellow skin (jaundice). Omphalitis can quickly progress to sepsis and presents a potentially life-threatening infection. In fact, even in cases of omphalitis without evidence of more serious infection such as necrotizing fasciitis, mortality is high (in the 10% range).
Causes:
Omphalitis is most commonly caused by bacteria. The culprits usually are Staphylococcus aureus, Streptococcus, and Escherichia coli. The infection is typically caused by a combination of these organisms and is a mixed Gram-positive and Gram-negative infection. Anaerobic bacteria can also be involved.
Diagnosis:
In a normal umbilical stump, the umbilicus first loses its characteristic bluish-white, moist appearance and becomes dry and black. After several days to weeks, the stump should fall off and leave a pink fleshy wound, which continues to heal as it becomes a normal umbilicus. For an infected umbilical stump, diagnosis is usually made by the clinical appearance of the umbilical cord stump and the findings on history and physical examination. There may be some confusion, however, if a well-appearing neonate simply has some redness around the umbilical stump. In fact, a mild degree is common, as is some bleeding at the stump site with detachment of the umbilical cord. The picture may be clouded even further if caustic agents have been used to clean the stump or if silver nitrate has been used to cauterize granulomata of the umbilical stump.
Prevention:
During the 1950s there were outbreaks of omphalitis that then led to anti-bacterial treatment of the umbilical cord stump as the new standard of care. It was later determined that in developed countries keeping the cord dry is sufficient (known as "dry cord care"), as recommended by the American Academy of Pediatrics. The umbilical cord dries more quickly and separates more readily when exposed to air. However, each hospital/birthing center has its own recommendations for care of the umbilical cord after delivery. Some recommend not using any medicinal washes on the cord. Other popular recommendations include triple dye, betadine, bacitracin, or silver sulfadiazine. With regard to the medicinal treatments, there is little data to support any one treatment (or lack thereof) over another. However, one recent review of many studies supported the use of chlorhexidine treatment as a way to reduce risk of death by 23% and risk of omphalitis by anywhere between 27 and 56% in community settings in underdeveloped countries. This study also found that this treatment increased the time that it would take for the umbilical stump to separate or fall off by 1.7 days. Lastly, this large review also supported the notion that in hospital settings no medicinal type of cord care treatment was better at reducing infections compared to dry cord care.
Treatment:
Treatment consists of antibiotic therapy aimed at the typical bacterial pathogens in addition to supportive care for any complications which might result from the infection itself such as hypotension or respiratory failure. A typical regimen will include intravenous antibiotics such as from the penicillin-group which is active against Staphylococcus aureus and an aminoglycoside for activity against Gram-negative bacteria. For particularly invasive infections, antibiotics to cover anaerobic bacteria may be added (such as metronidazole). Treatment is typically for two weeks and often necessitates insertion of a central venous catheter or peripherally inserted central catheter.
Epidemiology:
The current incidence in the United States is somewhere around 0.5% per year; overall, the incidence rate for the developed world falls between 0.2 and 0.7%. In developing countries, the incidence of omphalitis varies from 2 to 7 per 100 live births. There does not appear to be any racial or ethnic predilection.
Epidemiology:
Like many bacterial infections, omphalitis is more common in those patients who have a weakened or deficient immune system or who are hospitalized and subject to invasive procedures. Therefore, infants who are premature, sick with other infections such as blood infection (sepsis) or pneumonia, or who have immune deficiencies are at greater risk. Infants with normal immune systems are at risk if they have had a prolonged birth, birth complicated by infection of the placenta (chorioamnionitis), or have had umbilical catheters. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dissolving pulp**
Dissolving pulp:
Dissolving pulp, also called dissolving cellulose, is bleached wood pulp or cotton linters that has a high cellulose content (> 90%). It has special properties including a high level of brightness and uniform molecular-weight distribution. This pulp is manufactured for uses that require a high chemical purity, and particularly low hemicellulose content, since the chemically similar hemicellulose can interfere with subsequent processes. Dissolving pulp is so named because it is not made into paper, but dissolved either in a solvent or by derivatization into a homogeneous solution, which makes it completely chemically accessible and removes any remaining fibrous structure. Once dissolved, it can be spun into textile fibers (viscose or Lyocell), or chemically reacted to produce derivatized celluloses, such as cellulose triacetate, a plastic-like material formed into fibers or films, or cellulose ethers such as methyl cellulose, used as a thickener.
Manufacture:
Pulpwood: Dissolving pulp is mainly produced chemically from pulpwood in a process that has a low yield (30–35% of the wood); this accounts for about 85–88% of production. Dissolving pulp is made by the sulfite process or the kraft process with an acid prehydrolysis step to remove hemicelluloses. For the highest quality, it should be derived from fast-grown hardwoods with low non-cellulose content. The sulfite process produces pulp with a cellulose content of up to 92 percent. It can use ammonium, calcium, magnesium or sodium as a base. The prehydrolysis sulfate process produces pulp with a cellulose content of up to 96%.
Manufacture:
Special alkaline purification treatments can yield even higher cellulose levels: up to 96 percent for the sulfite process and up to 98 percent for the sulfate process.
Cotton linters: A minor part is produced from the shortest cotton linters, normally second cut. These are washed mechanically and chemically to remove proteins, waxes, pectins and other polysaccharides, then bleached to get the required brightness. Dissolving pulp from cotton linters gives the purest cellulose and is used to manufacture acetate plastics and high-viscosity cellulose ethers.
Applications:
Dissolving pulp is used in production of regenerated cellulose. In the regenerated cellulose process the cellulose is converted to cellulose xanthate which dissolves easily in caustic soda. The resulting viscous liquid can be extruded through spinnerettes and regenerated as man-made fibres. Cellulose can also be dissolved in some organic solvents directly and processed to regenerate the cellulose fibres in different forms. The lyocell process uses an amine oxide to dissolve cellulose and Tencel is the only commercial example of this direct-dissolution process, which unlike the viscose process is pollution-free.
Applications:
The 90–92% cellulose content sulfite pulps are used mostly to make textiles (like rayon) and cellophane. The 96% cellulose content sulfate pulps are used to make rayon yarn for industrial products such as tire cord, rayon staple for high-quality fabrics, and various acetate and other specialty products.
As a raw material of cellulose derivatives, dissolving pulp is used in carboxymethyl cellulose (CMC), methyl cellulose (MC), hydroxypropyl cellulose (HPC), hydroxyethyl cellulose (HEC), etc.
Since dissolving pulp is highly refined, it is a product of high whiteness with few impurities making it suitable in specialty paper-related products such as filter paper and vulcanized fibre.
Cellulose powder is dissolving pulp that has undergone acid hydrolysis, been mechanically disintegrated and made into fine powder.
This pulp is used as a filler for urea-formaldehyde resins and melamine resin products. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Japanese craft**
Japanese craft:
Traditional crafts (工芸, kōgei, lit. 'engineered art') in Japan have a long tradition and history. Included in the category of traditional crafts are handicrafts produced by an individual or a group, as well as work produced by independent studio artists working with traditional craft materials and/or processes.
History:
Japanese craft dates back to when humans first settled on its islands. Handicrafting has its roots in the rural crafts – the material-goods necessities – of ancient times. Handicrafters used naturally and indigenously occurring materials. Traditionally, objects were created to be used and not just to be displayed, and thus the border between craft and art was not always very clear. Crafts were needed by all strata of society and became increasingly sophisticated in their design and execution. Craft had close ties to folk art, but developed into fine art, with a number of aesthetic schools of thought, such as wabi-sabi, arising. Craftsmen and women therefore became artisans with increasing sophistication. However, wares were not just produced for domestic consumption; at some point items such as ceramics made by studio craft were produced for export and became an important pillar of the economy. Family affiliations or bloodlines are of special importance to the aristocracy and the transmission of religious beliefs in various Buddhist schools. In Buddhism, the use of the term "bloodlines" likely relates to a liquid metaphor used in the sutras: the decantation of teachings from one "dharma vessel" to another, describing the full and correct transference of doctrine from master to disciple. Similarly, in the art world, the process of passing down knowledge and experience formed the basis of familial lineages. For ceramic, metal, lacquer, and bamboo craftsmen, this acquisition of knowledge usually involved a lengthy apprenticeship with the master of the workshop, often the father of the young disciple, from one generation to the next. In this system called dentō (伝統), traditions were passed down within a teacher-student relationship (師弟, shitei). It encompassed strict rules that had to be observed in order to enable learning and teaching of a way (dō, 道). The wisdom could be taught either orally (denshō, 伝承) or in writing (densho, 伝書).
Living in the master's household and participating in household duties, apprentices carefully observed the master, senior students, and workshop before beginning any actual training. Even in the later stages of an apprenticeship it was common for a disciple to learn only through conscientious observation. Apprenticeship required hard work from the pupil almost every day in exchange for little or no pay. It was quite common that the mastery of certain crafts was passed down within the family from one generation to the next, establishing veritable dynasties. In that case the established master's name was assumed instead of the personal one. Should there be an absence of a male heir, a relative or a student could be adopted in order to continue the line and assume the prestigious name. With the end of the Edo period and the advent of the modern Meiji era, industrial production was introduced; western objects and styles were copied and started replacing the old. On the fine art level, patrons such as feudal daimyō lords were unable to support local artisans as much as they had done in the past. Although handmade Japanese craft was once the dominant source of objects used in daily life, modern-era industrial production as well as importation from abroad sidelined it in the economy. Traditional craft began to wane, and disappeared in many areas, as tastes and production methods changed. Forms such as swordmaking became obsolete. Japanese scholar Okakura Kakuzō wrote against the fashionable primacy of western art and founded the periodical Kokka (國華, lit. 'Flower of the Nation') to draw attention to the issue. Specific crafts that had been practiced for centuries were increasingly under threat, while others that were more recent developments introduced from the west, such as glassmaking, saw a rise.
History:
Although these objects were designated as National Treasures – placing them under the protection of the imperial government – it took some time for their cultural value to be fully recognized. In order to further protect traditional craft and arts, the government, in 1890, instituted the guild of Imperial Household Artists (帝室技芸員, Teishitsu Gigei-in), who were specially appointed to create works of art for the Tokyo Imperial Palace and other imperial residences. These artists were considered most famous and prestigious and worked in the areas such as painting, ceramics, and lacquerware. Although this system of patronage offered them some kind of protection, craftsmen and women on the folk art level were left exposed. One reaction to this development was the mingei (民芸, "folk arts" or "arts of the people") – the folk art movement that developed in the late 1920s and 1930s, whose founding father was Yanagi Sōetsu (1889–1961). The philosophical pillar of mingei was "hand-crafted art of ordinary people" (民衆的な工芸, minshū-teki-na kōgei). Yanagi Sōetsu discovered beauty in everyday ordinary and utilitarian objects created by nameless and unknown craftspersons.
History:
The Second World War left the country devastated and as a result, craft suffered. The government introduced a new program known as Living National Treasure to recognise and protect craftspeople (individually and as groups) on the fine and folk art level. Inclusion in the list came with financial support for the training of new generations of artisans so that the art forms could continue. In 1950, the national government instituted the intangible cultural properties categorization, which is given to cultural property considered of high historical or artistic value in terms of the craft technique. The term refers exclusively to the human skill possessed by individuals or groups, which are indispensable in producing cultural property. It also took further steps: in 2009, for example, the government inscribed yūki-tsumugi into the UNESCO Intangible Cultural Heritage Lists. Prefectural governments, as well as those on the municipal level, also have their own system of recognising and protecting local craft (meibutsu). Although the government has taken these steps, private sector artisans continue to face challenges trying to stay true to tradition whilst at the same time interpreting old forms and creating new ideas in order to survive and remain relevant to customers. They also face the dilemma of an ageing society wherein knowledge is not passed down to enough pupils of the younger generation, which means dentō teacher-pupil relationships within families break down if a successor is not found. As societal rules changed and became more relaxed, the traditional patriarchal system has been forced to undergo changes as well. In the past, males were predominantly the holders of "master" titles in the most prestigious crafts. Ceramist Tokuda Yasokichi IV was the first female to succeed her father as a master, since he did not have any sons and was unwilling to adopt a male heir. 
Despite modernisation and westernisation, a number of art forms still exist, partly due to their close connection to certain traditions: examples include the Japanese tea ceremony, ikebana, and to a certain degree, martial arts (in the case of swordmaking).
History:
The Japan Traditional Kōgei Exhibition (日本伝統工芸展) takes place every year with the aim of reaching out to the public. In 2015, the Museum of Arts and Design in New York exhibited a number of modern kōgei artists in an effort to introduce Japanese craft to an international audience.
Ceramics:
Japanese pottery and porcelain, one of the country's oldest art forms, dates back to the Neolithic period. Kilns have produced earthenware, pottery, stoneware, glazed pottery, glazed stoneware, porcelain, and blue-and-white ware. Japan has an exceptionally long and successful history of ceramic production. Earthenware was created as early as the Jōmon period (10,000–300 BCE), giving Japan one of the oldest ceramic traditions in the world. Japan is further distinguished by the unusual esteem that ceramics holds within its artistic tradition, owing to the enduring popularity of the tea ceremony.
Ceramics:
Some of the recognised techniques of Japanese ceramic craft are:
- Iro-e (色絵, colour painting)
- Neriage (練上げ, using different colours of clay together)
- Sansai (三彩, three colours of brown, green, and a creamy off-white)
- Saiyū (彩釉, glaze technique with dripping effect)
- Seihakuji (青白磁, a form of blue-white hakuji porcelain)
- Sometsuke (染付, blue and white pottery)
- Tetsu-e (鉄絵, also known as Tetsugusuri; iron glazing)
- Yūri-kinsai (釉裏金彩, metal-leaf application)
- Zōgan (象嵌, damascening and champlevé)

There are many different types of Japanese ware. Those more identified as being close to the craft movement include:
- Bizen ware (備前焼), from Imbe in Bizen province
- Hagi ware (萩焼), from Hagi, Yamaguchi prefecture
- Hasami ware (波佐見焼), from Hasami, Nagasaki prefecture
- Kakiemon (柿右衛門), porcelain developed by Sakaida Kakiemon in Arita, Saga prefecture
- Karatsu ware (唐津焼), from Karatsu, Saga prefecture
- Kutani ware (九谷焼), from Kutani, Ishikawa prefecture
- Mashiko ware (益子焼), from Mashiko, Tochigi prefecture
- Mumyōi ware (無名異焼), from Sado, Niigata prefecture
- Onta ware (小鹿田焼), from Onta, Ōita prefecture
- Setoguro (瀬戸黒), from Seto, Aichi prefecture
- Shigaraki ware (信楽焼), from Shigaraki, Shiga prefecture
- Shino ware (志野焼), from Mino province
- Tokoname ware (常滑焼), from Tokoname, Aichi prefecture
- Tsuboya ware (壺屋焼), from the Ryūkyū Islands
Textiles:
Textile crafts include silk, hemp, linen and cotton woven, dyed and embroidered into various forms—from crafts originating from folk designs to complex silk weaves intended for the upper classes.
Village crafts that evolved from ancient folk traditions also continued in the form of weaving and indigo dyeing—by the Ainu people of Hokkaidō (whose distinctive designs have prehistoric prototypes) and by other remote farming families in northern Japan.
Textiles:
Traditional craft textiles are typically used primarily for Japanese clothing, such as long, thin bolts of cloth (tanmono) used to sew kimono, yukata and furisode, as well as other types of kimono. Historically, these textiles would have been used to sew the kosode (the historic precursor to the kimono). They are also used to sew obi, the sash worn with a kimono. Accessories such as kanzashi are also commonly made from textiles such as kinsha and chirimen (smooth crêpe and textured crêpe respectively). Traditional footwear, such as geta, zōri and okobo, also use textiles in the form of hanao, the fabric thongs used to hold the shoe on the foot; some okobo also feature brocade fabric around the body of the shoe.
Textiles:
The different techniques for dyeing designs onto fabric are:
- Yūzen (友禅染)
- Katazome (型絵染)
- Edo komon (江戸小紋)
- Nagaita chugata (長板中形)
- Mokuhan-zome (木版染)
- Tsujigahana
- Shibori

Some weaving techniques are:
- Kasuri (絣織)
- Tsumugi (紬織)
- Echigo-jōfu (越後上布)
- Saga-nishiki (佐賀錦)

Amongst the more well-known regional textiles are:
- Nishijin-ori (西陣織), silk brocade using floating yarns and gilt paper, from the Nishijin district of Kyoto
- Yūki-tsumugi (結城紬), a variety of tsumugi from Yūki, Ibaraki prefecture
- Kumejima-tsumugi (久米島紬), a variety of tsumugi from Kumejima, Okinawa
- Kagayūzen (加賀友禅), a dyeing technique from Kaga, Ishikawa prefecture
- Kyōyūzen (京友禅), a dyeing technique from Kyoto
- Bingata, a stencil-dye technique from the Ryukyuan Islands

Other techniques include kumihimo (組紐) braid making and kogin zashi (こぎん刺し), a form of sashiko embroidery.
Lacquerware:
The art of Japanese lacquerware can be traced to prehistoric artefacts. Japanese lacquerware is most often employed on wooden objects, which receive multiple layers of refined lac juices, each of which must dry before the next is applied. These layers make a tough skin impervious to water damage and resistant to breakage, providing lightweight, easy-to-clean utensils of every sort. The decoration on such lacquers, whether carved through different-colored layers or in surface designs, applied with gold or inlaid with precious substances, has been a prized art form since the Nara period (710–94 CE).Items produced using lacquer are used for daily necessities like bowls and trays, but also for tea ceremony utensils such as chaki (tea caddies) and kōgō (incense containers). Items also decorated with lacquer, and used more commonly in the past, include netsuke and inrō.
Lacquerware:
Japanese lacquerware is closely entwined with wood and bamboo work; the base material is usually wood, but bamboo (藍胎, rantai) or linen (乾漆, kanshitsu) can also be used.

The different techniques used in the application and decoration of lacquer are:
- Urushi-e (漆絵), the oldest and most basic decorative technique
- Maki-e (蒔絵)
- Raden (螺鈿)
- Chinkin (沈金)
- Kinma (蒟醤)
- Choshitsu (彫漆)
- Hiramon (平文)
- Rankaku (卵殻)
- Kamakura-bori (鎌倉彫)

Amongst the more well-known types of lacquerware are:
- Wajima-nuri (輪島塗), lacquerware from Wajima, Ishikawa prefecture
- Tsugaru-nuri (津軽塗), lacquerware from the Tsugaru region around Hirosaki, Aomori prefecture
Wood and bamboo:
Wood and bamboo have always had a place in Japanese architecture and art due to the abundance of available materials, resulting in the long tradition of Japanese carpentry. Both secular and religious buildings were and are made out of wood, as well as items used in the household, typically dishes and boxes.
Other traditions of woodwork include yosegi (Japanese marquetry work) and the making of furniture such as tansu. Japanese tea ceremony is closely entwined with the practices of bamboo crafts (for spoons) and woodwork and lacquerware (for natsume).
Wood and bamboo:
Types of woodwork include:
- Sashimono (指物)
- Kurimono (刳物)
- Hikimono (挽物)
- Magemono (曲物)

Japanese bamboowork implements are produced for tea ceremonies, ikebana flower arrangement and interior goods. The types of bamboowork are:
- Amimono (編物)
- Kumimono (組物)

The art of basket weaving in patterns such as kagome (籠目) is well known; its name is composed from the words kago (basket) and me (eyes), referring to the pattern of holes found in kagome, where laths woven in three directions (horizontally, diagonally left and diagonally right) create a pattern of trihexagonal tiling. The weaving process gives the kagome pattern a chiral wallpaper group symmetry of p6 (632).
Wood and bamboo:
Other materials such as reeds are also used in the broad category of Japanese woodwork. Neko chigura is a traditional form of weaving basket for cats.
Amongst the more well-known varieties of miscellaneous woodwork are:
- Hakoneyosegizaiku (箱根寄木細工), wooden marquetry from Hakone, Ashigarashimo district, and Odawara, Kanagawa prefecture
- Iwayadotansu (岩谷堂箪笥), wooden chests of drawers from Oshu, Iwate prefecture
Metalwork:
Early Japanese iron-working techniques date back to the 3rd to 2nd century BCE. Japanese swordsmithing is of extremely high quality and greatly valued; swordsmithing in Japan originated before the 1st century BCE, and reached the height of its popularity as the chief possession of warlords and samurai. Swordsmithing is considered a separate artform from iron- and metalworking, and has moved beyond the craft it once started out as.
Metalwork:
Outside of swordsmithing, a number of items for daily use were historically made out of metal, resulting in the development of metalworking outside of the production of weaponry.
Traditional metal casting techniques include:
- Rogata (蝋型)
- Sogata (惣型)
- Komegata (込型)

Smithing (鍛金), the technique of shaping metal items by beating them with a hammer, is also used in traditional Japanese metalwork.
Metalwork:
Arguably the most important Japanese metalworking technique is forge welding (鍛接), the joining of two pieces of metal—typically iron and carbon steel—by heating them to a high temperature and hammering them together, or forcing them together by other means. Forge welding is commonly used to make tools such as chisels and planes. One of the most famous areas for its use of forge welding is Yoita, Nagaoka City, located in Niigata prefecture, where a technique known as Echigo Yoita Uchihamono (越後与板打刃物) is used.
Metalwork:
To create various patterns on the surface of a piece of metal, metal carving is used to apply decorative designs. The techniques include carving (彫り), metal inlay (象嵌), and embossing (打ち出し).

Amongst the more well-known types of Japanese metalware are:
- Nambutekki (南部鉄器), ironware from Morioka and Oshu, Iwate prefecture
- Takaoka Doki (高岡銅器), copperware from Takaoka, Toyama prefecture
Dolls:
There are various types of traditional Japanese dolls (人形, ningyō, lit. 'human form'), some representing children and babies, some representing the imperial court, warriors and heroes, fairy-tale characters, gods and (rarely) demons, and also everyday people. Many types of ningyō have a long tradition and are still made today, for household shrines, formal gift-giving, or for festival celebrations such as Hinamatsuri, the doll festival, or Kodomo no Hi, Children's Day. Some are manufactured as a local craft, to be purchased by pilgrims as a souvenir of a temple visit or some other trip.
Dolls:
There are four different basic types of doll, based on their base material:
- Wooden dolls (木彫人形)
- Toso dolls (桐塑人形), made of toso, a clay-like substance of paulownia sawdust mixed with paste
- Harinuki dolls (張抜人形), made of papier-mâché
- Totai dolls (陶胎人形), made of ceramic

The painting or application techniques are:
- Nunobari (布貼り)
- Kimekomi (木目込み)
- Hamekomi (嵌込み)
- Kamibari (紙貼り)
- Saishiki (彩色)
- Saicho (彩彫)

One well-known type of ningyō is Hakata ningyō (博多人形).
Paper making:
The Japanese art of making paper from the mulberry plant called washi (和紙) is thought to have begun in the 6th century. Dyeing paper with a wide variety of hues and decorating it with designs became a major preoccupation of the Heian court, and the enjoyment of beautiful paper and its use has continued thereafter, with some modern adaptations. The traditionally made paper called izumo (after the shrine area where it is made) was especially desired for fusuma (sliding panels) decoration, artists' papers, and elegant letter paper. Some printmakers have their own logo made into their papers, and since the Meiji period, another special application has been Western marbleized end papers (made by the Atelier Miura in Tokyo).
Other crafts:
Glass The tradition of glass production goes back as far as the Kofun period in Japan, but was used very rarely and more for decorative purposes, such as decorating some kanzashi. Only relatively late in the Edo period did it experience increased popularity, and with the beginning of modernisation during the Meiji era large-scale industrial production of glassware commenced.
Other crafts:
Despite the advent of wider industrial production, glassware continues to exist as a craft, for example in traditions such as Edo kiriko (江戸切子) and Satsuma kiriko (薩摩切子). The various techniques used are: glassblowing (吹きガラス), cut glass (切子), gravure (グラヴィール), pâte de verre (パート・ド・ヴェール), and enameling (エナメル絵付け).
Cloisonné Cloisonné (七宝, shippō) is a glass-like glaze that is applied to a metal framework and then fired in a kiln. It developed especially in Owari Province around Nagoya in the late Edo period and into the Meiji era. One of the leading traditional producers still in existence is the Ando Cloisonné Company.
Other crafts:
Techniques of shippō include: Yusen-shippō (有線七宝), Shotai-shippō (省胎七宝), and Doro-shippō (泥七宝).
Gem carving Gem carving (砡, gyoku) is the carving of naturally patterned agate or various hard crystals into tea bowls and incense containers.
Decorative gilt or silver leaf Kirikane (截金) is a decorative technique used for paintings and Buddhist statues, which applies gold, silver, or platinum leaf cut into geometric patterns of lines, diamonds, and triangles.
Inkstone carving Calligraphy is considered one of the classical refinements and art forms of Japan. The production of inkstones was therefore greatly valued.
Ivory carving Bachiru (撥鏤) is the art of engraving and dyeing ivory.
**Wishbone (computer bus)**
Wishbone (computer bus):
The Wishbone Bus is an open source hardware computer bus intended to let the parts of an integrated circuit communicate with each other. The aim is to allow differing cores to be connected to each other inside a chip. The Wishbone Bus is used by many designs in the OpenCores project.
Wishbone is intended as a "logic bus". It does not specify electrical information or the bus topology. Instead, the specification is written in terms of "signals", clock cycles, and high and low levels.
This ambiguity is intentional. Wishbone is made to let designers combine several designs written in Verilog, VHDL or some other logic-description language for electronic design automation (EDA). Wishbone provides a standard way for designers to combine these hardware logic designs (called "cores").
Wishbone (computer bus):
Wishbone is defined to have 8, 16, 32, and 64-bit buses. All signals are synchronous to a single clock but some slave responses must be generated combinatorially for maximum performance. Wishbone permits addition of a "tag bus" to describe the data. But reset, simple addressed reads and writes, movement of blocks of data, and indivisible bus cycles all work without tags.
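As a rough illustration of how these signals interact, the following Python sketch models a Wishbone-style single read/write cycle handshake. The signal names (cyc, stb, we, adr, ack) follow the specification, but the one-response slave and its register contents are made-up examples, not part of the standard:

```python
# Illustrative sketch of a Wishbone "classic" single cycle, modelled in
# Python rather than an HDL.  The master asserts cyc (cycle) and stb
# (strobe) with an address; the slave answers with ack and read data.

class WishboneSlave:
    """A trivial slave: a small register file that acks any strobed access."""
    def __init__(self, registers):
        self.registers = registers

    def respond(self, cyc, stb, we, adr, dat_w):
        # The slave only responds while the master holds cyc and stb high.
        if not (cyc and stb):
            return {"ack": 0, "dat_r": 0}
        if we:                                   # write cycle
            self.registers[adr] = dat_w
            return {"ack": 1, "dat_r": 0}
        return {"ack": 1, "dat_r": self.registers[adr]}   # read cycle

def master_read(slave, adr):
    """Drive one classic read cycle: assert cyc/stb and wait for ack."""
    for _ in range(16):                          # bounded wait in place of a clock
        out = slave.respond(cyc=1, stb=1, we=0, adr=adr, dat_w=0)
        if out["ack"]:
            return out["dat_r"]                  # master then drops cyc/stb
    raise TimeoutError("no ack from slave")

slave = WishboneSlave(registers={0x0: 0xDEADBEEF, 0x4: 0x12345678})
print(hex(master_read(slave, 0x4)))  # 0x12345678
```

A real interconnect would also carry the data-width and tag signals described above; the point here is only the cyc/stb/ack handshake that every Wishbone cycle shares.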
Wishbone (computer bus):
Wishbone is open source. To prevent preemption of its technologies by aggressive patenting, the Wishbone specification includes examples of prior art, to prove its concepts are in the public domain.
A device does not conform to the Wishbone specification unless it includes a data sheet that describes what it does, bus width, utilization, etc. Promoting reuse of a design requires the data sheet. Making a design reusable in turn makes it easier to share with others.
The Simple Bus Architecture is a simplified version of the Wishbone specification.
Wishbone topologies:
Wishbone adapts well to common topologies such as point-to-point, many-to-many (i.e. the classic bus system), hierarchical, or even switched fabrics such as crossbar switches. In the more exotic topologies, Wishbone requires a bus controller or arbiter, but devices still maintain the same interface.
(Diagram captions: shared bus, data flow, and crossbar switch topologies.)
Comparisons:
Wishbone control signals compared to other system on a chip (SoC) bus standards:
**Frequency drift**
Frequency drift:
In electrical engineering, and particularly in telecommunications, frequency drift is an unintended and generally arbitrary offset of an oscillator from its nominal frequency. Causes may include component aging, changes in temperature that alter the piezoelectric effect in a crystal oscillator, or problems with a voltage regulator which controls the bias voltage to the oscillator. Frequency drift is traditionally measured in Hz/s. Frequency stability can be regarded as the absence (or a very low level) of frequency drift.
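As a sketch of what a Hz/s figure means in practice, drift can be estimated as the slope of a least-squares line fitted to periodic frequency readings. The data points below are invented for illustration:

```python
# Hedged sketch: estimating frequency drift (in Hz/s) from periodic
# frequency measurements via an ordinary least-squares line fit.

def drift_rate(times_s, freqs_hz):
    """Slope of the best-fit line freq = f0 + drift * t, in Hz/s."""
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_f = sum(freqs_hz) / n
    num = sum((t - mean_t) * (f - mean_f) for t, f in zip(times_s, freqs_hz))
    den = sum((t - mean_t) ** 2 for t in times_s)
    return num / den

# A nominal 10 MHz oscillator drifting upward by 0.5 Hz per second:
times = [0, 1, 2, 3, 4]
freqs = [10_000_000.0, 10_000_000.5, 10_000_001.0, 10_000_001.5, 10_000_002.0]
print(drift_rate(times, freqs))  # 0.5
```

Real measurements would be noisy, which is exactly why a fit over many samples, rather than a single difference, is the usual way to quote a drift rate.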
Frequency drift:
On a radio transmitter, frequency drift can cause a station to drift into an adjacent channel, causing illegal interference. Because of this, frequency allocation regulations specify the allowed tolerance for such oscillators in a type-accepted device. A temperature-compensated voltage-controlled crystal oscillator (TCVCXO) is normally used for frequency modulation.
On the receiver side, frequency drift was mainly a problem in early tuners, particularly for analog dial tuning, and especially on FM, which exhibits a capture effect. However, the use of a phase-locked loop (PLL) essentially eliminates the drift issue. For transmitters, a numerically controlled oscillator (NCO) also does not have problems with drift.
Drift differs from Doppler shift, which is a perceived difference in frequency due to motion of the source or receiver, even though the source is still producing the same wavelength. It also differs from frequency deviation, which is the inherent and necessary result of modulation in both FM and phase modulation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
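To make the contrast concrete, here is a minimal sketch (with invented numbers) of the acoustic Doppler formula for a stationary source and a receiver moving toward it: the observed frequency shifts with motion even though the source oscillator is not drifting at all:

```python
# Acoustic Doppler shift for a stationary source and an approaching
# receiver: f_observed = f_source * (c + v_receiver) / c.
# The speed of sound c and the example values are illustrative.

def doppler_observed_hz(f_source_hz, v_receiver_ms, c_ms=343.0):
    """Observed frequency when the receiver approaches a stationary source."""
    return f_source_hz * (c_ms + v_receiver_ms) / c_ms

print(doppler_observed_hz(1000.0, 34.3))  # ~1100 Hz: a 10% apparent shift
print(doppler_observed_hz(1000.0, 0.0))   # 1000.0 Hz: no motion, no shift
```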
**No-till farming**
No-till farming:
No-till farming (also known as zero tillage or direct drilling) is an agricultural technique for growing crops or pasture without disturbing the soil through tillage. No-till farming decreases the amount of soil erosion tillage causes in certain soils, especially in sandy and dry soils on sloping terrain. Other possible benefits include an increase in the amount of water that infiltrates into the soil, soil retention of organic matter, and nutrient cycling. These methods may increase the amount and variety of life in and on the soil. While conventional no-tillage systems use herbicides to control weeds, organic systems use a combination of strategies, such as planting cover crops as mulch to suppress weeds.

There are three basic methods of no-till farming. "Sod seeding" is when crops are sown with seeding machinery into a sod produced by applying herbicides on a cover crop (killing that vegetation). "Direct seeding" is when crops are sown through the residue of the previous crop. "Surface seeding" or "direct seeding" is when seeds are left on the surface of the soil; on flatlands, this requires no machinery and minimal labor.

Tillage is dominant in agriculture today, but no-till methods may have success in some contexts. In some cases minimum tillage or "low-till" methods combine till and no-till methods. For example, some approaches may use shallow cultivation (i.e. using a disc harrow) but no plowing, or use strip tillage.
Background:
Tillage is the agricultural preparation of soil by mechanical agitation, typically removing weeds established in the previous season. Tilling can create a flat seed bed or one that has formed areas, such as rows or raised beds, to enhance the growth of desired plants. It is an ancient technique, with clear evidence of its use since at least 3000 B.C.

No-till farming is not equivalent to conservation tillage or strip tillage. Conservation tillage is a group of practices that reduce the amount of tillage needed. No-till and strip tillage are both forms of conservation tillage. No-till is the practice of never tilling a field. Tilling every other year is called rotational tillage.
Background:
The effects of tillage can include soil compaction; loss of organic matter; degradation of soil aggregates; death or disruption of soil microbes and other organisms including mycorrhizae, arthropods, and earthworms; and soil erosion where topsoil is washed or blown away.
Origin:
The idea of modern no-till farming started in the 1940s with Edward H. Faulkner, author of Plowman's Folly, but it was not until the development after WWII of powerful herbicides such as paraquat that various researchers and farmers started to try out the idea. The first adopters of no-till include Klingman (North Carolina), Edward Faulkner, L.A. Porter (New Zealand), Harry and Lawrence Young (Herndon, Kentucky), the Instituto de Pesquisas Agropecuarias Meridional (1971 in Brazil) with Herbert Bartz.
Adoption across the world:
Land under no-till farming has increased across the world. In 1999, about 45 million ha (170,000 sq mi) was under no-till farming worldwide, which increased to 72 million ha (280,000 sq mi) in 2003 and to 111 million ha (430,000 sq mi) in 2009.
Adoption across the world:
Australia Per figures from the Australian Bureau of Statistics (ABS) Agricultural Resource Management Survey, in Australia the percentage of agricultural land under no-till farming methods was 26% in 2000–01, which more than doubled to 57% in 2007–08. As of 30 June 2017, of the 20 million ha (77,000 sq mi) of crop land cultivated, 79% (or 16 million hectares) received no cultivation. Similarly, 70% (or 2 million hectares) of the 3 million hectares of pasture land cultivated received no cultivation, apart from sowing.
Adoption across the world:
South America South America had the highest adoption of No-till farming in the world, which in 2014 constituted 47% of the total global area under no-till farming.
Adoption across the world:
The countries with the highest adoption are Argentina (80%), Brazil (50%), Paraguay (90%), and Uruguay (82%). In Argentina, the usage of no-till resulted in a reduction of soil erosion losses by 80%, cost reductions of more than 50%, and increased farm incomes. In Brazil, the usage of no-till resulted in a reduction of soil erosion losses by 97%, higher farm productivity, and an income increase of 57% five years after the start of no-till farming. In Paraguay, net farm incomes increased by 77% after adoption of no-till farming.
Adoption across the world:
United States No-till farming is used in the United States and the area managed in this way continues to grow. This growth is supported by a decrease in costs. No-till management results in fewer passes with equipment, and the crop residue prevents evaporation of rainfall and increases water infiltration into the soil. In 2017, no-till farming was being used on about 21% of the cultivated cropland in the US.
Benefits and issues:
Profit, economics, yield Some studies have found that no-till farming can be more profitable in some cases. It may reduce labour, fuel, irrigation, and machinery costs. No-till can increase yield because of higher water infiltration and storage capacity, and less erosion. Another possible benefit is that because of the higher water content, instead of leaving a field fallow it can make economic sense to plant another crop instead.

One problem of no-till farming is that in spring, the soil both warms and dries more slowly, which may delay planting; harvest can thus occur later than in a conventionally tilled field. The slower warming occurs because crop residue is lighter in color than the soil that would be exposed by conventional tillage, and therefore absorbs less solar energy. Warmer temperatures from climate change may offset these effects; in the meantime, the problem can be managed by using row cleaners on a planter.

Profitability also depends on input costs: if no-till reduces production, the profitability of the practice may fall, but as fuel and labor prices continue to rise, the fewer passes required may make a no-till operation more practical for farms. In spring, poorly draining clay soil may produce less in a cold and wet year.

The economic and ecological benefits of implementing no-till practices can require sixteen to nineteen years to appear. The first decade of no-till implementation often shows a trend of decreasing revenue; implementation periods greater than ten years usually show a gain in profit rather than a decrease in profitability.
Benefits and issues:
Costs and management No-till farming requires some different skills from those of conventional farming. A combination of technique, equipment, pesticides, crop rotation, fertilization, and irrigation have to be used for local conditions.
Benefits and issues:
Equipment On some crops, like continuous no-till corn, the thickness of the residue on the surface of the field can become a problem without proper preparation and/or equipment. No-till farming requires specialized seeding equipment, such as heavier seed drills to penetrate through the residue. Ploughing requires more powerful tractors, so tractors can be smaller with no-tillage. Costs can be offset by selling ploughs and tractors, but farmers often keep their old equipment while trying out no-till farming. This results in a higher investment into equipment.
Benefits and issues:
Increased herbicide use One of the purposes of tilling is to remove weeds. No-till farming changes weed composition: faster-growing weeds may be reduced as competition increases with the eventual growth of perennials, shrubs, and trees. This is usually addressed with a herbicide such as glyphosate in lieu of tillage for seedbed preparation, so no-tillage often uses more herbicides than conventional tillage. Some alternatives are winter cover crops, soil solarization, or burning. However, the use of herbicides is not strictly necessary, as demonstrated by Masanobu Fukuoka.
Benefits and issues:
No-till occasionally uses cover crops to help control weeds and increase organic residue in the soil (or nutrients, by using legumes). Cover crops then need to be killed so that the newly planted crops can get enough light, water, and nutrients. This can be done with rollers, crimpers, choppers, and in other ways. The residue is then planted through, and left as a mulch. Cover crops typically must be crimped when they enter the flowering stage.

With no-till farming, residue from the previous year's crops lies on the surface of the field, which can cause different, greater, or more frequent disease or weed problems compared to tillage farming.
Benefits and issues:
Fertilizer One of the most common yield reducers is nitrogen being immobilized in the crop residue, which can take a few months to several years to decompose, depending on the crop's C-to-N ratio and the local environment. Fertilizer therefore needs to be applied at a higher rate. One solution to this problem is to integrate animal husbandry in various ways to aid decomposition. After a transition period (4–5 years in Kansas, USA), organic matter may build up in the soil, and nutrients in the organic matter are eventually released into the soil.
Benefits and issues:
Environmental policy A legislative bill, H.R. 2508 of the 117th Congress, also known as the NO EMITS Act, has been proposed to amend the Food Security Act of 1985. It was introduced by Representative Rodney Davis of Illinois, a member of the House Committee on Agriculture. The bill proposes ways to offset agricultural emissions by implementing strategies such as minimal tillage or no tillage. H.R. 2508 has been referred to the House Committee on Agriculture and is also backed by two other representatives from heavily agricultural states, Rep. Eric A. Crawford of Arkansas and Rep. Don Bacon of Nebraska. H.R. 2508 proposes incentive programs to provide financial and mechanical assistance to farmers and agricultural operations that transition their production processes, as well as providing contacts to lower risk for producers. Funding has also been proposed for conservation innovation trials.

Farmers within the U.S. are encouraged through government subsidies and programs to meet a defined level of tillage conservation. These include the Environmental Quality Incentives Program (EQIP) and the Conservation Stewardship Program (CSP). EQIP is a voluntary program that helps farmers and other participants adopt conservation practices without suffering financially for doing so; its efforts aim to reduce contamination from the agricultural industry and to improve soil health. The CSP assists those looking to build conservation into their practices by suggesting what might be done for their circumstances and needs.
Benefits and issues:
Environmental Greenhouse gases No-till farming has been claimed to increase soil organic matter, and thus increase carbon sequestration. While many studies report soil organic carbon increases in no-till systems, others conclude that these effects may not be observed in all systems, depending on factors such as climate and topsoil carbon content. A 2020 study demonstrated that the combination of no-till and cover cropping sequesters more carbon than either practice alone, suggesting that the two practices have a synergistic effect on carbon capture.

There is debate over whether the increased sequestration sometimes detected is actually occurring, or is due to flawed testing methods or other factors. A 2014 study claimed that certain no-till systems may sequester less carbon than conventional tillage systems, saying that the "no-till subsurface layer is often losing more soil organic carbon stock over time than is gained in the surface layer." The study also pointed out the need for a uniform definition of soil organic carbon sequestration among researchers, concluding: "Additional investments in soil organic carbon (SOC) research is needed to better understand the agricultural management practices that are most likely to sequester SOC or at least retain more net SOC stocks."

No-till farming reduces nitrous oxide (N2O) emissions by 40–70%, depending on rotation. Nitrous oxide is a potent greenhouse gas, 300 times stronger than CO2, and stays in the atmosphere for 120 years.
Benefits and issues:
Soil and desertification No-till farming improves aggregates and reduces erosion; soil erosion might be reduced almost to soil production rates. Research from over 19 years of tillage studies at the United States Department of Agriculture Agricultural Research Service found that no-till farming makes soil less erodible than ploughed soil in areas of the Great Plains. The first inch of no-till soil contains more aggregates and is two to seven times less vulnerable than that of ploughed soil; more organic matter in this layer is thought to help hold soil particles together. Per the Food and Agriculture Organization (FAO) of the United Nations, no-till farming can stop desertification by maintaining soil organic matter and reducing wind and water erosion. No ploughing also means less airborne dust.
Benefits and issues:
Water No-till farming improves water retention: crop residues help water from natural precipitation and irrigation infiltrate the soil, and limit evaporation, conserving water. Each tillage pass increases evaporative water loss by around 1/3 to 3/4 inch (0.85 to 1.9 cm).

Gully formation can cause soil erosion in some no-till crops such as soybeans, although models of other crops under no-tillage show less erosion than conventional tillage. Grass waterways can be a solution. Any gullies that form in fields not being tilled get deeper each year, instead of being smoothed out by regular plowing.
Benefits and issues:
A problem in some fields is water saturation in soils. Switching to no-till farming may improve drainage, because soil under continuous no-till has a higher water infiltration rate.
Biota and wildlife No-tilled fields often have more annelids and other invertebrates, and more wildlife such as deer mice.
Albedo Tillage lowers the albedo of croplands. The potential for global cooling as a result of increased albedo in no-till croplands is similar in magnitude to other biogeochemical carbon sequestration processes.
**3D Core Graphics System**
3D Core Graphics System:
The 3D Core Graphics System (a.k.a. Core) was the first graphical standard developed. A group of 25 experts of the ACM Special Interest Group SIGGRAPH developed this "conceptual framework". The specifications were published in 1977, and it became a foundation for many future developments in the field of computer graphics.
**Mikeyy**
Mikeyy:
Mikeyy is the name of a computer worm that spread approximately 10,000 automated messages (or "tweets") across the social networking and microblogging website Twitter.com in four discrete attacks "between 2 AM Saturday April 11, 2009 Pacific time and early Monday (April 14, 2009) morning" before it was "identified and deleted". The tweets promoted a website called StalkDaily. The worm was written by 17-year-old Michael Mooney, who operates a website to point out vulnerabilities in Twitter while advertising his own site.
**Mikuni (company)**
Mikuni (company):
Mikuni Corporation (株式会社ミクニ, Kabushiki gaisha Mikuni) is a Japanese automotive products manufacturer. Its business is focused on carburetors, fuel injectors, and other automobile- and motorcycle-related equipment.
History and description:
The firm was founded in 1923 and incorporated in 1948. The company is best known for supplying carburetors to many major Japanese motorcycle manufacturers. It is also known for its licensed copies of Solex carburetors, which were used on several Japanese cars.
Mikuni operates in Southeast Asia, especially in Thailand and Indonesia with motorcycle, scooter, and moped manufacturers Yamaha, Suzuki, Hyosung Motors & Machinery Inc., TVS Motor Company and Honda.
**Fence (mathematics)**
Fence (mathematics):
In mathematics, a fence, also called a zigzag poset, is a partially ordered set (poset) in which the order relations form a path with alternating orientations: a < b > c < d > e < f > g < h ⋯ or a > b < c > d < e > f < g > h ⋯. A fence may be finite, or it may be formed by an infinite alternating sequence extending in both directions. The incidence posets of path graphs form examples of fences.
A linear extension of a fence is called an alternating permutation; André's problem of counting the number of different linear extensions has been studied since the 19th century. The solutions to this counting problem, the so-called Euler zigzag numbers or up/down numbers, are: 1, 1, 2, 4, 10, 32, 122, 544, 2770, 15872, 101042, ...
Fence (mathematics):
(sequence A001250 in the OEIS). The number of antichains in a fence is a Fibonacci number; the distributive lattice with this many elements, generated from a fence via Birkhoff's representation theorem, has as its graph the Fibonacci cube. A partially ordered set is series-parallel if and only if it does not have four elements forming a fence. Several authors have also investigated the number of order-preserving maps from fences to themselves, or to fences of other sizes. An up-down poset Q(a,b) is a generalization of a zigzag poset in which there are a downward orientations for every upward one and b total elements. For instance, Q(2,9) has the elements and relations a > b > c < d > e > f < g > h > i.
Fence (mathematics):
In this notation, a fence is a partially ordered set of the form Q(1,n).
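Both counting results above can be checked by brute force for small fences. The sketch below (an illustration, not from the article) counts alternating permutations, which are exactly the linear extensions of the n-element fence, and antichains, whose counts are Fibonacci numbers:

```python
from itertools import permutations, combinations

def zigzag_linear_extensions(n):
    """Count alternating permutations of n elements (up-down or down-up),
    i.e. linear extensions of the n-element fence -- OEIS A001250."""
    def alternating(p):
        # Consecutive comparisons must strictly alternate in direction.
        return all((p[i] < p[i + 1]) != (p[i + 1] < p[i + 2])
                   for i in range(len(p) - 2))
    return sum(alternating(p) for p in permutations(range(n)))

def fence_antichains(n):
    """Count antichains of the n-element fence.  Comparable pairs are exactly
    adjacent indices, so antichains are independent sets of the path P_n."""
    count = 0
    for k in range(n + 1):
        for s in combinations(range(n), k):
            if all(b - a > 1 for a, b in zip(s, s[1:])):
                count += 1
    return count

print([zigzag_linear_extensions(n) for n in range(1, 6)])
# [1, 2, 4, 10, 32] -- matching the zigzag numbers
print([fence_antichains(n) for n in range(1, 8)])
# [2, 3, 5, 8, 13, 21, 34] -- consecutive Fibonacci numbers
```

The brute-force search is factorial in n, so it is only a verification aid; the efficient way to compute zigzag numbers is the boustrophedon (Seidel-Entringer) recurrence.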
Equivalent conditions:
The following conditions are equivalent for a poset P:
- P is a disjoint union of zigzag posets.
- If a ≤ b ≤ c in P, either a = b or b = c.
- < ∘ < = ∅, i.e. it is never the case that a < b and b < c, so that < is vacuously transitive.
- P has dimension at most one (defined analogously to the Krull dimension of a commutative ring).
- Every element of P is either maximal or minimal.
- The slice category Pos/P is cartesian closed.

The prime ideals of a commutative ring R, ordered by inclusion, satisfy the equivalent conditions above if and only if R has Krull dimension at most one.
**Urinary tract infection**
Urinary tract infection:
A urinary tract infection (UTI) is an infection that affects part of the urinary tract. When it affects the lower urinary tract it is known as a bladder infection (cystitis), and when it affects the upper urinary tract it is known as a kidney infection (pyelonephritis). Symptoms of a lower urinary tract infection include pain with urination, frequent urination, and feeling the need to urinate despite having an empty bladder. Symptoms of a kidney infection include fever and flank pain, usually in addition to the symptoms of a lower UTI. Rarely the urine may appear bloody. In the very old and the very young, symptoms may be vague or non-specific.

The most common cause of infection is Escherichia coli, though other bacteria or fungi may sometimes be the cause. Risk factors include female anatomy, sexual intercourse, diabetes, obesity, and family history. Although sexual intercourse is a risk factor, UTIs are not classified as sexually transmitted infections (STIs). Kidney infection, if it occurs, usually follows a bladder infection but may also result from a blood-borne infection. Diagnosis in young healthy women can be based on symptoms alone. In those with vague symptoms, diagnosis can be difficult because bacteria may be present without there being an infection. In complicated cases or if treatment fails, a urine culture may be useful.

In uncomplicated cases, UTIs are treated with a short course of antibiotics such as nitrofurantoin or trimethoprim/sulfamethoxazole. Resistance to many of the antibiotics used to treat this condition is increasing. In complicated cases, a longer course or intravenous antibiotics may be needed. If symptoms do not improve in two or three days, further diagnostic testing may be needed. Phenazopyridine may help with symptoms. In those who have bacteria or white blood cells in their urine but no symptoms, antibiotics are generally not needed, although pregnancy is an exception.
In those with frequent infections, a short course of antibiotics may be taken as soon as symptoms begin, or long-term antibiotics may be used as a preventive measure.

About 150 million people develop a urinary tract infection in a given year. They are more common in women than men, but rates are similar between anatomies while carrying indwelling catheters. In women, they are the most common form of bacterial infection. Up to 10% of women have a urinary tract infection in a given year, and half of women have at least one infection at some point in their lifetime. They occur most frequently between the ages of 16 and 35 years. Recurrences are common. Urinary tract infections have been described since ancient times, with the first documented description in the Ebers Papyrus, dated to c. 1550 BC.
Signs and symptoms:
Lower urinary tract infection is also referred to as a bladder infection. The most common symptoms are burning with urination and having to urinate frequently (or an urge to urinate) in the absence of vaginal discharge and significant pain. These symptoms may vary from mild to severe, and in healthy women last an average of six days. Some pain above the pubic bone or in the lower back may be present. People experiencing an upper urinary tract infection, or pyelonephritis, may experience flank pain, fever, or nausea and vomiting in addition to the classic symptoms of a lower urinary tract infection. Rarely, the urine may appear bloody or contain visible pus.

UTIs have been associated with onset or worsening of delirium, dementia, and neuropsychiatric disorders such as depression and psychosis. However, there is insufficient evidence to determine whether UTI causes confusion. The reasons for this association are unknown, but may involve a UTI-mediated systemic inflammatory response that affects the brain. Cytokines such as interleukin-6, produced as part of the inflammatory response, may produce neuroinflammation, in turn affecting dopaminergic and/or glutamatergic neurotransmission as well as brain glucose metabolism.
Signs and symptoms:
Children In young children, the only symptom of a urinary tract infection (UTI) may be a fever. Because of the lack of more obvious symptoms, when females under the age of two or uncircumcised males less than a year exhibit a fever, a culture of the urine is recommended by many medical associations. Infants may feed poorly, vomit, sleep more, or show signs of jaundice. In older children, new onset urinary incontinence (loss of bladder control) may occur. About 1 in 400 infants of 1 to 3 months of age with a UTI also have bacterial meningitis.
Signs and symptoms:
Elderly Urinary tract symptoms are frequently lacking in the elderly. The presentation may be vague, with incontinence, a change in mental status, or fatigue as the only symptoms, while some present to a health care provider with sepsis, an infection of the blood, as the first symptom. Diagnosis can be complicated by the fact that many elderly people have preexisting incontinence or dementia.

It is reasonable to obtain a urine culture in those with signs of systemic infection who may be unable to report urinary symptoms, such as when advanced dementia is present. Systemic signs of infection include a fever or an increase in temperature of more than 1.1 °C (2.0 °F) from usual, chills, and an increased white blood cell count.
Cause:
Uropathogenic E. coli from the gut is the cause of 80–85% of community-acquired urinary tract infections, with Staphylococcus saprophyticus being the cause in 5–10%. Rarely they may be due to viral or fungal infections. Healthcare-associated urinary tract infections (mostly related to urinary catheterization) involve a much broader range of pathogens including: E. coli (27%), Klebsiella (11%), Pseudomonas (11%), the fungal pathogen Candida albicans (9%), and Enterococcus (7%) among others. Urinary tract infections due to Staphylococcus aureus typically occur secondary to blood-borne infections. Chlamydia trachomatis and Mycoplasma genitalium can infect the urethra but not the bladder. These infections are usually classified as a urethritis rather than urinary tract infection.
Cause:
Intercourse In young sexually active women, sexual activity is the cause of 75–90% of bladder infections, with the risk of infection related to the frequency of sex. The term "honeymoon cystitis" has been applied to this phenomenon of frequent UTIs during early marriage. In post-menopausal women, sexual activity does not affect the risk of developing a UTI. Spermicide use, independent of sexual frequency, increases the risk of UTIs. Diaphragm use is also associated. Condom use without spermicide or use of birth control pills does not increase the risk of uncomplicated urinary tract infection.
Cause:
Sex Women are more prone to UTIs than men because, in females, the urethra is much shorter and closer to the anus. As a woman's estrogen levels decrease with menopause, her risk of urinary tract infections increases due to the loss of protective vaginal flora. Additionally, the vaginal atrophy that can sometimes occur after menopause is associated with recurrent urinary tract infections.

Chronic prostatitis in the forms of chronic prostatitis/chronic pelvic pain syndrome and chronic bacterial prostatitis (not acute bacterial prostatitis or asymptomatic inflammatory prostatitis) may cause recurrent urinary tract infections in males. The risk of infection increases as males age. While bacteria are commonly present in the urine of older males, this does not appear to affect the risk of urinary tract infections.
Cause:
Urinary catheters Urinary catheterization increases the risk of urinary tract infections. The risk of bacteriuria (bacteria in the urine) is between three and six percent per day, and prophylactic antibiotics are not effective in decreasing symptomatic infections. The risk of an associated infection can be decreased by catheterizing only when necessary, using aseptic technique for insertion, and maintaining unobstructed closed drainage of the catheter.

Male scuba divers using condom catheters, and female divers using external catching devices for their dry suits, are also susceptible to urinary tract infections.
Cause:
Others: A predisposition for bladder infections may run in families. This is believed to be related to genetics. Other risk factors include diabetes, being uncircumcised, and having a large prostate. In children, UTIs are associated with vesicoureteral reflux (an abnormal movement of urine from the bladder into ureters or kidneys) and constipation. Persons with spinal cord injury are at increased risk for urinary tract infection, in part because of chronic catheter use and in part because of voiding dysfunction. It is the most common cause of infection in this population, as well as the most common cause of hospitalization.
Pathogenesis:
The bacteria that cause urinary tract infections typically enter the bladder via the urethra. However, infection may also occur via the blood or lymph. It is believed that the bacteria are usually transmitted to the urethra from the bowel, with females at greater risk due to their anatomy. After gaining entry to the bladder, E. coli are able to attach to the bladder wall and form a biofilm that resists the body's immune response. Escherichia coli is the single most common microorganism causing urinary tract infection, followed by Klebsiella and Proteus spp., which are frequently associated with stone disease. The presence of Gram-positive bacteria such as Enterococcus and Staphylococcus is increased. The increased resistance of urinary pathogens to quinolone antibiotics has been reported worldwide and might be the consequence of overuse and misuse of quinolones.
Diagnosis:
In straightforward cases, a diagnosis may be made and treatment given based on symptoms alone without further laboratory confirmation. In complicated or questionable cases, it may be useful to confirm the diagnosis via urinalysis, looking for the presence of urinary nitrites, white blood cells (leukocytes), or leukocyte esterase. Another test, urine microscopy, looks for the presence of red blood cells, white blood cells, or bacteria. Urine culture is deemed positive if it shows a bacterial colony count of greater than or equal to 10³ colony-forming units per mL of a typical urinary tract organism. Antibiotic sensitivity can also be tested with these cultures, making them useful in the selection of antibiotic treatment. However, women with negative cultures may still improve with antibiotic treatment. As symptoms can be vague and reliable tests for urinary tract infections are lacking, diagnosis can be difficult in the elderly.
Diagnosis:
Based on pH: Normal urine pH is slightly acidic, with usual values of 6.0 to 7.5, but the normal range is 4.5 to 8.0. A urine pH of 8.5 or 9.0 is indicative of a urea-splitting organism, such as Proteus, Klebsiella, or Ureaplasma urealyticum; therefore, a high pH in an asymptomatic patient suggests a UTI regardless of the other urine test results. Alkaline pH also can signify struvite kidney stones, which are also known as "infection stones".
Diagnosis:
Classification: A urinary tract infection may involve only the lower urinary tract, in which case it is known as a bladder infection. Alternatively, it may involve the upper urinary tract, in which case it is known as pyelonephritis. If the urine contains significant bacteria but there are no symptoms, the condition is known as asymptomatic bacteriuria. If a urinary tract infection involves the upper tract, and the person has diabetes mellitus, is pregnant, is male, or is immunocompromised, it is considered complicated. Otherwise, if a woman is healthy and premenopausal, it is considered uncomplicated. In children, when a urinary tract infection is associated with a fever, it is deemed to be an upper urinary tract infection.
Diagnosis:
Children: To make the diagnosis of a urinary tract infection in children, a positive urinary culture is required. Contamination poses a frequent challenge depending on the method of collection used; thus, a cutoff of 10⁵ CFU/mL is used for a "clean-catch" midstream sample, 10⁴ CFU/mL for catheter-obtained specimens, and 10² CFU/mL for suprapubic aspirations (a sample drawn directly from the bladder with a needle). The use of "urine bags" to collect samples is discouraged by the World Health Organization due to the high rate of contamination when cultured, and catheterization is preferred in those not toilet trained. Some, such as the American Academy of Pediatrics, recommend renal ultrasound and voiding cystourethrogram (watching a person's urethra and urinary bladder with real-time x-rays while they urinate) in all children less than two years old who have had a urinary tract infection. However, because there is a lack of effective treatment if problems are found, others, such as the National Institute for Health and Care Excellence, recommend routine imaging only in those less than six months old or who have unusual findings.
Diagnosis:
Differential diagnosis: In women with cervicitis (inflammation of the cervix) or vaginitis (inflammation of the vagina) and in young men with UTI symptoms, a Chlamydia trachomatis or Neisseria gonorrhoeae infection may be the cause. These infections are typically classified as a urethritis rather than a urinary tract infection. Vaginitis may also be due to a yeast infection. Interstitial cystitis (chronic pain in the bladder) may be considered for people who experience multiple episodes of UTI symptoms but whose urine cultures remain negative and who do not improve with antibiotics. Prostatitis (inflammation of the prostate) may also be considered in the differential diagnosis. Hemorrhagic cystitis, characterized by blood in the urine, can occur secondary to a number of causes including infections, radiation therapy, underlying cancer, medications, and toxins. Medications that commonly cause this problem include the chemotherapeutic agent cyclophosphamide, with rates of 2–40%. Eosinophilic cystitis is a rare condition where eosinophils are present in the bladder wall. Signs and symptoms are similar to a bladder infection. Its cause is not entirely clear; however, it may be linked to food allergies, infections, and medications, among others.
Prevention:
A number of measures have not been confirmed to affect UTI frequency, including: urinating immediately after intercourse, the type of underwear used, personal hygiene methods used after urinating or defecating, or whether a person typically bathes or showers. There is similarly a lack of evidence surrounding the effect of holding one's urine, tampon use, and douching. Those with frequent urinary tract infections who use spermicide or a diaphragm as a method of contraception are advised to use alternative methods. In those with benign prostatic hyperplasia, urinating in a sitting position appears to improve bladder emptying, which might decrease urinary tract infections in this group. Using urinary catheters as infrequently and for as short a time as possible, and appropriate care of the catheter when used, prevents catheter-associated urinary tract infections.
Prevention:
Catheters should be inserted using sterile technique in hospital; however, non-sterile technique may be appropriate in those who self-catheterize. The urinary catheter setup should also be kept sealed. Evidence does not support a significant decrease in risk when silver-alloy catheters are used.
Prevention:
Medications: For those with recurrent infections, taking a short course of antibiotics when each infection occurs is associated with the lowest antibiotic use. A prolonged course of daily antibiotics is also effective. Medications frequently used include nitrofurantoin and trimethoprim/sulfamethoxazole. Some recommend against prolonged use due to concerns of antibiotic resistance. Methenamine is another agent used for this purpose: in the bladder, where the pH is low, it produces formaldehyde, to which resistance does not develop. A UK study showed that methenamine is as effective as daily low-dose antibiotics at preventing UTIs among women who experience recurrent UTIs. As methenamine is an antiseptic, it may avoid the issue of antibiotic resistance. In cases where infections are related to intercourse, taking antibiotics afterwards may be useful. In post-menopausal women, topical vaginal estrogen has been found to reduce recurrence. As opposed to topical creams, the use of vaginal estrogen from pessaries has not been as useful as low-dose antibiotics. Antibiotics following short-term urinary catheterization decrease the subsequent risk of a bladder infection. A number of vaccines were in development as of 2018.
Prevention:
Children: The evidence that preventive antibiotics decrease urinary tract infections in children is poor. When there are no underlying abnormalities of the kidneys, recurrent UTIs are a rare cause of further kidney problems, accounting for less than a third of a percent (0.33%) of chronic kidney disease in adults. Whether routine circumcision prevents UTIs has not been well studied as of 2011.
Prevention:
Alternative medicine: Some research suggests that cranberry (juice or capsules) may decrease the number of UTIs in those with frequent infections. A Cochrane review concluded that the benefit, if it exists, is small; so did a randomized controlled trial in 2016. Long-term tolerance is also an issue, with gastrointestinal upset occurring in more than 30%. Cranberry juice is thus not currently recommended for this indication. As of 2015, probiotics require further study to determine if they are beneficial. The role of the urinary microbiome in maintaining urinary tract health is not well understood as of 2015. As of 2022, one review found that taking D-mannose was as effective as antibiotics to prevent UTIs, while another review found that clinical trial quality was too low to allow any conclusion about using D-mannose to prevent or treat UTIs.
Treatment:
The mainstay of treatment is antibiotics. Phenazopyridine is occasionally prescribed during the first few days in addition to antibiotics to help with the burning and urgency sometimes felt during a bladder infection. However, it is not routinely recommended due to safety concerns with its use, specifically an elevated risk of methemoglobinemia (higher than normal level of methemoglobin in the blood). Paracetamol may be used for fevers. There is no good evidence for the use of cranberry products for treating current infections. Fosfomycin can be used as an effective treatment for both UTIs and complicated UTIs including acute pyelonephritis. The standard regimen for complicated UTIs is an oral 3-gram dose administered once every 48 or 72 hours for a total of 3 doses, or 6 grams every 8 hours for 7 to 14 days when fosfomycin is given in IV form.
Treatment:
Uncomplicated: Uncomplicated infections can be diagnosed and treated based on symptoms alone. Antibiotics taken by mouth such as trimethoprim/sulfamethoxazole, nitrofurantoin, or fosfomycin are typically first line. Cephalosporins, amoxicillin/clavulanic acid, or a fluoroquinolone may also be used. However, antibiotic resistance to fluoroquinolones among the bacteria that cause urinary infections has been increasing. The Food and Drug Administration (FDA) recommends against the use of fluoroquinolones when other options are available, due to higher risks of serious side effects such as tendinitis, tendon rupture, and worsening of myasthenia gravis, and requires a boxed warning on their labeling. These medications substantially shorten the time to recovery, with all being equally effective. A three-day treatment with trimethoprim/sulfamethoxazole or a fluoroquinolone is usually sufficient, whereas nitrofurantoin requires 5–7 days. Fosfomycin may be used as a single dose but is not as effective. Fluoroquinolones are not recommended as a first treatment; the Infectious Diseases Society of America states this is due to the concern of generating resistance to this class of medication. Amoxicillin-clavulanate appears less effective than other options. Despite this precaution, some resistance has developed to all of these medications related to their widespread use. Trimethoprim alone is deemed to be equivalent to trimethoprim/sulfamethoxazole in some countries. For simple UTIs, children often respond to a three-day course of antibiotics. Women with recurrent simple UTIs are over 90% accurate in identifying new infections. They may benefit from self-treatment upon occurrence of symptoms, with medical follow-up only if the initial treatment fails.
Treatment:
Complicated: Complicated UTIs are more difficult to treat and usually require more aggressive evaluation, treatment, and follow-up. Identifying and addressing the underlying complication may be required. Increasing antibiotic resistance is causing concern about the future of treating those with complicated and recurrent UTIs.
Treatment:
Asymptomatic bacteriuria: Those who have bacteria in the urine but no symptoms should not generally be treated with antibiotics. This includes the elderly, those with spinal cord injuries, and those who have urinary catheters. Pregnancy is an exception, and it is recommended that pregnant women take seven days of antibiotics. If not treated, asymptomatic bacteriuria causes up to 30% of mothers to develop pyelonephritis and increases the risk of low birth weight and preterm birth. Some also support treatment of those with diabetes mellitus, and treatment before urinary tract procedures that will likely cause bleeding.
Treatment:
Pregnant women: Urinary tract infections, even asymptomatic presence of bacteria in the urine, are more concerning in pregnancy due to the increased risk of kidney infections. During pregnancy, high progesterone levels elevate the risk of decreased muscle tone of the ureters and bladder, which leads to a greater likelihood of reflux, where urine flows back up the ureters and towards the kidneys. While pregnant women do not have an increased risk of asymptomatic bacteriuria, if bacteriuria is present they do have a 25–40% risk of a kidney infection. Thus if urine testing shows signs of an infection—even in the absence of symptoms—treatment is recommended. Cephalexin or nitrofurantoin are typically used because they are generally considered safe in pregnancy. A kidney infection during pregnancy may result in preterm birth or pre-eclampsia (a state of high blood pressure and kidney dysfunction during pregnancy that can lead to seizures). Some women have UTIs that keep recurring in pregnancy. There is insufficient research on how to best treat these recurrent infections.
Treatment:
Pyelonephritis: Pyelonephritis is treated more aggressively than a simple bladder infection using either a longer course of oral antibiotics or intravenous antibiotics. Seven days of the oral fluoroquinolone ciprofloxacin is typically used in areas where the resistance rate is less than 10%. If the local antibiotic resistance rates are greater than 10%, a dose of intravenous ceftriaxone is often prescribed. Trimethoprim/sulfamethoxazole or amoxicillin/clavulanate orally for 14 days is another reasonable option. In those who exhibit more severe symptoms, admission to a hospital for ongoing antibiotics may be needed. Complications such as ureteral obstruction from a kidney stone may be considered if symptoms do not improve following two or three days of treatment.
Prognosis:
With treatment, symptoms generally improve within 36 hours. Up to 42% of uncomplicated infections may resolve on their own within a few days or weeks. 15–25% of adults and children have chronic symptomatic UTIs, including recurrent infections, persistent infections (infection with the same pathogen), re-infection (a new pathogen), or relapsed infection (the same pathogen causes a new infection after it was completely gone). Recurrent urinary tract infections, defined as at least two infections (episodes) in a six-month period or three infections in twelve months, can occur in adults and in children. Cystitis refers to a urinary tract infection that involves the lower urinary tract (bladder). An upper urinary tract infection, which involves the kidney, is called pyelonephritis. About 10–20% of pyelonephritis cases go on to develop scarring of the affected kidney, and 10–20% of those who develop scarring have an increased risk of hypertension in later life.
Epidemiology:
Urinary tract infections are the most frequent bacterial infection in women. They occur most frequently between the ages of 16 and 35 years, with 10% of women getting an infection yearly and more than 40–60% having an infection at some point in their lives. Recurrences are common, with nearly half of people getting a second infection within a year. Urinary tract infections occur four times more frequently in females than males. Pyelonephritis occurs between 20 and 30 times less frequently. They are the most common cause of hospital-acquired infections, accounting for approximately 40%. Rates of asymptomatic bacteria in the urine increase with age, from two to seven percent in women of child-bearing age to as high as 50% in elderly women in care homes. Rates of asymptomatic bacteria in the urine among men over 75 are between 7–10%. 2–10% of pregnant women have asymptomatic bacteria in the urine, and higher rates are reported in women who live in some underdeveloped countries. Urinary tract infections may affect 10% of people during childhood. Among children, urinary tract infections are most common in uncircumcised males less than three months of age, followed by females less than one year. Estimates of frequency among children, however, vary widely. In a group of children with a fever, ranging in age between birth and two years, 2–20% were diagnosed with a UTI.
History:
Urinary tract infections have been described since ancient times, with the first documented description in the Ebers Papyrus, dated to c. 1550 BC. The condition was described by the Egyptians as "sending forth heat from the bladder". Effective treatment did not occur until the development and availability of antibiotics in the 1930s; before then, herbs, bloodletting, and rest were recommended.
**.bss**
.bss:
In computer programming, the block starting symbol (abbreviated to .bss or bss) is the portion of an object file, executable, or assembly language code that contains statically allocated variables that are declared but have not been assigned a value yet. It is often referred to as the "bss section" or "bss segment".
.bss:
Typically only the length of the bss section, but no data, is stored in the object file. The program loader allocates memory for the bss section when it loads the program. By placing variables with no value in the .bss section, instead of the .data or .rodata section which require initial value data, the size of the object file is reduced.
.bss:
On some platforms, some or all of the bss section is initialized to zeroes. Unix-like systems and Windows initialize the bss section to zero, allowing C and C++ statically allocated variables initialized to values represented with all bits zero to be put in the bss segment. Operating systems may use a technique called zero-fill-on-demand to efficiently implement the bss segment. In embedded software, the bss segment is mapped into memory that is initialized to zero by the C run-time system before main() is entered. Some C run-time systems may allow part of the bss segment not to be initialized; C variables must explicitly be placed into that portion of the bss segment. On some computer architectures, the application binary interface also supports an sbss segment for "small data". Typically, these data items can be accessed using shorter instructions that may only be able to access a certain range of addresses. Architectures supporting thread-local storage might use a tbss section for uninitialized, static data marked as thread-local.
Origin:
Historically, BSS (from Block Started by Symbol) is a pseudo-operation in UA-SAP (United Aircraft Symbolic Assembly Program), the assembler developed in the mid-1950s for the IBM 704 by Roy Nutt, Walter Ramshaw, and others at United Aircraft Corporation. The BSS keyword was later incorporated into FORTRAN Assembly Program (FAP) and Macro Assembly Program (MAP), IBM's standard assemblers for its 709 and 7090/94 computers. It defined a label (i.e. symbol) and reserved a block of uninitialized space for a given number of words. In this situation BSS served as a shorthand in place of individually reserving a number of separate smaller data locations. Some assemblers support a complementary or alternative directive BES, for Block Ended by Symbol, where the specified symbol corresponds to the end of the reserved block.
BSS in C:
In C, statically allocated objects without an explicit initializer are initialized to zero (for arithmetic types) or a null pointer (for pointer types). Implementations of C typically represent zero values and null pointer values using a bit pattern consisting solely of zero-valued bits (although the C standard does not require that bss be filled with zero, all variables in .bss are required to be individually initialized to some form of zero according to Section 6.7.8 of C ISO Standard 9899:1999, or Section 6.7.9 of newer standards). Hence, the BSS segment typically includes all uninitialized objects (both variables and constants) declared at file scope (i.e., outside any function) as well as uninitialized static local variables (local variables declared with the static keyword); static local constants must be initialized at declaration, however, as they do not have a separate declaration, and thus are typically not in the BSS section, though they may be implicitly or explicitly initialized to zero. An implementation may also assign statically allocated variables and constants initialized with a value consisting solely of zero-valued bits to the BSS section.
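A minimal C sketch of these rules (variable names are illustrative): on a typical toolchain each uninitialized object below contributes only its size, not its contents, to the object file, yet each is guaranteed by the C standard to start out as zero or null.

```c
#include <stddef.h>

/* File-scope objects with no explicit initializer: the toolchain
   places them in .bss, and the C standard guarantees they start
   as zero (arithmetic types) or a null pointer (pointer types). */
static int counter;          /* arithmetic type: starts at 0 */
static double totals[1000];  /* large array: only its size, not 1000
                                zero doubles, goes in the object file */
static const char *name;     /* pointer type: starts as NULL */

int bss_vars_are_zero(void) {
    /* An uninitialized static local also has static storage
       duration and typically lands in .bss as well. */
    static long calls;
    return counter == 0 && totals[0] == 0.0 && totals[999] == 0.0
        && name == NULL && calls == 0;
}
```

Tools such as `size` or `objdump` can be used to confirm that such objects increase the reported bss size rather than the data size.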
BSS in C:
Peter van der Linden, a C programmer and author, says, "Some people like to remember it as 'Better Save Space.' Since the BSS segment only holds variables that don't have any value yet, it doesn't actually need to store the image of these variables. The size that BSS will require at runtime is recorded in the object file, but BSS (unlike the data segment) doesn't take up any actual space in the object file."
BSS in Fortran:
In Fortran, common block variables are allocated in this segment.
BSS in Fortran:
Some compilers may, for 64-bit instruction sets, limit offsets, in instructions that access this segment, to 32 bits, limiting its size to 2 GB or 4 GB. Also, note that Fortran does not require static data to be initialized to zero. On those systems where the bss segment is initialized to zero, putting common block variables and other static data into that segment guarantees that it will be zero, but for portability, programmers should not depend on that. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Easter bonnet**
Easter bonnet:
An Easter bonnet is any new or fancy hat worn by tradition as a Christian headcovering on Easter. It represents the tail end of a tradition of wearing new clothes at Easter, in harmony with the renewal of the year and the promise of spiritual renewal and redemption.
Easter bonnet:
The Easter bonnet was fixed in popular culture by Irving Berlin, whose frame of reference was the Easter parade in New York City, a festive walkabout that made its way down Fifth Avenue from St. Patrick's Cathedral: at the depths of the Great Depression, a new hat at Easter, or a refurbished old one, was a simple luxury. The broader English tradition of new clothes at Easter was noticed in late 16th-century references by Peter Opie, who noted Mercutio's taunting of Benvolio in Romeo and Juliet: "Did'st thou not fall out with a Tailor for wearing his new Doublet before Easter?" At just the same time, Thomas Lodge's moralising pamphlet Wits Miserie (London, 1596) recorded "The farmer that was contented in times past with his Russet Frocke & Mockado sleeues, now sels a Cow against Easter to buy him silken geere for his Credit". In his diary entry for 30 March (Easter Day) 1662, Samuel Pepys notes: "Having my old black suit new furbished, I was pretty neat in clothes to-day, and my boy, his old suit new trimmed, very handsome."
Easter bonnet:
Poor Robin, an 18th-century English almanac maker, offered the doggerel "At Easter let your clothes be new / Or else be sure you will it rue", and the notion that ill luck would dog the one who had nothing new at Easter expanded in the 19th century.
Easter bonnet:
Today the Easter bonnet is a type of hat that women and girls wear to Easter services, and (in the United States) in the Easter parade following them. Ladies purchased new and elaborate designs for particular church services and, in the case of Easter, took the opportunity of the end of Lent to buy luxury items. In certain localities such as Boston, Easter bonnets are becoming harder to find, while in other areas, such as Burlington County in New Jersey, Easter bonnets remain popular. Although the traditional Easter bonnet is a hat with depictions of Easter and spring (bunnies, flowers, eggs, etc.), recently more creative designers have been producing full-face hats and masks, taking the mantilla headdress from Spain as their inspiration.
Easter bonnet:
Nowadays a traditional girl's Easter bonnet is usually a white, wide-brimmed hat with a pastel-colored satin ribbon wrapped around it and tied in a bow. It may also have flowers or other springtime motifs on top, and may match a special dress picked out for the occasion.
**Foreign exchange aggregator**
Foreign exchange aggregator:
A foreign exchange aggregator or FX Aggregator is a class of systems used in Forex trading to aggregate the liquidity from several liquidity providers.
Mechanism:
Aggregators usually provide two main functions: they allow FX traders to compare prices from different liquidity venues, such as banks (global market makers) or ECNs like Currenex, FXall or Hotspot FX, and to have a consolidated view of the market; and they allow traders to trade with many participants using a single API or a single trading terminal. Some of the systems support order sweeping (an order is split into chunks which are sent to the respective counterparties based on the price, time and other attributes of the quotes from these counterparties); other systems route the whole order to a single liquidity provider chosen by an order-routing algorithm embedded in the aggregator.
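The order-sweeping variant can be sketched as follows. This is a simplified model with hypothetical provider names, prices and sizes; a production aggregator would also have to deal with latency, partial fills, last look and credit limits.

```c
#include <stdlib.h>

/* One resting offer from a liquidity provider. */
typedef struct {
    const char *provider;  /* hypothetical provider name */
    double price;          /* offer price */
    double size;           /* quantity available at that price */
} Quote;

static int by_price_asc(const void *a, const void *b) {
    double pa = ((const Quote *)a)->price;
    double pb = ((const Quote *)b)->price;
    return (pa > pb) - (pa < pb);
}

/* Sweep a buy order across the aggregated book, best (lowest)
   offer first; returns the quantity actually filled. */
double sweep_buy(Quote *quotes, size_t n, double order_size) {
    qsort(quotes, n, sizeof *quotes, by_price_asc);
    double filled = 0.0;
    for (size_t i = 0; i < n && filled < order_size; ++i) {
        double take = order_size - filled;
        if (take > quotes[i].size)
            take = quotes[i].size;
        filled += take;  /* this chunk is routed to quotes[i].provider */
    }
    return filled;
}
```

A routing aggregator would instead pick a single element of the sorted book and send the whole order there.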
Technology:
FX aggregator implementation is complex, as the technology needs to be fast (latencies in microseconds) and flexible.
Some banks developed their own FX Aggregators and others bought existing products from technology vendors.
Aggregators:
There are many aggregators offered in the market: smartTrade LiquidityFX, Thomson Reuters Dealing Aggregator, Liquid-X, Liquidity Pool, FlexTrade, BidFX, Quotix, Integral, Currenex, LMAX Exchange, MarketFactory, EBS Direct, DealHub, Seamless FX, Gold-i Matrix and others. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Radical 79**
Radical 79:
Radical 79 or radical weapon (殳部), meaning "weapon" or "lance", is one of the 34 Kangxi radicals (out of 214 radicals in total) that are composed of 4 strokes.
In the Kangxi Dictionary, there are 93 characters (out of 49,030) to be found under this radical.
殳 is also the 92nd indexing component in the Table of Indexing Chinese Character Components predominantly adopted by Simplified Chinese dictionaries published in mainland China.
Literature:
Fazzioli, Edoardo (1987). Chinese calligraphy : from pictograph to ideogram : the history of 214 essential Chinese/Japanese characters. calligraphy by Rebecca Hon Ko. New York: Abbeville Press. ISBN 0-89659-774-1.
Lunde, Ken (Jan 5, 2009). "Appendix J: Japanese Character Sets" (PDF). CJKV Information Processing: Chinese, Japanese, Korean & Vietnamese Computing (Second ed.). Sebastopol, Calif.: O'Reilly Media. ISBN 978-0-596-51447-1. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mild-slope equation**
Mild-slope equation:
In fluid dynamics, the mild-slope equation describes the combined effects of diffraction and refraction for water waves propagating over bathymetry and due to lateral boundaries—like breakwaters and coastlines. It is an approximate model, deriving its name from being originally developed for wave propagation over mild slopes of the sea floor. The mild-slope equation is often used in coastal engineering to compute the wave-field changes near harbours and coasts.
Mild-slope equation:
The mild-slope equation models the propagation and transformation of water waves, as they travel through waters of varying depth and interact with lateral boundaries such as cliffs, beaches, seawalls and breakwaters. As a result, it describes the variations in wave amplitude, or equivalently wave height. From the wave amplitude, the amplitude of the flow velocity oscillations underneath the water surface can also be computed. These quantities—wave amplitude and flow-velocity amplitude—may subsequently be used to determine the wave effects on coastal and offshore structures, ships and other floating objects, sediment transport and resulting bathymetric changes of the sea bed and coastline, mean flow fields and mass transfer of dissolved and floating materials. Most often, the mild-slope equation is solved by computer using methods from numerical analysis.
Mild-slope equation:
A first form of the mild-slope equation was developed by Eckart in 1952, and an improved version—the mild-slope equation in its classical formulation—has been derived independently by Juri Berkhoff in 1972. Thereafter, many modified and extended forms have been proposed, to include the effects of, for instance: wave–current interaction, wave nonlinearity, steeper sea-bed slopes, bed friction and wave breaking. Also parabolic approximations to the mild-slope equation are often used, in order to reduce the computational cost.
Mild-slope equation:
In case of a constant depth, the mild-slope equation reduces to the Helmholtz equation for wave diffraction.
Formulation for monochromatic wave motion:
For monochromatic waves according to linear theory—with the free surface elevation given as ζ(x,y,t) = ℜ{η(x,y) e^(−iωt)} and the waves propagating on a fluid layer of mean water depth h(x,y)—the mild-slope equation is: ∇·(c_p c_g ∇η) + k² c_p c_g η = 0, where: η(x,y) is the complex-valued amplitude of the free-surface elevation ζ(x,y,t); (x,y) is the horizontal position; ω is the angular frequency of the monochromatic wave motion; i is the imaginary unit; ℜ{·} means taking the real part of the quantity between braces; ∇ is the horizontal gradient operator; ∇· is the divergence operator; k is the wavenumber; c_p is the phase speed of the waves; and c_g is the group speed of the waves. The phase and group speed depend on the dispersion relation, and are derived from Airy wave theory as: ω² = g k tanh(kh), c_p = ω/k and c_g = ½ c_p [1 + 2kh / sinh(2kh)], where g is Earth's gravity and tanh is the hyperbolic tangent. For a given angular frequency ω, the wavenumber k has to be solved from the dispersion equation, which relates these two quantities to the water depth h.
Transformation to an inhomogeneous Helmholtz equation:
Through the transformation the mild-slope equation can be cast in the form of an inhomogeneous Helmholtz equation: where Δ is the Laplace operator.
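The transformation and the resulting equation are not reproduced above; in the classical (Berkhoff) formulation they are commonly written as follows, restated here as a standard result using the same symbols as before:

```latex
\psi = \eta \,\sqrt{c_p\, c_g}\,, \qquad
\Delta \psi + k_c^2\, \psi = 0, \qquad \text{with} \qquad
k_c^2 = k^2 - \frac{\Delta\sqrt{c_p\, c_g}}{\sqrt{c_p\, c_g}}\,,
```

so the spatial variation of the depth-dependent wave speeds is absorbed into the effective wavenumber k_c.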
Propagating waves:
In spatially coherent fields of propagating waves, it is useful to split the complex amplitude η(x,y) into its amplitude and phase, both real valued: η(x,y) = a(x,y) e^(iθ(x,y)), where a = |η| is the amplitude or absolute value of η and θ = arg{η} is the wave phase, which is the argument of η.
Propagating waves:
This transforms the mild-slope equation into the following set of equations (apart from locations for which ∇θ is singular): where E is the average wave-energy density per unit horizontal area (the sum of the kinetic and potential energy densities), κ is the effective wavenumber vector, with components (κ_x, κ_y), v_g is the effective group velocity vector, ρ is the fluid density, and g is the acceleration by the Earth's gravity. The last equation shows that wave energy is conserved in the mild-slope equation, and that the wave energy E is transported in the κ-direction normal to the wave crests (in this case of pure wave motion without mean currents). The effective group speed |v_g| is different from the group speed c_g.
The first equation states that the effective wavenumber field κ is irrotational, a direct consequence of the fact that it is the gradient of the wave phase θ, a scalar field. The second equation is the eikonal equation. It shows the effects of diffraction on the effective wavenumber: only for more-or-less progressive waves, with |∇⋅(cp cg ∇a)| ≪ k² cp cg a, does the splitting into amplitude a and phase θ lead to consistently varying and meaningful fields of a and κ. Otherwise, κ² can even become negative. When the diffraction effects are totally neglected, the effective wavenumber κ is equal to k, and the geometric optics approximation for wave refraction can be used.
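As a small numerical illustration (variable and function names are mine, and the computation assumes the eikonal relation takes the standard form κ² = k² + ∇⋅(cp cg ∇a)/(cp cg a), written here in one dimension), the sketch below evaluates the diffraction correction for a slowly modulated amplitude over a flat bed; for a = cos(εx) the correction is −ε², so κ² is reduced below k²:

```python
import numpy as np

# Illustrative 1-D check (variable and function names are mine), assuming the
# eikonal relation takes the standard form
#   kappa^2 = k^2 + d/dx( cp*cg * da/dx ) / (cp*cg * a).

def effective_wavenumber_sq(x, a, cpcg, k):
    dadx = np.gradient(a, x)
    diffraction = np.gradient(cpcg * dadx, x) / (cpcg * a)
    return k**2 + diffraction

x = np.linspace(0.0, 1.0, 2001)
k = 10.0
eps = 1.0
a = np.cos(eps * x)        # slowly modulated, strictly positive amplitude on [0, 1]
cpcg = np.ones_like(x)     # flat bed: cp*cg taken constant
kappa_sq = effective_wavenumber_sq(x, a, cpcg, k)
# For this profile a''/a = -eps^2, so kappa^2 ≈ k^2 - eps^2 = 99 everywhere.
```

For a strongly modulated amplitude (large ε relative to k), the same diffraction term can drive κ² negative, as noted above.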
Derivation of the mild-slope equation:
The mild-slope equation can be derived by the use of several methods. Here, we will use a variational approach. The fluid is assumed to be inviscid and incompressible, and the flow is assumed to be irrotational. These are valid assumptions for surface gravity waves, since the effects of vorticity and viscosity are only significant in the Stokes boundary layers (for the oscillatory part of the flow). Because the flow is irrotational, the wave motion can be described using potential flow theory.
A pair of coupled time-dependent equations gives the evolution of the free-surface elevation ζ(x,y,t) and the free-surface potential φ(x,y,t). From these two evolution equations, one of the variables φ or ζ can be eliminated, to obtain the time-dependent form of the mild-slope equation; the corresponding equation for the free-surface potential is identical, with ζ replaced by φ.
The time-dependent mild-slope equation can be used to model waves in a narrow band of frequencies around ω0.
Monochromatic waves: Consider monochromatic waves with complex amplitude η(x,y) and angular frequency ω, with ω and ω0 chosen equal to each other, ω = ω0. Using this in the time-dependent form of the mild-slope equation recovers the classical mild-slope equation for time-harmonic wave motion given above.
Applicability and validity of the mild-slope equation:
The standard mild-slope equation, without extra terms for bed slope and bed curvature, provides accurate results for the wave field over bed slopes ranging from 0 to about 1/3. However, some subtle aspects, like the amplitude of reflected waves, can be completely wrong, even for slopes going to zero. This mathematical curiosity has little practical importance in general, since this reflection becomes vanishingly small for small bottom slopes.
**Marine sanitation device**
A marine sanitation device (MSD) is a piece of machinery or a mechanical system dedicated to treating, processing, and/or storing raw, untreated sewage that can accumulate aboard water vessels. It does not refer to portable devices such as portable toilets.
Available MSD types:
In the United States, the Environmental Protection Agency (EPA) sets performance standards for marine sanitation devices, and the U.S. Coast Guard (USCG) issues regulations governing the design, construction, certification, installation and operation of MSDs. The USCG has certified three types of marine sanitation devices.
Type I: A Type I MSD has a flow-through discharge design: the sewage is broken down and disinfected through chlorination and/or maceration. The bacteria count must be less than one thousand per one hundred milliliters of water, and discharges must have no visible floating solids.
Type II: Type II MSDs are similar to Type I, with a flow-through discharge, but the sewage is broken down by aerobic bacteria or some other biological digestion process. The bacteria count in one hundred milliliters of water produced by this system cannot be greater than two hundred.
Type III: Type III MSDs differ in design from Types I and II. A Type III MSD typically consists of a large storage tank that holds treated or untreated sewage until the vessel returns to port, where the contents are transferred to a wastewater treatment facility. A Type III MSD can also be a holding tank combined with advanced technologies, including but not limited to incineration, recirculation, and composting. The residuals are not discharged to the water, but are transferred ashore when the vessel is in port.
Laws and regulations:
Under the International Maritime Organization (IMO), pollution from ships is governed by MARPOL 73/78, the International Convention for the Prevention of Pollution from Ships ("MARPOL" is short for marine pollution, and "73/78" refers to the years 1973 and 1978). MARPOL is composed of six annexes; Annex IV deals with pollution by sewage from ships and contains 11 regulations concerning sewage discharge and on-board treatment plants. In the United States, MARPOL's provisions took effect through the Act to Prevent Pollution from Ships.
US regulations: In the US, no vessel with a toilet on board may be operated unless there is a fully functioning Coast Guard-approved MSD aboard. The Clean Water Act (CWA) prohibits the discharge of untreated sewage into waters of the United States. There are also restrictions on vessel manufacturers and operators. Manufacturers may not sell any vessel equipped with toilet facilities unless it has an operable Type II or Type III MSD, or an operable Type I device on a vessel that is less than 65 feet (20 m) long. No person may operate the vessel unless there is an operable Type II or Type III MSD or an operable Type I device. If the vessel is in a water body where the discharge of treated or untreated sewage is prohibited by EPA, the vessel operator must secure the device.
No-Discharge Zones The CWA has another means of addressing sewage discharges, through establishment of no-discharge zones (NDZs) for vessel sewage. A state may completely prohibit the discharge of both treated and untreated sewage from all vessels with installed toilets into some or all waters over which it has jurisdiction (up to 3 miles (4.8 km) from land). To create a no-discharge zone to protect waters from sewage discharges by vessels, the state must apply to EPA under one of three categories.
An NDZ based on the need for a higher level of water quality, where the state demonstrates that adequate pumpout facilities for safe and sanitary removal and treatment of sewage from all vessels are reasonably available. As of 2017, this category of designation has been used for 72 areas representing part or all of the waters of 26 states, including a number of inland states.
An NDZ for special waters found to have particular environmental importance (e.g., to protect environmentally sensitive areas such as shellfish beds or coral reefs); it is not necessary for the state to show pumpout availability. This category of designation has been used twice (state waters within the Florida Keys National Marine Sanctuary and the Boundary Waters Canoe Area of Minnesota).
An NDZ to prohibit the discharge of sewage into waters that are drinking water intake zones; it is not necessary for the state to show pumpout availability. This category of designation has been used to protect part of the Hudson River in New York.
Solid waste: Ship discharges of solid waste are governed by two laws. Title I of the Marine Protection, Research, and Sanctuaries Act (MPRSA) applies to cruise ships and other vessels and makes it illegal to transport garbage from the United States for the purpose of dumping it into ocean waters without a permit, or to dump any material transported from a location outside the United States into U.S. territorial seas or the contiguous zone (within 12 nautical miles (22 km) from shore) or ocean waters. EPA is responsible for issuing permits that regulate the disposal of materials at sea (except for dredged material disposal, for which the U.S. Army Corps of Engineers is responsible). Beyond waters that are under U.S. jurisdiction, no MPRSA permit is required for a ship to discharge solid waste. The routine discharge of effluent incidental to the propulsion of vessels is explicitly exempted from the definition of dumping in the MPRSA.
The Act to Prevent Pollution from Ships (APPS) and its regulations implement U.S.-ratified provisions of MARPOL. APPS prohibits the discharge of all garbage within 3 nautical miles (5.6 km) of shore, certain types of garbage within 12 nautical miles (22 km) offshore, and plastic anywhere. It applies to all vessels, whether seagoing or not, regardless of flag, operating in U.S. navigable waters and the Exclusive Economic Zone (EEZ). It is administered by the Coast Guard, which carries out inspection programs to ensure the adequacy of port facilities to receive offloaded solid waste.
History:
EPA first issued its MSD regulations in 1976 under CWA authority. The intent of the law is to prevent the spread of disease, keep the oxygen content of water bodies at a healthy level, and maintain the appearance of healthy waters.
Treatment:
The purpose of an MSD is to treat the incoming blackwater and graywater that accumulates on board a floating vessel. Graywater is water that drains directly from a shower, sink, or machinery located in the scullery. Normally, graywater is discharged directly overboard, since it is not technically considered sewage and is not damaging to the environment. However, in most ports around the world, discharge of fluids is strictly prohibited; to comply, graywater piping is rerouted to the MSD. Blackwater is another word for sewage: human body wastes and wastes from toilets. According to the International Maritime Organization (IMO), untreated sewage cannot be discharged overboard unless the vessel is more than 12 nautical miles (22 km) from the nearest land. Due to regulations issued by the IMO and the United States Maritime Administration (MARAD), every ship must have an approved marine sanitation device aboard. Blackwater is therefore treated through a process that uses chlorination and/or biological treatment before being discharged overboard.
Biological treatment: In Type II MSDs, sewage is broken down by a natural biological component, usually aerobic bacteria in the media tank. Even though the sewage contains some aerobic bacteria naturally, the majority of the bacteria population is grown on media located within the media tank. Since aerobic bacteria require oxygen to live, some form of air pump is necessary to provide sufficient oxygen for the bacteria; this can be a fan or Roots blower connected to the tank. By providing sufficient air, most of the smell caused by sewage and anaerobic bacteria is eliminated.
Chlorination and maceration: In Type I MSDs, sewage is usually broken down through chlorination and/or maceration. Chlorination is usually done within a large tank sometimes referred to as the contact chamber: chlorine is added to the sewage, and the sanitized effluent is then discharged from the MSD. Maceration aboard ships is usually done by machinery that crushes and pulverizes the incoming sewage. However, since a large portion of bacteria is still present in macerated sewage, it is still considered untreated; maceration machinery is therefore usually paired with some form of chlorination in the same system. Very few places around the world allow the discharge of untreated sewage from a maceration process.
Advanced water treatment: Some vessels are equipped with advanced water treatment plants, also called Advanced Wastewater Purification (AWP) systems, instead of traditional MSDs. They are most commonly found on ships that sail in Alaskan waters and sometimes work in parallel with an onboard MSD. Royal Caribbean International, for example, has installed AWP systems on its ships which treat wastewater using advanced technology. Royal Caribbean AWP systems include three types of water purification systems: Scanship, Hydroxyl/Headworks, and Navalis. Scanship and Hydroxyl use biological treatment, while the Navalis system primarily uses advanced oxidation and filtration methods. Scanship and Hydroxyl systems use bacteria to consume the waste while also using a chemical to break down and remove solids; they are very similar to shore-based water treatment plants. This involves a simple five-stage process. In the first stage, a prefilter with screens removes heavy and noticeable solids from the waste influent. The wastewater is then passed through a biological reactor which uses beneficial bacteria to further break down any solids. Next, the influent is pumped through a flotation unit which removes floatable waste. Afterwards, the clean water is passed through polishing filters which make the water even cleaner. The final stage is an ultraviolet light reactor which disinfects the water. The final product may then be dried, incinerated, stored, or discharged at sea in accordance with international regulations.
The Navalis AWP system uses a seven-stage process to treat wastewater. In the first stage, wastewater enters shaker screens which remove any noticeable solids. The wastewater is then passed through an AET roughing reactor which helps with chemical equalization and load. The influent is then treated by a particle removal process involving chemical flocculation, hydraulic separation, tubular filtration, and ultrafiltration membranes. The waste influent is then passed through oxidation reactors which oxidize pollutants and aid the production of carbon dioxide gas and water. The seventh and final stage consists of a powerful ultraviolet reactor in which the ozonated water is broken down into oxygen compounds that provide further treatment of the water. The leftover solids are then oxidized, which allows safe bio-disposal or land-based discharge if needed.
**Variational autoencoder**
In machine learning, a variational autoencoder (VAE) is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling. It is part of the families of probabilistic graphical models and variational Bayesian methods. Variational autoencoders are often associated with the autoencoder model because of their architectural affinity, but there are significant differences in the goal and mathematical formulation. Variational autoencoders are probabilistic generative models that require neural networks as only a part of their overall structure. The neural network components are typically referred to as the encoder and decoder, for the first and second component respectively. The first neural network maps the input variable to a latent space that corresponds to the parameters of a variational distribution. In this way, the encoder can produce multiple different samples that all come from the same distribution. The decoder has the opposite function, which is to map from the latent space to the input space, in order to produce or generate data points. Both networks are typically trained together with the usage of the reparameterization trick, although the variance of the noise model can be learned separately.
Although this type of model was initially designed for unsupervised learning, its effectiveness has been proven for semi-supervised learning and supervised learning.
Overview of architecture and operation:
A variational autoencoder is a generative model with a prior distribution over the latent variables and a noise (observation) distribution. Usually such models are trained using the expectation-maximization meta-algorithm (e.g. probabilistic PCA, (spike & slab) sparse coding). Such a scheme optimizes a lower bound of the data likelihood, which is usually intractable, and in doing so requires the discovery of q-distributions, or variational posteriors. These q-distributions are normally parameterized for each individual data point in a separate optimization process. However, variational autoencoders use a neural network as an amortized approach to jointly optimize across data points. This neural network takes as input the data points themselves, and outputs parameters for the variational distribution. As it maps from a known input space to the low-dimensional latent space, it is called the encoder.
The decoder is the second neural network of this model. It is a function that maps from the latent space to the input space, e.g. as the means of the noise distribution. It is possible to use another neural network that maps to the variance, however this can be omitted for simplicity. In such a case, the variance can be optimized with gradient descent.
To optimize this model, one needs to know two terms: the "reconstruction error", and the Kullback–Leibler divergence (KL-D). Both terms are derived from the free energy expression of the probabilistic model, and therefore differ depending on the noise distribution and the assumed prior of the data. For example, a standard VAE task such as ImageNet is typically assumed to have Gaussian noise, whereas tasks such as binarized MNIST require Bernoulli noise. The KL-D from the free energy expression maximizes the probability mass of the q-distribution that overlaps with the p-distribution, which unfortunately can result in mode-seeking behaviour. The "reconstruction" term is the remainder of the free energy expression, and requires a sampling approximation to compute its expectation value.
Formulation:
From the point of view of probabilistic modeling, one wants to maximize the likelihood of the data x under a chosen parameterized probability distribution pθ(x) = p(x|θ). This distribution is usually chosen to be a Gaussian N(x|μ,σ) parameterized by μ and σ, which, as a member of the exponential family, is easy to work with as a noise distribution. Simple distributions are easy enough to maximize; however, assuming a prior over the latents z results in intractable integrals. Let us find pθ(x) via marginalizing over z:

  pθ(x) = ∫ pθ(x,z) dz,

where pθ(x,z) represents the joint distribution under pθ of the observable data x and its latent representation or encoding z. According to the chain rule, the equation can be rewritten as

  pθ(x) = ∫ pθ(x|z) pθ(z) dz.

In the vanilla variational autoencoder, z is usually taken to be a finite-dimensional vector of real numbers, and pθ(x|z) to be a Gaussian distribution. Then pθ(x) is a mixture of Gaussian distributions.
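The marginal pθ(x) = ∫ pθ(x|z) pθ(z) dz can be estimated by Monte Carlo when it has no closed form. The sketch below is illustrative only (a made-up one-dimensional model with my own function names): it uses a case where the exact marginal is known, N(0, 1+σ²), to check a naive estimator that averages pθ(x|z) over prior samples; this is the kind of integral that amortized inference lets a VAE sidestep.

```python
import math, random

# Illustrative 1-D toy (all parameters made up): prior z ~ N(0,1) and
# likelihood x|z ~ N(z, sigma2); the exact marginal is then N(0, 1 + sigma2),
# so a naive Monte Carlo estimate of p(x) = ∫ p(x|z) p(z) dz can be checked.

def gauss_pdf(x, mean, var):
    return math.exp(-(x - mean)**2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def marginal_mc(x, sigma2, n=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)           # sample the prior p(z)
        total += gauss_pdf(x, z, sigma2)  # average the likelihood p(x|z)
    return total / n

x, sigma2 = 0.7, 0.25
est = marginal_mc(x, sigma2)
exact = gauss_pdf(x, 0.0, 1.0 + sigma2)
```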
It is now possible to define the set of relationships between the input data and its latent representation as:

Prior: pθ(z)
Likelihood: pθ(x|z)
Posterior: pθ(z|x)

Unfortunately, the computation of pθ(x) is expensive and in most cases intractable. To make the computation feasible, it is necessary to introduce a further function to approximate the posterior distribution:

  qϕ(z|x) ≈ pθ(z|x),

with ϕ defined as the set of real values that parametrize q. This is sometimes called amortized inference, since by "investing" in finding a good qϕ, one can later infer z from x quickly without doing any integrals.
In this way, the problem is to find a good probabilistic autoencoder, in which the conditional likelihood distribution pθ(x|z) is computed by the probabilistic decoder, and the approximated posterior distribution qϕ(z|x) is computed by the probabilistic encoder.
Parametrize the encoder as Eϕ, and the decoder as Dθ.
Evidence lower bound (ELBO):
As in every deep learning problem, it is necessary to define a differentiable loss function in order to update the network weights through backpropagation.
For variational autoencoders, the idea is to jointly optimize the generative model parameters θ to reduce the reconstruction error between the input and the output, and ϕ to make qϕ(z|x) as close as possible to pθ(z|x) . As reconstruction loss, mean squared error and cross entropy are often used.
As distance loss between the two distributions, the Kullback–Leibler divergence DKL(qϕ(z|x) ∥ pθ(z|x)) is a good choice to squeeze qϕ(z|x) under pθ(z|x). The distance loss just defined is expanded as

  DKL(qϕ(z|x) ∥ pθ(z|x)) = E_{z∼qϕ(⋅|x)}[ln qϕ(z|x) − ln pθ(z|x)] = ln pθ(x) + E_{z∼qϕ(⋅|x)}[ln (qϕ(z|x) / pθ(x,z))].

Now define the evidence lower bound (ELBO):

  L_{θ,ϕ}(x) := E_{z∼qϕ(⋅|x)}[ln (pθ(x,z) / qϕ(z|x))] = ln pθ(x) − DKL(qϕ(⋅|x) ∥ pθ(⋅|x)).

Maximizing the ELBO is equivalent to simultaneously maximizing ln pθ(x) and minimizing DKL(qϕ(z|x) ∥ pθ(z|x)). That is, maximizing the log-likelihood of the observed data, and minimizing the divergence of the approximate posterior qϕ(⋅|x) from the exact posterior pθ(⋅|x).

The form given is not very convenient for maximization, but the following, equivalent form, is:

  L_{θ,ϕ}(x) = E_{z∼qϕ(⋅|x)}[ln pθ(x|z)] − DKL(qϕ(⋅|x) ∥ pθ(⋅)),

where ln pθ(x|z) is implemented as −½‖x − Dθ(z)‖²₂, since that is, up to an additive constant, what x ∼ N(Dθ(z), I) yields. That is, we model the distribution of x conditional on z to be a Gaussian distribution centered on Dθ(z). The distributions qϕ(z|x) and pθ(z) are often also chosen to be Gaussians, as z|x ∼ N(Eϕ(x), σϕ(x)²I) and z ∼ N(0, I), with which we obtain, by the formula for the KL divergence of Gaussians:

  L_{θ,ϕ}(x) = −½ E_{z∼qϕ(⋅|x)}[‖x − Dθ(z)‖²₂] − ½ (N σϕ(x)² + ‖Eϕ(x)‖²₂ − 2N ln σϕ(x) − N) + Const.

Here N is the dimension of z. For a more detailed derivation and more interpretations of ELBO and its maximization, see its main page.
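For the common choice of a diagonal-Gaussian encoder and a standard-normal prior, the KL term of the ELBO has a closed form. A small sketch (illustrative names; it allows a per-dimension σ, of which the scalar σ used in the text is a special case) computes it and checks it against a Monte Carlo estimate:

```python
import numpy as np

# Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), checked against a Monte
# Carlo estimate. Names are illustrative; the text uses a scalar sigma, here a
# per-dimension sigma is allowed (the scalar case is sigma = [s, s, ...]).

def kl_diag_gauss_to_std(mu, sigma):
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))

mu = np.array([0.5, -1.0])
sigma = np.array([0.8, 1.5])

# Monte Carlo check of E_q[ log q(z) - log p(z) ] with z ~ N(mu, diag(sigma^2)):
rng = np.random.default_rng(0)
z = mu + sigma * rng.standard_normal((200_000, 2))
log_q = -0.5 * np.sum(((z - mu) / sigma)**2 + 2.0 * np.log(sigma) + np.log(2.0 * np.pi), axis=1)
log_p = -0.5 * np.sum(z**2 + np.log(2.0 * np.pi), axis=1)
mc = np.mean(log_q - log_p)
```

The KL term vanishes exactly when the approximate posterior already equals the prior (μ = 0, σ = 1).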
Reparameterization:
To efficiently search for the parameters θ and ϕ that maximize the ELBO, the typical method is gradient ascent (equivalently, gradient descent on the negative ELBO).
It is straightforward to move the gradient with respect to θ inside the expectation, since θ does not appear in qϕ(z|x):

  ∇θ E_{z∼qϕ(⋅|x)}[ln (pθ(x,z) / qϕ(z|x))] = E_{z∼qϕ(⋅|x)}[∇θ ln pθ(x,z)].

However, the gradient with respect to ϕ does not allow one to put the ∇ϕ inside the expectation, since ϕ appears in the probability distribution itself. The reparameterization trick (also known as stochastic backpropagation) bypasses this difficulty. The most important example is when z ∼ qϕ(⋅|x) is normally distributed, as N(μϕ(x), Σϕ(x)). This can be reparametrized by letting ε ∼ N(0, I) be a "standard random number generator", and constructing z as z = μϕ(x) + Lϕ(x)ε. Here, Lϕ(x) is obtained by the Cholesky decomposition Σϕ(x) = Lϕ(x) Lϕ(x)ᵀ. Then

  ∇ϕ E_{z∼qϕ(⋅|x)}[f(z)] = E_{ε∼N(0,I)}[∇ϕ f(μϕ(x) + Lϕ(x)ε)],

and so we obtain an unbiased estimator of the gradient, allowing stochastic gradient descent.
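A minimal numerical sketch of the trick (toy values, no neural networks; all names are mine): drawing ε from N(0, I) and mapping it through z = μ + Lε, with L the Cholesky factor of Σ, reproduces samples from N(μ, Σ), so the sampling step becomes a deterministic, differentiable function of μ and L.

```python
import numpy as np

# Reparameterization sketch (toy values, no neural networks): z = mu + L @ eps
# with eps ~ N(0, I) and L the Cholesky factor of Sigma yields z ~ N(mu, Sigma),
# turning the sampling step into a deterministic map of (mu, L).

rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
L = np.linalg.cholesky(Sigma)

eps = rng.standard_normal((200_000, 2))  # the "standard random number generator"
z = mu + eps @ L.T                       # reparameterized samples

sample_mean = z.mean(axis=0)
sample_cov = np.cov(z.T)
```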
Since we reparametrized z, we need to find qϕ(z|x). Let q0 be the probability density function for ε. Then, by the change-of-variables formula,

  ln qϕ(z|x) = ln q0(ε) − ln |det(∂εz)|,

where ∂εz is the Jacobian matrix of z with respect to ε. Since z = μϕ(x) + Lϕ(x)ε, this is

  ln qϕ(z|x) = −½ ‖ε‖² − ln |det Lϕ(x)| − (N/2) ln(2π).
Variations:
Many applications and extensions of variational autoencoders have been used to adapt the architecture to other domains and improve its performance. β-VAE is an implementation with a weighted Kullback–Leibler divergence term to automatically discover and interpret factorised latent representations. With this implementation, it is possible to force manifold disentanglement for β values greater than one. This architecture can discover disentangled latent factors without supervision. The conditional VAE (CVAE) inserts label information in the latent space to force a deterministic constrained representation of the learned data. Some structures directly deal with the quality of the generated samples or implement more than one latent space to further improve the representation learning. Some architectures mix VAE and generative adversarial networks to obtain hybrid models.
**Fire whirl**
A fire whirl or fire devil (sometimes referred to as a fire tornado) is a whirlwind induced by a fire and often (at least partially) composed of flame or ash. These start with a whirl of wind, often made visible by smoke, and may occur when intense rising heat and turbulent wind conditions combine to form whirling eddies of air. These eddies can contract into a tornado-like vortex that sucks in debris and combustible gases.
The phenomenon is sometimes labeled a fire tornado, firenado, fire swirl, or fire twister, but these terms usually refer to a separate phenomenon where a fire has such intensity that it generates an actual tornado. Fire whirls are not usually classifiable as tornadoes, as the vortex in most cases does not extend from the surface to cloud base. Even in such cases, those fire whirls are very rarely classic tornadoes, as their vorticity derives from surface winds and heat-induced lifting, rather than from a tornadic mesocyclone aloft. The tornado-generating phenomenon was first verified in the 2003 Canberra bushfires and has since been verified in the 2018 Carr Fire in California and the 2020 Loyalton Fire in California and Nevada.
Formation:
A fire whirl consists of a burning core and a rotating pocket of air. A fire whirl can reach up to 2,000 °F (1,090 °C). Fire whirls become frequent when a wildfire, or especially a firestorm, creates its own wind, which can spawn large vortices. Even bonfires often have whirls on a smaller scale, and tiny fire whirls have been generated by very small fires in laboratories.

Most of the largest fire whirls are spawned from wildfires. They form when a warm updraft and convergence from the wildfire are present. They are usually 10–50 m (33–164 ft) tall, a few meters (several feet) wide, and last only a few minutes. Some, however, can be more than 1 km (0.6 mi) tall, contain wind speeds over 200 km/h (120 mph), and persist for more than 20 minutes.

Fire whirls can uproot trees that are 15 m (49 ft) tall or more. They can also aid the 'spotting' ability of wildfires to propagate and start new fires as they lift burning materials such as tree bark. These burning embers can be blown away from the fire-ground by the stronger winds aloft.
Fire whirls can be common in the vicinity of a plume during a volcanic eruption. These range from small to large and form from a variety of mechanisms, including those akin to typical fire whirl processes. They can result in cumulonimbus flammagenitus clouds spawning landspouts and waterspouts, or even in mesocyclone-like updraft rotation of the plume itself and/or of the cumulonimbi, which can spawn tornadoes similar to those in supercells. Pyrocumulonimbi generated by large fires on rare occasions also develop in a similar way.
Classification:
There are currently three widely recognized types of fire whirls:

Type 1: Stable and centered over the burning area.
Type 2: Stable or transient, downwind of the burning area.
Type 3: Steady or transient, centered over an open area adjacent to an asymmetric burning area with wind.

There is evidence suggesting that the fire whirl in the Hifukusho-ato area, during the 1923 Great Kantō earthquake, was of type 3. Other mechanisms and fire whirl dynamics may exist. A broader classification of fire whirls, suggested by Forman A. Williams, includes five categories:

Whirls generated by fuel distribution in wind
Whirls above fuels in pools or on water
Tilted fire whirls
Moving fire whirls
Whirls modified by vortex breakdown

The meteorological community views some fire-induced phenomena as atmospheric phenomena. Using the pyro- prefix, fire-induced clouds are called pyrocumulus and pyrocumulonimbus. Larger fire vortices are similarly being viewed. Based on vortex scale, the classification terms "pyronado", "pyrotornado", and "pyromesocyclone" have been proposed.
Notable examples:
During the 1871 Peshtigo fire, the community of Williamsonville, Wisconsin, was burned by a fire whirl; the area where Williamsonville once stood is now Tornado Memorial County Park.

An extreme example is the 1923 Great Kantō earthquake in Japan, which ignited a city-sized firestorm that in turn produced a gigantic fire whirl that killed 38,000 people in fifteen minutes in the Hifukusho-Ato region of Tokyo.

Numerous large fire whirls (some tornadic) developed after lightning struck an oil storage facility near San Luis Obispo, California, on 7 April 1926, producing significant structural damage well away from the fire and killing two. Many whirlwinds were produced by the four-day-long firestorm, coincident with conditions that produced severe thunderstorms, in which the larger fire whirls carried debris 5 km (3.1 mi) away.

Fire whirls were produced in the conflagrations and firestorms triggered by firebombings of European and Japanese cities during World War II and by the atomic bombings of Hiroshima and Nagasaki. Fire whirls associated with the bombing of Hamburg, particularly those of 27–28 July 1943, were studied.

Throughout the 1960s and 1970s, particularly in 1978–1979, fire whirls ranging from the transient and very small to intense, long-lived tornado-like vortices capable of causing significant damage were spawned by fires generated from the 1000 MW Météotron, a series of large oil burners located in the Lannemezan plain of France used for testing atmospheric motions and thermodynamics.

During the 2003 Canberra bushfires in Canberra, Australia, a violent fire whirl was documented. It was calculated to have horizontal winds of 160 mph (260 km/h) and vertical air speed of 93 mph (150 km/h), causing the flashover of 300 acres (120 ha) in 0.04 seconds. It was the first known fire whirl in Australia to have EF3 wind speeds on the Enhanced Fujita scale.

A fire whirl, of reportedly uncommon size for New Zealand wildfires, formed on day three of the 2017 Port Hills fires in Christchurch. Pilots estimated the fire column to be 100 m (330 ft) high.

Residents of the city of Redding, California, while evacuating from the massive Carr Fire in late July 2018, reported seeing pyrocumulonimbus clouds and tornado-like behaviour from the firestorm, resulting in uprooted trees, cars, structures and other wind-related damage in addition to the fire itself. As of August 2, 2018, a preliminary damage survey led by the National Weather Service (NWS) in Sacramento, California, rated the July 26 fire whirl as an EF3 tornado with winds in excess of 143 mph (230 km/h).

On August 15, 2020, for the first time in its history, the U.S. National Weather Service issued a tornado warning for a pyrocumulonimbus cloud created by a wildfire near Loyalton, California, capable of producing a fire tornado.
Blue whirl:
In controlled small-scale experiments, fire whirls are found to transition to a mode of combustion called blue whirls. The name blue whirl was coined because the soot production is negligible, leading to the disappearance of the yellow color typical of a fire whirl. Blue whirls are partially premixed flames that reside elevated in the recirculation region of the vortex-breakdown bubble. The flame length and burning rate of a blue whirl are smaller than those of a fire whirl.
**Engagement Skills Trainer**
The Engagement Skills Trainer is a simulator that provides marksmanship training and trains soldiers on virtually all aspects of firearms training, from calibrating weapons, to weapons qualification, to collective fire scenarios in numerous environments.
Users:
Jordan
Kyrgyzstan
Lebanon: The Lebanese Armed Forces operate five 10-lane systems.
United States
**STK39**
STE20/SPS1-related proline-alanine-rich protein kinase is an enzyme that in humans is encoded by the STK39 gene. This gene encodes a serine/threonine kinase that is thought to function in the cellular stress response pathway. The kinase is activated in response to hypotonic stress, leading to phosphorylation of several cation-chloride-coupled cotransporters. The catalytically active kinase specifically activates the p38 MAP kinase pathway, and its interaction with p38 decreases upon cellular stress, suggesting that this kinase may serve as an intermediate in the response to cellular stress.
STK39:
Some studies suggest that this gene might be linked to high blood pressure.
**Upington disease**
Upington disease:
Upington disease is an extremely rare autosomal dominant malformation disorder. Only one published source claims its existence, in three generations of a single family from South Africa.
Presentation:
The disease is characterized by Perthes-like pelvic anomalies (premature closure of the capital femoral epiphyses and widened femoral necks with flattened femoral heads), enchondromata and ecchondromata.
Genetics:
Upington disease is inherited in an autosomal dominant manner. This means the defective gene is located on an autosome, and one copy of the defective gene is sufficient to cause the disorder, when inherited from a parent who has the disorder.
Eponym:
The name Upington refers to the city in the Northern Cape Province, South Africa, from where the family originates.
**Log bucking**
Log bucking:
Bucking is the process of cutting a felled and delimbed tree into logs. Significant value can be lost by sub-optimal bucking because logs destined for plywood, lumber, and pulp each have their own value and specifications for length, diameter, and defects. Cutting from the top down is overbucking and from the bottom up is underbucking.
In British English, the process is called logging-up or crosscutting.
Methods:
A felled and delimbed tree is cut into logs of standard sizes, a process called bucking. A logger who specialises in this job is a buck sawyer.
Methods:
Bucking may be done in a variety of ways depending on the logging operation. Trees that have been previously felled and moved to a landing with a log skidder are spread out for processing. While many of the limbs have broken off during transport, the remaining limbs and stubs must be trimmed. The bucker will anchor the end of an auto-rewinding tape measure, which is attached to his belt, and walk down the log, trimming as he goes. The tape is anchored gently with a bent horseshoe nail in the bark so it can be jerked loose when the measurement is completed. When a suitable place to buck the tree is located, the cut is made. Significant value may be lost by sub-optimal bucking. Local market conditions will determine the particular lengths cut. It is common for log buyers to issue purchase orders for the length, diameter, grade, and species that they will accept. On the West Coast, common cuts on a large pine or fir tree are three 32's and a 10 (three 32-foot logs and one 10-foot log). There are often different prices for different items.
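Choosing where to cut so as to maximize total value is naturally modeled as a small dynamic program, essentially the classic rod-cutting problem. The sketch below is illustrative only: the price table, lengths, and `best_bucking` function are hypothetical assumptions, not actual market figures or an industry tool.

```python
# Illustrative sketch: optimal bucking as rod-cutting dynamic programming.
# Prices and lengths below are hypothetical, not real market data.

def best_bucking(tree_length, prices):
    """Return (max value, list of cut lengths) for a stem of integer length,
    given a dict mapping allowable log lengths to their values."""
    best = [0.0] * (tree_length + 1)   # best[i] = max value of a stem of length i
    choice = [0] * (tree_length + 1)   # first log length in an optimal plan for i
    for i in range(1, tree_length + 1):
        for log_len, log_value in prices.items():
            if log_len <= i and best[i - log_len] + log_value > best[i]:
                best[i] = best[i - log_len] + log_value
                choice[i] = log_len
    # Recover the list of cuts from the choice table.
    cuts, i = [], tree_length
    while i > 0 and choice[i] > 0:
        cuts.append(choice[i])
        i -= choice[i]
    return best[tree_length], cuts

# Hypothetical prices per log length (feet): long sawlogs pay more per foot.
prices = {8: 20.0, 10: 30.0, 16: 55.0, 32: 120.0}
value, cuts = best_bucking(106, prices)  # a 106 ft stem: three 32's and a 10
```

With this (made-up) price table, the optimizer reproduces the "three 32's and a 10" pattern mentioned above for a 106-foot stem; real bucking optimizers also account for diameter, taper, grade, and defects.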
Methods:
The person bucking is generally called a buck sawyer or buck logger, or just a bucker, and runs as many saws as he can, switching saws as soon as one is dull. The reason for this is that the buck sawyer is typically paid per section of log he cuts. Generally, buck loggers at smaller sawmills are not fully mechanized. This part of the logging process is perhaps more dangerous than the felling of trees. The buck logger is usually cutting from the edge of a tree pile, which can be 20 feet high and as long as there is room to dump the trees from the truck. Each tree must be picked out of the pile and cut so that a controlled fall of more trees can be worked as the former fall is cut and skidded to its respective pile.
Terminology:
The pieces of bucked logs may be known by several names. Bolts are the pieces of a log which has been bucked into specific lengths of less than 8 feet (2.4 m), especially short lengths. The etymology of bolt is related to being short and stout, and to knock and strike, possibly because bolts were traditionally split into wood shingles, treenails, clapboards, etc. These pieces may be more specifically known as peeler, shingle, stave, or pulpwood bolts. Billet is variously defined as a short piece of round or partially round wood (usually of smaller diameter than a block or bolt), as a piece split or cut from a bolt, or sometimes as synonymous with bolt, particularly when the pieces are intended as firewood; it sometimes means a piece of a billet after it has been split. Round is often associated with lengths of un-split firewood.
**Meta-scheduling**
Meta-scheduling:
Meta-scheduling or super scheduling is a computer software technique of optimizing computational workloads by combining an organization's multiple job schedulers into a single aggregated view, allowing batch jobs to be directed to the best location for execution.
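The aggregated-view idea can be sketched in a few lines: a meta-scheduler holds handles to several local schedulers and routes each incoming batch job to whichever location currently looks best. The `Cluster` model and the "most free slots" policy below are illustrative assumptions of mine; real meta-schedulers such as GridWay or Moab use far richer placement policies.

```python
# Minimal sketch of meta-scheduling: one aggregated view over several
# local job schedulers, dispatching each batch job to the "best" location.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    """Illustrative stand-in for a site's local job scheduler."""
    name: str
    free_slots: int
    queue: list = field(default_factory=list)

    def submit(self, job):
        self.queue.append(job)
        self.free_slots -= 1

class MetaScheduler:
    """Aggregates clusters; routes each job to the one with most free slots."""
    def __init__(self, clusters):
        self.clusters = clusters

    def submit(self, job):
        best = max(self.clusters, key=lambda c: c.free_slots)
        if best.free_slots <= 0:
            raise RuntimeError("no capacity at any location")
        best.submit(job)
        return best.name

meta = MetaScheduler([Cluster("hpc-a", 2), Cluster("hpc-b", 1)])
placements = [meta.submit(f"job{i}") for i in range(3)]
```

The point of the pattern is that submitters see one interface while jobs land on different back-ends; the placement policy is the pluggable part.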
Meta-Scheduling for MPSoCs:
The meta-scheduling technique is a solution for scheduling a set of dependent or independent jobs under different fault scenarios, which are mapped and modeled in an event tree. It can be used as a dynamic or a static scheduling method.
Scenario-Based Meta-Scheduling (SBMeS) for MPSoCs and NoCs:
Scenario-based and multi-mode approaches are essential techniques in embedded systems, e.g., in design-space exploration for MPSoCs and in reconfigurable systems.
Optimization techniques for the generation of schedule graphs supporting such an SBMeS approach have been developed and implemented. SBMeS can offer better performance by reducing dynamic scheduling overhead and recovering from faults.
Implementations:
The following is a partial list of noteworthy open source and commercial meta-schedulers currently available.
GridWay by the Globus Alliance
Community Scheduler Framework by Platform Computing & Jilin University
MP Synergy by United Devices
Moab Cluster Suite and Maui Cluster Scheduler from Adaptive Computing
DIOGENES (DIstributed Optimal GENEtic algorithm for grid applications Scheduling; started project)
SynfiniWay's meta-scheduler
MeS, designed by Dr.-Ing. Babak Sorkhpour and Prof. Dr.-Ing. Roman Obermaisser at the Chair for Embedded Systems at the University of Siegen, generates schedules for anticipated scenario changes in energy-efficient, robust, and adaptive time-triggered systems (multi-core architectures with networks-on-chip).
Accelerator Plus runs jobs by using host jobs in an underlying workload manager. This approach achieves high job throughput by distributing the processing load associated with submitting and managing jobs.
**Position operator**
Position operator:
In quantum mechanics, the position operator is the operator that corresponds to the position observable of a particle.
Position operator:
When the position operator is considered with a wide enough domain (e.g. the space of tempered distributions), its eigenvalues are the possible position vectors of the particle.

In one dimension, if by the symbol |x⟩ we denote the unitary eigenvector of the position operator corresponding to the eigenvalue x, then |x⟩ represents the state of the particle in which we know with certainty to find the particle itself at position x. Therefore, denoting the position operator by the symbol X (in the literature we also find other symbols for the position operator, for instance Q, from Lagrangian mechanics, and x̂), we can write

X|x⟩ = x|x⟩

for every real position x. One possible realization of the unitary state with position x is the Dirac delta (function) distribution centered at the position x, often denoted by δx.

In quantum mechanics, the ordered (continuous) family (δx)x∈ℝ of all Dirac distributions is called the (unitary) position basis (in one dimension), just because it is a (unitary) eigenbasis of the position operator X in the space of distributions dual to the space of wave functions. It is fundamental to observe that there exists only one linear continuous endomorphism X on the space of tempered distributions such that

X(δx) = x δx

for every real point x. It is possible to prove that this unique endomorphism is necessarily defined by

X(ψ) = x ψ

for every tempered distribution ψ, where x denotes the coordinate function of the position line, defined from the real line into the complex plane by x(t) = t.
Introduction:
In one dimension – for a particle confined into a straight line – the square modulus of a normalized square integrable wave-function represents the probability density of finding the particle at some position x of the real-line, at a certain time.
Introduction:
In other terms, if at a certain instant of time the particle is in the state represented by a square-integrable wave function ψ, and assuming the wave function ψ to be of L2-norm equal to 1, then the probability to find the particle in the position range [a, b] is

P(a ≤ X ≤ b) = ∫[a,b] |ψ(x)|² dx.

Hence the expected value of a measurement of the position X for the particle is the value

⟨X⟩ψ = ∫ℝ x |ψ(x)|² dx,

where: the particle is assumed to be in the state ψ; the function x|ψ|² is supposed integrable, i.e. of class L1; and we indicate by x the coordinate function of the position axis. Additionally, the quantum mechanical operator corresponding to the observable position X is also denoted by X̂ and defined by

(X̂ ψ)(x) = x ψ(x)

for every wave function ψ and for every point x of the real line.
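Both quantities above, the probability of finding the particle in a range [a, b] and the expected position ⟨X⟩ = ∫ x|ψ(x)|² dx, are easy to evaluate numerically. The grid, the Gaussian wave function, and the Riemann-sum quadrature below are my own illustrative choices, not part of the text.

```python
# Numerical sketch (illustrative discretization): expected position and
# interval probability for a normalized Gaussian wave function at x0 = 1.5.
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
x0, sigma = 1.5, 0.7
psi = np.exp(-(x - x0) ** 2 / (4 * sigma ** 2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # enforce L2-norm 1

density = np.abs(psi) ** 2                      # probability density |psi|^2
expected_x = np.sum(x * density) * dx           # <X> = integral of x |psi|^2 dx

# probability of finding the particle in the position range [a, b]
a, b = 0.0, 3.0
in_range = (x >= a) & (x <= b)
p_ab = np.sum(density[in_range]) * dx
```

For this state the expectation lands (up to grid error) on the Gaussian's center x0 = 1.5, and the probability of the range [0, 3] is close to 1, as the wave packet is well localized inside it.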
Introduction:
The circumflex over the x on the left side indicates the presence of an operator, so that this equation may be read: the result of the position operator X̂ acting on any wave function ψ equals the coordinate function x multiplied by the wave function ψ. Or more simply: the operator X̂ multiplies any wave function ψ by the coordinate function x. Note 1. To be more explicit, we have introduced the coordinate function x(t) = t, which simply imbeds the position line into the complex plane; it is nothing more than the canonical embedding of the real line into the complex plane.
Introduction:
Note 2. The expected value of the position operator upon a wave function (state) ψ can be reinterpreted as a scalar product:

⟨X⟩ψ = ⟨ψ | X̂ ψ⟩ = ∫ℝ x |ψ(x)|² dx,

assuming the particle is in the state ψ ∈ L2 and assuming the function xψ to be of class L2, which immediately implies that the function x|ψ|² is integrable, i.e. of class L1. Note 3. Strictly speaking, the observable position X can be point-wisely defined as

(X̂ ψ)(x) = x ψ(x)

for every wave function ψ and for every point x of the real line, upon the wave functions which are precisely point-wise defined functions. In the case of equivalence classes ψ ∈ L2, the definition reads directly as

X̂ ψ = x ψ

for every wave function ψ ∈ L2.
Basic properties:
In the above definition, as the careful reader can immediately remark, there is no clear specification of domain and codomain for the position operator (in the case of a particle confined to a line). In the literature, more or less explicitly, we find essentially three main directions on this fundamental issue.
Basic properties:
The position operator is defined on the subspace DX of L2 formed by those equivalence classes ψ whose product by the imbedding x lives in the space L2 as well. In this case the position operator turns out not to be continuous (it is unbounded with respect to the topology induced by the canonical scalar product of L2), with no eigenvectors and no eigenvalues, and consequently with empty eigenspectrum (collection of its eigenvalues).
Basic properties:
The position operator is defined on the space S1 of complex-valued Schwartz functions (smooth complex functions defined on the real line and rapidly decreasing at infinity together with all their derivatives). The product of a Schwartz function by the imbedding x always lives in the space S1, which is a subset of L2. In this case the position operator turns out to be continuous (with respect to the canonical topology of S1) and injective, with no eigenvectors and no eigenvalues, consequently with empty eigenspectrum (collection of its eigenvalues). It is (fully) self-adjoint with respect to the scalar product of L2, in the sense that

⟨X̂ψ | φ⟩ = ⟨ψ | X̂φ⟩

for every ψ and φ belonging to its domain S1. This is, in practice, the most widely adopted choice in the quantum mechanics literature, although never explicitly underlined.

The position operator is defined on the space S1′ of complex-valued tempered distributions (the topological dual of the Schwartz function space S1). The product of a tempered distribution by the imbedding x always lives in the space S1′, which contains L2. In this case the position operator turns out to be continuous (with respect to the canonical topology of S1′) and surjective, endowed with complete families of eigenvectors, real eigenvalues, and with eigenspectrum (collection of its eigenvalues) equal to the real line. It is self-adjoint with respect to the scalar product of L2 in the sense that its transpose operator, which is the position operator on the Schwartz function space, is self-adjoint:

⟨X̂φ | ψ⟩ = ⟨φ | X̂ψ⟩

for every (test) function φ and ψ belonging to the space S1.
Eigenstates:
The eigenfunctions of the position operator (on the space of tempered distributions), represented in position space, are Dirac delta functions.
Eigenstates:
Informal proof. To show that the possible eigenvectors of the position operator must necessarily be Dirac delta distributions, suppose that ψ is an eigenstate of the position operator with eigenvalue x0. We write the eigenvalue equation in position coordinates,

x ψ(x) = x0 ψ(x),

recalling that x̂ simply multiplies the wave functions by the function x in the position representation. Since the function x is variable while x0 is a constant, ψ must be zero everywhere except at the point x0. Clearly, no continuous function satisfies such properties, and we cannot simply define the wave function to be a complex number at that point, because its L2-norm would be 0 and not 1. This suggests the need for a "functional object" concentrated at the point x0 and with integral different from 0: any multiple of the Dirac delta centered at x0. Q.E.D.
Eigenstates:
The normalized solution to the equation x ψ = x0 ψ is

ψ(x) = δ(x − x0),

or better ψ = δx0. Proof. Here we prove rigorously that x δx0 = x0 δx0. Indeed, recalling that the product of any function by the Dirac distribution centered at a point is the value of the function at that point times the Dirac distribution itself, we obtain immediately

x δx0 = x(x0) δx0 = x0 δx0. Q.E.D.
Eigenstates:
Meaning of the Dirac delta wave. Although such Dirac states are physically unrealizable and, strictly speaking, not functions, the Dirac distribution centered at x0 can be thought of as an "ideal state" whose position is known exactly (any measurement of the position always returns the eigenvalue x0). Hence, by the uncertainty principle, nothing is known about the momentum of such a state.
Three dimensions:
The generalisation to three dimensions is straightforward.
The space-time wave function is now ψ(r, t), and the expectation value of the position operator r̂ in the state ψ is

⟨r̂⟩ψ = ∫ r |ψ(r, t)|² d³r,

where the integral is taken over all space. The position operator is

r̂ ψ = r ψ.
Momentum space:
Usually, in quantum mechanics, by representation in the momentum space we mean the representation of states and observables with respect to the canonical unitary momentum basis. In momentum space, the position operator in one dimension is represented by the differential operator

(X̂)P = iℏ d/dp = i d/dk,

where: the representation of the position operator in the momentum basis is naturally defined by (X̂)P ψP = (X̂ψ)P, for every wave function (tempered distribution) ψ; p represents the coordinate function on the momentum line; and the wave-vector function k is defined by k = p/ℏ.
Formalism in L2(R, C):
Consider, for example, the case of a spinless particle moving in one spatial dimension (i.e. in a line). The state space for such a particle contains the L2-space (Hilbert space) L2(R,C) of complex-valued and square-integrable (with respect to the Lebesgue measure) functions on the real line.
The position operator in L2(ℝ, ℂ) is pointwise defined by

(Q ψ)(x) = x ψ(x)

for each pointwisely defined square-integrable class ψ ∈ D(Q) and for each real number x, with domain

D(Q) = {ψ ∈ L2(ℝ, ℂ) : q ψ ∈ L2(ℝ, ℂ)},

where q : ℝ → ℂ is the coordinate function sending each point x ∈ ℝ to itself.
Since all continuous functions with compact support lie in D(Q), Q is densely defined. Q, being simply multiplication by x, is a self-adjoint operator, thus satisfying the requirement of a quantum mechanical observable.
Immediately from the definition we can deduce that the spectrum consists of the entire real line and that Q has purely continuous spectrum, therefore no discrete eigenvalues.
The three-dimensional case is defined analogously. We shall keep the one-dimensional assumption in the following discussion.
Measurement theory in L2(R, C):
As with any quantum mechanical observable, in order to discuss position measurement we need to calculate the spectral resolution of the position operator

X = ∫ℝ λ dμX(λ),

where μX is the so-called spectral measure of the position operator.
Since the operator X is just multiplication by the embedding function x, its spectral resolution is simple.
Measurement theory in L2(R, C):
For a Borel subset B of the real line, let χB denote the indicator function of B. We see that the projection-valued measure is given by

μX(B) ψ = χB ψ,

i.e., the orthogonal projection μX(B) is the operator of multiplication by the indicator function of B. Therefore, if the system is prepared in a state ψ, then the probability of the measured position of the particle belonging to a Borel set B is

P(X ∈ B) = ‖χB ψ‖² = ∫B |ψ(x)|² dμ(x),

where μ is the Lebesgue measure on the real line.
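The indicator-function picture of the spectral measure can be checked numerically: on a grid, the projection is literally pointwise multiplication by χB, the probability is the squared norm of the projected state, and the renormalized projection is the post-measurement state. The grid and the Gaussian state below are my own illustrative assumptions, not from the text.

```python
# Numerical sketch (illustrative discretization): the projection mu_X(B)
# acts as multiplication by the indicator chi_B; the probability is
# ||chi_B psi||^2; the collapsed state is the renormalized projection.
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x ** 2 / 4.0)                      # Gaussian wave function
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)    # L2-norm 1

chi_B = ((x >= 0.0) & (x <= 1.0)).astype(float)  # indicator of B = [0, 1]
projected = chi_B * psi                          # mu_X(B) psi
prob = np.sum(np.abs(projected) ** 2) * dx       # P(X in B) = ||chi_B psi||^2

collapsed = projected / np.sqrt(prob)            # post-measurement state
norm_after = np.sum(np.abs(collapsed) ** 2) * dx # back to L2-norm 1
```

Note that applying χB twice changes nothing (χB² = χB), which is exactly the projection property of μX(B).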
Measurement theory in L2(R, C):
After any measurement aiming to detect the particle within the subset B, the wave function collapses to either

χB ψ / ‖χB ψ‖  or  (1 − χB) ψ / ‖(1 − χB) ψ‖,

where ‖·‖ is the Hilbert space norm on L2(ℝ, ℂ).
**Ross–Fahroo lemma**
Ross–Fahroo lemma:
Named after I. Michael Ross and F. Fahroo, the Ross–Fahroo lemma is a fundamental result in optimal control theory. It states that dualization and discretization are, in general, non-commutative operations. The operations can be made commutative by an application of the covector mapping principle.
Description of the theory:
A continuous-time optimal control problem is information-rich. A number of interesting properties of a given problem can be derived by applying Pontryagin's minimum principle or the Hamilton–Jacobi–Bellman equations. These theories implicitly use the continuity of time in their derivation. When an optimal control problem is discretized, the Ross–Fahroo lemma asserts that there is a fundamental loss of information. This loss can be in the primal variables, such as the value of the control at one or both of the boundary points, or in the dual variables, such as the value of the Hamiltonian over the time horizon. To address the information loss, Ross and Fahroo introduced the concept of closure conditions, which allow the known lost information to be put back in. This is done by an application of the covector mapping principle.
Applications to pseudospectral optimal control:
When pseudospectral methods are applied to discretize optimal control problems, the implications of the Ross–Fahroo lemma appear in the form of the discrete covectors seemingly being discretized by the transpose of the differentiation matrix. When the covector mapping principle is applied, it reveals the proper transformation for the adjoints. Application of the transformation generates the Ross–Fahroo pseudospectral methods.