**Keratin 2A**
Keratin 2A:
Keratin 2A, also known as keratin 2E or keratin 2, is a protein that in humans is encoded by the KRT2A gene. Keratin 2A is a type II cytokeratin. It is found largely in the upper spinous layer of epidermal keratinocytes, and mutations in the gene encoding this protein have been associated with ichthyosis bullosa of Siemens. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Floating airport**
Floating airport:
A floating airport is an airport built and situated on a very large floating structure (VLFS) located many miles out at sea utilizing a flotation type of device or devices such as pneumatic stabilized platform (PSP) technology.
As the population increases and land becomes more expensive and scarce, very large floating structures (VLFS) such as floating airports could help solve land use, pollution and aircraft noise issues.
Early history:
The first discussion of a floating airport concerned trans-Atlantic flights. At that time a passenger aircraft capable of making the trip could be built, but the massive fuel load required for the flight left it with a limited payload. An article in the January 1930 issue of Popular Mechanics proposed a model of a floating airport located in the Atlantic. To make safe flight possible with the aviation technology of that time, it called for eight such airports in the Atlantic. Unlike later floating airport ideas, which were free floating, this 1930 concept was a floating platform with stabilizer legs that prevented the flight deck from pitching and rolling, similar in concept to some of today's offshore oil rigs. The cost of establishing eight such floating airports in 1930 was estimated at approximately US$12,000,000, equivalent to $156,609,000 in 2021. The idea of floating airports received fresh attention in 1935 when the famous French aviation pioneer and aircraft builder Louis Blériot gave one of his last interviews, in which he made the case for installing some in the mid-Atlantic; he called them "seadromes" and presented them as a solution to economical trans-Atlantic passenger flights.
Description:
In theory, issues and problems of land-based airports could be minimized by locating airports several miles off the coast. Takeoffs and landings would be over water, not over populated areas, thereby eliminating noise pollution and reducing risks of aircraft crashes to the land-locked population.
Description:
Since little of the ocean's surface is currently being used for human activity, growth and alterations in configuration would be relatively easy to achieve with minimal impact to the environment or to local residents who would utilize the airport. Water taxis or other high speed surface vessels would be a part of an offshore mass transit system that could connect the floating airport to coastal communities and minimize traffic issues.
Description:
A floating structure, such as a floating airport, is theorized to have less impact on the environment than the land-based alternative. It would not require much, if any, dredging or moving of mountains or clearing of green space and the floating structure provides a reef-like environment conducive to marine life. In theory, wave energy could be harnessed, using the structure to convert waves into energy to help sustain the energy needs of the airport.
Modern Floating airport projects:
In 2000, the Japanese Ministry of Land, Infrastructure, and Transport sponsored the construction of Mega-Float, a 1000-metre floating runway in Tokyo Bay. After conducting several real aircraft landings, the Ministry concluded that floating runways' hydro-elastic response would not affect aircraft operations, including precision instrument approaches in a protected waterway such as a large bay. The structure has been dismantled and is no longer in use.
Modern Floating airport projects:
The pneumatic stabilized platform (PSP) was proposed as a means for constructing a new floating airport for San Diego in the Pacific Ocean, at least three miles off the tip of Point Loma. However, this proposed design was rejected in October 2003 due to very high cost, the difficulty in accessing such an airport, the difficulty in transporting jet fuel, electricity, water, and gas to the structure, failure to address security concerns such as a bomb blast, inadequate room for high-speed exits and taxiways, and environmental concerns. Achmad Yani International Airport, the world's first floating airport, started construction on 17 June 2014 and was completed in 2018. However, only the passenger terminal and apron are floating. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PIGA**
PIGA:
Phosphatidylinositol N-acetylglucosaminyltransferase subunit A (PIG-A, or phosphatidylinositol glycan, class A) is the catalytic subunit of the phosphatidylinositol N-acetylglucosaminyltransferase enzyme, which in humans is encoded by the PIGA gene. This gene encodes a protein required for synthesis of N-acetylglucosaminyl phosphatidylinositol (GlcNAc-PI), the first intermediate in the biosynthetic pathway of the GPI anchor. The GPI anchor is a glycolipid found on many blood cells and serves to anchor proteins to the cell surface. Paroxysmal nocturnal hemoglobinuria, an acquired hematologic disorder, has been shown to result from somatic mutations in this gene. Alternate splice variants have been characterized. Multiple Congenital Anomalies-Hypotonia-Seizures syndrome type 2 (MCAHS2), also known as PIGA-CDG or PIGA deficiency, has been shown to result from germline mutations in the PIGA gene.
Interactions:
PIGA has been shown to interact with PIGQ. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**12alpha-hydroxysteroid dehydrogenase**
12alpha-hydroxysteroid dehydrogenase:
In enzymology, a 12alpha-hydroxysteroid dehydrogenase (EC 1.1.1.176) is an enzyme that catalyzes the chemical reaction: 3alpha,7alpha,12alpha-trihydroxy-5beta-cholanate + NADP+ ⇌ 3alpha,7alpha-dihydroxy-12-oxo-5beta-cholanate + NADPH + H+. Thus, the two substrates of this enzyme are 3alpha,7alpha,12alpha-trihydroxy-5beta-cholanate and NADP+, whereas its 3 products are 3alpha,7alpha-dihydroxy-12-oxo-5beta-cholanate, NADPH, and H+.
12alpha-hydroxysteroid dehydrogenase:
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donors with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 12alpha-hydroxysteroid:NADP+ 12-oxidoreductase. Other names in common use include 12alpha-hydroxy steroid dehydrogenase, NAD+-dependent 12alpha-hydroxysteroid dehydrogenase, and NADP+-12alpha-hydroxysteroid dehydrogenase. This enzyme is involved in the metabolic pathway by which cholesterol is degraded into bile acids. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Multiplicative partitions of factorials**
Multiplicative partitions of factorials:
Multiplicative partitions of factorials are expressions of values of the factorial function as products of powers of prime numbers. They have been studied by Paul Erdős and others. The factorial of a positive integer is a product of decreasing integer factors, which can in turn be factored into prime numbers. This means that any factorial can be written as a product of powers of primes. For example, 5! = 120 = 2^3 · 3 · 5. If we wish to write 5! as a product of factors of the form (p_k)^(b_k), where each p_k is a prime number, and the factors are sorted in nondecreasing order, then we have three ways of doing so: 120 = 2·2·2·3·5 = 2·3·4·5 = 3·5·8. The number of such "sorted multiplicative partitions" of n! grows with n, and is given by the sequence 1, 1, 3, 3, 10, 10, 30, 75, 220, 220, 588, 588, 1568, 3696, 11616, ... (sequence A085288 in the OEIS). Not all sorted multiplicative partitions of a given factorial have the same length. For example, the partitions of 5! above have lengths 4, 3 and 5. In other words, exactly one of the partitions of 5! has length 5. The number of sorted multiplicative partitions of n! that have length equal to n is 1 for n = 4 and n = 5, and thereafter increases as 2, 2, 5, 12, 31, 31, 78, 78, 191, 418, 1220, 1220, 3015, ... (sequence A085289 in the OEIS). Consider all sorted multiplicative partitions of n! that have length n, and find the partition whose first factor is the largest. (Since the first factor in a partition is the smallest within that partition, this means finding the maximum of all the minima.) Call this factor m(n). The value of m(n) is 2 for n = 4 and n = 5, and thereafter grows as 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 7, 7, 7, 7, 7, 7, ... (sequence A085290 in the OEIS). To express the asymptotic behavior of m(n), let α(n) = log m(n) / log n. As n tends to infinity, α(n) approaches a limiting value, the Alladi–Grinstead constant (named for the mathematicians Krishnaswami Alladi and Charles Grinstead). The decimal representation of the Alladi–Grinstead constant begins 0.80939402054063913071793188059409131721595399242500030424202871504... (sequence A085291 in the OEIS). The exact value of the constant can be written as the exponential of a certain infinite series. Explicitly, the constant equals e^(c−1), where c is given by c = Σ_{k=2}^∞ (1/k) ln(k/(k−1)). This sum can alternatively be expressed as follows, writing ζ(n) for the Riemann zeta function: c = Σ_{n=2}^∞ (ζ(n) − 1)/(n − 1). This series for the constant c converges more rapidly than the one before. The function m(n) is constant over stretches of n, but jumps from 5 to 7, skipping the value 6. Erdős raised the question of how large the gaps in the sequence of m(n) can grow, and how long the constant stretches can be. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
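The quantities above are easy to verify computationally. The following Python sketch, assuming only the definitions stated above (the helper names are illustrative), enumerates the sorted multiplicative partitions of n! into prime-power factors and computes m(n) as the largest minimum factor among partitions of length n.

```python
from itertools import product

def prime_exponents(n):
    """Exponent of each prime in n!, via Legendre's formula."""
    exps = {}
    for p in range(2, n + 1):
        if all(p % q for q in range(2, int(p ** 0.5) + 1)):  # p is prime
            e, q = 0, p
            while q <= n:
                e += n // q
                q *= p
            exps[p] = e
    return exps

def partitions(k):
    """All integer partitions of k, as non-increasing tuples."""
    def gen(rest, maxpart):
        if rest == 0:
            yield ()
            return
        for first in range(min(rest, maxpart), 0, -1):
            for tail in gen(rest - first, first):
                yield (first,) + tail
    return list(gen(k, k))

def sorted_multiplicative_partitions(n):
    """All sorted multiplicative partitions of n! into prime-power factors."""
    exps = prime_exponents(n)
    primes = sorted(exps)
    result = []
    for combo in product(*(partitions(exps[p]) for p in primes)):
        factors = sorted(p ** b for p, parts in zip(primes, combo) for b in parts)
        result.append(factors)
    return result

def m(n):
    """Largest minimum factor among the partitions of n! having length n."""
    mins = [min(f) for f in sorted_multiplicative_partitions(n) if len(f) == n]
    return max(mins) if mins else None

print(sorted_multiplicative_partitions(5))  # [[3, 5, 8], [2, 3, 4, 5], [2, 2, 2, 3, 5]]
print([m(n) for n in range(4, 12)])         # [2, 2, 2, 2, 2, 3, 3, 3]
```

The enumeration works because each prime's exponent can be split independently into parts, each part contributing one prime-power factor; the count of partitions is therefore the product of the integer-partition counts of the exponents.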
**Negligible function**
Negligible function:
In mathematics, a negligible function is a function μ:N→R such that for every positive integer c there exists an integer N_c such that for all x > N_c, |μ(x)| < 1/x^c.
Equivalently, we may also use the following definition.
A function μ:N→R is negligible if for every positive polynomial poly(·) there exists an integer N_poly > 0 such that for all x > N_poly, |μ(x)| < 1/poly(x).
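As a concrete numerical illustration (not a proof) of the first definition, the following Python sketch searches for the threshold N_c beyond which a candidate function stays below 1/x^c; the helper name and the search limit are arbitrary choices for the example.

```python
def last_violation(f, c, limit=10_000):
    """Largest x <= limit with f(x) >= 1/x**c; beyond it, f stays below 1/x**c."""
    worst = 0
    for x in range(1, limit + 1):
        if f(x) >= 1.0 / x ** c:
            worst = x
    return worst

negligible = lambda x: 2.0 ** -x      # decays faster than any inverse polynomial
non_negligible = lambda x: 1.0 / x    # an inverse polynomial itself

for c in (1, 2, 5, 10):
    print(c, last_violation(negligible, c), last_violation(non_negligible, c))
# For 2**-x each exponent c yields a small, finite threshold; for 1/x the result
# is always the search limit for every c >= 1, reflecting that no finite N_c
# exists, so 1/x is not negligible.
```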
History:
The concept of negligibility can be traced back to sound models of analysis. Though the concepts of "continuity" and "infinitesimal" became important in mathematics during Newton and Leibniz's time (1680s), they were not well defined until the late 1810s. The first reasonably rigorous definition of continuity in mathematical analysis was due to Bernard Bolzano, who wrote the modern definition of continuity in 1817. Later Cauchy, Weierstrass and Heine also defined continuity as follows (with all numbers in the real number domain R): (Continuous function) A function f:R→R is continuous at x = x_0 if for every ε > 0, there exists a positive number δ > 0 such that |x − x_0| < δ implies |f(x) − f(x_0)| < ε.
History:
This classic definition of continuity can be transformed into the definition of negligibility in a few steps by changing the parameters used in the definition. First, in the case x_0 = ∞ with f(x_0) = 0, we must define the concept of an "infinitesimal function": (Infinitesimal) A continuous function μ:R→R is infinitesimal (as x goes to infinity) if for every ε > 0 there exists N_ε such that for all x > N_ε, |μ(x)| < ε.
History:
Next, we replace ε > 0 by the functions 1/x^c where c > 0, or by 1/poly(x) where poly(x) is a positive polynomial. This leads to the definitions of negligible functions given at the top of this article. Since the constants ε > 0 can be expressed as 1/poly(x) with a constant polynomial, this shows that negligible functions are a subset of the infinitesimal functions.
Use in cryptography:
In complexity-based modern cryptography, a security scheme is provably secure if the probability of security failure (e.g., inverting a one-way function, distinguishing cryptographically strong pseudorandom bits from truly random bits) is negligible in terms of the input x = cryptographic key length n . Hence comes the definition at the top of the page because key length n must be a natural number.
Use in cryptography:
Nevertheless, the general notion of negligibility doesn't require that the input parameter x is the key length n . Indeed, x can be any predetermined system metric and corresponding mathematical analysis would illustrate some hidden analytical behaviors of the system.
Use in cryptography:
The reciprocal-of-polynomial formulation is used for the same reason that computational boundedness is defined as polynomial running time: it has mathematical closure properties that make it tractable in the asymptotic setting (see the closure properties below). For example, if an attack succeeds in violating a security condition only with negligible probability, and the attack is repeated a polynomial number of times, the success probability of the overall attack still remains negligible.
Use in cryptography:
In practice one might want to have more concrete functions bounding the adversary's success probability and to choose the security parameter large enough that this probability is smaller than some threshold, say 2^−128.
Closure properties:
One of the reasons that negligible functions are used in foundations of complexity-theoretic cryptography is that they obey closure properties. Specifically:
If f, g:N→R are negligible, then the function x ↦ f(x) + g(x) is negligible.
If f:N→R is negligible and p is any real polynomial, then the function x ↦ p(x)·f(x) is negligible. Conversely, if f:N→R is not negligible, then neither is x ↦ f(x)/p(x) for any real polynomial p.
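The polynomial-multiplication property follows in one line from the 1/x^c form of the definition; the bound x^(d+1) on |p| below is a convenient over-estimate chosen for this sketch, not a tight one.

```latex
\text{Let } \deg p = d \text{ and fix } c > 0.
\text{ For all sufficiently large } x,\ |p(x)| \le x^{d+1},
\text{ and negligibility of } f \text{ gives } |f(x)| < x^{-(c+d+1)}.
\text{Hence } |p(x)\,f(x)| \le x^{d+1} \cdot x^{-(c+d+1)} = x^{-c},
\text{ so } x \mapsto p(x)\,f(x) \text{ is negligible.}
```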
Examples:
n ↦ a^−n is negligible for any a ≥ 2; for example, f(n) = 3^−n is negligible.
f(n) = 1/n^(log n) is negligible.
f(n) = 1/n^(log log n) is negligible.
f(n) = 1/(n^c log n) is not negligible, for positive c. Assuming n > 0, we take the limit as n → ∞. Negligible: f(n) = 1/n^(n/2), f(n) = 1/n^(log(n^k)) for k ≥ 1, f(n) = 1/n^((log n)^k) for k ≥ 1, f(n) = 1/n^n. Non-negligible: f(n) = 1/n, f(n) = 1/(n log n). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Log-polar coordinates**
Log-polar coordinates:
In mathematics, log-polar coordinates (or logarithmic polar coordinates) is a coordinate system in two dimensions, where a point is identified by two numbers, one for the logarithm of the distance to a certain point, and one for an angle. Log-polar coordinates are closely connected to polar coordinates, which are usually used to describe domains in the plane with some sort of rotational symmetry. In areas like harmonic and complex analysis, the log-polar coordinates are more canonical than polar coordinates.
Definition and coordinate transformations:
Log-polar coordinates in the plane consist of a pair of real numbers (ρ,θ), where ρ is the logarithm of the distance between a given point and the origin and θ is the angle between a line of reference (the x-axis) and the line through the origin and the point. The angular coordinate is the same as for polar coordinates, while the radial coordinate is transformed according to the rule r = e^ρ, where r is the distance to the origin. The formulas for transformation from Cartesian coordinates to log-polar coordinates are given by ρ = ln √(x² + y²) and θ = atan2(y, x).
Definition and coordinate transformations:
and the formulas for transformation from log-polar to Cartesian coordinates are x = e^ρ cos θ and y = e^ρ sin θ.
By using complex numbers (x, y) = x + iy, the latter transformation can be written as x + iy = e^(ρ+iθ), i.e. the complex exponential function. From this it follows that basic equations in harmonic and complex analysis will have the same simple form as in Cartesian coordinates. This is not the case for polar coordinates.
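A minimal Python sketch of these transformations, using the complex logarithm and exponential exactly as in the identity x + iy = e^(ρ+iθ) (the function names are illustrative):

```python
import cmath

def to_log_polar(x: float, y: float) -> tuple[float, float]:
    """(x, y) -> (rho, theta): rho = ln(distance to origin), theta = angle."""
    w = cmath.log(complex(x, y))        # log(x + iy) = ln|x + iy| + i*arg(x + iy)
    return w.real, w.imag

def to_cartesian(rho: float, theta: float) -> tuple[float, float]:
    """(rho, theta) -> (x, y) via x + iy = exp(rho + i*theta)."""
    z = cmath.exp(complex(rho, theta))
    return z.real, z.imag

print(to_log_polar(1.0, 1.0))                  # (ln sqrt(2), pi/4) ≈ (0.3466, 0.7854)
print(to_cartesian(*to_log_polar(3.0, -4.0)))  # ≈ (3.0, -4.0), round trip
```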
Some important equations in log-polar coordinates:
Laplace's equation: Laplace's equation in two dimensions is given by ∂²u/∂x² + ∂²u/∂y² = 0 in Cartesian coordinates. Writing the same equation in polar coordinates gives the more complicated equation r ∂/∂r (r ∂u/∂r) + ∂²u/∂θ² = 0, or equivalently (r ∂/∂r)²u + ∂²u/∂θ² = 0. However, from the relation r = e^ρ it follows that r ∂/∂r = ∂/∂ρ, so Laplace's equation in log-polar coordinates, ∂²u/∂ρ² + ∂²u/∂θ² = 0, has the same simple expression as in Cartesian coordinates. This is true for all coordinate systems where the transformation to Cartesian coordinates is given by a conformal mapping. Thus, when considering Laplace's equation for a part of the plane with rotational symmetry, e.g. a circular disk, log-polar coordinates are the natural choice.
Some important equations in log-polar coordinates:
Cauchy–Riemann equations: A similar situation arises when considering analytic functions. An analytic function f(x, y) = u(x, y) + iv(x, y) written in Cartesian coordinates satisfies the Cauchy–Riemann equations: ∂u/∂x = ∂v/∂y, ∂u/∂y = −∂v/∂x. If the function instead is expressed in polar form f(re^(iθ)) = Re^(iΦ), the Cauchy–Riemann equations take the more complicated form r ∂(log R)/∂r = ∂Φ/∂θ, ∂(log R)/∂θ = −r ∂Φ/∂r. Just as in the case with Laplace's equation, the simple form of Cartesian coordinates is recovered by changing polar into log-polar coordinates (let P = log R): ∂P/∂ρ = ∂Φ/∂θ, ∂P/∂θ = −∂Φ/∂ρ. The Cauchy–Riemann equations can also be written as one single equation, (∂/∂x + i ∂/∂y) f(x + iy) = 0. By expressing ∂/∂x and ∂/∂y in terms of ∂/∂ρ and ∂/∂θ, this equation can be written in the equivalent form (∂/∂ρ + i ∂/∂θ) f(e^(ρ+iθ)) = 0. Euler's equation: When one wants to solve the Dirichlet problem in a domain with rotational symmetry, the usual thing to do is to use the method of separation of variables for partial differential equations applied to Laplace's equation in polar form. This means writing u(r, θ) = R(r)Θ(θ). Laplace's equation is then separated into two ordinary differential equations, Θ″(θ) + ν²Θ(θ) = 0 and r²R″(r) + rR′(r) − ν²R(r) = 0, where ν is a constant. The first of these has constant coefficients and is easily solved. The second is a special case of Euler's equation r²R″(r) + crR′(r) + dR(r) = 0, where c, d are constants. This equation is usually solved by the ansatz R(r) = r^λ, but through use of the log-polar radius it can be changed into an equation with constant coefficients: P″(ρ) + (c − 1)P′(ρ) + dP(ρ) = 0. When considering Laplace's equation, c = 1 and d = −ν², so the equation for R takes the simple form P″(ρ) − ν²P(ρ) = 0. When solving the Dirichlet problem in Cartesian coordinates, these are exactly the equations for x and y. Thus, once again the natural choice for a domain with rotational symmetry is not polar, but rather log-polar, coordinates.
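The change of variable behind the last step can be spelled out as follows; this is a short derivation consistent with r = e^ρ above, with intermediate identities added here for clarity.

```latex
\text{Let } P(\rho) = R(e^{\rho}) = R(r), \text{ so that }
P'(\rho) = r R'(r), \qquad P''(\rho) = r R'(r) + r^{2} R''(r).
\text{Then } r^{2}R''(r) + c\,r R'(r) + d\,R(r)
  = \bigl(P''(\rho) - P'(\rho)\bigr) + c\,P'(\rho) + d\,P(\rho)
  = P''(\rho) + (c-1)\,P'(\rho) + d\,P(\rho).
```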
Discrete geometry:
In order to solve a PDE numerically in a domain, a discrete coordinate system must be introduced in this domain. If the domain has rotational symmetry and you want a grid consisting of rectangles, polar coordinates are a poor choice, since in the center of the circle they give rise to triangles rather than rectangles. However, this can be remedied by introducing log-polar coordinates in the following way. Divide the plane into a grid of squares with side length 2π/n, where n is a positive integer. Use the complex exponential function to create a log-polar grid in the plane. The left half-plane is then mapped onto the unit disc, with the number of radii equal to n. It can be even more advantageous to instead map the diagonals in these squares, which gives a discrete coordinate system in the unit disc consisting of spirals, see the figure to the right.
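A rough Python sketch of this construction (the grid parameters and function name are my own choices for the example): sample a square lattice of spacing 2π/n in the left half-plane and push it through the complex exponential, which produces n equally spaced radii of points inside the unit disc.

```python
import cmath

def log_polar_grid(n: int, depth: int):
    """Grid points exp(rho + i*theta) with rho = -k*h <= 0 and theta = j*h, h = 2*pi/n."""
    h = 2 * cmath.pi / n
    points = []
    for k in range(depth):          # rho = 0, -h, -2h, ... keeps the points in the unit disc
        for j in range(n):          # n equally spaced angles
            points.append(cmath.exp(complex(-k * h, j * h)))
    return points

pts = log_polar_grid(n=16, depth=10)
print(len(pts), max(abs(p) for p in pts))   # 160 points, all with |z| <= 1
```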
Discrete geometry:
Dirichlet-to-Neumann operator The latter coordinate system is for instance suitable for dealing with Dirichlet and Neumann problems. If the discrete coordinate system is interpreted as an undirected graph in the unit disc, it can be considered as a model for an electrical network. To every line segment in the graph is associated a conductance given by a function γ . The electrical network will then serve as a discrete model for the Dirichlet problem in the unit disc, where the Laplace equation takes the form of Kirchhoff's law. On the nodes on the boundary of the circle, an electrical potential (Dirichlet data) is defined, which induces an electric current (Neumann data) through the boundary nodes. The linear operator Λγ from Dirichlet data to Neumann data is called a Dirichlet-to-Neumann operator, and depends on the topology and conductance of the network.
Discrete geometry:
In the case with the continuous disc, it follows that if the conductance is homogeneous, let's say γ = 1 everywhere, then the Dirichlet-to-Neumann operator satisfies the following equation: Λ_γ² + ∂²/∂θ² = 0. In order to get a good discrete model of the Dirichlet problem, it would be useful to find a graph in the unit disc whose (discrete) Dirichlet-to-Neumann operator has the same property. Even though polar coordinates don't give us any answer, this is, approximately and asymptotically, what the rotationally symmetric network given by log-polar coordinates provides.
Discrete geometry:
Image analysis: Already at the end of the 1970s, applications for the discrete spiral coordinate system were given in image analysis (image registration). Representing an image in this coordinate system rather than in Cartesian coordinates gives computational advantages when rotating or zooming in on an image. Also, the photoreceptors in the retina of the human eye are distributed in a way that has strong similarities with the spiral coordinate system. It can also be found in the Mandelbrot fractal (see picture to the right).
Discrete geometry:
Log-polar coordinates can also be used to construct fast methods for the Radon transform and its inverse. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Excisive triad**
Excisive triad:
In topology, a branch of mathematics, an excisive triad is a triple (X;A,B) of topological spaces such that A, B are subspaces of X and X is the union of the interior of A and the interior of B. Note B is not required to be a subspace of A. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Strip the willow**
Strip the willow:
Strip the willow is a country or barn dance. It has variations depending upon whether it is being performed as a movement in a larger dance or a complete dance in itself.
The form described here is that commonly used as part of a Scottish country dance.
Strip the willow:
The dancers form a longways set (a row of gentlemen facing their partners, a row of ladies) of four couples. The 'objective' is to move the top couple to the bottom of the set, with the other couples moving up one position. A brief description of the dance: The top couple link arms and spin each other for a count of 16, at which point the lady 'strips' down the line of men, alternating left-handed anti-clockwise swings with each of the other gentlemen and right-handed clockwise half-turn swings with her own partner, working steadily down the set; the gentleman at this point swings only with his partner. At the bottom, the couple join again and spin for a count of 8, then the gentleman 'strips' up the line of ladies in the same way his partner just did, while the lady swings only with him. At the top of the set, the couple join together and swing for a count of 8, then together they 'strip' down to the bottom, alternately swinging the other partners down the line and meeting to swing each other in between. At the bottom they meet one last time to swing for 8 beats, while the next top couple meet and swing for 16 and follow the steps above.
Strip the willow:
Thus, if the set from top to bottom is couples Aa, Bb, Cc, Dd (lower case ladies, upper case gentlemen), the movements are: (down) Clockwise whole turn A with a for 16 beats.
Anticlockwise half turn a with B.
Clockwise half turn A with a.
Anticlockwise half turn a with C.
Clockwise half turn A with a.
Anticlockwise half turn a with D.
Clockwise whole turn A with a for 8 beats. (up) Anticlockwise half turn A with d.
Clockwise half turn A with a.
Anticlockwise half turn A with c.
Clockwise half turn A with a.
Anticlockwise half turn A with b.
Clockwise whole turn A with a for 8 beats. (down) Anticlockwise half turn A with b and a with B.
Clockwise half turn A with a.
Anticlockwise half turn A with c and a with C.
Clockwise half turn A with a.
Anticlockwise half turn A with d and a with D.
Clockwise whole turn A with a for 8 beats. The sets can be as long as the music allows.
Variations include: Multiple willow stripping, best done in long sets, with every fourth or fifth couple stripping downwards and everyone else constantly moving upwards. Once a couple reach the top, they wait for the appropriate bar and start another movement. This is called 'Orcadian Strip The Willow'.
Music:
Under the title "Drops of Brandy", the "tunearch" website has 10 transcriptions of the melody. The earliest, dated to 1734, was in manuscript format in David Young's "Drummond Castle/Duke of Perth Manuscript". A more famous collection, O'Neill's "Dance Music of Ireland: 1001 Gems" (1907), lists it as no. 448. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DOS**
DOS:
DOS (, ) is a family of disk-based operating systems for IBM PC compatible computers. The DOS family primarily consists of Microsoft's MS-DOS and a rebranded version, IBM PC DOS, both of which were introduced in 1981. Later compatible systems from other manufacturers include DR-DOS (1988), ROM-DOS (1989), PTS-DOS (1993), and FreeDOS (1998). MS-DOS dominated the IBM PC compatible market between 1981 and 1995.
DOS:
Although the name has come to be identified specifically with this particular family of operating systems, DOS is a platform-independent acronym for disk operating system, whose use predates the IBM PC. Dozens of other operating systems also use the acronym, beginning with the mainframe DOS/360 from 1966. Others include Apple DOS, Apple ProDOS, Atari DOS, Commodore DOS, TRSDOS, and AmigaDOS.
History:
Origins IBM PC DOS (and the separately sold MS-DOS) and its predecessor, 86-DOS, ran on Intel 8086 16-bit processors. It was developed to be similar to Digital Research's CP/M—the dominant disk operating system for 8-bit Intel 8080 and Zilog Z80 microcomputers—in order to simplify porting CP/M applications to MS-DOS.
History:
When IBM introduced the IBM PC, built with the Intel 8088 microprocessor, they needed an operating system. Chairman John Opel had a conversation with fellow United Way National Board Executive Committee member Mary Maxwell Gates, who referred Opel to her son Bill Gates for help with an 8088-compatible build of CP/M. IBM was then sent to Digital Research, and a meeting was set up. However, initial negotiations for the use of CP/M broke down: Digital Research wished to sell CP/M on a royalty basis, while IBM sought a single license, and to change the name to "PC DOS". Digital Research founder Gary Kildall refused, and IBM withdrew.
History:
IBM again approached Bill Gates. Gates in turn approached Seattle Computer Products. There, programmer Tim Paterson had developed a variant of CP/M-80, intended as an internal product for testing SCP's new 16-bit Intel 8086 CPU card for the S-100 bus. The system was initially named QDOS (Quick and Dirty Operating System), before being made commercially available as 86-DOS. Microsoft purchased 86-DOS, allegedly for US$50,000. This became Microsoft Disk Operating System, MS-DOS, introduced in 1981.
History:
Within a year Microsoft licensed MS-DOS to over 70 other companies, which supplied the operating system for their own hardware, sometimes under their own names. Microsoft later required the use of the MS-DOS name, with the exception of the IBM variant. IBM continued to develop their version, PC DOS, for the IBM PC. Digital Research became aware that an operating system similar to CP/M was being sold by IBM (under the same name that IBM insisted upon for CP/M), and threatened legal action. IBM responded by offering an agreement: they would give PC consumers a choice of PC DOS or CP/M-86, Kildall's 8086 version. Side-by-side, CP/M cost US$200 more than PC DOS, and sales were low. CP/M faded, with MS-DOS and PC DOS becoming the marketed operating system for PCs and PC compatibles. Microsoft originally sold MS-DOS only to original equipment manufacturers (OEMs). One major reason for this was that not all early PCs were 100% IBM PC compatible. DOS was structured such that there was a separation between the system specific device driver code (IO.SYS) and the DOS kernel (MSDOS.SYS). Microsoft provided an OEM Adaptation Kit (OAK) which allowed OEMs to customize the device driver code to their particular system. By the early 1990s, most PCs adhered to IBM PC standards so Microsoft began selling a retail version of MS-DOS, starting with MS-DOS 5.0.
History:
In the mid-1980s, Microsoft developed a multitasking version of DOS. This version of DOS is generally referred to as "European MS-DOS 4" because it was developed for ICL and licensed to several European companies. This version of DOS supports preemptive multitasking, shared memory, device helper services and New Executable ("NE") format executables. None of these features were used in later versions of DOS, but they were used to form the basis of the OS/2 1.0 kernel. This version of DOS is distinct from the widely released PC DOS 4.0 which was developed by IBM and based upon DOS 3.3.
History:
Digital Research attempted to regain the market lost from CP/M-86, initially with Concurrent DOS, FlexOS and DOS Plus (all compatible with both MS-DOS and CP/M-86 software), later with Multiuser DOS (compatible with both MS-DOS and CP/M-86 software) and DR DOS (compatible with MS-DOS software). Digital Research was bought by Novell, and DR DOS became PalmDOS and Novell DOS; later, it was part of Caldera (under the names OpenDOS and DR-DOS 7.02/7.03), Lineo, and DeviceLogics.
History:
Gordon Letwin wrote in 1995 that "DOS was, when we first wrote it, a one-time throw-away product intended to keep IBM happy so that they'd buy our languages." Microsoft expected that it would be an interim solution before Xenix. The company planned to improve MS-DOS over time, so it would be almost indistinguishable from single-user Xenix, or XEDOS, which would also run on the Motorola 68000, Zilog Z-8000, and LSI-11; they would be upwardly compatible with Xenix, which BYTE in 1983 described as "the multi-user MS-DOS of the future".
History:
IBM, however, did not want to replace DOS. After AT&T began selling Unix, Microsoft and IBM began developing OS/2 as an alternative. The two companies later had a series of disagreements over two successor operating systems to DOS, OS/2 and Windows. They split development of their DOS systems as a result. The last retail version of MS-DOS was MS-DOS 6.22; after this, MS-DOS became part of Windows 95, 98 and Me. The last retail version of PC DOS was PC DOS 2000 (also called PC DOS 7 revision 1), though IBM did later develop PC DOS 7.10 for OEMs and internal use.
History:
The FreeDOS project began on 26 June 1994, when Microsoft announced it would no longer sell or support MS-DOS. Jim Hall then posted a manifesto proposing the development of an open-source replacement. Within a few weeks, other programmers including Pat Villani and Tim Norman joined the project. A kernel, the COMMAND.COM command line interpreter (shell), and core utilities were created by pooling code they had written or found available. There were several official pre-release distributions of FreeDOS before the FreeDOS 1.0 distribution was released on 3 September 2006. Made available under the GNU General Public License (GPL), FreeDOS does not require license fees or royalties.
History:
Decline Early versions of Microsoft Windows ran on MS-DOS. By the early 1990s, the Windows graphical shell saw heavy use on new DOS systems. In 1995, Windows 95 was bundled as a standalone operating system that did not require a separate DOS license. Windows 95 (and Windows 98 and ME, that followed it) took over as the default OS kernel, though the MS-DOS component remained for compatibility. With Windows 95 and 98, but not ME, the MS-DOS component could be run without starting Windows. With DOS no longer required to use Windows, the majority of users stopped using it directly.
History:
Continued use As of 2023, available compatible systems are FreeDOS, ROM-DOS, PTS-DOS, RxDOS and REAL/32. Some computer manufacturers, including Dell and HP, sell computers with FreeDOS as an OEM operating system.
Embedded systems DOS's structure of accessing hardware directly allows it to be used in embedded devices. The final versions of DR-DOS are still aimed at this market. ROM-DOS is used as operating system for the Canon PowerShot Pro 70.
Emulation On Linux, it is possible to run DOSEMU, a Linux-native virtual machine for running DOS programs at near native speed. There are a number of other emulators for running DOS on various versions of Unix and Microsoft Windows such as DOSBox. DOSBox is designed for legacy gaming (e.g. King's Quest, Doom) on modern operating systems.
Design:
MS-DOS and IBM PC DOS related operating systems are commonly associated with machines using the Intel x86 or compatible CPUs, mainly IBM PC compatibles. Machine-dependent versions of MS-DOS were produced for many non-IBM-compatible x86-based machines, with variations from relabelling of the Microsoft distribution under the manufacturer's name, to versions specifically designed to work with non-IBM-PC-compatible hardware. As long as application programs used DOS APIs instead of direct hardware access, they could run on both IBM-PC-compatible and incompatible machines. The original FreeDOS kernel, DOS-C, was derived from DOS/NT for the Motorola 68000 series of CPUs in the early 1990s. While these systems loosely resembled the DOS architecture, applications were not binary compatible due to the incompatible instruction sets of these non-x86-CPUs. However, applications written in high-level languages could be ported easily.
Design:
DOS is a single-user, single-tasking operating system with basic kernel functions that are non-reentrant: only one program at a time can use them, and DOS itself has no functionality to allow more than one program to execute at a time. The DOS kernel provides various functions for programs (an application program interface), like character I/O, file management, memory management, program loading and termination.
Design:
DOS provides the ability for shell scripting via batch files (with the filename extension .BAT). Each line of a batch file is interpreted as a program to run. Batch files can also make use of internal commands, such as GOTO and conditional statements. The operating system offers an application programming interface that allows development of character-based applications, but not for accessing most of the hardware, such as graphics cards, printers, or mice. This required programmers to access the hardware directly, usually resulting in each application having its own set of device drivers for each hardware peripheral. Hardware manufacturers would release specifications to ensure device drivers for popular applications were available.
Design:
Boot sequence The bootstrap loader on PC-compatible computers, the master boot record, is located beginning at the boot sector, the first sector on the first track (track zero), of the boot disk. The ROM BIOS will load this sector into memory at address 0000h:7C00h, and typically check for a signature "55h AAh" at offset +1FEh. If the sector is not considered to be valid, the ROM BIOS will try the next physical disk in the row, otherwise it will jump to the load address with certain registers set up.
Design:
If the loaded boot sector happens to be a Master Boot Record (MBR), as found on partitioned media, it will relocate itself to 0000h:0600h in memory, otherwise this step is skipped. The MBR code will scan the partition table, which is located within this sector, for an active partition (modern MBRs check if bit 7 is set at offset +1BEh+10h*n, whereas old MBRs simply check for a value of 80h), and, if found, load the first sector of the corresponding partition, which holds the Volume Boot Record (VBR) of that volume, into memory at 0000h:7C00h in the similar fashion as if it had been loaded by the ROM BIOS itself. The MBR will then pass execution to the loaded portion with certain registers set up.
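To make the offsets concrete, here is a small Python sketch that applies the checks described above to a 512-byte sector image; the file name and function are hypothetical, and only the 1FEh signature and the 1BEh + 10h·n partition-entry layout from the text are assumed.

```python
def parse_mbr(sector: bytes):
    """Check the boot signature and list which MBR partition entries are marked active."""
    if len(sector) != 512:
        raise ValueError("expected one 512-byte sector")
    if sector[0x1FE] != 0x55 or sector[0x1FF] != 0xAA:
        raise ValueError("missing 55h AAh boot signature at offset 1FEh")
    active = []
    for n in range(4):                                   # four 16-byte entries at 1BEh
        entry = sector[0x1BE + 0x10 * n : 0x1BE + 0x10 * (n + 1)]
        if entry[0] & 0x80:                              # bit 7 of the status byte = active
            active.append(n)
    return active

with open("mbr.img", "rb") as f:                         # hypothetical sector image
    print("active partition entries:", parse_mbr(f.read(512)))
```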
Design:
The sector content loaded at 0000h:7C00h constitutes a VBR now. VBRs are operating system specific and cannot be exchanged between different DOS versions in general, as the exact behaviour differs between different DOS versions. In very old versions of DOS such as DOS 1.x, the VBR would load the whole IO.SYS/IBMBIO.COM file into memory at 0000h:0600h. For this to work, these sectors had to be stored in consecutive order on disk by SYS. In later issues, it would locate and store the contents of the first two entries in the root directory at 0000h:0500h and if they happen to reflect the correct boot files as recorded in the VBR, the VBR would load the first 3 consecutive sectors of the IO.SYS/IBMBIO.COM file into memory at 0070h:0000h. The VBR also has to take care to preserve the contents of the Disk Parameter Table (DPT). Finally, it passes control to the loaded portion by jumping to its entry point with certain registers set up (with considerable differences between different DOS versions).
Design:
In later DOS versions, where the VBR has loaded only the first 3 sectors of the IO.SYS/IBMBIO.COM file into memory, the loaded portion contains another boot loader, which will then load the remainder of itself into memory, using the root directory information stored at 0000h:0500h. For most versions, the file contents still need to be stored in consecutive order on disk. In older versions of DOS, which were still loaded as a whole, this step is skipped.
Design:
The DOS system initialization code will initialize its built-in device drivers and then load the DOS kernel, located in MSDOS.SYS on MS-DOS systems, into memory as well. In Windows 9x, the DOS system initialization code and built-in device drivers and the DOS kernel are combined into a single IO.SYS file while MSDOS.SYS is used as a text configuration file.
The CONFIG.SYS file is then read to parse configuration parameters. The SHELL variable specifies the location of the shell which defaults to COMMAND.COM.
The shell is loaded and executed.
Design:
The startup batch file AUTOEXEC.BAT is then run by the shell. The DOS system files loaded by the boot sector must be contiguous and be the first two directory entries. As such, removing or adding these files is likely to render the media unbootable. It is, however, possible to replace the shell at will, a method that can be used to start the execution of dedicated applications faster.
Design:
This limitation does not apply to any version of DR DOS, where the system files can be located anywhere in the root directory and do not need to be contiguous. Therefore, system files can be simply copied to a disk provided that the boot sector is DR DOS compatible already.
In PC DOS and DR DOS 5.0 and above, the DOS system files are named IBMBIO.COM instead of IO.SYS and IBMDOS.COM instead of MSDOS.SYS. Older versions of DR DOS used DRBIOS.SYS and DRBDOS.SYS instead.
Starting with MS-DOS 7.0 the binary system files IO.SYS and MSDOS.SYS were combined into a single file IO.SYS whilst MSDOS.SYS became a configuration file similar to CONFIG.SYS and AUTOEXEC.BAT. If the MSDOS.SYS BootGUI directive is set to 0, the boot process will stop with the command processor (typically COMMAND.COM) loaded, instead of executing WIN.COM automatically.
Design:
File system DOS uses a filesystem which supports 8.3 filenames: 8 characters for the filename and 3 characters for the extension. Starting with DOS 2, hierarchical directories are supported. Each directory name is also in 8.3 format, but the maximum directory path length is 64 characters due to the internal current directory structure (CDS) tables that DOS maintains. Including the drive name, the maximum length of a fully qualified filename that DOS supports is 80 characters using the format drive:\path\filename.ext followed by a null byte.
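A simplified Python check of the 8.3 length rule described above; it validates only the lengths, not the characters DOS actually forbids, and the function name is illustrative.

```python
def is_valid_83(name: str) -> bool:
    """True if name fits the 8.3 scheme: up to 8 base characters, up to 3 extension characters."""
    parts = name.split(".")
    if len(parts) == 1:
        base, ext = parts[0], ""
    elif len(parts) == 2:
        base, ext = parts
    else:
        return False          # more than one dot never fits 8.3
    return 1 <= len(base) <= 8 and len(ext) <= 3

print(is_valid_83("AUTOEXEC.BAT"), is_valid_83("LONGFILENAME.TXT"))  # True False
```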
Design:
DOS uses the File Allocation Table (FAT) filesystem. This was originally FAT12 which supported up to 4078 clusters per drive. DOS 3.0 added support for FAT16 which used 16-bit allocation entries and supported up to 65518 clusters per drive. Compaq MS-DOS 3.31 added support for FAT16B which removed the 32‑MiB drive limit and could support up to 512 MiB. Finally MS-DOS 7.1 (the DOS component of Windows 9x) added support for FAT32 which used 32-bit allocation entries and could support hard drives up to 137 GiB and beyond.
Design:
Starting with DOS 3.1, file redirector support was added to DOS. This was initially used to support networking but was later used to support CD-ROM drives with MSCDEX. IBM PC DOS 4.0 also had preliminary installable file system (IFS) support but this was unused and removed in DOS 5.0. DOS also supported Block Devices ("Disk Drive" devices) loaded from CONFIG.SYS that could be used under the DOS file system to support network devices.
Design:
Drive naming scheme In DOS, drives are referred to by identifying letters. Standard practice is to reserve "A" and "B" for floppy drives. On systems with only one floppy drive, DOS assigns both letters to the drive, prompting the user to swap disks as programs alternate access between them. This facilitates copying from floppy to floppy or having a program run from one floppy while accessing its data on another. Hard drives were originally assigned the letters "C" and "D". DOS could only support one active partition per drive. As support for more hard drives became available, this developed into first assigning a drive letter to each drive's active primary partition, then making a second pass over the drives to allocate letters to logical drives in the extended partition, then a third pass to give any other non-active primary partitions their names (where such additional partitions existed and contained a DOS-supported file system). Lastly, DOS allocates letters for optical disc drives, RAM disks, and other hardware. Letter assignments usually occur in the order the drivers are loaded, but the drivers can instruct DOS to assign a different letter; drivers for network drives, for example, typically assign letters nearer to the end of the alphabet. Because DOS applications use these drive letters directly (unlike the /dev directory in Unix-like systems), they can be disrupted by adding new hardware that needs a drive letter. An example is the addition of a new hard drive having a primary partition where a pre-existing hard drive contains logical drives in extended partitions; the new drive will be assigned a letter that was previously assigned to one of the extended partition logical drives. Moreover, even adding a new hard drive having only logical drives in an extended partition would still disrupt the letters of RAM disks and optical drives. This problem persisted through Microsoft's DOS-based 9x versions of Windows until they were replaced by versions based on the NT line, which preserves the letters of existing drives until the user changes them. Under DOS, this problem can be worked around by defining a SUBST drive and installing the DOS program into this logical drive. The assignment of this drive would then be changed in a batch job whenever the application starts. Under some versions of Concurrent DOS, as well as under Multiuser DOS, System Manager and REAL/32, the reserved drive letter L: will automatically be assigned to the corresponding load drive whenever an application starts.
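The pass order described above can be illustrated with a toy Python model; the disk descriptions and function name are invented for the example, and real DOS works from BIOS and driver data rather than dictionaries.

```python
def assign_letters(disks, other_devices=()):
    """Toy model of the DOS letter passes: active primaries, then logicals, then other primaries, then devices."""
    letters = iter("CDEFGHIJKLMNOPQRSTUVWXYZ")    # A: and B: are reserved for floppies
    assignment = {}
    for disk in disks:                             # pass 1: active primary partitions
        for part in disk.get("primaries", []):
            if part.get("active"):
                assignment[next(letters)] = part["name"]
    for disk in disks:                             # pass 2: logical drives in extended partitions
        for part in disk.get("logicals", []):
            assignment[next(letters)] = part["name"]
    for disk in disks:                             # pass 3: remaining (non-active) primary partitions
        for part in disk.get("primaries", []):
            if not part.get("active"):
                assignment[next(letters)] = part["name"]
    for dev in other_devices:                      # finally: optical drives, RAM disks, ...
        assignment[next(letters)] = dev
    return assignment

disks = [
    {"primaries": [{"name": "disk0-primary", "active": True}],
     "logicals": [{"name": "disk0-logical1"}, {"name": "disk0-logical2"}]},
    {"primaries": [{"name": "disk1-primary", "active": True}], "logicals": []},
]
print(assign_letters(disks, other_devices=["cdrom", "ramdisk"]))
# {'C': 'disk0-primary', 'D': 'disk1-primary', 'E': 'disk0-logical1',
#  'F': 'disk0-logical2', 'G': 'cdrom', 'H': 'ramdisk'}
```

Adding a second disk with an active primary partition shifts the letters of the first disk's logical drives, which is exactly the disruption the paragraph above describes.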
Design:
Reserved device names There are reserved device names in DOS that cannot be used as filenames regardless of extension, as they are occupied by built-in character devices. These restrictions also affect several Windows versions, in some cases causing crashes and security vulnerabilities. The reserved names are: COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9 (serial communication ports); CON, for console; LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, LPT9 (line printers); AUX, for auxiliary; PRN, for printer; and NUL, for null devices, added in 86-DOS 1.10 and PC DOS 1.0. These names (except for NUL) have continued to be supported in all versions of MS-DOS, PC DOS and DR-DOS ever since. LST was also available in some OEM versions of MS-DOS 1.25, whereas other OEM versions of MS-DOS 1.25 already used LPT1 (first line printer) and COM1 (first serial communication device) instead, as introduced with PC DOS. In addition to LPT1 and LPT2 as well as COM1 to COM3, Hewlett-Packard's OEM version of MS-DOS 2.11 for the HP Portable Plus also supported LST as alias for LPT2 and 82164A as alias for COM2; it also supported PLT for plotters. Otherwise, COM2, LPT2, LPT3 and the CLOCK$ (still named CLOCK in some issues of MS-DOS 2.11) clock device were introduced with DOS 2.0, and COM3 and COM4 were added with DOS 3.3. Only the multitasking MS-DOS 4 supported KEYBD$ and SCREEN$. DR DOS 5.0 and higher and Multiuser DOS support an $IDLE$ device for dynamic idle detection to save power and improve multitasking. LPT4 is an optional built-in driver for a fourth line printer supported in some versions of DR-DOS since 7.02. CONFIG$ constitutes the real mode PnP manager in MS-DOS 7.0–8.0.
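A small Python helper, based only on the name list above, that flags filenames whose base name collides with a built-in character device regardless of extension; the function is illustrative and not part of DOS.

```python
RESERVED = {"CON", "AUX", "PRN", "NUL",
            *(f"COM{i}" for i in range(1, 10)),
            *(f"LPT{i}" for i in range(1, 10))}

def is_reserved_dos_name(filename: str) -> bool:
    """True if the base name (path and extension stripped) is a reserved device name."""
    stem = filename.split("\\")[-1].split(".")[0]
    return stem.upper() in RESERVED

print(is_reserved_dos_name("NUL.txt"))      # True  -> refers to the null device
print(is_reserved_dos_name("report.txt"))   # False
```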
Design:
AUX typically defaults to COM1, and PRN to LPT1 (LST), but these defaults can be changed in some versions of DOS to point to other serial or parallel devices. The PLT device (present only in some HP OEM versions of MS-DOS) was reconfigurable as well. Filenames ending with a colon (:) such as NUL: conventionally indicate device names, but the colon is not actually a part of the name of the built-in device drivers, and in some cases it need not be typed at all. It is still possible to create files or directories using these reserved device names, such as through direct editing of directory data structures in disk sectors. Such naming, such as starting a file name with a space, has sometimes been used by viruses or hacking programs to obscure files from users who do not know how to access these locations.
Design:
Memory management DOS was designed for the Intel 8088 processor, which can only directly access a maximum of 1 MiB of RAM. Both IBM and Microsoft chose 640 kibibytes (KiB) as the maximum amount of memory available to programs and reserved the remaining 384 KiB for video memory, the read-only memory of adapters on some video and network peripherals, and the system's BIOS. By 1985, some DOS applications were already hitting the memory limit, while much of the reserved area was unused, depending on the machine's specifications. Specifications were developed to allow access to additional memory. The first, the Expanded Memory Specification (EMS), was designed to allow memory on an add-on card to be accessed via a 64 KiB page frame in the reserved upper memory area. 80386 and later systems could use a virtual 8086 (V86) mode memory manager like EMM386 to create expanded memory from extended memory without the need of an add-on card. The second specification was the Extended Memory Specification (XMS) for 80286 and later systems. This provided a way to copy data to and from extended memory, access to the 65,520-byte high memory area directly above the first megabyte of memory, and the upper memory block area. Generally XMS support was provided by HIMEM.SYS or a V86 mode memory manager like QEMM or 386MAX, which also supported EMS. Starting with DOS 5, DOS could directly take advantage of the HMA by loading its kernel code and disk buffers there via the DOS=HIGH statement in CONFIG.SYS. DOS 5+ also allowed the use of available upper memory blocks via the DOS=UMB statement in CONFIG.SYS.
Design:
DOS under OS/2 and Windows The DOS emulation in OS/2 and Windows runs in much the same way as native applications do. They can access all of the drives and services, and can even use the host's clipboard services. Because the drivers for file systems and so forth reside in the host system, the DOS emulation need only provide a DOS API translation layer which converts DOS calls to OS/2 or Windows system calls. The translation layer generally also converts BIOS calls and virtualizes common I/O port accesses which many DOS programs commonly use.
Design:
In Windows 3.1 and 9x, the DOS virtual machine is provided by WINOLDAP. WinOldAp creates a virtual machine based on the program's PIF file, and the system state when Windows was loaded. The DOS graphics mode, both character and graphic, can be captured and run in the window. DOS applications can use the Windows clipboard by accessing extra published calls in WinOldAp, and one can paste text through the WinOldAp graphics.
Design:
The emulated DOS in OS/2 and Windows NT is based upon DOS 5. Although there is a default configuration (config.sys and autoexec.bat), one can use alternate files on a session-by-session basis. It is possible to load drivers in these files to access the host system, although these are typically third-party.
Design:
Under OS/2 2.x and later, the DOS emulation is provided by DOSKRNL. This is a file that represents the combined IBMBIO.COM and IBMDOS.COM, and the system calls are passed through to the OS/2 windowing services. DOS programs run in their own environment; the bulk of the DOS utilities are provided by bound DOS/OS2 applications in the \OS2 directory. OS/2 can run Windows 3.1 applications by using a modified copy of Windows (Win-OS/2). The modifications allow Windows 3.1 programs to run seamlessly on the OS/2 desktop, or one can start a WinOS/2 desktop, similar to starting Windows from DOS.
Design:
OS/2 allows for 'DOS from Drive A:' (VMDISK). This is a real DOS, like MS-DOS 6.22 or PC DOS 5.00. One makes a bootable floppy disk of the DOS, adds a number of drivers from OS/2, and then creates a special image. The DOS booted this way has full access to the system, but provides its own drivers for hardware. One can use such a disk to access CD-ROM drives for which there is no OS/2 driver.
Design:
In all 32-bit (IA-32) editions of the Windows NT family since 1993, DOS emulation is provided by way of a virtual DOS machine (NTVDM). 64-bit (IA-64) versions of Windows do not support NTVDM and cannot run 16-bit DOS applications directly; third-party emulators such as DOSbox can be used to run DOS programs on those machines.
User interface:
DOS systems use a command-line interface. A program is started by entering its filename at the command prompt. DOS systems include utility programs and provide internal commands that do not correspond to programs. In an attempt to provide a more user-friendly environment, numerous software manufacturers wrote file management programs that provided users with menu- and/or icon-based interfaces. Microsoft Windows is a notable example, eventually becoming a self-contained program loader and replacing DOS as the most-used PC-compatible program loader. Text user interface programs included Norton Commander, DOS Navigator, Volkov Commander, Quarterdeck DESQview, and Sidekick. Graphical user interface programs included Digital Research's GEM (originally written for CP/M) and GEOS.
User interface:
Eventually, the manufacturers of major DOS systems began to include their own environment managers. MS-DOS/IBM DOS 4 included DOS Shell; DR DOS 5.0, released the following year, included ViewMAX, based upon GEM.
User interface:
Terminate and stay resident Although DOS is not a multitasking operating system, it does provide a terminate-and-stay-resident (TSR) function which allows programs to remain resident in memory. These programs can hook the system timer and/or keyboard interrupts to allow themselves to run tasks in the background or to be invoked at any time, preempting the current running program and effectively implementing a simple form of multitasking on a program-specific basis. The DOS PRINT command does this to implement background print spooling. Borland Sidekick, a popup personal information manager (PIM), also uses this technique.
User interface:
Terminate-and-stay-resident programs are also used to provide additional features not available by default. Programs like CED and DOSKEY provide command-line editing facilities beyond what is available in COMMAND.COM. Programs like the Microsoft CD-ROM Extensions (MSCDEX) provide access to files on CD-ROM disks.
User interface:
Some TSRs can even perform a rudimentary form of task switching. For example, the shareware program Back and Forth (1990) has a hotkey to save the state of the currently-running program to disk, load another program, and switch to it, making it possible to switch "back and forth" between programs (albeit slowly, due to the disk access required). Back and Forth could not enable background processing however; that needed DESQview (on at least a 386).
Software:
Arachne, a 16-bit graphical web browser
dBase, a database program
Harvard Graphics, a presentation graphics design program
Lotus 1-2-3, a spreadsheet which has been credited with the success of the IBM PC
Norton Commander and XTree, file management utilities
PKZIP, the utility that quickly became the standard in file compression
ProComm, Qmodem, and Telix, modem communication programs
Sidekick, a personal information manager that could be used from within other programs
WordPerfect, a word processor that was dominant in the 1980s
WordStar, a word processor originally for CP/M that became popular on the IBM PC
Development tools:
BASIC language interpreters: BASICA and GW-BASIC
DJGPP, the 32-bit DPMI DOS port of gcc
Microsoft Macro Assembler, Microsoft C, and CodeView from Microsoft
Watcom C/C++ from Watcom
Turbo Pascal, Turbo BASIC, Turbo C, Turbo Prolog, and Turbo Assembler from Borland | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ATLO**
ATLO:
In aerospace, Assembly, Test, and Launch Operations (ATLO), also known as Mission System Integration and Test (MSIT) is the phase of a spacecraft project that comprises building the spacecraft, testing it, and getting it launched. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rib cage**
Rib cage:
The rib cage is an endoskeletal enclosure in the thorax of most vertebrate animals that comprises the ribs, vertebral column and sternum, which protects vital organs such as the heart, lungs and great vessels. The circumferential enclosure formed by left and right rib cages, together known as the thoracic cage, is a semi-rigid bony and cartilaginous structure which surrounds the thoracic cavity and supports the shoulder girdles to form the core part of the axial skeleton.
Rib cage:
A typical human thoracic cage consists of 12 pairs of ribs and the adjoining costal cartilages, the sternum (along with the manubrium and xiphoid process), and the 12 thoracic vertebrae articulating with the ribs. The thoracic cage also provides attachments for extrinsic skeletal muscles of the neck, upper limbs, upper abdomen and back, and together with the overlying skin and associated fascia and muscles, makes up the thoracic wall.
Rib cage:
In tetrapods, the rib cage intrinsically holds the muscles of respiration (diaphragm, intercostal muscles, etc.) that are crucial for active inhalation and forced exhalation, and therefore has a major ventilatory function in the respiratory system.
Structure:
There are thirty-three vertebrae in the human vertebral column. The rib cage is associated with TH1−TH12. Ribs are described based on their location and connection with the sternum. All ribs are attached posteriorly to the thoracic vertebrae and are numbered accordingly one to twelve. Ribs that articulate directly with the sternum are called true ribs, whereas those that do not articulate directly are termed false ribs. The false ribs include the floating ribs (eleven and twelve) that are not attached to the sternum at all.
Structure:
Attachment The terms true ribs and false ribs describe rib pairs that are directly or indirectly attached to the sternum respectively. The first seven rib pairs known as the fixed or vertebrosternal ribs are the true ribs (Latin: costae verae) as they connect directly to the sternum via their own individual costal cartilages. The next five pairs (eighth to twelfth) are the false ribs (Latin: costae spuriae) or vertebrochondral ribs, which do not connect directly to the sternum. The first three pairs of vertebrochondral ribs (eighth to tenth) connect indirectly to the sternum via the costal cartilages of the ribs above them, and the overall elasticity of their articulations allows the bucket handle movements of the rib cage essential for respiratory activity.
Structure:
The phrase floating rib (Latin: costae fluctuantes) or vertebral rib refers to the two lowermost (the eleventh and twelfth) rib pairs; so-called because they are attached only to the vertebrae and not to the sternum or any of the costal cartilages. These ribs are relatively small and delicate, and include a cartilaginous tip. The spaces between the ribs are known as intercostal spaces; they contain the intrinsic intercostal muscles and the neurovascular bundles containing intercostal nerves, arteries and veins. The superficial surface of the rib cage is covered by the thoracolumbar fascia, which provides external attachments for the neck, back, pectoral and abdominal muscles.
Structure:
Parts of rib Each rib consists of a head, neck, and a shaft. All ribs are attached posteriorly to the thoracic vertebrae. They are numbered to match the vertebrae they attach to – one to twelve, from top (T1) to bottom. The head of the rib is the end part closest to the vertebra with which it articulates. It is marked by a kidney-shaped articular surface which is divided by a horizontal crest into two articulating regions. The upper region articulates with the inferior costal facet on the vertebra above, and the larger region articulates with the superior costal facet on the vertebra with the same number. The transverse process of a thoracic vertebra also articulates at the transverse costal facet with the tubercle of the rib of the same number. The crest gives attachment to the intra-articular ligament. The neck of the rib is the flattened part that extends laterally from the head. The neck is about 3 cm long. Its anterior surface is flat and smooth, whilst its posterior is perforated by numerous foramina and its surface rough, to give attachment to the ligament of the neck. Its upper border presents a rough crest (crista colli costae) for the attachment of the anterior costotransverse ligament; its lower border is rounded.
Structure:
On the posterior surface at the neck, is an eminence—the tubercle that consists of an articular and a non-articular portion. The articular portion is the lower and more medial of the two and presents a small, oval surface for articulation with the transverse costal facet on the end of the transverse process of the lower of the two vertebrae to which the head is connected. The non-articular portion is a rough elevation and affords attachment to the ligament of the tubercle. The tubercle is much more prominent in the upper ribs than in the lower ribs.
Structure:
The angle of a rib (costal angle) may refer both to the bending part of the rib and to a prominent line in this area, a little in front of the tubercle. This line is directed downward and laterally; it gives attachment to a tendon of the iliocostalis muscle. At this point, the rib is bent in two directions, and at the same time twisted on its long axis.
Structure:
The distance between the angle and the tubercle is progressively greater from the second to the tenth ribs. The area between the angle and the tubercle is rounded, rough, and irregular, and serves for the attachment of the longissimus dorsi muscle.
Bones Ribs and vertebrae The first rib (the topmost one) is the most curved and usually the shortest of all the ribs; it is broad and flat, its surfaces looking upward and downward, and its borders inward and outward.
Structure:
The head is small and rounded, and possesses only a single articular facet, for articulation with the body of the first thoracic vertebra. The neck is narrow and rounded. The tubercle, thick and prominent, is placed on the outer border. It bears a small facet for articulation with the transverse costal facet on the transverse process of T1. There is no angle, but at the tubercle, the rib is slightly bent, with the convexity upward, so that the head of the bone is directed downward. The upper surface of the body is marked by two shallow grooves, separated from each other by a slight ridge prolonged internally into a tubercle, the scalene tubercle, for the attachment of the anterior scalene; the anterior groove transmits the subclavian vein, the posterior the subclavian artery and the lowest trunk of the brachial plexus. Behind the posterior groove is a rough area for the attachment of the medial scalene. The under surface is smooth and without a costal groove. The outer border is convex, thick, and rounded, and at its posterior part gives attachment to the first digitation of the serratus anterior. The inner border is concave, thin, and sharp, and marked about its center by the scalene tubercle. The anterior extremity is larger and thicker than that of any of the other ribs.
Structure:
The second rib is the second uppermost rib in humans or second most frontal in animals that walk on four limbs. In humans, the second rib is defined as a true rib since it connects with the sternum through the intervention of the costal cartilage anteriorly (at the front). Posteriorly, the second rib is connected with the vertebral column by the second thoracic vertebra. The second rib is much longer than the first rib, but has a very similar curvature. The non-articular portion of the tubercle is occasionally only feebly marked. The angle is slight and situated close to the tubercle. The body is not twisted so that both ends touch any plane surface upon which it may be laid; but there is a bend, with its convexity upward, similar to, though smaller than that found in the first rib. The body is not flattened horizontally like that of the first rib. Its external surface is convex, and looks upward and a little outward; near the middle of it is a rough eminence for the origin of the lower part of the first and the whole of the second digitation of the serratus anterior; behind and above this is attached the posterior scalene. The internal surface, smooth, and concave, is directed downward and a little inward: on its posterior part there is a short costal groove between the ridge of the internal surface of the rib and the inferior border. It protects the intercostal space containing the intercostal veins, intercostal arteries, and intercostal nerves. The ninth rib has a frontal part at the same level as the first lumbar vertebra. This level is called the transpyloric plane, since the pylorus is also at this level. The tenth rib attaches directly to the body of vertebra T10 instead of between vertebrae like the second through ninth ribs. Due to this direct attachment, vertebra T10 has a complete costal facet on its body.
Structure:
The eleventh and twelfth ribs, the floating ribs, have a single articular facet on the head, which is of rather large size. They have no necks or tubercles, and are pointed at their anterior ends. The eleventh has a slight angle and a shallow costal groove, whereas the twelfth does not. The twelfth rib is much shorter than the eleventh rib, and has only a single articular facet.
Structure:
Sternum The sternum is a long, flat bone that forms the front of the rib cage. The cartilages of the top seven ribs (the true ribs) join with the sternum at the sternocostal joints. The costal cartilage of the second rib articulates with the sternum at the sternal angle, making it easy to locate. The manubrium is the wider, superior portion of the sternum. The top of the manubrium has a shallow, U-shaped border called the jugular (suprasternal) notch. The clavicular notch is the shallow depression located on either side at the superior-lateral margins of the manubrium. This is the site of the sternoclavicular joint, between the sternum and clavicle. The first ribs also attach to the manubrium. The transversus thoracis muscle is innervated by one of the intercostal nerves and superiorly attaches at the posterior surface of the lower sternum. Its inferior attachment is the internal surface of costal cartilages two through six and works to depress the ribs.
Structure:
Development Expansion of the rib cage in males is caused by the effects of testosterone during puberty. Thus, males generally have broad shoulders and expanded chests, allowing them to inhale more air to supply their muscles with oxygen.
Structure:
Variation Variations in the number of ribs occur. About 1 in 200–500 people have an additional cervical rib, and there is a female predominance. Intrathoracic supernumerary ribs are extremely rare. The rib remnant of the 7th cervical vertebra on one or both sides is occasionally replaced by a free extra rib called a cervical rib, which can mechanically interfere with the nerves (brachial plexus) going to the arm.
Structure:
In several ethnic groups, most significantly the Japanese, the tenth rib is sometimes a floating rib, as it lacks a cartilaginous connection to the seventh rib.
Function:
The human rib cage is a component of the human respiratory system. It encloses the thoracic cavity, which contains the lungs. An inhalation is accomplished when the muscular diaphragm, at the floor of the thoracic cavity, contracts and flattens, while the contraction of the intercostal muscles lifts the rib cage up and out.
Function:
Expansion of the thoracic cavity is driven in three planes: the vertical, the anteroposterior and the transverse. The vertical plane is extended by the help of the diaphragm contracting and the abdominal muscles relaxing to accommodate the downward pressure that is supplied to the abdominal viscera by the diaphragm contracting. A greater extension can be achieved by the diaphragm itself moving down, rather than simply the domes flattening. The second plane is the anteroposterior, and this is expanded by a movement known as the 'pump handle'. The downward-sloping orientation of the upper ribs is what enables this movement to occur. When the external intercostal muscles contract and lift the ribs, the upper ribs are able also to push the sternum up and out. This movement increases the anteroposterior diameter of the thoracic cavity, and hence aids breathing further. The third, transverse, plane is primarily expanded by the lower ribs (some say it is the 7th to 10th ribs in particular), with the diaphragm's central tendon acting as a fixed point. When the diaphragm contracts, the ribs are able to evert (meaning turn outwards or inside out) and produce what is known as the bucket handle movement, facilitated by gliding at the costovertebral joints. In this way, the transverse diameter is expanded and the lungs can fill.
Function:
The circumference of the normal adult human rib cage expands by 3 to 5 cm during inhalation.
Clinical significance:
Rib fractures are the most common injury to the rib cage. These most frequently affect the middle ribs. When several adjacent ribs incur two or more fractures each, this can result in a flail chest which is a life-threatening condition.
A dislocated rib can be painful and can be caused simply by coughing, or for example by trauma or lifting heavy weights. One or more costal cartilages can become inflamed – a condition known as costochondritis; the resulting pain is similar to that of a heart attack.
Clinical significance:
Abnormalities of the rib cage include pectus excavatum ("sunken chest") and pectus carinatum ("pigeon chest"). A bifid rib is a bifurcated rib, split towards the sternal end, and usually just affecting one of the ribs of a pair. It is a congenital defect affecting about 1.2% of the population. It is often without symptoms though respiratory difficulties and other problems can arise.
Clinical significance:
Rib removal is the surgical removal of one or more ribs for therapeutic or cosmetic reasons.
Rib resection is the removal of part of a rib.
Regeneration:
Since the early part of the 20th century, the ability of the human rib to regenerate itself has been appreciated. However, scientific reports demonstrating repair have been sporadic and anecdotal. Currently, this phenomenon is best taken advantage of by craniomaxillofacial surgeons, who use both cartilage and bone material from the rib for jaw, face, and ear reconstruction. The perichondrium is a fibrous sheath of vascular connective tissue surrounding the rib cartilage, containing a source of progenitor stem cells required for rib regeneration.
Society and culture:
The position of ribs can be permanently altered by a form of body modification called tightlacing, which uses a corset to compress and move the ribs.
The ribs, particularly their sternal ends, are used as a way of estimating age in forensic pathology due to their progressive ossification.
Biblical Story:
The number of ribs as 24 (12 pairs) was noted by the Flemish anatomist Vesalius in his key work of anatomy De humani corporis fabrica in 1543, setting off a wave of controversy, as it was traditionally assumed from the Biblical story of Adam and Eve that men's ribs would number one fewer than women's. However, a thirteenth rib, the cervical rib, occurs in 1% of humans and is more common in females than in males.
Other animals:
In herpetology, costal grooves refer to lateral indents along the integument of salamanders. The grooves run from the axilla to the groin. Each groove overlies the myotomal septa to mark the position of the internal rib. Birds and reptiles have bony uncinate processes on their ribs that project caudally from the vertical section of each rib. These serve to attach sacral muscles and also aid in allowing greater inspiration. Crocodiles have cartilaginous uncinate processes.
Notes:
This article incorporates text in the public domain from the 20th edition of Gray's Anatomy (1918) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Stochastic control**
Stochastic control:
Stochastic control or stochastic optimal control is a subfield of control theory that deals with the existence of uncertainty either in observations or in the noise that drives the evolution of the system. The system designer assumes, in a Bayesian probability-driven fashion, that random noise with known probability distribution affects the evolution and observation of the state variables. Stochastic control aims to design the time path of the controlled variables that performs the desired control task with minimum cost, somehow defined, despite the presence of this noise. The context may be either discrete time or continuous time.
Certainty equivalence:
An extremely well-studied formulation in stochastic control is that of linear quadratic Gaussian control. Here the model is linear, the objective function is the expected value of a quadratic form, and the disturbances are purely additive. A basic result for discrete-time centralized systems with only additive uncertainty is the certainty equivalence property: that the optimal control solution in this case is the same as would be obtained in the absence of the additive disturbances. This property is applicable to all centralized systems with linear equations of evolution, quadratic cost function, and noise entering the model only additively; the quadratic assumption allows for the optimal control laws, which follow the certainty-equivalence property, to be linear functions of the observations of the controllers.
Certainty equivalence:
Any deviation from the above assumptions—a nonlinear state equation, a non-quadratic objective function, noise in the multiplicative parameters of the model, or decentralization of control—causes the certainty equivalence property not to hold. For example, its failure to hold for decentralized control was demonstrated in Witsenhausen's counterexample.
Discrete time:
In a discrete-time context, the decision-maker observes the state variable, possibly with observational noise, in each time period. The objective may be to optimize the sum of expected values of a nonlinear (possibly quadratic) objective function over all the time periods from the present to the final period of concern, or to optimize the value of the objective function as of the final period only. At each time period new observations are made, and the control variables are to be adjusted optimally. Finding the optimal solution for the present time may involve iterating a matrix Riccati equation backwards in time from the last period to the present period.
Discrete time:
In the discrete-time case with uncertainty about the parameter values in the transition matrix (giving the effect of current values of the state variables on their own evolution) and/or the control response matrix of the state equation, but still with a linear state equation and quadratic objective function, a Riccati equation can still be obtained for iterating backward to each period's solution even though certainty equivalence does not apply (ch. 13). The discrete-time case of a non-quadratic loss function but only additive disturbances can also be handled, albeit with more complications.
Discrete time:
Example A typical specification of the discrete-time stochastic linear quadratic control problem is to minimize (ch. 13)
$$ E_1 \sum_{t=1}^{S} \left[ y_t^{\mathsf T} Q y_t + u_t^{\mathsf T} R u_t \right], $$
where $E_1$ is the expected value operator conditional on $y_0$, superscript $\mathsf T$ indicates a matrix transpose, and $S$ is the time horizon, subject to the state equation
$$ y_t = A_t y_{t-1} + B_t u_t, $$
where $y$ is an n × 1 vector of observable state variables, $u$ is a k × 1 vector of control variables, $A_t$ is the time-t realization of the stochastic n × n state transition matrix, $B_t$ is the time-t realization of the stochastic n × k matrix of control multipliers, and Q (n × n) and R (k × k) are known symmetric positive definite cost matrices. We assume that each element of A and B is jointly independently and identically distributed through time, so the expected value operations need not be time-conditional.
Discrete time:
Induction backwards in time can be used to obtain the optimal control solution at each time (ch. 13),
$$ u_t^{*} = -\left[ E\!\left( B^{\mathsf T} X_t B + R \right) \right]^{-1} E\!\left( B^{\mathsf T} X_t A \right) y_{t-1}, $$
with the symmetric positive definite cost-to-go matrix $X$ evolving backwards in time from $X_S = Q$ according to
$$ X_{t-1} = Q + E\!\left[ A^{\mathsf T} X_t A \right] - E\!\left[ A^{\mathsf T} X_t B \right] \left[ E\!\left( B^{\mathsf T} X_t B + R \right) \right]^{-1} E\!\left( B^{\mathsf T} X_t A \right), $$
which is known as the discrete-time dynamic Riccati equation of this problem. The only information needed regarding the unknown parameters in the A and B matrices is the expected value and variance of each element of each matrix and the covariances among elements of the same matrix and among elements across matrices.
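As a numerical illustration of this backward recursion (not part of the original text), the sketch below assumes the required expectations are approximated by sample averages over Monte Carlo draws of A and B; the function name and array layout are hypothetical, and the formulation itself only needs the first and second moments of the matrix elements.

```python
import numpy as np

def stochastic_lqr_gains(A_samples, B_samples, Q, R, S):
    """Backward Riccati recursion for the stochastic LQ problem sketched above.

    A_samples: sequence of n x n draws of the stochastic transition matrix A.
    B_samples: sequence of n x k draws of the stochastic control-multiplier matrix B.
    Q, R: known symmetric positive definite cost matrices (n x n and k x k).
    S: time horizon.
    Returns the feedback gains K_1..K_S such that u_t* = -K_t @ y_{t-1}.
    """
    X = Q.copy()                      # terminal condition X_S = Q
    gains = [None] * (S + 1)
    for t in range(S, 0, -1):
        # Sample-average estimates of the expectations appearing in the recursion
        E_BXB = np.mean([B.T @ X @ B for B in B_samples], axis=0)
        E_BXA = np.mean([B.T @ X @ A for A, B in zip(A_samples, B_samples)], axis=0)
        E_AXA = np.mean([A.T @ X @ A for A in A_samples], axis=0)
        K = np.linalg.solve(E_BXB + R, E_BXA)   # K_t = [E(B'X_t B) + R]^{-1} E(B'X_t A)
        gains[t] = K
        X = Q + E_AXA - E_BXA.T @ K             # X_{t-1}; uses E[A'X B] = E[B'X A]'
    return gains[1:]
```

In the certainty-equivalence special case where A and B are deterministic, passing a single "sample" of each reproduces the standard discrete-time LQR recursion.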
Discrete time:
The optimal control solution is unaffected if zero-mean, i.i.d. additive shocks also appear in the state equation, so long as they are uncorrelated with the parameters in the A and B matrices. But if they are so correlated, then the optimal control solution for each period contains an additional additive constant vector. If an additive constant vector appears in the state equation, then again the optimal control solution for each period contains an additional additive constant vector.
Discrete time:
The steady-state characterization of X (if it exists), relevant for the infinite-horizon problem in which S goes to infinity, can be found by iterating the dynamic equation for X repeatedly until it converges; then X is characterized by removing the time subscripts from its dynamic equation.
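A minimal sketch of that fixed-point computation, under the same illustrative assumptions as the code above (the helper name is hypothetical):

```python
import numpy as np

def steady_state_X(riccati_update, Q, tol=1e-9, max_iter=10_000):
    """Iterate X_{t-1} = riccati_update(X_t) from X = Q until it stops changing.

    riccati_update: callable mapping X_t to X_{t-1}, e.g. the update performed
    inside the loop of stochastic_lqr_gains above.
    Returns the steady-state X if the iteration converges.
    """
    X = np.array(Q, dtype=float)
    for _ in range(max_iter):
        X_next = riccati_update(X)
        if np.max(np.abs(X_next - X)) < tol:
            return X_next
        X = X_next
    raise RuntimeError("Riccati iteration did not converge")
```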
Continuous time:
If the model is in continuous time, the controller knows the state of the system at each instant of time. The objective is to maximize either an integral of, for example, a concave function of a state variable over a horizon from time zero (the present) to a terminal time T, or a concave function of a state variable at some future date T. As time evolves, new observations are continuously made and the control variables are continuously adjusted in optimal fashion.
Stochastic model predictive control:
In the literature, there are two types of MPC for stochastic systems: robust model predictive control and stochastic model predictive control (SMPC). Robust model predictive control is a more conservative method which considers the worst scenario in the optimization procedure. However, this method, like other robust control approaches, deteriorates the overall controller's performance and is applicable only to systems with bounded uncertainties. The alternative method, SMPC, considers soft constraints which limit the risk of violation by a probabilistic inequality.
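As a rough illustration of what such a probabilistic inequality looks like (the notation here is ours, not from the original text), SMPC typically replaces a hard constraint $g(x_t) \le 0$ with a chance constraint
$$ \Pr\left[ g(x_t) \le 0 \right] \;\ge\; 1 - \varepsilon , \qquad 0 < \varepsilon \ll 1 , $$
so the optimization tolerates a small, user-chosen probability of violation instead of guarding against the worst case.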
Stochastic model predictive control:
In finance In a continuous-time approach in a finance context, the state variable in the stochastic differential equation is usually wealth or net worth, and the controls are the shares placed at each time in the various assets. Given the asset allocation chosen at any time, the determinants of the change in wealth are usually the stochastic returns to assets and the interest rate on the risk-free asset. The field of stochastic control has developed greatly since the 1970s, particularly in its applications to finance. Robert Merton used stochastic control to study optimal portfolios of safe and risky assets. His work and that of Black–Scholes changed the nature of the finance literature. Influential mathematical textbook treatments were by Fleming and Rishel, and by Fleming and Soner. These techniques were applied by Stein to the financial crisis of 2007–08. The maximization, say of the expected logarithm of net worth at a terminal date T, is subject to stochastic processes on the components of wealth. In this case, in continuous time Itô's equation is the main tool of analysis. In the case where the maximization is an integral of a concave function of utility over a horizon (0,T), dynamic programming is used. There is no certainty equivalence as in the older literature, because the coefficients of the control variables—that is, the returns received by the chosen shares of assets—are stochastic. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fumigaclavine A dimethylallyltransferase**
Fumigaclavine A dimethylallyltransferase:
Fumigaclavine A dimethylallyltransferase (EC 2.5.1.100, FgaPT1) is an enzyme with systematic name dimethylallyl-diphosphate:fumigaclavine A dimethylallyltransferase. This enzyme catalyses the following chemical reaction: fumigaclavine A + dimethylallyl diphosphate ⇌ fumigaclavine C + diphosphate. Fumigaclavine C is an ergot alkaloid produced by some fungi of the Trichocomaceae family. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Engineering science and mechanics**
Engineering science and mechanics:
Engineering science and mechanics (ESM) is a multidisciplinary and interdisciplinary engineering program and/or academic department. It is available at various American universities, including Pennsylvania State University, University of Virginia, Virginia Polytechnic Institute and State University, Georgia Institute of Technology, and University of Alabama.
Programs:
A Bachelor of Science, Master of Science, Master of Engineering, or Ph.D. degree in engineering science, engineering mechanics, or engineering science and mechanics is awarded upon completion of the respective program.
Programs:
Areas of specialization include aerodynamics, biomechanics, bionanotechnology, biosensors and bioelectronics, composite materials, continuum mechanics, data mining, electromagnetics of complex materials, electronic materials and devices, experimental mechanics, fluid mechanics, laser-assisted micromanufacturing, metamaterials, microfabrication, microfluidic systems, microelectromechanical systems (MEMS) and microoptoelectromechanical systems (MOEMS), nanotechnology, neural engineering, non-destructive testing or evaluation, nonlinear dynamics, optoelectronics, photonics and plasmonics, quantum mechanics, solar-energy-harvesting materials, solid mechanics, solid-state physics, structural health monitoring, and thin films and nanostructured materials.
History:
In 1972, the department of engineering mechanics at the Virginia Polytechnic Institute and State University changed its name and undergraduate program to engineering science and mechanics. In 1974, the department of engineering mechanics at the Pennsylvania State University merged with the engineering science program, and the department was renamed engineering science and mechanics. Engineering science and mechanics is a graduate program in the School of Civil and Environmental Engineering at the Georgia Institute of Technology. The department of aerospace engineering and mechanics at the University of Alabama offers graduate degrees in engineering science and mechanics.
Academic departments and programs:
Department of Engineering Science and Mechanics, Pennsylvania State University.
Department of Engineering Science and Mechanics, Virginia Polytechnic Institute and State University.
Graduate Programs in Engineering Science and Mechanics, Georgia Institute of Technology.
Graduate Programs in Engineering Science and Mechanics, University of Alabama. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flavo-1 RNA motif**
Flavo-1 RNA motif:
The Flavo-1 RNA motif is a conserved RNA structure that was identified by bioinformatics. The vast majority of Flavo-1 RNAs are found in Flavobacteria, but some were detected in the phylum Bacteroidota, which contains Flavobacteria, or the phylum Spirochaetota, which is evolutionarily related to Bacteroidota. It was presumed that Flavo-1 RNAs function as non-coding RNAs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**NLRP14**
NLRP14:
NLRP14, short for NOD-like receptor family pyrin domain containing 14, is an intracellular protein of mammals associated with a role in spermatogenesis. It is also known as NALP14, NOD5, GC-LRR, Nalp-iota, PAN8, and CLR11.2, and is one of 14 pyrin domain containing members of the NOD-like receptor family of cytoplasmic receptors. NLRP14 is found exclusively in the testes where it is expressed within spermatogonia, spermatocytes and spermatids. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Voussoir**
Voussoir:
A voussoir is a wedge-shaped element, typically a stone, which is used in building an arch or vault. Although each unit in an arch or vault is a voussoir, two units are of distinct functional importance: the keystone and the springer. The keystone is the centre stone or masonry unit at the apex of an arch. The springer is the lowest voussoir on each side, located where the curve of the arch springs from the vertical support or abutment of the wall or pier. The keystone is often decorated or enlarged. An enlarged and sometimes slightly dropped keystone is often found in Mannerist arches of the 16th century, beginning with the works of Giulio Romano, who also began the fashion for using voussoirs above rectangular openings, rather than a lintel (Palazzo Stati Maccarani, Rome, circa 1522).
Voussoir:
The word is a stonemason's term borrowed in Middle English from French verbs connoting a "turn" (OED). Each wedge-shaped voussoir turns aside the thrust of the mass above, transferring it from stone to stone to the springer's bottom face (impost), which is horizontal and passes the thrust on to the supports. Voussoir arches distribute weight efficiently, and take maximum advantage of the compressive strength of stone, as in an arch bridge. The outer boundary of a voussoir is an extrados. In Visigothic and Moorish architectural traditions, the voussoirs are often in alternating colours (ablaq), usually red and white. This is also found sometimes in Romanesque architecture.
Voussoir:
During the 18th and 19th centuries, British bricklayers became aware that, by thickening the vertical mortar joint between regularly shaped bricks from bottom to top, they could construct an elliptical arch of useful strength over either a standard "former", or over specially constructed timber falsework (temporary structure to be removed once the construction is complete). The bricks used in such an arch are often referred to as "voussoirs". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rød pølse**
Rød pølse:
Rød pølse ("red sausage") is a type of bright red, boiled pork sausage very common in Denmark. Since hot dog stands are ubiquitous in Denmark, some people regard røde pølser as one of the national dishes. They are of the Vienna type and the skin is colored with a traditional red dye (carmine).
Traditional preparation:
Røde pølser are heated in hot water and are commonly served with remoulade, mustard or ketchup, fried onions and pickled sliced cucumber (gherkin). A common legend says that it was once ordered that day-old sausages be dyed as a means of warning. Another interpretation is that starting in the 1920s, vendors used red dye to disguise the diminished quality of older sausages.
Other Scandinavian sausages:
Scandinavian sausages are usually made of 60–80% finely ground pork, spiced with pepper, nutmeg, allspice or similar sweet spices (ground mustard seed, onions and sugar may also be added). Water, lard, pork rind, potato starch flour and soybean or milk protein are often added as fillers. Nearly all commercially available sausages are industrially precooked to be subsequently fried or heated in boiling water. In Norway, sausages are most often served in white buns, or in a traditional flat bread. The sausages are grilled or warmed in hot water, and they are normally served with ketchup or mustard. An alternative condiment to the sausages may be mashed potatoes. In Iceland, the sausages may contain mutton, giving them a distinct taste. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Audio/modem riser**
Audio/modem riser:
The audio/modem riser (AMR) is a riser expansion slot found on the motherboards of some Pentium III, Pentium 4, Duron, and Athlon personal computers. It was designed by Intel to interface with chipsets and provide analog functionality, such as sound cards and modems, on an expansion card.
Technology:
Physically, it has two rows of 23 pins, making 46 pins total. Three drawbacks of AMR are that it eliminates one PCI slot, it is not plug and play, and it does not allow for hardware accelerated cards (only software-based). Technologically, it has been superseded by the Advanced Communications Riser (ACR) and Intel's own communications and networking riser (CNR). However, riser technologies in general never really took off. Modems generally remained as PCI cards while audio and network interfaces were integrated on to motherboards. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Digital Author Identifier**
Digital Author Identifier:
In the Dutch research system, the Digital Author Identifier (DAI) system assigns a unique number to all academic authors as a form of authority control. The DAI links the PICA database in institutional libraries with the METIS national research information system.
The Digital Author Identifier is a unique national number for every author active within a Dutch university, university of applied sciences, or research institute. The DAI is prepared from the ISO standard “ISNI” (International Standard Name Identifier). The DAI brings several publications from an author together, and distinguishes between authors with the same name.
Other author identifiers:
The DAI is part of the national knowledge infrastructure. In the scientific community, other identifiers are in use as well, such as ORCID, ResearcherID, and ScopusId. SURFfoundation has, in cooperation with OCLC PICA, created a connection with PICA National Thesaurus Authornames (NTA) that is supplied and maintained by university libraries. Important to this is the connection between the research information system Metis and the repositories.
Applications:
There are many potential applications for the DAI. Publications by an author can be collected more easily, even though the author may have worked at several institutions. When an author changes name, for example because of marriage, the DAI remains the same, enabling anyone to find publications from before the change of name. With a tool, publication lists can be generated on the basis of the DAI. These publications are collected from several repositories in Dutch scientific institutions. With the DAI, this information can be integrated into one list. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lateral shoot**
Lateral shoot:
A lateral shoot, commonly known as a branch, is a part of a plant's shoot system that develops from axillary buds on the stem's surface, extending laterally from the plant's stem.
Importance to photosynthesis:
As a plant grows it requires more energy, and it must also out-compete nearby plants for this energy. One way a plant can compete for this energy is to increase its height; another is to increase its overall surface area. That is to say, the more lateral shoots a plant develops, the more foliage it can support, which increases how much photosynthesis the plant can perform by providing more area for the plant to take up carbon dioxide as well as sunlight.
Genes, transcription factors, and growth:
Through testing with Arabidopsis thaliana (a plant considered a model organism for plant genetic studies), genes including MAX1 and MAX2 have been found to affect growth of lateral shoots. Gene knockouts of these genes cause abnormal proliferation in the plants affected, implying they are used for repressing said growth in wild-type plants. In another set of experiments with Arabidopsis thaliana testing genes involved in the plant hormone florigen, two genes, FT and TSF (abbreviations for Flowering Locus T and Twin Sister of FT), appear to affect lateral shoots in a negative fashion when knocked out. These mutants show slower growth and improper formation of lateral shoots, which could also mean that lateral shoots are important to florigen's function. Along with general growth, there are also transcription factors that directly affect the production of additional lateral shoots, such as the TCP family (also known as Teosinte branched 1/cycloidea/proliferating cell factor), which are plant-specific proteins that suppress lateral shoot branching. Additionally, the TCP family has been found to be partially responsible for inhibiting the cell's Growth hormone–releasing hormone (GHRF), which means it also inhibits cell proliferation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Action (physics)**
Action (physics):
In physics, action is a scalar quantity describing how a physical system has changed over time (its dynamics). Action is significant because the equations of motion of the system can be derived through the principle of stationary action.
Action (physics):
In the simple case of a single particle moving with a constant velocity (uniform linear motion), the action is the momentum of the particle times the distance it moves, added up along its path; equivalently, action is twice the particle's kinetic energy times the duration for which it has that amount of energy. For more complicated systems, all such quantities are combined.
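As a quick worked check of this statement (not in the original text), consider a particle of mass m moving at constant speed v for a time t, so it covers a distance d = vt:
$$ \mathcal{S} = p \, d = (mv)(vt) = m v^2 t = 2 \left( \tfrac{1}{2} m v^2 \right) t , $$
so the momentum-times-distance and twice-kinetic-energy-times-duration descriptions agree.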
Action (physics):
More formally, action is a mathematical functional which takes the trajectory (also called path or history) of the system as its argument and has a real number as its result. Generally, the action takes different values for different paths. Action has dimensions of energy × time or momentum × length, and its SI unit is joule-second (like the Planck constant h).
Introduction:
Hamilton's principle states that the differential equations of motion for any physical system can be re-formulated as an equivalent integral equation. Thus, there are two distinct approaches for formulating dynamical models.
Introduction:
It applies not only to the classical mechanics of a single particle, but also to classical fields such as the electromagnetic and gravitational fields. Hamilton's principle has also been extended to quantum mechanics and quantum field theory—in particular the path integral formulation of quantum mechanics makes use of the concept—where a physical system randomly follows one of the possible paths, with the phase of the probability amplitude for each path being determined by the action for the path.
Introduction:
Solution of differential equation Empirical laws are frequently expressed as differential equations, which describe how physical quantities such as position and momentum change continuously with time, space or a generalization thereof. Given the initial and boundary conditions for the situation, the "solution" to these empirical equations is one or more functions that describe the behavior of the system and are called equations of motion.
Introduction:
Minimization of action integral Action is a part of an alternative approach to finding such equations of motion. Classical mechanics postulates that the path actually followed by a physical system is that for which the action is minimized, or more generally, is stationary. In other words, the action satisfies a variational principle: the principle of stationary action (see also below). The action is defined by an integral, and the classical equations of motion of a system can be derived by minimizing the value of that integral.
Introduction:
This simple principle provides deep insights into physics, and is an important concept in modern theoretical physics.
History:
Action was defined in several now obsolete ways during the development of the concept.
Gottfried Leibniz, Johann Bernoulli and Pierre Louis Maupertuis defined the action for light as the integral of its speed or inverse speed along its path length.
Leonhard Euler (and, possibly, Leibniz) defined action for a material particle as the integral of the particle's speed along its path through space.
Pierre Louis Maupertuis introduced several ad hoc and contradictory definitions of action within a single article, defining action as potential energy, as virtual kinetic energy, and as a hybrid that ensured conservation of momentum in collisions.
Mathematical definition:
Expressed in mathematical language, using the calculus of variations, the evolution of a physical system (i.e., how the system actually progresses from one state to another) corresponds to a stationary point (usually, a minimum) of the action.
Several different definitions of "the action" are in common use in physics. The action is usually an integral over time. However, when the action pertains to fields, it may be integrated over spatial variables as well. In some cases, the action is integrated along the path followed by the physical system.
The action is typically represented as an integral over time, taken along the path of the system between the initial time and the final time of the development of the system: $\mathcal{S} = \int_{t_1}^{t_2} L \, dt$, where the integrand $L$ is called the Lagrangian. For the action integral to be well-defined, the trajectory has to be bounded in time and space.
Action has the dimensions of [energy] × [time], and its SI unit is joule-second, which is identical to the unit of angular momentum.
Action in classical physics:
In classical physics, the term "action" has a number of meanings.
Action in classical physics:
Action (functional) Most commonly, the term is used for a functional $\mathcal{S}$ which takes a function of time and (for fields) space as input and returns a scalar. In classical mechanics, the input function is the evolution $\mathbf{q}(t)$ of the system between two times $t_1$ and $t_2$, where $\mathbf{q}$ represents the generalized coordinates. The action $\mathcal{S}[\mathbf{q}(t)]$ is defined as the integral of the Lagrangian $L$ for an input evolution between the two times: $\mathcal{S}[\mathbf{q}(t)] = \int_{t_1}^{t_2} L(\mathbf{q}(t), \dot{\mathbf{q}}(t), t)\, dt$, where the endpoints of the evolution are fixed and defined as $\mathbf{q}_1 = \mathbf{q}(t_1)$ and $\mathbf{q}_2 = \mathbf{q}(t_2)$. According to Hamilton's principle, the true evolution $\mathbf{q}_{\text{true}}(t)$ is an evolution for which the action $\mathcal{S}[\mathbf{q}(t)]$ is stationary (a minimum, maximum, or a saddle point). This principle results in the equations of motion in Lagrangian mechanics.
Action in classical physics:
Abbreviated action (functional) The abbreviated action is also a functional. It is usually denoted as $\mathcal{S}_0$. Here the input function is the path followed by the physical system without regard to its parameterization by time. For example, the path of a planetary orbit is an ellipse, and the path of a particle in a uniform gravitational field is a parabola; in both cases, the path does not depend on how fast the particle traverses the path. The abbreviated action $\mathcal{S}_0$ is defined as the integral of the generalized momenta along a path in the generalized coordinates: $\mathcal{S}_0 = \int_{\mathbf{q}_1}^{\mathbf{q}_2} \mathbf{p} \cdot d\mathbf{q}$. Spelled out concretely, this is $\mathcal{S}_0 = \int_{\mathbf{q}_1}^{\mathbf{q}_2} \sum_i p_i \, dq_i$. According to Maupertuis' principle, the true path is a path for which the abbreviated action $\mathcal{S}_0$ is stationary.
Action in classical physics:
Hamilton's principal function Hamilton's principal function S=S(q,t;q0,t0) is obtained from the action functional S by fixing the initial time t0 and the initial endpoint q0, while allowing the upper time limit t and the second endpoint q to vary. The Hamilton's principal function satisfies the Hamilton–Jacobi equation, a formulation of classical mechanics. Due to a similarity with the Schrödinger equation, the Hamilton–Jacobi equation provides, arguably, the most direct link with quantum mechanics.
Action in classical physics:
Hamilton's characteristic function When the total energy $E$ is conserved, the Hamilton–Jacobi equation can be solved with the additive separation of variables $S(q_1, \dots, q_N, t) = W(q_1, \dots, q_N) - E t$, where the time-independent function $W(q_1, q_2, \dots, q_N)$ is called Hamilton's characteristic function. The physical significance of this function is understood by taking its total time derivative $\frac{dW}{dt} = \sum_i \frac{\partial W}{\partial q_i} \dot{q}_i = \sum_i p_i \dot{q}_i$. This can be integrated to give $W = \int \sum_i p_i \dot{q}_i \, dt = \int \sum_i p_i \, dq_i$, which is just the abbreviated action.
Action in classical physics:
Other solutions of Hamilton–Jacobi equations The Hamilton–Jacobi equations are often solved by additive separability; in some cases, the individual terms of the solution, e.g., Sk(qk), are also called an "action".
Action in classical physics:
Action of a generalized coordinate This is a single variable $J_k$ in the action-angle coordinates, defined by integrating a single generalized momentum around a closed path in phase space, corresponding to rotating or oscillating motion: $J_k = \oint p_k \, dq_k$. The variable $J_k$ is called the "action" of the generalized coordinate $q_k$; the corresponding canonical variable conjugate to $J_k$ is its "angle" $w_k$, for reasons described more fully under action-angle coordinates. The integration is only over the single variable $q_k$, unlike the integrated dot product in the abbreviated action integral above. The $J_k$ variable equals the change in $S_k(q_k)$ as $q_k$ is varied around the closed path. For several physical systems of interest, $J_k$ is either a constant or varies very slowly; hence, the variable $J_k$ is often used in perturbation calculations and in determining adiabatic invariants.
Action in classical physics:
Action for a Hamiltonian flow See tautological one-form.
Euler–Lagrange equations:
In Lagrangian mechanics, the requirement that the action integral be stationary under small perturbations is equivalent to a set of differential equations (called the Euler–Lagrange equations) that may be obtained using the calculus of variations.
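For reference, requiring the action integral to be stationary under small perturbations of the trajectory yields one Euler–Lagrange equation per generalized coordinate $q_i$:
$$ \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}_i} \right) - \frac{\partial L}{\partial q_i} = 0 . $$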
The action principle:
Classical fields The action principle can be extended to obtain the equations of motion for fields, such as the electromagnetic field or gravitational field.
The Einstein equation utilizes the Einstein–Hilbert action as constrained by a variational principle.
The trajectory (path in spacetime) of a body in a gravitational field can be found using the action principle. For a free falling body, this trajectory is a geodesic.
The action principle:
Conservation laws Implications of symmetries in a physical situation can be found with the action principle, together with the Euler–Lagrange equations, which are derived from the action principle. An example is Noether's theorem, which states that to every continuous symmetry in a physical situation there corresponds a conservation law (and conversely). This deep connection requires that the action principle be assumed.
The action principle:
Quantum mechanics and quantum field theory In quantum mechanics, the system does not follow a single path whose action is stationary, but the behavior of the system depends on all permitted paths and the value of their action. The action corresponding to the various paths is used to calculate the path integral, which gives the probability amplitudes of the various outcomes.
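Schematically (a standard form, shown here for illustration), the amplitude is a sum over all permitted paths weighted by a phase set by each path's action,
$$ \text{amplitude} \;\propto\; \sum_{\text{paths}} e^{\, i \mathcal{S}[\text{path}] / \hbar } , $$
which is why contributions from paths far from the stationary-action path tend to cancel by destructive interference.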
The action principle:
Although equivalent in classical mechanics with Newton's laws, the action principle is better suited for generalizations and plays an important role in modern physics. Indeed, this principle is one of the great generalizations in physical science. It is best understood within quantum mechanics, particularly in Richard Feynman's path integral formulation, where it arises out of destructive interference of quantum amplitudes.
The action principle:
Maxwell's equations can also be derived as conditions of stationary action.
The action principle:
Single relativistic particle When relativistic effects are significant, the action of a point particle of mass $m$ travelling a world line $C$ parametrized by the proper time $\tau$ is $\mathcal{S} = - m c^2 \int_C d\tau$. If instead the particle is parametrized by the coordinate time $t$ of the particle and the coordinate time ranges from $t_1$ to $t_2$, then the action becomes $\mathcal{S} = \int_{t_1}^{t_2} L \, dt$, where the Lagrangian is $L = -m c^2 \sqrt{1 - \frac{v^2}{c^2}}$. Modern extensions The action principle can be generalized still further. For example, the action need not be an integral, because nonlocal actions are possible. The configuration space need not even be a functional space, given certain features such as noncommutative geometry. However, a physical basis for these mathematical extensions remains to be established experimentally.
Sources and further reading:
For an annotated bibliography, see Edwin F. Taylor, who lists, among other things, the following books: The Cambridge Handbook of Physics Formulas, G. Woan, Cambridge University Press, 2010, ISBN 978-0-521-57507-2.
Cornelius Lanczos, The Variational Principles of Mechanics (Dover Publications, New York, 1986). ISBN 0-486-65067-7. The reference most quoted by all those who explore this field.
L. D. Landau and E. M. Lifshitz, Mechanics, Course of Theoretical Physics (Butterworth-Heinenann, 1976), 3rd ed., Vol. 1. ISBN 0-7506-2896-0. Begins with the principle of least action.
Thomas A. Moore "Least-Action Principle" in Macmillan Encyclopedia of Physics (Simon & Schuster Macmillan, 1996), Volume 2, ISBN 0-02-897359-3, OCLC 35269891, pages 840–842.
Gerald Jay Sussman and Jack Wisdom, Structure and Interpretation of Classical Mechanics (MIT Press, 2001). Begins with the principle of least action, uses modern mathematical notation, and checks the clarity and consistency of procedures by programming them in computer language.
Dare A. Wells, Lagrangian Dynamics, Schaum's Outline Series (McGraw-Hill, 1967) ISBN 0-07-069258-0, A 350-page comprehensive "outline" of the subject.
Robert Weinstock, Calculus of Variations, with Applications to Physics and Engineering (Dover Publications, 1974). ISBN 0-486-63069-2. An oldie but goodie, with the formalism carefully defined before use in physics and engineering.
Wolfgang Yourgrau and Stanley Mandelstam, Variational Principles in Dynamics and Quantum Theory (Dover Publications, 1979). A nice treatment that does not avoid the philosophical implications of the theory and lauds the Feynman treatment of quantum mechanics that reduces to the principle of least action in the limit of large mass.
Edwin F. Taylor's page | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Year and a day**
Year and a day:
Year and a day can refer to:
- The year and a day rule, a period tied into various legal principles in a number of jurisdictions
- A Year and a Day (1998 novel), by Virginia Henley
- A Year and a Day (2004 novel), by Leslie Pietrzyk (pub. William Morrow)
- A Year and a Day (2006 novel), by Sara M. Harvey
- A poem by Elizabeth Siddall
- "Year and a Day", a song by the Beastie Boys
- A Year and a Day, a 2005 film
- A period used in handfastings – though more from the works of Sir Walter Scott than history
- The time The Owl and the Pussycat sailed for in Edward Lear's poem of that name.
Year and a day:
Long term assets are considered to be those held for a year and a day.
Pagans and secret societies often use a year and a day as a minimum period of initiation or between degrees of membership.
A Year and a Day, a 2008 mixtape by rapper T.I. Note: a lunar year (13 lunar months of 28 days, or 364 days) plus a day equals a common solar year of 365 days; 366 days likewise spans a full year even when a leap day is included. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**TikTok food trends**
TikTok food trends:
TikTok food trends are specific food recipes and food-related fads on the social media platform TikTok. This content amassed popularity in 2020 during the COVID-19 pandemic, as many people cooked and ate at home and more people turned to social media for entertainment. While some TikTok users share their diets and recipes, others expand their brand or image on TikTok through step-by-step videos of easy and popular recipes. Users often refer to food-related content as "FoodTok." The hashtags #TikTokFood and #FoodTok are used to identify food-related content, and have been viewed 40.2 billion and 9.7 billion times respectively since the app's creation, according to the company. Food trends have had profound societal impacts on their millions of viewers. There has been increased popularity in youth cooking, conversations on body image, use of marketing of food products on social media, and food shortages due to mass trends. Certain TikTok content creators such as Eitan Bernath, Jeron Combs, and Emily Mariko have achieved fame by crafting recipes that have become food trends. They and their colleagues have developed recipes such as the leftover salmon bowl, baked feta cheese pasta, and pesto eggs.
Timeline:
2020 Dalgona coffee Dalgona coffee is whipped coffee that is made by combining equal parts of coffee, sugar, hot water, and then whipping the mixture to produce a froth-like texture. Dalgona coffee originates from Macau, but first emerged as a trend in South Korea where it earned its name. In March 2020, the trend emerged in the United States.
Timeline:
Mini pancake cereal Mini pancake cereal is a TikTok food trend where pancakes are made in miniature and served in the style of breakfast cereal. To replicate a bowl of cereal, users make tiny pancakes and add them to a bowl usually topped with maple syrup and butter. Sydney Melhoff (@sydneymelhoff) is credited with first posting the trend on the platform in April. Following the pancake cereal trend, several individuals developed their own take on the dish and recreated it using different foods like cookies, donuts, and croissants.
Timeline:
Decorative focaccia bread This TikTok trend was created in home kitchens, using the hashtag #focacciaart in the spring season of 2020. People decorate focaccia loaves with vegetables, herbs, and more.
Cloud bread Cloud bread is a light and fluffy low-carb substitute for bread made with egg whites, corn starch, and sugar. Cloud bread became popular on TikTok in July, and @linqanaaa is credited with bringing it to the platform.
Timeline:
Hot chocolate bombs Hot chocolate bombs (also known as cocoa bombs) were popularised on TikTok by Eric Torres-Garcia in December 2020. They are chocolate spheres filled with hot chocolate powder and other confections, such as marshmallows, that are then submerged into hot milk which causes the chocolate sphere to erupt. They became popular around the Christmas of 2020, prompting several bakers and store owners to add these confections to their menus. Torres-Garcia claims to have posted the first cocoa bombs video on TikTok, and trademarked the name. Since then, many people have attempted to recreate the dessert as well as create their own signature chocolate bombs.
Timeline:
2021 Baked feta cheese pasta The simplicity and ease of creating the dish is what allowed the recipe to go viral. The dish is made using a few simple ingredients: cherry tomatoes, a block of feta cheese, olive oil, pasta, basil, and garlic. In 2019, a Finnish food blogger named Jenni Häyrinen developed the dish, but it only became viral in February 2021. The trend became so popular that it caused a shortage of feta cheese in Finnish grocery stores.
Timeline:
Nature's cereal Nature's cereal is a trend on TikTok developed by user @natures_food in March 2021, where traditional cereal is replaced with fruit; and the milk is replaced with coconut water. Some users claim this cereal relieves constipation and gives people energy. One factor that has boosted the popularity of this trend is the attention it received from singer Lizzo who posted several videos of herself enjoying nature's cereal.
Timeline:
Baked Oats This trend, originating in the spring of 2021, starts with using oat flour instead of whole oats. Baked oats can be made in a variety of different flavors and can be baked in a short amount of time.
Timeline:
Pesto eggs Pesto eggs is a TikTok food trend involving the substitution of pesto sauce for oil when cooking eggs on a stovetop. The technique is successful since pesto sauce already contains olive oil as a primary ingredient. Amy Wilichowsky, a dietitian and TikToker, shared a video of her cooking eggs using the technique on the social media platform on April 24, 2021, and is credited as the creator of the trend. The original recipe included bread topped with avocado, ricotta cheese, honey, salt, pepper, and red pepper flakes but multiple variations have arisen since then.
Timeline:
Pasta Chips Pasta chips were created in June 2021 and are mostly eaten as a snack or appetizer. After the pasta is cooked in boiling water, it is crisped in an air fryer. Pasta chips can be seasoned in a variety of different flavors.
Timeline:
Frozen honey In the summer of 2021, eating honey frozen from a plastic bottle went viral on TikTok. The hashtag #FrozenHoney achieved nearly 600 million views by the start of August according to the company. The origin of the trend is unclear, although NBC News noted that ASMR creators had previously consumed frozen honey in their YouTube videos because their audiences found the noise satisfying to listen to. However, NBC News reported that some users on the app had experienced diarrhea or otherwise felt sick. The article speculated that the cause could be the amount consumed, saying that while honey does not pose a health risk in small amounts, eating an excessive amount can cause diarrhea or dental issues.
Timeline:
Leftover salmon bowl The salmon rice bowl trend was originally developed by TikTok lifestyle influencer Emily Mariko. The recipe was first introduced on August 25, 2021, but was revised multiple times with the final variation uploaded on September 21, 2021. The video received forty million views and inspired one hundred and fifty-five videos with related content. In this dish, mashed salmon and rice are heated in a microwave and then covered in mayonnaise, soy sauce, and sriracha. It is consumed with kimchi and dried nori seaweed squares. Mariko heats the dish with an ice cube on top to steam the rice while it is heating up. This salmon rice bowl is easy to make due to its simple and few ingredients, a cooking time of under five minutes, and its use of leftover salmon.
Timeline:
Chili oil eggs This is TikToker Jen Curley's twist on pesto eggs, created in September 2021. Made with only two ingredients, these eggs get complex umami flavors from the chili oil.
Timeline:
Flamin' Hot Cheetos salad TikTok user @rxthism created a viral recipe by adding Flamin' Hot Cheetos to a salad mix. With over four and a half million views, this mixture of cucumbers, Flamin' Hot Cheetos, hot sauce, cilantro, and lemon juice has sparked substantial interest. The implementation of junk food into a regular dish became a new trend in FoodTok in September 2021, and more creative dishes of the sort were created in response.
Timeline:
2022 Green Goddess Cabbage Salad Created by Baked by Melissa in January 2022, this vegan pesto-like dressing is accompanied by nuts and any vegetables you may have on hand. Typically, this salad is made with shredded cabbage, cucumbers, chives, and scallions.
Spicy Pickled Garlic Spicy Pickled Garlic is credited to TikTok user @lalaleluu in March 2022. This trend consists of pickled garlic in a jar, sriracha, chili flakes, and thyme.
Timeline:
Cowboy Caviar Originally created by TikTok user @brialem in June 2022, Cowboy Caviar was arguably the most viral recipe of the year with over 17 million views and 2.7 million likes. This dip typically contains beans, corn, avocado, tomatoes, peppers, onions, and a dressing, but can include as many (or as few) ingredients as desired. Fans of the dip have created variations by adding ingredients like mangoes, peaches, and pomegranate seeds.
Timeline:
Water Pie A Great Depression recipe for a pie with a filling made primarily from water went viral on TikTok in 2022. One variant of the dish popular on the site was made with the soft drink Sprite.
Notable figures:
Eitan Bernath Eitan Bernath is a 19-year-old TikTok star with over 1.6 million followers on the platform as of May 2021. After teaching himself to cook by watching YouTube and the Food Network, he posted his first TikTok in 2019: within 24 hours of posting his first video on an easy-to-make recipe, he gained tens of thousands of followers. His trademark upbeat and energetic behavior in combination with his focus on easy recipes differentiates him from traditional culinary experts.
Notable figures:
Jeron Combs Jeron Combs posted his first TikTok video in May 2020 from a prison cell, and since then has attracted millions of viewers. Combs converted his metal bed frame into a cooking surface and documented his meal preparation process. His account, @blockboyjmoney, has now been deleted from TikTok's platform, but an alternative account with the handle @blaise.x0 posts videos on his behalf to over 330,000 followers as reported by the company.
Notable figures:
Emily Mariko Emily Mariko attained TikTok fame after posting a recipe video about leftover salmon bowls on August 25, 2021. Her signature salmon dish, along with the lack of music and filler audio in contrast to most TikTok culinary videos, has created a following of over 6 million people.
Jeremy Scheck With over 2 million followers on TikTok, Jeremy Scheck is a college student who creates culinary content that focuses on culture, nutrition, and humor. He quickly found success after taking classes relating to dairy science, nutrition, and horticulture. His nutritional commentary and cultural references stem from his university coursework.
Notable figures:
Jessica Woo As a mother of three, Jessica Woo documents her process for packing her kids’ lunches. Her focus on consistently artful presentations of common foods, such as salami and string cheese, draws an audience of over 5 million viewers as of August 2020. Her handwritten notes and catchphrase, “let’s make some lunch for my kids,” attract hundreds of thousands of viewers to her videos, and she cites an emphasis on being “like a regular mom” as her key to success.
Notable figures:
@menwiththepot This TikToker is known for using unusual kitchen tools in very scenic settings, mostly against a wilderness backdrop. He cuts his ingredients with oversized knives and cooks his recipes in a large pot.
Societal impact:
Body image TikTok food trends are sometimes seen or used as templates for a healthier, more nutritional lifestyle for viewers to follow. However, many of these posts are created by users who lack the professional qualifications to promote such ideas. Quite often, these food trends are tied to lifestyle tips, thereby influencing viewers' diets, daily tasks, and personal routines. On the other hand, TikTok food trends can encourage body positivity and allow people to promote the importance of self-satisfaction with body image if they so desire. These trends can give viewers an opportunity to express themselves and their personal diet choices without conforming to ideas created by novice users. 'What I eat in a day' videos have been criticised for causing more harm than good. These videos are meant to give an inside look into influencers' eating habits; however, Cara Harbstreet, MS, RD, LD, of Street Smart Nutrition, notes that the cost, time, and energy it takes to produce that day's worth of food is often left off-camera. Harbstreet states that the main issue is that influencers are saying, "If you eat like me, you can look like me." This contributes to an unhealthy obsession with healthy eating and to disordered eating behaviors.
Societal impact:
Food shortages TikTok food trends have also caused food shortages of ingredients highlighted in viral videos. For example, the baked feta cheese pasta trend resulted in feta cheese shortages. Saxelby Cheesemongers, a cheese seller based in Rhode Island, was affected by this shortage. Its warehouse in Brooklyn usually sells about two-thousand pounds of cheese per week to their regular customers in the city. However, after the video of feta cheese pasta was released, its distributor stated there was none in stock. Other cheese companies, such as Winnimere, reported similar impacts.
Societal impact:
Dangerous TikTok food trends According to food safety experts, there are some viral TikTok trends that should be avoided. According to Janilyn Hutchings, instructions given on TikTok for making grilled cheese sandwiches in a toaster risked causing kitchen fires, because toasters are not designed like panini presses.
Societal impact:
Increased popularity of youth cooking The easy-to-follow nature of TikTok food trends, recipes, and tutorial videos has led to an increase in youth interaction with the platform. TikTok has proved itself to be an accessible platform for teaching youth groups about basic cooking skills and nutrition. The short duration of TikTok videos requires more compressed and clear recipes, taking away from the complexity usually associated with cooking. With 92% of U.S. adolescents having access to the Internet on a daily basis, TikTok has become an extremely accessible source of practical information for them, including cooking skills. A popular trend for college students was folding a tortilla wrap into four triangular pieces and filling it with at-home ingredients such as vegetables and cold meats. Jeremy Scheck, an undergraduate at Cornell University, began to create TikTok content about his passion for food and his recipes when the COVID-19 pandemic began in early 2020. When school transitioned online, Scheck shared trending recipes such as crispy potatoes and fried rice that sparked interest among college students like himself. The popularity of TikTok recipes among college students may stem from their interest in staying away from fast food. These trends also help college students feel more comfortable in the kitchen.
Societal impact:
Marketing TikTok has turned into a marketing platform for many brands as cooking-related products gained popularity in the past year. Surveys have shown that using TikTok as a marketing tool has been a successful investment for restaurants and individual food items. The instant feedback allows each company to discover what factors affect the popularity of its products, through both reactions to and view counts of posts. For instance, Nutter Butter's TikTok has repeatedly dueted TikTok stars including Bella Poarch to reach a larger audience. Another example is when Dunkin’ Donuts launched a collaboration with TikTok star Charli D’Amelio to promote a new beverage and the corporation as a whole. From this, Dunkin’ cold brew sales rose 20% and 45%, respectively, on the first and second days after the launch. On top of that, the first collaboration video was followed by a 57% increase in Dunkin’ mobile app downloads relative to the preceding 90 days. It also benefits corporations by providing recommendations to improve their marketing strategy, food items, décor, or any other factor illustrated in the ad. Inputs that drive future marketing decisions on platforms like TikTok include the attractiveness of the items advertised, innovation regarding new and interesting products, and how easy it is for the consumer to purchase the product after seeing the post. Drivers such as these can eliminate possible negative connotations surrounding a product and encourage positive reinforcement, feedback, and action by the consumer. An example of a company that implements these strategies is Chipotle Mexican Grill. Among its social campaigns are TikTok challenges like #GuacDance and #Boorito, interactive content that follows current social media trends to stimulate an increase in revenue. In particular, #GuacDance became the largest “branded” challenge in the United States, with hundreds of thousands of user responses within the week-long event. This campaign resulted in more than 800,000 sides of guacamole being given out on July 31, 2019. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Event Horizon Telescope**
Event Horizon Telescope:
The Event Horizon Telescope (EHT) is a large telescope array consisting of a global network of radio telescopes. The EHT project combines data from several very-long-baseline interferometry (VLBI) stations around Earth, which form a combined array with an angular resolution sufficient to observe objects the size of a supermassive black hole's event horizon. The project's observational targets include the two black holes with the largest angular diameter as observed from Earth: the black hole at the center of the supergiant elliptical galaxy Messier 87 (M87*, pronounced "M87-Star"), and Sagittarius A* (Sgr A*, pronounced "Sagittarius A-Star") at the center of the Milky Way. The Event Horizon Telescope project is an international collaboration that was launched in 2009 after a long period of theoretical and technical developments. On the theory side, work on the photon orbit and first simulations of what a black hole would look like progressed to predictions of VLBI imaging for the Galactic Center black hole, Sgr A*. Technical advances in radio observing moved from the first detection of Sgr A*, through VLBI at progressively shorter wavelengths, ultimately leading to detection of horizon-scale structure in both Sgr A* and M87. The collaboration now comprises over 300 members and 60 institutions, working in over 20 countries and regions. The first image of a black hole, at the center of galaxy Messier 87, was published by the EHT Collaboration on April 10, 2019, in a series of six scientific publications. The array made this observation at a wavelength of 1.3 mm and with a theoretical diffraction-limited resolution of 25 microarcseconds. In March 2021, the Collaboration presented, for the first time, an image of the black hole in polarized light, which may help better reveal the forces giving rise to quasars. Future plans involve improving the array's resolution by adding new telescopes and by taking shorter-wavelength observations. On 12 May 2022, astronomers unveiled the first image of the supermassive black hole at the center of the Milky Way, Sagittarius A*.
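The quoted 25-microarcsecond figure can be sanity-checked with the standard interferometer diffraction limit, θ ≈ λ/D. The short Python sketch below is only a back-of-the-envelope check, not the EHT's actual calculation; the Earth-diameter baseline is an assumed round number.

```python
import math

WAVELENGTH_M = 1.3e-3        # 1.3 mm observing wavelength (about 230 GHz), as stated above
BASELINE_M = 1.2742e7        # assumed maximum baseline ~ Earth's diameter, in metres
RAD_TO_MICROARCSEC = 180 / math.pi * 3600 * 1e6

# Diffraction limit of an interferometer: theta ~ lambda / D (radians)
theta_rad = WAVELENGTH_M / BASELINE_M
print(f"~{theta_rad * RAD_TO_MICROARCSEC:.0f} microarcseconds")  # ~21 uas, the same order as the quoted 25 uas
```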
Telescope array:
The EHT is composed of many radio observatories or radio-telescope facilities around the world, working together to produce a high-sensitivity, high-angular-resolution telescope. Through the technique of very-long-baseline interferometry (VLBI), many independent radio antennas separated by hundreds or thousands of kilometres can act as a phased array, a virtual telescope which can be pointed electronically, with an effective aperture which is the diameter of the entire planet, substantially improving its angular resolution. The effort includes development and deployment of submillimeter dual polarization receivers, highly stable frequency standards to enable very-long-baseline interferometry at 230–450 GHz, higher-bandwidth VLBI backends and recorders, as well as commissioning of new submillimeter VLBI sites. Each year since its first data capture in 2006, the EHT array has moved to add more observatories to its global network of radio telescopes. The first image of the Milky Way's supermassive black hole, Sagittarius A*, was expected to be produced from data taken in April 2017, but because there are no flights in or out of the South Pole during austral winter (April to October), the full data set could not be processed until December 2017, when the shipment of data from the South Pole Telescope arrived. Data collected on hard drives are transported by commercial freight airplanes (a so-called sneakernet) from the various telescopes to the MIT Haystack Observatory and the Max Planck Institute for Radio Astronomy, where the data are cross-correlated and analyzed on a grid computer made from about 800 CPUs all connected through a 40 Gbit/s network. Because of the COVID-19 pandemic, weather patterns, and celestial mechanics, the 2020 observational campaign was postponed to March 2021.
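To see why drives are flown rather than streamed, a rough sketch of the transfer-time arithmetic follows; the ~5 PB per-campaign data volume is an assumed illustrative figure, not one stated here, and the 40 Gbit/s link mentioned above is the correlator's internal network rather than a long-haul connection from the telescope sites.

```python
# Rough comparison of shipping disks vs. streaming the data over a 40 Gbit/s link.
# The ~5 PB per-campaign volume is an assumption for illustration, not a figure from the text.
DATA_BYTES = 5e15                    # assumed ~5 petabytes of recorded VLBI data
LINK_BITS_PER_S = 40e9               # the 40 Gbit/s network mentioned above

transfer_days = DATA_BYTES * 8 / LINK_BITS_PER_S / 86400
print(f"~{transfer_days:.0f} days to move the data over the link alone")  # ~12 days at full, uninterrupted rate
# Remote sites (e.g. the South Pole) have far less long-haul bandwidth than this,
# which is why drives are flown to the correlators instead.
```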
Published images:
Messier 87* The Event Horizon Telescope Collaboration announced its first results in six simultaneous press conferences worldwide on April 10, 2019. The announcement featured the first direct image of a black hole, which showed the supermassive black hole at the center of Messier 87, designated M87*. The scientific results were presented in a series of six papers published in The Astrophysical Journal Letters. A clockwise-rotating black hole was observed in the 6σ region. The image provided a test for Albert Einstein's general theory of relativity under extreme conditions. Studies have previously tested general relativity by looking at the motions of stars and gas clouds near the edge of a black hole. However, an image of a black hole brings observations even closer to the event horizon. Relativity predicts a dark shadow-like region, caused by gravitational bending and capture of light, which matches the observed image. The published paper states: "Overall, the observed image is consistent with expectations for the shadow of a spinning Kerr black hole as predicted by general relativity." Paul T.P. Ho, EHT Board member, said: "Once we were sure we had imaged the shadow, we could compare our observations to extensive computer models that include the physics of warped space, superheated matter, and strong magnetic fields. Many of the features of the observed image match our theoretical understanding surprisingly well." The image also provided new measurements for the mass and diameter of M87*. EHT measured the black hole's mass to be 6.5±0.7 billion solar masses and measured the diameter of its event horizon to be approximately 40 billion kilometres (270 AU; 0.0013 pc; 0.0042 ly), roughly 2.5 times smaller than the shadow that it casts, seen at the center of the image. Previous observations of M87 showed that the large-scale jet is inclined at an angle of 17° relative to the observer's line of sight and oriented on the plane of the sky at a position angle of −72°. From the enhanced brightness of the southern part of the ring, due to relativistic beaming of emission from the approaching funnel-wall jet, EHT concluded that the black hole, which anchors the jet, spins clockwise as seen from Earth. EHT simulations allow for both prograde and retrograde inner disk rotation with respect to the black hole, while excluding zero black hole spin using a conservative minimum jet power of 10^42 erg/s via the Blandford–Znajek process. Producing an image from data from an array of radio telescopes requires much mathematical work. Four independent teams created images to assess the reliability of the results. These methods included both an established algorithm in radio astronomy for image reconstruction known as CLEAN, invented by Jan Högbom, as well as self-calibrating image processing methods for astronomy such as the CHIRP algorithm created by Katherine Bouman and others. The algorithms that were ultimately used were a regularized maximum likelihood (RML) algorithm and the CLEAN algorithm. In March 2020, astronomers proposed an improved way of seeing more of the rings in the first black hole image. In March 2021, a new photo was revealed, showing how the M87 black hole looks in polarised light. This is the first time astronomers have been able to measure polarisation so close to the edge of a black hole.
The lines on the photo mark the orientation of polarisation, which is related to the magnetic field around the shadow of the black hole. In August 2022, a team led by University of Waterloo researcher Avery Broderick released a "remaster[ed]" version of the original image generated from the data collected by the EHT. This image "resolve[d] a fundamental signature of gravity around a black hole" by showing a photon ring around M87*. The claim has subsequently been disputed. In 2023, the EHT released new, sharper images of the M87 black hole, reconstructed from the same 2017 data but created with the PRIMO algorithm.
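The mass and event-horizon diameter quoted above are consistent with the Schwarzschild radius formula r_s = 2GM/c²; a minimal sketch of that check (ignoring spin, which modifies the horizon size somewhat):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg

mass_kg = 6.5e9 * M_SUN                 # 6.5 billion solar masses, as measured by the EHT
r_s = 2 * G * mass_kg / C**2            # Schwarzschild radius of a non-spinning black hole
diameter_km = 2 * r_s / 1e3
print(f"event-horizon diameter ~ {diameter_km:.2e} km")   # ~4e10 km, i.e. roughly 40 billion km
```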
Published images:
3C 279 In April 2020, the EHT released the first 20-microarcsecond-resolution images of the archetypal blazar 3C 279, which it observed in April 2017. These images, generated from observations over four nights in April 2017, reveal bright components of a jet whose projection on the observer plane exhibits apparent superluminal motion at speeds up to 20 c. Such apparent superluminal motion from relativistic emitters, such as an approaching jet, is explained by emission originating closer to the observer (downstream along the jet) catching up with emission originating further from the observer (at the jet base) as the jet propagates close to the speed of light at small angles to the line of sight.
Published images:
Centaurus A In July 2021, high-resolution images of the jet produced by the supermassive black hole sitting at the center of Centaurus A were released. With a mass around 5.5×10^7 M☉, the black hole is not large enough for its photon sphere to be observed, as in the EHT images of M87*, but its jet extends even beyond its host galaxy while remaining a highly collimated beam, which is a point of study. Edge-brightening of the jet was also observed, which would exclude models of particle acceleration that are unable to reproduce this effect. The image was 16 times sharper than previous observations and utilized a 1.3 mm wavelength.
Published images:
Sagittarius A* On May 12, 2022, the EHT Collaboration revealed an image of Sagittarius A*, the supermassive black hole at the center of the Milky Way galaxy. The black hole is 27,000 light-years away from Earth; it is thousands of times smaller than M87*. Sera Markoff, Co-Chair of the EHT Science Council, said: "We have two completely different types of galaxies and two very different black hole masses, but close to the edge of these black holes they look amazingly similar. This tells us that General Relativity governs these objects up close, and any differences we see further away must be due to differences in the material that surrounds the black holes."
J1924-2914 In August 2022, the EHT, together with the Global Millimeter VLBI Array and the Very Long Baseline Array, imaged the distant blazar J1924-2914. Operating at 230 GHz, 86 GHz and 2.3+8.7 GHz, respectively, they obtained the highest angular resolution images of polarized emission from a quasar to date. The observations reveal a helically bent jet, and the polarization of its emission suggests a toroidal magnetic field structure. The object is used as a calibrator for Sagittarius A*, with which it shares strong optical variability and polarization.
Published images:
NRAO 530 In February 2023, the EHT reported on observations of the quasar NRAO 530. NRAO 530 (1730−130, J1733−1304) is a flat-spectrum radio quasar (FSRQ) that belongs to the class of bright γ-ray blazars and shows significant variability across the entire electromagnetic spectrum. The source was monitored by the University of Michigan Radio Observatory at 4.8, 8.4, and 14.5 GHz for several decades until 2012. The quasar underwent a dramatic radio outburst in 1997, during which its flux density at 14.5 GHz exceeded 10 Jy, while the average value is ~2 Jy. Since 2002, NRAO 530 has been monitored by the Submillimeter Array (SMA; Maunakea, Hawaii) at 1.3 mm and 870 μm. NRAO 530 has a redshift of z = 0.902 (Junkkarinen 1984), for which 100 μas corresponds to a linear distance of 0.803 pc. The source contains a supermassive black hole, the mass of which is currently uncertain, with estimates ranging from 3×10^8 M☉ to 2×10^9 M☉. It was observed with the Event Horizon Telescope on 2017 April 5−7, when NRAO 530 was used as a calibrator for the EHT observations of Sagittarius A*. The observations were performed with the full EHT 2017 array of eight telescopes located at six geographical sites. At z = 0.902, this is the most distant object imaged by the EHT so far. The team reconstructed the first images of the source at 230 GHz, at an angular resolution of ~20 μas, both in total intensity and in linear polarization (LP). Source variability was not detected, which allowed the whole data set to be represented with static images. The images reveal a bright feature located on the southern end of the jet, which was associated with the core. The feature is linearly polarized, with a fractional polarization of ~5%–8%, and it has a substructure consisting of two components. Their observed brightness temperature suggests that the energy density of the jet is dominated by the magnetic field. The jet extends over 60 μas along a position angle of ~ −28°. It includes two features with orthogonal directions of polarization (electric vector position angle), parallel and perpendicular to the jet axis, consistent with a helical structure of the magnetic field in the jet. The outermost feature has a particularly high degree of LP, suggestive of a nearly uniform magnetic field.
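The quoted angular-to-linear scale (100 μas ≈ 0.803 pc at z = 0.902) can be approximated with a standard cosmology. The sketch below assumes astropy's built-in Planck18 parameters, which may differ slightly from the cosmology adopted in the EHT paper.

```python
# Sketch reproducing the "100 uas ~ 0.8 pc" scale quoted for NRAO 530 (z = 0.902).
# Assumes astropy >= 4.2 for the built-in Planck18 cosmology.
from astropy.cosmology import Planck18
import astropy.units as u

z = 0.902
scale = Planck18.kpc_proper_per_arcmin(z).to(u.pc / u.mas).value   # parsecs per milliarcsecond
print(f"100 uas corresponds to ~{0.1 * scale:.2f} pc")              # ~0.8 pc, close to the quoted 0.803 pc
```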
Collaborating institutes:
The EHT Collaboration consists of 13 stakeholder institutes: the Academia Sinica Institute of Astronomy and Astrophysics, the University of Arizona, the University of Chicago, the East Asian Observatory, Goethe University Frankfurt, the Smithsonian Astrophysical Observatory (part of the Center for Astrophysics | Harvard & Smithsonian), the Institut de radioastronomie millimétrique (IRAM, itself a collaboration between the French CNRS, the German Max Planck Society, and the Spanish Instituto Geográfico Nacional), the Large Millimeter Telescope Alfonso Serrano, the Max Planck Institute for Radio Astronomy, the MIT Haystack Observatory, the National Astronomical Observatory of Japan, the Perimeter Institute for Theoretical Physics, and Radboud University | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**RMON**
RMON:
The Remote Network Monitoring (RMON) MIB was developed by the IETF to support monitoring and protocol analysis of local area networks (LANs). The original version (sometimes referred to as RMON1) focused on OSI layer 1 and layer 2 information in Ethernet and Token Ring networks. It has been extended by RMON2 which adds support for Network- and Application-layer monitoring and by SMON which adds support for switched networks. It is an industry-standard specification that provides much of the functionality offered by proprietary network analyzers. RMON agents are built into many high-end switches and routers.
Overview:
Remote Monitoring (RMON) is a standard monitoring specification that enables various network monitors and console systems to exchange network-monitoring data. RMON provides network administrators with more freedom in selecting network-monitoring probes and consoles with features that meet their particular networking needs.
Overview:
An RMON implementation typically operates in a client/server model. Monitoring devices (commonly called "probes" in this context) contain RMON software agents that collect information and analyze packets. These probes act as servers and the Network Management applications that communicate with them act as clients. While both agent configuration and data collection use SNMP, RMON is designed to operate differently than other SNMP-based systems: Probes have more responsibility for data collection and processing, which reduces SNMP traffic and the processing load of the clients.
Overview:
Information is only transmitted to the management application when required, instead of through continuous polling and monitoring. In short, RMON is designed for "flow-based" monitoring, while SNMP is often used for "device-based" management. RMON is similar to other flow-based monitoring technologies such as NetFlow and sFlow because the data collected deal mainly with traffic patterns rather than the status of individual devices. One disadvantage of this system is that remote devices shoulder more of the management burden, and require more resources to do so. Some devices balance this trade-off by implementing only a subset of the RMON MIB groups (see below). A minimal RMON agent implementation could support only statistics, history, alarm, and event.
Overview:
The RMON1 MIB consists of ten groups:
- Statistics: real-time LAN statistics, e.g. utilization, collisions, CRC errors
- History: history of selected statistics
- Alarm: definitions for RMON SNMP traps to be sent when statistics exceed defined thresholds
- Hosts: host-specific LAN statistics, e.g. bytes sent/received, frames sent/received
- Hosts top N: record of N most active connections over a given time period
- Matrix: the sent-received traffic matrix between systems
- Filter: defines packet data patterns of interest, e.g. MAC address or TCP port
- Capture: collect and forward packets matching the Filter
- Event: send alerts (SNMP traps) for the Alarm group
- Token Ring: extensions specific to Token Ring
The RMON2 MIB adds ten more groups:
- Protocol Directory: list of protocols the probe can monitor
- Protocol Distribution: traffic statistics for each protocol
- Address Map: maps network-layer (IP) to MAC-layer addresses
- Network-Layer Host: layer 3 traffic statistics, per host
- Network-Layer Matrix: layer 3 traffic statistics, per source/destination pairs of hosts
- Application-Layer Host: traffic statistics by application protocol, per host
- Application-Layer Matrix: traffic statistics by application protocol, per source/destination pairs of hosts
- User History: periodic samples of user-specified variables
- Probe Configuration: remote configuration of probes
- RMON Conformance: requirements for RMON2 MIB conformance
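As an illustration of how a management application might read the Statistics group from a probe, here is a minimal Python sketch using pysnmp to walk the etherStatsPkts column (OID 1.3.6.1.2.1.16.1.1.1.5 under mib-2.rmon). The probe address and community string are placeholders, and error handling is kept to a minimum; treat the OID choice as an assumption to verify against RFC 2819 for your agent.

```python
# Minimal RMON1 Statistics-group walk with pysnmp (SNMPv2c), assuming a reachable RMON probe.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, nextCmd,
)

# etherStatsPkts counts packets observed on the monitored segment (mib-2.rmon.statistics).
ETHER_STATS_PKTS = '1.3.6.1.2.1.16.1.1.1.5'

def walk_ether_stats(host='192.0.2.10', community='public'):  # hypothetical probe address
    for error_indication, error_status, _, var_binds in nextCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),       # SNMPv2c
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(ETHER_STATS_PKTS)),
            lexicographicMode=False):                  # stop at the end of the subtree
        if error_indication or error_status:
            break
        for oid, value in var_binds:
            print(f'{oid} = {value}')

if __name__ == '__main__':
    walk_ether_stats()
```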
Important RFCs:
RMON1: RFC 2819 - Remote Network Monitoring Management Information Base
RMON2: RFC 4502 - Remote Network Monitoring Management Information Base Version 2 using SMIv2
HCRMON: RFC 3273 - Remote Network Monitoring Management Information Base for High Capacity Networks
SMON: RFC 2613 - Remote Network Monitoring MIB Extensions for Switched Networks
Overview: RFC 3577 - Introduction to the RMON Family of MIB Modules | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Granular applicator**
Granular applicator:
A granular applicator is a machine that applies granular fertiliser, pesticides such as slug pellets or Avadex, or insecticides. Granular applicators are used for precision application of solids to improve crop yields and quality. Application rates are often controlled electronically to improve accuracy.
Granular applicator manufacturers:
UK: Lite-Trac, Horstine, Opico. America: Sutton Agricultural Enterprises Inc, Gandy. Canada: Valmar | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GlassFish Shoal**
GlassFish Shoal:
Project Shoal is a Java-based scalable dynamic clustering framework that provides infrastructure to build fault tolerance, reliability and availability, and that can be plugged into the GlassFish Application Server.
GlassFish Shoal:
The framework can be plugged into any product needing clustering and related distributed systems capabilities without tightly binding to a specific communications infrastructure. The framework can be plugged in as an in-process component. The framework has two broad categories of public APIs, namely a Client API and a Group Communication Provider API. Some of the Shoal capabilities are: Shoal Group Event Notifications, Distributed State Cache, Shoal Automated Delegated Recovery Initiation, and Shoal Messaging | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nikon F 80-200mm lens**
Nikon F 80-200mm lens:
Nikon F 80-200mm lens refers to a series of lenses made by the Japanese manufacturer Nikon for its camera systems.
Overview:
Nikon has manufactured 9 different zoom lenses with a focal-length range of 80 to 200 mm for its F-mount 35mm film cameras and its full-frame DSLR lineup:
- f/4.5 MK-I (discontinued)
- f/4.5 MK-II (discontinued)
- f/4.0 AI-S (discontinued)
- f/2.8 ED AI-S (discontinued)
- f/2.8D ED AF (discontinued)
- f/2.8D ED AF II (discontinued)
- f/2.8D ED AF III (discontinued)
- f/4.5-5.6D AF (discontinued)
- f/2.8D IF-ED AF-S (discontinued)
All models are out of production, including the latest "AF-S 80-200mm f/2.8D IF-ED". Instead, Nikon has released new lenses in this focal range, such as the AF-S VR 70-200mm f/2.8G lens in 2003. Here, IF stands for internal focusing, VR stands for Nikon's vibration-reduction system, and the letter G after the f-number indicates the absence of an aperture ring on the lens (all G lenses are D lenses). To sum up, the newer 70-200mm Nikon F-mount lenses are not directly comparable to their older sister variants.
Overview:
Generally, most Nikon F-mount 80-200mm lenses have a larger maximum aperture than the sister-range Nikon F 70-210mm lenses. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Timeline of progressive rock**
Timeline of progressive rock:
This is an introductory page to timelines of artists, albums, and events in progressive rock and its subgenres. While this page shows the formation of significant bands in the genre, the detailed timeline is presented in separate articles for each decade.
Timeline by decade:
Click on the header for each decade to see the detailed timeline.
1960s: newly formed bands; 1970s: newly formed bands; 1980s: newly formed bands; 1990s: newly formed bands; 2000s: newly formed bands; 2010s: newly formed bands | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Phase-comparison monopulse**
Phase-comparison monopulse:
Phase-comparison monopulse is a technique used in radio frequency (RF) applications such as radar and direction finding to accurately estimate the direction of arrival of a signal from the phase difference of the signal measured on two (or more) separated antennas, or more typically from displaced phase centers of an array antenna. Phase-comparison monopulse differs from amplitude-comparison monopulse in that the former uses displaced phase centers with a common beam pointing direction, while the latter uses a common phase center and displaced beam pointing directions. In phase-comparison monopulse, typically an array is subdivided into sub-arrays, and then a "sum" and a "difference" or "del" channel are formed. For a linear array, these subarrays would each be half of the elements, divided in the middle. For a planar array, these sub-arrays would be the four quadrants of the array, each with 1/4 of the array's elements. In a linear array, the output of each sub-array is summed to form the "sum" channel, and the same outputs are subtracted to form the "del" channel. The monopulse ratio is formed by dividing the imaginary part of the del channel by the real part of the sum channel. This ratio gives an error signal that indicates to a high degree of accuracy the actual target angle as compared to the center of the beam. For a planar array, one sum channel is formed as the sum of the outputs of all four quadrants, but two del channels are formed, one for the elevation dimension and one for the orthogonal azimuth dimension. Two monopulse ratios are formed just as with a linear array, each one indicating the deviation angle in one dimension from the center of the beam. There are some common misconceptions about phase-comparison monopulse. First, only one beam is formed. Monopulse processing is done entirely with the received signal in the array manifold and beam forming network. Speaking in terms of only one dimension for clarity, such as with a linear array, the signal is received by the array and summed into each of two subarrays with displaced phase centers. The sum channel is formed simply by adding these two subarray outputs, and the result is exactly the same as if the entire array was initially summed in one step. The del channel is formed simply by subtracting these same subarray outputs. Second, phase-comparison monopulse does not technically perform a phase comparison, but rather simply divides the del channel by the sum channel to arrive at a ratio wherein the angle information is encoded. The following mathematical derivation should make it clear why this is so.
Mathematics:
Sum Pattern We can define the beam pattern (array factor) of a uniform linear array (ULA) with $N$ elements as

$$B_\theta(\theta) = \vec{w}^{H}\vec{v}_\theta(\theta) = \sum_{n=0}^{N-1} w_n^{*}\,[\vec{v}_\theta(\theta)]_n = \sum_{n=0}^{N-1} w_n^{*}\, e^{\,j\left(n-\frac{N-1}{2}\right)\frac{2\pi}{\lambda} d \cos\theta} ,$$

where $\vec{v}_\theta$ is the array manifold vector and $\vec{w}$ is a vector of complex weights representing amplitude and phase adjustments applied to each antenna element. The manifold vector, $\vec{v}_\theta$, fully encapsulates all of the spatial properties of the array. $d$ is the distance between elements of the array, and $\theta$ is the angle of arrival of an incident plane wave, defined from end-fire, i.e., $\theta = 90^{\circ}$ is a signal from array broadside. It is common to perform a variable substitution to $\psi$-space, where $\psi = \frac{2\pi}{\lambda} d\cos\theta$, and therefore we have

$$B_\psi(\psi) = \sum_{n=0}^{N-1} w_n^{*}\, e^{\,j\left(n-\frac{N-1}{2}\right)\psi}$$

and we can more easily see that $\psi$ is simply the phase shift between adjacent elements. The $\frac{N-1}{2}$ term simply references the absolute phase to the physical center of the array.
Mathematics:
Notice that this result is the same if we instead first sum each half of the array, then add those results together.
Mathematics:
$$B_\psi(\psi) = \sum_{n=0}^{\frac{N}{2}-1} w_n^{*}\, e^{\,j\left(n-\frac{N-1}{2}\right)\psi} + \sum_{n=\frac{N}{2}}^{N-1} w_n^{*}\, e^{\,j\left(n-\frac{N-1}{2}\right)\psi}$$

The weight vector is a combination of a steering vector, which steers the beam in a steered direction $\psi_S$ using phase adjustments, and an amplitude taper that is often applied to reduce sidelobes. Thus, $[\vec{w}]_n = a_n\, e^{\,j\left(n-\frac{N-1}{2}\right)\psi_S}$, and

$$B_\psi(\psi_\Delta) = e^{\,j\frac{N-1}{2}\psi_\Delta}\sum_{n=0}^{N-1} a_n\, e^{-jn\psi_\Delta} , \qquad \text{where } \psi_\Delta = \psi_S - \psi .$$

We can clearly see now that the beam pattern, in $\psi$-space, is the spatial equivalent of the discrete-time Fourier transform (DTFT) of the array amplitude tapering vector times a linear phase term. The advantage of $\psi$-space is that the beam shape is identical no matter where it is steered, and is only a function of the deviation of the desired target phase from the actual target phase.
Mathematics:
Let us now assume an un-tapered, normalized array with $a_n = \frac{1}{N}$. The beam pattern can be easily shown to be the familiar aliased sinc (asinc) function:

$$B_\psi(\psi_\Delta) = \frac{1}{N}\,\frac{\sin\!\left(\frac{N\psi_\Delta}{2}\right)}{\sin\!\left(\frac{\psi_\Delta}{2}\right)}$$

This pattern is also known, for monopulse purposes, as the "sum" pattern, as it was obtained by summing all of the elements together. Going forward we will suppress the $\Delta$ subscript and instead use only $\psi$, with the understanding that it represents the deviation between the steered target phase and the actual target phase.
Mathematics:
Difference Pattern Let us now develop the monopulse "difference" or "del" pattern by dividing the array into two equal halves called subarrays. We could have just as easily derived the sum pattern by first determining the pattern of each subarray individually and adding these two results together. In monopulse practice, this is what is actually done. The reader is left to show that $\vec{v}_\psi(\psi)$ is conjugate symmetric, so it can be re-written in terms of only its first half, $\vec{v}_{\psi 1}(\psi)$, using an exchange matrix $J$ (the reversal matrix, with ones on the anti-diagonal and zeros elsewhere) that "flips" this vector:

$$J = \begin{bmatrix} 0 & \cdots & 0 & 1 \\ 0 & \cdots & 1 & 0 \\ \vdots & & & \vdots \\ 1 & 0 & \cdots & 0 \end{bmatrix}$$

Note that $J\cdot J = I$. Assuming that $N$ is even (we could just as easily develop this using an odd $N$),

$$\vec{v}_\psi(\psi) = \begin{bmatrix} \vec{v}_{\psi 1}(\psi) \\ J\,\vec{v}_{\psi 1}^{\,*}(\psi) \end{bmatrix}$$

If we assume that the weight vector is also conjugate symmetric (a good assumption), then $\vec{w} = \begin{bmatrix} \vec{w}_1 \\ J\,\vec{w}_1^{\,*} \end{bmatrix}$ and the sum beam pattern can be rewritten as:

$$B_\psi(\psi) = \Sigma_\psi(\psi) = \vec{w}^{H}\vec{v}_\psi(\psi) = \begin{bmatrix} \vec{w}_1^{H} & \vec{w}_1^{T} J \end{bmatrix} \begin{bmatrix} \vec{v}_{\psi 1}(\psi) \\ J\,\vec{v}_{\psi 1}^{\,*}(\psi) \end{bmatrix} = \vec{w}_1^{H}\vec{v}_{\psi 1}(\psi) + \vec{w}_1^{T}\vec{v}_{\psi 1}^{\,*}(\psi) = 2\,\operatorname{Re}\!\left[\vec{w}_1^{H}\vec{v}_{\psi 1}(\psi)\right]$$

The difference or "del" pattern can easily be inferred from the sum pattern simply by flipping the sign of the weights for the second half of the array:

$$\Delta_\psi(\psi) = \begin{bmatrix} \vec{w}_1^{H} & -\vec{w}_1^{T} J \end{bmatrix} \begin{bmatrix} \vec{v}_{\psi 1}(\psi) \\ J\,\vec{v}_{\psi 1}^{\,*}(\psi) \end{bmatrix} = \vec{w}_1^{H}\vec{v}_{\psi 1}(\psi) - \vec{w}_1^{T}\vec{v}_{\psi 1}^{\,*}(\psi) = 2\,\operatorname{Im}\!\left[\vec{w}_1^{H}\vec{v}_{\psi 1}(\psi)\right]$$

(the factor of $j$ is dropped here, since only the imaginary part of the del channel is used, as noted above). Again assuming that $a_n = \frac{1}{N}$, the del pattern can be shown to reduce to:

$$\Delta_\psi(\psi) = \frac{2}{N}\operatorname{Im}\!\left[\sum_{n=0}^{\frac{N}{2}-1} e^{-j\left(n-\frac{N-1}{2}\right)\psi}\right] = \frac{2}{N}\,\frac{\sin^{2}\!\left(\frac{N\psi}{4}\right)}{\sin\!\left(\frac{\psi}{2}\right)}$$

Monopulse Ratio The monopulse ratio is formed as:

$$\frac{\Delta_\psi}{\Sigma_\psi} = \frac{\dfrac{2}{N}\,\dfrac{\sin^{2}\!\left(\frac{N\psi}{4}\right)}{\sin\!\left(\frac{\psi}{2}\right)}}{\dfrac{1}{N}\,\dfrac{\sin\!\left(\frac{N\psi}{2}\right)}{\sin\!\left(\frac{\psi}{2}\right)}} = \frac{2\sin^{2}\!\left(\frac{N\psi}{4}\right)}{\sin\!\left(\frac{N\psi}{2}\right)} = \frac{1-\cos\!\left(\frac{N\psi}{2}\right)}{\sin\!\left(\frac{N\psi}{2}\right)} = \tan\!\left(\frac{N\psi}{4}\right)$$

One can see that, within the 3 dB beamwidth of the system, the monopulse ratio is almost linear. In fact, for many systems a linear approximation is good enough. One can also note that the monopulse ratio is continuous within the null-to-null beamwidth, but has asymptotes that occur at the beam nulls. Therefore, the monopulse ratio is only accurate for measuring the deviation angle of a target within the main lobe of the system. Targets detected in the sidelobes of the system, if not mitigated, will produce erroneous results regardless.
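A small numerical sketch of the result above: for an un-tapered N-element ULA it forms the sum and del outputs directly from the element phases and confirms that Im(Δ)/Re(Σ) follows tan(Nψ/4) inside the main lobe. The element count and deviation values are arbitrary choices for illustration.

```python
import numpy as np

N = 16                                    # number of elements (arbitrary choice)
n = np.arange(N)
psi = np.linspace(-0.2, 0.2, 5)           # phase deviations well inside the main lobe (radians)

# Un-tapered, normalized element outputs for each deviation psi, phase-referenced to the array center
v = np.exp(-1j * np.outer(psi, n - (N - 1) / 2)) / N

left, right = v[:, :N // 2].sum(axis=1), v[:, N // 2:].sum(axis=1)
sum_ch = left + right                     # "sum" channel
del_ch = left - right                     # "del" channel

ratio = del_ch.imag / sum_ch.real         # monopulse ratio as defined above
print(np.allclose(ratio, np.tan(N * psi / 4)))   # True: matches tan(N*psi/4)
```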
Concept of Operations:
Before performing monopulse processing, a system must first detect a target, which it does as normal using the sum channel. All of the typical measurements that a non-monopulse system make are done using the sum channel, e.g., range, Doppler, and angle. However, the angle measurement is limited in that the target could be anywhere within the beam width of the sum beam, and therefore the system can only assume that the beam pointing direction is the same as the actual target angle. In reality, of course, the actual target angle and the beam steered angle will differ.
Concept of Operations:
Therefore, a monopulse processor functions by first detecting and measuring the target signal on the sum channel. Then, only as necessary for detected targets, it measures the same signal on the "del" channel, dividing the imaginary part of this result by the real part of the "sum" channel, then converting this ratio to a deviation angle using the relationships:

$$\psi_\Delta = \psi_S - \psi = \frac{4}{N}\arctan\!\left(\frac{\Delta_\psi}{\Sigma_\psi}\right)$$

and

$$\theta = \arccos\!\left(\frac{(\psi_S - \psi_\Delta)\,\lambda}{2\pi d}\right) = \arccos\!\left(\frac{\lambda}{2\pi d}\left(\frac{2\pi}{\lambda} d\cos\theta_S - \frac{4}{N}\arctan\!\left(\frac{\Delta_\psi}{\Sigma_\psi}\right)\right)\right) = \arccos\!\left(\cos\theta_S - \frac{2\lambda}{N\pi d}\arctan\!\left(\frac{\Delta_\psi}{\Sigma_\psi}\right)\right)$$

This deviation angle, which can be positive or negative, is added to the beam pointing angle to arrive at the more accurate estimate of the actual target bearing angle. Of course, if the array is 2-dimensional, such as a planar array, there are two del channels, one for elevation and one for azimuth, and therefore two monopulse ratios are formed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Armando J. L. Pombeiro**
Armando J. L. Pombeiro:
Armando José Latourrette de Oliveira Pombeiro is a Portuguese chemical engineer.
Armando J. L. Pombeiro:
He was born in 1949 in Porto, Portugal. His education includes Chemical Engineering (1971, Instituto Superior Técnico (IST), Technical Univ. Lisbon) and a D. Phil. (1976, University of Sussex, England; supervisors: Prof. J. Chatt and Dr. R.L. Richards). He is currently a Full Professor (IST, since 1989) and Coordinator/Founder of the research group on “Coordination Chemistry and Molecular Electrochemistry, Synthesis and Catalysis”. Pombeiro is a Full Member of the Academy of Sciences of Lisbon (since 1988) and a Member of the International Society of Electrochemistry, and was Chairman of the XXV Int. Conf. Organometallic Chemistry (XXV ICOMC, 2012). As a published author, his works are widely held in libraries worldwide.
Academic work:
Since 1971, Pombeiro has been working at the Instituto Superior Técnico (IST), Technical Univ. Lisbon, where he is currently a Full Professor. His research activities concern: Activation of small molecules: activation of small molecules with biological, pharmacological, environmental or industrial interest or related ones [e.g., alkanes (functionalization under mild conditions), alkynes, phosphaalkynes, isocyanides, carbon monoxide, dinitrogen, nitriles, cyanamides, nitric oxide, oximes, oxadiazolines, carboxamides, amidines, olefins, azides or cyanates] by transition metal centres, and developing their application in metal-mediated synthesis and catalysis, namely by searching for mimetic systems of biological processes (e.g. those catalysed by peroxidases, particulate methane monooxygenase, nitrile hydratases and nitrogenases), alternatives for industrial processes and new types of molecular activation with significance in fine chemistry (including the synthesis of compounds with bioactivity). Thus, he developed the carboxylation of saturated hydrocarbons with carbon monoxide and the persulfate anion catalyzed by various metal compounds (the Sen–Fujiwara–Pombeiro reaction).
Academic work:
Crystal engineering of coordination compounds Crystal engineering of coordination compounds, self-assembly of polynuclear and supramolecular structures, transition metal and organometallic chemistries and catalysis in aqueous media, high pressure gas reactions.
Molecular electrochemistry of coordination and organic compounds Molecular electrochemistry of coordination and organic compounds, namely towards applications in electrosynthesis, electrocatalysis and in mechanistic studies, as well as in the establishment of potential-structure relationships, and in the induction of chemical reactivity by electron-transfer.
Publications A. J. L. Pombeiro has published about 500 papers in chemical journals. He is the editor and author (coauthor) of monographs and chapters.
Academic work:
Pombeiro is a Member of the Editorial Advisory Board of ACS Catalysis (since 2011, the year of its foundation), Inorganic Chemistry Communications (since 2003), Trends in Inorganic Chemistry (since 2008), Letters in Organic Chemistry (2008–10), Portugaliae Electrochimica Acta (since 1998), the Journal of the Chinese Institute of Engineers (since 2011) and Catalysts (since 2010). Selected prizes: Madinabeitia (International Hispano-Portuguese prize), Royal Spanish Chemical Society, 2013; J. Heyrovský Centennial Medal ("J. Heyrovský Centennial Congress on Polarography", Prague, 1990); Rotary Club (Oporto) scholar prize, 1965-66. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Solar eclipse of April 21, 2069**
Solar eclipse of April 21, 2069:
A partial solar eclipse will occur on April 21, 2069. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth.
Related eclipses:
Solar eclipses 2069–2072 This eclipse is a member of a semester series. An eclipse in a semester series of solar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit.
Related eclipses:
Saros 120 This eclipse is a part of Saros cycle 120, repeating every 18 years, 11 days, and containing 71 events. The series started with a partial solar eclipse on May 27, 933 AD, and reached an annular eclipse on August 11, 1059. It was a hybrid event for three dates, from May 8, 1510, through May 29, 1546, and a total eclipse from June 8, 1564, through March 30, 2033. The series ends at member 71 as a partial eclipse on July 7, 2195. The longest duration of totality was 2 minutes, 50 seconds on March 9, 1997. All eclipses in this series occur at the Moon’s descending node. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**N-Phenylacetyl-L-prolylglycine ethyl ester**
N-Phenylacetyl-L-prolylglycine ethyl ester:
N-Phenylacetyl-l-prolylglycine ethyl ester is promoted as a nootropic and is a prodrug of cyclic glycine-proline. Other names include the brand name Noopept (Russian: Ноопепт), the developmental code GVS-111, and the proposed INN omberacetam. Its synthesis was first reported in 1996. It is orally available; as of 2017 its metabolism and elimination half-life were not well understood, and cycloprolylglycine has not been measured in humans following administration. In cell culture, cycloprolylglycine increases brain-derived neurotrophic factor (BDNF). It has been evaluated for neuroprotective effects in treating brain injuries and stroke.
Pharmacology:
One oft-cited study (originally published in Russian), conducted on rats, suggests that Noopept works via its "antioxidant effect, the anti-inflammatory action, and the ability to inhibit the neurotoxicity of excess calcium and glutamate, and to improve the blood rheology". Some studies suggest that the pharmacological properties of Noopept are derived from its action as an activator of Hypoxia-inducible factor (HIF-1).
Most of the effects of Noopept could be explained by its action as an activator of HIF-1.
Dosage:
Noopept is frequently dosed at 10–30 mg per day. However, there is no solid evidence indicating that any dose of Noopept is optimal. Few human trials have ever been carried out on Noopept, and as one meta-analysis notes, animal studies have used doses ranging from 0.1 mg/kg bodyweight to 10 mg/kg bodyweight. Furthermore, no long-term studies have been done to evaluate the lasting effects of chronic use at any given dose; the longest human study lasted for 56 days. There is, therefore, no dose of Noopept which may be called "safe".
Legal status:
Hungary: As of 25 August 2020, Noopept is added to the controlled psychoactive substances list, prohibiting production, sale, import, storage and use.
Russia: Noopept in Russia is a drug of medicine and is available without a prescription.
United Kingdom: Contrary to popular belief, omberacetam is not illegal to produce, supply, or import under the Psychoactive Substances Act in the UK, which came into effect on May 26, 2016, because it works neither as a CNS (central nervous system) depressant nor as a CNS stimulant. However, sale and supply for human consumption are prohibited.
Legal status:
United States: The Food and Drug Administration has issued import alerts for imports of omberacetam, considering it an analog of piracetam. The FDA considers such racetam-family substances Active Pharmaceutical Ingredients (APIs) that require new drug applications and adequate labelling before being imported. Similarly, warnings have been issued for claims of medical and pharmacological effects. Despite these FDA enforcement actions, omberacetam is sold in over-the-counter supplements in the US, with some products formulated with dosages greater than pharmaceutical levels. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DEC Technical Character Set**
DEC Technical Character Set:
DEC Technical (TCS) is a 7-bit character set developed by Digital Equipment Corporation.
Character set:
Characters 31 to 37 are intended to assemble a 3×5 uppercase sigma and do not have Unicode equivalents. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Intermediate luminosity optical transient**
Intermediate luminosity optical transient:
An Intermediate Luminosity Optical Transient (ILOT) is an astronomical object which undergoes an optically detectable explosive event with an absolute magnitude (M) brighter than a classical nova (M ~ -8) but fainter than that of a supernova (M ~ -17). That nine-magnitude range corresponds to a factor of nearly 4000 in luminosity, so the ILOT class may include a wide variety of objects. The term ILOT first appeared in a 2009 paper discussing the nova-like event NGC 300 OT2008-1. As the term has gained more widespread use, it has begun to be applied to some objects like KjPn 8 and CK Vulpeculae for which no transient event has been observed, but which may have been dramatically affected by an ILOT event in the past. The number of ILOTs known is expected to increase substantially when the Vera C. Rubin Observatory becomes operational.
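The "factor of nearly 4000" follows directly from the magnitude scale, where a difference of Δm magnitudes corresponds to a luminosity ratio of 10^(0.4Δm); a one-line check:

```python
delta_m = (-8) - (-17)            # magnitude gap between a classical nova and a supernova
print(10 ** (0.4 * delta_m))      # ~3981, i.e. "nearly 4000" in luminosity
```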
Intermediate luminosity optical transient:
A very wide variety of objects have been classified as ILOTs in the astronomical literature. Kashi and Soker proposed a model for the outburst of ASASSN-15qi, in which a Jupiter-mass planet is tidally destroyed and accreted onto a young main sequence star. Red novae, believed to be caused by the merger of two stars, are classified as ILOTs. Some luminous blue variables, such as η Car, have been classified as ILOTs. Some objects which have been classified as failed supernovae may be ILOTs. The common thread tying all of these objects together is a transfer of a large amount of mass (0.001 M⊙ to a few M⊙) from a planet or star to a companion star, over a short period of time, leading to a massive eruption. That large range in accretion mass explains the large range in ILOT event brightness. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Borocarbonitrides**
Borocarbonitrides:
Borocarbonitrides are two-dimensional compounds that contain boron, nitrogen, and carbon atoms in a ratio BxCyNz. Borocarbonitrides are distinct from B,N co-doped graphene in that the former contains separate boron nitride and graphene domains as well as rings with B-C, B-N, C-N, and C-C bonds. These compounds generally have a high surface area, but borocarbonitrides synthesized from a high surface area carbon material, urea, and boric acid tend to have the highest surface areas. This high surface area coupled with the presence of Stone-Wales defects in the structure of borocarbonitrides also allows for high absorption of CO2 and CH4, which may make borocarbonitride compounds a useful material in sequestering these gases.
Electrical:
The band gap of borocarbonitrides ranges from 1.0 to 3.9 eV and depends on the relative content of the carbon and boron nitride domains, as they have different electrical properties. Borocarbonitrides with a high carbon content have lower band gaps, whereas those with a higher content of boron nitride domains have higher band gaps. Borocarbonitrides synthesized in gas or solid reactions also tend to have large band gaps and are more insulating in character. The wide range of composition of borocarbonitrides allows for tuning of the band gap, which, coupled with their high surface area and Stone-Wales defects, may make borocarbonitrides a promising material in electrical devices.
Synthesis:
Solid state reaction A high surface area carbon material (such as activated charcoal), boric acid, and urea are mixed together and then heated at high temperatures to synthesize borocarbonitride. The composition of the resulting compounds may be changed by varying the concentration of the reagents as well as the temperature.
Gas phase synthesis In chemical vapor deposition, boron, nitrogen, and carbon precursors react at high heat and are deposited onto a metal substrate. Varying the concentration of precursors and the selection of certain precursors will give different ratios of boron, nitrogen, and carbon in the resulting borocarbonitride compound.
Synthesis:
Borocarbonitride composites Borocarbonitride can also be synthesized by random stacking of boron nitride and graphene domains, through covalent interactions or through liquid-phase interactions. In the first method, graphene and boron nitride sheets are functionalized and then reacted to form layers of borocarbonitride. In the second method, boron nitride and graphite powder are dissolved in isopropanol and dimethylformamide, respectively, and then sonicated. This is then exfoliated to isolate borocarbonitride layers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Neuropathy, ataxia, and retinitis pigmentosa**
Neuropathy, ataxia, and retinitis pigmentosa:
Neuropathy, ataxia, and retinitis pigmentosa, also known as NARP syndrome, is a rare disease with mitochondrial inheritance that causes a variety of signs and symptoms chiefly affecting the nervous system. Beginning in childhood or early adulthood, most people with NARP experience numbness, tingling, or pain in the arms and legs (sensory neuropathy); muscle weakness; and problems with balance and coordination (ataxia). Many affected individuals also have vision loss caused by changes in the light-sensitive tissue that lines the back of the eye (the retina). In some cases, the vision loss results from a condition called retinitis pigmentosa. This eye disease causes the light-sensing cells of the retina gradually to deteriorate.
Presentation:
Learning disabilities and developmental delays are often seen in children with NARP, and older individuals with this condition may experience a loss of intellectual function (dementia). Other features of NARP include seizures, hearing loss, and abnormalities of the electrical signals that control the heartbeat (cardiac conduction defects). These signs and symptoms vary among affected individuals.
Genetics:
Neuropathy, ataxia, and retinitis pigmentosa is a condition related to changes in mitochondrial DNA. Mutations in the MT-ATP6 gene cause neuropathy, ataxia, and retinitis pigmentosa. The MT-ATP6 gene provides instructions for making a protein that is essential for normal mitochondrial function. Through a series of chemical reactions, mitochondria use oxygen and simple sugars to create adenosine triphosphate (ATP), the cell's main energy source. The MT-ATP6 protein forms one part (subunit) of an enzyme called ATP synthase, which is responsible for the last step in ATP production. Mutations in the MT-ATP6 gene alter the structure or function of ATP synthase, reducing the ability of mitochondria to make ATP. It remains unclear how this disruption in mitochondrial energy production leads to muscle weakness, vision loss, and the other specific features of NARP. This condition is inherited in a pattern reflecting its location in mitochondrial DNA, which is also known as maternal inheritance. This pattern of inheritance applies to genes contained in mitochondrial DNA. Because egg cells, but not sperm cells, contribute mitochondria to the developing embryo, only females pass mitochondrial conditions to their children. Mitochondrial disorders can appear in every generation of a family and can affect both males and females, but fathers do not pass mitochondrial traits to their children. Most of the body's cells contain thousands of mitochondria, each with one or more copies of mitochondrial DNA. The severity of some mitochondrial disorders is associated with the percentage of mitochondria in each cell that has a particular genetic change. Most individuals with NARP have a specific MT-ATP6 mutation in 70 percent to 90 percent of their mitochondria. When this mutation is present in a higher percentage of a person's mitochondria—greater than 90 percent to 95 percent—it causes a more severe condition known as maternally inherited Leigh syndrome. Because these two conditions result from the same genetic changes and can occur in different members of a single family, researchers believe that they may represent a spectrum of overlapping features instead of two distinct syndromes.
Diagnosis:
The clinical diagnosis is backed up by investigative findings. Citrulline level in blood is decreased. Mitochondrial studies or NARP mtDNA evaluation plays a role in genetic diagnosis which can also be done prenatally.
Treatment:
There is currently no known cure for NARP syndrome. Symptomatic relief is targeted. Antioxidants play a role in improving the oxidative phosphorylation that is otherwise impaired.
Prognosis:
The severity and prognosis vary with the type of mutation involved. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Littlewood's law**
Littlewood's law:
Littlewood's law states that a person can expect to experience events with odds of one in a million (referred to as a "miracle") at the rate of about one per month. It was framed by British mathematician John Edensor Littlewood.
History:
The law was framed by Cambridge University Professor John Edensor Littlewood and published in a 1986 collection of his work, A Mathematician's Miscellany. It seeks, among other things, to debunk one element of supposed supernatural phenomenology and is related to the more general law of truly large numbers, which states that with a sample size large enough, any outrageous (in terms of probability model of single sample) thing is likely to happen.
Description:
Littlewood defines a miracle as an exceptional event of special significance occurring at one in-a-million frequency. He assumes that during the hours a human is awake and alert, a human will see or hear one "event" per second, which may be either exceptional or unexceptional. Additionally, Littlewood supposes that a human is alert for about eight hours daily.
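The resulting 35-day figure is just the arithmetic of these assumptions; a minimal check:

```python
EVENTS_PER_SECOND = 1     # one "event" per second while alert, per Littlewood's assumption
ALERT_HOURS_PER_DAY = 8   # hours per day a person is assumed to be alert

events_per_day = EVENTS_PER_SECOND * ALERT_HOURS_PER_DAY * 3600   # 28,800 events per day
days_per_million = 1_000_000 / events_per_day
print(days_per_million)   # ~34.7 days, i.e. roughly one "miracle" per month
```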
As a result, in 35 days, a human will have experienced about one million events under these suppositions. Therefore, accepting this definition of a miracle, one can expect to observe one miraculous event every 35 days on average, so, according to this reasoning, seemingly miraculous events are commonplace. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**UAProf**
UAProf:
The UAProf (User Agent Profile) specification is concerned with capturing capability and preference information for wireless devices. This information can be used by content providers to produce content in an appropriate format for the specific device.
UAProf is related to the Composite Capability/Preference Profiles Specification created by the World Wide Web Consortium. UAProf is based on RDF.
UAProf files typically have the file extensions rdf or xml, and are usually served with mimetype application/xml. They are an XML-based file format. The RDF format means that the document schema is extensible.
A UAProf file describes the capabilities of a mobile handset, including Vendor, Model, Screensize, Multimedia Capabilities, Character Set support, and more. Recent UAProfiles have also begun to include data conforming to MMS, PSS5 and PSS6 schemas, which includes much more detailed data about video, multimedia, streaming and MMS capabilities.
A mobile handset sends a header within an http request, containing the URL to its UAProf. The http header is usually X-WAP-Profile:, but sometimes may look more like 19-Profile:, WAP-Profile: or a number of other similar headers.
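A minimal sketch of how a server-side component might act on this header: read X-WAP-Profile from the request, fetch the referenced RDF document, and pull one capability out of it. The example URL format and the ScreenSize element (from the HardwarePlatform component) are typical of real profiles, but the parsing here is deliberately crude and illustrative rather than a namespace-aware RDF parser.

```python
import urllib.request
import xml.etree.ElementTree as ET

def screen_size_from_uaprof(profile_header: str):
    """Fetch the UAProf RDF referenced by an X-WAP-Profile header and return its ScreenSize, if any."""
    # Header values are often wrapped in quotes, e.g. "http://example.com/uaprof/SomeHandset.xml" (hypothetical URL)
    url = profile_header.strip().strip('"')
    # Many advertised profile URLs are dead or unavailable; real code needs robust error handling here.
    with urllib.request.urlopen(url, timeout=5) as resp:
        tree = ET.parse(resp)
    # Crude, namespace-agnostic search: real UAProf documents namespace-qualify the ScreenSize element.
    for elem in tree.iter():
        if elem.tag.endswith('ScreenSize') and elem.text:
            return elem.text.strip()
    return None
```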
UAProf production for a device is voluntary: for GSM devices, the UAProf is normally produced by the vendor of the device (e.g. Nokia, Samsung, LG) whereas for CDMA / BREW devices it's more common for the UAProf to be produced by the telecommunications company.
UAProf:
A content delivery system (such as a WAP site) can use UAProf to adapt content for display, or to decide what items to offer for download. However, drawbacks to relying solely on UAProf include: not all devices have UAProfs (including many new Windows Mobile devices, iDen handsets, or legacy handsets); not all advertised UAProfs are available (about 20% of links supplied by handsets are dead or unavailable, according to figures from UAProfile.com); UAProf can contain schema or data errors which can cause parsing to fail; and retrieving and parsing UAProfs in real-time is slow and can add substantial overhead to any given web request, necessitating the creation of a Device Description Repository to cache the UAProfs in, and a workflow to refresh UAProfs to check for deprecation.
UAProf:
There is no industry-wide data quality standard for the data within each field in an UAProf.
The UAProf document itself does not include, in its schema, the user agents of the devices it might apply to (Nokia puts them in the comments).
UAProf:
UAProf headers can often be plain wrong (i.e. refer to a completely different device). UAProf device profiles are one of the sources of device capability information for WURFL, which maps the UAProfile schema to its own, adding many other items and boolean fields relating to device markup, multimedia capabilities and more. This XML data is keyed on the User-Agent: header in a web request.
UAProf:
Another approach to the problem is to combine real-time derived information, component analysis, manual data and UAProfiles to deal with the actual device itself rather than the idealised representation of "offline" approaches such as UAProf or WURFL. This approach allows detection of devices modified by the user, Windows Mobile devices, Legacy devices, Spiders and Bots, and is evidenced in at least one commercially available system.
UAProf:
The W3C MWI (Mobile Web Initiative) and the associated DDWG (Device Description Working Group), recognising the difficulty of collecting and keeping track of UAProfs and device handset information, and the practical shortcomings in the implementation of UAProf across the industry, have outlined specifications for a Device Description Repository, in the expectation that an ecosystem of such repositories will eventually eliminate the need for local device repositories in favour of a web service ecosystem. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PrestaShop**
PrestaShop:
PrestaShop is a freemium, open source e-commerce platform. The software is published under the Open Software License (OSL). It is written in the PHP programming language with support for the MySQL database management system. It has a software dependency on the Symfony PHP framework.
PrestaShop is currently used by 300,000 shops worldwide and is available in 60 different languages.
History:
PrestaShop started in 2005 as a student project within the EPITECH IT School in Paris, France. Originally named phpOpenStore, the software was first available in two languages: English and French. Three months after its launch, the project was translated into thirteen languages. The company, PrestaShop SA, was founded in 2007 by Igor Schlumberger and Bruno Lévêque.
History:
Between May 2010 and April 2012, PrestaShop grew from 17 employees to more than a hundred, with the establishment of secondary headquarters in Miami. As of April 2016, PrestaShop had over 120 employees and offices in 6 countries. In March 2014, PrestaShop SA secured $9.3M in Series B funding to continue its global expansion efforts. In January 2015, the company launched PrestaShop Cloud, a free self-hosted version of its software, which has not been available since at least 2017. The 1.7.x branch of PrestaShop was first released as a stable version in November 2016. Initially, maintenance for the 1.6 version was planned to expire in October 2018; for various reasons, PrestaShop decided to extend this maintenance period until June 30, 2019. PrestaShop has been built as a monolith following traditional object-oriented PHP practices. Originally based on a custom framework, it is progressively being migrated to Symfony. In February 2018, Alexandre Eruimy took over as CEO of PrestaShop. Since then, the company has signed large-scale strategic partnerships with companies such as PayPal, Google, Meta, TikTok and many others, in order to make the latest technological solutions available to e-retailers. As of October 2021, 0.31% of sites employing open-source shopping cart software used PrestaShop, according to software tracking website BuiltWith; according to W3Techs, PrestaShop is used by 0.5% of all websites. In October 2019, PrestaShop closed the Miami headquarters and ceased its operations in the Americas. In 2019, PrestaShop received the Acteurs du Libre International Award for its international development strategy.
History:
The current version of PrestaShop is 8.0.1, and migration from PrestaShop 1.7.8 to 8.0 has been made easier. In November 2021, PrestaShop joined the MBE Worldwide group to accelerate its growth and become the leading commerce platform for business growth worldwide.
Business model:
As an open-source organization, PrestaShop is faced with the challenge of generating revenues.
By leveraging the size and international scope of its open-source community, the company established two main sources of revenue: PrestaShop Addons, a marketplace through which merchants purchase custom addons and themes for their stores; and strategic partnerships with e-commerce industry leaders such as PayPal or Google.
Features:
PrestaShop has more than three hundred built-in features for managing product listing, payments, shipping, manufacturers and suppliers.
PrestaShop uses a web template system that allows users to customize store themes and add new features through add-on modules. The PrestaShop Addons marketplace provides a platform for third-party developers to sell themes and modules to merchants.
Themes PrestaShop provides a basic responsive theme by default. Users can install or develop their own themes that change the display of the website without altering its content.
Modules Add-on modules extend the software's built-in functionalities. Users may install modules directly within the software administration panel or develop their own.
SEO The most important element of search engine optimization (SEO) in PrestaShop is on-page SEO (content, website architecture and HTML).
Partnerships:
On June 14, 2021, Wish announced a partnership with PrestaShop whereby PrestaShop will provide over 300,000 merchants with access to the Wish marketplace. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MT6235**
MT6235:
The MT6235 is a processor used in many Chinese cellular phones (e.g. from ZTE). It is a member of the MT62xx series of processors by MediaTek.
MediaTek-based Chinese cell phones often come with features not common to North American phones, such as analog television viewing and recording. While these phones have vastly different builds and configurations, they all run Mediatek's proprietary operating system based on the Nucleus RTOS.
The MT6235 is a specialized processor design containing both an ARM926EJ-S RISC CPU, running at frequencies of 26, 52, 104 or 208 MHz, and a digital signal processor (DSP).
Subsystems:
Microcontroller Unit (MCU) Subsystem: includes an ARM926EJ-S RISC processor and its accompanying memory management and interrupt handling logic.
Digital Signal Processor (DSP) Subsystem: includes a DSP and its accompanying memory, memory controller, and interrupt controller.
MCU/DSP Interface: the junction at which the MCU and the DSP exchange hardware and software information.
Microcontroller Peripherals: includes all user interface modules and RF control interface modules.
Microcontroller Coprocessors: run computing-intensive processes in place of the microcontroller.
DSP Peripherals: hardware accelerators for the GSM/GPRS/EDGE channel codec.
Multi-media Subsystem: integrates several advanced accelerators to support multi-media applications.
Voice Front End: the data path for converting analog speech to and from digital speech.
Audio Front End: the data path for converting stereo audio from an audio source.
Baseband Front End: the data path for converting a digital signal to and from an analog signal from the RF modules.
Timing Generator: generates the control signals related to the TDMA frame timing.
Power, Reset and Clock Subsystem: manages the power, reset, and clock distribution inside the MT6235.
Sources:
http://ryan.com.br/smf/index.php?topic=481.75;wap2 http://blog.csdn.net/sergeycao/archive/2008/08/26/2832568.aspx MT6235 GSM GPRS Baseband Processor Data Sheet_1.02.pdf https://web.archive.org/web/20160215061959/http://www.mediatek.com/en/products/mobile-communications/feature-phone/mt6235/ | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Alternatives to Darwinian evolution**
Alternatives to Darwinian evolution:
Alternatives to Darwinian evolution have been proposed by scholars investigating biology to explain signs of evolution and the relatedness of different groups of living things. The alternatives in question do not deny that evolutionary changes over time are the origin of the diversity of life, nor that the organisms alive today share a common ancestor from the distant past (or ancestors, in some proposals); rather, they propose alternative mechanisms of evolutionary change over time, arguing against mutations acted on by natural selection as the most important driver of evolutionary change. This distinguishes them from certain other kinds of arguments that deny that large-scale evolution of any sort has taken place, as in some forms of creationism, which do not propose alternative mechanisms of evolutionary change but instead deny that evolutionary change has taken place at all. Not all forms of creationism deny that evolutionary change takes place; notably, proponents of theistic evolution, such as the biologist Asa Gray, assert that evolutionary change does occur and is responsible for the history of life on Earth, with the proviso that this process has been influenced by a god or gods in some meaningful sense.
Alternatives to Darwinian evolution:
Where the fact of evolutionary change was accepted but the mechanism proposed by Charles Darwin, natural selection, was denied, explanations of evolution such as Lamarckism, catastrophism, orthogenesis, vitalism, structuralism and mutationism (called saltationism before 1900) were entertained. Different factors motivated people to propose non-Darwinian mechanisms of evolution. Natural selection, with its emphasis on death and competition, did not appeal to some naturalists because they felt it immoral, leaving little room for teleology or the concept of progress (orthogenesis) in the development of life. Some who came to accept evolution, but disliked natural selection, raised religious objections. Others felt that evolution was an inherently progressive process that natural selection alone was insufficient to explain. Still others felt that nature, including the development of life, followed orderly patterns that natural selection could not explain.
Alternatives to Darwinian evolution:
By the start of the 20th century, evolution was generally accepted by biologists but natural selection was in eclipse. Many alternative theories were proposed, but biologists were quick to discount theories such as orthogenesis, vitalism and Lamarckism which offered no mechanism for evolution. Mutationism did propose a mechanism, but it was not generally accepted. The modern synthesis a generation later claimed to sweep away all the alternatives to Darwinian evolution, though some have been revived as molecular mechanisms for them have been discovered.
Unchanging forms:
Aristotle did not embrace either divine creation or evolution, instead arguing in his biology that each species (eidos) was immutable, breeding true to its ideal eternal form (not the same as Plato's theory of forms). Aristotle's suggestion in De Generatione Animalium of a fixed hierarchy in nature - a scala naturae ("ladder of nature") provided an early explanation of the continuity of living things. Aristotle saw that animals were teleological (functionally end-directed), and had parts that were homologous with those of other animals, but he did not connect these ideas into a concept of evolutionary progress.
In the Middle Ages, Scholasticism developed Aristotle's view into the idea of a great chain of being. The image of a ladder inherently suggests the possibility of climbing, but both the ancient Greeks and mediaeval scholastics such as Ramon Lull maintained that each species remained fixed from the moment of its creation.
By 1818, however, Étienne Geoffroy Saint-Hilaire argued in his Philosophie anatomique that the chain was "a progressive series", where animals like molluscs low on the chain could "rise, by addition of parts, from the simplicity of the first formations to the complication of the creatures at the head of the scale", given sufficient time. Accordingly, Geoffroy and later biologists looked for explanations of such evolutionary change.
Georges Cuvier's 1812 Recherches sur les Ossements Fossiles set out his doctrine of the correlation of parts, namely that since an organism was a whole system, all its parts mutually corresponded, contributing to the function of the whole. So, from a single bone the zoologist could often tell what class or even genus the animal belonged to. And if an animal had teeth adapted for cutting meat, the zoologist could be sure without even looking that its sense organs would be those of a predator and its intestines those of a carnivore. A species had an irreducible functional complexity, and "none of its parts can change without the others changing too". Evolutionists expected one part to change at a time, one change to follow another. In Cuvier's view, evolution was impossible, as any one change would unbalance the whole delicate system.
Louis Agassiz's 1856 "Essay on Classification" exemplified German philosophical idealism. This held that each species was complex within itself, had complex relationships to other organisms, and fitted precisely into its environment, as a pine tree in a forest, and could not survive outside those circles. The argument from such ideal forms opposed evolution without offering an actual alternative mechanism. Richard Owen held a similar view in Britain.
The Lamarckian social philosopher and evolutionist Herbert Spencer, ironically the author of the phrase "survival of the fittest" adopted by Darwin, used an argument like Cuvier's to oppose natural selection. In 1893, he stated that a change in any one structure of the body would require all the other parts to adapt to fit in with the new arrangement. From this, he argued that it was unlikely that all the changes could appear at the right moment if each one depended on random variation; whereas in a Lamarckian world, all the parts would naturally adapt at once, through a changed pattern of use and disuse.
Alternative explanations of change:
Where the fact of evolutionary change was accepted by biologists but natural selection was denied, including but not limited to the late 19th century eclipse of Darwinism, alternative scientific explanations such as Lamarckism, orthogenesis, structuralism, catastrophism, vitalism and theistic evolution were entertained, not necessarily separately. (Purely religious points of view such as young or old earth creationism or intelligent design are not considered here.) Different factors motivated people to propose non-Darwinian evolutionary mechanisms. Natural selection, with its emphasis on death and competition, did not appeal to some naturalists because they felt it immoral, leaving little room for teleology or the concept of progress in the development of life. Some of these scientists and philosophers, like St. George Jackson Mivart and Charles Lyell, who came to accept evolution but disliked natural selection, raised religious objections. Others, such as the biologist and philosopher Herbert Spencer, the botanist George Henslow (son of Darwin's mentor John Stevens Henslow, also a botanist), and the author Samuel Butler, felt that evolution was an inherently progressive process that natural selection alone was insufficient to explain. Still others, including the American paleontologists Edward Drinker Cope and Alpheus Hyatt, had an idealist perspective and felt that nature, including the development of life, followed orderly patterns that natural selection could not explain.
Some felt that natural selection would be too slow, given the estimates of the age of the earth and sun (10–100 million years) being made at the time by physicists such as Lord Kelvin, and some felt that natural selection could not work because at the time the models for inheritance involved blending of inherited characteristics, an objection raised by the engineer Fleeming Jenkin in a review of Origin written shortly after its publication. Another factor at the end of the 19th century was the rise of a new faction of biologists, typified by geneticists like Hugo de Vries and Thomas Hunt Morgan, who wanted to recast biology as an experimental laboratory science. They distrusted the work of naturalists like Darwin and Alfred Russel Wallace, dependent on field observations of variation, adaptation, and biogeography, as being overly anecdotal. Instead they focused on topics like physiology and genetics that could be investigated with controlled experiments in the laboratory, and discounted less accessible phenomena like natural selection and adaptation to the environment.
Alternative explanations of change:
Vitalism Vitalism holds that living organisms differ from other things in containing something non-physical, such as a fluid or vital spirit, that makes them live. The theory dates to ancient Egypt.
Alternative explanations of change:
Since Early Modern times, vitalism stood in contrast to the mechanistic explanation of biological systems started by Descartes. Nineteenth century chemists set out to disprove the claim that forming organic compounds required vitalist influence. In 1828, Friedrich Wöhler showed that urea could be made entirely from inorganic chemicals. Louis Pasteur believed that fermentation required whole organisms, which he supposed carried out chemical reactions found only in living things. The embryologist Hans Driesch, experimenting on sea urchin eggs, showed that separating the first two cells led to two complete but small blastulas, seemingly showing that cell division did not divide the egg into sub-mechanisms, but created more cells each with the vital capability to form a new organism. Vitalism faded out with the demonstration of more satisfactory mechanistic explanations of each of the functions of a living cell or organism. By 1931, biologists had "almost unanimously abandoned vitalism as an acknowledged belief."
Theistic evolution The American botanist Asa Gray used the name "theistic evolution" for his point of view, presented in his 1876 book Essays and Reviews Pertaining to Darwinism. He argued that the deity supplies beneficial mutations to guide evolution. St George Jackson Mivart argued instead in his 1871 On the Genesis of Species that the deity, equipped with foreknowledge, sets the direction of evolution by specifying the (orthogenetic) laws that govern it, and leaves species to evolve according to the conditions they experience as time goes by. The Duke of Argyll set out similar views in his 1867 book The Reign of Law. According to the historian Edward Larson, the theory failed as an explanation in the minds of late 19th century biologists as it broke the rules of methodological naturalism which they had grown to expect. Accordingly, by around 1900, biologists no longer saw theistic evolution as a valid theory. In Larson's view, by then it "did not even merit a nod among scientists." In the 20th century, theistic evolution could take other forms, such as the orthogenesis of Teilhard de Chardin.
Alternative explanations of change:
Orthogenesis Orthogenesis or Progressionism is the hypothesis that life has an innate tendency to change, developing in a unilinear fashion in a particular direction, or simply making some kind of definite progress. Many different versions have been proposed, some such as that of Teilhard de Chardin openly spiritual, others such as Theodor Eimer's apparently simply biological. These theories often combined orthogenesis with other supposed mechanisms. For example, Eimer believed in Lamarckian evolution, but felt that internal laws of growth determined which characteristics would be acquired and would guide the long-term direction of evolution.
Orthogenesis was popular among paleontologists such as Henry Fairfield Osborn. They believed that the fossil record showed unidirectional change, but did not necessarily accept that the mechanism driving orthogenesis was teleological (goal-directed). Osborn argued in his 1918 book Origin and Evolution of Life that trends in Titanothere horns were both orthogenetic and non-adaptive, and could be detrimental to the organism. For instance, they supposed that the large antlers of the Irish elk had caused its extinction.
Support for orthogenesis fell during the modern synthesis in the 1940s when it became apparent that it could not explain the complex branching patterns of evolution revealed by statistical analysis of the fossil record. Work in the 21st century has supported the mechanism and existence of mutation-biased adaptation (a form of mutationism), meaning that constrained orthogenesis is now seen as possible. Moreover, the self-organizing processes involved in certain aspects of embryonic development often exhibit stereotypical morphological outcomes, suggesting that evolution will proceed in preferred directions once key molecular components are in place.
Alternative explanations of change:
Lamarckism Jean-Baptiste Lamarck's 1809 evolutionary theory, transmutation of species, was based on a progressive (orthogenetic) drive toward greater complexity. Lamarck also shared the belief, common at the time, that characteristics acquired during an organism's life could be inherited by the next generation, producing adaptation to the environment. Such characteristics were caused by the use or disuse of the affected part of the body. This minor component of Lamarck's theory became known, much later, as Lamarckism. Darwin included Effects of the increased Use and Disuse of Parts, as controlled by Natural Selection in On the Origin of Species, giving examples such as large ground feeding birds getting stronger legs through exercise, and weaker wings from not flying until, like the ostrich, they could not fly at all. In the late 19th century, neo-Lamarckism was supported by the German biologist Ernst Haeckel, the American paleontologists Edward Drinker Cope and Alpheus Hyatt, and the American entomologist Alpheus Packard. Butler and Cope believed that this allowed organisms to effectively drive their own evolution. Packard argued that the loss of vision in the blind cave insects he studied was best explained through a Lamarckian process of atrophy through disuse combined with inheritance of acquired characteristics. Meanwhile, the English botanist George Henslow studied how environmental stress affected the development of plants, and he wrote that the variations induced by such environmental factors could largely explain evolution; he did not see the need to demonstrate that such variations could actually be inherited. Critics pointed out that there was no solid evidence for the inheritance of acquired characteristics. Instead, the experimental work of the German biologist August Weismann resulted in the germ plasm theory of inheritance, which Weismann said made the inheritance of acquired characteristics impossible, since the Weismann barrier would prevent any changes that occurred to the body after birth from being inherited by the next generation.
In modern epigenetics, biologists observe that phenotypes depend on heritable changes to gene expression that do not involve changes to the DNA sequence. These changes can cross generations in plants, animals, and prokaryotes. This is not identical to traditional Lamarckism, as the changes do not last indefinitely and do not affect the germ line and hence the evolution of genes.
Alternative explanations of change:
Catastrophism Catastrophism is the hypothesis, argued by the French anatomist and paleontologist Georges Cuvier in his 1812 Recherches sur les ossements fossiles de quadrupèdes, that the various extinctions and the patterns of faunal succession seen in the fossil record were caused by large-scale natural catastrophes such as volcanic eruptions and, for the most recent extinctions in Eurasia, the inundation of low-lying areas by the sea. This was explained purely by natural events: he did not mention Noah's flood, nor did he ever refer to divine creation as the mechanism for repopulation after an extinction event, though he did not support evolutionary theories such as those of his contemporaries Lamarck and Geoffroy Saint-Hilaire either. Cuvier believed that the stratigraphic record indicated that there had been several such catastrophes, recurring natural events, separated by long periods of stability during the history of life on earth. This led him to believe the Earth was several million years old.
Catastrophism has found a place in modern biology with the Cretaceous–Paleogene extinction event at the end of the Cretaceous period, as proposed in a paper by Walter and Luis Alvarez in 1980. It argued that a 10-kilometre (6.2 mi) asteroid struck Earth 66 million years ago at the end of the Cretaceous period. The event, whatever it was, made about 70% of all species extinct, including the dinosaurs, leaving behind the Cretaceous–Paleogene boundary. In 1990, a 180-kilometre (110 mi) candidate crater marking the impact was identified at Chicxulub in the Yucatán Peninsula of Mexico.
Alternative explanations of change:
Structuralism Biological structuralism objects to an exclusively Darwinian explanation of natural selection, arguing that other mechanisms also guide evolution, and sometimes implying that these supersede selection altogether. Structuralists have proposed different mechanisms that might have guided the formation of body plans. Before Darwin, Étienne Geoffroy Saint-Hilaire argued that animals shared homologous parts, and that if one was enlarged, the others would be reduced in compensation. After Darwin, D'Arcy Thompson hinted at vitalism and offered geometric explanations in his classic 1917 book On Growth and Form. Adolf Seilacher suggested mechanical inflation for "pneu" structures in Ediacaran biota fossils such as Dickinsonia. Günter P. Wagner argued for developmental bias, structural constraints on embryonic development. Stuart Kauffman favoured self-organisation, the idea that complex structure emerges holistically and spontaneously from the dynamic interaction of all parts of an organism. Michael Denton argued for laws of form by which Platonic universals or "Types" are self-organised. In 1979 Stephen J. Gould and Richard Lewontin proposed biological "spandrels", features created as a byproduct of the adaptation of nearby structures. Gerd Müller and Stuart Newman argued that the appearance in the fossil record of most of the current phyla in the Cambrian explosion was "pre-Mendelian" evolution caused by plastic responses of morphogenetic systems that were partly organized by physical mechanisms. Brian Goodwin, described by Wagner as part of "a fringe movement in evolutionary biology", denied that biological complexity can be reduced to natural selection, and argued that pattern formation is driven by morphogenetic fields. Darwinian biologists have criticised structuralism, emphasising that there is plentiful evidence from deep homology that genes have been involved in shaping organisms throughout evolutionary history. They accept that some structures such as the cell membrane self-assemble, but question the ability of self-organisation to drive large-scale evolution.
Alternative explanations of change:
Saltationism, mutationism Saltationism held that new species arise as a result of large mutations. It was seen as a much faster alternative to the Darwinian concept of a gradual process of small random variations being acted on by natural selection. It was popular with early geneticists such as Hugo de Vries, who along with Carl Correns helped rediscover Gregor Mendel's laws of inheritance in 1900, William Bateson, a British zoologist who switched to genetics, and early in his career, Thomas Hunt Morgan. These ideas developed into mutationism, the mutation theory of evolution. This held that species went through periods of rapid mutation, possibly as a result of environmental stress, that could produce multiple mutations, and in some cases completely new species, in a single generation, based on de Vries's experiments with the evening primrose, Oenothera, from 1886. The primroses seemed to be constantly producing new varieties with striking variations in form and color, some of which appeared to be new species because plants of the new generation could only be crossed with one another, not with their parents. However, Hermann Joseph Muller showed in 1918 that the new varieties de Vries had observed were the result of polyploid hybrids rather than rapid genetic mutation.
Initially, de Vries and Morgan believed that mutations were so large as to create new forms such as subspecies or even species instantly. Morgan's 1910 fruit fly experiments, in which he isolated mutations for characteristics such as white eyes, changed his mind. He saw that mutations represented small Mendelian characteristics that would only spread through a population when they were beneficial, helped by natural selection. This represented the germ of the modern synthesis, and the beginning of the end for mutationism as an evolutionary force.
Contemporary biologists accept that mutation and selection both play roles in evolution; the mainstream view is that while mutation supplies material for selection in the form of variation, all non-random outcomes are caused by natural selection. Masatoshi Nei argues instead that the production of more efficient genotypes by mutation is fundamental for evolution, and that evolution is often mutation-limited. The endosymbiotic theory implies rare but major events of saltational evolution by symbiogenesis. Carl Woese and colleagues suggested that the absence of RNA signature continuum between domains of bacteria, archaea, and eukarya shows that these major lineages materialized via large saltations in cellular organization. Saltation at a variety of scales is agreed to be possible by mechanisms including polyploidy, which certainly can create new species of plant, gene duplication, lateral gene transfer, and transposable elements (jumping genes).
Alternative explanations of change:
Genetic drift The neutral theory of molecular evolution, proposed by Motoo Kimura in 1968, holds that at the molecular level most evolutionary changes and most of the variation within and between species is not caused by natural selection but by genetic drift of mutant alleles that are neutral. A neutral mutation is one that does not affect an organism's ability to survive and reproduce. The neutral theory allows for the possibility that most mutations are deleterious, but holds that because these are rapidly purged by natural selection, they do not make significant contributions to variation within and between species at the molecular level. Mutations that are not deleterious are assumed to be mostly neutral rather than beneficial.
The theory was controversial as it sounded like a challenge to Darwinian evolution; controversy was intensified by a 1969 paper by Jack Lester King and Thomas H. Jukes, provocatively but misleadingly titled "Non-Darwinian Evolution". It provided a wide variety of evidence including protein sequence comparisons, studies of the Treffers mutator gene in E. coli, analysis of the genetic code, and comparative immunology, to argue that most protein evolution is due to neutral mutations and genetic drift.
According to Kimura, the theory applies only for evolution at the molecular level, while phenotypic evolution is controlled by natural selection, so the neutral theory does not constitute a true alternative.
Combined theories:
The various alternatives to Darwinian evolution by natural selection were not necessarily mutually exclusive. The evolutionary philosophy of the American palaeontologist Edward Drinker Cope is a case in point. Cope, a religious man, began his career denying the possibility of evolution. In the 1860s, he accepted that evolution could occur, but, influenced by Agassiz, rejected natural selection. Cope accepted instead the theory of recapitulation of evolutionary history during the growth of the embryo - that ontogeny recapitulates phylogeny, which Agassiz believed showed a divine plan leading straight up to man, in a pattern revealed both in embryology and palaeontology. Cope did not go so far, seeing that evolution created a branching tree of forms, as Darwin had suggested. Each evolutionary step was however non-random: the direction was determined in advance and had a regular pattern (orthogenesis), and steps were not adaptive but part of a divine plan (theistic evolution). This left unanswered the question of why each step should occur, and Cope switched his theory to accommodate functional adaptation for each change. Still rejecting natural selection as the cause of adaptation, Cope turned to Lamarckism to provide the force guiding evolution. Finally, Cope supposed that Lamarckian use and disuse operated by causing a vitalist growth-force substance, "bathmism", to be concentrated in the areas of the body being most intensively used; in turn, it made these areas develop at the expense of the rest. Cope's complex set of beliefs thus assembled five evolutionary philosophies: recapitulationism, orthogenesis, theistic evolution, Lamarckism, and vitalism. Other palaeontologists and field naturalists continued to hold beliefs combining orthogenesis and Lamarckism until the modern synthesis in the 1930s.
Rebirth of natural selection, with continuing alternatives:
By the start of the 20th century, during the eclipse of Darwinism, biologists were doubtful of natural selection, but equally were quick to discount theories such as orthogenesis, vitalism and Lamarckism which offered no mechanism for evolution. Mutationism did propose a mechanism, but it was not generally accepted. The modern synthesis a generation later, roughly between 1918 and 1932, broadly swept away all the alternatives to Darwinism, though some including forms of orthogenesis, epigenetic mechanisms that resemble Lamarckian inheritance of acquired characteristics, catastrophism, structuralism, and mutationism have been revived, such as through the discovery of molecular mechanisms.
Biology has become Darwinian, but belief in some form of progress (orthogenesis) remains both in the public mind and among biologists. Ruse argues that evolutionary biologists will probably continue to believe in progress for three reasons. Firstly, the anthropic principle demands people able to ask about the process that led to their own existence, as if they were the pinnacle of such progress. Secondly, scientists in general and evolutionists in particular believe that their work is leading them progressively closer to a true grasp of reality, as knowledge increases, and hence (runs the argument) there is progress in nature also. Ruse notes in this regard that Richard Dawkins explicitly compares cultural progress with memes to biological progress with genes. Thirdly, evolutionists are self-selected; they are people, such as the entomologist and sociobiologist E. O. Wilson, who are interested in progress to supply a meaning for life.
Sources:
Birch, Charles; Cobb, John B. (1985). The Liberation of Life: From the Cell to the Community. University of North Texas. ISBN 978-0-9626807-0-0.
Bowler, Peter J. (1989) [1983]. The Eclipse of Darwinism: anti-Darwinian evolutionary theories in the decades around 1900. Johns Hopkins University Press. ISBN 978-0-8018-4391-4.
Bowler, Peter J. (2003). Evolution:The History of an Idea. University of California Press. ISBN 978-0-520-23693-6.
Darwin, Charles (1872). The Origin of Species by Means of Natural Selection, or the Preservation of Favored Races in the Struggle for Life (6th ed.). London: John Murray. ISBN 978-1-904633-78-5.
Endersby, Jim (2007). A Guinea Pig's History of Biology. Harvard University Press. ISBN 978-0-674-02713-8.
Larson, Edward J. (2004). Evolution: The Remarkable History of Scientific Theory. Modern Library. ISBN 978-0-679-64288-6.
Leroi, Armand Marie (2015) [2014]. The Lagoon: How Aristotle Invented Science. Bloomsbury. ISBN 978-1-4088-3622-4.
Lloyd, G. E. R. (1968). Aristotle: The Growth and Structure of His Thought. Cambridge University Press. ISBN 978-0-521-09456-6.
Lovejoy, Arthur O. (2011) [1936]. The Great Chain of Being: A Study of the History of an Idea. Transaction Publishers. ISBN 978-0-674-36153-9.
Mayr, Ernst (1985). The Growth of Biological Thought: Diversity, Evolution, and Inheritance. Harvard University Press. ISBN 978-0-674-36446-2.
McGowan, Christopher (2001). The Dragon Seekers. Cambridge, Massachusetts: Perseus Publishing. ISBN 978-0-7382-0282-2.
Quammen, David (2006). The Reluctant Mr. Darwin. Atlas Books. ISBN 978-0-393-05981-6.
Rudwick, Martin J. S. (1972). The Meaning of Fossils. Chicago, Illinois: University of Chicago Press. ISBN 978-0-226-73103-2.
Ruse, Michael (1996). Monad to man: the Concept of Progress in Evolutionary Biology. Harvard University Press. ISBN 978-0-674-03248-4.
Ruse, Michael (2013). "17. From Organicism to Mechanism – and Halfway Back?". In Henning, Brian G.; Scarfe, Adam (eds.). Beyond Mechanism: Putting Life Back Into Biology. Lexington Books. p. 419. ISBN 978-0-7391-7437-1.
Seilacher, Adolf (1991). "Self-Organizing Mechanisms in Morphogenesis and Evolution". In Schmidt-Kittler, Norbert; Vogel, Klaus (eds.). Constructional Morphology and Evolution. Springer. pp. 251–271. doi:10.1007/978-3-642-76156-0_17. ISBN 978-3-642-76158-4. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Street magic**
Street magic:
Street magic falls into two genres: traditional street performance and guerrilla magic.
Traditional street performance:
The first definition of street magic refers to a traditional form of magic performance – that of busking. In this, the magician draws an audience from passers by and performs an entire act for them. In exchange, the magician seeks remuneration either by having a receptacle for tips available throughout the act (known in the parlance as a "trickle show"), or by offering a receptacle for tips at the end of the show. The term "passing the hat" comes from the practice of having the hat passed before the final trick is performed, as opposed to "bottling" the audience at the end of the performance.
Traditional street performance:
Street magic most often consists of what has been referred to in the past as "hand" or "pocket" magic, sleight of hand. Whether card magic or magic performed with coins, balls, scarves, or rope, even occasionally mentalism, regardless of the props involved, the ability to draw and hold an audience is cited by contemporary practitioners as a skill of greater importance than the illusions themselves.
Traditional street performance:
The famous Indian Mango Tree is an old and venerated trick as performed by street magicians of the past and while it is demonstrably not of the hand magic variety, it exemplifies the fact that even large stage sized illusions can be presented in the street. In the trick, the magician apparently plants a mango seed, covers it with a cloth, makes mysterious incantations and, removing the cloth from time to time successively shows a tree of various heights, up to two or three feet. The same effect was achieved by the Apaches. Instead of a mango seed, a yucca seed was planted and watered. Covering the seed with a rawhide animal skin, the seed would apparently root, grow and finally flower within the span of but a few minutes.
Traditional street performance:
Anthropologists chronicle this form of street magic from approximately 3,000 years ago – and there are records of such performers across the continents, notably Europe, Asia/South Asia and the Middle East. While it is a very old performing style, its history is not particularly well documented in print. In his diary, Samuel Pepys mentions seeing magicians performing in this fashion and one can see street magicians in depictions by Hieronymus Bosch, William Hogarth, and Pieter Brueghel. Book XIII of Reginald Scot's Discoverie of Witchcraft (1584) describes magic tricks of the type performed by buskers in the 16th century.
Traditional street performance:
New York based artist and magician Jeff Sheridan is regarded as one of the pre-eminent U.S. street magicians to emerge from the surge in street performance artistry which began in the late '60s. He authored the 1977 book, Street Magic, taught Jeff McBride and allegedly was one of the performers who inspired and taught the young David Blaine after Blaine saw Sheridan perform in Central Park.
Traditional street performance:
Yorkshire Egg Magic is a long practised form of traditional street magic in the UK.
More recently, other performers have garnered accolades from the magic community for their contributions to the art. Jim Cellini (a.k.a. Richard Sullivan) has been a full-time street performer since the 1970s and has published a book (Cellini: The Royal Touch) and DVDs (The Art of Street Performing, volumes 1–3) on the subject. Gazzo Macee (a.k.a. Gary Osborne) has been a full-time street performer since the 1980s and has published a booklet ("The Art of Krowd Keeping" written for Gazzo by Danny Hustle and Jim Wells) and DVD (Street Cups) on the subject. Eric Evans has been a full-time professional since the 1990s and co-wrote – along with Nowlin Craver – a book on the subject (The Secret Art of Magic).
Guerrilla magic:
The second category is more appropriately called "guerrilla magic". It is a relatively recent style of performing magic illusions where the magician performs a single trick or two in a public space (such as on a sidewalk) for an unpaying audience. The desired effect of this "hit and run" style of magic is to give the audience a feeling that what they are seeing is impromptu, unrehearsed, and experimental.
Guerrilla magic:
This style of "street magic" is associated with David Blaine (who popularized the term) and more recently, Criss Angel, Derren Brown and Cyril Takayama. The format was developed to play well on television beginning with the 1997 ABC television special David Blaine: Street Magic. Many magicians respect Blaine's choice of material and give him credit for creating an image of the contemporary magician distinct from other magicians in recent television history, such as David Copperfield or Doug Henning. However, magic historians, such as Jamy Ian Swiss note that "guerrilla magic" is primarily associated with only a few individuals who perform on television and certain magic dealers who sell effects to amateur magicians who watch these programs. Eugene Burger opined to Jamy Ian Swiss "On one level it's the ultimate trivialization of magic: accosting strangers on the street." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Slow order**
Slow order:
A slow order is a local speed restriction on a rail line that is set below the track's normal speed limit.
Slow orders are usually imposed by railway dispatchers for sections of track that are in some way deficient or when there is a requirement to perform maintenance on a section of railway.
Slow orders are employed whenever continuous welded rail has some sort of derailment risk or other dangerous condition, such as an open critical joint, joints close to a bridge or movable bridge, or issues with settling ballast. Sometimes, slow orders are imposed because of track geometry defects or snow accumulations.
When maintenance workers wish to work under dispatcher protection without a designated "window" of time where no trains are allowed to run, they typically post flags at either end of the section on which they will be working, and a slow order is posted on the track.
Since slow orders tend to disrupt timetables and can affect time-sensitive shipments, railroads try to get them cleared as soon as is safely possible.
Around the world:
Croatia On Croatian Railways, a slow order is signaled with the following signs: 1 - Signal "Slow" (Croatian: Lagano) - a round yellow board with a white border. This sign warns the train driver that the train is approaching the part of the track where the slow order is in place. The board is placed at braking distance before the point where the slow order begins.
Around the world:
2 - Signal "Beginning of slow order" (Croatian: Početak lagane vožnje) - white alphanumeric number on a black square board - indicates the place of the start of slow order. The number on the board shows maximum permitted speed over the slow order section in kilometers per hour.
3 - Revocation signal (Croatian: Opozivni signal) - round green board with a white border - indicates the place of the end of slow order. The driver may accelerate the train to regular speed only after the last axle of the last vehicle in the train composition has passed this sign.
United States In June 2022, the Coast Line between Camarillo and San Luis Obispo was subject to a slow order issued by the Union Pacific Railroad due to malfunctioning grade crossing signals. As a result, Pacific Surfliner trains experienced delays of up to 30 minutes, since trains were required to slow to 20 mph at most grade crossings. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Three-mirror anastigmat**
Three-mirror anastigmat:
A three-mirror anastigmat is an anastigmat telescope built with three curved mirrors, enabling it to minimize all three main optical aberrations – spherical aberration, coma, and astigmatism. This is primarily used to enable wide fields of view, much larger than possible with telescopes with just one or two curved surfaces.
Three-mirror anastigmat:
A telescope with only one curved mirror, such as a Newtonian telescope, will always have aberrations. If the mirror is spherical, it will suffer from spherical aberration. If the mirror is made parabolic, to correct the spherical aberration, then it must necessarily suffer from coma and off-axis astigmatism. With two curved mirrors, such as the Ritchey–Chrétien telescope, coma can be minimized as well. This allows a larger useful field of view, and the remaining astigmatism is symmetrical around the distorted objects, allowing astrometry across the wide field of view. However, the astigmatism can be reduced by including a third curved optical element. When this element is a mirror, the result is a three-mirror anastigmat. In practice, the design may also include any number of flat fold mirrors, used to bend the optical path into more convenient configurations.
History:
Many combinations of three mirror figures can be used to cancel all third-order aberrations. In general these involve solving a relatively complicated set of equations. A few configurations are simple enough, however, that they could be designed starting from a few intuitive concepts.
History:
Paul telescope The first were proposed in 1935 by Maurice Paul. The basic idea behind Paul's solution is that spherical mirrors, with an aperture stop at the centre of curvature, have only spherical aberration – no coma or astigmatism (but they do produce an image on a curved surface of half the radius of curvature of the spherical mirror). So if the spherical aberration can be corrected, a very wide field of view can be obtained. This is similar to the conventional Schmidt design, but the Schmidt does this with a refractive corrector plate instead of a third mirror.
History:
Paul's idea was to start with a Mersenne beam compressor, which looks like a Cassegrain made from two (confocal) paraboloids, with both the input and output beams collimated. The compressed input beam is then directed to a spherical tertiary mirror, which results in traditional spherical aberration. Paul's key insight is that the secondary can then be converted back to a spherical mirror.
History:
One way to look at this is to imagine the tertiary mirror, which suffers from spherical aberration, is replaced by a Schmidt telescope, with a correcting plate at its centre of curvature. If the radii of the secondary and tertiary are of the same magnitude, but opposite sign, and if the centre of curvature of the tertiary is placed directly at the vertex of the secondary mirror, then the Schmidt plate would lie on top of the paraboloid secondary mirror. Therefore, the Schmidt plate required to make the tertiary mirror a Schmidt telescope is eliminated by the paraboloid figuring on the convex secondary of the Mersenne system, as each corrects the same magnitude of spherical aberration, but the opposite sign. Also, as the system of Mersenne + Schmidt is the sum of two anastigmats (the Mersenne system is an anastigmat, and so is the Schmidt system), the resultant system is also an anastigmat, as third-order aberrations are purely additive. In addition the secondary is now easier to fabricate. This design is also called a Mersenne–Schmidt, since it uses a Mersenne configuration as the corrector for a Schmidt telescope.
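As a rough numerical illustration of these relations, the sketch below lays out a Paul-type system from a chosen primary focal length and beam-compression ratio. The only assumptions added to the description above are the paraxial relation R = 2f and that confocal paraboloids are separated by the difference of their focal lengths; the function name and signs are illustrative only, not a rigorous optical prescription.

```python
# Paraxial layout sketch for a Paul-type three-mirror anastigmat.
# Assumptions beyond the text: mirror radius R = 2*f, and confocal
# paraboloids (the Mersenne pair) separated by f_primary - f_secondary.
def paul_layout(f_primary: float, compression: float) -> dict:
    f_secondary = f_primary / compression          # confocal Mersenne pair
    d_primary_secondary = f_primary - f_secondary
    r_secondary = 2 * f_secondary
    # Tertiary: same magnitude of radius as the secondary, opposite sign,
    # with its centre of curvature placed at the secondary's vertex.
    r_tertiary = -r_secondary
    d_secondary_tertiary = abs(r_tertiary)
    return {
        "secondary focal length": f_secondary,
        "primary-to-secondary spacing": d_primary_secondary,
        "secondary radius": r_secondary,
        "tertiary radius": r_tertiary,
        "secondary-to-tertiary spacing": d_secondary_tertiary,
    }

# Example: a 4 m focal-length primary with 4x beam compression.
for key, value in paul_layout(4.0, 4.0).items():
    print(f"{key}: {value:+.2f} m")
```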
History:
Paul–Baker telescope Paul's solution had a curved focal plane, but this was corrected in the Paul–Baker design, introduced in 1969 by James Gilbert Baker. The Paul–Baker design adds extra spacing and reshapes the secondary to elliptical, which corrects field curvature to flatten the focal plane.
Korsch telescope A more general set of solutions was developed by Dietrich Korsch in 1972. A Korsch telescope is corrected for spherical aberration, coma, astigmatism, and field curvature and can have a wide field of view while ensuring that there is little stray light in the focal plane.
Examples:
The James Webb Space Telescope is a three-mirror anastigmat featuring an ellipsoidal primary, hyperboloidal secondary, and ellipsoidal tertiary.
The Euclid mission uses a Korsch telescope.
The "Cambridge University Three-Mirror Telescope". project includes a 100 mm working model built in 1985 and a 500 mm prototype built in 1986.
The Vera C. Rubin Observatory's telescope (formerly known as Large Synoptic Survey Telescope) is a modified three-mirror anastigmat of Paul–Baker design.
The KH-11 Kennen (or perhaps the now cancelled Future Imagery Architecture) telescopes may be a three-mirror anastigmat, since the spare telescopes given to NASA by the National Reconnaissance Office are of this form.
The Extremely Large Telescope will be a three-mirror anastigmat design, with two additional flat fold mirrors.
The Deimos‑2 and DubaiSat‑2 Earth observation satellites both carry a three-mirror anastigmat Korsch design telescope.
The Ralph imaging spectrometer on the New Horizons spacecraft.
The Nancy Grace Roman Space Telescope, formerly named the Wide Field Infrared Survey Telescope (WFIRST), employs a folded three-mirror anastigmat featuring an ellipsoidal primary, hyperboloidal secondary, and ellipsoidal tertiary. An earlier design used an off-axis three-mirror anastigmat. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DDR SDRAM**
DDR SDRAM:
Double Data Rate Synchronous Dynamic Random-Access Memory (DDR SDRAM) is a double data rate (DDR) synchronous dynamic random-access memory (SDRAM) class of memory integrated circuits used in computers. DDR SDRAM, also retroactively called DDR1 SDRAM, has been superseded by DDR2 SDRAM, DDR3 SDRAM, DDR4 SDRAM and DDR5 SDRAM. None of its successors are forward or backward compatible with DDR1 SDRAM, meaning DDR2, DDR3, DDR4 and DDR5 memory modules will not work on DDR1-equipped motherboards, and vice versa.
DDR SDRAM:
Compared to single data rate (SDR) SDRAM, the DDR SDRAM interface makes higher transfer rates possible through more strict control of the timing of the electrical data and clock signals. Implementations often have to use schemes such as phase-locked loops and self-calibration to reach the required timing accuracy. The interface uses double pumping (transferring data on both the rising and falling edges of the clock signal) to double data bus bandwidth without a corresponding increase in clock frequency. One advantage of keeping the clock frequency low is that it reduces the signal integrity requirements on the circuit board connecting the memory to the controller. The name "double data rate" refers to the fact that a DDR SDRAM with a certain clock frequency achieves nearly twice the bandwidth of a SDR SDRAM running at the same clock frequency, due to this double pumping.
DDR SDRAM:
With data being transferred 64 bits at a time, DDR SDRAM gives a transfer rate (in bytes/s) of (memory bus clock rate) × 2 (for dual rate) × 64 (number of bits transferred) / 8 (number of bits/byte). Thus, with a bus frequency of 100 MHz, DDR SDRAM gives a maximum transfer rate of 1600 MB/s.
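The formula can be checked directly; the sketch below is a minimal Python rendering of it, with the corresponding DDR-xxx/PC-xxxx module names (used later in this article) noted in comments. The function name and default bus width are illustrative.

```python
# Transfer rate per the formula above:
# bus clock * 2 (double data rate) * 64 bits per transfer / 8 bits per byte.
def ddr_transfer_rate_mb_s(bus_clock_mhz: float, bus_width_bits: int = 64) -> float:
    return bus_clock_mhz * 2 * bus_width_bits / 8

print(ddr_transfer_rate_mb_s(100))  # DDR-200 / PC-1600:  1600.0 MB/s
print(ddr_transfer_rate_mb_s(133))  # DDR-266 / PC-2100: ~2128 MB/s (name rounds down)
print(ddr_transfer_rate_mb_s(200))  # DDR-400 / PC-3200:  3200.0 MB/s
```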
History:
In the late 1980s, IBM built DRAMs using a dual-edge clocking feature and presented their results at the International Solid-State Circuits Conference in 1990. Samsung demonstrated the first DDR memory prototype in 1997, and released the first commercial DDR SDRAM chip (64 Mbit) in June 1998, followed soon after by Hyundai Electronics (now SK Hynix) the same year. The development of DDR began in 1996, before its specification was finalized by JEDEC in June 2000 (JESD79). JEDEC has set standards for the data rates of DDR SDRAM, divided into two parts: the first specification is for memory chips, and the second is for memory modules. The first retail PC motherboard using DDR SDRAM was released in August 2000.
Specification:
Modules To increase memory capacity and bandwidth, chips are combined on a module. For instance, the 64-bit data bus for a DIMM requires eight 8-bit chips, addressed in parallel. Multiple chips with common address lines are called a memory rank. The term was introduced to avoid confusion with chip-internal rows and banks. A memory module may bear more than one rank. The term sides would also be confusing because it incorrectly suggests the physical placement of chips on the module. All ranks are connected to the same memory bus (address + data). The chip select signal is used to issue commands to a specific rank.
Specification:
Adding modules to the single memory bus creates additional electrical load on its drivers. To mitigate the resulting bus signaling rate drop and overcome the memory bottleneck, new chipsets employ the multi-channel architecture.
Note: All items listed above are specified by JEDEC as JESD79F. All RAM data rates in-between or above these listed specifications are not standardized by JEDEC – often they are simply manufacturer optimizations using tighter tolerances or overvolted chips. The package sizes in which DDR SDRAM is manufactured are also standardized by JEDEC.
Specification:
There is no architectural difference between DDR SDRAM modules. Modules are instead designed to run at different clock frequencies: for example, a PC-1600 module is designed to run at 100 MHz, and a PC-2100 is designed to run at 133 MHz. A module's rated clock speed designates the data rate at which it is guaranteed to perform; hence it is guaranteed to run at lower clock rates (underclocking) and may possibly run at higher clock rates (overclocking) than those for which it was made. DDR SDRAM modules for desktop computers, dual in-line memory modules (DIMMs), have 184 pins (as opposed to 168 pins on SDRAM, or 240 pins on DDR2 SDRAM), and can be differentiated from SDRAM DIMMs by the number of notches (DDR SDRAM has one, SDRAM has two). DDR SDRAM for notebook computers, SO-DIMMs, have 200 pins, which is the same number of pins as DDR2 SO-DIMMs. These two specifications are notched very similarly and care must be taken during insertion if unsure of a correct match. Most DDR SDRAM operates at a voltage of 2.5 V, compared to 3.3 V for SDRAM. This can significantly reduce power consumption. Chips and modules with the DDR-400/PC-3200 standard have a nominal voltage of 2.6 V.
Specification:
JEDEC Standard No. 21–C defines three possible operating voltages for 184 pin DDR, as identified by the key notch position relative to its centreline. Page 4.5.10-7 defines 2.5V (left), 1.8V (centre), TBD (right), while page 4.20.5–40 nominates 3.3V for the right notch position. The orientation of the module for determining the key notch position is with 52 contact positions to the left and 40 contact positions to the right.
Specification:
Increasing the operating voltage slightly can increase maximum speed but at the cost of higher power dissipation and heating, and at the risk of malfunctioning or damage.
Capacity Number of DRAM devices The number of chips is a multiple of 8 for non-ECC modules and a multiple of 9 for ECC modules. Chips can occupy one side (single sided) or both sides (dual sided) of the module. The maximal number of chips per DDR module is 36 (9×4) for ECC and 32 (8×4) for non-ECC.
ECC vs non-ECC Modules that have error-correcting code are labeled as ECC. Modules without error correcting code are labeled non-ECC.
Timings CAS latency (CL), clock cycle time (tCK), row cycle time (tRC), refresh row cycle time (tRFC), row active time (tRAS).
Buffering Registered (or buffered) vs unbuffered.
Packaging Typically DIMM or SO-DIMM.
Specification:
Power consumption A test with DDR and DDR2 RAM in 2005 found that average power consumption was of the order of 1–3 W per 512 MB module; consumption increases with clock rate and when the module is in use rather than idling. A manufacturer has produced calculators to estimate the power used by various types of RAM. Module and chip characteristics are inherently linked.
Specification:
Total module capacity is the product of one chip's capacity and the number of chips. ECC modules devote one extra bit per 8 data bits (one bit per byte) to error correction, so their usable capacity is 8⁄9 of the raw chip capacity. A module of any particular size can therefore be assembled either from 32 small chips (36 for ECC memory), or from 16 (18) or 8 (9) larger ones.
Specification:
DDR memory bus width per channel is 64 bits (72 for ECC memory). Total module bit width is the product of bits per chip and the number of chips; it also equals the number of ranks multiplied by the DDR memory bus width. Consequently, a module with a greater number of chips, or one using ×8 chips instead of ×4, will have more ranks.
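To make the relationship concrete, here is a small sketch (illustrative only; the function name is made up) that derives the rank count of a non-ECC module from its chip count and chip width:

```python
def module_ranks(chip_count: int, bits_per_chip: int, bus_width: int = 64) -> int:
    """Ranks = total module bit width / memory bus width."""
    total_width = chip_count * bits_per_chip
    if total_width % bus_width:
        raise ValueError("chip organization does not fill the bus evenly")
    return total_width // bus_width

# Two ways to build a 1 GB non-ECC module from 512 Mbit chips:
print(module_ranks(chip_count=16, bits_per_chip=8))  # 16 x (64M x 8) chips -> 2 ranks
print(module_ranks(chip_count=16, bits_per_chip=4))  # 16 x (128M x 4) chips -> 1 rank
```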
Specification:
This example compares different real-world server memory modules with a common size of 1 GB. Buyers of 1 GB modules should be careful: all of these variations can be sold at the same price point without stating whether the chips are ×4 or ×8, or whether the module is single- or dual-ranked.
There is a common belief that the number of module ranks equals the number of sides. As the data above shows, this is not true: 2-side/1-rank modules also exist. One could even imagine a 1-side/2-rank module with 16 (18) ×8 chips on a single side, but it is unlikely such a module was ever produced.
Specification:
Chip characteristics DRAM density The size of a chip is measured in megabits. Most motherboards only recognize 1 GB modules if they contain 64M×8 chips (low density); 128M×4 (high-density) 1 GB modules will most likely not work. The JEDEC standard allows 128M×4 only for registered modules designed specifically for servers, but some generic manufacturers do not comply.
Specification:
Organization The notation 64M×4 means that the memory matrix has 64 million (the product of banks × rows × columns) 4-bit storage locations. There are ×4, ×8, and ×16 DDR chips. The ×4 chips allow the use of advanced error-correction features like Chipkill, memory scrubbing and Intel SDDC in server environments, while the ×8 and ×16 chips are somewhat less expensive. ×8 chips are mainly used in desktops and notebooks but are making an entry into the server market. There are normally 4 banks, and only one row can be active in each bank.
Specification:
Double data rate (DDR) SDRAM specification From Ballot JCB-99-70, and modified by numerous other Board Ballots, formulated under the cognizance of Committee JC-42.3 on DRAM Parametrics.
Specification:
Standard No. 79 Revision Log: Release 1, June 2000; Release 2, May 2002; Release C, March 2003 – JEDEC Standard No. 79C. "This comprehensive standard defines all required aspects of 64Mb through 1Gb DDR SDRAMs with X4/X8/X16 data interfaces, including features, functionality, ac and dc parametrics, packages and pin assignments. This scope will subsequently be expanded to formally apply to x32 devices, and higher density devices as well." Organization PC3200 is DDR SDRAM designed to operate at 200 MHz using DDR-400 chips with a bandwidth of 3,200 MB/s. Because PC3200 memory transfers data on both the rising and falling clock edges, its effective clock rate is 400 MHz.
Specification:
1 GB PC3200 non-ECC modules are usually made with sixteen 512 Mbit chips, 8 on each side: (512 Mbit × 16 chips) / (8 bits per byte) = 1,024 MB. The individual chips making up a 1 GB memory module are usually organized as 2²⁶ 8-bit words, commonly expressed as 64M×8. Memory manufactured in this way is low-density RAM and is usually compatible with any motherboard specifying PC3200 DDR-400 memory.
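The arithmetic in the example above is easy to check programmatically; the following sketch (variable names are just for illustration) confirms that sixteen 64M×8 chips yield a 1,024 MB module:

```python
words_per_chip = 2 ** 26      # 64M addressable locations per chip
bits_per_word = 8             # x8 organization
chips_per_module = 16         # 8 chips on each side, non-ECC

chip_bits = words_per_chip * bits_per_word            # bits stored per chip
module_bytes = chips_per_module * chip_bits // 8      # total module capacity in bytes

print(chip_bits // 2**20, "Mbit per chip")            # 512 Mbit per chip
print(module_bytes // 2**20, "MB per module")         # 1024 MB per module
```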
Generations:
DDR (DDR1) was superseded by DDR2 SDRAM, which had modifications for a higher clock frequency and again doubled throughput, but operates on the same principle as DDR. Competing with DDR2 was Rambus XDR DRAM. DDR2 dominated due to cost and support factors. DDR2 was in turn superseded by DDR3 SDRAM, which offered higher performance for increased bus speeds and new features. DDR3 has been superseded by DDR4 SDRAM, which was first produced in 2011 and whose standards were still in flux (2012) with significant architectural changes.
Generations:
DDR's prefetch buffer depth is 2 (bits), while DDR2 uses 4. Although the effective clock rates of DDR2 are higher than those of DDR, overall performance was not greater in early implementations, primarily due to the high latencies of the first DDR2 modules. DDR2 started to become worthwhile by the end of 2004, as modules with lower latencies became available. Memory manufacturers stated that it was impractical to mass-produce DDR1 memory with effective transfer rates in excess of 400 MHz (i.e. 400 MT/s and a 200 MHz external clock) due to internal speed limitations. DDR2 picks up where DDR1 leaves off, utilizing internal clock rates similar to DDR1, but is available at effective transfer rates of 400 MHz and higher. DDR3 extended this approach, preserving internal clock rates while providing higher effective transfer rates by again doubling the prefetch depth.
Generations:
The DDR4 SDRAM is a high-speed dynamic random-access memory internally configured as 16 banks, 4 bank groups with 4 banks for each bank group for ×4/×8 and 8 banks, 2 bank groups with 4 banks for each bank group for ×16 DRAM.
Generations:
The DDR4 SDRAM uses an 8n prefetch architecture to achieve high-speed operation. The 8n prefetch architecture is combined with an interface designed to transfer two data words per clock cycle at the I/O pins. A single read or write operation for DDR4 SDRAM consists of a single 8n-bit-wide, 4-clock data transfer at the internal DRAM core and 8 corresponding n-bit-wide, half-clock-cycle data transfers at the I/O pins. RDRAM was a particularly expensive alternative to DDR SDRAM, and most manufacturers dropped its support from their chipsets. DDR1 memory prices increased substantially from Q2 2008, while DDR2 prices declined. In January 2009, 1 GB of DDR1 was 2–3 times more expensive than 1 GB of DDR2.
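As a rough model of the prefetch idea (purely illustrative, not a description of real controller or device logic), an 8n-prefetch device fetches eight bus-widths of data from the core in one internal access and then streams them out as eight consecutive beats, two per external clock:

```python
def burst_out(core_word: bytes, io_width_bytes: int = 1):
    """Split one wide internal core access into consecutive I/O beats."""
    assert len(core_word) % io_width_bytes == 0
    return [core_word[i:i + io_width_bytes]
            for i in range(0, len(core_word), io_width_bytes)]

# 8n prefetch with an n = 8-bit (x8) interface:
# one 64-bit core access -> eight 1-byte beats over four external clocks.
core_access = bytes(range(8))
print(burst_out(core_access))
```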
Generations:
Mobile DDR MDDR is an acronym that some enterprises use for Mobile DDR SDRAM, a type of memory used in some portable electronic devices, like mobile phones, handhelds, and digital audio players. Through techniques including reduced voltage supply and advanced refresh options, Mobile DDR can achieve greater power efficiency. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hydrogen highway**
Hydrogen highway:
A hydrogen highway is a chain of hydrogen-equipped public filling stations, along a road or highway, that allows hydrogen-powered cars to travel. It is an element of the hydrogen infrastructure that is generally assumed to be a prerequisite for mass adoption of hydrogen cars. For instance, William Clay Ford Jr. has stated that infrastructure is one of three factors (the others being cost and manufacturability in high volumes) that hold back the marketability of fuel cell cars.[1]
Supply issues, cost and pollution:
Hydrogen fueling stations generally receive deliveries of hydrogen by tanker truck from hydrogen suppliers. An interruption at a hydrogen supply facility can shut down multiple hydrogen fueling stations. A hydrogen fueling station costs between $1 million and $4 million to build. As of 2019, 98% of hydrogen is produced by steam methane reforming, which emits carbon dioxide. The bulk of hydrogen is also transported by truck, so its transportation emits pollution as well.
Existing public stations:
Asia At the end of 2012 there were 17 private hydrogen stations. In 2014, Japan got its first commercial hydrogen fueling station. As of June 2020, there were 178 publicly available hydrogen fuel stations in operation in Asia; 114 of these were in Japan.
Europe As of November 2014, there were 27 publicly available hydrogen fuel stations in operation in Western Europe. As of June 2020, there were more than 177 stations in Europe and 43 under construction; about half of these were in Germany.
Existing public stations:
United States In 2013, The New York Times reported that there were "10 hydrogen stations available to the public in the United States: one in Columbia, S.C., eight in Southern California and the one in Emeryville, California". As of September 2022, there were 54 publicly accessible hydrogen refueling stations in the US, 53 of which were located in California, and one in Hawaii. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Purse accessories**
Purse accessories:
Purse accessories are fashion accessories that are made specifically for handbags, to enhance their functionality or appearance.
Purse rain cover:
Purse rain covers, also widely known as Purse Raincoats, assist in waterproofing purses. They take the form of a waterproof cover that is worn on a purse to protect it from rain. The purse rain cover is commonly made out of waterproof fabrics such as PEVA or polyester. Some retailers offer them for free along with purchases of certain purses, such as Hermes Birkin handbags.
Purse organizer:
These assist in organizing, and ease finding objects inside purses, especially when they are overloaded. The purse organizer is inserted into the purse, and typically, has several pockets that can be used to group different items into separate groups, for example, electronics, make-up, and food and drinks pockets, thus, making it easier to find them. Purse organizers can be made out of plastic, although the more expensive ones are made out of leather.
Fur and charms:
Fur or charms can be added to purses, either as attachments, or as cosmetic covers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Conical pendulum**
Conical pendulum:
A conical pendulum consists of a weight (or bob) fixed on the end of a string or rod suspended from a pivot. Its construction is similar to an ordinary pendulum; however, instead of swinging back and forth along a circular arc, the bob of a conical pendulum moves at a constant speed in a circle or ellipse with the string (or rod) tracing out a cone. The conical pendulum was first studied by the English scientist Robert Hooke around 1660 as a model for the orbital motion of planets. In 1673 Dutch scientist Christiaan Huygens calculated its period, using his new concept of centrifugal force in his book Horologium Oscillatorium. Later it was used as the timekeeping element in a few mechanical clocks and other clockwork timing devices.
Uses:
During the 1800s, conical pendulums were used as the timekeeping element in a few clockwork timing mechanisms where a smooth motion was required, as opposed to the unavoidably jerky motion provided by ordinary pendulums. Two examples were mechanisms to turn the lenses of lighthouses to sweep their beams across the sea, and the location drives of equatorial mount telescopes, to allow the telescope to follow a star smoothly across the sky as the Earth turns.One of the most important uses of the conical pendulum was in the flyball governor (centrifugal governor) invented by James Watt in 1788 which regulated the speed of steam engines during the Steam Age in the 1800s. Some playground games, including totem tennis and tetherball, use a ball attached to a pole by a cord which functions as a conical pendulum, although in tetherball the pendulum gets shorter as the cord wraps around the pole. Some amusement park rides also act as conical pendulums.
Analysis:
Consider a conical pendulum consisting of a bob of mass m revolving without friction in a circle at a constant speed v on a string of length L at an angle of θ from the vertical.
There are two forces acting on the bob: the tension T in the string, which is exerted along the line of the string and acts toward the point of suspension.
Analysis:
the downward bob weight mg, where m is the mass of the bob and g is the local gravitational acceleration. The force exerted by the string can be resolved into a horizontal component, $T\sin\theta$, toward the center of the circle, and a vertical component, $T\cos\theta$, in the upward direction. From Newton's second law, the horizontal component of the tension in the string gives the bob a centripetal acceleration toward the center of the circle:

$$T\sin\theta = \frac{m v^2}{r}$$

Since there is no acceleration in the vertical direction, the vertical component of the tension in the string is equal and opposite to the weight of the bob:

$$T\cos\theta = mg$$

These two equations can be solved for $T/m$ and equated, thereby eliminating $T$ and $m$ and yielding the centripetal acceleration:

$$\frac{v^2}{r} = g\tan\theta$$

A little rearrangement gives:

$$v^2 = \frac{g r \sin\theta}{\cos\theta}$$

Since the speed of the pendulum bob is constant, it can be expressed as the circumference $2\pi r$ divided by the time $t$ required for one revolution of the bob:

$$v = \frac{2\pi r}{t}$$

Substituting the right side of this equation for $v$ in the previous equation, we find:

$$\frac{4\pi^2 r^2}{t^2} = \frac{g r \sin\theta}{\cos\theta}$$

Using the trigonometric identity $\tan\theta = \sin\theta/\cos\theta$ and solving for $t$, the time required for the bob to travel one revolution is

$$t = 2\pi\sqrt{\frac{r}{g\tan\theta}}$$

In a practical experiment, $r$ varies and is not as easy to measure as the constant string length $L$. $r$ can be eliminated from the equation by noting that $r$, $h$, and $L$ form a right triangle, with $\theta$ being the angle between the leg $h$ and the hypotenuse $L$ (see diagram). Therefore,

$$r = L\sin\theta$$

Substituting this value for $r$ yields a formula whose only varying parameter is the suspension angle $\theta$:

$$t = 2\pi\sqrt{\frac{L\cos\theta}{g}}$$

For small angles $\theta$, $\cos\theta \approx 1$; in which case

$$t \approx 2\pi\sqrt{\frac{L}{g}}$$

so that for small angles the period $t$ of a conical pendulum is equal to the period of an ordinary pendulum of the same length. Also, the period for small angles is approximately independent of changes in the angle $\theta$. This means the period of rotation is approximately independent of the force applied to keep it rotating. This property, called isochronism, is shared with ordinary pendulums and makes both types of pendulums useful for timekeeping. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
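A quick numerical check of the final formula (a minimal sketch; the parameter values are arbitrary) shows how weakly the period depends on the cone angle when the angle is small:

```python
import math

def conical_period(length_m: float, angle_deg: float, g: float = 9.81) -> float:
    """Period t = 2*pi*sqrt(L*cos(theta)/g) of a conical pendulum."""
    theta = math.radians(angle_deg)
    return 2 * math.pi * math.sqrt(length_m * math.cos(theta) / g)

for angle in (1, 5, 10, 30):
    print(f"L = 1 m, theta = {angle:2d} deg -> t = {conical_period(1.0, angle):.3f} s")
# At small angles the period stays close to 2*pi*sqrt(L/g), about 2.006 s for L = 1 m.
```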
**JAM notation**
JAM notation:
JAM is both the software and file format for representing music as human-readable and human-writable text.
Unlike ABC notation, another text-based music format, which is best suited to one-voice tunes, JAM is mainly focused on chords.
JAM notation:
Here is an example of JAM notation:

### LULLABY OF BIRDLAND
Dm7 Bm7-5 | E7 A7 | Dm7 / | Gm7 C7 F+7 F7 | Bb+7 Bbm7 | F+7 / | Em7-5 A7
Dm7 Bm7-5 | E7 A7 | Dm7 / | Gm7 C7 F+7 F7 | Bb+7 Bbm7 | F+7 C7 | F+7 /
Am7-5 D7 | Gm7 / | Gm7-5 C7 | F+7 /
Am7-5 D7 | Gm7 / | Gm7-5 C7 | F+7 A7
Dm7 Bm7-5 | E7 A7 | Dm7 / | Gm7 C7 F+7 F7 | Bb+7 Bbm7 | F+7 C7 | F+7

The software is proprietary, Windows-only freeware. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Venetoclax**
Venetoclax:
Venetoclax, sold under the brand names Venclexta and Venclyxto, is a medication used to treat adults with chronic lymphocytic leukemia (CLL), small lymphocytic lymphoma (SLL), or acute myeloid leukemia (AML). The most common side effects are low levels of neutrophils (a type of white blood cell), diarrhea, nausea, anemia (low red blood cell counts), nose and throat infection, and tiredness. Venetoclax attaches to a protein called Bcl-2. This protein is present in high amounts in CLL cancer cells, where it helps the cells survive for longer in the body and makes them resistant to cancer medicines. By attaching to Bcl-2 and blocking its actions, venetoclax causes the death of cancer cells and thereby slows down progression of the disease.
Medical uses:
CLL/SLL In the US, venetoclax is indicated for adults with chronic lymphocytic leukemia (CLL) or small lymphocytic lymphoma (SLL). The indication does not depend on mutation status (e.g. 17p deletion, IGHV mutation, 12+).
Medical uses:
In the EU, venetoclax monotherapy is indicated for the treatment of chronic lymphocytic leukaemia (CLL) in the presence of 17p deletion or TP53 mutation in adults who are unsuitable for or have failed a B cell receptor pathway inhibitor and for the treatment of CLL in the absence of 17p deletion or TP53 mutation in adults who have failed both chemoimmunotherapy and a B cell receptor pathway inhibitor.
Medical uses:
Other types of leukemia Venetoclax is also indicated as part of a combination therapy for acute myeloid leukemia (AML). For this purpose it is used with azacitidine, decitabine, or low-dose cytarabine for newly diagnosed adults who are age 75 years or older, or those with other health problems where intensive chemotherapy cannot be used.
Side effects:
Common side effects of venetoclax include neutropenia (low white blood cell count), nausea, anemia, diarrhea, upper respiratory tract infection, fatigue, and thrombocytopenia (low platelet count). Major side effects include tumor lysis syndrome and severe neutropenia. Additionally, this drug may cause fertility problems in males.
Pharmacology:
Mechanism of action Venetoclax is a BH3-mimetic. Venetoclax blocks the anti-apoptotic B-cell lymphoma-2 (Bcl-2) protein, leading to programmed cell death of CLL cells. Overexpression of Bcl-2 in some lymphoid malignancies has been linked to increased resistance to chemotherapy.
Pharmacology:
Pharmacokinetics The maximum plasma concentration after oral administration is reached 5–8 hours after the dose. The steady-state maximum concentration under low-fat meal conditions at the 400 mg once-daily dose was 2.1 ± 1.1 μg/mL. It is recommended that venetoclax be administered with a meal. The apparent volume of distribution of venetoclax is approximately 256–321 L. It is highly bound to human plasma protein; within a concentration range of 1–30 μM (0.87–26 μg/mL), the fraction unbound in plasma was less than 0.01. Venetoclax is metabolized by CYP3A4/5, as shown by in-vitro studies. Those using the drug should not consume grapefruit products because they contain CYP3A inhibitors. Additionally, while using venetoclax it is not recommended to take other drugs that are CYP3A inhibitors (e.g. erythromycin, ciprofloxacin, diltiazem, dronedarone, fluconazole, verapamil). Venetoclax is excreted from the body via the fecal route.
History:
In 2015, the United States Food and Drug Administration (FDA) granted the breakthrough therapy designation to venetoclax for people with CLL or SLL who have relapsed, become intolerant to, or are refractory to previous treatment. In April 2016, the FDA approved venetoclax for use in those with CLL who have 17p deletion (a deletion located on the short arm of chromosome 17) and who have been treated with at least one prior therapy. Based on overall response rate, the indication was approved under accelerated FDA approval. The efficacy of venetoclax was tested in a single-arm clinical trial of 106 participants with CLL who have a 17p deletion and who had received at least one prior therapy. Trial participants took venetoclax orally every day, beginning with 20 mg and increasing over a five-week period to 400 mg. Results showed that 80 percent of trial participants experienced a complete or partial remission of their cancer. The trial was conducted in the US, Canada, France, Germany, Poland, the United Kingdom, and Australia. The application for venetoclax was granted priority review and accelerated approval along with breakthrough therapy designation and orphan drug designation. Venetoclax was approved for use in the European Union in December 2016.

In June 2018, the FDA granted regular approval to venetoclax for people with CLL or small lymphocytic lymphoma (SLL), with or without 17p deletion, who have received at least one prior therapy. Approval was based on MURANO (NCT02005471), a randomized (1:1), multicenter, open-label trial of venetoclax with rituximab (VEN+R) versus bendamustine with rituximab (B+R) in 389 participants with CLL who had received at least one prior line of therapy. Participants in the VEN+R arm completed a 5-week ramp-up venetoclax schedule and then received venetoclax 400 mg once daily for 24 months measured from the rituximab start date. Rituximab was initiated after venetoclax ramp-up and given for 6 cycles (375 mg/m2 intravenously on cycle 1 day 1 and 500 mg/m2 intravenously on day 1 of cycles 2–6, with a 28-day cycle length). The comparator arm received 6 cycles of B+R (bendamustine 70 mg/m2 on days 1 and 2 of each 28-day cycle and rituximab at the above-described dose and schedule). The application for venetoclax in combination with rituximab was granted priority review along with a breakthrough therapy designation.

In November 2018, in the United States, venetoclax was approved in combination with azacitidine, decitabine, or low-dose cytarabine for the treatment of newly diagnosed acute myeloid leukemia (AML) in adults who are age 75 years or older, or who have comorbidities that preclude use of intensive induction chemotherapy. Accelerated approval was based on two open-label non-randomized trials in participants with newly diagnosed AML who were 75 years of age or older or had comorbidities that precluded the use of intensive induction chemotherapy. Efficacy was established based on the rate of complete remission (CR) and CR duration. Study M14-358 (NCT02203773) was a non-randomized, open-label clinical trial of venetoclax in combination with azacitidine (n=67) or decitabine (n=13) in newly diagnosed participants with AML. In combination with azacitidine, 25 participants achieved a CR (37%, 95% CI: 26, 50) with a median observed time in remission of 5.5 months (range: 0.4–30 months). In combination with decitabine, 7 participants achieved a CR (54%, 95% CI: 25, 81) with a median observed time in remission of 4.7 months (range: 1.0–18 months).

The observed time in remission is the time from the start of CR to the data cut-off date or relapse from CR. In a phase 3 study of azacitidine and venetoclax in untreated acute myeloid leukemia not eligible for standard induction chemotherapy, the addition of venetoclax to azacitidine resulted in an improvement in median overall survival (14.7 months versus 9.6 months) and improved complete remission rates. Study M14-387 (NCT02287233) was a non-randomized, open-label trial of venetoclax in combination with low-dose cytarabine (n=61) in newly diagnosed participants with AML, including participants with previous exposure to a hypomethylating agent for an antecedent hematologic disorder. In combination with low-dose cytarabine, 13 participants achieved a CR (21%, 95% CI: 12, 34) with a median observed time in remission of 6 months (range: 0.03–25 months).

In May 2019, the label was extended by accelerated approval to include all adults with CLL/SLL regardless of prior treatment or mutation status. Approval was based on CLL14 (NCT02242942), a randomized (1:1), multicenter, open-label, actively controlled trial of venetoclax in combination with obinutuzumab (VEN+G) versus obinutuzumab in combination with chlorambucil (GClb) in 432 participants with previously untreated CLL and coexisting medical conditions. The major efficacy outcome was progression-free survival (PFS) assessed by an independent review committee. The trial demonstrated a statistically significant improvement in PFS for participants who received VEN+G compared with those who received GClb (HR 0.33; 95% CI: 0.22, 0.51; p<0.0001). Median PFS was not reached in either arm after a median follow-up duration of 28 months. The overall response rate was 85% in the VEN+G arm compared to 71% in the GClb arm (p=0.0007). The trial also demonstrated statistically significant improvements in rates of minimal residual disease negativity (less than one CLL cell per 10⁴ leukocytes) in bone marrow and peripheral blood. Overall survival data were not mature at this analysis. The FDA used the Real-Time Oncology Review and Assessment Aid Pilot Program for this application and granted priority review as well as orphan drug and breakthrough therapy designations. Approval was granted 3.7 months ahead of the Prescription Drug User Fee Act (PDUFA) date.
Society and culture:
AbbVie Inc. manufactures Venclexta. It is marketed by both AbbVie and Genentech USA, which is a member of the Roche Group. AbbVie and Genentech jointly commercialize the drug within the United States, but only AbbVie has rights to do so outside the U.S. According to Reuters' 2016 Drugs to Watch, the 2020 forecast sales for venetoclax were US$1.48 billion. Competition, as well as potential for combination, is expected from other drugs such as ibrutinib and idelalisib, both of which were also approved in 2014 to treat CLL. Venetoclax is patented by AbbVie Inc.
Research:
As of 2016, venetoclax had been tested as a treatment for other hematological cancers, including non-Hodgkin's lymphoma, multiple myeloma, diffuse large B-cell lymphoma, and follicular lymphoma. On 13 June 2020, at the European Hematology Association (EHA) annual congress, AbbVie and Roche announced the results of a Phase III trial showing a 34 percent reduction in the risk of death among AML patients ineligible for intensive chemotherapy who were treated with venetoclax plus azacitidine, compared with azacitidine plus placebo. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Donkey milk**
Donkey milk:
Donkey milk (or ass milk, or jenny milk) is the milk from the domesticated donkey (Equus asinus). It has been used since antiquity for cosmetic purposes as well as infant nutrition.
History:
Donkey milk has been used by humans for alimentary and cosmetic purposes since Egyptian antiquity; doctors recommended it to treat several afflictions because of its perceived healing and cosmetic virtues. Hippocrates (460–370 BC) was the first to write of the medicinal use of donkey milk and prescribed it for numerous conditions, including poisoning, fevers, infectious diseases, edema, healing wounds, nosebleeds, and liver trouble. In the Roman era donkey milk was a recognized remedy; Pliny the Elder (23–79 AD), in his encyclopedic work Naturalis Historia, wrote extensively about its health benefits, i.e. to fight fever, fatigue, eye strain, weakened teeth, face wrinkles, poisonings, ulcerations, asthma and certain gynecological troubles. It was not until the Renaissance, however, that the first real scientific consideration was given to donkey milk. Georges-Louis Leclerc, Comte de Buffon (1707–1788), mentions the benefits of donkey milk in his Histoire naturelle, and Pauline Bonaparte (1780–1825), Napoleon's sister, is reported to have used donkey milk for skin care. In France in the nineteenth century, Dr. Parrot of the Hospital des Enfants Assistés spread the practice of bringing motherless babies directly to the donkey's nipple (Bulletin de l'Académie de médecine, 1882). Donkey milk was then sold until the twentieth century to feed orphaned infants and to cure delicate children, the sick and the elderly. For this reason, many donkeys are bred on farms in Greece, Italy, Belgium, Germany and Switzerland. Nowadays donkey milk is largely used in the manufacture of soaps and moisturizers, but new evidence shows its possible medical use, especially to treat, under the supervision of a doctor, infants and children with cow's milk protein allergy (CMPA), and, with appropriate precautions, as a natural "formula" for infants.
Production:
The donkey is considered a seasonally polyestrous animal, but the latitude at which the farm is located can greatly influence the reproductive cycle. The female is normally pregnant for about 12 months. Donkey milk production differs greatly from that of conventional dairy species, especially in terms of milk supply, which is much more limited. The equid mammary gland has a low capacity (max 2.5 L), part of the milk production should be left to the foal, and milking may be carried out two or three hours after separation from the foal. Donkeys should be milked three times a day from 20 to 90 days after foaling. A female gives between 0.5 and 1.3 litres of milk a day for about 6–7 months. The variability of donkey milk production is due to many factors, such as individual milkability, nutrition, genetics and management of reproduction, in addition to milking management. Generally, a donkey farm (breeding operation) aimed at milk production is small, with some tens of head and rarely more. In Europe, and specifically in Emilia-Romagna (Italy), there is only one very large donkey farm, with 800 head.
Composition:
Gross composition Published data on the gross composition of donkey milk confirm its closer resemblance to breast milk in lactose, protein and ash levels when compared with cow, sheep and goat milk. Despite donkey milk's high lactose content, its average fat content is too low for this purpose. When used in infant nutrition, donkey milk is therefore usually supplemented with vegetable oil (4 mL per 100 mL of milk) to match the energy content of human milk.
Composition:
The casein to whey protein ratio in donkey milk was lower compared to the value on cow milk.
Non-protein nitrogen (NPN) accounts for an average of 16% of total nitrogen in donkey milk, much closer to the value reported for human milk (20%) than to those of domestic ruminants (5%).
The amino acid profile of donkey milk proteins shows a percentage of essential amino acids (36.7 and 38.2 g amino acids/100 g protein) very similar to that of human milk proteins (40.7 g amino acids/100 g protein).
Composition:
Functional and bioactive components Among the functional proteins detected in donkey milk are molecules active in antimicrobial protection, such as lysozyme and lactoferrin. The lactoferrin content of donkey milk is intermediate between the lower values of cow milk and the higher values of human milk. Lactoferrin inhibits the growth of iron-dependent bacteria in the gastrointestinal tract; this restricts certain organisms, such as coliforms and yeast, that require iron. Lysozyme is present in donkey milk in large amounts, ranging from 1.0 mg/mL to 4 mg/mL depending on the analytical method used (chemical or microbiological); this substance is also present in human milk (0.12 mg/mL) but only in trace amounts in cow and goat milk. Lysozyme in donkey milk is highly thermostable, is very resistant to acid and protease, and may play a significant role in the intestinal immune response. Growth factors and hormones have also been determined in donkey mammary secretion, defatted or not. In detail, donkey mammary secretions contain human-like leptin at levels close to those of human milk (3.35 and 5.32 ng/mL milk). The bioactive peptides insulin-like growth factor 1, ghrelin and triiodothyronine were also found in frozen donkey milk. These molecules, and many others present in human milk, are increasingly receiving attention from a nutraceutical point of view because of their potential direct role in regulating food intake, metabolism, and infant body condition.
Nutritional use:
Natural hypoallergenic milk for infants with cows' milk protein allergy Pasteurized donkey milk is used as a natural hypoallergenic milk because it is tolerated by about 90% of infants with food allergies, e.g. cows' milk protein allergy (CMPA), a common food allergy in childhood with a prevalence of approximately 3% during the first 3 years of life. However, an infant's tolerance of donkey milk must first be evaluated individually, under medical supervision and after carrying out specific allergy tests. As a natural hypoallergenic formula it is preferred over those made from soy or protein hydrolysates because it has a pleasant taste and does not cause allergies in some people who also have allergic reactions to soy proteins or protein hydrolysates.

Natural infant "formula" Donkey's milk is similar to human milk in its lactose, protein, mineral and amino acid content.
Nutritional use:
In terms of energy despite the high lactose content of donkey milk the average fat content is lower if used predominantly before weaning.
Nutritional use:
When used in infant nutrition before weaning, donkey milk, because of its low fat content compared with breast milk, should, like all infant formulas, be supplemented with a source of fat, and particular attention must be given to essential fatty acids. Omega-3 and omega-6 fatty acids, particularly docosahexaenoic acid (DHA), are known to play an essential role in the development of the brain and retina. Intakes in pregnancy and early life affect growth and cognitive performance later in childhood, so ensuring adequate intakes of fat, essential fatty acids and especially DHA through these life stages is crucial, and cost-effective dietary sources of these fatty acids are needed to ensure adequate intakes in these populations. This supplementation can be achieved with essential fatty acid supplements (omega-3, omega-6) and vegetable oil certified for babies; this aspect is important to exclude the presence of spores that can pass the gastric mucosa in the first 4 months. For children who are not allergic to cow or goat milk, part of the fat can be compensated naturally by adding 1–2% of cow or goat butter. In any case, fats and essential fatty acids can also be supplied by combining donkey milk with artificial infant formulas.
Nutritional use:
From the point of view of hygienic-sanitary safety, like all milks, donkey milk and its ingredients must be pasteurized before taking; the process of pasteurizing donkey milk deactivates bacterial and viral contaminants.
Donkey milk contains immune-enhancing compounds (in particular lysozyme and lactoferrin) to help protect infants from disease. In addition, the flavour and appearance of donkey milk have been found to be attractive to children.
Diet supplement Donkey's milk is recommended for countering stomach acid, promoting the growth of intestinal flora, calming coughs and pertussis (a.k.a. whooping cough), and for use in the treatment of immune-mediated disorders.
Commercial forms:
Raw donkey milk Donkey milk that has simply been milked and cooled to refrigeration temperature. According to European legislation, like all milk of animal origin it must be pasteurized before being used, i.e. it must be heated at home to about 90 °C for at least 2 minutes.
Raw milk can be kept for 3 days at refrigerator temperature starting from the day of milking. To prolong conservation, raw milk can be frozen for up to 2-3 months. In any case, it must be thawed in the refrigerator and pasteurized before use.
Commercial forms:
Pasteurized donkey milk Donkey milk is pasteurized in a closed (aseptic) pasteurization and bottling circuit at at least 72 °C for 15 seconds, or equivalent time–temperature combinations. In the case of pasteurization in discontinuous systems, the temperature must be higher, depending on the method used, the type of plant and the destination.

Freeze-dried (lyophilized) donkey milk Donkey milk can be freeze-dried to preserve its biological quality, and thus its nutritional, functional and cosmetic properties. This is possible because in freeze-drying the milk is frozen and placed under vacuum at low temperatures; during this process the water is removed by sublimation. The result, approximately ten percent dry matter, is called lyophilized (freeze-dried) donkey milk. This powder is easy to reconstitute. The lyophilized product has to be packaged without any oxygen and has a shelf life of two years. It is normally produced from pasteurized donkey milk, so it is ready to use. In conclusion, lyophilization (freeze-drying) of donkey milk has been shown to retain the natural colour, flavours, nutrients and bioactive substances of fresh donkey milk. With the spray-drying method, another way of drying products, the milk is heated, whereby vitamins and other important bioactive substances are lost. In addition, freeze-dried milk does not require chemical preservatives and can be either consumed directly or rehydrated easily. However, because of its high cost, this method is practiced by only a few companies.
Commercial forms:
This product is easy to find in Italy and elsewhere in Europe, where it was first put on the market.
Fermented donkey milk (kumis) The use of fermented equid milk is an ancient tradition in central Asia; examples are kumis and airag, fermented mare's milk drinks very popular in Asia and Russia, but there are also traditional variants made from donkey milk. In Mongolia, where kumis is the national drink, people have a saying that 'kumis cures 40 diseases'.
Cosmetic use:
Cosmetics with donkey milk In recent years the cosmetic industry has focused mainly on products made with natural ingredients and has oriented itself towards sustainable consumption. Because of their natural origin, milk components meet the needs of cosmetology in many respects. A recent scientific study on a cream containing lyophilized donkey milk showed various benefits for the skin. These results are related to the effectiveness of donkey milk components such as proteins, minerals, vitamins, essential fatty acids, and bioactive enzymes and coenzymes, which give the skin balanced nourishment and proper hydration. In particular, the vitamin C content of donkey milk is almost 4 times that of cow's milk. Donkey milk contains more lactoferrin than cow milk and considerable amounts of lysozyme, from 1.0 mg/mL to 4 mg/mL (depending on the analytical method used: chemical or microbiological), whereas cow's milk contains only traces. For this reason it has the potential, when properly formulated, to reduce skin problems such as eczema, acne, psoriasis and herpes, and, as reported by some authors, to calm irritation symptoms.
Cosmetic use:
Some authors have preliminarily evaluated whether the use of a face cream made from donkey milk affected the perception of some sensory aspects. The results showed that the cream was appreciated by consumers with dry skin for the following sensory aspects: spreadability, overall appearance, smoothness, moisturisation and overall effectiveness. The overall judgement was also highest for the face cream made with donkey milk. Today, donkey milk is still used in the manufacture of soaps and creams.
Cosmetic use:
History It is said that Cleopatra, Queen of Ancient Egypt, took baths in donkey milk to preserve the beauty and youth of her skin. Legend has it that no fewer than 700 donkeys were needed to provide the quantity of milk necessary for her daily bath. This was also the case for Poppaea Sabina (30–65), second wife of the Roman Emperor Nero, who is referred to in Pliny's description of the virtues of ass milk for the skin: "It is generally believed that ass milk effaces wrinkles in the face, renders the skin more delicate, and preserves its whiteness: and it is a well-known fact, that some women are in the habit of washing their face with it seven times daily, strictly observing that number. Poppaea, the wife of the Emperor Nero, was the first to practise this; indeed, she had sitting-baths, prepared solely with ass milk, for which purpose whole troops of she-asses used to attend her on her journeys." The Roman poet Ovid (43 BC – 18 AD), in his poem Medicamina Faciei Femineae, also suggests beauty masks made with donkey milk.
Cosmetic use:
Pauline Bonaparte (1780–1825), Napoleon's sister, is also reported to have used ass milk for her skin's health care.
Cosmetic use:
Traditional Medicine Much of the "medicinal" use of equid milk (donkey and mare) is based on tradition. The accuracy and clarity of results obtained with the scientific method are certainly to be appreciated; however, scientific studies on the beneficial effects of equid milk against particular pathologies are often lacking. Popular or traditional medicine is defined as medicine that follows traditions rather than the scientific method, and comprises the set of medical practices predating industrial medicine (founded with the establishment of large pharmaceutical companies). The scientific method has weaknesses and limitations as much as any other method, so in the meantime data deriving from cultural experience should not be underestimated. Many of these practices have become rooted in popular knowledge and tradition. The first written documents reporting the nutritional and "curative" effects of equine milk date back around 2000 years. Herodotus, in the 5th century BC, already mentioned it as a nutritious drink. Hippocrates (460–370 BC), the father of medicine, was the first to describe the medicinal virtues of donkey milk; he prescribed it for numerous ailments, such as liver problems, edemas, nosebleeds, poisonings, infectious diseases, the healing of sores, and fevers. In Roman times, donkey milk was used as a universal remedy: Pliny the Elder (23–79 AD), in his encyclopedic work Naturalis Historia, widely described its health benefits. In particular, Pliny writes of 54 medicinal uses of donkey milk, ranging from its use as an anti-venom or as relief for external irritations (itching) to its use in an ointment for the eyes. He states that donkey milk is the most effective as a medicine, followed by cow's milk and then goat's milk. During the Renaissance, donkey milk received its first real scientific consideration from the learned men of the time, when Francis I, king of France, on the advice of his doctors, used donkey milk to recover from a long illness. There are various testimonials concerning the effectiveness of donkey milk. The famous French naturalist Georges-Louis Leclerc (1707–1788) underlined the benefits of donkey milk in his Histoire Naturelle. Some effects have also been supported by systematic and scientific studies starting from the mid-1800s, especially by Russian doctors.
Cosmetic use:
It is worth remembering that donkey milk and mare's milk are very similar, so similar properties are assumed; in knowledge based on tradition, therefore, donkey and mare's milk are often spoken of indistinctly as equid milk.
Cosmetic use:
The beneficial effects of equine milk, from the first historical sources to the present day, are said to concern: the lungs and the entire respiratory system; the entire digestive system, including the liver; metabolism; the skin, directly and indirectly through the intestine; and the hematopoietic organs. It was generally described as a food capable of regenerating a weakened, emaciated, impoverished organism in an unusually short time, allowing the body to achieve better resistance. It was used by the Asian (Mongolian) equestrian peoples, often as the only source of food for long periods of time and during heavy physical exertion, without the body developing symptoms of deficiency. Under Genghis Khan the Mongols established the largest world empire ever; they moved on their horses across the steppes, deserts and mountains, covered in a few days distances that normally required weeks of travel, and for long periods lived mainly on the milk of their mares, both fresh and fermented (kumis). Around 1850, various Russian doctors observed the habits of the shepherds of the Bashkirian steppe. They reported that the Bashkirs and Tatars spent the winter in very unfavorable environmental conditions, with temperatures down to minus 60 °C, severe winter storms, and very little or no food. Weakened nomads regained their strength unusually quickly as soon as they fed on mare's milk. Russian doctors observed in the 19th century that tuberculosis was practically non-existent among the steppe nomads. Doctors attributed this to fermented mare's milk being the staple food of the steppe people. When this became known in Russia, a migration of tuberculosis patients from Russia to the Asian steppes began. The treatment was initially "wild", without medical supervision. From 1850 the first sanatoriums were founded and treatments were organized along systematic, medical-scientific lines; the importance of kumis treatment of tuberculosis in Russia lasted until about 1970, when it was gradually replaced by modern medicine. Nevertheless, kumis treatment was the most effective tuberculosis therapy for many years. Treatment with kumis and mare's milk was extended to other diseases in Russia and Kazakhstan over the decades: non-tuberculous diseases of the respiratory system (e.g. pneumonia, all forms of bronchitis), diseases of the digestive system (inflammations and stomach and duodenal ulcers, inflammatory bowel disease), liver disease (all forms of liver inflammation, e.g. hepatitis up to cirrhosis of the liver, dyslipidaemia), various forms of anemia, and all forms of debilitating and exhausting disease, irrespective of cause (e.g. major operations, cancer, burns, immunodeficiencies), as well as, more rarely and to a lesser extent, conditions accompanying surgical, gynecological and urological diseases, in both adults and children. Language barriers and cultural differences still prevent an exchange between the Western cultural area and these cultures today; however, Russia and Kazakhstan are still conducting scientific research on the effects of equine milk and kumis on humans. Postnikov, a Russian doctor who dedicated his entire life to the research and use of horse milk in the mid-19th century, summed up its effects in three words: Nourishes: gives the body the ability to better absorb and use food.
Cosmetic use:
Strengthens: strengthens and stimulates the functional activity of the organs.
Modifies: changes and renews the body's metabolic functions towards a healthy and normal state. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Black level**
Black level:
Video black level is defined as the level of brightness at the darkest (black) part of a visual image or the level of brightness at which no light is emitted from a screen, resulting in a pure black screen.
Black level:
Video displays generally need to be calibrated so that the displayed black is true to the black information in the video signal. If the black level is not correctly adjusted, visual information in a video signal could be displayed as black, or black information could be displayed as above black information (gray). The voltage of the black level varies across different television standards. PAL sets the black level the same as the blanking level, while NTSC sets the black level approximately 54 mV above the blanking level. User misadjustment of black level on monitors is common. It results in darker colors having their hue changed, it affects contrast, and in many cases causes some of the image detail to be lost.
Black level:
Black level is set by displaying a testcard image and adjusting the display controls. With CRT displays, "brightness" adjusts the black level and "contrast" adjusts the white level; CRTs tend to have some interdependence between controls, so a control sometimes needs adjustment more than once. In digital video, black level usually refers to the range of RGB values in the video signal, which can be either [0..255] ("normal" or full range, typical of computer output) or [16..235] ("low" or limited range, standard for video). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
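One common way to convert between the two ranges for 8-bit video is to scale by 219/255 and offset by 16, so that full-range black (0) maps to code 16 and full-range white (255) maps to code 235. The following sketch assumes that standard scaling; the function names are made up for illustration.

```python
def full_to_limited(v: int) -> int:
    """Map a full-range [0..255] code value to limited/video range [16..235]."""
    return 16 + round(v * 219 / 255)

def limited_to_full(v: int) -> int:
    """Map a limited-range [16..235] code value back to full range [0..255]."""
    return max(0, min(255, round((v - 16) * 255 / 219)))

print(full_to_limited(0), full_to_limited(255))   # 16 235 – black and white levels
print(limited_to_full(16), limited_to_full(235))  # 0 255
```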
**Image circle**
Image circle:
The image circle is the cross section of the cone of light transmitted by a lens or series of lenses onto the image plane. When this light strikes a perpendicular target such as photographic film or a digital camera sensor, it forms a circle of light – the image circle. Various sensor aspect ratios, such as 3:2, 4:3 and 16:9, may be used, all fitting inside the same image circle.
Image circle:
A lens to be used on a camera that provides movements must have an image circle larger than the size of the image format (Adams 1980, 54). To avoid vignetting, a photographer using a view camera must ensure that the area remains within the image circle (Adams 1980, 56–57; 151–52; 157–61); a tilt/shift lens or perspective-control lens used on a small- or medium-format camera usually has mechanical limitations that keep the frame area within the image circle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Comparison of orthotics**
Comparison of orthotics:
Comparison of orthotics stems from podiatrists having molded custom orthotics to address patients' foot malformations. Over the years they have developed numerous means to create the basis for their molds: plaster casts, foam box impressions, and three-dimensional computer imaging. None is very accurate: all produce a proper fit less than 80% of the time. Traditionally, orthotics were created from plaster casts made from the patient's foot. These casts were made by wrapping dipped plaster or fiberglass strips around the foot to capture its form, then letting the material dry and harden. Once the cast had hardened, the doctor would carefully remove it from the patient's foot and ship it, along with a prescription, to an orthotics lab, which would use the negative of the cast to create an orthopedic insert. Research studies demonstrate that inter-practitioner variability is a major factor in orthotic intervention in treating a single patient and for a specific pathology. Recently, several companies have developed digital foot scanners that use specialized software to scan a patient's foot and create a "virtual" cast. These scans are made by having the patient place the foot onto a specialized flat image scanner that uses light and software to capture and create a 3D model. This 3D model is then electronically submitted (along with a prescription) to an orthotics lab, where it is used to program a CNC machine that will ultimately produce the orthopedic insert.
Styles:
Manufacturers of these products choose various materials.
Firm supports stay in one exact position.
Flexible supports maintain the arch positions while moving with the foot through the stride.
Styles:
Soft supports might use materials such as foam rubber of varying firmness, memory foam, EVA, carbon fiber, silicone gel or filled leather. Because they are soft, their contour is less relevant; instead, these tend to flatten, serving as shock absorbers. They give the proprioception of support, causing muscles to trigger in response, without the true articulated support of the firmer models. Many shoe manufacturers, including makers of athletic shoes, include similar pads with their shoes. Some products are rubber pads shaped for a specific problem spot; some of those include a wrapping apparatus to hold them in place. Currently, there is a paucity of research providing recommendations on the type of orthotic, or the material used in its construction, for different patient requirements. The firm or flexible models might require a period of adjustment. Depending on the severity of the arch collapse and the body's previous conditioning in response to that collapse, sudden readjustment can seem painful; many liken the feeling to walking on a walnut. It is recommended that new users build up to wearing firm arch supports, starting with only a couple of hours the first day and adding an hour each successive day until the foot is adjusted to full-time use. To ease this adjustment period, many manufacturers sell covering pads or offer different gradations to build up to solid support. Some manufacturers cover their products in leather, which somewhat moderates the intensity of the correction while also adding to the stylistic look. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Amateur radio homebrew**
Amateur radio homebrew:
Homebrew is an amateur radio slang term for home-built, noncommercial radio equipment. Design and construction of equipment from first principles is valued by amateur radio hobbyists, known as "hams", for educational value, and to allow experimentation and development of techniques or levels of performance not readily available as commercial products. Some items can be home-brewed at similar or lower cost than purchased equivalents.
History:
In the early years of amateur radio, long before factory-built gear was easily available, hams built their own transmitting and receiving equipment, a practice known as homebrewing. In the 1930s, 40s, and 50s, hams handcrafted reasonable-quality vacuum-tube-based transmitters and receivers, often housed in their basements, and it was common for a well-built "homebrew rig" to cover all the high-frequency bands (1.8 to 30 MHz). After WWII ended, surplus material (transmitters, receivers, etc.) was readily available, providing previously unavailable equipment at costs low enough for amateur experimental use. Homebrewing was often encouraged by amateur radio publications. In 1950, CQ Amateur Radio Magazine announced a ‘‘$1000 Cash Prize ‘Home Brew’ Contest’’ and called independently built equipment ‘‘the type of gear which has helped to make amateur radio our greatest reservoir of technical proficiency.’’ The magazine tried to steer hams back into building by sponsoring such competitions and by publishing more construction plans, saying that homebrewing imparted a powerful technical mastery to hams. In 1958, a CQ editorial opined that if ham radio lost status as a technical activity, it might also lose the privilege of operating on the public airwaves, saying, ‘‘As our ranks of home constructors thin we also fall to a lower technical level as a group’’. In the 1950s and 60s, some hams turned to constructing their stations from kits sold by Heathkit, Eico, EF Johnson, Allied Radio's Knight-Kit, World Radio Laboratories and other suppliers. Today, only a minority of hams own and operate completely homebrew or kit-built amateur stations. However, there are many new ham radio kit suppliers, and the "art" of homebrewing is alive and thriving.
Practices:
Homebrewing differs from kit-building in that "homebrew" connotes the process of constructing equipment using parts and designs gathered from varied and often improvised sources. Even the most skilled homebrewer may not have time or resources to build the equivalent of modern commercially made amateur radio gear from scratch, as the commercial units contain custom integrated circuits, custom cabinets, and are the result of multiple prototypes and exhaustive testing. However, constructing one's own equipment using relatively simple designs and easily obtainable or junk box electronic components is still possible. Homebrew enthusiasts say that building one's own radio equipment is fun and gives them the satisfaction that comes from mastering electronic knowledge.
QRP homebrew:
QRPers are ham radio enthusiasts known to use a power output of five watts, sometimes operating with as little as 100 milliwatts or even less. Extremely low power—one watt and below—is often referred to by hobbyists as QRPp. Commercial transceivers designed to operate at or near-QRP power levels have been available for many years, but some QRPers prefer to design and build their own equipment, either from kits or from scratch. Some have built miniature transmitters and transceivers into Altoids boxes and operate using battery power. Popular QRP kit models include the Elecraft K2, KX1, and now KX3 and those produced by NorCal, Small Wonder Labs, and others. QRP activity can often be heard on 7.030 MHz.
Homebrewing with vacuum tubes:
"Glowbug" is a term used by US amateurs to describe a simple home-made tube-type radio set, reminiscent of the shortwave radio-building craze of the 1920s and 30s. Generally, any small, home-built tube-type transmitter or receiver may be referred to as a glowbug. The majority of glowbug transmitters are designed to be used in the CW radiotelegraphy mode. A number of radio amateurs also build their own tube receivers and AM voice transmitters.As late as the 1960s, glowbugs were part of many beginner ham stations because of their simple, tube-based designs. Glowbugs are popular among QRP enthusiasts and others with a penchant for constructing their own equipment. Enthusiasts may assemble glowbugs on steel chassis, tin cakepans, and wooden boards. Glowbug enthusiasts can often be heard communicating on the shortwave bands via CW using Morse code. Simple oscillators for this frequency can be built with common NTSC color burst oscillator crystals, which operate at 3.5795 MHz. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Charge transfer switch**
Charge transfer switch:
A charge transfer switch (CTS) charge pump is a charge pump that offers better low-voltage performance, "a better voltage pumping gain and a higher output voltage" than earlier charge pumps such as the Dickson charge pump. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CNAME record**
CNAME record:
A Canonical Name (CNAME) record is a type of resource record in the Domain Name System (DNS) that maps one domain name (an alias) to another (the canonical name). This can prove convenient when running multiple services (like an FTP server and a web server, each running on different ports) from a single IP address. One can, for example, use CNAME records to point ftp.example.com and www.example.com to the DNS entry for example.com, which in turn has an A record which points to the IP address. Then, if the IP address ever changes, one only has to record the change in one place within the network: in the DNS A record for example.com.
CNAME record:
CNAME records must always point to another domain name, never directly to an IP address.
Details:
DNS CNAME records are specified in RFC 1034 and clarified in Section 10 of RFC 2181.
Details:
CNAME records are handled specially in the domain name system, and have several restrictions on their use. When a DNS resolver encounters a CNAME record while looking for a regular resource record, it will restart the query using the canonical name instead of the original name. (If the resolver is specifically told to look for CNAME records, the canonical name (right-hand side) is returned, rather than restarting the query.) The canonical name that a CNAME record points to can be anywhere in the DNS, whether local or on a remote server in a different DNS zone.
Details:
For example, if there is a DNS zone as follows:
NAME               TYPE   VALUE
---------------------------------------------
bar.example.com.   CNAME  foo.example.com.
foo.example.com.   A      192.0.2.23
When an A record lookup for bar.example.com is carried out, the resolver will see a CNAME record and restart the lookup at foo.example.com, and will then return 192.0.2.23.
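To make the restart behaviour concrete, here is a minimal, illustrative Python sketch of a resolver loop over a toy in-memory zone; the dictionary layout and the function name are our own and are not how a production resolver is implemented:

```python
# Toy zone: maps (name, record type) to a value, mirroring the example zone above.
ZONE = {
    ("bar.example.com.", "CNAME"): "foo.example.com.",
    ("foo.example.com.", "A"): "192.0.2.23",
}

def resolve(name, rdtype, max_chain=8):
    """Look up a record, restarting at the canonical name whenever a CNAME is found."""
    for _ in range(max_chain):                 # guard against CNAME loops
        if (name, rdtype) in ZONE:
            return ZONE[(name, rdtype)]
        if (name, "CNAME") in ZONE:
            name = ZONE[(name, "CNAME")]       # restart the query at the canonical name
            continue
        return None
    raise RuntimeError("CNAME chain too long")

print(resolve("bar.example.com.", "A"))        # -> 192.0.2.23
```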
Details:
Possible confusion: With a CNAME record, one can point a name such as "bar.example.com" to "foo.example.com." Because of this, during casual discussion the bar.example.com. (left-hand) side of a DNS entry can be incorrectly identified as "the CNAME" or "a CNAME." However, this is inaccurate. The canonical (true) name of "bar.example.com." is "foo.example.com." Because CNAME stands for Canonical Name, the right-hand side is the actual "CNAME"; it sits on the same side where an address ("A") record's data would appear.
Details:
This confusion is specifically mentioned in RFC 2181, "Clarifications to the DNS Specification." The left-hand label is an alias for the right-hand side (the RDATA portion), which is (or should be) a canonical name. In other words, a CNAME record like "bar.example.com. CNAME foo.example.com." may be read as: bar.example.com is an alias for the canonical name (CNAME) foo.example.com. A client will request bar.example.com and the answer will be foo.example.com.
DNAME record:
A DNAME record or Delegation Name record is defined by RFC 6672 (original RFC 2672 is now obsolete). A DNAME record creates an alias for an entire subtree of the domain name tree. In contrast, the CNAME record creates an alias for a single name and not its subdomains. Like the CNAME record, the DNS lookup will continue by retrying the lookup with the new name. The name server synthesizes a CNAME record to actually apply the DNAME record to the requested name—CNAMEs for every node on a subtree have the same effect as a DNAME for the entire subtree.
DNAME record:
For example, if there is a DNS zone as follows:
foo.example.com.        DNAME  bar.example.com.
xyzzy.bar.example.com.  A      192.0.2.24
*.bar.example.com.      A      192.0.2.25
An A record lookup for foo.example.com will return no data because a DNAME is not a CNAME and there is no A record directly at foo.
However, a lookup for xyzzy.foo.example.com will be DNAME mapped and return the A record for xyzzy.bar.example.com, which is 192.0.2.24; if the DNAME record had been a CNAME record, this request would have returned name not found.
Lastly, a request for foobar.foo.example.com would be DNAME mapped and return 192.0.2.25.
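The subtree rewriting that a DNAME performs can be illustrated with a short, hypothetical Python sketch; the record constants and the helper function are illustrative only, and the wildcard is stood in for by an explicit record:

```python
# A DNAME at foo.example.com. redirects the whole subtree below it to bar.example.com.
DNAME_OWNER = "foo.example.com."
DNAME_TARGET = "bar.example.com."

A_RECORDS = {
    "xyzzy.bar.example.com.": "192.0.2.24",
    "foobar.bar.example.com.": "192.0.2.25",   # stands in for the *.bar.example.com. wildcard
}

def apply_dname(qname):
    """Rewrite a query name under the DNAME owner to the corresponding name under the target."""
    suffix = "." + DNAME_OWNER
    if qname.endswith(suffix):                  # only names *below* the owner are rewritten
        prefix = qname[: -len(suffix)]
        return prefix + "." + DNAME_TARGET
    return qname

print(apply_dname("xyzzy.foo.example.com."))               # -> xyzzy.bar.example.com.
print(A_RECORDS[apply_dname("foobar.foo.example.com.")])   # -> 192.0.2.25
```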
ANAME record:
Several managed DNS platforms implement a non-standard ALIAS or ANAME record type. These pseudo records are managed by DNS administrators like CNAME records, but are published and resolved by (some) DNS clients like A records. ANAME records are typically configured to point to another domain, but when queried by a client, answer with an IP address. While the ANAME record type was submitted for standardization, there are other non-conforming implementations, so these records can do whatever the owner of the DNS platform chooses, including existing at the apex of a zone and existing for domains that receive mail.
ANAME record:
The main advantage of ANAME records over CNAME records is that they can be used at the zone apex, whereas the DNS standards do not allow a CNAME record at the zone apex, since it would conflict with the SOA and NS records required there.
ANAME record:
Also, while a DNS client requires at least two queries to resolve a CNAME to an A record and then to an IP address, an ANAME shifts the second and subsequent queries to the server. If the DNS server can resolve the A record and cache the requested IP address more efficiently and with less latency than its DNS clients can, then the DNS client gets the resolution faster.
ANAME record:
The ANAME record type was submitted as a draft standard to the IETF; however, the latest draft document expired in January 2020 and has been superseded by a series of proposals, the most recent being the one for the SVCB and HTTPS record types. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Photonic integrated circuit**
Photonic integrated circuit:
A photonic integrated circuit (PIC) or integrated optical circuit is a microchip containing two or more photonic components which form a functioning circuit. This technology detects, generates, transports, and processes light. Photonic integrated circuits utilize photons (or particles of light) as opposed to electrons that are utilized by electronic integrated circuits. The major difference between the two is that a photonic integrated circuit provides functions for information signals imposed on optical wavelengths typically in the visible spectrum or near infrared (850–1650 nm).
Photonic integrated circuit:
The most commercially utilized material platform for photonic integrated circuits is indium phosphide (InP), which allows for the integration of various optically active and passive functions on the same chip. Initial examples of photonic integrated circuits were simple 2-section distributed Bragg reflector (DBR) lasers, consisting of two independently controlled device sections – a gain section and a DBR mirror section. Consequently, all modern monolithic tunable lasers, widely tunable lasers, externally modulated lasers and transmitters, integrated receivers, etc. are examples of photonic integrated circuits. As of 2012, devices integrate hundreds of functions onto a single chip. Pioneering work in this arena was performed at Bell Laboratories. The most notable academic centers of excellence of photonic integrated circuits in InP are the University of California at Santa Barbara, USA, the Eindhoven University of Technology and the University of Twente in the Netherlands.
Photonic integrated circuit:
A 2005 development showed that silicon can, even though it is an indirect bandgap material, still be used to generate laser light via the Raman nonlinearity. Such lasers are not electrically driven but optically driven and therefore still necessitate a further optical pump laser source.
History:
Photonics is the science behind the detection, generation, and manipulation of photons. According to quantum mechanics and the concept of wave-particle duality first proposed by Albert Einstein in 1905, light acts as both an electromagnetic wave and a particle. For example, total internal reflection in an optical fibre allows it to act as a waveguide.
History:
Integrated circuits using electrical components were first developed in the late 1940s and early 1950s, but it took until 1958 for them to become commercially available. When the laser and laser diode were invented in the 1960s, the term 'photonics' fell into more common usage to describe the application of light to replace applications previously achieved through the use of electronics.
History:
By the 1980s, photonics gained traction through its role in fibre optic communication. At the start of the decade, an assistant in a new research group at Delft University of Technology, Meint Smit, started pioneering in the field of integrated photonics. He is credited with inventing the Arrayed Waveguide Grating (AWG): a core component of modern digital connections for the Internet and phones. Smit has received several awards, including an ERC Advanced Grant, a Rank Prize for Optoelectronics and a LEOS Technical Achievement Award. Thanks to the pioneering work of both Meint Smit and Ton Backx over the last few decades, the Dutch integrated photonics sector has risen to prominence. Backx has been appointed Knight in the Order of the Netherlands Lion for, among other things, his role in reforming the Department of Electrical Engineering at Eindhoven University of Technology and in founding both the Institute of Photonic Integration and PhotonDelta. In October 2022, during an experiment held at the Technical University of Denmark in Copenhagen, a photonic chip transmitted 1.84 petabits per second of data over a fibre optic cable more than 7.9 kilometres long. First, the data stream was split into 37 sections, each of which was sent down a separate core of the fibre-optic cable. Next, each of these channels was split into 223 parts corresponding to equidistant spikes of light across the spectrum.
Comparison to electronic integration:
Unlike electronic integration where silicon is the dominant material, photonic integrated circuits have been fabricated from a variety of material systems, including electro-optic crystals such as lithium niobate, silica on silicon, silicon on insulator, various polymers and semiconductor materials which are used to make semiconductor lasers such as GaAs and InP. The different material systems are used because they each provide different advantages and limitations depending on the function to be integrated. For instance, silica (silicon dioxide) based PICs have very desirable properties for passive photonic circuits such as AWGs (see below) due to their comparatively low losses and low thermal sensitivity; GaAs- or InP-based PICs allow the direct integration of light sources; and silicon PICs enable co-integration of the photonics with transistor-based electronics. The fabrication techniques are similar to those used in electronic integrated circuits in which photolithography is used to pattern wafers for etching and material deposition. Unlike electronics where the primary device is the transistor, there is no single dominant device. The range of devices required on a chip includes low loss interconnect waveguides, power splitters, optical amplifiers, optical modulators, filters, lasers and detectors. These devices require a variety of different materials and fabrication techniques making it difficult to realize all of them on a single chip. Newer techniques using resonant photonic interferometry are making way for UV LEDs to be used for optical computing requirements with much cheaper costs, leading the way to petahertz consumer electronics.
Examples of photonic integrated circuits:
The primary application for photonic integrated circuits is in the area of fiber-optic communication though applications in other fields such as biomedical and photonic computing are also possible.
Examples of photonic integrated circuits:
Arrayed waveguide gratings (AWGs), which are commonly used as optical (de)multiplexers in wavelength division multiplexed (WDM) fiber-optic communication systems, are an example of a photonic integrated circuit that has replaced previous multiplexing schemes which utilized multiple discrete filter elements. Since separating optical modes is a need for quantum computing, this technology may be helpful to miniaturize quantum computers (see linear optical quantum computing).
Examples of photonic integrated circuits:
Another example of a photonic integrated chip in wide use today in fiber-optic communication systems is the externally modulated laser (EML) which combines a distributed feed back laser diode with an electro-absorption modulator on a single InP based chip.
Applications:
As global data consumption rises and demand for faster networks continues to grow, the world needs to find more sustainable solutions to the energy crisis and climate change. At the same time, ever more innovative applications for sensor technology, such as Lidar in autonomous driving vehicles, appear on the market. There is a need to keep pace with technological challenges.
Applications:
The expansion of 5G data networks and data centres, safer autonomous driving vehicles, and more efficient food production cannot be sustainably met by electronic microchip technology alone. However, combining electrical devices with integrated photonics provides a more energy efficient way to increase the speed and capacity of data networks, reduce costs and meet an increasingly diverse range of needs across various industries.
Applications:
Data and telecommunications: The primary application for PICs is in the area of fibre-optic communication. Arrayed waveguide gratings (AWGs), which are commonly used as optical (de)multiplexers in wavelength division multiplexed (WDM) fibre-optic communication systems, are an example of a photonic integrated circuit. Another example in fibre-optic communication systems is the externally modulated laser (EML), which combines a distributed feedback laser diode with an electro-absorption modulator. For instance, EFFECT Photonics develops affordable and high-performance optical communications solutions, such as SFP+ optical transceivers, which help meet the demand for bandwidth and faster data transfer.
Applications:
PICs can also increase bandwidth and data transfer speeds by deploying few-mode optical planar waveguides. This is especially true if modes can be easily converted from conventional single-mode planar waveguides into few-mode waveguides and the desired modes can be selectively excited. For example, a bidirectional spatial mode slicer and combiner can be used to achieve the desired higher or lower-order modes. Its principle of operation depends on cascading stages of V-shape and/or M-shape graded-index planar waveguides.
Applications:
Not only can PICs increase bandwidth and data transfer speeds, they can reduce energy consumption in data centres, which spend a large proportion of energy on cooling servers. Compared with solely electronic solutions, PICs generate far less heat and can mitigate the need for cooling, reducing energy consumption. For example, QuiX Quantum develops quantum photonic processors which enable quantum photonic computers to operate at room temperature leading to a reduction in size and cost.
Applications:
Healthcare and medicine Using advanced biosensors and creating more affordable diagnostic biomedical instruments, integrated photonics opens the door to lab-on-a-chip (LOC) technology, cutting waiting times, and taking diagnosis out of laboratories and into the hands of doctors and patients. Based on an ultrasensitive photonic biosensor, SurfiX Diagnostics' diagnostics platform provides a variety of point-of-care tests. Similarly, Amazec Photonics has developed a fibre optic sensing technology with photonic chips which enables high-resolution temperature sensing (fractions of 0.1 milliKelvin) without having to inject the temperature sensor within the body. This way, medical specialists are able to measure both cardiac output and circulating blood volume from outside the body. Another example of optical sensor technology is EFI's 'OptiGrip' device, which offers greater control over tissue feeling for minimal invasive surgery.
Applications:
Automotive and engineering applications: PICs can be applied in sensor systems, like Lidar (which stands for light detection and ranging), to monitor the surroundings of vehicles. They can also be deployed for in-car connectivity through Li-Fi, which is similar to WiFi but uses light. This technology facilitates communication between vehicles and urban infrastructure to improve driver safety. For example, some modern vehicles pick up traffic signs and remind the driver of the speed limit.
Applications:
In terms of engineering, fibre optic sensors can be used to detect different quantities, such as pressure, temperature, vibrations, accelerations, and mechanical strain. Sensing technology from PhotonFirst uses integrated photonics to measure things like shape changes in aeroplanes, electric vehicle battery temperature, and infrastructure strain.
Applications:
Agriculture and food Sensors play a role in innovations in agriculture and the food industry in order to reduce wastage and detect diseases. Light sensing technology powered by PICs can measure variables beyond the range of the human eye, allowing the food supply chain to detect disease, ripeness and nutrients in fruit and plants. It can also help food producers to determine soil quality and plant growth, as well as measuring CO2 emissions. A new, miniaturised, near-infrared sensor, developed by MantiSpectra, is small enough to fit into a smartphone, and can be used to analyse chemical compounds of products like milk and plastics.
Types of fabrication and materials:
The fabrication techniques are similar to those used in electronic integrated circuits, in which photolithography is used to pattern wafers for etching and material deposition.
The platforms considered most versatile are indium phosphide (InP) and silicon photonics (SiPh): Indium phosphide (InP) PICs have active laser generation, amplification, control, and detection. This makes them an ideal component for communication and sensing applications.
Silicon nitride (SiN) PICs have a vast spectral range and ultra low-loss waveguide. This makes them highly suited to detectors, spectrometers, biosensors, and quantum computers. The lowest propagation losses reported in SiN (0.1 dB/cm down to 0.1 dB/m) have been achieved by LioniX International's TriPleX waveguides.
Types of fabrication and materials:
Silicon photonics (SiPh) PICs provide low losses for passive components like waveguides and can be used in minuscule photonic circuits. They are compatible with existing electronic fabrication. The term "silicon photonics" actually refers to the technology rather than the material. It combines high density photonic integrated circuits (PICs) with complementary metal oxide semiconductor (CMOS) electronics fabrication. The most technologically mature and commercially used platform is silicon on insulator (SOI).
Types of fabrication and materials:
Other platforms include: Lithium niobate (LiNbO3) is an ideal material for low-loss modulators. It is highly effective at matching fibre input–output due to its low index and broad transparency window. For more complex PICs, lithium niobate can be formed into large crystals. As part of project ELENA, there is a European initiative to stimulate production of LiNbO3-PICs. Attempts are also being made to develop lithium niobate on insulator (LNOI).
Types of fabrication and materials:
Silica has a low weight and small form factor. It is a common component of optical communication networks, such as planar light wave circuits (PLCs).
Types of fabrication and materials:
Gallium arsenide (GaAs) has high electron mobility. This means GaAs transistors operate at high speeds, making them ideal analogue integrated circuit drivers for high speed lasers and modulators. By combining and configuring different chip types (including existing electronic chips) in a hybrid or heterogeneous integration, it is possible to leverage the strengths of each. Taking this complementary approach to integration addresses the demand for increasingly sophisticated energy-efficient solutions.
Developers:
Public–private partnerships, such as PhotonDelta in Europe and the American Institute for Manufacturing Integrated Photonics in the United States, also provide end-to-end supply chains and ecosystems to help kickstart and scale companies working within integrated photonics.
Developers:
Organizations specializing in different types of fabrication and R&D: Smart Photonics (Netherlands) is a foundry for indium phosphide (InP); Ligentec (Switzerland) is a foundry for silicon nitride (SiN); LioniX International (Netherlands) is an organization specializing in silicon nitride (SiN); AMF (Singapore) and VTT (Finland) are foundries for silicon photonics (SiPh); GlobalFoundries (United States) and Tower Semiconductor (Israel) are foundries for silicon photonics (SiPh); Lightelligence is a 2017 startup that began at MIT.
Developers:
Salience Labs, the photonic computing company.
Current status:
As of 2010, photonic integration was an active topic in U.S. Defense contracts. It was selected by the Optical Internetworking Forum for inclusion in 100 gigahertz optical networking standards. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dinic's algorithm**
Dinic's algorithm:
Dinic's algorithm or Dinitz's algorithm is a strongly polynomial algorithm for computing the maximum flow in a flow network, conceived in 1970 by Israeli (formerly Soviet) computer scientist Yefim (Chaim) A. Dinitz. The algorithm runs in O(|V|2|E|) time and is similar to the Edmonds–Karp algorithm, which runs in O(|V||E|2) time, in that it uses shortest augmenting paths. The introduction of the concepts of the level graph and blocking flow enable Dinic's algorithm to achieve its performance.
History:
Yefim Dinitz invented this algorithm in response to a pre-class exercise in Adelson-Velsky's algorithms class. At the time he was not aware of the basic facts regarding the Ford–Fulkerson algorithm. Dinitz mentions inventing his algorithm in January 1969, which was published in 1970 in the journal Doklady Akademii Nauk SSSR. In 1974, Shimon Even and (his then Ph.D. student) Alon Itai at the Technion in Haifa were very curious and intrigued by Dinitz's algorithm as well as Alexander V. Karzanov's related idea of blocking flow. However, it was hard for them to decipher these two papers, each being limited to four pages to meet the restrictions of the journal Doklady Akademii Nauk SSSR. Even did not give up, and after three days of effort managed to understand both papers except for the layered network maintenance issue. Over the next couple of years, Even gave lectures on "Dinic's algorithm", mispronouncing the name of the author while popularizing it. Even and Itai also contributed to this algorithm by combining BFS and DFS, which is how the algorithm is now commonly presented. For about 10 years after the Ford–Fulkerson algorithm was invented, it was unknown if it could be made to terminate in polynomial time in the general case of irrational edge capacities. This caused a lack of any known polynomial-time algorithm to solve the max flow problem in generic cases. Dinitz's algorithm and the Edmonds–Karp algorithm (published in 1972) both independently showed that in the Ford–Fulkerson algorithm, if each augmenting path is the shortest one, then the length of the augmenting paths is non-decreasing and the algorithm always terminates.
Definition:
Let G=((V,E),c,f,s,t) be a network with c(u,v) and f(u,v) the capacity and the flow of the edge (u,v) , respectively.
Definition:
The residual capacity is a mapping cf : V×V → R+ defined as cf(u,v) = c(u,v) − f(u,v) if (u,v) ∈ E; cf(u,v) = f(v,u) if (v,u) ∈ E; and cf(u,v) = 0 otherwise. The residual graph is an unweighted graph Gf = ((V, Ef), cf|Ef, s, t), where Ef = {(u,v) ∈ V×V : cf(u,v) > 0}. An augmenting path is an s–t path in the residual graph Gf. Define dist(v) to be the length of the shortest path from s to v in Gf. Then the level graph of Gf is the graph GL = ((V, EL), cf|EL, s, t), where EL = {(u,v) ∈ Ef : dist(v) = dist(u) + 1}. A blocking flow is an s–t flow f′ such that the graph G′ = ((V, EL′), s, t) with EL′ = {(u,v) : f′(u,v) < cf|EL(u,v)} contains no s–t path.
Algorithm:
Dinic's Algorithm. Input: a network G = ((V,E), c, s, t). Output: an s–t flow f of maximum value.
1. Set f(e) = 0 for each e ∈ E.
2. Construct the level graph GL from the residual graph Gf of G.
3. If dist(t) = ∞, stop and output f.
4. Find a blocking flow f′ in GL.
5. Augment the flow f by f′ and go back to step 2.
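As a concrete illustration of these steps, here is a compact, unoptimized Python sketch of Dinic's algorithm; the adjacency-list representation, class layout, and variable names are our own and are meant only to mirror the pseudocode above:

```python
from collections import deque

class Dinic:
    """Illustrative Dinic's max-flow: BFS builds the level graph, DFS finds a blocking flow."""
    def __init__(self, n):
        self.n = n
        self.graph = [[] for _ in range(n)]     # each edge stored as [to, capacity, index of reverse edge]

    def add_edge(self, u, v, cap):
        self.graph[u].append([v, cap, len(self.graph[v])])
        self.graph[v].append([u, 0, len(self.graph[u]) - 1])   # residual edge with zero capacity

    def _bfs(self, s, t):
        # Step 2: construct the level graph over residual edges.
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in self.graph[u]:
                if cap > 0 and self.level[v] == -1:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] != -1              # step 3: stop when t is unreachable

    def _dfs(self, u, t, pushed):
        # Step 4: advance only along level-increasing edges, sending flow until blocked.
        if u == t:
            return pushed
        while self.it[u] < len(self.graph[u]):
            v, cap, rev = self.graph[u][self.it[u]]
            if cap > 0 and self.level[v] == self.level[u] + 1:
                d = self._dfs(v, t, min(pushed, cap))
                if d > 0:
                    self.graph[u][self.it[u]][1] -= d
                    self.graph[v][rev][1] += d
                    return d
            self.it[u] += 1
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):
            self.it = [0] * self.n              # per-node edge pointers for the blocking flow
            while True:
                pushed = self._dfs(s, t, float("inf"))
                if pushed == 0:
                    break
                flow += pushed                  # step 5: augment f by the blocking flow
        return flow

# Example: source 0, sink 3.
# d = Dinic(4); d.add_edge(0, 1, 3); d.add_edge(0, 2, 2); d.add_edge(1, 2, 1)
# d.add_edge(1, 3, 2); d.add_edge(2, 3, 3); print(d.max_flow(0, 3))  # -> 5
```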
Analysis:
It can be shown that the number of layers in each blocking flow increases by at least 1 each time and thus there are at most |V|−1 blocking flows in the algorithm. For each of them: the level graph GL can be constructed by breadth-first search in O(E) time, and a blocking flow in the level graph GL can be found in O(VE) time, giving a total running time of O(E + VE) = O(VE) for each layer. As a consequence, the running time of Dinic's algorithm is O(V²E). Using a data structure called dynamic trees, the running time of finding a blocking flow in each phase can be reduced to O(E log V) and therefore the running time of Dinic's algorithm can be improved to O(VE log V). Special cases: In networks with unit capacities, a much stronger time bound holds. Each blocking flow can be found in O(E) time, and it can be shown that the number of phases does not exceed O(√E) and O(V^(2/3)). Thus the algorithm runs in O(min{V^(2/3), E^(1/2)}·E) time. In networks that arise from the bipartite matching problem, the number of phases is bounded by O(√V), therefore leading to the O(√V·E) time bound. The resulting algorithm is also known as the Hopcroft–Karp algorithm. More generally, this bound holds for any unit network: a network in which each vertex, except for source and sink, either has a single entering edge of capacity one, or a single outgoing edge of capacity one, and all other capacities are arbitrary integers.
Example:
The following is a simulation of Dinic's algorithm. In the level graph GL, the vertices with labels in red are the values dist(v). The paths in blue form a blocking flow. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mandelkubb**
Mandelkubb:
Mandelkubb is a traditional Swedish bitter almond cookie characterized by a bittersweet flavor. Its distinct flavor is derived from bitter almonds.
The pastry is made with flour, sugar, eggs, butter, bitter almonds, ammonium carbonate, and leavening agents. They are often garnished with nib sugar. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GM2 gangliosidoses**
GM2 gangliosidoses:
The GM2 gangliosidoses are a group of three related genetic disorders that result from a deficiency of the enzyme beta-hexosaminidase. This enzyme catalyzes the biodegradation of fatty acid derivatives known as gangliosides. The diseases are better known by their individual names: Tay–Sachs disease, AB variant, and Sandhoff disease.
GM2 gangliosidoses:
Beta-hexosaminidase is a vital hydrolytic enzyme, found in the lysosomes, that breaks down lipids. When beta-hexosaminidase is no longer functioning properly, the lipids accumulate in the nervous tissue of the brain and cause problems. Gangliosides are made and biodegraded rapidly in early life as the brain develops. Except in some rare, late-onset forms, the GM2 gangliosidoses are fatal. All three disorders are rare in the general population. Tay–Sachs disease has become famous as a public health model because an enzyme assay test for TSD was discovered and developed in the late 1960s and early 1970s, providing one of the first "mass screening" tools in medical genetics. It became a research and public health model for understanding and preventing all autosomal genetic disorders. Tay–Sachs disease, AB variant, and Sandhoff disease might easily have been defined together as a single disease, because the three disorders are associated with failure of the same metabolic pathway and have the same outcome. Classification and naming for many genetic disorders reflects history, because most diseases were first observed and classified based on biochemistry and pathophysiology before genetic diagnosis was available. However, the three GM2 gangliosidoses were discovered and named separately. Each represents a distinct molecular point of failure in a subunit that is required for activation of the enzyme.
Tay–Sachs disease:
Tay–Sachs disease is a rare autosomal recessive genetic disorder that causes a progressive deterioration of nerve cells and of mental and physical abilities that begins around six months of age and usually results in death by the age of four. It is the most common of the GM2 gangliosidoses. The disease occurs when harmful quantities of cell membrane gangliosides accumulate in the brain's nerve cells, eventually leading to the premature death of the cells.
Sandhoff disease:
Sandhoff disease is a rare, autosomal recessive metabolic disorder that causes progressive destruction of nerve cells in the brain and spinal cord. The disease results from mutations on chromosome 5 in the HEXB gene, critical for the lysosomal enzymes beta-N-acetylhexosaminidase A and B. Sandhoff disease is clinically indistinguishable from Tay–Sachs disease. The most common form, infantile Sandhoff disease, is usually fatal by early childhood.
AB variant:
GM2-gangliosidosis, AB variant is a rare, autosomal recessive metabolic disorder that causes progressive destruction of nerve cells in the brain and spinal cord. Mutations in the GM2A gene cause AB variant. The GM2A gene provides instructions for making a protein called the GM2 activator. This protein is a cofactor that is required for the normal function of beta-hexosaminidase A. The disease is usually fatal by early childhood.
Treatment:
There are no authorized therapies for the treatment of the GM2 Gangliosidosis (Tay-Sachs and Sandhoff disease). The current standard of care for GM2 Gangliosidosis disease is limited to supportive care and aimed at providing adequate nutrition and hydration. This supportive care may substantially improve the quality of life of people affected by GM2. The therapeutic team may include specialists in neurology, pulmonology, gastroenterology, psychiatry, orthopaedics, nutrition, physical therapy and occupational therapy.
Treatment:
N-Acetyl-Leucine: N-Acetyl-Leucine is an orally administered, modified amino acid that is being developed as a novel treatment for multiple rare and common neurological disorders by IntraBio Inc (Oxford, United Kingdom). N-Acetyl-Leucine has been granted multiple orphan drug designations from the U.S. Food & Drug Administration (FDA) and the European Medicines Agency (EMA) for the treatment of various genetic diseases, including GM2 Gangliosidosis (Tay-Sachs and Sandhoff). The US FDA has granted IntraBio a Rare Pediatric Disease Designation for N-Acetyl-Leucine for the treatment of GM2 Gangliosidosis. Compassionate use studies in both Tay-Sachs and Sandhoff patients have demonstrated the positive clinical effects of treatment with N-Acetyl-Leucine for GM2 Gangliosidosis. These studies further demonstrated that the treatment is well tolerated, with a good safety profile. A multinational clinical trial investigating N-Acetyl-L-Leucine for the treatment of GM2 Gangliosidosis (Tay-Sachs and Sandhoff) began in 2019. Recruitment is ongoing. IntraBio is also conducting parallel clinical trials with N-Acetyl-L-Leucine for the treatment of Niemann-Pick disease type C and Ataxia-Telangiectasia. Future opportunities to develop N-Acetyl-Leucine include Lewy body dementia, amyotrophic lateral sclerosis, restless leg syndrome, multiple sclerosis, and migraine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CDH11**
CDH11:
Cadherin-11 is a protein that in humans is encoded by the CDH11 gene.
Function:
This gene encodes a type II classical cadherin from the cadherin superfamily, integral membrane proteins that mediate calcium-dependent cell-cell adhesion. Mature cadherin proteins are composed of a large N-terminal extracellular domain, a single membrane-spanning domain, and a small, highly conserved C-terminal cytoplasmic domain. Type II (atypical) cadherins are defined based on their lack of a HAV cell adhesion recognition sequence specific to type I cadherins. Expression of this particular cadherin in osteoblastic cell lines, and its upregulation during differentiation, suggests a specific function in bone development and maintenance. The mammalian CDH-11 homologues are termed calsyntenin.
Relevance to cancer:
CDH11 is overexpressed in 15% of breast cancers and seems essential to tumour progression in some other cancer types.
Drug interactions:
Arthritis drug celecoxib binds to CDH11.
Interactions:
CDH11 has been shown to interact with CDH2. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Castner–Kellner process**
Castner–Kellner process:
The Castner–Kellner process is a method of electrolysis on an aqueous alkali chloride solution (usually sodium chloride solution) to produce the corresponding alkali hydroxide, invented by American Hamilton Castner and Austrian Carl Kellner in the 1890s.
Due to lower energy costs and fewer environmental concerns, the Castner–Kellner process is gradually being replaced with membrane electrolysis.
History:
The first patent for electrolyzing brine was granted in England in 1851 to Charles Watt. His process was not an economically feasible method for producing sodium hydroxide though because it could not prevent the chlorine that formed in the brine solution from reacting with its other constituents. American chemist and engineer, Hamilton Castner, solved the mixing problem with the invention of the mercury cell and was granted a U.S. patent in 1894. Austrian chemist, Carl Kellner arrived at a similar solution at about the same time. In order to avoid a legal battle they became partners in 1895, founding the Castner-Kellner Alkali Company, which built plants employing the process throughout Europe. The mercury cell process continues in use to this day. Current-day mercury cell plant operations are criticized for environmental release of mercury leading in some cases to severe mercury poisoning (as occurred in Japan). Due to these concerns, mercury cell plants are being phased out, and a sustained effort is being made to reduce mercury emissions from existing plants.
Process details:
The apparatus shown is divided into two types of cells separated by slate walls. The first type, shown on the right and left of the diagram, uses an electrolyte of sodium chloride solution, a graphite anode (A), and a mercury cathode (M). The other type of cell, shown in the center of the diagram, uses an electrolyte of sodium hydroxide solution, a mercury anode (M), and an iron cathode (D). The mercury electrode is common between the two cells. This is achieved by having the walls separating the cells dip below the level of the electrolytes but still allow the mercury to flow beneath them. The reaction at the anode (A) is: 2 Cl− → Cl2 + 2 e−. The chlorine gas that results vents at the top of the outside cells where it is collected as a byproduct of the process. The reaction at the mercury cathode in the outer cells is: Na+ + e− → Na (amalgam). The sodium metal formed by this reaction dissolves in the mercury to form an amalgam. The mercury conducts the current from the outside cells to the center cell. In addition, a rocking mechanism (B, shown by a fulcrum on the left and a rotating eccentric on the right) agitates the mercury to transport the dissolved sodium metal from the outside cells to the center cell.
Process details:
The anode reaction in the center cell takes place at the interface between the mercury and the sodium hydroxide solution.
Process details:
2 Na (amalgam) → 2 Na+ + 2 e−. Finally, at the iron cathode (D) of the center cell the reaction is: 2 H2O + 2 e− → 2 OH− + H2. The net effect is that the concentration of sodium chloride in the outside cells decreases and the concentration of sodium hydroxide in the center cell increases. As the process continues, some sodium hydroxide solution is withdrawn from the center cell as output product and is replaced with water. Sodium chloride is added to the outside cells to replace what has been electrolyzed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dual code**
Dual code:
In coding theory, the dual code of a linear code C ⊂ Fq^n is the linear code defined by C⊥ = {x ∈ Fq^n ∣ ⟨x,c⟩ = 0 for all c ∈ C}, where ⟨x,c⟩ = ∑_{i=1}^n x_i c_i is a scalar product. In linear algebra terms, the dual code is the annihilator of C with respect to the bilinear form ⟨⋅,⋅⟩. The dimension of C and its dual always add up to the length n: dim C + dim C⊥ = n.
Dual code:
A generator matrix for the dual code is the parity-check matrix for the original code and vice versa. The dual of the dual code is always the original code.
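As a small numerical check of this relationship, the sketch below uses plain NumPy over GF(2) with the [7,4] Hamming code as an assumed example: it builds a generator matrix in standard form and verifies that the dual code's generator matrix is the original code's parity-check matrix.

```python
import numpy as np

# Generator matrix of the [7,4] binary Hamming code in standard form G = [I_4 | A].
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=int)
G = np.hstack([np.eye(4, dtype=int), A])

# Generator matrix of the dual code (= parity-check matrix of the original).
# Over GF(2) the sign in [-A^T | I_{n-k}] disappears.
H = np.hstack([A.T % 2, np.eye(3, dtype=int)])

# Every codeword of C is orthogonal to every codeword of the dual:
assert np.all((G @ H.T) % 2 == 0)

# dim C + dim C-dual = n
assert G.shape[0] + H.shape[0] == G.shape[1]
print("dual-code relations verified")
```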
Self-dual codes:
A self-dual code is one which is its own dual. This implies that n is even and dim C = n/2. If a self-dual code is such that each codeword's weight is a multiple of some constant c>1 , then it is of one of the following four types: Type I codes are binary self-dual codes which are not doubly even. Type I codes are always even (every codeword has even Hamming weight).
Self-dual codes:
Type II codes are binary self-dual codes which are doubly even.
Type III codes are ternary self-dual codes. Every codeword in a Type III code has Hamming weight divisible by 3.
Type IV codes are self-dual codes over F4. These are again even.Codes of types I, II, III, or IV exist only if the length n is a multiple of 2, 8, 4, or 2 respectively.
If a self-dual code has a generator matrix of the form G = [Ik | A], then the dual code C⊥ has generator matrix [−Ā^T | Ik], where Ik is the (n/2)×(n/2) identity matrix and ā = a^q ∈ Fq, the bar denoting this conjugation applied entrywise to A. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Immunoglobulin superfamily**
Immunoglobulin superfamily:
The immunoglobulin superfamily (IgSF) is a large protein superfamily of cell surface and soluble proteins that are involved in the recognition, binding, or adhesion processes of cells. Molecules are categorized as members of this superfamily based on shared structural features with immunoglobulins (also known as antibodies); they all possess a domain known as an immunoglobulin domain or fold. Members of the IgSF include cell surface antigen receptors, co-receptors and co-stimulatory molecules of the immune system, molecules involved in antigen presentation to lymphocytes, cell adhesion molecules, certain cytokine receptors and intracellular muscle proteins. They are commonly associated with roles in the immune system. Otherwise, the sperm-specific protein IZUMO1, a member of the immunoglobulin superfamily, has also been identified as the only sperm membrane protein essential for sperm-egg fusion.
Immunoglobulin domains:
Proteins of the IgSF possess a structural domain known as an immunoglobulin (Ig) domain. Ig domains are named after the immunoglobulin molecules. They contain about 70-110 amino acids and are categorized according to their size and function. Ig-domains possess a characteristic Ig-fold, which has a sandwich-like structure formed by two sheets of antiparallel beta strands. Interactions between hydrophobic amino acids on the inner side of the sandwich and highly conserved disulfide bonds formed between cysteine residues in the B and F strands, stabilize the Ig-fold.
Classification:
The Ig-like domains can be classified as IgV, IgC1, IgC2, or IgI. Most Ig domains are either variable (IgV) or constant (IgC).
IgV: IgV domains with 9 beta strands are generally longer than IgC domains with 7 beta strands.
IgC1 and IgC2: Ig domains of some members of the IgSF resemble IgV domains in the amino acid sequence, yet are similar in size to IgC domains. These are called IgC2 domains, while standard IgC domains are called IgC1 domains.
IgI: Other Ig domains exist that are called intermediate (I) domains.
Members:
The Ig domain was reported to be the most populous family of proteins in the human genome with 765 members identified. Members of the family can be found even in the bodies of animals with a simple physiological structure such as poriferan sponges. They have also been found in bacteria, where their presence is likely to be due to divergence from a shared ancestor of eukaryotic immunoglobulin superfamily domains. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**State of Emergency (video game)**
State of Emergency (video game):
State of Emergency is a beat 'em up video game developed by VIS Entertainment and published by Rockstar Games for PlayStation 2 and Xbox, and by Global Star Software for Microsoft Windows.
Plot:
In 2023, the United States government is weakened by an economic crisis. In response, the American Trade Organization, most commonly known as "The Corporation", builds a para-militaristic force and overthrows the government, taking over the United States of America and establishing a corporatised totalitarian police state. Years later, in 2035, an underground resistance named "Freedom" begins a campaign of resistance and soon sparks a national riot. The Corporation declares a state of emergency. The game takes place in Capital City. The player joins Freedom in an attempt to overthrow The Corporation; they must play as one of a selection of five characters, who each have their own unique backgrounds.
Characters:
The player is given a choice of characters to play as; however, only two are available at first and the others need to be unlocked.
Characters:
Roy MacNeil (AKA "Mack") - MacNeil was a cop for fifteen years before he was fired from the force for refusing to open fire on a group of rioters looting a Corporation grocery store. A highly respected and high-ranking officer, MacNeil's dismissal resulted in a city-wide strike of protest by his fellow officers. The Corporation retaliated by replacing the entire force with their own security firm. MacNeil and his fellow officers continued to protest, eventually uncovering evidence that Corporation was funding organized crime outfits to harass non-Corporation businesses. When the ex-officers attempted to air their findings, key members were assassinated, prompting MacNeil to go into hiding. He has since been instrumental in organizing the underground revolutionary group, Freedom.
Characters:
Anna Price (AKA "Libra") - Anna was an up-and-coming criminal attorney who once believed in the Corporation model of rebuilding America, until she was blackmailed by the security forces to falsify evidence against political prisoners. When Anna refused to railroad her clients and threatened to expose Corporation, she paid a horrible price. A car bomb planted by Corporation thugs killed her husband and daughter, but she survived the blast. Believed to be officially dead, Anna has now hooked up with Freedom seeking revenge on the fascist state.
Characters:
Hector Soldado (AKA "Spanky") - Spanky remembers the way his neighbourhood used to be. Although it wasn't always the nicest place to live, it now seems like paradise compared to the veritable war zone that it has become. Overrun by state-sponsored death squads and gangs, The Corporation has launched a war of attrition against the forgotten people of Spanky's neighbourhood. Once a hardened gang member, this charismatic figure has now turned his efforts to organizing his community into an armed resistance against the Corporation.
Characters:
Ricky Thang (AKA "Freak" or "Phreak") - Ricky was orphaned in high school when his parents were arrested as political dissidents and were never seen again. Ricky went into hiding with other runaways before he was recruited by agents of Freedom. As Ricky's father used to run a PC repair shop, he has been using PC's since he was a child, and has become a prolific hacker and phone phreak. He is personally responsible for several attacks against Corporation's infrastructure, which first attracted the Freedom movement to put the hacker to good use.
Characters:
Edward Raymonds (AKA "The Bull") - The Bull graduated from Mesa High and joined the armed forces before the Corporation takeover. He was discovered by a sports agent while playing for the army's football team, and was recruited to play professionally. The Bull enjoyed a legendary career until the Corporation bought the professional football league. When he refused to participate in Corporation-sponsored match fixing, he was framed for illegal drug-use, and served five years in prison. With his future bleak and reputation destroyed, The Bull was ready to go take out as many of the men responsible as he could. The Bull now uses his experience to train new recruits to Freedom and lead organized attacks against the Corporation.
Reception:
By July 2006, the PlayStation 2 version of State of Emergency had sold 700,000 copies and earned $28 million in the United States. Next Generation ranked it as the 90th highest-selling game launched for the PlayStation 2, Xbox or GameCube between January 2000 and July 2006 in that country. Combined sales of the State of Emergency line reached 900,000 units in the United States by July 2006. State of Emergency received "mixed or average" reviews on all platforms according to video game review aggregator Metacritic. The game's strengths were considered to be the value for money as a budget title, the simplistic fun offered, the technical achievement of having hundreds of people running around on a modest system, and the satirical sense of humor. Weaknesses cited include gameplay that might be considered too simple, and a poor multiplayer mode on the PC.
Reception:
FHM gave the PS2 version a score of four stars out of five and called it "Manic, frenzied and violent gaming at its gripping best." The Cincinnati Enquirer also gave the same version four stars out of five and stated that "This new 'bad boy' of the video game industry is extremely fun to play — for those old enough and mature enough to purchase it — as a campy stress releaser at the end of a bad day." However, Maxim gave the same version a score of six out of ten and said, "Such virtual destruction may once have seemed innocent, but these days the whole thing hits a little close to home." Entertainment Weekly gave said version a C, saying, "Four levels are too few to stay interesting for all 175-plus missions, which are too bloody repetitive." The Village Voice gave the Xbox version a similar score of five out of ten and said, "In 'Revolution' mode—a series of nearly identical, frustrating mini-missions—the jackbooted thugs, now armed with pistols, make life much tougher. (Deeply flawed camera views don't help.) What's the point if you can't steal your family some diapers?" Before its release, the game was denounced by Washington state politicians for its similarity to the real-life 1999 World Trade Organization riots and protests in Seattle which caused $3 million in damages. The game features the fictional "American Trade Organization" as the antagonistic establishment. It was a runner-up for GameSpot's annual "Best Graphics (Technical) on PlayStation 2" award, which went to Ratchet & Clank.
Sequel:
A sequel, State of Emergency 2, was released in 2006. This game was once again developed by VIS Entertainment; during production, the company became insolvent and the game was completed by DC Studios before being released by SouthPeak Games. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Netprimer**
Netprimer:
NetPrimer is a gratis web-based tool used for analysing primers used in PCR to amplify a DNA sequence. The software predicts the melting temperature of the primers using the nearest neighbor thermodynamic algorithm. The accurate prediction of the melting temperature (Tm) is one of the most important factors that governs the success of a PCR reaction. NetPrimer also analyzes the thermodynamically important secondary structures such as hairpins, self and cross dimers, runs and repeats. These structures significantly affect the primer efficiency and therefore the success of a PCR reaction.
Netprimer:
NetPrimer can be used to determine the best primer pairs for a given set of experimental conditions. The program assigns a rating to each primer analyzed. The rating is based on the proximity of the thermodynamic parameters to their ideal scores.
In addition to the primer quality, its molecular weight and optical activity (both in nmol/A260 & µg/A260) are also presented for quantitation. Primers are analyzed for their GC% (Guanine-Cytosine content). This important parameter determines their annealing strength.
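As an illustration of the kind of quantities such tools report, the following Python sketch computes the GC% and a rough melting temperature for a primer using the simple Wallace rule of thumb; note that NetPrimer itself uses the more accurate nearest-neighbor thermodynamic method, which is not reproduced here, and the primer sequence below is made up:

```python
def primer_stats(seq):
    """Return (GC percentage, Wallace-rule Tm estimate in deg C) for a short primer."""
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    at = seq.count("A") + seq.count("T")
    gc_percent = 100.0 * gc / len(seq)
    tm_wallace = 2 * at + 4 * gc      # rule of thumb intended for primers under ~14 nt
    return gc_percent, tm_wallace

print(primer_stats("AGCGTAGCTAGCTA"))  # -> (50.0, 42)
```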
Business model:
Although Netprimer is provided without charge, it is not free software. Users must register for access and thereby receive advertising of Premier Biosoft's other products. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Shelling (topology)**
Shelling (topology):
In mathematics, a shelling of a simplicial complex is a way of gluing it together from its maximal simplices (simplices that are not a face of another simplex) in a well-behaved way. A complex admitting a shelling is called shellable.
Definition:
A d-dimensional simplicial complex is called pure if its maximal simplices all have dimension d. Let Δ be a finite or countably infinite simplicial complex. An ordering C1, C2, … of the maximal simplices of Δ is a shelling if the complex Bk := (⋃_{i=1}^{k−1} Ci) ∩ Ck is pure and of dimension dim Ck − 1 for all k = 2, 3, …. That is, the "new" simplex Ck meets the previous simplices along some union Bk of top-dimensional simplices of the boundary of Ck. If Bk is the entire boundary of Ck then Ck is called spanning.
Definition:
For Δ not necessarily countable, one can define a shelling as a well-ordering of the maximal simplices of Δ having analogous properties.
Properties:
A shellable complex is homotopy equivalent to a wedge sum of spheres, one for each spanning simplex of corresponding dimension.
A shellable complex may admit many different shellings, but the number of spanning simplices and their dimensions do not depend on the choice of shelling. This follows from the previous property.
Examples:
Every Coxeter complex, and more generally every building (in the sense of Tits), is shellable. The boundary complex of a (convex) polytope is shellable. Note that here, shellability is generalized to the case of polyhedral complexes (that are not necessarily simplicial). There is an unshellable triangulation of the tetrahedron. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Open vote network**
Open vote network:
In cryptography, the open vote network (or OV-net) is a secure multi-party computation protocol to compute the boolean-count function: namely, given a set of binary values 0/1 in the input, compute the total count of ones without revealing each individual value. This protocol was proposed by Feng Hao, Peter Ryan, and Piotr Zieliński in 2010. It extends Hao and Zieliński's anonymous veto network protocol by allowing each participant to count the number of veto votes (i.e., input one in a boolean-OR function) while preserving the anonymity of those who have voted. The protocol can be generalized to support a wider range of inputs beyond just the binary values 0 and 1.
Description:
All participants agree on a group G with a generator g of prime order q in which the discrete logarithm problem is hard. For example, a Schnorr group can be used. Assume there are n participants. Unlike other secure multi-party computation protocols that typically require pairwise secret and authenticated channels between participants in addition to an authenticated public channel, OV-net only requires an authenticated public channel available to every participant. Such a channel may be realized by using digital signatures. The protocol runs in two rounds.
Description:
Round 1: each participant i selects a random value xi ∈R Zq and publishes the ephemeral public key g^xi together with a zero-knowledge proof of knowledge of the exponent xi. Such proofs may be realized by using Schnorr non-interactive zero-knowledge proofs as described in RFC 8235.
Description:
After this round, each participant computes g^yi = ∏_{j&lt;i} g^xj / ∏_{j&gt;i} g^xj. Round 2: each participant i publishes g^(xi·yi)·g^vi, where vi is either 0 or 1, together with a 1-out-of-2 zero-knowledge proof that vi is one of {0, 1}. Such 1-out-of-2 proofs may be realized by using Cramer, Gennaro, and Schoenmakers' zero-knowledge proof technique. After round 2, each participant computes ∏_i g^(xi·yi)·g^vi = g^(∑_i vi). Note that all xi values vanish because ∑_i xi·yi = 0. The exponent ∑_i vi represents the count of ones. As it is usually a small number, the count can be computed by exhaustive search.
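The cancellation that makes the tally work can be checked with a small, self-contained Python simulation; the parameters below are a toy Schnorr group chosen for illustration (far too small to be secure), and the zero-knowledge proofs are omitted entirely:

```python
import random

q = 1019                      # prime order of the subgroup (toy size, not secure)
p = 2 * q + 1                 # p = 2039 is prime, so a Schnorr group exists
g = 4                         # 4 = 2^2 lies in (and generates) the order-q subgroup of Z_p*

votes = [1, 0, 1, 1, 0]       # private inputs v_i of the n participants
n = len(votes)

# Round 1: each participant publishes g^{x_i}
x = [random.randrange(1, q) for _ in range(n)]
gx = [pow(g, xi, p) for xi in x]

def g_y(i):
    """g^{y_i} = prod_{j<i} g^{x_j} / prod_{j>i} g^{x_j}."""
    num = 1
    for j in range(i):
        num = num * gx[j] % p
    den = 1
    for j in range(i + 1, n):
        den = den * gx[j] % p
    return num * pow(den, -1, p) % p   # modular inverse (Python 3.8+)

# Round 2: each participant publishes g^{x_i y_i} * g^{v_i}
ballots = [pow(g_y(i), x[i], p) * pow(g, votes[i], p) % p for i in range(n)]

# Anyone can multiply the ballots; the x_i y_i terms cancel, leaving g^{sum v_i}
product = 1
for b in ballots:
    product = product * b % p

# Recover the count by exhaustive search over the small exponent
count = next(k for k in range(n + 1) if pow(g, k, p) == product)
print(count)                  # -> 3
```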
Description:
Overall, the 2-round efficiency is the theoretically best possible. In terms of the computational load and bandwidth usage, OV-net is also the most efficient among related techniques.
Maximum secrecy:
The OV-net protocol guarantees the secrecy of an input bit unless all other input bits are known. This protection of secrecy is the maximum possible: when all other bits are known, the remaining bit can always be computed by subtracting the values of the known input bits from the output of the boolean-count function.
Applications:
A straightforward application of OV-net is to build a boardroom voting system, where the election is run by voters themselves. For a single candidate election, each voter sends either "No" or "Yes", which correspond to 0 and 1. Every voter, as well as an observer, can tally the "Yes" votes by themselves without needing any tallying authority.
Applications:
There are standard methods to extend a single-candidate election to support multiple candidates. A straightforward method is to run the single-candidate election in parallel for multiple candidates, and each voter casts "Yes/No" to each of the candidates. Additional zero-knowledge proofs are needed if the voter is limited to vote for only one candidate. Another method is to modify the encoding of candidates: instead of using 0 and 1 for "No" and "Yes" in a single-candidate election, encode each candidate with a unique number such that the tally for each candidate can be unambiguously computed. In this case, a more general 1-out-of-n zero-knowledge proof is used instead where n is the number of candidates.
Implementation:
A prototype implementation of OV-net was presented by McCorry, Shahandashti, and Hao at Financial Cryptography'17 as a smart contract over Ethereum's blockchain. The source code is publicly available. This implementation forms part of the Newcastle University team's solution on "Removing Trusted Tallying Authorities: Self-Enforcing E-Voting over Ethereum", which was awarded third place in the 2016 Economist Cybersecurity Challenge jointly organized by The Economist and Kaspersky Lab. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gateway Energy Storage**
Gateway Energy Storage:
Gateway Energy Storage is a large-scale lithium-ion battery, operated by grid infrastructure developer LS Power. It has a storage capacity of 250 MWh, and it is located in Otay Mesa, California, on the outskirts of San Diego. It uses cells from LG Chem. The purpose of the battery is to provide power during times of peak demand after being charged with solar power during the day. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pointer (rod)**
Pointer (rod):
A pointer or pointing stick is a solid rod used to point manually, in the form of a stick, but always finished off or artificially produced.
The typical pointer is simply a long, slender, often flexible stick made in a strong material, designed to indicate places on maps, words on blackboards etc.
In addition, it may be used like any ordinary stick for other purposes, e.g. for punitive caning (compare rulering). Some are telescopic and can be carried in a pocket like a pen. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**H-stable potential**
H-stable potential:
In statistical mechanics of continuous systems, a potential for a many-body system is called H-stable (or simply stable) if the potential energy per particle is bounded below by a constant that is independent of the total number of particles. In many circumstances, if a potential is not H-stable, it is not possible to define a grand canonical partition function in finite volume, because of catastrophic configurations with infinite particles located in a finite space.
Classical statistical mechanics:
Definition: Consider a system of particles in positions x1, x2, … ∈ R^ν; the interaction or potential between a particle in position xi and a particle in position xj is φ(xi − xj), where φ(x) is a real, even (possibly unbounded) function. The potential energy of a configuration is Vn(x1, …, xn) := ∑_{1≤i&lt;j≤n} φ(xi − xj). Then φ(x) is H-stable if there exists B > 0 such that, for any n ≥ 1 and any x1, …, xn ∈ R^ν, Vn(x1, …, xn) ≥ −Bn. Applications: If φ(0) &lt; ∞ and, for every n ≥ 1 and every x1, …, xn ∈ R^ν, it holds that ∑_{i,j=1}^n φ(xi − xj) ≥ 0, then the potential φ(x) is stable (with the constant B given by φ(0)/2); a short derivation of this bound is sketched below. This condition applies for example to potentials that are: a) positive functions; b) positive-definite functions. If the potential φ(x) is stable, then, for any bounded domain Λ, any β > 0 and z > 0, the series ∑_{n≥0} (z^n/n!) ∫_{Λ^n} exp[−βVn(x1, …, xn)] dx1⋯dxn is convergent. In fact, for bounded, upper semi-continuous potentials the hypothesis is not only sufficient, but also necessary. The grand canonical partition function in finite volume is Ξ_Λ(β, z) := ∑_{n≥0} (z^n/n!) ∫_{Λ^n} exp[−βVn(x1, …, xn)] dx1⋯dxn, hence H-stability is a sufficient condition for the partition function to exist in finite volume. H-stability doesn't necessarily imply the existence of the infinite volume pressure. For example, in a Coulomb system (in dimension three) the potential is φ(x) = 1/(4π|x|) and, if the charges of all the particles are equal, then the potential energy is Vn(x1, …, xn) = ∑_{i&lt;j} φ(xi − xj) and the system is H-stable with B = 0; but the thermodynamic limit doesn't exist, because the potential is not tempered. If the potential is not bounded, H-stability is not a necessary condition for the existence of the grand canonical partition function in finite volume. For example, in the case of the Yukawa interaction in two dimensions, φ(x) ≃ −(1/2π) ln(m|x|) for x ∼ 0, and if the particles can have charges with different signs, the potential energy is Hn(q, x) = ∑_{i&lt;j} qi qj φ(xi − xj), where qj is the charge of particle j. Hn(q, x) is not bounded from below: for example, when n = 2 and q1 q2 = 1, the two-body potential has infimum inf_{x1,x2} φ(x1 − x2) = −∞. Yet, Fröhlich proved the existence of the thermodynamic limit for β &lt; 4π.
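The constant B = φ(0)/2 in the positive-definite case follows from a one-line rearrangement; here is a sketch of that standard argument in LaTeX (nothing beyond the definitions above is assumed):

```latex
% Separate the diagonal terms of the nonnegative double sum; \varphi is even, so
% the off-diagonal terms appear twice:
\begin{aligned}
0 \;\le\; \sum_{i,j=1}^{n}\varphi(x_i-x_j)
  \;=\; n\,\varphi(0) \;+\; 2\!\!\sum_{1\le i<j\le n}\!\!\varphi(x_i-x_j)
\quad\Longrightarrow\quad
V_n(x_1,\dots,x_n) \;=\; \sum_{1\le i<j\le n}\varphi(x_i-x_j) \;\ge\; -\frac{\varphi(0)}{2}\,n ,
\end{aligned}
% which is exactly H-stability with B = \varphi(0)/2.
```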
Quantum statistical mechanics:
The notion of H-stability in quantum mechanics is more subtle. While in the classical case the kinetic part of the Hamiltonian is not important, as it can be zero independently of the positions of the particles, in the quantum case the kinetic term plays an important role in the lower bound for the total energy because of the uncertainty principle. (In fact, stability of matter was the historical reason for introducing such a principle in mechanics.) The definition of stability is: there exists $B$ such that $E_0/N > -B$, where $E_0$ is the ground-state energy of the $N$-particle system.
Quantum statistical mechanics:
Classical H-stability implies quantum H-stability, but the converse is false.
The criterion is especially useful in statistical mechanics, where H-stability is necessary for the existence of thermodynamics: if a system is not H-stable, the thermodynamic limit does not exist. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Incremental encoding**
Incremental encoding:
Incremental encoding, also known as front compression, back compression, or front coding, is a type of delta encoding compression algorithm whereby common prefixes or suffixes and their lengths are recorded so that they need not be duplicated. This algorithm is particularly well-suited for compressing sorted data, e.g., a list of words from a dictionary.
Incremental encoding:
For example, each entry in a sorted word list can be stored as the length of the prefix it shares with the previous entry, followed by the remaining suffix (see the sketch below). The encoding used to store the common prefix length itself varies from application to application. Typical techniques are storing the value as a single byte; delta encoding, which stores only the change in the common prefix length; and various universal codes. It may be combined with other general lossless data compression techniques, such as entropy encoding and dictionary coders, to compress the remaining suffixes.
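As a concrete illustration, here is a minimal sketch in Python (the function names and the sample word list are invented for this example and are not taken from any particular implementation). It front-codes a sorted word list by storing, for each entry, the length of the prefix shared with the previous entry together with the remaining suffix:

```python
def front_encode(words):
    """Front-code a sorted word list: store (shared-prefix length, suffix) pairs."""
    encoded, prev = [], ""
    for word in words:
        # Length of the prefix shared with the previous word.
        k = 0
        while k < min(len(prev), len(word)) and prev[k] == word[k]:
            k += 1
        encoded.append((k, word[k:]))
        prev = word
    return encoded

def front_decode(pairs):
    """Rebuild the word list by re-expanding each shared prefix."""
    words, prev = [], ""
    for k, suffix in pairs:
        word = prev[:k] + suffix
        words.append(word)
        prev = word
    return words

words = ["myalgia", "myocardial", "myocardium", "myopia"]
pairs = front_encode(words)
# pairs == [(0, 'myalgia'), (2, 'ocardial'), (8, 'um'), (3, 'opia')]
assert front_decode(pairs) == words
```

Because decoding only ever looks at the previously decoded word, the list can be reconstructed in a single pass; in practice the prefix lengths themselves are then packed using one of the encodings described above.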
Applications:
Incremental encoding is widely used in information retrieval to compress the lexicons used in search indexes; these list all the words found in all the documents along with a pointer for each one to a list of locations. Typically, it compresses these indexes by about 40%. As one example, incremental encoding is used as a starting point by the GNU locate utility, in an index of filenames and directories. The GNU locate utility additionally uses bigram encoding to shorten popular file-path prefixes further. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Intersection (road)**
Intersection (road):
An intersection or an at-grade junction is a junction where two or more roads converge, diverge, meet or cross at the same height, as opposed to an interchange, which uses bridges or tunnels to separate different roads. Major intersections are often delineated by gores and may be classified by road segments, traffic controls and lane design.
This article primarily reflects practice in jurisdictions where vehicles are driven on the right. If not otherwise specified, "right" and "left" can be reversed to reflect jurisdictions where vehicles are driven on the left.
Types:
Road segments One way to classify intersections is by the number of road segments (arms) that are involved.
A three-way intersection is a junction between three road segments (arms): a T junction when two arms form one road, or a Y junction, the latter also known as a fork if approached from the stem of the Y.
Types:
A four-way intersection, or crossroads, usually involves a crossing over of two streets or roads. In areas where there are blocks and in some other cases, the crossing streets or roads are perpendicular to each other. However, two roads may cross at a different angle. In a few cases, the junction of two road segments may be offset from each other when reaching the intersection, even though both ends may be considered the same street.
Types:
Six-way intersections usually involve a crossing of three streets at one junction; for example, a crossing of two perpendicular streets and a diagonal street is a rather common type of 6-way intersection.
Five, seven or more approaches to a single intersection, such as at Seven Dials, London, are not common.
Types:
Traffic controls Another way of classifying intersections is by traffic control technology: Uncontrolled intersections, without signs or signals (or sometimes with a warning sign). Priority (right-of-way) rules may vary by country: on a 4-way intersection traffic from the right often has priority; on a 3-way intersection either traffic from the right has priority again, or traffic on the continuing road. For traffic coming from the same or opposite direction, that which goes straight has priority over that which turns off.
Types:
Yield-controlled intersections may or may not have specific "YIELD" signs (known as "GIVE WAY" signs in some countries).
Stop-controlled intersections have one or more "STOP" signs. Two-way stops are common, while some countries also employ four-way stops.
Signal-controlled intersections depend on traffic signals, usually electric, which indicate which traffic is allowed to proceed at any particular time.
Lane design A traffic circle is a type of intersection at which traffic streams are directed around a circle. Types of traffic circles include roundabouts, "mini-roundabouts", "rotaries", "STOP"-controlled circles, and signal-controlled circles. Some people consider roundabouts to be a distinct type of intersection from traffic circles (with the distinction based on certain differences in size and engineering).
A box junction can be added to an intersection, generally prohibiting entry to the intersection unless the exit is clear.
Types:
Some (unconventional or alternative) intersections employ indirect left turns to increase capacity and reduce delays. The Michigan left combines a right turn and a U-turn. Jughandle lefts diverge to the right, then curve to the left, converting a left turn to a crossing maneuver, similar to throughabouts. These techniques are generally used in conjunction with signal-controlled intersections, although they may also be used at stop-controlled intersections.
Types:
Other designs include advanced stop lines, parallel-flow and continuous-flow intersections, hook turns, quadrants, seagull intersections, slip lanes, staggered junctions (junctions consisting of two opposing T-junctions where one road intersects two sideroads located diagonally opposite each other; in American English referred to as doglegs), superstreets, Texas Ts, Texas U-turns and turnarounds.
Further examples are the roundabout and its variants, such as turbo roundabouts and bowties, as well as distributing circles like traffic circles and right-in/right-out (RIRO) intersections.
Turns:
At intersections, turns are usually allowed, but are often regulated to avoid interference with other traffic. Certain turns may be not allowed or may be limited by regulatory signs or signals, particularly those that cross oncoming traffic. Alternative designs often attempt to reduce or eliminate such potential conflicts.
Turn lanes At intersections with large proportions of turning traffic, turn lanes (also known as turn bays) may be provided. For example, in the intersection shown in the diagram, left turn lanes are present in the right-left street.
Turns:
Turn lanes allow vehicles to cross oncoming traffic (i.e., a left turn in right-side driving countries, or a right turn in left-side driving countries), or to exit a road without crossing traffic (i.e., a right turn in right-side driving countries, or a left turn in left-side driving countries). Absence of a turn lane does not normally indicate a prohibition of turns in that direction; instead, traffic control signs are used to prohibit specific turns. Turn lanes can increase the capacity of an intersection and can have a dramatic effect on its safety. In rural areas, crash frequency can be reduced by up to 48% if left turn lanes are provided on both main-road approaches at stop-controlled intersections; at signalized intersections, crashes can be reduced by 33%. Results are slightly lower in urban areas. Turn lanes are marked with an arrow bending in the direction of the turn to be made from that lane. Multi-headed arrows indicate that drivers may travel in any one of the directions pointed to by an arrow.
Turns:
Turn signals Traffic signals facing vehicles in turn lanes often have arrow-shaped indications. North America uses various indication patterns. Green arrows indicate protected turn phases, when vehicles may turn unhindered by oncoming traffic. Red arrows may be displayed to prohibit turns in that direction. Red arrows may be displayed along with a circular green indication to show that turns in the direction of the arrow are prohibited, but other movements are allowed. In some jurisdictions, a red arrow prohibits a turn on red. In Europe, if different lanes have differing phases, the red, yellow and green traffic lights corresponding to each lane have blacked-out areas in the middle in the shape of arrows indicating the direction(s) drivers in that lane may travel in. This makes it easier for drivers to be aware of which traffic light they need to pay attention to. A green arrow may also be provided; when it is on, drivers heading in the direction of the arrow may proceed, but must yield to all other vehicles. This is similar to the right turn on red in the US. Disadvantages of turn lanes include increased pavement area, with associated increases in construction and maintenance costs, as well as increased amounts of stormwater runoff. They also increase the distance over which pedestrians crossing the street are exposed to vehicle traffic. If a turn lane has a separate signal phase, it often increases the delay experienced by oncoming through traffic. Without a separate phase, left crossing traffic does not get the full safety benefit of the turn lane.
Turns:
Lane management Alternative intersection configurations, formerly called unconventional intersections, can manage turning traffic to increase safety and intersection throughput. These include the Michigan left/superstreet (RCUT/MUT) and the continuous-flow intersection (CFI/DLT), which improve traffic flow, as well as interchange types such as the diverging diamond interchange (DDI/DCD), promoted as part of the Federal Highway Administration's Every Day Counts initiative, which started in 2012.
Vulnerable road users:
Vulnerable road users include pedestrians, cyclists, motorcyclists, and individuals using motorized scooters and similar devices. Compared to people who are in motor vehicles (like cars and trucks), they are much more likely to suffer catastrophic or fatal injuries at an intersection.
Vulnerable road users:
Pedestrians Intersections generally must manage pedestrian as well as vehicle traffic. Pedestrian aids include crosswalks, pedestrian-directed traffic signals ("walk light") and over/underpasses. Traffic signals can be time consuming to navigate, especially if programmed to prioritise vehicle flow over pedestrians, while over and underpasses which rely on stairs are inaccessible to those who can not climb them. Walk lights may be accompanied by audio signals to aid the visually impaired. Medians can offer pedestrian islands, allowing pedestrians to divide their crossings into a separate segment for each traffic direction, possibly with a separate signal for each.
Vulnerable road users:
Some intersections display red lights in all directions for a period of time. Known as a pedestrian scramble, this type of vehicle all-way stop allows pedestrians to cross safely in any direction, including diagonally. A well-known example of an all-green phase for non-motorists is the crossing at Shibuya Station, Tokyo. In 2020, NHTSA reported that more than 50% of pedestrian deaths in the United States (3,262 total) were attributed to failure to yield the right of way, which typically occurs at intersections.
Vulnerable road users:
Cyclists and motorcyclists Poor visibility at junctions can lead to drivers colliding with cyclists and motorcyclists. Some junctions use advanced stop lines which allow cyclists to filter to the front of a traffic queue which makes them more visible to drivers.
Safety:
A European study found that in Germany and Denmark the most important crash scenarios involving vulnerable road users were a motor vehicle turning right or left while a cyclist travels straight ahead, and a motor vehicle turning right or left while a pedestrian crosses the intersection approach.
These findings are supported by data elsewhere. According to the U.S. National Highway Traffic Safety Administration, roughly half of all U.S. car crashes occurred at intersections or were intersection related in 2019.
At grade railways:
In the case of railways or rail tracks, the term at grade applies to a rail line that is neither on an embankment nor in an open cut. As such, it crosses streets and roads without going under or over them, which requires level crossings. At-grade railways may run along the median of a highway. The opposite arrangement is grade-separated, using overpasses or underpasses. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shoulder strap**
Shoulder strap:
A shoulder strap is a strap worn over a shoulder. Shoulder straps are often affixed to women's dresses to support the garment's weight or as part of its style. The term is also applied to the straps of carrying bags.
Dress shoulder strap:
Dress shoulder straps are a length of fabric, usually in pairs, used to support clothing, especially women's clothing, such as a dress, camisole, apron or brassiere. Shoulder straps such as these are usually made of the same material as the garment, and may be quite flimsy, as they are normally not expected to support much weight.
The shoulder straps on some dresses may be very thin, in which case they may be called a spaghetti strap (also called "noodle strap"). These are common in clothing such as camisoles, cocktail dresses, and evening gowns.
Some institutions ban spaghetti strap dresses and bare-shoulder strapless dresses on grounds of modesty.
Military shoulder strap:
Many military uniform shirts, jackets, tunics, or greatcoats feature shoulder straps. They were originally designed to keep back packs, ammunition pouches or bayonets from slipping off the shoulder. They often display badges of rank, shoulder marks, regimental insignia or epaulettes.
Carrier shoulder strap:
A carrier shoulder strap is a length of fabric or other flexible material (such as leather, vinyl, rubber), used to suspend an item, often of some weight, from the shoulder(s). The strap may be worn slung over one shoulder or across the body. In the interest of comfort, they often have some manner of padding near the middle, where the strap rests directly against the shoulder or neck. Such items include purses, guitars, rifles, etc. In the case of rifles and other such weaponry, the shoulder strap is usually referred to as a sling. Shoulder straps may also be used in pairs on such items as a backpack or a baby carrier; the straps are worn one over each shoulder, and the item so carried is centred on the back or chest. Some camera strap manufacturers design their straps to fit over both shoulders, allowing two cameras to rest at hip level. The use of such straps frees the hands for other uses. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Truncated spur**
Truncated spur:
A truncated spur is a spur (a ridge that descends towards a valley floor or coastline from higher ground) whose end has been cut off by the erosional action of streams, waves, or glaciers, leaving an inverted-V face. Truncated spurs can be found within mountain ranges, along the walls of river valleys, or along coastlines. A faceted spur is also a spur that ends in a triangular face, known as a triangular facet, with a broad base and an apex pointing upward. As typically used in geology, the triangular facet is usually a remnant of a fault plane, and it and its associated faceted spur are the result of faulting. The term faceted spur is also applied to inverted-V rock faces formed by stream, wave, or glacial erosion and is thus a synonym for truncated spur.
Formation:
Truncated spurs Before glaciation, relatively immature rivers display a pattern of interlocking spurs. A valley glacier cannot avoid the interlocking spurs as a river can. As the valley glacier moves, abrasion and plucking erode the protruding tips of the spurs, leaving steep cliff-like truncated spurs. Hanging valleys are found in between truncated spurs as they join the main glacial valley from the side. It is common for waterfalls to form from them, where they fall into the main valley. Such truncated spurs can be found in mountainous regions. The Mer de Glace, in the European Alps, is a valley through which a glacier currently flows. This is a geologically active process where the glacier continues to gradually erode the valley sides.
Formation:
Faceted spurs In the most typical usage of this term, faceted spurs are formed by active faulting, especially normal faulting that produces well-defined triangular facets along either a mountain front or edges of a rift valley. These triangular facets provide evidence for recent fault movement and are used in seismotectonic analysis. Classic examples of faceted spurs can be found all along the Central Wasatch Fault, north-central Utah. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dirac hole theory**
Dirac hole theory:
Dirac hole theory is a theory in quantum mechanics, named after English theoretical physicist Paul Dirac. The theory posits that the continuum of negative energy states, which are solutions to the Dirac equation, is filled with electrons, and that the vacancies in this continuum (holes) are manifested as positrons with energy and momentum that are the negative of those of the state. The discovery of the positron in 1932 gave considerable support to the Dirac hole theory. While Enrico Fermi, Niels Bohr and Wolfgang Pauli were skeptical about the theory, other physicists, like Guido Beck and Kurt Sitte, made use of Dirac hole theory in alternative theories of beta decay. Gian Wick extended Dirac hole theory to cover neutrinos, introducing the anti-neutrino as a hole in a neutrino Dirac sea. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MicL sRNA**
MicL sRNA:
MicL RNA (mRNA-interfering complementary RNA regulator of Lpp) is a σE transcription factor-dependent small non-coding RNA. It was discovered in E. coli. Together with the MicA and RybB sRNAs, MicL down-regulates the synthesis of abundant outer membrane proteins in response to stress. MicL specifically targets the mRNA of lipoprotein Lpp, preventing its translation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Poxvirus AX element late mRNA cis-regulatory element**
Poxvirus AX element late mRNA cis-regulatory element:
The Poxvirus AX element late mRNA family represents a cis-regulatory element present at the 3' end of poxvirus late ATI mRNA and is known as the AX element. The AX element is involved in directing the efficient production and orientation-dependent formation of late RNAs. It is likely that this element directs the endonucleolytic cleavage of the transcript. The F17R late mRNA transcript, which is also cleaved, is likely to share a common factor in its cleavage mechanism, despite the lack of any obvious similarity in its cis-regulatory RNA element. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ICON (microcomputer)**
ICON (microcomputer):
The ICON (also the CEMCorp ICON, Burroughs ICON, and Unisys ICON, and nicknamed the bionic beaver) was a networked personal computer built specifically for use in schools, to fill a standard created by the Ontario Ministry of Education. It was based on the Intel 80186 CPU and ran an early version of QNX, a Unix-like operating system. The system was packaged as an all-in-one machine similar to the Commodore PET, and included a trackball for mouse-like control. Over time, a number of GUI-like systems appeared for the platform, based on the system's NAPLPS-based graphics system.
ICON (microcomputer):
The ICON was widely used in the mid to late 1980s, but disappeared after that time with the widespread introduction of PCs and Apple Macintoshes.
History:
Development Origin In 1981, four years after the first microcomputers for mainstream consumers appeared, the Ontario Ministry of Education sensed that microcomputers could be an important component of education. In June the Minister of Education, Bette Stephenson, announced the need for computer literacy for all students and formed the Advisory Committee on Computers in Education to guide their efforts. She stated: "It is now clear that one of the major goals that education must add to its list of purposes, is computer literacy. The world of the very near future requires that all of us have some understanding of the processes and uses of computers."
History:
According to several contemporary sources, Stephenson was the driving force behind the project; "whenever there was a problem she appears to have 'moved heaven and earth' to get it back on the tracks." The Ministry recognized that a small proportion of teachers and other school personnel were already quite involved with microcomputers and that some schools were acquiring first-generation machines. These acquisitions were uneven, varying in brand and model not just between school boards, but among schools within boards and even classroom to classroom. Among the most popular were the Commodore PET, which had a strong following in the new computer programming classes due to its tough all-in-one construction and built-in support for Microsoft BASIC, and the Apple II, which had a wide variety of educational software, mostly aimed at early education.
History:
The Ministry wanted to encourage uses of microcomputers that supported its curriculum guidelines and was willing to underwrite the development of software for that purpose. However, the wide variety of machines being used meant that development costs had to be spread over several platforms. Additionally, many of the curriculum topics they wanted to cover required more storage or graphics capability than at least some of the machines then in use, if not all of them. Educational software was in its infancy, and many hardware acquisitions were made without a clear provision for educational software or a plan for use. A series of Policy Memos followed outlining the Committee's views. Policy Memo 47 stated that computers are to be used creatively, and for information retrieval; at the time most systems were used solely for programming. They also announced funding for the development of educational software on an estimated 6000 machines. The Ministry decided that standardizing the computers would reduce maintenance costs, and allow for the development of consistent educational software. The Ministry contracted the Canadian Advanced Technology Alliance (CATA) to help develop specifications for the new system.
History:
Design selection Policy Memos 68–73 followed in early 1983, stating that none of the existing platforms had all the qualities needed to be truly universal. The idea of a new machine quickly gained currency, with the added bonus that it would help develop a local microcomputer industry. In order to make the new machine attractive, the Ministry agreed to fund up to 75% of the purchase price from their own budget. When the plan was first announced there was widespread concern among educators. Their main complaint was that the Ministry would select a standard that was not powerful enough for their needs. A secondary concern was that the time delay between announcing and introducing the computer would be lengthy, a period in which existing purchases could be funded instead. The first set of concerns was rendered moot when the specifications were introduced in March 1983 in the "Functional Requirements for Microcomputers for Educational Use in Ontario Schools—Stage I." The physical design required a PET-like all-in-one case, a headphone output for voice and sound effects, and a trackball for mouse-like pointing support. Inside the case, the specification called for a processor and support systems to allow a multitasking operating system to be used, selecting the Intel 80186 as the CPU. Color graphics were specified, at least as an option, along with monochrome and color monitors on top. Voice synthesis was built in, and the keyboard provided for accented characters. Additionally, the systems would include no local storage at all, and would instead rely on a networked file server containing a hard drive. The specification was considerably in advance of the state of the art of the time, and when it was delivered commentators immediately reversed their earlier concerns and suggested the machine was too powerful, and would therefore be available in too small numbers.
History:
CEMCORP To deliver such a machine, Robert Arn, a member of the CATA team, set up CEMCORP, the Canadian Educational Microprocessor Corporation. When the specification was announced in 1983, CEMCORP was announced as the winner of a $10 million contract to develop and supply the initial machines. An additional $5 million in funding was announced to cover development of new software applications, while the Ontario Institute for Studies in Education (OISE) was asked to convert 30 existing programs to the new machine. In order to be able to afford what was expected to be an expensive machine, the Ministry announced a special "Recognized Extraordinary Expenditure" (REE) grant that would provide for up to 75% of the purchase costs of machines meeting the "Grant Eligible Microcomputer Systems" or "G.E.M.S." specifications. At the time, only the ICON met the GEMS requirements, which cut its purchase price from around CAD$2500 to a mere $495 (USD$2700 and $696) – less expensive than most existing microcomputers. The entire program was politically explosive throughout its gestation as a result, causing a continual stream of news stories. Critics complained that other machines could be bought for half the cost, but supporters pushed back that no other machine at that price point supported the GEMS specifications. The release of the IBM Personal Computer/AT in 1984 reopened the debate and made nightly news, as it used a newer and more advanced CPU than the ICON: the 80286. Around this time other platforms, such as the Waterloo PORT networking system, gained approval for the government support that had originally been the province of the ICON.
History:
Production The basic ICON design had reached "beta quality" after just over a year, using off-the-shelf parts, with the hardware manufactured by Microtel and the operating system from Quantum Software Systems. The original Microtel machines were first introduced to Ontario schools in 1984 in small numbers, packaged in a short-lived dark brown case. At this point Burroughs Canada was brought in to sell and support the machine. Sperry and Burroughs then merged to form Unisys in 1986. Several generations of ICON machines were produced, evolving steadily to become more PC-like. They were built into the early 1990s, but by this point were used almost entirely for running DOS and Windows programs.
History:
Cancellation Throughout the project's lifetime it was subject to continual debate and much political rhetoric. A 1992 article on the topic complained: "Bette Stephenson favoured top-down decision making and as a result got trapped by her tunnel vision. Her ICON computer fiasco drained millions from the provincial treasury and created a white elephant scorned by boards and shunned by teachers.... Computer resources were forced upon the school system as a result of a top-down government decision that was taken precipitously and without research."
History:
The Ministry ceased all support for the ICON in 1994, making it orphaned technology, and the Archives of Ontario declined to take ICON hardware and copies of the ICON software, which were destroyed. This was controversial in its own right, as others maintained that it could be sent to other schools that lacked extensive Information Technology. Despite the development of the ICON program, equality among schools was not assured because each school community could afford different capital outlays depending on the parents' affluence.
Design:
The ICON system was based on a workstation/file server model, with no storage local to the workstations. The workstations and servers were internally similar, based on Intel 80186 microprocessors running at 7.16 MHz, and connected to each other using ARCNET. Several upgrades were introduced into the ICON line over time. The ICON2 sported a redesigned case, a detached keyboard with integrated trackball, expanded RAM, and facilities for an internal hard disk. The CPU was upgraded to the 386 in the Series III, while an "ICON-on-a-card" for PCs also appeared.
Design:
The original ICON workstations were housed in a large wedge-shaped steel case, with a full-sized keyboard mounted slightly left-of-center and a trackball mounted to the right. A rubber bumper-strip ran along the front edge, a precaution against a particular type of cut users sometimes got from the PET's sharp case. Graphics were generated by a Hitachi HD46505 SP video controller, supporting NAPLPS. The EGA monitor was mounted on top of a tilt-and-swivel mount, a welcome improvement on the PET. It also included TI's TMS5220 speech chip, originally designed for the TI-99, and would speak the vaguely obscene word "dhtick" when starting up. Early Microtel machines were dark brown, but the vast majority of examples in the classroom were a more nondescript beige.
Design:
The fileserver, sometimes referred to as the LexICON, was a simple rectangular box with an internal 10MB hard drive and a 5.25" floppy drive opening to the front, and parallel port for a shared printer. Later Lexicons included a 64MB hard disk, divided into two partitions. Unlike the PET's floppy system, however, users of the ICON used Unix commands to copy data to their personal floppy disks from its "natural" location in the user's home directory on the hard drive.
Design:
Both the client and server ran the Unix-like QNX as their operating system with the addition of network file-sharing, the basic portions of it embedded in ROM. To this they added a NAPLPS/Telidon-based graphics system, which was intended to be used with the trackball to make interactive programs. The system included a Paint programme that used the trackball, but did not include a usable GUI, although there were several attempts to produce one. QNX 2.0.1 included a modest one called "House", and another was built at least to the prototype stage by Helicon Systems in Toronto and appeared in one form as Ambience, though its capabilities were limited. A later upgrade called ICONLook improved upon this greatly, but it was apparently too slow to use realistically. Helicon Systems also produced a MIDI interface for the original ICON.
Design:
The biggest problem for the machine was a lack of software. The ICON was originally designed to let teachers create and share their own lessonware, using a simple hypertext-based system where pages could either link to other pages or run programs written in C. The "anyone can create lessonware" model was rejected by the Ministry of Education before the ICON shipped (in favour of a model under which the Ministry funded and controlled all lessonware), leaving the ICON with only the QNX command line interface and the Cemcorp-developed text editor application.
Design:
The various Watcom programming languages were quickly ported to the system, but beyond that, educational software titles that teachers could expect were few and far between. The Ministry contracted for a number of applications, but the small target market and the sometimes difficult process required to secure such contracts were significant obstacles for realistic commercial development.
Software:
The Bartlett Saga, a four-part game about the history of Canada, consisting of Part I: Refugees in the Wilderness: United Empire Loyalists, 1784-1793; Part II: The Rebels: Rebellion in Upper Canada, 1830-1844; Part III: United We Stand: Confederation, 1864-1873; and Part IV: The Golden West: Settling the Plains, 1897-1911.
Build-A-Bird [Ergonomics Lab, University of Toronto].
Cargo Sailor (1987), a game about delivering goods to different ports around the world, given the latitude and longitude.
Software:
Crosscountry Canada, a game of travelling across Canada in a truck, picking up and delivering cargo.
Ernie's Big Splash, a video game including Sesame Street characters.
Logo, an implementation of the Logo programming language.
Northwest Fur Trader, educational software simulating the fur trade in Canada.
Lemonade Stand, an educational game of setting lemonade prices based on the weather forecast.
A Day in the Life Of, a strange game following the life of a student. There was an arcade game inside it where you could catch rabbits.
Spectricon, the drawing software. It used a relatively beautiful noise generator to create dithering patterns.
Software:
Offshore Fishing, A fishing game that utilizes both a top down map view to choose your fishing location, and a 2D side view when fishing. You try to catch fish using a trolling boat and net, and sell them for money. However, it is best to avoid the shark at all costs as he will break through your fishing net.
Software:
Watfor, the WATCOM FORTRAN programming language.
Chat, the OS included facilities for sending system-wide messages, which students abused often.
Robot R&D, a game of creating robots of various properties from various parts, then testing them through dropping, crushing and dunking.
Peggy's Way Home, a game where you help Peggy find her way home, so she can cook dinner for people in another game called Peggy's Potluck.
Peggy's Potluck, a game where you place various ingredients of your choice into a cauldron, cook them, and then serve the result to hungry people. It will then give you feedback on what the people think of your meal. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Meldonium**
Meldonium:
Meldonium (INN; trade name Mildronate, among others) is a limited-market pharmaceutical, developed in 1970 by Ivars Kalviņš at the USSR Latvia Institute of Organic Synthesis, and now manufactured by the Latvian pharmaceutical company Grindeks and several generic manufacturers. It is primarily distributed in Eastern European countries as an anti-ischemia medication. Since 1 January 2016, it has been on the World Anti-Doping Agency (WADA) list of substances banned from use by athletes. Meldonium can be used as a metabolic modulator, changing how some hormones accelerate or slow down enzymatic reactions in the body. However, there are debates over its use as an athletic performance enhancer. Some athletes are known to have used meldonium before it was banned, most notably Maria Sharapova. Nevertheless, many athletes have been officially suspended or disqualified in connection with the drug.
Medical use:
Meldonium may be used to treat coronary artery disease. These heart problems may sometimes lead to ischemia, a condition where too little blood flows to the organs in the body, especially the heart. Because this drug is thought to expand the arteries, it helps to increase the blood flow as well as increase the flow of oxygen throughout the body. Meldonium has also been found to induce anticonvulsant and antihypnotic effects involving alpha 2-adrenergic receptors, as well as nitric oxide-dependent mechanisms. This, in summary, shows that meldonium given in acute doses could be beneficial for the treatment of seizures and alcohol intoxication. It is also used in cases of cerebral ischemia, ocular ischemic syndrome and other ocular disease caused by disturbed arterial circulation and may also have some effect on decreasing the severity of withdrawal symptoms caused by the cessation of chronic alcohol use. It can also be used when there are cases of acute and chronic ischemic brain blood circulation disorders, reduced working capacity, physical and psycho-emotional overload as well as during the recovery period after cerebrovascular disorders, head injury and encephalitis.
Physio-pharmacology:
To ensure a continuous guarantee of energy supply, the cell's energy-producing mitochondria oxidise considerable amounts of fat along with glucose. Carnitine transports long-chain fatty acids (FA) from the cytosol of the cell into the mitochondrion and is therefore essential for fatty acid oxidation (known as beta oxidation). Carnitine is mainly absorbed from the diet, but can be formed through biosynthesis. To produce carnitine, lysine residues are methylated to trimethyllysine. Four enzymes are involved in the conversion of trimethyllysine and its intermediate forms into the final product of carnitine. The last of these 4 enzymes is gamma-butyrobetaine dioxygenase (GBB), which hydroxylates butyrobetaine into carnitine.
Physio-pharmacology:
The main cardioprotective effects of meldonium are mediated by the inhibition of GBB. By subsequently inhibiting carnitine biosynthesis, fatty acid transport is reduced and the accumulation of cytotoxic intermediate products of fatty acid beta-oxidation in ischemic tissues to produce energy is prevented, therefore blocking this highly oxygen-consuming process. Treatment with meldonium therefore shifts the myocardial energy metabolism from fatty acid oxidation to the more favorable oxidation of glucose, or glycolysis, under conditions where oxygen is limited. It also reduces the formation of trimethylamine N-oxide (TMAO), a product of carnitine breakdown that has been implicated in the pathogenesis of atherosclerosis and congestive heart failure.
Physio-pharmacology:
In fatty acid (FA) metabolism, long chain fatty acids in the cytosol cannot cross the mitochondrial membrane because they are negatively charged. The process in which they move into the mitochondria is called the carnitine shuttle. Long chain FA are first activated via esterification with coenzyme A to produce a fatty acid-coA complex which can then cross the external mitochondrial border. The co-A is then exchanged with carnitine (via the enzyme carnitine palmitoyltransferase I) to produce a fatty acid-carnitine complex. This complex is then transported through the inner mitochondrial membrane via a transporter protein called carnitine-acylcarnitine translocase. Once inside, carnitine is liberated (catalysed by the enzyme carnitine palmitoyltransferase II) and transported back outside so the process can occur again. Acylcarnitines like palmitoylcarnitine are produced as intermediate products of the carnitine shuttle.
Physio-pharmacology:
In the mitochondria themselves, meldonium also competitively inhibits the carnitine shuttle protein SLC22A5. This results in reduced transportation and metabolism of long-chain fatty acids in the mitochondria (this burden is shifted more to peroxisomes). The final effect is a decreased risk of mitochondrial injury from fatty acid oxidation and a reduction of the production of acylcarnitines, which has been implicated in the development of insulin resistance. Because of its inhibitory effects on L-carnitine biosynthesis and its subsequent glycolytic effects as well as reduced acylcarnitine production, meldonium has been indicated for use in diabetic patients. In animal models and a very small clinical trial, meldonium has been shown to reduce blood glucose concentrations, exhibit cardioprotective effects and prevent or reduce the severity of diabetic complications. Long-term treatment has also been shown to attenuate the development of atherosclerosis in the heart. Meldonium's vasodilatory effects are thought to be due to the stimulation of the production of nitric oxide in the vascular endothelium. It is hypothesized that meldonium may increase the formation of the gamma-butyrobetaine esters, which are potent parasympathomimetics, and may activate the endothelial NOS (eNOS) enzyme, which causes nitric oxide production via stimulation of the M3 muscarinic acetylcholine receptor or specific gamma-butyrobetaine ester receptors. Meldonium is believed to continually train the heart pharmacologically, even without physical activity, inducing preparation of cellular metabolism and membrane structures (specifically in myocardial mitochondria) to survive ischemic stress conditions. This is done by adapting myocardial cells to lower fatty acid inflow and by activating glycolysis; the heart eventually begins using glycolysis instead of beta oxidation during real life ischaemic conditions. This reduces oxidative stress on cells, formation of cytotoxic products of fatty acid oxidation and subsequent cellular damage. This has made meldonium a possible pharmacological agent for ischemic preconditioning. The mechanisms underlying the central nervous system effects of meldonium are unclear. In a study in a transgenic mouse model of Alzheimer's disease, meldonium increased cognition and mental performance by reducing amyloid beta deposition in the hippocampus.
Physio-pharmacology:
Pharmacology The mechanism of action of meldonium is to act as a fatty acid oxidation inhibitor, presumably by inhibiting enzymes in the carnitine biosynthesis pathway such as γ-butyrobetaine hydroxylase. Although initial reports suggested meldonium is a non-competitive and non-hydroxylatable analogue of gamma-butyrobetaine, further studies have identified that meldonium is a substrate for gamma-butyrobetaine dioxygenase. X-ray crystallographic and in vitro biochemical studies suggest that meldonium binds to the substrate pocket of γ-butyrobetaine hydroxylase and acts as an alternative substrate, and therefore a competitive inhibitor. Normally, this enzyme's action on its substrates γ-butyrobetaine and 2-oxoglutarate gives, in the presence of the further substrate oxygen, the products L-carnitine, succinate, and carbon dioxide; in the presence of this alternate substrate, the reaction yields malonic acid semialdehyde, formaldehyde (akin to the action of histone demethylases), dimethylamine, and (1-methylimidazolidin-4-yl)acetic acid, "an unexpected product with an additional carbon-carbon bond resulting from N-demethylation coupled to oxidative rearrangement, likely via an unusual radical mechanism." The unusual mechanism is thought likely to involve a Stevens-type rearrangement. Meldonium's inhibition of γ-butyrobetaine hydroxylase gives a half maximal inhibitory concentration (IC50) value of 62 micromolar, which other study authors have described as "potent." Meldonium is an example of an inhibitor that acts as a non-peptidyl substrate mimic. Meldonium has also been shown by NMR to bind to carnitine acetyltransferase. Carnitine acetyltransferase belongs to a family of ubiquitous enzymes that play pivotal roles in cellular energy metabolism. Meldonium is a relatively weak inhibitor of carnitine acetyltransferase (when compared to γ-butyrobetaine hydroxylase), with an inhibition constant (KI) of 1.6 mM.
Chemistry:
The chemical name of meldonium is 3-(2,2,2-trimethylhydraziniumyl) propionate. It is a structural analogue of γ-butyrobetaine, with an amino group replacing the C-4 methylene of γ-butyrobetaine. γ-Butyrobetaine is a precursor in the biosynthesis of carnitine. Meldonium is a white crystalline powder, with a melting point of 87 °C (189 °F).
Society and culture:
Doping Meldonium was added to the World Anti-Doping Agency (WADA) list of banned substances effective 1 January 2016 because of evidence of its use by athletes with the intention of enhancing performance. It was on WADA's 2015 list of drugs to be monitored. A high prevalence of meldonium use by athletes in sport was demonstrated by the laboratory findings at the Baku 2015 European Games. Thirteen medallists or competition winners were taking meldonium at the time of the Baku Games, and meldonium use was detected in athletes competing in 15 of the 21 sports during the Games. Most of the athletes taking meldonium withheld information about their use from anti-doping authorities by not declaring it on their doping control forms as they should have. Only 23 of the 662 (3.5%) athletes tested declared the personal use of meldonium, yet 66 of the 762 (8.7%) athlete urine samples analysed during the Games and during pre-competition tested positive for meldonium. WADA classes the drug as a metabolic modulator, just as it does insulin. Metabolic modulators are classified as S4 substances according to the WADA banned substances list. These substances have the ability to modify how some hormones accelerate or slow down different enzymatic reactions in the body. In this way, these modulators can block the body's conversion of testosterone into oestrogen, which is necessary for females. On 13 April 2016 it was reported that WADA had issued updated guidelines allowing less than 1 microgram per milliliter of meldonium for tests done before 1 March 2016. The agency cited that "preliminary tests showed that it could take weeks or months for the drug to leave the body".
Society and culture:
Affected athletes On 7 March 2016, former world number one tennis player Maria Sharapova announced that she had failed a drug test in Australia due to the detection of meldonium. She said that she had been taking the drug for ten years for various health issues, and had not noticed that it had been banned. On 8 June 2016, she was suspended from playing tennis for two years by the International Tennis Federation (ITF); the suspension was reduced to 15 months by the Court of Arbitration for Sport on appeal. Also on 7 March 2016, Russian ice dancer Ekaterina Bobrova announced that she had tested positive for meldonium at the 2016 European Figure Skating Championships. Bobrova said she was shocked about the test result, because she had been made aware of meldonium's addition to the banned list, and had been careful to avoid products containing banned substances. In May 2016, Russian professional boxer Alexander Povetkin—a former two-time World Boxing Association (WBA) Heavyweight Champion—tested positive for meldonium. This was discovered just a week prior to his mandatory title match against World Boxing Council (WBC) Heavyweight Champion, Deontay Wilder. As a result, the match—scheduled to take place in Russia—was postponed indefinitely by the WBC. Other athletes who are provisionally banned for using meldonium include UFC flyweight Liliya Shakirova, Ethiopian-Swedish middle-distance runner Abeba Aregawi, Ethiopian long-distance runner Endeshaw Negesse, Russian cyclist Eduard Vorganov, and Ukrainian biathletes Olga Abramova and Artem Tyshchenko. The Ice Hockey Federation of Russia replaced the Russia men's national under-18 ice hockey team with an under-17 team for the 2016 IIHF World U18 Championships after players on the original roster tested positive for meldonium. More than 170 failed tests by athletes were identified in a relatively brief period after the ban on meldonium was imposed on 1 January 2016, almost all of which were from Eastern European countries. Many of the early cases were dropped when athletes claimed that they had ceased use in 2015. Among other reported cases, five Georgian wrestlers and a German wrestler were said to have tested positive for the drug, although no further names were released. On 25 March 2016 the Fédération Internationale de Sambo confirmed that four wrestlers under their governance (two from Russia and two from other countries) had recorded positive tests for the drug.
Society and culture:
Debates A December 2015 study in the journal Drug Testing and Analysis argued that meldonium "demonstrates an increase in endurance performance of athletes, improved rehabilitation after exercise, protection against stress, and enhanced activation of central nervous system (CNS) functions". However the study itself presents no evidence for this claim, and focuses instead on describing two approaches for the reliable identification of meldonium.
Society and culture:
The manufacturer, Grindeks, said in a statement that it did not believe meldonium's use should be banned for athletes. It said the drug worked mainly by reducing damage to cells that can be caused by certain byproducts of carnitine. Meldonium "is used to prevent death of ischemic cells and not to increase performance of normal cells", the statement said. "Meldonium cannot improve athletic performance, but it can stop tissue damage in the case of ischemia", the lack of blood flow to an area of the body. The drug was invented in the mid-1970s at the Institute of Organic Synthesis of the Latvian SSR Academy of Sciences by Ivars Kalviņš. Kalviņš criticized the ban, saying that WADA had not presented scientific proof that the drug can be used for doping. According to him, meldonium does not enhance athletic performance in any way, and was rather used by athletes to prevent damage to the heart and muscles caused by lack of oxygen during high-intensity exercise. He contended that not allowing athletes to take care of their health was a violation of their human rights, and that the decision aimed to remove Eastern European athletes from competitions and his drug from the pharmaceutical market. Liene Kozlovska, the former head of the anti-doping department of the Latvian sports medicine center, rejected claims that the ban is in violation of athletes' rights, saying that meldonium is dangerous in high doses, and should only be used under medical supervision to treat genuine health conditions. She also speculated that Russian athletes may not have received adequate warnings that the drug was banned – due to the suspension of the Russian Anti-Doping Agency in late 2015. Forbes reported that anesthesiology professor Michael Joyner, at the Mayo Clinic in Rochester, Minnesota, who studies how humans respond to physical and mental stress during exercise and other activities, told them that "Evidence is lacking for many compounds believed to enhance athletic performance. Its use has a sort of urban legend element and there is not much out there that it is clearly that effective. I would be shocked if this stuff [meldonium] had an effect greater than caffeine or creatine (a natural substance that, when taken as a supplement, is thought to enhance muscle mass)." Ford Vox, a U.S.-based physician specializing in rehabilitation medicine and a journalist, reported "there's not much scientific support for its use as an athletic enhancer". Don Catlin, a long-time anti-doping expert and the scientific director of the Banned Substances Control Group (BSCG), said "There's really no evidence that there's any performance enhancement from meldonium – Zero percent".
Society and culture:
Approval status Meldonium, which is not approved by the FDA in the United States, is registered and prescribed in Latvia, Russia, Ukraine, Georgia, Kazakhstan, Azerbaijan, Belarus, Uzbekistan, Moldova, Lithuania, Albania, and Kyrgyzstan.
Economics Meldonium is manufactured as a treatment for heart conditions by Grindeks, a Latvian pharmaceutical company with offices in thirteen Eastern European countries. The company identifies it as one of its main products. It had sales of 65 million euros in 2013. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Free-piston linear generator**
Free-piston linear generator:
The free-piston linear generator (FPLG) uses chemical energy from fuel to drive magnets through a stator and converts this linear motion into electric energy. Because of its versatility, low weight and high efficiency, it can be used in a wide range of applications, although it is of special interest to the mobility industry as a range extender for electric vehicles. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
Description:
The free-piston linear generator can be divided into three subsystems:
One (or more) reaction section with a single piston or two opposed pistons.
One (or more) linear electric generator, which is composed of a static part (the stator) and a moving part (the magnets) connected to the connecting rod.
Description:
One (or more) return unit to push the piston back, needed because there is no crankshaft (typically a gas spring or an opposed reaction section).
The FPLG has many potential advantages compared to a traditional electric generator powered by an internal combustion engine. One of the main advantages of the FPLG comes from the absence of a crankshaft, which leads to a smaller and lighter generator with fewer parts. It also allows variable compression and expansion ratios, which makes it possible to operate with different kinds of fuel.
Description:
The linear generator also allows the control of the resistance force, and therefore a better control of the piston's movement and of the reaction. The total efficiency (including mechanical and generator) of free-piston linear generators can be significantly higher than conventional internal combustion engines and comparable to fuel cells.
Development:
The first patents of free-piston linear generators date from around 1940, however in the last decades, especially after the development of rare-earth magnets and power electronics, many different research groups have been working in this field.
These include: Libertine LPE, UK.
West Virginia University (WVU), USA.
Chalmers University of Technology, Sweden.
Electric Generator, Pontus Ostenberg, USA - 1943.
Free Piston Engine, Van Blarigan, Sandia National Laboratory, USA - since 1995.
Aquarius Engines, Israel.
Free-Piston Engine Project, Newcastle University, UK - since 1999.
Shanghai Jiaotong University, China.
Development:
Free-Piston Linear Generator, German Aerospace Center (DLR), Germany - since 2002.
Free Piston Power Pack (FP3), Pempek Systems, Australia - 2003.
Free Piston Energy Converter, KTH Electrical Engineering, Sweden - 2006.
Linear Combustion Engine, Czech Technical University - 2004.
Internal Combustion Linear Generator Integrated Power System, Xu Nanjing, China - 2010.
micromer ag (Switzerland) - 2012.
Free-piston engine linear generator, Toyota, Japan - 2014.
Although there is a variety of names and abbreviations for the technology, the terms "Free-piston linear generator" and "FPLG" particularly refer to the project at the German Aerospace Center.
Operation:
The free-piston linear generator generally consists of three subsystems: combustion chamber, linear generator and return unit (normally a gas spring), which are coupled through a connecting rod.
Operation:
In the combustion chamber, a mixture of fuel and air is ignited, increasing the pressure and forcing the moving parts (connection rod, linear generator and pistons) in the direction of the gas spring. The gas spring is compressed, and, while the piston is near the bottom dead center (BDC), fresh air and fuel are injected into the combustion chamber, expelling the exhaust gases.
Operation:
The gas spring pushes the moving parts assembly back to the top dead center (TDC), compressing the mixture of air and fuel that was injected and the cycle repeats. This works in a similar manner to the two-stroke engine, however it is not the only possible configuration.
The linear generator can generate a force opposed to the motion, not only during expansion but also during compression. The magnitude and the force profile affect the piston movement, as well as the overall efficiency.
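To make the effect of the opposing generator force concrete, the following toy one-dimensional simulation sketches a single expansion stroke. Every parameter and force model here (a polytropic gas force, a linearised gas spring, a velocity-proportional generator load) is an illustrative assumption, not data or equations from any real FPLG design:

```python
m = 2.0                   # moving mass: piston, rod and magnets [kg] (assumed)
x_tdc = 0.02              # piston position at top dead centre [m] (assumed)
p0, V0 = 2.0e6, 5.0e-5    # combustion pressure [Pa] and volume [m^3] at TDC (assumed)
area = 2.0e-3             # piston area [m^2] (assumed)
k_spring = 3.0e4          # linearised gas-spring stiffness [N/m] (assumed)
gamma = 1.3               # polytropic exponent of the expanding gas (assumed)

def gas_force(x):
    """Force from the expanding combustion gas (polytropic expansion)."""
    V = V0 + area * (x - x_tdc)
    return p0 * (V0 / V) ** gamma * area

def spring_force(x):
    """Linearised gas-spring force pushing the piston back towards TDC."""
    return -k_spring * (x - x_tdc)

def expansion_stroke(c_gen, dt=1e-6, max_t=0.05):
    """Integrate one stroke; the generator opposes the motion with F = -c_gen * v."""
    x, v, e_gen = x_tdc, 0.0, 0.0
    for _ in range(int(max_t / dt)):
        f = gas_force(x) + spring_force(x) - c_gen * v
        v += (f / m) * dt
        x += v * dt
        e_gen += c_gen * v * v * dt      # electrical energy taken by the load
        if v <= 0.0:                     # piston has reached its turning point
            break
    return x - x_tdc, e_gen

for c_gen in (0.0, 200.0, 400.0):
    stroke, energy = expansion_stroke(c_gen)
    print(f"load {c_gen:5.0f} N*s/m -> stroke {stroke:.3f} m, energy {energy:5.1f} J")
```

Running it for a few load values shows the turning point of the piston (and hence the effective stroke and compression of the following cycle) moving with the load, which is the dependence on the generator force magnitude and profile described above.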
Variations:
The FPLG has been conceived in many different configurations, but for most applications, particularly for the automotive industry, focus has been on two opposed pistons in the same cylinder with one combustion chamber with a gas spring at the end of each cylinder. This balances out the forces in order to reduce vibration and noise. In the simplest case, a second unit is just a mirror of the first, with no functional connection to the first. Alternatively, a single combustion chamber or gas spring can be used, allowing for a more compact design and easier synchronization between the pistons.
Variations:
The gas spring and combustion chamber can be placed on the ends of the connection rods, or they can share the same piston, using opposite sides in order to reduce space.
The linear generator itself has also many different configurations and forms. It can be designed as round tube, a cylinder or even flat plate in order to reduce the center of gravity, and/or improve the heat dissipation.
Variations:
The free-piston linear generator's great versatility comes from the absence of a crankshaft, which removes a major pumping loss and gives the engine a further degree of freedom. The combustion cycle can be two-stroke or four-stroke. However, a four-stroke cycle requires much more intermediate storage of energy (in a conventional engine, the rotational inertia of the crankshaft) to propel the piston through the four strokes. Without a crankshaft, a gas spring would need to power the piston through the intake, compression, and exhaust strokes. This is why most current research focuses on the two-stroke cycle.
Variations:
Several variations are possible for combustion: spark ignition (Otto), compression ignition (Diesel), and homogeneous charge compression ignition (HCCI).
The DLR research:
The Institute of Vehicle Concepts of the German Aerospace Center has been developing an FPLG (Freikolbenlineargenerator, FKLG) since 2002 and has published several papers on the subject. During the first few years of research, the theoretical background and the three subsystems were developed separately. In 2013, the first complete system was built and operated successfully. The German center is currently working on the second version of the complete system, in which two opposed cylinders will be used in order to reduce vibration and noise, making it viable for the automotive industry.
**Federative International Programme on Anatomical Terminology**
Federative International Programme on Anatomical Terminology:
The Federative International Programme for Anatomical Terminology (FIPAT) is a group of experts who review, analyze, and discuss the terms of the morphological structures of the human body. It was created by the International Federation of Associations of Anatomists (IFAA) and was previously known as the Federative Committee on Anatomical Terminology (FCAT) and the Federative International Committee on Anatomical Terminology (FICAT).
Origins and history:
This Committee was created in 1989, at the XIII International Congress of Anatomists, held in Rio de Janeiro (Brazil). It followed the old International Anatomical Nomenclature Committee (IANC).
The professionals involved are renowned professors and researchers with knowledge of medical terminology.
They hold periodic meetings in different countries on a rotating basis, where they study the morphological terminology of the human being: anatomical, histological and embryological.
The results of this committee were published in 1998 in the anatomical area and in 2008 in the histological area. It is currently working in the embryological area.
Objectives and scope:
The main objective is to study the problems of morphological terminology and their possible solutions.
The aim is to achieve a common scientific language that allows international integration, facilitating scientific exchange and progress in the various medical specialties.
This impacts on research, teaching and medical care worldwide. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ptosis**
Ptosis:
Ptosis (from the Greek: πτῶσις 'falling', 'a fall', 'dropped') refers to droopiness or abnormal downward displacement of a body part or organ. Particular cases include: Ptosis (eyelid), Ptosis (chin), Ptosis (breasts), visceroptosis (of the abdominal viscera), gastroptosis (of the stomach), and nephroptosis (of the kidney).
**Solvent vapour annealing**
Solvent vapour annealing:
Solvent vapor annealing (SVA) is a widely used technique for controlling the morphology and ordering of block copolymer (BCP) films. By controlling the block ratio (f = NA/N) and forming a swollen, mobile thin-film layer with added solvent vapor to facilitate self-assembly of the polymer blocks, sphere, cylinder, gyroid, and lamellar structures can be generated. The process improves lateral ordering by several orders of magnitude compared to previous methods and is a milder alternative to thermal annealing.
Solvent vapour annealing:
Ideally, SVA takes place in a metal chamber that is inert to the given solvent, allowing high precision in forming the desired nanostructures. Computer-controlled valves for solvent addition and withdrawal are used to increase precision further. This regulated inlet, along with close monitoring of pressure gauges and film thickness, allows an immediate response and control while the annealing and evaporation phases proceed.
Factors Affecting SVA:
The first consideration in SVA is the choice of solvent and the target nanostructure. For example, if a hierarchical structure is desired, an ideal solvent is one whose vapor selectively mobilizes the amorphous chains of a semi-crystalline polymer while preserving the integrity of the crystals, allowing the secondary structure to form. As for the BCP itself, it forms ordered nanostructures because of thermodynamic differences between the blocks. The equilibrium morphology can be predicted from the molar mass of the blocks, the degree of polymerization of the chains (N), and the Flory-Huggins interaction parameter (χ), which measures how incompatible the blocks are. These factors, together with the composition of the BCP, drive microphase separation of the chains and rearrangement into the desired product. Composition is especially important: knowing the block sequence, such as alternating AB blocks, indicates how the polymer will partition into domains.
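Since the equilibrium morphology is set largely by composition and χN, the qualitative prediction can be written as a simple lookup. The Python sketch below uses approximate, illustrative composition windows for a conformationally symmetric AB diblock; the exact phase boundaries depend on χN and are assumptions here, as is the mean-field order-disorder threshold of roughly χN ≈ 10.5.

```python
def diblock_morphology(f_A: float, chi_N: float) -> str:
    """
    Very rough morphology guess for an AB diblock copolymer melt.
    f_A   : volume fraction of block A (0 < f_A < 1)
    chi_N : Flory-Huggins parameter times degree of polymerization
    The boundaries below are illustrative round numbers, not exact values
    from the mean-field phase diagram.
    """
    if not 0.0 < f_A < 1.0:
        raise ValueError("f_A must lie strictly between 0 and 1")
    if chi_N < 10.5:          # approximate order-disorder value for f_A = 0.5
        return "disordered (chi*N too small for microphase separation)"
    f = min(f_A, 1.0 - f_A)   # the phase diagram is symmetric about f = 0.5
    if f < 0.17:
        return "spheres (minority block forms spheres)"
    if f < 0.28:
        return "hexagonally packed cylinders"
    if f < 0.34:
        return "gyroid (bicontinuous network)"
    return "lamellae"

for fA in (0.10, 0.25, 0.32, 0.50):
    print(fA, "->", diblock_morphology(fA, chi_N=25))
```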
Factors Affecting SVA:
Along with this, the selection of a specific type of block copolymer is important for the process and its effectiveness. The main considerations are the structure of each block at room temperature and the temperatures at which each block begins to change phase. Knowing these temperatures is critical for determining when each block will begin to take up solvent and at what rate, which in turn determines how the annealing can push the block copolymer toward a desired morphology.
Factors Affecting SVA:
Other factors that affect SVA include process parameters such as vapor pressure, solvent concentration in the film, and the evaporation rate of the solvent. Each of these contributes to the volatility and occasional imprecision of the method, which lacks a set mechanism for constructing desired structures such as nanocylinders. Reliable control over the final morphology has yet to be achieved, given the many factors dictating formation.
Applications:
There are many applications in technology and laboratory work for this process of creating desired polymer morphologies. One application is inscribing secondary nanostructures onto electrospun fibers. With poly(ε-caprolactone) (PCL) fibers, solvents such as acetone can mobilize the amorphous chains of the block copolymer onto a pre-existing crystal, producing the inscribed secondary structure. When PCL is annealed with acetone, the amorphous chains can be moved to a desired region while the fully crystallized regions stay intact. With careful choice of the semi-crystalline polymer and an appropriate solvent vapor, this simple process can be applied to many different systems and allows the creation of many types of hierarchical polymer materials.
Applications:
Another application of SVA is in creating and improving photovoltaic device efficiency through the annealing of perovskite materials. Better performance of these cells depends on higher-quality perovskite films, and SVA can be used to create such films. Solvent engineering is key to making the perovskite material, and annealing in an anhydrous isopropanol environment, in which the crystalline material has low solubility, greatly increases performance. The use of SVA here points to a more efficient and promising path for using specific materials to advance solar energy devices.
Challenges and Areas to Focus on for Improvement:
There are some main areas of focus for the future of SVA to keep improving and remain innovative. First, the chambers in which SVA takes place should continue to be improved to increase the precision of the process as well as the reproducibility of a given structure from attempt to attempt. Which chamber components and parameters actually govern reproducibility has so far been considered largely hypothetically. It is imperative to continue to improve control over the annealing by controlling all factors, such as humidity and temperature. Defining such parameters meticulously is what would allow multiple labs to reproduce a given structure to the same effect.
Challenges and Areas to Focus on for Improvement:
Next, with improvements to the apparatus in which the process takes place, in situ studies using X-ray and neutron scattering methods can give highly accurate pictures of the swollen and dried states of the BCP. Methods such as ellipsometry and interferometry can also reveal the thickness of the polymer films in different states and the nanostructure orientation, which will help clarify the equilibrium structure and the kinetics of developing a specified morphology. It is also important to be able to define small-molecule additions to different parts of the block copolymer at different points of the annealing and evaporation, so as to know precisely how these moieties create particular orientations and directionality in the structure. The final area moving forward is the implementation of the created block copolymers in their intended applications and technology, beyond lab study and characterization of the method. It is important to go beyond creating the nanostructures and move to assessing their utility in applications, which will reveal practical shortcomings of the created polymers and areas for improvement, such as film integrity and the attachment strength of the amorphous chains. Going beyond simple surface imaging will also expose some of the dangers and hindrances to functionality, such as the toxicity of working with organic solvents or issues with dewetting of the swollen BCP.
**CHM-081**
CHM-081:
CHM-081 (SGT-4) is a recreational designer drug which is classed as a synthetic cannabinoid. It is from the naphthoylindole family, being the 1-cyclohexylmethyl instead of 1-pentyl analogue of JWH-081, and produces cannabis-like effects. It has been identified as an ingredient in synthetic cannabis products in various countries including the USA and Australia. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SCIMP protein**
SCIMP protein:
SLP65/SLP76, Csk-interacting membrane protein, termed SCIMP, belongs to the family of transmembrane adaptor proteins (TRAPs) that do not directly associate with a receptor, such as LAT, NTAL, LIME or LAX. SCIMP is expressed in antigen-presenting cells (APCs), namely B cells, bone marrow-derived dendritic cells and macrophages.
Structure and interactions:
Like other TRAPs, SCIMP has a negligible extracellular domain and a transmembrane domain followed by an intracellular domain containing several tyrosines and one proline-rich region (PRR). Upon phosphorylation, these tyrosines serve as docking sites for proteins containing SH2 domains. In contrast to phospho-tyrosines, proline-rich regions are generally less susceptible to post-translational modifications and are instead targets of constitutive interactions with proteins containing SH3 domains. It has been shown that SCIMP interacts via SH2 domains with the Csk kinase, a negative regulator of Src family kinases, but also with the SLP65/SLP76 and Grb2 adaptors, which are key pro-signalling soluble adaptor proteins in the lymphocyte signalling network. SCIMP is constitutively associated with the Lyn kinase via an SH3 domain.
Membrane localization:
Some TRAPs are palmitoylated in the border region between the transmembrane and intracellular domains. The aliphatic chain of palmitic acid is anchored in the membrane bilayer and thus influences protein targeting to membrane microdomains. SCIMP is also palmitoylated and is associated with tetraspanin-enriched microdomains (TEMs). TEMs, unlike lipid rafts, are based more on protein-protein interactions than on lipid-lipid or lipid-protein interactions. One of the resident proteins in TEMs is the MHC class II molecule. SCIMP is present in the immunological synapse formed during antigen presentation between a T cell and an antigen-presenting cell (APC).
In vitro studies and putative function:
SCIMP becomes strongly phosphorylated after MHC II stimulation. Studies performed with the fusion protein CD25-SCIMP showed its ability to induce calcium release and Erk phosphorylation upon anti-CD25 antibody treatment. The calcium release was even stronger with a CD25-SCIMP protein mutated in the binding site for Csk, indicating a negative feedback loop mediated by the Csk kinase. Fusion proteins are commonly used to study the signalling ability of proteins whose small extracellular domain is hidden from antibodies in the membrane glycocalyx. However, knock-down of SCIMP did not influence calcium release after anti-MHC II antibody treatment; it only decreased the level of Erk phosphorylation at a later time point (10 min).
**Gardeau Formation**
Gardeau Formation:
The Gardeau Formation is a geologic formation in New York. It preserves fossils dating back to the Devonian period. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Knocking Piece II**
Knocking Piece II:
Knocking Piece II is a piece of music by Ben Johnston. It is a continuation of Knocking Piece I. In both pieces, Johnston calls for unconventional methods of playing conventional instruments. In Knocking Piece II, percussionists use bouncy balls and brushes on their playing surfaces, while a deck of cards provides instructions. The instrumentation of Knocking Piece II is undefined, and various symbols represent different actions. For example, "X" signifies super balls, the bouncy balls used to strike the playing surface. Seven percussionists and one sound technician fulfill their duties according to the playing cards' instructions: while the seven percussionists play their surfaces, the sound technician follows the instructions at the mixer or soundboard.
Knocking Piece II:
In addition to some guidelines for allowing different qualities in the sound of the percussion instruments, a deck of cards is used to give further instructions. For example, suits determine loudness. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**KDE Plasma 4**
KDE Plasma 4:
KDE Plasma 4 is the fourth generation of the KDE workspace environments. It consisted of three workspaces, each targeting a certain platform: Plasma Desktop for traditional desktop PCs and notebooks, Plasma Netbook for netbooks, and Plasma Active for tablet PCs and similar devices. KDE Plasma 4 was released as part of KDE Software Compilation 4 and replaced Kicker, KDesktop, and SuperKaramba, which formed the Desktop in earlier KDE releases. They are bundled as the default environment with a number of free software operating systems, such as Chakra, Kubuntu, Mageia (DVD version), openSUSE, or TrueOS. With the release of KDE SC 4.11 on 14 August 2013, KDE Plasma 4 was placed into a feature freeze and turned into a long-term stable package until August 2015. On 15 July 2014 KDE Plasma 4's successor, KDE Plasma 5, was released.
Features:
Plasma features containments, which are essentially applets that contain other applets. Two examples of containments are the desktop background and the taskbar. A containment can be anything the developer wants: an image (either raster graphics or an SVG image), animation, or even OpenGL. Images are most commonly used, but with Plasma the user could set any applet as the desktop background without losing functionality of the applet. This also allows for applets to be dragged between the desktop and the taskbar (two separate containments), and have a separate visualization for the more confined taskbar.
Features:
Plasma separates components into "data engine" and their visualization counterparts. This is intended to reduce the total programming effort when there are multiple possible visualizations of given data; and to make it easier for the data engine and the workspaces to be written independently.
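Conceptually, this split is a publish-subscribe arrangement: one data engine feeds any number of visualizations that know nothing about where the data comes from. The Python sketch below illustrates the idea only; it is not Plasma's actual C++/QML DataEngine API, and all class and function names are invented.

```python
from typing import Callable, Dict, List


class DataEngine:
    """Fetches and stores data; knows nothing about how it is displayed.
    (Conceptual stand-in for a Plasma data engine, not the real API.)"""

    def __init__(self) -> None:
        self._data: Dict[str, object] = {}
        self._subscribers: List[Callable[[Dict[str, object]], None]] = []

    def connect_visualization(self, callback: Callable[[Dict[str, object]], None]) -> None:
        self._subscribers.append(callback)
        callback(dict(self._data))          # push the current state immediately

    def update(self, key: str, value: object) -> None:
        self._data[key] = value
        for notify in self._subscribers:    # every visualization sees the same data
            notify(dict(self._data))


# Two independent "visualizations" of one "time" engine.
def digital_clock(data):
    print("digital:", data.get("time", "--:--"))

def spoken_clock(data):
    print("spoken :", "it is", data.get("time", "unknown"))

engine = DataEngine()
engine.connect_visualization(digital_clock)
engine.connect_visualization(spoken_clock)
engine.update("time", "13:37")              # one update drives both widgets
```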
The scalable nature of the Plasma widgets allows for them to be resized and rotated to any size, with only a brief pause to redraw themselves. The Kross scripting framework allows developers to write widgets in a variety of programming languages in addition to C++.
Features:
KRunner is a versatile tool with several functions. It replaces the "Run Command" dialog box from K Desktop Environment 3 and also inherits the application launcher feature, expanding the possibilities through a modular plug-in structure. KRunner stores previously entered commands and searches, accessible via an auto-complete feature. KRunner can be shown on the desktop via the keyboard combination Alt+F2 or by selecting "Run Command ..." in the desktop menu.
Features:
These functions are handled by plugins: Application launcher: type at least three letters of the desired name or description, and KRunner shows the applications associated with the search terms and allows selection of the desired one.
Calculator: Simply enter the desired operation to show the result. It also supports sophisticated expressions.
Contacts can search for entries in KDE's address book allowing users to directly open, for example, KMail to write an e-mail. The address of the recipient of your choice is automatically added to the message.
Unit Converter converts values between different units of measure.
Web history: Search history of recently visited sites in Konqueror.
Recent documents: Search for recently opened files.
Available Plasma Workspaces:
Plasma Desktop was a standard desktop interface. It was declared mature with the release of KDE SC 4.2. It is designed for desktop PCs and larger laptops. In its default configuration it resembles K Desktop Environment 3 and Microsoft Windows XP, but extensive configurability allows radical departures from the default layout. Its technology is a fundamental rewrite of several desktop interaction programs included in previous KDE desktop environments for Unix-like systems, focusing on eye candy and special graphical effects. The Desktop Workspace replaces the previous KDesktop shell, Kicker taskbar and SuperKaramba widget engine used in the K Desktop Environment 3 series with a unified system of widgets that can be configured and replaced with alternative designs.
Available Plasma Workspaces:
From KDE 4.0 to KDE 4.2, the default theme, Oxygen, was characterized by dark tones. In KDE 4.3 it was replaced by the new Air theme, dominated by transparency and white as the base color. New themes for Plasma can be chosen and installed through software like Discover or online at store.kde.org.
Supported widgets: This is a list of widgets that the current release version of Plasma supports. Not all widgets are supported by default in all Linux distributions; some may require different packages, or even a recompilation of Plasma.
First generation native widgets (In C++, JavaScript, Ruby or Python. In many distributions, the Ruby and Python bindings must be downloaded separately as packages) Second generation native widgets written in QML.
Apple Dashboard widgets SuperKaramba widgets – used in KDE 3.
Web widgets (support HTML and JavaScript). Previous Plasma Workspaces releases also supported Edje gadgets and E17 modules; support for those was developed in 2008 but later, in 2010, removed. Google Gadgets were also supported, but after Google announced it would discontinue its two services that utilize Gadgets – Google Desktop and iGoogle – KDE removed support for this widget engine in early 2013.
Plasma Netbook is the second workspace. It aims at netbooks and may also be used on tablet PCs. The first stable release shipped with KDE SC 4.4.
Plasma Active was a workspace for devices with touchscreens. It shipped with several applications such as Kontact Touch and a document viewer based on Calligra Suite. It has been succeeded by KDE Plasma Mobile starting with KDE Plasma 5.
Contour was the name of an interface for tablet devices. Its development was started in April 2011 by basysKom. Replacing an earlier tablet prototype, Contour then became the main workspace UI of Plasma Active and was shipped as 1.0 in October 2011.
Plasma Mobile was targeted at smartphones and small tablet devices that are mainly used via touch input. It was originally expected to be released in 2011 along with Plasma Active 1.0, but development focus shifted towards Contour. A new version with the same name but based on KDE Frameworks 5 was announced on 25 July 2015.
History:
KDE 4.0 was released in January 2008. Linux.com described the reaction from users as a "revolt", writing that the backlash KDE 4.0 received was on a scale that was unprecedented for a FOSS project. Although it was a developer's release, several distributions made the KDE 4.0 desktop environment available to their users without specifying that it was an experimental option. openSUSE released a more polished KDE 4 option while other distributions "released packages that simply [didn't] work," according to project leader Aaron Seigo. As a result, many users complained about the loss of features and stability. A number of KDE developers, including project leader Aaron Seigo, were targeted for abuse by outlets like Linux Hater's Blog. Several KDE developers stepped back from the public scrutiny. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Biosurfactant**
Biosurfactant:
Biosurfactant usually refers to surfactants of microbial origin. Most of the biosurfactants produced by microbes are synthesized extracellularly, and many microbes are known to produce biosurfactants in large relative quantities. Some are of commercial interest. As secondary metabolites of microorganisms, biosurfactants can be produced by cultivating biosurfactant-producing microorganisms in the stationary phase on many sorts of low-priced substrates such as biochar, plant oils, carbohydrates, and wastes. High-level production of biosurfactants can be controlled by regulating environmental factors and growth conditions.
Classification:
Biosurfactants are usually categorized by their molecular structure. Like synthetic surfactants, they are composed of a hydrophilic moiety made up of amino acids, peptides, (poly)saccharides, or sugar alcohols and a hydrophobic moiety consisting of fatty acids. Correspondingly, the significant classes of biosurfactants include glycolipids, lipopeptides and lipoproteins, and polymeric surfactants as well as particulate surfactants.
Examples:
Common biosurfactants include: Bile salts are mixtures of micelle-forming compounds that encapsulate food, enabling absorption through the small intestine.
Lecithin, which can be obtained either from soybean or from egg yolk, is a common food ingredient.
Rhamnolipids, which can be produced by some species of Pseudomonas, e.g., Pseudomonas aeruginosa.
Sophorolipids are produced by various nonpathogenic yeasts.
Emulsan, produced by Acinetobacter calcoaceticus. Microbial biosurfactants are obtained by including immiscible liquids in the growth medium.
Applications:
Potential applications include herbicides and pesticides formulations, detergents, healthcare and cosmetics, pulp and paper, coal, textiles, ceramic processing and food industries, uranium ore-processing, and mechanical dewatering of peat.
Oil spill remediation: Biosurfactants enhance the emulsification of hydrocarbons, thus they have the potential to solubilise hydrocarbon contaminants and increase their availability for microbial degradation. These compounds can also be used in enhanced oil recovery and may be considered for other potential applications in environmental protection.
**C++03**
C++03:
C++03 is a version of the ISO/IEC 14882 standard for the C++ programming language. It is defined by two standards organizations, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), in standard ISO/IEC 14882:2003.
C++03:
C++03 replaced the prior version of the C++ standard, called C++98, and was later replaced by C++11. C++03 was primarily a bug fix release for the implementers to ensure greater consistency and portability. This revision addressed 92 core language defect reports, 125 library defect reports, and included only one new language feature: value initialization. Among the more noteworthy defect reports addressed by C++03 was the library defect report 69, whose resolution added the requirement that elements in a vector are stored contiguously. This codifies the common expectation that a C++ std::vector object uses a memory layout similar to an array. While most implementations satisfied this expectation, it was not required by C++98.
**T-norm fuzzy logics**
T-norm fuzzy logics:
T-norm fuzzy logics are a family of non-classical logics, informally delimited by having a semantics that takes the real unit interval [0, 1] for the system of truth values and functions called t-norms for permissible interpretations of conjunction. They are mainly used in applied fuzzy logic and fuzzy set theory as a theoretical basis for approximate reasoning.
T-norm fuzzy logics:
T-norm fuzzy logics belong in broader classes of fuzzy logics and many-valued logics. In order to generate a well-behaved implication, the t-norms are usually required to be left-continuous; logics of left-continuous t-norms further belong in the class of substructural logics, among which they are marked with the validity of the law of prelinearity, (A → B) ∨ (B → A). Both propositional and first-order (or higher-order) t-norm fuzzy logics, as well as their expansions by modal and other operators, are studied. Logics that restrict the t-norm semantics to a subset of the real unit interval (for example, finitely valued Łukasiewicz logics) are usually included in the class as well.
T-norm fuzzy logics:
Important examples of t-norm fuzzy logics are monoidal t-norm logic (MTL) of all left-continuous t-norms, basic logic (BL) of all continuous t-norms, product fuzzy logic of the product t-norm, or the nilpotent minimum logic of the nilpotent minimum t-norm. Some independently motivated logics belong among t-norm fuzzy logics, too, for example Łukasiewicz logic (which is the logic of the Łukasiewicz t-norm) or Gödel–Dummett logic (which is the logic of the minimum t-norm).
Motivation:
As members of the family of fuzzy logics, t-norm fuzzy logics primarily aim at generalizing classical two-valued logic by admitting intermediary truth values between 1 (truth) and 0 (falsity) representing degrees of truth of propositions. The degrees are assumed to be real numbers from the unit interval [0, 1]. In propositional t-norm fuzzy logics, propositional connectives are stipulated to be truth-functional, that is, the truth value of a complex proposition formed by a propositional connective from some constituent propositions is a function (called the truth function of the connective) of the truth values of the constituent propositions. The truth functions operate on the set of truth degrees (in the standard semantics, on the [0, 1] interval); thus the truth function of an n-ary propositional connective c is a function Fc: [0, 1]ⁿ → [0, 1]. Truth functions generalize truth tables of propositional connectives known from classical logic to operate on the larger system of truth values.
Motivation:
T-norm fuzzy logics impose certain natural constraints on the truth function of conjunction. The truth function ∗: [0, 1]² → [0, 1] of conjunction is assumed to satisfy the following conditions: Commutativity, that is, x∗y = y∗x for all x and y in [0, 1]. This expresses the assumption that the order of fuzzy propositions is immaterial in conjunction, even if intermediary truth degrees are admitted.
Associativity, that is, (x∗y)∗z=x∗(y∗z) for all x, y, and z in [0, 1]. This expresses the assumption that the order of performing conjunction is immaterial, even if intermediary truth degrees are admitted.
Monotony, that is, if x≤y then x∗z≤y∗z for all x, y, and z in [0, 1]. This expresses the assumption that increasing the truth degree of a conjunct should not decrease the truth degree of the conjunction.
Motivation:
Neutrality of 1, that is, 1∗x=x for all x in [0, 1]. This assumption corresponds to regarding the truth degree 1 as full truth, conjunction with which does not decrease the truth value of the other conjunct. Together with the previous conditions this condition ensures that also 0∗x=0 for all x in [0, 1], which corresponds to regarding the truth degree 0 as full falsity, conjunction with which is always fully false.
Motivation:
Continuity of the function ∗ (the previous conditions reduce this requirement to the continuity in either argument). Informally this expresses the assumption that microscopic changes of the truth degrees of conjuncts should not result in a macroscopic change of the truth degree of their conjunction. This condition, among other things, ensures a good behavior of (residual) implication derived from conjunction; to ensure the good behavior, however, left-continuity (in either argument) of the function ∗ is sufficient. In general t-norm fuzzy logics, therefore, only left-continuity of ∗ is required, which expresses the assumption that a microscopic decrease of the truth degree of a conjunct should not macroscopically decrease the truth degree of conjunction. These assumptions make the truth function of conjunction a left-continuous t-norm, which explains the name of the family of fuzzy logics (t-norm based). Particular logics of the family can make further assumptions about the behavior of conjunction (for example, Gödel–Dummett logic requires its idempotence) or other connectives (for example, the logic IMTL (involutive monoidal t-norm logic) requires the involutiveness of negation).
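These axioms are easy to check numerically for concrete t-norms. The following Python sketch defines the three basic continuous t-norms discussed below (Łukasiewicz, minimum and product) and verifies commutativity, associativity, monotony and neutrality of 1 on a finite grid of truth degrees (a spot check, not a proof).

```python
import itertools
import numpy as np

# The three basic continuous t-norms.
t_norms = {
    "Lukasiewicz": lambda x, y: max(x + y - 1.0, 0.0),
    "Godel (min)": lambda x, y: min(x, y),
    "product":     lambda x, y: x * y,
}

grid = np.linspace(0.0, 1.0, 21)            # truth degrees 0, 0.05, ..., 1

def satisfies_axioms(T, eps=1e-12):
    """Check commutativity, associativity, monotony and neutrality of 1 on the grid."""
    for x, y, z in itertools.product(grid, repeat=3):
        if abs(T(x, y) - T(y, x)) > eps:                 return False  # commutativity
        if abs(T(T(x, y), z) - T(x, T(y, z))) > eps:     return False  # associativity
        if x <= y and T(x, z) > T(y, z) + eps:           return False  # monotony
    return all(abs(T(1.0, x) - x) <= eps for x in grid)                # neutrality of 1

for name, T in t_norms.items():
    print(f"{name:12s} satisfies the t-norm axioms on the grid: {satisfies_axioms(T)}")
```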
Motivation:
All left-continuous t-norms ∗ have a unique residuum, that is, a binary function ⇒ such that for all x, y, and z in [0, 1], x∗y≤z if and only if x≤y⇒z.
The residuum (x ⇒ y) of a left-continuous t-norm can explicitly be defined as sup {z ∣ z∗x ≤ y}.
This ensures that the residuum is the pointwise largest function such that for all x and y, x∗(x⇒y)≤y.
Motivation:
The latter can be interpreted as a fuzzy version of the modus ponens rule of inference. The residuum of a left-continuous t-norm thus can be characterized as the weakest function that makes the fuzzy modus ponens valid, which makes it a suitable truth function for implication in fuzzy logic. Left-continuity of the t-norm is the necessary and sufficient condition for this relationship between a t-norm conjunction and its residual implication to hold.
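The residuum can also be approximated directly from its defining supremum. The Python sketch below computes it on a finite grid for the three basic t-norms, compares the result with the standard closed-form residua, and spot-checks the fuzzy modus ponens inequality x∗(x⇒y) ≤ y; a finite grid check is of course only evidence, not a proof.

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 101)

def residuum(T, x, y):
    """sup { z : T(z, x) <= y }, approximated on a finite grid of z values."""
    zs = [z for z in grid if T(z, x) <= y + 1e-12]
    return max(zs) if zs else 0.0

lukasiewicz = lambda x, y: max(x + y - 1.0, 0.0)
godel       = lambda x, y: min(x, y)
product     = lambda x, y: x * y

# Closed-form residua for comparison (standard results):
#   Lukasiewicz: min(1, 1 - x + y);  Godel: 1 if x <= y else y;  product: 1 if x <= y else y / x
print(residuum(lukasiewicz, 0.7, 0.4), min(1.0, 1.0 - 0.7 + 0.4))
print(residuum(godel, 0.7, 0.4), 0.4)
print(residuum(product, 0.8, 0.4), 0.5)

# Fuzzy modus ponens: x * (x => y) <= y must hold for every x, y.
ok = all(lukasiewicz(x, residuum(lukasiewicz, x, y)) <= y + 1e-9
         for x in grid for y in grid)
print("modus ponens inequality holds on the grid:", ok)
```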
Motivation:
Truth functions of further propositional connectives can be defined by means of the t-norm and its residuum, for instance the residual negation ¬x=(x⇒0) or bi-residual equivalence x⇔y=(x⇒y)∗(y⇒x).
Motivation:
Truth functions of propositional connectives may also be introduced by additional definitions: the most usual ones are the minimum (which plays a role of another conjunctive connective), the maximum (which plays a role of a disjunctive connective), or the Baaz Delta operator, defined in [0, 1] as Δx=1 if x=1 and Δx=0 otherwise. In this way, a left-continuous t-norm, its residuum, and the truth functions of additional propositional connectives determine the truth values of complex propositional formulae in [0, 1].
Motivation:
Formulae that always evaluate to 1 are called tautologies with respect to the given left-continuous t-norm ∗, or ∗-tautologies. The set of all ∗-tautologies is called the logic of the t-norm ∗, as these formulae represent the laws of fuzzy logic (determined by the t-norm) that hold (to degree 1) regardless of the truth degrees of atomic formulae. Some formulae are tautologies with respect to a larger class of left-continuous t-norms; the set of such formulae is called the logic of the class. Important t-norm logics are the logics of particular t-norms or classes of t-norms, for example: Łukasiewicz logic is the logic of the Łukasiewicz t-norm max(x + y − 1, 0); Gödel–Dummett logic is the logic of the minimum t-norm min(x, y); product fuzzy logic is the logic of the product t-norm x∗y = x⋅y; monoidal t-norm logic MTL is the logic of (the class of) all left-continuous t-norms; basic fuzzy logic BL is the logic of (the class of) all continuous t-norms. It turns out that many logics of particular t-norms and classes of t-norms are axiomatizable. The completeness theorem of the axiomatic system with respect to the corresponding t-norm semantics on [0, 1] is then called the standard completeness of the logic. Besides the standard real-valued semantics on [0, 1], the logics are sound and complete with respect to general algebraic semantics, formed by suitable classes of prelinear commutative bounded integral residuated lattices.
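Tautology claims of this kind can be probed by brute force: evaluate the formula over a grid of truth degrees and check that it always comes out 1. Below, the prelinearity law (A → B) ∨ (B → A) mentioned earlier is checked for the three basic t-norms, with weak disjunction interpreted as the maximum; the closed-form residual implications used are the standard ones, and the finite grid makes this a spot check rather than a proof.

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 101)

# The three basic t-norms, listed alongside their standard residual implications.
logics = {
    "Lukasiewicz": (lambda x, y: max(x + y - 1.0, 0.0),
                    lambda x, y: min(1.0, 1.0 - x + y)),
    "Godel":       (lambda x, y: min(x, y),
                    lambda x, y: 1.0 if x <= y else y),
    "product":     (lambda x, y: x * y,
                    lambda x, y: 1.0 if x <= y else y / x),
}

def prelinearity_holds(implies, eps=1e-9):
    """Check that (A -> B) v (B -> A) evaluates to 1 for all grid truth degrees,
    with weak disjunction interpreted as max."""
    return all(max(implies(a, b), implies(b, a)) >= 1.0 - eps
               for a in grid for b in grid)

for name, (T, implies) in logics.items():
    print(f"(A->B) v (B->A) is a tautology for {name}: {prelinearity_holds(implies)}")
```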
History:
Some particular t-norm fuzzy logics have been introduced and investigated long before the family was recognized (even before the notions of fuzzy logic or t-norm emerged): Łukasiewicz logic (the logic of the Łukasiewicz t-norm) was originally defined by Jan Łukasiewicz (1920) as a three-valued logic; it was later generalized to n-valued (for all finite n) as well as infinitely-many-valued variants, both propositional and first-order.
History:
Gödel–Dummett logic (the logic of the minimum t-norm) was implicit in Gödel's 1932 proof of infinite-valuedness of intuitionistic logic. Later (1959) it was explicitly studied by Dummett who proved a completeness theorem for the logic. A systematic study of particular t-norm fuzzy logics and their classes began with Hájek's (1998) monograph Metamathematics of Fuzzy Logic, which presented the notion of the logic of a continuous t-norm, the logics of the three basic continuous t-norms (Łukasiewicz, Gödel, and product), and the 'basic' fuzzy logic BL of all continuous t-norms (all of them both propositional and first-order). The book also started the investigation of fuzzy logics as non-classical logics with Hilbert-style calculi, algebraic semantics, and metamathematical properties known from other logics (completeness theorems, deduction theorems, complexity, etc.).
History:
Since then, a plethora of t-norm fuzzy logics have been introduced and their metamathematical properties have been investigated. Some of the most important t-norm fuzzy logics were introduced in 2001, by Esteva and Godo (MTL, IMTL, SMTL, NM, WNM), Esteva, Godo, and Montagna (propositional ŁΠ), and Cintula (first-order ŁΠ).
Logical language:
The logical vocabulary of propositional t-norm fuzzy logics standardly comprises the following connectives: Implication → (binary). In the context of other than t-norm-based fuzzy logics, the t-norm-based implication is sometimes called residual implication or R-implication, as its standard semantics is the residuum of the t-norm that realizes strong conjunction.
Strong conjunction & (binary). In the context of substructural logics, the sign ⊗ and the names group, intensional, multiplicative, or parallel conjunction are often used for strong conjunction.
Logical language:
Weak conjunction ∧ (binary), also called lattice conjunction (as it is always realized by the lattice operation of meet in algebraic semantics). In the context of substructural logics, the names additive, extensional, or comparative conjunction are sometimes used for lattice conjunction. In the logic BL and its extensions (though not in t-norm logics in general), weak conjunction is definable in terms of implication and strong conjunction, by A ∧ B ≡ A & (A → B). The presence of two conjunction connectives is a common feature of contraction-free substructural logics.
Logical language:
Bottom ⊥ (nullary); 0 or 0¯ are common alternative signs and zero a common alternative name for the propositional constant (as the constants bottom and zero of substructural logics coincide in t-norm fuzzy logics). The proposition ⊥ represents the falsity or absurdum and corresponds to the classical truth value false.
Negation ¬ (unary), sometimes called residual negation if other negation connectives are considered, as it is defined from the residual implication by the reductio ad absurdum: ¬A ≡ A → ⊥. Equivalence ↔ (binary), defined as A ↔ B ≡ (A → B) ∧ (B → A). In t-norm logics, the definition is equivalent to (A → B) & (B → A).
Logical language:
(Weak) disjunction ∨ (binary), also called lattice disjunction (as it is always realized by the lattice operation of join in algebraic semantics). In t-norm logics it is definable in terms of other connectives as A ∨ B ≡ ((A → B) → B) ∧ ((B → A) → A). Top ⊤ (nullary), also called one and denoted by 1 or 1¯ (as the constants top and one of substructural logics coincide in t-norm fuzzy logics). The proposition ⊤ corresponds to the classical truth value true and can in t-norm logics be defined as ⊤ ≡ ⊥ → ⊥. Some propositional t-norm logics add further propositional connectives to the above language, most often the following ones: The Delta connective △ is a unary connective that asserts classical truth of a proposition, as the formulae of the form △A behave as in classical logic. Also called the Baaz Delta, as it was first used by Matthias Baaz for Gödel–Dummett logic. The expansion of a t-norm logic L by the Delta connective is usually denoted by L△.
Logical language:
Truth constants are nullary connectives representing particular truth values between 0 and 1 in the standard real-valued semantics. For the real number r , the corresponding truth constant is usually denoted by r¯.
Most often, the truth constants for all rational numbers are added. The system of all truth constants in the language is supposed to satisfy the bookkeeping axioms, for example r¯ & s¯ ↔ (r∗s)¯ and (r¯ → s¯) ↔ (r⇒s)¯, etc., for all propositional connectives and all truth constants definable in the language.
Involutive negation ∼ (unary) can be added as an additional negation to t-norm logics whose residual negation is not itself involutive, that is, if it does not obey the law of double negation ¬¬A↔A . A t-norm logic L expanded with involutive negation is usually denoted by L∼ and called L with involution.
Logical language:
Strong disjunction ⊕ (binary). In the context of substructural logics it is also called group, intensional, multiplicative, or parallel disjunction. Even though standard in contraction-free substructural logics, in t-norm fuzzy logics it is usually used only in the presence of involutive negation, which makes it definable (and so axiomatizable) by de Morgan's law from strong conjunction: A ⊕ B ≡ ∼(∼A & ∼B).
Additional t-norm conjunctions and residual implications. Some expressively strong t-norm logics, for instance the logic ŁΠ, have more than one strong conjunction or residual implication in their language. In the standard real-valued semantics, all such strong conjunctions are realized by different t-norms and the residual implications by their residua.
Well-formed formulae of propositional t-norm logics are defined from propositional variables (usually countably many) by the above logical connectives, as usual in propositional logics. In order to save parentheses, it is common to use the following order of precedence: unary connectives (bind most closely); binary connectives other than implication and equivalence; implication and equivalence (bind most loosely).
First-order variants of t-norm logics employ the usual logical language of first-order logic with the above propositional connectives and the following quantifiers: the general quantifier ∀ and the existential quantifier ∃. The first-order variant of a propositional t-norm logic L is usually denoted by L∀.
Semantics:
Algebraic semantics is predominantly used for propositional t-norm fuzzy logics, with three main classes of algebras with respect to which a t-norm fuzzy logic L is complete: General semantics, formed of all L -algebras — that is, all algebras for which the logic is sound.
Linear semantics, formed of all linear L -algebras — that is, all L -algebras whose lattice order is linear.
Semantics:
Standard semantics, formed of all standard L -algebras — that is, all L -algebras whose lattice reduct is the real unit interval [0, 1] with the usual order. In standard L -algebras, the interpretation of strong conjunction is a left-continuous t-norm and the interpretation of most propositional connectives is determined by the t-norm (hence the names t-norm-based logics and t-norm L -algebras, which is also used for L -algebras on the lattice [0, 1]). In t-norm logics with additional connectives, however, the real-valued interpretation of the additional connectives may be restricted by further conditions for the t-norm algebra to be called standard: for example, in standard L∼ -algebras of the logic L with involution, the interpretation of the additional involutive negation ∼ is required to be the standard involution f∼(x)=1−x, rather than other involutions that can also interpret ∼ over t-norm L∼ -algebras. In general, therefore, the definition of standard t-norm algebras has to be explicitly given for t-norm logics with additional connectives. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tropical cyclone forecast model**
Tropical cyclone forecast model:
A tropical cyclone forecast model is a computer program that uses meteorological data to forecast aspects of the future state of tropical cyclones. There are three types of models: statistical, dynamical, or combined statistical-dynamic. Dynamical models utilize powerful supercomputers with sophisticated mathematical modeling software and meteorological data to calculate future weather conditions. Statistical models forecast the evolution of a tropical cyclone in a simpler manner, by extrapolating from historical datasets, and thus can be run quickly on platforms such as personal computers. Statistical-dynamical models use aspects of both types of forecasting. Four primary types of forecasts exist for tropical cyclones: track, intensity, storm surge, and rainfall. Dynamical models were not developed until the 1970s and the 1980s, with earlier efforts focused on the storm surge problem.
Tropical cyclone forecast model:
Track models did not show forecast skill when compared to statistical models until the 1980s. Statistical-dynamical models were used from the 1970s into the 1990s. Early models use data from previous model runs while late models produce output after the official hurricane forecast has been sent. The use of consensus, ensemble, and superensemble forecasts lowers errors more than any individual forecast model. Both consensus and superensemble forecasts can use the guidance of global and regional models runs to improve the performance more than any of their respective components. Techniques used at the Joint Typhoon Warning Center indicate that superensemble forecasts are a very powerful tool for track forecasting.
Statistical guidance:
The first statistical guidance used by the National Hurricane Center was the Hurricane Analog Technique (HURRAN), which was available in 1969. It used the newly developed North Atlantic tropical cyclone database to find storms with similar tracks. It then shifted their tracks through the storm's current path, and used location, direction and speed of motion, and the date to find suitable analogs. The method did well with storms south of the 25th parallel which had not yet turned northward, but poorly with systems near or after recurvature. Since 1972, the Climatology and Persistence (CLIPER) statistical model has been used to help generate tropical cyclone track forecasts. In the era of skillful dynamical forecasts, CLIPER is now being used as the baseline to show model and forecaster skill. The Statistical Hurricane Intensity Forecast (SHIFOR) has been used since 1979 for tropical cyclone intensity forecasting. It uses climatology and persistence to predict future intensity, including the current Julian day, current cyclone intensity, the cyclone's intensity 12 hours ago, the storm's initial latitude and longitude, as well as its zonal (east-west) and meridional (north-south) components of motion.
A series of statistical-dynamical models, which used regression equations based upon CLIPER output and the latest output from primitive equation models run at the National Meteorological Center, then National Centers for Environmental Prediction, were developed between the 1970s and 1990s and were named NHC73, NHC83, NHC90, NHC91, and NHC98. Within the field of tropical cyclone track forecasting, despite the ever-improving dynamical model guidance which occurred with increased computational power, it was not until the decade of the 1980s when numerical weather prediction showed skill, and until the 1990s when it consistently outperformed statistical or simple dynamical models. In 1994, a version of SHIFOR was created for the northwest Pacific Ocean for typhoon forecasting, known as the Statistical Typhoon Intensity Forecast (STIFOR), which used the 1971–1990 data for that region to develop intensity forecasts out to 72 hours into the future.
In regards to intensity forecasting, the Statistical Hurricane Intensity Prediction Scheme (SHIPS) utilizes relationships between environmental conditions from the Global Forecast System (GFS) such as vertical wind shear and sea surface temperatures, climatology, and persistence (storm behavior) via multiple regression techniques to come up with an intensity forecast for systems in the northern Atlantic and northeastern Pacific oceans. A similar model was developed for the northwest Pacific Ocean and Southern Hemisphere known as the Statistical Intensity Prediction System (STIPS), which accounts for land interactions through the input environmental conditions from the Navy Operational Global Prediction System (NOGAPS) model. The version of SHIPS with an inland decay component is known as Decay SHIPS (DSHIPS). The Logistic Growth Equation Model (LGEM) uses the same input as SHIPS but within a simplified dynamical prediction system. Within tropical cyclone rainfall forecasting, the Rainfall Climatology and Persistence (r-CLIPER) model was developed using microwave rainfall data from polar orbiting satellites over the ocean and first-order rainfall measurements from the land, to come up with a realistic rainfall distribution for tropical cyclones based on the National Hurricane Center's track forecast. It has been operational since 2004.
A statistical-parametric wind radii model has been developed for use at the National Hurricane Center and Joint Typhoon Warning Center which uses climatology and persistence to predict wind structure out to five days into the future.
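The regression idea behind CLIPER/SHIFOR-type aids can be shown in a few lines: collect simple predictors (date, current and 12-hour-old intensity, position, motion), fit ordinary least squares against the observed outcome, then apply the fitted coefficients to a new case. The Python sketch below does exactly that on fabricated data; the predictors mirror the ones listed above, but every number is synthetic and the fitted relationship is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "historical" cases: each row is one storm fix (all values fabricated).
n = 500
julian_day    = rng.uniform(150, 330, n)               # day of year
intensity_0h  = rng.uniform(25, 120, n)                # current intensity [kt]
intensity_m12 = intensity_0h - rng.normal(3, 6, n)     # intensity 12 h ago [kt]
latitude      = rng.uniform(8, 35, n)
u_motion      = rng.normal(-3, 4, n)                   # zonal storm motion [kt]
v_motion      = rng.normal(4, 3, n)                    # meridional storm motion [kt]

X = np.column_stack([np.ones(n), julian_day, intensity_0h,
                     intensity_m12, latitude, u_motion, v_motion])

# Fabricated "truth": 24-h intensity change driven mostly by persistence.
dI24 = (0.6 * (intensity_0h - intensity_m12) - 0.15 * (latitude - 20)
        + rng.normal(0, 8, n))

coef, *_ = np.linalg.lstsq(X, dI24, rcond=None)        # ordinary least squares fit

# Forecast for a new storm fix using the fitted regression.
new_fix = np.array([1.0, 245.0, 80.0, 72.0, 18.5, -4.0, 5.0])
print("predicted 24-h intensity change: %.1f kt" % (new_fix @ coef))
```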
Dynamical guidance:
The first dynamical hurricane track forecast model, the Sanders Barotropic Tropical Cyclone Track Prediction Model (SANBAR), was introduced in 1970 and was used by the National Hurricane Center as part of its operational track guidance through 1989. It was based on a simplified set of atmospheric dynamical equations (the equivalent barotropic formulation) using a deep layer-mean wind. During 1972, the first model to forecast storm surge along the continental shelf of the United States was developed, known as the Special Program to List the Amplitude of Surges from Hurricanes (SPLASH). In 1978, the first full-physics hurricane-tracking model based on atmospheric dynamics – the movable fine-mesh (MFM) model – began operating. The Quasi-Lagrangian Limited Area (QLM) model is a multi-level primitive equation model using a Cartesian grid and the Global Forecast System (GFS) for boundary conditions. In the early 1980s, the assimilation of satellite-derived winds from water vapor, infrared, and visible satellite imagery was found to improve tropical cyclone track forecasting. The Geophysical Fluid Dynamics Laboratory (GFDL) hurricane model was used for research purposes between 1973 and the mid-1980s. Once it was determined that it could show skill in hurricane prediction, a multi-year transition transformed the research model into an operational model which could be used by the National Weather Service for both track and intensity forecasting in 1995. By 1985, the Sea Lake and Overland Surges from Hurricanes (SLOSH) Model had been developed for use in areas of the Gulf of Mexico and near the United States' East coast, which was more robust than the SPLASH model.
The Beta Advection Model (BAM) has been used operationally since 1987 using steering winds averaged through the 850 hPa to 200 hPa layer and the beta effect, which causes a storm to drift northwest due to differences in the Coriolis effect across the tropical cyclone. The larger the cyclone, the larger the impact of the beta effect is likely to be. Starting in 1990, three versions of the BAM were run operationally: the BAM Shallow (BAMS), which uses average winds in an 850 hPa to 700 hPa layer, the BAM Medium (BAMM), which uses average winds in an 850 hPa to 400 hPa layer, and the BAM Deep (BAMD), which is the same as the pre-1990 BAM. For a weak hurricane without well-developed central thunderstorm activity, BAMS works well, because weak storms tend to be steered by low-level winds. As the storm grows stronger and associated thunderstorm activity near its center gets deeper, BAMM and BAMD become more accurate, as these types of storms are steered more by the upper-level winds. If the forecast from the three versions is similar, then the forecaster can conclude that there is minimal uncertainty, but if the versions vary by a great deal, then the forecaster has less confidence in the track predicted due to the greater uncertainty. Large differences between model predictions can also indicate wind shear in the atmosphere, which could affect the intensity forecast as well.
Tested in 1989 and 1990, the Vic Ooyama Barotropic (VICBAR) model used a cubic-B spline representation of variables for the objective analysis of observations and solutions to the shallow-water prediction equations on nested domains, with the boundary conditions defined as the global forecast model. It was implemented operationally as the Limited Area Sine Transform Barotropic (LBAR) model in 1992, using the GFS for boundary conditions.
By 1990, Australia had developed its own storm surge model which was able to be run in a few minutes on a personal computer. The Japan Meteorological Agency (JMA) developed its own Typhoon Model (TYM) in 1994, and in 1998, the agency began using its own dynamic storm surge model.
Dynamical guidance:
The Hurricane Weather Research and Forecasting (HWRF) model is a specialized version of the Weather Research and Forecasting (WRF) model and is used to forecast the track and intensity of tropical cyclones. The model was developed by the National Oceanic and Atmospheric Administration (NOAA), the U.S. Naval Research Laboratory, the University of Rhode Island, and Florida State University. It became operational in 2007. Despite improvements in track forecasting, predictions of the intensity of a tropical cyclone based on numerical weather prediction continue to be a challenge, since statistical methods continue to show higher skill over dynamical guidance. Other than the specialized guidance, global guidance such as the GFS, Unified Model (UKMET), NOGAPS, Japanese Global Spectral Model (GSM), European Centre for Medium-Range Weather Forecasts model, France's Action de Recherche Petite Echelle Grande Echelle (ARPEGE) and Aire Limitée Adaptation Dynamique Initialisation (ALADIN) models, India's National Centre for Medium Range Weather Forecasting (NCMRWF) model, Korea's Global Data Assimilation and Prediction System (GDAPS) and Regional Data Assimilation and Prediction System (RDAPS) models, Hong Kong/China's Operational Regional Spectral Model (ORSM) model, and Canadian Global Environmental Multiscale Model (GEM) model are used for track and intensity purposes.
Dynamical guidance:
Timeliness: Some models do not produce output quickly enough to be used for the forecast cycle immediately after the model starts running (including HWRF, GFDL, and FSSE). Most of the above track models (except CLIPER) require data from global weather models, such as the GFS, which produce output about four hours after the synoptic times of 0000, 0600, 1200, and 1800 Coordinated Universal Time (UTC). For half of their forecasts, the NHC issues forecasts only three hours after that time, so some "early" models – NHC90, BAM, and LBAR – are run using a 12-hour-old forecast for the current time. "Late" models, such as the GFS and GFDL, finish after the advisory has already been issued. These models are interpolated to the current storm position for use in the following forecast cycle – for example, GFDI, the interpolated version of the GFDL model.
Consensus methods:
Using a consensus of forecast models reduces forecast error. Trackwise, the GUNA model is a consensus of the interpolated versions of the GFDL, UKMET with quality control applied to the cyclone tracker, United States Navy NOGAPS, and GFS models. The version of the GUNA corrected for model biases is known as the CGUN. The TCON consensus is the GUNA consensus plus the Hurricane WRF model. The version of the TCON corrected for model biases is known as the TCCN. A lagged average of the last two runs of the members within the TCON plus the ECMWF model is known as the TVCN consensus. The version of the TVCN corrected for model biases is the TVCC consensus. In early 2013, the NAVGEM replaced the NOGAPS as the Navy's primary operational global forecast model. For the 2013 season, and until model verification can occur, it is not being utilized in the development of any consensus forecasts.
Consensus methods:
For intensity, a combination of the LGEM, interpolated GFDL, interpolated HWRF, and DSHIPS models is known as the ICON consensus. The lagged average of the last two runs of models within the ICON consensus is called the IVCN consensus. Across the northwest Pacific and Southern Hemisphere, a ten-member STIPS consensus is formed from the output of the NOGAPS, GFS, the Japanese GSM, the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS), the UKMET, the Japanese TYM, the GFDL with NOGAPS boundary conditions, the Air Force Weather Agency (AFWA) Model, the Australian Tropical Cyclone Local Area Prediction System, and the Weber Barotropic Model.
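At its simplest, a track consensus is an unweighted average of the member positions at each forecast hour, and a bias-corrected variant first subtracts each member's known mean error. A minimal Python sketch with made-up member tracks and biases:

```python
import numpy as np

# Hypothetical 0-72 h track forecasts (lat, lon) from three member models,
# every 24 h.  All numbers are made up for illustration.
members = {
    "model_A": np.array([[25.0, -75.0], [27.1, -77.2], [29.4, -79.0], [32.0, -80.1]]),
    "model_B": np.array([[25.0, -75.0], [26.8, -76.8], [28.9, -78.1], [31.2, -78.9]]),
    "model_C": np.array([[25.0, -75.0], [27.4, -77.6], [30.0, -79.8], [33.1, -81.0]]),
}

# Hypothetical per-model mean position biases (deg lat, deg lon) from past verification.
biases = {"model_A": np.array([0.2, -0.1]),
          "model_B": np.array([-0.1, 0.0]),
          "model_C": np.array([0.3, -0.4])}

def consensus(tracks, bias=None):
    """Unweighted mean of member tracks, optionally removing each member's bias."""
    corrected = [t - (bias[name] if bias else 0.0) for name, t in tracks.items()]
    return np.mean(corrected, axis=0)

print("plain consensus track:\n", consensus(members))
print("bias-corrected consensus track:\n", consensus(members, biases))
```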
Ensemble methods:
No model is ever perfectly accurate because it is impossible to learn exactly everything about the atmosphere in a timely enough manner, and atmospheric measurements that are taken are not completely accurate. The use of the ensemble method of forecasting, whether it be a multi-model ensemble, or numerous ensemble members based on the global model, helps define the uncertainty and further limit errors.
The JMA has produced an 11-member ensemble forecast system for typhoons known as the Typhoon Ensemble Prediction System (TEPS) since February 2008, which is run out to 132 hours into the future. It uses a lower resolution version (with larger grid spacing) of its GSM, with ten perturbed members and one non-perturbed member. The system reduces errors by an average of 40 kilometres (25 mi) five days into the future when compared to its higher resolution GSM.
The Florida State Super Ensemble (FSSE) is produced from a suite of models which then uses statistical regression equations developed over a training phase to reduce their biases, which produces forecasts better than the member models or their mean solution. It uses 11 global models, including five developed at Florida State University, the Unified Model, the GFS, the NOGAPS, the United States Navy NOGAPS, the Australian Bureau of Meteorology Research Centre (BMRC) model, and the Canadian Recherche en Prévision Numérique (RPN) model. It shows significant skill in track, intensity, and rainfall predictions of tropical cyclones.
The Systematic Approach Forecast Aid (SAFA) was developed by the Joint Typhoon Warning Center to create a selective consensus forecast which removed more erroneous forecasts at a 72-hour time frame from consideration, using the United States Navy NOGAPS model, the GFDL, the Japan Meteorological Agency's global and typhoon models, as well as the UKMET. All the models improved during SAFA's five-year history and removing erroneous forecasts proved difficult to do in operations.
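The superensemble step that distinguishes the FSSE from a plain ensemble mean is the training-phase regression: weights are fitted so that a combination of member forecasts best matches past observations, and those weights are then applied to new forecasts. The Python sketch below illustrates this with ordinary least squares on synthetic intensity forecasts; the member biases, noise levels and case counts are all fabricated, and the real FSSE training procedure is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training phase: past forecasts of, say, 48-h intensity from 4 member models
# together with the verifying observation.  All values are synthetic.
n_cases, n_models = 200, 4
truth = rng.uniform(40, 130, n_cases)
model_bias  = np.array([+6.0, -4.0, +1.0, -8.0])
model_noise = np.array([10.0, 7.0, 12.0, 6.0])
forecasts = truth[:, None] + model_bias + rng.normal(0, model_noise, (n_cases, n_models))

# Fit weights (plus an intercept) that best reproduce the observations.
X = np.column_stack([np.ones(n_cases), forecasts])
weights, *_ = np.linalg.lstsq(X, truth, rcond=None)

# Forecast phase: combine a new set of member forecasts with the trained weights.
new_members = np.array([95.0, 83.0, 101.0, 79.0])
superensemble = weights[0] + new_members @ weights[1:]
plain_mean    = new_members.mean()
print(f"plain ensemble mean: {plain_mean:.1f} kt   superensemble: {superensemble:.1f} kt")
```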
Sunspot theory:
A 2010 report correlates low sunspot activity with high hurricane activity. Analysis of historical data showed a 25% chance of at least one hurricane striking the continental United States during a peak sunspot year and a 64% chance during a low sunspot year. In June 2010, hurricane forecasters in the US were not using this information.
Hurricane forecast model accuracy:
The accuracy of hurricane forecast models can vary significantly from storm to storm. For some storms the factors affecting the hurricane track are relatively straightforward, and the models are not only accurate but also produce similar forecasts; for other storms the factors affecting the track are more complex, and different models produce very different forecasts. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Scalar electrodynamics**
Scalar electrodynamics:
In theoretical physics, scalar electrodynamics is a theory of a U(1) gauge field coupled to a charged spin 0 scalar field that takes the place of the Dirac fermions in "ordinary" quantum electrodynamics. The scalar field is charged, and with an appropriate potential, it has the capacity to break the gauge symmetry via the Abelian Higgs mechanism.
Matter content and Lagrangian:
Matter content The model consists of a complex scalar field $\phi(x)$ minimally coupled to a gauge field $A_\mu(x)$. This article discusses the theory on flat spacetime $\mathbb{R}^{1,3}$ (Minkowski space), so these fields can be treated (naïvely) as functions $\phi:\mathbb{R}^{1,3}\to\mathbb{C}$ and $A_\mu:\mathbb{R}^{1,3}\to(\mathbb{R}^{1,3})^*$. The theory can also be defined for curved spacetime, but these definitions must then be replaced with more subtle ones. The gauge field is also known as a principal connection, specifically a principal $U(1)$ connection.
Matter content and Lagrangian:
Lagrangian The dynamics is given by the Lagrangian density
$$\mathcal{L} = (D_\mu\phi)^* D^\mu\phi - V(\phi^*\phi) - \tfrac{1}{4}F_{\mu\nu}F^{\mu\nu},$$
where the covariant kinetic term expands as
$$(D_\mu\phi)^* D^\mu\phi = (\partial_\mu\phi)^*(\partial^\mu\phi) - ie\big((\partial^\mu\phi)^*\phi - \phi^*(\partial^\mu\phi)\big)A_\mu + e^2 A_\mu A^\mu\,\phi^*\phi,$$
and $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is the electromagnetic field strength, or curvature of the connection.
Here $D_\mu\phi = \partial_\mu\phi - ieA_\mu\phi$ is the covariant derivative of the field $\phi$, $e = -|e| < 0$ is the electric charge, and $V(\phi^*\phi)$ is the potential for the complex scalar field.
Gauge-invariance This model is invariant under gauge transformations parameterized by $\lambda(x)$, a real-valued function $\lambda:\mathbb{R}^{1,3}\to\mathbb{R}$, under which the fields transform as
$$\phi'(x) = e^{ie\lambda(x)}\phi(x) \quad\text{and}\quad A_\mu'(x) = A_\mu(x) + \partial_\mu\lambda(x).$$
Differential-geometric view From the geometric viewpoint, $\lambda$ is an infinitesimal change of trivialization, which generates the finite change of trivialization $e^{ie\lambda}:\mathbb{R}^{1,3}\to U(1)$.
In physics, it is customary to work under an implicit choice of trivialization, hence a gauge transformation really can be viewed as a change of trivialization.
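As a quick check, not spelled out in the passage above but following directly from the definitions given, the covariant derivative transforms covariantly under this gauge transformation, which is what makes the kinetic term in the Lagrangian gauge invariant:

```latex
% Standard covariance check, using the conventions above:
% D_\mu\phi = \partial_\mu\phi - ieA_\mu\phi,  \phi' = e^{ie\lambda}\phi,  A'_\mu = A_\mu + \partial_\mu\lambda.
\begin{aligned}
D'_\mu\phi' &= \partial_\mu\!\left(e^{ie\lambda}\phi\right) - ie\left(A_\mu + \partial_\mu\lambda\right)e^{ie\lambda}\phi \\
            &= e^{ie\lambda}\left(\partial_\mu\phi + ie\,\partial_\mu\lambda\,\phi - ieA_\mu\phi - ie\,\partial_\mu\lambda\,\phi\right) \\
            &= e^{ie\lambda}\,D_\mu\phi ,
\end{aligned}
```

so that $(D'_\mu\phi')^* D'^\mu\phi' = (D_\mu\phi)^* D^\mu\phi$.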
Higgs mechanism:
If the potential is such that its minimum occurs at a non-zero value of $|\phi|$, this model exhibits the Higgs mechanism. This can be seen by studying fluctuations about the lowest-energy configuration: one sees that the gauge field behaves as a massive field with its mass proportional to $e$ times the minimum value of $|\phi|$. As shown in 1973 by Nielsen and Olesen, this model, in 2+1 dimensions, admits time-independent finite-energy configurations corresponding to vortices carrying magnetic flux. The magnetic flux carried by these vortices is quantized (in units of $2\pi/e$) and appears as a topological charge associated with the topological current
$$J^\mu_{\text{top}} = \epsilon^{\mu\nu\rho}F_{\nu\rho}.$$
Higgs mechanism:
These vortices are similar to the vortices appearing in type-II superconductors. This analogy was used by Nielsen and Olesen in obtaining their solutions.
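The flux quantization itself follows from a short, standard argument (sketched here for context, not taken from the passage above): a finite-energy vortex must approach a minimum of the potential at spatial infinity with some integer winding number $n$, and its covariant derivative must vanish there, which fixes the asymptotic gauge field and hence the enclosed magnetic flux. Writing $v$ for the (non-zero) minimum value of $|\phi|$:

```latex
% Asymptotics of an n-vortex in the plane (Nielsen–Olesen-type argument)
\phi \;\longrightarrow\; v\, e^{in\theta}, \qquad
D_i\phi \;\longrightarrow\; 0
\;\;\Longrightarrow\;\;
A_i \;\longrightarrow\; \frac{n}{e}\,\partial_i\theta ,
\qquad
\Phi_B \;=\; \oint_{r\to\infty} A_i\, \mathrm{d}x^i \;=\; \frac{2\pi n}{e}.
```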
Example A simple choice of potential for demonstrating the Higgs mechanism is
$$V(|\phi|^2) = \lambda\left(|\phi|^2 - \Phi^2\right)^2.$$
The potential is minimized at $|\phi| = \Phi$, which is chosen to be greater than zero. This produces a circle of minima, with values $\Phi e^{i\theta}$ for $\theta$ a real number.
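As an illustrative check with this potential (a standard computation, not part of the original passage), substituting the minimum value $|\phi| = \Phi$ into the term $e^2 A_\mu A^\mu\,\phi^*\phi$ of the Lagrangian above shows explicitly that the gauge field acquires a mass proportional to $e$ times the minimum value of $|\phi|$:

```latex
% Gauge-field mass read off from the Lagrangian at the minimum |\phi| = \Phi
e^2 A_\mu A^\mu\,\phi^*\phi \,\Big|_{|\phi|=\Phi} \;=\; e^2\Phi^2\,A_\mu A^\mu
\;\equiv\; \tfrac{1}{2}\,m_A^2\,A_\mu A^\mu
\;\;\Longrightarrow\;\;
m_A \;=\; \sqrt{2}\,|e|\,\Phi .
```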
Scalar chromodynamics:
This theory can be generalized from a theory with $U(1)$ gauge symmetry, containing a scalar field $\phi$ valued in $\mathbb{C}$ coupled to a gauge field $A_\mu$, to a theory with gauge symmetry under a gauge group $G$, a Lie group. The scalar field $\phi$ is valued in a representation space of the gauge group $G$, making it a vector; the label of scalar field refers only to the transformation of $\phi$ under the action of the Lorentz group, so it is still referred to as a scalar field. The gauge field is a $\mathfrak{g}$-valued 1-form, where $\mathfrak{g}$ is the Lie algebra of $G$. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Timpani concerto**
Timpani concerto:
A timpani concerto is a piece of music written for timpani with orchestral or band accompaniment. It is usually in three parts or movements.
The first timpani concertos were written in the Baroque and Classical periods of music. Important concertos from these eras include Johann Fischer's Symphony for Eight Timpani and Georg Druschetzky's Concerto for Six Timpani. During the Romantic Period, the timpani concerto was largely ignored. The timpani concerto was revived in the 20th century and the timpani concerto repertoire increased significantly.
Timpani concerto set-ups can range anywhere from a standard set of four drums (32", 29", 26", and 23") to 16 or more drums, some of which are smaller than 20" or larger than 32". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**International Society for NeuroVirology**
International Society for NeuroVirology:
The International Society for NeuroVirology (ISNV) was founded to promote research into disease-causing viruses that infect the human brain and nervous system. The ISNV membership includes scientists and clinicians from around the world who work in the fields of basic, translational, and clinical neurovirology.
History:
The ISNV was conceived during the 1st International Symposium on NeuroVirology, which was held in Philadelphia, Pennsylvania, USA, in 1997. The ISNV was officially established in 1998 as a non-profit organization by Kamel Khalili, Ph.D., with Brian Wigdahl, Ph.D., and Steven Jacobson, Ph.D., as its founding president and vice-president, respectively. The leadership of the Society has included:
Mission:
The ISNV provides an international forum for researchers and clinical scientists working in the field of neurovirology. By promoting collaborative interactions among scientists with common interests, the ISNV supports advances in the field of neurovirology and related disciplines. The goal of the ISNV is to promote basic science as well as the clinical application of knowledge to the prevention and treatment of neuroinflammatory and viral diseases of the nervous system. The mission of the ISNV is accomplished primarily through the organization and sponsorship of regular international meetings, and through the Society's official publication, the Journal of NeuroVirology. Activities that support the mission of the ISNV include: organization and sponsorship of the International Symposium on NeuroVirology; co-sponsorship of small research conferences in neurovirology and related areas; publication of reviews and research articles in the bi-monthly Journal of NeuroVirology; publication and electronic distribution of the Society newsletter, which features current topics in neurovirology and highlights significant scientific achievements of neurovirologists from around the world; sponsorship of the Pioneer in NeuroVirology award, which recognizes researchers who make important contributions to the field of neurovirology; and support of education in neurovirology-related areas by sponsorship of graduate and post-graduate participation in the International Symposium on NeuroVirology.
Membership:
The ISNV currently has approximately 330 members, who collectively represent 15 countries around the world. Approximately one-quarter of its members reside outside the United States. Annual memberships are available to faculty members, research scientists, and clinicians who have interests in neurovirology. Post-doctoral fellows and students are also eligible to join (at a reduced membership rate).
Governance:
The ISNV is managed through its board of directors, which meets bi-annually and in conjunction with the International Symposium on NeuroVirology. The board of directors is responsible for choosing the society's executive officers, which include a president, vice-president, secretary, and treasurer. The president serves as the chief executive officer of the organization.
The current president of the ISNV is Avindra Nath, who took office in 2013. Nath holds the position of intramural clinical director of the National Institute of Neurological Disorders and Stroke (NINDS) at the National Institutes of Health (NIH).
The following committees, which are composed of ISNV members from around the world, carry out specific functions of the ISNV: Fundraising; Publications and Communications; Women in NeuroVirology; Investigators in Training; Meetings; International Interests; and Junior Scientists.
Symposia:
The ISNV regularly sponsors an International Symposium on NeuroVirology and a concurrent Conference on HIV in the Nervous System. These meetings involve more than 350 basic and clinical scientists and trainees working in the areas of neurology, neuropathology, neuropathogenesis, neurobiology, neuroimmunology, neurochemistry, and molecular virology. Symposia have been held at locations around the world since 1997. The overall goal of these meetings is to provide investigators working in the field of neurovirology and related areas with leading-edge information so that important gaps in knowledge can continue to be identified. Armed with this information, attendees of both events work toward formulating questions and experimental directions that will enhance the development of new preventative and therapeutic strategies effective against neurologic diseases associated with prions, HIV, and other viral and non-viral pathogens.
Publications:
The ISNV periodically publishes a newsletter, which is distributed electronically to all members. The goal of the newsletter is to provide a forum through which information about current Society issues as well as "hot" news in the field of neurovirology can be disseminated.
Publications:
The official journal of the ISNV is the Journal of NeuroVirology. The Journal of NeuroVirology (JNV) provides a unique platform for the publication of high-quality basic science and clinical studies on the molecular biology and pathogenesis of viral infections of the nervous system, and for reporting on the development of novel therapeutic strategies using neurotropic viral vectors. The journal also emphasizes publication of papers on non-viral infections that affect the central nervous system. The journal publishes original research articles, reviews, case reports, and coverage of various scientific meetings, as well as supplements and special issues on selected subjects. The journal has been published by Springer since 2011.
Pioneer in NeuroVirology:
The Pioneer in NeuroVirology award is presented by the ISNV in recognition of outstanding individual achievement in the field of neurovirology. Each International Symposium on NeuroVirology honors a worthy recipient of this award. Pioneers in NeuroVirology have been recognized by the ISNV since 1999. Recipients of the Pioneer in NeuroVirology Award include: Richard T. Johnson, M.D. (1999); Volker ter Meulen, M.D., Ph.D. (2000); Neal Nathanson, M.D. (2002); Michael B. A. Oldstone, M.D. (2003); Hilary Koprowski, M.D. (2004); Opendra Narayan, D.V.M., Ph.D. (2006); Donald H. Gilden, M.D. (2007); Diane Griffin, M.D., Ph.D. (2009); Kamel Khalili, Ph.D. (2010); Avindra Nath, M.D. (2012); Brian Wigdahl, Ph.D. (2013); and Joseph Berger, M.D. (2015). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |