**Sphinganine C4-monooxygenase**
Sphinganine C4-monooxygenase:
Sphinganine C4-monooxygenase (EC 1.14.13.169, sphingolipid C4-hydroxylase, SUR2 (gene), SBH1 (gene), SBH2 (gene)) is an enzyme with systematic name sphinganine,NADPH:oxygen oxidoreductase (C4-hydroxylating). This enzyme catalyses the following chemical reaction: sphinganine + NADPH + H+ + O2 ⇌ phytosphingosine + NADP+ + H2O. Sphinganine C4-monooxygenase is involved in the biosynthesis of sphingolipids in yeast and plants.
**Glutathione S-transferase Mu 1**
Glutathione S-transferase Mu 1:
Glutathione S-transferase Mu 1 (gene name GSTM1) is a human glutathione S-transferase.
Function:
Cytosolic and membrane-bound forms of glutathione S-transferase are encoded by two distinct supergene families. At present, eight distinct classes of the soluble cytoplasmic mammalian glutathione S-transferases have been identified: alpha, kappa, mu, omega, pi, sigma, theta and zeta. This gene encodes a cytoplasmic glutathione S-transferase that belongs to the mu class. The mu class of enzymes functions in the detoxification of electrophilic compounds, including carcinogens, therapeutic drugs, environmental toxins, and products of oxidative stress, by conjugation with glutathione.
Function:
The genes encoding the mu class of enzymes are organized in a gene cluster on chromosome 1p13.3, and are known to be highly polymorphic. These genetic variations can change an individual's susceptibility to carcinogens and toxins, as well as affect the toxicity and efficacy of certain drugs. Null mutations of this class mu gene have been linked with an increase in a number of cancers, likely due to an increased susceptibility to environmental toxins and carcinogens. Multiple protein isoforms are encoded by transcript variants of this gene.
**Peel Sound Formation**
Peel Sound Formation:
The Peel Sound Formation is a geologic formation in Nunavut. It preserves fossils dating back to the Silurian period.
**Big Bang**
Big Bang:
The Big Bang is a physical theory that describes how the universe expanded from an initial state of high density and temperature. Various cosmological models of the Big Bang explain the evolution of the observable universe from the earliest known periods through its subsequent large-scale form. These models offer a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the cosmic microwave background (CMB) radiation, and large-scale structure. The overall uniformity and flatness of the universe (the horizon and flatness problems) are explained through cosmic inflation: a sudden and very rapid expansion of space during the earliest moments. However, physics currently lacks a widely accepted theory of quantum gravity that can successfully model the earliest conditions of the Big Bang.
Big Bang:
Crucially, these models are compatible with the Hubble–Lemaître law—the observation that the farther away a galaxy is, the faster it is moving away from Earth. Extrapolating this cosmic expansion backwards in time using the known laws of physics, the models describe an increasingly concentrated cosmos preceded by a singularity in which space and time lose meaning (typically named "the Big Bang singularity"). In 1964 the CMB was discovered, which convinced many cosmologists that the competing steady-state model of cosmic evolution was falsified, since the Big Bang models predict a uniform background radiation caused by high temperatures and densities in the distant past. A wide range of empirical evidence strongly favors the Big Bang event, which is now essentially universally accepted. Detailed measurements of the expansion rate of the universe place the Big Bang singularity at an estimated 13.787±0.020 billion years ago, which is considered the age of the universe. There remain aspects of the observed universe that are not yet adequately explained by the Big Bang models. After its initial expansion, the universe cooled sufficiently to allow the formation of subatomic particles, and later atoms. The slight excess of matter over antimatter that allowed this to occur is an unexplained effect known as baryon asymmetry. These primordial elements—mostly hydrogen, with some helium and lithium—later coalesced through gravity, forming early stars and galaxies. Astronomers observe the gravitational effects of an unknown dark matter surrounding galaxies. Most of the gravitational potential in the universe seems to be in this form, and the Big Bang models and various observations indicate that this excess gravitational potential is not created by baryonic matter, such as normal atoms. Measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an observation attributed to an unexplained phenomenon known as dark energy.
Features of the models:
The Big Bang models offer a comprehensive explanation for a broad range of observed phenomena, including the abundances of the light elements, the CMB, large-scale structure, and Hubble's law. The models depend on two major assumptions: the universality of physical laws and the cosmological principle. The universality of physical laws is one of the underlying principles of the theory of relativity. The cosmological principle states that on large scales the universe is homogeneous and isotropic—appearing the same in all directions regardless of location. These ideas were initially taken as postulates, but later efforts were made to test each of them. For example, the first assumption has been tested by observations showing that the largest possible deviation of the fine-structure constant over much of the age of the universe is of order 10⁻⁵. Also, general relativity has passed stringent tests on the scale of the Solar System and binary stars. The large-scale universe appears isotropic as viewed from Earth. If it is indeed isotropic, the cosmological principle can be derived from the simpler Copernican principle, which states that there is no preferred (or special) observer or vantage point. To this end, the cosmological principle has been confirmed to a level of 10⁻⁵ via observations of the temperature of the CMB. At the scale of the CMB horizon, the universe has been measured to be homogeneous with an upper bound on the order of 10% inhomogeneity, as of 1995.
Features of the models:
Horizons An important feature of the Big Bang spacetime is the presence of particle horizons. Since the universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not yet had time to reach us. This places a limit or a past horizon on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines a future horizon, which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the FLRW model that describes our universe. Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well.
Features of the models:
Thermalization Some processes in the early universe occurred too slowly, compared to the expansion rate of the universe, to reach approximate thermodynamic equilibrium. Others were fast enough to reach thermalization. The parameter usually used to find out whether a process in the very early universe has reached thermal equilibrium is the ratio between the rate of the process (usually rate of collisions between particles) and the Hubble parameter. The larger the ratio, the more time particles had to thermalize before they were too far away from each other.
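As an illustration of this criterion, the sketch below (in Python, with invented numbers rather than figures from the article) compares an interaction rate Γ with the Hubble rate H: a species stays thermalized only while Γ/H is large.

```python
# Illustrative sketch only: a process can keep a species in thermal equilibrium
# while its interaction rate Gamma exceeds the expansion (Hubble) rate H.
def stays_thermalized(gamma_per_s: float, hubble_per_s: float) -> bool:
    """Return True if the ratio Gamma / H exceeds 1, the usual rough criterion."""
    return gamma_per_s / hubble_per_s > 1.0

# Made-up example values: interactions at 1e30 per second against H ~ 1e21 per
# second give a ratio of 1e9, so the species has ample time to thermalize.
print(stays_thermalized(1e30, 1e21))  # True
print(stays_thermalized(1e18, 1e21))  # False: expansion outpaces the interactions
```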
Timeline:
According to the Big Bang models, the universe at the beginning was very hot and very compact, and since then it has been expanding and cooling down.
Timeline:
Singularity Extrapolation of the expansion of the universe backwards in time using general relativity yields an infinite density and temperature at a finite time in the past. This irregular behavior, known as the gravitational singularity, indicates that general relativity is not an adequate description of the laws of physics in this regime. Models based on general relativity alone cannot fully extrapolate toward the singularity. This primordial singularity is itself sometimes called "the Big Bang", but the term can also refer to a more generic early hot, dense phase of the universe. In either case, "the Big Bang" as an event is also colloquially referred to as the "birth" of our universe since it represents the point in history where the universe can be verified to have entered into a regime where the laws of physics as we understand them (specifically general relativity and the Standard Model of particle physics) work. Based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background, the time that has passed since that event—known as the "age of the universe"—is 13.8 billion years. Despite being extremely dense at this time—far denser than is usually required to form a black hole—the universe did not re-collapse into a singularity. Commonly used calculations and limits for explaining gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not apply to rapidly expanding space such as the Big Bang. Since the early universe did not immediately collapse into a multitude of black holes, matter at that time must have been very evenly distributed with a negligible density gradient.
Timeline:
Inflation and baryogenesis The earliest phases of the Big Bang are subject to much speculation, since astronomical data about them are not available. In the most common models the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures, and was very rapidly expanding and cooling. The period up to 10⁻⁴³ seconds into the expansion, the Planck epoch, was a phase in which the four fundamental forces (the electromagnetic force, the strong nuclear force, the weak nuclear force, and the gravitational force) were unified as one. In this stage, the characteristic scale length of the universe was the Planck length, 1.6×10⁻³⁵ m, and the temperature was approximately 10³² degrees Celsius. Even the very concept of a particle breaks down in these conditions. A proper understanding of this period awaits the development of a theory of quantum gravity. The Planck epoch was succeeded by the grand unification epoch beginning at 10⁻⁴³ seconds, where gravitation separated from the other forces as the universe's temperature fell. At approximately 10⁻³⁷ seconds into the expansion, a phase transition caused cosmic inflation, during which the universe grew exponentially, unconstrained by the light-speed invariance, and temperatures dropped by a factor of 100,000. This concept is motivated by the flatness problem, where the density of matter and energy is very close to the critical density needed to produce a flat universe. That is, the shape of the universe has no overall geometric curvature due to gravitational influence. Microscopic quantum fluctuations that occurred because of Heisenberg's uncertainty principle were "frozen in" by inflation, becoming amplified into the seeds that would later form the large-scale structure of the universe. At a time around 10⁻³⁶ seconds, the electroweak epoch begins when the strong nuclear force separates from the other forces, with only the electromagnetic force and weak nuclear force remaining unified. Inflation stopped locally at around the 10⁻³³ to 10⁻³² seconds mark, with the observable universe's volume having increased by a factor of at least 10⁷⁸. Reheating occurred until the universe obtained the temperatures required for the production of a quark–gluon plasma as well as all other elementary particles. Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions. At some point, an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons—of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present universe.
Timeline:
Cooling The universe continued to decrease in density and fall in temperature, hence the typical energy of each particle was decreasing. Symmetry-breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form, with the electromagnetic force and weak nuclear force separating at about 10⁻¹² seconds. After about 10⁻¹¹ seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle accelerators. At about 10⁻⁶ seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was no longer high enough to create either new proton–antiproton or neutron–antineutron pairs. A mass annihilation immediately followed, leaving just one in 10⁸ of the original matter particles and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the universe was dominated by photons (with a minor contribution from neutrinos).
Timeline:
A few minutes into the expansion, when the temperature was about a billion kelvin and the density of matter in the universe was comparable to the current density of Earth's atmosphere, neutrons combined with protons to form the universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis (BBN). Most protons remained uncombined as hydrogen nuclei. As the universe cooled, the rest energy density of matter came to gravitationally dominate that of the photon radiation. After about 379,000 years, the electrons and nuclei combined into atoms (mostly hydrogen), which were able to emit radiation. This relic radiation, which continued through space largely unimpeded, is known as the cosmic microwave background.
Timeline:
Structure formation Over a long period of time, the slightly denser regions of the uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the universe. The four possible types of matter are known as cold dark matter (CDM), warm dark matter, hot dark matter, and baryonic matter. The best measurements available, from the Wilkinson Microwave Anisotropy Probe (WMAP), show that the data is well-fit by a Lambda-CDM model in which dark matter is assumed to be cold. (Warm dark matter is ruled out by early reionization.) This CDM is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%. In an "extended model" which includes hot dark matter in the form of neutrinos, the "physical baryon density" Ωbh² is estimated at 0.023. (This is different from the "baryon density" Ωb expressed as a fraction of the total matter/energy density, which is about 0.046.) The corresponding cold dark matter density Ωch² is about 0.11, and the corresponding neutrino density Ωνh² is estimated to be less than 0.0062.
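The "physical" densities quoted as Ωh² can be turned into plain fractions of the critical density once a value of the reduced Hubble parameter h is chosen. The short Python sketch below assumes h ≈ 0.70 (a WMAP-era value, not stated explicitly at this point in the text) to show how the two sets of numbers above relate.

```python
# Convert "physical" densities (Omega * h^2) into fractions of the critical
# density, assuming a reduced Hubble parameter h of about 0.70 (WMAP era).
h = 0.70
omega_b_h2 = 0.023   # physical baryon density quoted above
omega_c_h2 = 0.11    # physical cold dark matter density quoted above

omega_b = omega_b_h2 / h**2   # ~0.047, matching the ~4.6% baryon figure
omega_c = omega_c_h2 / h**2   # ~0.22, matching the ~23% cold dark matter figure
print(round(omega_b, 3), round(omega_c, 2))
```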
Timeline:
Cosmic acceleration Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a mysterious form of energy known as dark energy, which appears to homogeneously permeate all of space. Observations suggest that 73% of the total energy density of the present day universe is in this form. When the universe was very young it was likely infused with dark energy, but with everything closer together gravity predominated, braking the expansion. Eventually, after billions of years of expansion, the declining density of matter relative to the density of dark energy allowed the expansion of the universe to begin to accelerate. Dark energy in its simplest formulation is modeled by a cosmological constant term in the Einstein field equations of general relativity, but its composition and mechanism are unknown. More generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both through observation and theory. All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the lambda-CDM model of cosmology, which uses the independent frameworks of quantum mechanics and general relativity. There are no easily testable models that would describe the situation prior to approximately 10⁻¹⁵ seconds. Understanding this earliest of eras in the history of the universe is currently one of the greatest unsolved problems in physics.
Concept history:
Etymology English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a talk for a March 1949 BBC Radio broadcast, saying: "These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past." However, it did not catch on until the 1970s. It is popularly reported that Hoyle, who favored an alternative "steady-state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. Helge Kragh writes that the evidence for the claim that it was meant as a pejorative is "unconvincing", and mentions a number of indications that it was not a pejorative. The term itself is something of a misnomer because it suggests an explosion, and an explosion implies expansion from a central point out into surrounding space. Rather than expanding into space, the Big Bang was the expansion or stretching of space itself, everywhere simultaneously (not from a single point), causing the universe to cool down and its density to drop. Another issue pointed out by Santhosh Mathew is that "bang" implies sound, which would require a vibrating particle and a medium through which it travels. Since this is the beginning of anything we can imagine, there is no basis for any sound, and thus the Big Bang was likely silent. An attempt to find a more suitable alternative was not successful.
Concept history:
Development The Big Bang models developed from observations of the structure of the universe and from theoretical considerations. In 1912, Vesto Slipher measured the first Doppler shift of a "spiral nebula" (the obsolete term for spiral galaxies), and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside our Milky Way. Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from the Einstein field equations, showing that the universe might be expanding in contrast to the static universe model advocated by Albert Einstein at that time. In 1924, American astronomer Edwin Hubble's measurement of the great distance to the nearest spiral nebulae showed that these systems were indeed other galaxies. Starting that same year, Hubble painstakingly developed a series of distance indicators, the forerunner of the cosmic distance ladder, using the 100-inch (2.5 m) Hooker telescope at Mount Wilson Observatory. This allowed him to estimate distances to galaxies whose redshifts had already been measured, mostly by Slipher. In 1929, Hubble discovered a correlation between distance and recessional velocity—now known as Hubble's law. Independently deriving Friedmann's equations in 1927, Georges Lemaître, a Belgian physicist and Roman Catholic priest, proposed that the recession of the nebulae was due to the expansion of the universe. He inferred the relation that Hubble would later observe, given the cosmological principle. In 1931, Lemaître went further and suggested that the evident expansion of the universe, if projected back in time, meant that the further in the past the smaller the universe was, until at some finite time in the past all the mass of the universe was concentrated into a single point, a "primeval atom" where and when the fabric of time and space came into existence. In the 1920s and 1930s, almost every major cosmologist preferred an eternal steady-state universe, and several complained that the beginning of time implied by the Big Bang imported religious concepts into physics; this objection was later repeated by supporters of the steady-state theory. This perception was enhanced by the fact that the originator of the Big Bang concept, Lemaître, was a Roman Catholic priest. Arthur Eddington agreed with Aristotle that the universe did not have a beginning in time, viz., that matter is eternal. A beginning in time was "repugnant" to him. Lemaître, however, disagreed: "If the world has begun with a single quantum, the notions of space and time would altogether fail to have any meaning at the beginning; they would only begin to have a sensible meaning when the original quantum had been divided into a sufficient number of quanta. If this suggestion is correct, the beginning of the world happened a little before the beginning of space and time."
Concept history:
During the 1930s, other ideas were proposed as non-standard cosmologies to explain Hubble's observations, including the Milne model, the oscillatory universe (originally suggested by Friedmann, but advocated by Albert Einstein and Richard C. Tolman) and Fritz Zwicky's tired light hypothesis. After World War II, two distinct possibilities emerged. One was Fred Hoyle's steady-state model, whereby new matter would be created as the universe seemed to expand. In this model the universe is roughly the same at any point in time. The other was Lemaître's Big Bang theory, advocated and developed by George Gamow, who introduced BBN and whose associates, Ralph Alpher and Robert Herman, predicted the CMB. Ironically, it was Hoyle who coined the phrase that came to be applied to Lemaître's theory, referring to it as "this big bang idea" during a BBC Radio broadcast in March 1949. For a while, support was split between these two theories. Eventually, the observational evidence, most notably from radio source counts, began to favor Big Bang over steady state. The discovery and confirmation of the CMB in 1964 secured the Big Bang as the best theory of the origin and evolution of the universe. In 1968 and 1970, Roger Penrose, Stephen Hawking, and George F. R. Ellis published papers where they showed that mathematical singularities were an inevitable initial condition of relativistic models of the Big Bang. Then, from the 1970s to the 1990s, cosmologists worked on characterizing the features of the Big Bang universe and resolving outstanding problems. In 1981, Alan Guth made a breakthrough in theoretical work on resolving certain outstanding theoretical problems in the Big Bang models with the introduction of an epoch of rapid expansion in the early universe he called "inflation". Meanwhile, during these decades, two questions in observational cosmology that generated much discussion and disagreement were over the precise values of the Hubble constant and the matter density of the universe (before the discovery of dark energy, thought to be the key predictor for the eventual fate of the universe). In the mid-1990s, observations of certain globular clusters appeared to indicate that they were about 15 billion years old, which conflicted with most then-current estimates of the age of the universe (and indeed with the age measured today). This issue was later resolved when new computer simulations, which included the effects of mass loss due to stellar winds, indicated a much younger age for globular clusters. Significant progress in Big Bang cosmology has been made since the late 1990s as a result of advances in telescope technology as well as the analysis of data from satellites such as the Cosmic Background Explorer (COBE), the Hubble Space Telescope and WMAP. Cosmologists now have fairly precise and accurate measurements of many of the parameters of the Big Bang model, and have made the unexpected discovery that the expansion of the universe appears to be accelerating.
Observational evidence:
The earliest and most direct observational evidence of the validity of the theory is the expansion of the universe according to Hubble's law (as indicated by the redshifts of galaxies), the discovery and measurement of the cosmic microwave background, and the relative abundances of light elements produced by Big Bang nucleosynthesis (BBN). More recent evidence includes observations of galaxy formation and evolution, and the distribution of large-scale cosmic structures. These are sometimes called the "four pillars" of the Big Bang models. Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in terrestrial laboratory experiments or incorporated into the Standard Model of particle physics. Of these features, dark matter is currently the subject of most active laboratory investigations. Remaining issues include the cuspy halo problem and the dwarf galaxy problem of cold dark matter. Dark energy is also an area of intense interest for scientists, but it is not clear whether direct detection of dark energy will be possible. Inflation and baryogenesis remain more speculative features of current Big Bang models. Viable, quantitative explanations for such phenomena are still being sought. These are currently unsolved problems in physics.
Observational evidence:
Hubble's law and the expansion of space Observations of distant galaxies and quasars show that these objects are redshifted: the light emitted from them has been shifted to longer wavelengths. This can be seen by taking a frequency spectrum of an object and matching the spectroscopic pattern of emission or absorption lines corresponding to atoms of the chemical elements interacting with the light. These redshifts are uniformly isotropic, distributed evenly among the observed objects in all directions. If the redshift is interpreted as a Doppler shift, the recessional velocity of the object can be calculated. For some galaxies, it is possible to estimate distances via the cosmic distance ladder. When the recessional velocities are plotted against these distances, a linear relationship known as Hubble's law is observed: v = H0D, where v is the recessional velocity of the galaxy or other distant object, D is the proper distance to the object, and H0 is the Hubble constant, measured to be 70.4 (+1.3/−1.4) km/s/Mpc by WMAP. Hubble's law implies that the universe is uniformly expanding everywhere. This cosmic expansion was predicted from general relativity by Friedmann in 1922 and Lemaître in 1927, well before Hubble made his 1929 analysis and observations, and it remains the cornerstone of the Big Bang model as developed by Friedmann, Lemaître, Robertson, and Walker.
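As a quick numerical illustration of the relation v = H0D, the Python sketch below plugs in the WMAP value of H0 quoted above; the 100 Mpc distance is an arbitrary example, not a figure from the article.

```python
# Worked example of Hubble's law v = H0 * D, using the WMAP value quoted above.
H0 = 70.4            # Hubble constant in km/s per megaparsec
D = 100.0            # proper distance in Mpc (an arbitrary illustrative value)
v = H0 * D           # recessional velocity in km/s
print(v)             # 7040 km/s, roughly 2.3% of the speed of light
```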
Observational evidence:
The theory requires the relation v = HD to hold at all times, where D is the proper distance, v is the recessional velocity, and v, H, and D vary as the universe expands (hence we write H0 to denote the present-day Hubble "constant"). For distances much smaller than the size of the observable universe, the Hubble redshift can be thought of as the Doppler shift corresponding to the recession velocity v. However, the redshift is not a true Doppler shift, but rather the result of the expansion of the universe between the time the light was emitted and the time that it was detected. An unexplained discrepancy with the determination of the Hubble constant is known as the Hubble tension. Techniques based on observation of the CMB suggest a lower value of this constant compared to the quantity derived from measurements based on the cosmic distance ladder.
Observational evidence:
Cosmic microwave background radiation In 1964, Arno Penzias and Robert Wilson serendipitously discovered the cosmic background radiation, an omnidirectional signal in the microwave band. Their discovery provided substantial confirmation of the big-bang predictions by Alpher, Herman and Gamow around 1950. Through the 1970s, the radiation was found to be approximately consistent with a blackbody spectrum in all directions; this spectrum has been redshifted by the expansion of the universe, and today corresponds to approximately 2.725 K. This tipped the balance of evidence in favor of the Big Bang model, and Penzias and Wilson were awarded the 1978 Nobel Prize in Physics.
Observational evidence:
The surface of last scattering corresponding to emission of the CMB occurs shortly after recombination, the epoch when neutral hydrogen becomes stable. Prior to this, the universe consisted of a hot, dense photon-baryon plasma in which photons were quickly scattered by free charged particles. The emission peaks at around 372±14 kyr; at that point the mean free path for a photon becomes long enough to reach the present day and the universe becomes transparent.
Observational evidence:
In 1989, NASA launched COBE, which made two major advances: in 1990, high-precision spectrum measurements showed that the CMB frequency spectrum is an almost perfect blackbody with no deviations at a level of 1 part in 10⁴, and measured a residual temperature of 2.726 K (more recent measurements have revised this figure down slightly to 2.7255 K); then in 1992, further COBE measurements discovered tiny fluctuations (anisotropies) in the CMB temperature across the sky, at a level of about one part in 10⁵. John C. Mather and George Smoot were awarded the 2006 Nobel Prize in Physics for their leadership in these results.
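Wien's displacement law shows why a blackbody at this temperature is observed in the microwave band; the Python sketch below is an illustrative calculation using the standard Wien constant, which is not a figure from the article.

```python
# Peak wavelength of a ~2.725 K blackbody, via Wien's displacement law.
WIEN_B = 2.898e-3     # m*K, Wien displacement constant (standard physical constant)
T_CMB = 2.725         # K, present-day CMB temperature quoted above
peak_wavelength = WIEN_B / T_CMB
print(peak_wavelength * 1e3)   # ~1.06 mm, i.e. squarely in the microwave band
```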
Observational evidence:
During the following decade, CMB anisotropies were further investigated by a large number of ground-based and balloon experiments. In 2000–2001, several experiments, most notably BOOMERanG, found the shape of the universe to be spatially almost flat by measuring the typical angular size (the size on the sky) of the anisotropies. In early 2003, the first results of the Wilkinson Microwave Anisotropy Probe were released, yielding what were at the time the most accurate values for some of the cosmological parameters. The results disproved several specific cosmic inflation models, but are consistent with the inflation theory in general. The Planck space probe was launched in May 2009. Other ground and balloon-based cosmic microwave background experiments are ongoing.
Observational evidence:
Abundance of primordial elements Using Big Bang models, it is possible to calculate the expected concentration of the isotopes helium-4 (⁴He), helium-3 (³He), deuterium (²H), and lithium-7 (⁷Li) in the universe as ratios to the amount of ordinary hydrogen. The relative abundances depend on a single parameter, the ratio of photons to baryons. This value can be calculated independently from the detailed structure of CMB fluctuations. The ratios predicted (by mass, not by number; a short conversion is sketched after this subsection) are about 0.25 for ⁴He:H, about 10⁻³ for ²H:H, about 10⁻⁴ for ³He:H, and about 10⁻⁹ for ⁷Li:H. The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon ratio. The agreement is excellent for deuterium, close but formally discrepant for ⁴He, and off by a factor of two for ⁷Li (this anomaly is known as the cosmological lithium problem); in the latter two cases, there are substantial systematic uncertainties. Nonetheless, the general consistency with abundances predicted by BBN is strong evidence for the Big Bang, as the theory is the only known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to produce much more or less than 20–30% helium. Indeed, there is no obvious reason outside of the Big Bang that, for example, the young universe before star formation, as determined by studying matter supposedly free of stellar nucleosynthesis products, should have more helium than deuterium or more deuterium than ³He, and in constant ratios, too.
Observational evidence:
Galactic evolution and distribution Detailed observations of the morphology and distribution of galaxies and quasars are in agreement with the current state of the Big Bang models. A combination of observations and theory suggest that the first quasars and galaxies formed within a billion years after the Big Bang, and since then, larger structures have been forming, such as galaxy clusters and superclusters. Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions and larger structures agree well with Big Bang simulations of the formation of structure in the universe, and are helping to complete details of the theory.
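The helium figure quoted above is a ratio by mass; since a ⁴He nucleus is roughly four times as massive as a hydrogen nucleus, the ratio by number of nuclei is correspondingly smaller. A minimal Python sketch of that conversion:

```python
# The ~0.25 figure quoted above is the 4He:H ratio by mass. A 4He nucleus is
# about four times the mass of a hydrogen nucleus, so the ratio by number is smaller.
he_to_h_by_mass = 0.25
he_to_h_by_number = he_to_h_by_mass / 4.0
print(he_to_h_by_number)   # ~0.06, i.e. roughly one helium nucleus per 16 hydrogen nuclei
```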
Observational evidence:
Primordial gas clouds In 2011, astronomers found what they believe to be pristine clouds of primordial gas by analyzing absorption lines in the spectra of distant quasars. Before this discovery, all other astronomical objects had been observed to contain heavy elements that are formed in stars. Although the analysis was sensitive to carbon, oxygen, and silicon, these three elements were not detected in the two clouds. Since the clouds of gas have no detectable levels of heavy elements, they likely formed in the first few minutes after the Big Bang, during BBN.
Observational evidence:
Other lines of evidence The age of the universe as estimated from the Hubble expansion and the CMB is now in good agreement with other estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars. It is also in good agreement with age estimates based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background. The agreement of independent measurements of this age supports the Lambda-CDM (ΛCDM) model, since the model is used to relate some of the measurements to an age estimate, and all estimates turn out to agree. Still, some observations of objects from the relatively early universe (in particular quasar APM 08279+5255) raise concern as to whether these objects had enough time to form so early in the ΛCDM model. The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of very low temperature absorption lines in gas clouds at high redshift. This prediction also implies that the amplitude of the Sunyaev–Zel'dovich effect in clusters of galaxies does not depend directly on redshift. Observations have found this to be roughly true, but this effect depends on cluster properties that do change with cosmic time, making precise measurements difficult.
Observational evidence:
Future observations Future gravitational-wave observatories might be able to detect primordial gravitational waves, relics of the early universe dating from less than a second after the Big Bang.
Problems and related issues in physics:
As with any theory, a number of mysteries and problems have arisen as a result of the development of the Big Bang models. Some of these mysteries and problems have been resolved while others are still outstanding. Proposed solutions to some of the problems in the Big Bang model have revealed new mysteries of their own. For example, the horizon problem, the magnetic monopole problem, and the flatness problem are most commonly resolved with inflation theory, but the details of the inflationary universe are still left unresolved and many, including some founders of the theory, say it has been disproven. What follows is a list of the mysterious aspects of the Big Bang concept still under intense investigation by cosmologists and astrophysicists.
Problems and related issues in physics:
Baryon asymmetry It is not yet understood why the universe has more matter than antimatter. It is generally assumed that when the universe was young and very hot it was in statistical equilibrium and contained equal numbers of baryons and antibaryons. However, observations suggest that the universe, including its most distant parts, is made almost entirely of normal matter, rather than antimatter. A process called baryogenesis was hypothesized to account for the asymmetry. For baryogenesis to occur, the Sakharov conditions must be satisfied. These require that baryon number not be conserved, that C-symmetry and CP-symmetry be violated, and that the universe depart from thermodynamic equilibrium. All these conditions occur in the Standard Model, but the effects are not strong enough to explain the present baryon asymmetry.
Problems and related issues in physics:
Dark energy Measurements of the redshift–magnitude relation for type Ia supernovae indicate that the expansion of the universe has been accelerating since the universe was about half its present age. To explain this acceleration, general relativity requires that much of the energy in the universe consists of a component with large negative pressure, dubbed "dark energy". Dark energy, though speculative, solves numerous problems. Measurements of the cosmic microwave background indicate that the universe is very nearly spatially flat, and therefore according to general relativity the universe must have almost exactly the critical density of mass/energy. But the mass density of the universe can be measured from its gravitational clustering, and is found to have only about 30% of the critical density. Since theory suggests that dark energy does not cluster in the usual way, it is the best explanation for the "missing" energy density. Dark energy also helps to explain two geometrical measures of the overall curvature of the universe, one using the frequency of gravitational lenses, and the other using the characteristic pattern of the large-scale structure as a cosmic ruler. Negative pressure is believed to be a property of vacuum energy, but the exact nature and existence of dark energy remains one of the great mysteries of the Big Bang. Results from the WMAP team in 2008 are in accordance with a universe that consists of 73% dark energy, 23% dark matter, 4.6% regular matter and less than 1% neutrinos. According to theory, the energy density in matter decreases with the expansion of the universe, but the dark energy density remains constant (or nearly so) as the universe expands. Therefore, matter made up a larger fraction of the total energy of the universe in the past than it does today, but its fractional contribution will fall in the far future as dark energy becomes even more dominant. The dark energy component of the universe has been explained by theorists using a variety of competing theories including Einstein's cosmological constant but also extending to more exotic forms of quintessence or other modified gravity schemes. The cosmological constant problem, sometimes called the "most embarrassing problem in physics", results from the apparent discrepancy between the measured energy density of dark energy and the value naively predicted from Planck units.
Problems and related issues in physics:
Dark matter During the 1970s and the 1980s, various observations showed that there is not sufficient visible matter in the universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the universe is dark matter that does not emit light or interact with normal baryonic matter. In addition, the assumption that the universe is mostly normal matter led to predictions that were strongly inconsistent with observations. In particular, the universe today is far more lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter has always been controversial, it is inferred by various observations: the anisotropies in the CMB, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational lensing studies, and X-ray measurements of galaxy clusters. Indirect evidence for dark matter comes from its gravitational influence on other matter, as no dark matter particles have been observed in laboratories. Many particle physics candidates for dark matter have been proposed, and several projects to detect them directly are underway. Additionally, there are outstanding problems associated with the currently favored cold dark matter model, which include the dwarf galaxy problem and the cuspy halo problem. Alternative theories have been proposed that do not require a large amount of undetected matter, but instead modify the laws of gravity established by Newton and Einstein; yet no alternative theory has been as successful as the cold dark matter proposal in explaining all extant observations.
Problems and related issues in physics:
Horizon problem The horizon problem results from the premise that information cannot travel faster than light. In a universe of finite age this sets a limit—the particle horizon—on the separation of any two regions of space that are in causal contact. The observed isotropy of the CMB is problematic in this regard: if the universe had been dominated by radiation or matter at all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the sky. There would then be no mechanism to cause wider regions to have the same temperature. A resolution to this apparent inconsistency is offered by inflation theory, in which a homogeneous and isotropic scalar energy field dominates the universe at some very early period (before baryogenesis). During inflation, the universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously assumed, so that regions presently on opposite sides of the observable universe are well inside each other's particle horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact before the beginning of inflation. Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum thermal fluctuations, which would be magnified to a cosmic scale. These fluctuations served as the seeds for all the current structures in the universe. Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been accurately confirmed by measurements of the CMB. If inflation occurred, exponential expansion would push large regions of space well beyond our observable horizon. A related issue to the classic horizon problem arises because in most standard cosmological inflation models, inflation ceases well before electroweak symmetry breaking occurs, so inflation should not be able to prevent large-scale discontinuities in the electroweak vacuum, since distant parts of the observable universe were causally separate when the electroweak epoch ended.
Problems and related issues in physics:
Magnetic monopoles The magnetic monopole objection was raised in the late 1970s. Grand unified theories (GUTs) predicted topological defects in space that would manifest as magnetic monopoles. These objects would be produced efficiently in the hot early universe, resulting in a density much higher than is consistent with observations, given that no monopoles have been found. This problem is resolved by cosmic inflation, which removes all point defects from the observable universe, in the same way that it drives the geometry to flatness.
Problems and related issues in physics:
Flatness problem The flatness problem (also known as the oldness problem) is an observational problem associated with the FLRW model. The universe may have positive, negative, or zero spatial curvature depending on its total energy density. Curvature is negative if its density is less than the critical density; positive if greater; and zero at the critical density, in which case space is said to be flat. Observations indicate the universe is consistent with being flat. The problem is that any small departure from the critical density grows with time, and yet the universe today remains very close to flat. Given that a natural timescale for departure from flatness might be the Planck time, 10⁻⁴³ seconds, the fact that the universe has reached neither a heat death nor a Big Crunch after billions of years requires an explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the density of the universe must have been within one part in 10¹⁴ of its critical value, or it would not exist as it does today.
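The critical density that separates the three curvature cases follows from the Friedmann equation as ρc = 3H²/8πG. The Python sketch below evaluates it for the H0 value quoted earlier in this article; the other constants are standard, and the closing comparison to a few hydrogen atoms per cubic metre is only an order-of-magnitude remark.

```python
import math

# Critical density rho_c = 3 H^2 / (8 * pi * G), using the H0 quoted earlier.
G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22           # metres in one megaparsec
H0 = 70.4 * 1000 / MPC_IN_M    # 70.4 km/s/Mpc converted to s^-1

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(rho_c)                   # ~9e-27 kg/m^3, a few hydrogen atoms per cubic metre
```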
Misconceptions:
One of the common misconceptions about the Big Bang model is that it fully explains the origin of the universe. However, the Big Bang model does not describe how energy, time, and space were caused, but rather it describes the emergence of the present universe from an ultra-dense and high-temperature initial state. It is misleading to visualize the Big Bang by comparing its size to everyday objects. When the size of the universe at the Big Bang is described, it refers to the size of the observable universe, and not the entire universe. Hubble's law predicts that galaxies that are beyond the Hubble distance recede faster than the speed of light. This does not violate special relativity, which applies only to motion through space: Hubble's law describes velocity that results from the expansion of space, rather than motion through space. Astronomers often refer to the cosmological redshift as a Doppler shift, which can lead to a misconception. Although similar, the cosmological redshift is not identical to the classically derived Doppler redshift because most elementary derivations of the Doppler redshift do not accommodate the expansion of space. Accurate derivation of the cosmological redshift requires the use of general relativity, and while a treatment using simpler Doppler effect arguments gives nearly identical results for nearby galaxies, interpreting the redshift of more distant galaxies as due to the simplest Doppler redshift treatments can cause confusion.
Implications:
Given current understanding, scientific extrapolations about the future of the universe are only possible for finite durations, albeit for much longer periods than the current age of the universe. Anything beyond that becomes increasingly speculative. Likewise, at present, a proper understanding of the origin of the universe can only be subject to conjecture.
Implications:
Pre–Big Bang cosmology The Big Bang explains the evolution of the universe from a starting density and temperature that is well beyond humanity's capability to replicate, so extrapolations to the most extreme conditions and earliest times are necessarily more speculative. Lemaître called this initial state the "primeval atom" while Gamow called the material "ylem". How the initial state of the universe originated is still an open question, but the Big Bang model does constrain some of its characteristics. For example, specific laws of nature most likely came into existence in a random way, but as inflation models show, some combinations of these are far more probable. A flat universe implies a balance between gravitational potential energy and other energy forms, requiring no additional energy to be created. The Big Bang theory, built upon the equations of classical general relativity, indicates a singularity at the origin of cosmic time, and such an infinite energy density may be a physical impossibility. However, the physical theories of general relativity and quantum mechanics as currently realized are not applicable before the Planck epoch, and correcting this will require the development of a correct treatment of quantum gravity. Certain quantum gravity treatments, such as the Wheeler–DeWitt equation, imply that time itself could be an emergent property. As such, physics may conclude that time did not exist before the Big Bang. While it is not known what could have preceded the hot dense state of the early universe or how and why it originated, or even whether such questions are sensible, speculation abounds on the subject of "cosmogony".
Implications:
Some speculative proposals in this regard, each of which entails untested hypotheses, are: The simplest models, in which the Big Bang was caused by quantum fluctuations. That scenario had very little chance of happening, but, according to the totalitarian principle, even the most improbable event will eventually happen. From our perspective it took place instantly, due to the absence of perceived time before the Big Bang.
Implications:
Models in which the whole of spacetime is finite, including the Hartle–Hawking no-boundary condition. For these cases, the Big Bang does represent the limit of time but without a singularity. In such a case, the universe is self-sufficient.
Implications:
Brane cosmology models, in which inflation is due to the movement of branes in string theory; the pre-Big Bang model; the ekpyrotic model, in which the Big Bang is the result of a collision between branes; and the cyclic model, a variant of the ekpyrotic model in which collisions occur periodically. In the latter model the Big Bang was preceded by a Big Crunch and the universe cycles from one process to the other.
Implications:
Eternal inflation, in which universal inflation ends locally here and there in a random fashion, each end-point leading to a bubble universe, expanding from its own big bang. Proposals in the last two categories see the Big Bang as an event in either a much larger and older universe or in a multiverse.
Implications:
Ultimate fate of the universe Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe were greater than the critical density, then the universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started—a Big Crunch. Alternatively, if the density in the universe were equal to or below the critical density, the expansion would slow down but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn out, leaving white dwarfs, neutron stars, and black holes. Collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the universe would very gradually asymptotically approach absolute zero—a Big Freeze. Moreover, if protons are unstable, then baryonic matter would disappear, leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation. The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death. Modern observations of accelerating expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the universe expands and cools. Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip.
Implications:
Religious and philosophical interpretations As a description of the origin of the universe, the Big Bang has significant bearing on religion and philosophy. As a result, it has become one of the liveliest areas in the discourse between science and religion. Some believe the Big Bang implies a creator, while others argue that Big Bang cosmology makes the notion of a creator superfluous.
**Sun Certified Network Administrator**
Sun Certified Network Administrator:
SCNA (an abbreviation of Sun Certified Network Administrator) is a certification for system administrators and covers LANs and Solaris.
Requirements:
Candidates must pass a certification exam. The examination includes multiple-choice, scenario-based, and drag-and-drop questions, and tests the candidate on Solaris network administration topics, including how to configure and manage the network interface layer, the network (internet and transport) layers, network applications, and the Solaris IP Filter.
Candidates must have three or more years of experience administering Sun systems in a networked environment.
Certification also requires already being a Sun Certified System Administrator for Solaris (any edition).
**Somatic fusion**
Somatic fusion:
Somatic fusion, also called protoplast fusion, is a type of genetic modification in plants by which two distinct species of plants are fused together to form a new hybrid plant with the characteristics of both, a somatic hybrid. Hybrids have been produced either between different varieties of the same species (e.g. between non-flowering potato plants and flowering potato plants) or between two different species (e.g. between wheat Triticum and rye Secale to produce Triticale).
Somatic fusion:
Uses of somatic fusion include making potato plants resistant to potato leaf roll disease. Through somatic fusion, the crop potato plant Solanum tuberosum – the yield of which is severely reduced by a viral disease transmitted by the aphid vector – is fused with the wild, non-tuber-bearing potato Solanum brevidens, which is resistant to the disease. The resulting hybrid has the chromosomes of both plants and is thus similar to polyploid plants.
Somatic fusion:
Somatic hybridization was first introduced by Carlson et al. in Nicotiana glauca.
Process for plant cells:
The somatic fusion process occurs in four steps. First, the cell wall of one cell of each type of plant is removed using the enzyme cellulase, producing a somatic cell called a protoplast. Second, the cells are fused using electric shock (electrofusion) or chemical treatment to join the cells and fuse together the nuclei; the resulting fused nucleus is called a heterokaryon.
Process for plant cells:
Third, the formation of the cell wall is induced using hormones. Fourth, the cells are grown into calluses, which are further grown into plantlets and finally into a full plant, known as a somatic hybrid. Unlike the procedure for seed plants described above, fusion of moss protoplasts can be initiated without electric shock, by the use of polyethylene glycol (PEG). Further, moss protoplasts do not need phytohormones for regeneration, and they do not form a callus. Instead, regenerating moss protoplasts behave like germinating moss spores. Of further note, sodium nitrate and calcium ions at high pH can also be used, although results are variable depending on the organism.
Applications of hybrid cells:
Somatic cells of different types can be fused to obtain hybrid cells. Hybrid cells are useful in a variety of ways, e.g., (i) to study the control of cell division and gene expression, (ii) to investigate malignant transformations, (iii) to obtain viral replication, (iv) for gene or chromosome mapping, and (v) for the production of monoclonal antibodies by producing hybridomas (hybrid cells between an immortalised cell and an antibody-producing lymphocyte). Chromosome mapping through somatic cell hybridization is essentially based on fusion of human and mouse somatic cells. Generally, human fibrocytes or leucocytes are fused with mouse continuous cell lines.
Applications of hybrid cells:
When human and mouse cells (or cells of any two mammalian species, or of the same species) are mixed, spontaneous cell fusion occurs at a very low rate (about 10⁻⁶). Cell fusion is enhanced 100 to 1000 times by the addition of ultraviolet-inactivated Sendai (parainfluenza) virus or polyethylene glycol (PEG).
These agents adhere to the plasma membranes of cells and alter their properties in such a way that facilitates their fusion. Fusion of two cells produces a heterokaryon, i.e., a single hybrid cell with two nuclei, one from each of the cells entering fusion. Subsequently, the two nuclei also fuse to yield a hybrid cell with a single nucleus.
Applications of hybrid cells:
A generalized scheme for somatic cell hybridization may be described as follows. Appropriate human and mouse cells are selected and mixed together in the presence of inactivated Sendai virus or PEG to promote cell fusion. After a period of time, the cells (a mixture of man, mouse and 'hybrid' cells) are plated on a selective medium, e.g., HAT medium, which allows the multiplication of hybrid cells only.
Applications of hybrid cells:
Several clones (each derived from a single hybrid cell) of the hybrid cells are thus isolated and subjected to both cytogenetic and appropriate biochemical analyses for the detection of enzyme/ protein/trait under investigation. An attempt is now made to correlate the presence and absence of the trait with the presence and absence of a human chromosome in the hybrid clones.
Applications of hybrid cells:
If there is a perfect correlation between the presence and absence of a human chromosome and that of a trait in the hybrid clones, the gene governing the trait is taken to be located in the concerned chromosome.
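The correlation logic described above can be illustrated with a short sketch. The clone panel below is entirely hypothetical; the code simply checks, for each human chromosome, whether its presence and absence match the presence and absence of the trait in every hybrid clone:

```python
# Minimal sketch of the concordance logic described above (hypothetical data):
# each hybrid clone is scored for which human chromosomes it has retained and
# whether it expresses the human trait (e.g., an enzyme) under investigation.

clones = [
    # (human chromosomes retained, trait present?)
    ({1, 7, 17}, True),
    ({7, 21},    True),
    ({1, 21},    False),
    ({7, 17},    True),
    ({17, 21},   False),
]

candidates = set.union(*(chroms for chroms, _ in clones))

# A chromosome is implicated only if its presence/absence matches the
# trait's presence/absence in every clone (perfect concordance).
concordant = [
    c for c in candidates
    if all((c in chroms) == trait for chroms, trait in clones)
]

print("Chromosome(s) perfectly concordant with the trait:", concordant)  # -> [7]
```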
The HAT medium is one of several selective media used for the selection of hybrid cells. This medium is supplemented with hypoxanthine, aminopterin and thymidine, hence the name HAT medium. The antimetabolite aminopterin blocks the cellular biosynthesis of purines and pyrimidines from simple sugars and amino acids.
However, normal human and mouse cells can still multiply as they can utilize hypoxanthine and thymidine present in the medium through a salvage pathway, which ordinarily recycles the purines and pyrimidines produced from degradation of nucleic acids.
Hypoxanthine is converted into guanine by the enzyme hypoxanthine-guanine phosphoribosyltransferase (HGPRT), while thymidine is phosphorylated by thymidine kinase (TK); both HGPRT and TK are enzymes of the salvage pathway.
On a HAT medium, only those cells that have active HGPRT (HGPRT+) and TK (TK+) enzymes can proliferate, while those deficient in these enzymes (HGPRT− and/or TK−) cannot divide (since they cannot produce purines and pyrimidines due to the aminopterin present in the HAT medium).
For using HAT medium as a selective agent, the human cells used for fusion must be deficient for either the enzyme HGPRT or TK, while the mouse cells must be deficient for the other enzyme of this pair. Thus, one may fuse HGPRT-deficient human cells (designated TK+ HGPRT−) with TK-deficient mouse cells (designated TK− HGPRT+).
Their fusion products (hybrid cells) will be TK+ (due to the human gene) and HGPRT+ (due to the mouse gene) and will multiply on the HAT medium, while the parental human and mouse cells will fail to do so. Experiments with other selective media can be planned in a similar fashion.
Characteristics of somatic hybridization and cybridization:
Somatic cell fusion appears to be the only means through which two different parental genomes can be recombined among plants that cannot reproduce sexually (asexual or sterile).
Protoplasts of sexually sterile (haploid, triploid, and aneuploid) plants can be fused to produce fertile diploids and polyploids.
Somatic cell fusion overcomes sexual incompatibility barriers. In some cases somatic hybrids between two incompatible plants have also found application in industry or agriculture.
Somatic cell fusion is useful in the study of cytoplasmic genes and their activities and this information can be applied in plant breeding experiments.
Inter-specific and inter-generic fusion achievements (note: the table lists only a few examples; there are many more crosses). The possibilities of this technology are great; however, not all species are easily put into protoplast culture.
| kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**N-Methyl-D-aspartic acid**
N-Methyl-D-aspartic acid:
N-methyl-D-aspartic acid or N-methyl-D-aspartate (NMDA) is an amino acid derivative that acts as a specific agonist at the NMDA receptor mimicking the action of glutamate, the neurotransmitter which normally acts at that receptor. Unlike glutamate, NMDA only binds to and regulates the NMDA receptor and has no effect on other glutamate receptors (such as those for AMPA and kainate). NMDA receptors are particularly important when they become overactive during, for example, withdrawal from alcohol as this causes symptoms such as agitation and, sometimes, epileptiform seizures.
Biological function:
In 1962, J.C. Watkins reported synthesizing NMDA, an isomer of the previously known N-Methyl-DL-aspartic-acid (PubChem ID 4376). NMDA is a water-soluble D-alpha-amino acid — an aspartic acid derivative with an N-methyl substituent and D-configuration — found across Animalia from lancelets to mammals. At homeostatic levels NMDA plays an essential role as a neurotransmitter and neuroendocrine regulator. At increased but sub-toxic levels NMDA becomes neuro-protective. In excessive amounts NMDA is an excitotoxin. Behavioral neuroscience research utilizes NMDA excitotoxicity to induce lesions in specific regions of an animal subject's brain or spinal cord to study behavioral changes. The mechanism of action for the NMDA receptor is a specific agonist binding to its NR2 subunits, and then a non-specific cation channel is opened, which can allow the passage of Ca2+ and Na+ into the cell and K+ out of the cell. Therefore, NMDA receptors will only open if glutamate is in the synapse and concurrently the postsynaptic membrane is already depolarized - acting as coincidence detectors at the neuronal level. The excitatory postsynaptic potential (EPSP) produced by activation of an NMDA receptor also increases the concentration of Ca2+ in the cell. The Ca2+ can in turn function as a second messenger in various signaling pathways. This process is modulated by a number of endogenous and exogenous compounds and plays a key role in a wide range of physiological (such as memory) and pathological processes (such as excitotoxicity).
Antagonists:
Examples of antagonists, or more appropriately named receptor channel blockers, of the NMDA receptor are APV, amantadine, dextromethorphan (DXM), ketamine, magnesium, tiletamine, phencyclidine (PCP), riluzole, memantine, methoxetamine (MXE), methoxphenidine (MXP) and kynurenic acid. While dizocilpine is generally considered to be the prototypical NMDA receptor blocker and is the most common agent used in research, animal studies have demonstrated some amount of neurotoxicity, which may or may not also occur in humans. These compounds are commonly referred to as NMDA receptor antagonists. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Speck (printing)**
Speck (printing):
Speck (German for bacon, used figuratively) in the German typesetting tradition describes a manuscript that can be typeset with little effort. The term is still used in electronic publishing.
Background:
The usage is related to printing paid for as piece work. The term described manuscripts with little text, many pictures, free space, half-titles, and preset sections. They were more easily finished, but allowed the typesetter to earn the same amount as complicated pages with a large amount of new type to set. (Compare potboiler for authors.) A typesetter who fobbed off complicated manuscripts on others and preferred "Speck" was called a Speckjäger (Speck hunter). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Linde–Frank–Caro process**
Linde–Frank–Caro process:
The Linde–Frank–Caro process is a method for hydrogen production in which hydrogen is separated from the other components of water gas (chiefly carbon monoxide) by condensation. The process was invented in 1909 by Adolf Frank and developed with Carl von Linde and Heinrich Caro.
Process description:
Water gas is compressed to 20 bar and pumped into the Linde–Frank–Caro reactor. A water column removes most of the carbon dioxide and sulfur. Tubes with caustic soda then remove the remaining carbon dioxide, sulphur, and water from the gas stream. The gas enters a chamber and is cooled to −190 °C, resulting in the condensation of most of the gas to a liquid. The remaining gas is pumped to the next vessel where the nitrogen is liquefied by cooling to −205 °C, resulting in hydrogen gas as an end product. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Thermalsinterkalk Formation**
Thermalsinterkalk Formation:
The Thermalsinterkalk Formation is a geologic formation in Germany. It preserves fossils dating back to the Neogene period. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**BriLife**
BriLife:
BriLife, also known as IIBR-100, is a replication-competent recombinant VSV viral vectored COVID-19 vaccine candidate. It was developed by the Israel Institute for Biological Research (IIBR). The IIBR partnered with the US-based NRx Pharmaceuticals to complete clinical trials and commercialize the vaccine. A study conducted in hamsters suggested that one dose of the vaccine was safe and effective at protecting against COVID-19. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hill sphere**
Hill sphere:
The Hill sphere of an astronomical body is the region in which it dominates the attraction of satellites. It is sometimes termed the Roche sphere. It was defined by the American astronomer George William Hill, based on the work of the French astronomer Édouard Roche. To be retained by a more gravitationally attracting astrophysical object—a planet by a more massive sun, a moon by a more massive planet—the less massive body must have an orbit that lies within the gravitational potential represented by the more massive body's Hill sphere. That moon would, in turn, have a Hill sphere of its own, and any object within that distance would tend to become a satellite of the moon, rather than of the planet itself.
Hill sphere:
One simple view of the extent of our Solar System is that it is bounded by the Hill sphere of the Sun (engendered by the Sun's interaction with the galactic nucleus or other more massive stars). A more complex example is the Earth's Hill sphere, which extends between the Lagrange points L1 and L2, which lie along the line of centers of the Earth and the more massive Sun. The gravitational influence of the less massive body is least in that direction, and so it acts as the limiting factor for the size of the Hill sphere; beyond that distance, a third object in orbit around the Earth would spend at least part of its orbit outside the Hill sphere, and would be progressively perturbed by the tidal forces of the more massive body (the Sun), eventually ending up orbiting the latter. For two massive bodies with gravitational potentials and any given energy of a third object of negligible mass interacting with them, one can define a zero-velocity surface in space which cannot be passed, the contour of the Jacobi integral. When the object's energy is low, the zero-velocity surface completely surrounds the less massive body (of this restricted three-body system), which means the third object cannot escape; at higher energy, there will be one or more gaps or bottlenecks by which the third object may escape the less massive body and go into orbit around the more massive one. If the energy is at the border between these two cases, then the third object cannot escape, but the zero-velocity surface confining it touches a larger zero-velocity surface around the less massive body at one of the nearby Lagrange points, forming a cone-like point there. At the opposite side of the less massive body, the zero-velocity surface gets close to the other Lagrange point. This limiting zero-velocity surface around the less massive body is its Hill "sphere".
Definition:
The Hill radius or sphere (the latter defined by the former radius) has been described as "the region around a planetary body where its own gravity (compared to that of the Sun or other nearby bodies) is the dominant force in attracting satellites," both natural and artificial. As described by de Pater and Lissauer, all bodies within a system such as the Earth's solar system "feel the gravitational force of one another", and while the motions of just two gravitationally interacting bodies—constituting a "two-body problem"—are "completely integrable ([meaning]...there exists one independent integral or constraint per degree of freedom)" and thus admit an exact, analytic solution, the interactions of three (or more) such bodies "cannot be deduced analytically", requiring instead solutions by numerical integration, when possible. This is the case unless the negligible mass of one of the three bodies allows approximation of the system as a two-body problem, known formally as a "restricted three-body problem". For such two- or restricted three-body problems as its simplest examples—e.g., one more massive primary astrophysical body of mass $m_1$, and a less massive secondary body of mass $m_2$—the concept of a Hill radius or sphere is of the approximate limit to the secondary mass's "gravitational dominance", a limit defined by "the extent" of its Hill sphere, which is represented mathematically as
$$R_H \approx a \sqrt[3]{\frac{m_2}{3(m_1+m_2)}},$$
where, in this representation, the semi-major axis $a$ can be understood as the "instantaneous heliocentric distance" between the two masses (elsewhere abbreviated $r_p$). More generally, if the less massive body, $m_2$, orbits a more massive body ($m_1$, e.g., as a planet orbiting around the Sun) with a semi-major axis $a$ and an eccentricity of $e$, then the Hill radius or sphere, $R_H$, of the less massive body, calculated at the pericenter, is approximately
$$R_H \approx a(1-e) \sqrt[3]{\frac{m_2}{3(m_1+m_2)}}.$$
Definition:
When eccentricity is negligible (the most favourable case for orbital stability), this expression reduces to the one presented above.
Example and derivation:
In the Earth-Sun example, the Earth (5.97×10²⁴ kg) orbits the Sun (1.99×10³⁰ kg) at a distance of 149.6 million km, or one astronomical unit (AU). The Hill sphere for Earth thus extends out to about 1.5 million km (0.01 AU). The Moon's orbit, at a distance of 0.384 million km from Earth, is comfortably within the gravitational sphere of influence of Earth and it is therefore not at risk of being pulled into an independent orbit around the Sun. All stable satellites of the Earth (those within the Earth's Hill sphere) must have an orbital period shorter than seven months. The earlier eccentricity-ignoring formula can be re-stated as follows:
$$\frac{R_H^3}{a^3} \approx \frac{1}{3}\,\frac{m_2}{M}, \quad \text{or} \quad \frac{3R_H^3}{a^3} \approx \frac{m_2}{M},$$
where $M$ is the sum of the interacting masses. This expresses the relation in terms of the volume of the Hill sphere compared with the volume of the sphere defined by the less massive body's circular orbit around the more massive body, specifically, that the ratio of the volumes of these two spheres is one-third the ratio of the secondary mass to the total mass of the system.
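As a rough numerical check of the Earth-Sun figures quoted above, the eccentricity-ignoring Hill radius formula can be evaluated directly; the short sketch below uses only the masses and distance given in the text:

```python
# Minimal numerical check of the Earth-Sun Hill radius quoted above.
# Values are taken from the text; Earth's orbital eccentricity is neglected.

m_sun   = 1.99e30      # kg  (primary, m1)
m_earth = 5.97e24      # kg  (secondary, m2)
a       = 149.6e9      # m   (1 astronomical unit)

# R_H ~ a * (m2 / (3 * (m1 + m2)))**(1/3)
r_hill = a * (m_earth / (3.0 * (m_sun + m_earth))) ** (1.0 / 3.0)

print(f"Earth's Hill radius ~ {r_hill / 1e9:.2f} million km")   # ~1.50 million km
print("Moon's distance     ~ 0.38 million km -> well inside")
```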
Example and derivation:
Derivation: The expression for the Hill radius can be found by equating gravitational and centrifugal forces acting on a test particle (of mass much smaller than $m$) orbiting the secondary body. Assume that the distance between masses $M$ and $m$ is $r$, and that the test particle is orbiting at a distance $r_H$ from the secondary. When the test particle is on the line connecting the primary and the secondary body, the force balance requires that
$$\frac{Gm}{r_H^2} - \frac{GM}{(r-r_H)^2} + \Omega^2 (r - r_H) = 0,$$
where $G$ is the gravitational constant and $\Omega = \sqrt{\frac{GM}{r^3}}$ is the (Keplerian) angular velocity of the secondary about the primary (assuming that $m \ll M$). The above equation can also be written as
$$\frac{m}{r_H^2} - \frac{M}{r^2}\left(1 - \frac{r_H}{r}\right)^{-2} + \frac{M}{r^2}\left(1 - \frac{r_H}{r}\right) = 0,$$
which, through a binomial expansion to leading order in $r_H/r$, can be written as
$$\frac{m}{r_H^2} \approx \frac{3Mr_H}{r^3}.$$
Example and derivation:
Hence, the relation stated above:
$$\frac{r_H}{r} \approx \sqrt[3]{\frac{m}{3M}}.$$
If the orbit of the secondary about the primary is elliptical, the Hill radius is maximum at the apocenter, where r is largest, and minimum at the pericenter of the orbit. Therefore, for purposes of stability of test particles (for example, of small satellites), the Hill radius at the pericenter distance needs to be considered.
To leading order in $r_H/r$, the Hill radius above also represents the distance of the Lagrangian point L1 from the secondary.
Regions of stability:
The Hill sphere is only an approximation, and other forces (such as radiation pressure or the Yarkovsky effect) can eventually perturb an object out of the sphere. As stated, the satellite (third mass) should be small enough that its gravity contributes negligibly. Detailed numerical calculations show that orbits at or just within the Hill sphere are not stable in the long term; it appears that stable satellite orbits exist only inside 1/2 to 1/3 of the Hill radius. The region of stability for retrograde orbits at a large distance from the primary is larger than the region for prograde orbits at a large distance from the primary. This was thought to explain the preponderance of retrograde moons around Jupiter; however, Saturn has a more even mix of retrograde/prograde moons so the reasons are more complicated.
Further examples:
It is possible for a Hill sphere to be so small that it is impossible to maintain an orbit around a body. For example, an astronaut could not have orbited the 104-ton Space Shuttle at an orbit 300 km above the Earth, because a 104-ton object at that altitude has a Hill sphere of only 120 cm in radius, much smaller than a Space Shuttle. A sphere of this size and mass would be denser than lead, and indeed, in low Earth orbit, a spherical body must be more dense than lead in order to fit inside its own Hill sphere, or else it will be incapable of supporting an orbit. Satellites further out in geostationary orbit, however, would only need to be more than 6% of the density of water to fit inside their own Hill sphere. Within the Solar System, the planet with the largest Hill radius is Neptune, with 116 million km, or 0.775 au; its great distance from the Sun amply compensates for its small mass relative to Jupiter (whose own Hill radius measures 53 million km). An asteroid from the asteroid belt will have a Hill sphere that can reach 220,000 km (for 1 Ceres), diminishing rapidly with decreasing mass. The Hill sphere of 66391 Moshup, a Mercury-crossing asteroid that has a moon (named Squannit), measures 22 km in radius. A typical extrasolar "hot Jupiter", HD 209458 b, has a Hill sphere radius of 593,000 km, about eight times its physical radius of approx 71,000 km. Even the smallest close-in extrasolar planet, CoRoT-7b, still has a Hill sphere radius (61,000 km), six times its physical radius (approx 10,000 km). Therefore, these planets could have small moons close in, although not within their respective Roche limits.
Hill spheres for the solar system:
The following table and logarithmic plot show the radius of the Hill spheres of some bodies of the Solar System calculated with the first formula stated above (including orbital eccentricity), using values obtained from the JPL DE405 ephemeris and from the NASA Solar System Exploration website. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Scianna antigen system**
Scianna antigen system:
The Scianna blood antigen system consists of seven antigens. These include two high-frequency antigens, Sc1 and Sc3, and two low-frequency antigens, Sc2 and Sc4. The very rare null phenotype is characterised by the absence of Sc1, Sc2 and Sc3. The antigens are caused by changes in the erythroid membrane-associated protein (ERMAP).
History:
This blood group system was discovered in 1962 when a high frequency antigen was detected in a young woman (Ms. Scianna) who had experienced several late pregnancy losses due to haemolytic disease of the fetus. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HD 143699**
HD 143699:
HD 143699 is a single star in the southern constellation of Lupus. It is a dim star but visible to the naked eye with an apparent visual magnitude of 4.90. Based upon an annual parallax shift of 9.2 mas, it is located around 350 light years away. It is most likely (90% chance) a member of the Upper Centaurus–Lupus subgroup of the Sco OB2 moving group.
HD 143699:
This star has a stellar classification of B5/7 III/IV, suggesting it is an evolving star that is entering the giant stage. However, according to Zorec and Royer (2012) it is only 56% of the way through its main sequence lifespan. It is a chemically peculiar magnetic B star, showing an averaged quadratic field strength of (167.2±140.4)×10⁻³ T. Helium-weak, it displays an underabundance of helium in its spectrum. Radio emissions have been detected from this source. HD 143699 has 4.3 times the mass of the Sun and 4.4 times the Sun's radius. It has a high rate of spin with a projected rotational velocity of 123 km/s. The star is radiating 438 times the Sun's luminosity from its photosphere at an effective temperature of 14,521 K. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kernel (statistics)**
Kernel (statistics):
The term kernel is used in statistical analysis to refer to a window function. The term "kernel" has several distinct meanings in different branches of statistics.
Bayesian statistics:
In statistics, especially in Bayesian statistics, the kernel of a probability density function (pdf) or probability mass function (pmf) is the form of the pdf or pmf in which any factors that are not functions of any of the variables in the domain are omitted. Note that such factors may well be functions of the parameters of the pdf or pmf. These factors form part of the normalization factor of the probability distribution, and are unnecessary in many situations. For example, in pseudo-random number sampling, most sampling algorithms ignore the normalization factor. In addition, in Bayesian analysis of conjugate prior distributions, the normalization factors are generally ignored during the calculations, and only the kernel considered. At the end, the form of the kernel is examined, and if it matches a known distribution, the normalization factor can be reinstated. Otherwise, it may be unnecessary (for example, if the distribution only needs to be sampled from).
Bayesian statistics:
For many distributions, the kernel can be written in closed form, but not the normalization constant.
An example is the normal distribution. Its probability density function is
$$p(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
and the associated kernel is
$$p(x \mid \mu, \sigma^2) \propto e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$$
Note that the factor in front of the exponential has been omitted, even though it contains the parameter $\sigma^2$, because it is not a function of the domain variable $x$.
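One consequence of working with kernels is that sampling algorithms can ignore the normalization factor entirely. The sketch below is a minimal illustration (not a production sampler): a basic Metropolis random walk targets the normal distribution using only its kernel, since the missing constant cancels in the acceptance ratio:

```python
import math
import random

random.seed(0)

mu, sigma2 = 3.0, 4.0

def kernel(x):
    # Kernel of the normal pdf: the normalization factor 1/sqrt(2*pi*sigma^2)
    # is omitted because it does not depend on x.
    return math.exp(-(x - mu) ** 2 / (2.0 * sigma2))

# Basic Metropolis random walk: the acceptance ratio only involves kernel values,
# so the unknown normalization constant cancels out.
samples, x = [], 0.0
for _ in range(50_000):
    proposal = x + random.gauss(0.0, 1.0)
    if random.random() < min(1.0, kernel(proposal) / kernel(x)):
        x = proposal
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"sample mean ~ {mean:.2f} (target {mu}), sample variance ~ {var:.2f} (target {sigma2})")
```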
Pattern analysis:
The kernel of a reproducing kernel Hilbert space is used in the suite of techniques known as kernel methods to perform tasks such as statistical classification, regression analysis, and cluster analysis on data in an implicit space. This usage is particularly common in machine learning.
Nonparametric statistics:
In nonparametric statistics, a kernel is a weighting function used in non-parametric estimation techniques. Kernels are used in kernel density estimation to estimate random variables' density functions, or in kernel regression to estimate the conditional expectation of a random variable. Kernels are also used in time-series, in the use of the periodogram to estimate the spectral density where they are known as window functions. An additional use is in the estimation of a time-varying intensity for a point process where window functions (kernels) are convolved with time-series data.
Nonparametric statistics:
Commonly, kernel widths must also be specified when running a non-parametric estimation.
Definition: A kernel is a non-negative real-valued integrable function $K$. For most applications, it is desirable to define the function to satisfy two additional requirements: normalization, $\int_{-\infty}^{+\infty} K(u)\,du = 1$; and symmetry, $K(-u) = K(u)$ for all values of $u$.
The first requirement ensures that the method of kernel density estimation results in a probability density function. The second requirement ensures that the average of the corresponding distribution is equal to that of the sample used.
If K is a kernel, then so is the function K* defined by K*(u) = λK(λu), where λ > 0. This can be used to select a scale that is appropriate for the data.
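As an illustration of these definitions, the following minimal sketch builds a kernel density estimate with a Gaussian kernel; writing the bandwidth h as 1/λ makes it the rescaling K*(u) = λK(λu) described above (the data values are hypothetical):

```python
import math

def gaussian_kernel(u):
    # Non-negative, integrates to 1, and symmetric: K(-u) = K(u).
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(x, data, h):
    # Kernel density estimate at point x with bandwidth h.
    # Using (1/h) * K(u/h) is the rescaling K*(u) = lam * K(lam * u) with lam = 1/h.
    return sum(gaussian_kernel((x - xi) / h) for xi in data) / (len(data) * h)

data = [1.2, 1.9, 2.1, 2.4, 3.3, 5.0, 5.1, 5.6]   # hypothetical sample
for x in (2.0, 4.0, 5.3):
    print(f"density estimate at {x}: {kde(x, data, h=0.5):.3f}")
```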
Kernel functions in common use: Several types of kernel functions are commonly used: uniform, triangle, Epanechnikov, quartic (biweight), tricube, triweight, Gaussian, quadratic and cosine. In the table below, if $K$ is given with a bounded support, then $K(u)=0$ for values of $u$ lying outside the support. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cutaneous meningioma**
Cutaneous meningioma:
Cutaneous meningioma (also known as "Heterotopic meningeal tissue," and "Rudimentary meningocele") is a developmental defect, and results from the presence of meningocytes outside the calvarium. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Truncated 6-orthoplexes**
Truncated 6-orthoplexes:
In six-dimensional geometry, a truncated 6-orthoplex is a convex uniform 6-polytope, being a truncation of the regular 6-orthoplex.
There are 5 degrees of truncation for the 6-orthoplex. Vertices of the truncated 6-orthoplex are located as pairs on the edge of the 6-orthoplex. Vertices of the bitruncated 6-orthoplex are located on the triangular faces of the 6-orthoplex. Vertices of the tritruncated 6-orthoplex are located inside the tetrahedral cells of the 6-orthoplex.
Truncated 6-orthoplex:
Alternate names: truncated hexacross; truncated hexacontatetrapeton (acronym: tag, by Jonathan Bowers). Construction: There are two Coxeter groups associated with the truncated hexacross, one with the C6 or [4,3,3,3,3] Coxeter group, and a lower symmetry with the D6 or [3^{3,1,1}] Coxeter group.
Coordinates: Cartesian coordinates for the vertices of a truncated 6-orthoplex, centered at the origin, are all 120 sign (4) and coordinate (30) permutations of (±2,±1,0,0,0,0).
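The vertex count can be checked directly by enumerating these permutations; the following short sketch generates the coordinates and confirms that there are 120 of them:

```python
from itertools import permutations, product

# Enumerate the truncated 6-orthoplex vertices described above: all sign and
# coordinate permutations of (2, 1, 0, 0, 0, 0).
base = (2, 1, 0, 0, 0, 0)

vertices = set()
for perm in set(permutations(base)):            # 30 distinct coordinate permutations
    for signs in product((1, -1), repeat=6):    # sign choices (only the two non-zero entries matter)
        vertices.add(tuple(s * c for s, c in zip(signs, perm)))

print(len(vertices))  # 120 = 30 coordinate permutations x 4 sign combinations
```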
Bitruncated 6-orthoplex:
Alternate names: bitruncated hexacross; bitruncated hexacontatetrapeton (acronym: botag, by Jonathan Bowers).
Related polytopes:
These polytopes are a part of a set of 63 uniform 6-polytopes generated from the B6 Coxeter plane, including the regular 6-cube or 6-orthoplex. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Housekeeping (computing)**
Housekeeping (computing):
In computer programming, housekeeping can refer either to a standard entry or exit routine appended to a user-written block of code (such as a subroutine or function, sometimes as a function prologue and epilogue) at its entry and exit, or to any other automated or manual software process whereby a computer is cleaned up after usage (e.g. freeing resources such as virtual memory). This might include such activities as removing or archiving logs that the system has made as a result of the user's activities, or deleting temporary files which may otherwise simply take up space. Housekeeping can be described as a necessary chore, required to perform a particular computer's normal activity but not necessarily part of the algorithm. For cleaning up computer disk storage, utility software usually exists for this purpose, such as data compression software (to "shrink" files and release disk space) and defragmentation programs (to improve disk performance).
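As a minimal illustration of manual housekeeping, the sketch below creates a temporary working directory and file for a hypothetical unit of work and then removes them on exit, whether or not the work succeeded; the paths and the "work" itself are placeholders:

```python
import os
import tempfile

# Minimal sketch of manual housekeeping around a unit of work (paths and the
# work itself are hypothetical): temporary files created during processing are
# removed on exit, whether or not the work succeeded.

def process_with_housekeeping():
    workdir = tempfile.mkdtemp(prefix="job_")   # scratch space for this run
    try:
        scratch = os.path.join(workdir, "intermediate.dat")
        with open(scratch, "w") as fh:          # resource acquired...
            fh.write("intermediate results\n")  # ...used for the actual work
        # ...and released automatically by the 'with' block (exit housekeeping).
    finally:
        # Cleanup housekeeping: delete temporary files and the scratch directory.
        for name in os.listdir(workdir):
            os.remove(os.path.join(workdir, name))
        os.rmdir(workdir)

process_with_housekeeping()
```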
Examples:
Housekeeping could include (but is not limited to) the following activities: saving and restoring program state for called functions (including general purpose registers and return address); obtaining local memory on the stack; initializing local variables at the start of a program or function; freeing local memory on the stack on exit from a function; garbage collection; data conversion; backup and/or removal of un-needed files and software; and execution of disk maintenance utilities (e.g. ScanDisk, hard drive defragmenters, virus scanners). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HATS-3**
HATS-3:
HATS-3 is an F-type main-sequence star. Its surface temperature is 6351±76 K. HATS-3 is relatively depleted in its concentration of heavy elements, with a metallicity Fe/H index of −0.157±0.07, but is slightly younger than the Sun at an age of $3.2^{+0.6}_{-0.4}$ billion years. A multiplicity survey in 2016 detected a candidate stellar companion to HATS-3, 3.671±0.016 arc-seconds away.
Planetary system:
In 2013, one planet, named HATS-3b, was discovered on a tight, nearly circular orbit. The planetary orbit of HATS-3b is likely aligned with the equatorial plane of the star, at a misalignment angle of 3±25°. Planetary equilibrium temperature is 1643 K. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pinosylvin**
Pinosylvin:
Pinosylvin is an organic compound with the formula C6H5CH=CHC6H3(OH)2. A white solid, it is related to trans-stilbene, but with two hydroxy groups on one of the phenyl substituents. It is very soluble in many organic solvents, such as acetone.
Occurrence:
Pinosylvin is produced in plants in response to fungal infections, ozone-induced stress, and physical damage, for example. It is a fungitoxin protecting the wood from fungal infection. It is present in the heartwood of Pinaceae and also found in Gnetum cleistostachyum. When injected into rats, pinosylvin undergoes rapid glucuronidation and shows poor bioavailability.
Biosynthesis:
Pinosylvin synthase, an enzyme, catalyzes the biosynthesis of pinosylvin from malonyl-CoA and cinnamoyl-CoA: 3 malonyl-S-CoA + cinnamoyl-S-CoA → 4 CoA-SH + pinosylvin + 4 CO2. This biosynthesis is noteworthy because plant biosyntheses employing cinnamic acid as a starting point are rare compared to the more common use of p-coumaric acid. Two other compounds produced from cinnamic acid are anigorufone and curcumin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Loviride**
Loviride:
Loviride is an experimental antiviral drug manufactured by Janssen (now part of Janssen-Cilag) that is active against HIV. Loviride is a non-nucleoside reverse transcriptase inhibitor (NNRTI) that entered phase III clinical trials in the late 1990s, but failed to gain marketing approval because of poor potency. It is of clinical significance only in those patients who were enrolled in clinical trials to evaluate loviride (e.g., CAESAR and AVANTI), because in those trials loviride was often given alone and with no companion drug, leading to a high probability of developing reverse transcriptase mutations such as K103N which result in cross-class resistance to the NNRTIs efavirenz and nevirapine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hacker group**
Hacker group:
Hacker groups are informal communities that began to flourish in the early 1980s, with the advent of the home computer.
Overview:
Prior to that time, the term hacker simply referred to any computer hobbyist. The hacker groups were out to make names for themselves, and were often spurred on by their own press. This was a heyday of hacking, at a time before there was much law against computer crime. Hacker groups provided access to information and resources, and a place to learn from other members. Hackers could also gain credibility by being affiliated with an elite group. The names of hacker groups often parodied large corporations, governments, police and criminals, and often used specialized orthography. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DFNY1**
DFNY1:
Deafness, Y-linked 1 (DFNY1) is a protein that in humans is encoded by the DFNY1 gene. Y-linked hearing impairment (DFNY1, MIM 400043) is one of the few Mendelian disorders showing Y-linkage in humans. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cinema Digital Sound**
Cinema Digital Sound:
Cinema Digital Sound (CDS) was a multi-channel surround sound format used for theatrical films in the early 1990s. The system was developed by Eastman Kodak and Optical Radiation Corporation. CDS was quickly superseded by Digital Theatre Systems (DTS) and Dolby Digital formats.
Cinema Digital Sound:
CDS replaced the analogue audio tracks on 35 mm and 70 mm film prints with 5.1 discrete audio. The 5.1 tracks were encoded using 16-bit PCM audio with delta modulation compression, which resulted in a compression level of 4:1. The audio channels in CDS were arranged in the same way as most current 5.1 systems, with Left, Center, Right, Left Surround, Right Surround and LFE. Dick Tracy (1990) was the first film encoded with CDS. Not all films with CDS soundtracks used all 5.1 channels; some, such as Edward Scissorhands (1990), used only the 4 channels that were supported by Dolby Stereo. Universal Soldier (1992) was the last film encoded with CDS.
Cinema Digital Sound:
The digital information was printed on the film, similar to Dolby Digital and SDDS. However, unlike those formats, there was no analog optical backup in 35 mm and no magnetic backup in 70 mm, meaning that if the digital information were damaged in some way, there would be no sound at all. This was one of the factors that contributed to its inevitable demise; the then-new Dolby Digital format moved its information to another area (in between the film sprocket holes), preserving the optical tracks.
Development and technical aspects:
Prior to the development of Cinema Digital Sound, a six-track, optically read, sound-on-film system using PCM digital encoding was thought, by most in the industry, to be impractical. However, in a joint effort over a three-year period, and with a $5 million total investment, Kodak developed a special fine-grained, high-resolution negative film capable of holding more information than previous films and Optical Radiation Corporation developed a special audio coding and error correction system, resulting in the Cinema Digital Sound system.
Development and technical aspects:
Initial tests proved that packing densities necessary to achieve high-fidelity digital sound and error rates comparable to the Compact Disc format were possible using Kodak's new high-resolution negative film, and that wear on the film during normal playback was not significant. In a controversial move (ORC's engineers fought against it but management overruled them), it was decided to utilize the area typically reserved for sound in the 35 mm optical and 70 mm magnetic film standards for the new CDS digital audio and data tracks. Six audio channels were implemented: five full-bandwidth audio channels (three behind the screen and two surround channels) were applied to the input of the system as linear 16-bit samples at a 44.1 kHz sample rate. Samples were data-compressed into 12-bit words via delta modulation, with one in every 32 samples retaining its original linear 16-bit form to provide an accurate reference every 726 μm. The subwoofer (.1 Low Frequency Effects) channel did not employ delta modulation. Instead the 44.1 kHz sample rate was decreased to 1378 Hz, which yielded an upper audio bandwidth of 114 Hz with anti-aliasing and anti-imaging strategies applied in the remainder of the frequency range.
Development and technical aspects:
In addition to the six digital audio channels, three data/control channels were provided. One SMPTE time code channel and another channel for MIDI control signals offered flexibility for performing theater automation or external synchronization of equipment. The third data channel, an identification track, could be used to record a variety of user-defined parameters specific to the film (such as curtain opening/closing, seat movement or lighting effects.) In view of the fact that the CDS system was available for only two years before its complete withdrawal from the market, no use of the SMPTE time code or MIDI channels was ever implemented.
Development and technical aspects:
Because the data rate was 5.8 million bits per second (5.8 Mbit/s), significant error detection and correction was required. A custom-designed Reed-Solomon block code was used, with additional CRC characters for error correction. Interleaving of odd and even audio samples was performed to protect against burst errors. Just as in audio tape machines, transport problems with tension, guides, and supply and take-up reels could result in vertical or horizontal weave, and as bit sizes were only 14 μm, precise timing and tracking was essential; thus the CDS system required installation of special projector modifications to smooth the film path travel and steady the take-up speed. It was found later, however, that modifications to the projectors were not needed and that the CDS system's sensitivity to improper film speed was due to a diode installed incorrectly in the CDS decoder module. Horizontal tracking was provided by a 76-MHz digital servo, while vertical timing was accomplished with an algorithm written into the data format itself. Rows of data were scanned horizontally, thus a self-clocking run-length-limited code was used for this error correction. A 6-to-8-bit mapping was performed upon encoding to ensure that each 8-bit word contained exactly four ones. This form of parity worked well in correcting errors upon decoding.
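The figures quoted above allow a rough back-of-envelope estimate of the audio portion of that 5.8 Mbit/s stream. The sketch below uses only the numbers stated in the text, assumes the LFE channel stayed at 16-bit linear samples, and ignores Reed-Solomon/CRC parity, synchronization and the data/control channels, so it is a lower bound rather than a reconstruction of the actual on-film format:

```python
# Rough back-of-envelope estimate of the CDS audio payload, using only the
# figures quoted above (5 delta-modulated channels, 1-in-32 samples kept as
# 16-bit linear, LFE at 1378 Hz, 6-to-8-bit channel coding). Reed-Solomon/CRC
# parity, sync and the data/control channels are ignored, so this is only a
# lower bound on the stated 5.8 Mbit/s total.

fs = 44_100                                   # full-bandwidth sample rate (Hz)
bits_per_sample = (31 * 12 + 16) / 32         # 12-bit words, every 32nd sample 16-bit
full_channels = 5 * fs * bits_per_sample      # five full-bandwidth channels
lfe = 1_378 * 16                              # LFE: 1378 Hz, assumed 16-bit linear

payload = full_channels + lfe                 # audio bits before channel coding
on_film = payload * 8 / 6                     # 6-to-8-bit mapping overhead

print(f"audio payload : {payload / 1e6:.2f} Mbit/s")
print(f"after 6-to-8  : {on_film / 1e6:.2f} Mbit/s (of the 5.8 Mbit/s total)")
```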
Films distributed with CDS:
Dick Tracy (1990) (Buena Vista Pictures) Days of Thunder (1990) (Paramount Pictures) Flatliners (1990) (Columbia Pictures) Edward Scissorhands (1990) (20th Century Fox) The Doors (1991) (Carolco Pictures) Hudson Hawk (1991) (TriStar Pictures) Terminator 2: Judgment Day (1991) (Carolco) For the Boys (1991) (20th Century Fox) Final Approach (1991) (Trimark Pictures) Universal Soldier (1992) (Carolco) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Keratin 21**
Keratin 21:
Keratin 21 is a type I cytokeratin which expresses immunologically specific fusion protein. It is not found in humans, but only in Rattus norvegicus. It is first detectable after 18-19 days of gestation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Boing (TV channel)**
Boing (TV channel):
Boing is the brand name used by the International division of Warner Bros. Discovery co-owned by Mediaset for a collection of television networks outside of the United States that target children.
As of 2023, Boing-branded channels exist in Italy (its flagship service), Spain, and in Africa, while a weekend morning programming block formerly aired on the WarnerMedia-owned Chilevisión (now owned by Paramount Global) in Chile and tv2 in Turkey (previously TNT).
Broadcast:
Italy The Italian free-to-air television channel marketed at children and teenagers, owned by Boing S.p.A., a joint venture of Fininvest's MFE - MediaForEurope (through its Mediaset Italia subsidiary) and Warner Bros. Discovery (through its International division). It is available on digital terrestrial television and free-to-air satellite provider Tivùsat.
Spain The Spanish free-to-air television channel launched in 2010 and owned as a joint venture between Mediaset España and Warner Bros. Discovery through its International unit. Series on the channel are also available in English via a secondary audio feed. Additional feeds are available in Italy, France and Sub-Saharan Africa.
France The French pay television channel aimed at children and teenagers launched on 8 April 2010. On 2 February 2023, it was announced that Boing would transition to Cartoonito full-time on 3 April 2023.
Broadcast:
Africa The African television channel operated by Warner Bros. Discovery through its International unit, which launched on May 30, 2015. The channel can currently be seen on Montage Cable TV in Nigeria and Sentech's Mobile TV in South Africa. On January 1, 2017, the channel became available to AzamTV subscribers. The channel does not have a website. The French version of Boing is also broadcast in Sub-Saharan Africa and the Maghreb.
Broadcast:
Chile Turkey Boing was launched on TNT in 2012 as a programming block, even after the channel was replaced by Teve2, the block still continued, it also aired on Cartoon Network Turkey starting in 2014.
The Animadz:
A group of characters known as the Animadz serve as Boing's official mascots. They include Bo, a blue dog-like human; Bobo, a hairless green humanoid; Otto, a robot; Maissa, a yellow maize; Katrina, a white chicken; and Dino, a green dinosaur. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**TIMvision**
TIMvision:
TIMvision (formerly Cubovision) is an Italian Internet video on demand (VOD) service by Telecom Italia. It offers television shows, movies and TV series for rental or purchase through the use of a decoder, as well as via video on demand on smart TVs and Android and iOS devices. From 2018, TIMvision has produced original TV shows, such as the Italian version of Skam, known as Skam Italia.
History:
Cubovision In 2007, Franco Bernabè returned to Telecom Italia; two years later, with the entrepreneur and Telecom Italia innovation director Luca Tomassini, he created Cubovision as a set-top box that offered the possibility of accessing video-on-demand television content and managing multiple TV platforms, such as digital terrestrial or web TV. On December 16, 2009, Telecom presented the first Cubovision model in Milan. The industrial design of the set-top box was conceived by Luca Tomassini and created by Visione, the production by Finmek and the marketing by Amino Communications.
History:
Telecom Italia launched an experimental version of Cubovision in January 2010, following an open innovation model. The first models released to the market were beta versions whose functionality was updated in subsequent models. On 15 December 2010, Telecom Italia presented in Milan the new model of the platform, equipped with Intel microprocessors, totally revisited and with broadcasting and broadband functionality. The new version included more than 1500 monthly programming hours, with over 200 titles per month within a catalog made up of TV series, cartoons, concerts, documentaries and films. In the same period, Telecom Italia made Cubovision's official application available on the Apple and Samsung stores, to connect to the platform via tablet or smartphone.
History:
TIMvision On 12 May 2014, Cubovision changed its name to TIMvision. In June 2015, TIM finalized a two-year agreement with 20th Century Fox with the intent to include some of Fox's productions on the platform. Following an agreement with Turner Broadcasting System, the service made available exclusive previews of the Cartoon Network's Adult Swim block, including Robot Chicken, Mr. Pickles, China, IL, and Aqua Teen Hunger Force. On 28 December 2016, TIMvision, with a capital of about 50,000 euros, became a limited liability company (società a responsabilità limitata) with the name TIMVISION S.r.l., controlled by Telecom Italia S.p.A., with the aim of accelerating the Quadruple Play strategy and of creating and managing productions, from TV series to cinematographic works, at the national and international level. Since June 2019, TIMvision has also made its app available on the Apple TV and Amazon Fire TV platforms. From 16 September 2019, Andrea Fabiano became the Multimedia manager of TIM and TIMvision CEO. On 12 October 2019, during the Italian national team's qualifying match for the 2020 European Football Championship, the new logo was unveiled.
History:
On 31 December 2019, Sky Uno, Sky Arte, Sky TG24 and Sky Sport 24 were made available for streaming in HD. Furthermore, the Mediaset channels also arrived, excluding Boing, Cartoonito and the radio channels.
From 24 March 2020, after signing a partnership with Disney+, TIM included the Mundo Disney+ offer without constraints with TIMvision Plus.
From 27 May 2020, following the partnership with Netflix made in November 2019, TIM included the Mundo Netflix offer on TIMvision Plus and the TIMvision Box decoder. In August 2020 the free channels Paramount Network, Spike and VH1, published by ViacomCBS Networks Italia, were made available for streaming in HD.
History:
From 27 January 2021, the contents of Discovery+ are also available in Italy on TIMvision. On 1 July 2021, the four commercial offers were made official, which include the contents of TIMvision and those of DAZN, Disney+ and Netflix, with Infinity+ always included in the various packages. Following the partnership with DAZN, holder of the transmission rights to all 380 seasonal matches of the Serie A championship for the three-year period 2021-2024, TIMvision hosts all the matches of the top Italian football championship on its platform. Thanks to the TIMvision Calcio and Sport offer, the Europa League, some matches of the new Conference League, Serie B, La Liga, MotoGP and all the sporting events of the DAZN offer are also visible through DAZN.
History:
Until 31 July 2021, it included the possibility, through a NOW TV subscription, of watching Serie A, Premier League and Bundesliga football matches, as well as Formula 1 and all the sporting events contained in the Sky sports package.
Since August 2021, with the agreement with Mediaset Infinity and streaming on the Infinity+ platform, it has been offering the matches of the 2022 UEFA Champions League, for a total of 92 first-phase matches: 8 from the playoffs and 84 from the group stage.
Through the partnership with Mediaset Infinity and the use of the Infinity+ platform, the offer of films, cartoons, TV series and content, also available in the original language, with subtitles and in 4K, previously included in the Infinity TV catalog, has been expanded.
On 17 January 2022, the Paramount Network and Spike channels, edited by ViacomCBS Networks Italia, were removed following their closure. A few days later, Paramount Network's replacement, 27 Twentyseven, published by Mediaset, was made available.
On 29 March 2022, La7 and La7d were placed on the platform.
As of 15 November 2022, a new interface has been released on the site and apps.
Device support:
The following devices feature hardware compatible with streaming TIMvision: Android smartphones and tablets; Android TV devices; Apple: iPad, iPhone; Microsoft: Windows 8, Windows 10, Windows 11; Sony: some Blu-ray Disc players and CTVs; LG: some Blu-ray Disc players and CTVs; Samsung: some Blu-ray Disc players and CTVs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Calcium borate**
Calcium borate:
Calcium borate (Ca3(BO3)2) can be prepared by reacting calcium metal with boric acid; the resulting precipitate is calcium borate. A hydrated form occurs naturally as the minerals colemanite, nobleite and priceite.
One of its uses is as a binder in some grades of hexagonal boron nitride for hot pressing. Other uses include flame retardant in epoxy molding compounds, a ceramic flux in some ceramic glazes, reactive self-sealing binders in hazardous waste management, additive for insect-resistant polystyrene, fertilizer, and production of boron glasses.
It is also used as a main source of boron oxide in the manufacture of ceramic frits, which are used in ceramic glazes or ceramic engobes for wall and floor ceramic tiles. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sheet metal forming simulation**
Sheet metal forming simulation:
Today the metal forming industry is making increasing use of simulation to evaluate the performance of dies, processes and blanks prior to building try-out tooling. Finite element analysis (FEA) is the most common method of simulating sheet metal forming operations to determine whether a proposed design will produce parts free of defects such as fracture or wrinkling.
Sheet metal forming challenges:
Sheet metal forming, which is often referred to as stamping, is a process in which a piece of sheet metal, referred to as the blank, is formed by stretching between a punch and a die.
Sheet metal forming challenges:
The most troublesome and most frequent defects are wrinkles, thinning, springback, and splits or cracks. A few methods are used across the industry to cope with these defects, based on the experience of technicians. However, choosing the correct process is most vital, since it involves selecting the correct geometry and the number of steps needed to reach the final geometry, which demands specific experience or a higher number of iterations. Deformation of the blank is typically limited by buckling, wrinkling, tearing, and other negative characteristics which make it impossible to meet quality requirements or make it necessary to run at a slower than desirable rate.
Sheet metal forming challenges:
Wrinkling in a draw is a series of ridges that form radially in the drawn wall due to compressive buckling. In practice these are due to low blank-holder pressure, which lets the material slip and form wrinkles. The optimum blank-holding pressure is key; however, in certain cases it is not enough, and draw beads are the solution. The location and shape of the draw beads are the challenge, which can be analysed with FEA during the design stage, prior to tool manufacturing. Cracks in the vertical wall are caused by high tensile stresses: a small radius can block the material flow and result in excessive thinning at that point, usually more than 40% of the sheet thickness, which results in cracks. In some cases cracking may be due to excessive blank-holder pressure, which restricts the metal flow; in others it may be due to wrong process design, such as attempting a deep draw in a single stage that is only feasible in two stages. Thinning is excessive stretching in the vertical wall: high tensile stresses cause thickness reduction, especially at small radii in the metal part; however, up to 20% thinning is generally allowed due to process limitations. Springback is a particularly critical aspect of sheet metal forming. Even relatively small amounts of springback in structures that are formed to a significant depth may cause the blank to distort to the point that tolerances cannot be held. New materials such as high strength steel, aluminum and magnesium are particularly prone to springback. Sheet metal forming is more of an art than a science; the design of the tooling, stamping process, and blank materials and geometry is primarily done by trial and error.
Sheet metal forming challenges:
Nowadays, simulation software falls under CAE (computer-aided engineering) and uses finite element analysis to predict common defects at the design stage, prior to die manufacturing. The traditional approach to designing the punch and die to produce parts successfully is to build try-out tools to check the ability of a given tool design to produce parts of the required quality. Try-out tools are typically made of less expensive materials to reduce try-out costs, yet this method is still costly and time-consuming.
History of sheet metal forming simulation:
The first effort at simulating metal forming was made using the finite difference method in the 1960s to better understand the deep drawing process. Simulation accuracy was later increased by applying nonlinear finite element analysis in the 1980s, but computing time was still too long at that time to apply simulation to industrial problems. Rapid improvements over the past few decades in computer hardware have made the finite element analysis method practical for resolving real-world metal forming problems. A new class of FEA codes based on explicit time integration was developed that reduced computational time and memory requirements. The dynamic explicit FEA approach uses a central difference explicit scheme to integrate the equations of motion. This approach uses lumped mass matrices and a typical time step on the order of millionths of a second. The method has proved to be robust and efficient for typical industrial problems. As computer hardware and operating systems have evolved, memory limitations that prevented the practical use of implicit finite element methods have been overcome. Using the implicit method, time steps are computed based on the predicted amount of deformation occurring at a given moment in the simulation, thus preventing unnecessary computational inefficiency caused by computing too small a time step when nothing is happening or too large a time step when high amounts of deformation are occurring.
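The central difference explicit scheme mentioned above can be illustrated on the simplest possible system; the sketch below integrates a single lumped mass on a linear spring and is only a toy illustration of the integrator, not of a forming code:

```python
# Minimal 1-D illustration of explicit central-difference time integration
# (the scheme mentioned above), applied to a single lumped mass on a spring.
# This is only a sketch of the integrator, not of a forming simulation.

m, k = 1.0, 400.0            # lumped mass (kg) and stiffness (N/m)
dt = 0.001                   # explicit methods need a small, stable time step
x, x_prev = 0.01, 0.01       # initial displacement (m), started at rest

for step in range(1, 201):
    a = -k * x / m                           # acceleration from the internal force
    x_next = 2.0 * x - x_prev + a * dt * dt  # central-difference update
    x_prev, x = x, x_next
    if step % 50 == 0:
        print(f"t = {step * dt:.3f} s, x = {x:.5f} m")
```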
History of sheet metal forming simulation:
Finite Element Analysis Methods Two broad divisions in the application of Finite Element Analysis method for sheet metal forming can be identified as Inverse One-step and Incremental.
History of sheet metal forming simulation:
Inverse One-step methods compute the deformation potential of a finished part geometry to the flattened blank. A mesh that initially has the shape and material characteristics of the finished geometry is deformed to the flat-pattern blank. The strain computed in this inverse forming operation is then inverted to predict the deformation potential of the flat blank being deformed into the final part shape. All the deformation is assumed to happen in one increment or step and is the inverse of the process which the simulation is meant to represent, thus the name Inverse One-Step.
History of sheet metal forming simulation:
Incremental analysis methods start with the mesh of the flat blank and simulate the deformation of the blank inside tools modeled to represent a proposed manufacturing process. This incremental forming is computed "forward" from initial shape to final, and is calculated over a number of time increments from start to finish. The time increments can be either explicitly or implicitly defined depending on the finite element software being applied. As the incremental methods include the model of the tooling and allow for the definition of boundary conditions which more fully replicate the manufacturing proposal, incremental methods are more commonly used for process validation. Inverse One-step, with its lack of tooling and therefore poor representation of process, is limited to geometry-based feasibility checks. Incremental analysis has filled the role previously played by proof tools or prototype tools. Proof tools in the past were short-run dies made of softer than normal material, which were used to plan and test the metal forming operations. This process was very time-consuming and did not always yield beneficial results, as the soft tools were very different in their behavior from the longer-running production tools. Lessons learned on the soft tools did not transfer to the hard tool designs. Simulation has for the most part displaced this old method. Simulation used as a virtual tryout is a metal forming simulation based on a specific set of input variables, sometimes nominal, best case, worst case, etc. However, any simulation is only as good as the data used to generate the predictions. When a simulation is seen as a "passing result", manufacturing of the tool will often begin in earnest. But if the simulation results are based on an unrealistic set of production inputs, then its value as an engineering tool is suspect.
History of sheet metal forming simulation:
Robustness analysis: Recent innovations in stochastic analysis applied to sheet metal forming simulations have enabled early adopters to engineer repeatability into their processes, something that might not be achieved when using single sets of simulations as a "virtual tryout".
Uses of sheet metal forming simulation:
Chaboche-type material models are sometimes used to simulate springback effects in sheet metal forming. These and other advanced plasticity models require the experimental determination of cyclic stress-strain curves. Test rigs have been used to measure material properties that, when used in simulations, provide excellent correlation between measured and calculated springback. Many metal forming operations require too much deformation of the blank to be performed in a single step. Multistep or progressive stamping operations are used to incrementally form the blank into the desired shape through a series of stamping operations. Incremental forming simulation software platforms address these operations with a series of one-step stamping operations that simulate the forming process one step at a time.
Uses of sheet metal forming simulation:
Another common goal in the design of metal forming operations is to design the shape of the initial blank so that the final formed part requires few or no cutting operations to match the design geometry. The blank shape can also be optimized with finite element simulations. One approach is based on an iterative procedure that begins with an approximate starting geometry, simulates the forming process and then checks the deviation of the resulting formed geometry from the ideal product geometry. The node points are adjusted in accordance with the displacement field to correct the blank edge geometry. This process is continued until the final blank shape matches the as-designed part geometry. Metal forming simulation offers particular advantages in the case of high strength steel and advanced high-strength steel, which are used in current day automobiles to reduce weight while maintaining crash safety of the vehicle. These materials have higher yield and tensile strength than conventional steel, so the die undergoes greater deformation during the forming process, which in turn increases the difficulty of designing the die. Sheet metal simulation that considers the deformation of not only the blank but also the die can be used to design tools to successfully form these materials.
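The iterative blank-shape correction described above is, schematically, a simple loop; in the sketch below the forming simulation is replaced by a purely hypothetical placeholder function, so only the structure of the iteration is meaningful:

```python
# Schematic of the iterative blank-shape optimization loop described above.
# 'simulate_forming' stands in for a real FEA forming simulation and is purely
# hypothetical here: it maps blank-edge node positions to formed-edge positions.

def simulate_forming(blank_edge):
    # Placeholder: pretend forming stretches the edge outward by 5%.
    return [p * 1.05 for p in blank_edge]

target_edge = [10.0, 12.0, 11.0, 9.5]     # as-designed part edge geometry
blank_edge = list(target_edge)            # approximate starting blank geometry

for iteration in range(20):
    formed_edge = simulate_forming(blank_edge)
    deviation = [f - t for f, t in zip(formed_edge, target_edge)]
    if max(abs(d) for d in deviation) < 1e-3:
        break
    # Adjust blank-edge nodes opposite to the deviation (displacement field).
    blank_edge = [b - d for b, d in zip(blank_edge, deviation)]

print(f"converged after {iteration + 1} iterations")
print("optimized blank edge:", [round(b, 3) for b in blank_edge])
```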
Industrial applications:
Tata Motors engineers used metal forming simulation to develop tooling and process parameters for producing a new oil pump design. The first prototypes that were produced closely matched the simulation prediction. Nissan Motor Company used metal forming simulation to address a tearing problem in a metal stamping operation. A simple simulation model was created to determine the effect of blank edge radius on the height to which the material could be formed without tearing. Based on this information a new die was designed that solved the problem. Many sheet metal programs are available in the industry, such as SolidWorks and LITIO. FEA packages such as LS-DYNA, AutoForm, HyperForm, and PAM-STAMP are now widely used for virtual process simulation prior to manufacturing. Defects such as wrinkles, thinning, and cracks can be detected at the design stage, before the process design is finalized, which supports correct process selection, reduces lead time, and saves money that would otherwise be spent on repeated manufacturing iterations.
**Subcompact car**
Subcompact car:
Subcompact car is a North American classification for cars smaller than a compact car. It is broadly equivalent to the B-segment (Europe), supermini (Great Britain) or A0-class (China) classifications.
Subcompact car:
According to the U.S. Environmental Protection Agency (EPA) car size class definition, the subcompact category sits between the "minicompact" and "compact" categories. The EPA definition of a subcompact is a passenger car with a combined interior and cargo volume of 85–99 cubic feet (2,410–2,800 L). Current examples of subcompact cars are the Nissan Versa and Hyundai Accent. The smaller cars in the A-segment/city car category (such as the Chevrolet Spark and Smart Fortwo) are sometimes called subcompacts in the U.S., because the EPA's name for this smaller category, "minicompact", is not commonly used by the general public. The prevalence of small cars in the United States increased in the 1960s due to increased imports of cars from Europe and Japan. Widespread use of the term subcompact coincided with the early 1970s increase in subcompact cars built in the United States. Early 1970s subcompacts include the AMC Gremlin, Chevrolet Vega, and Ford Pinto.
History:
1960s The term subcompact originated during the 1960s. However, it came into popular use in the early 1970s, as car manufacturers in the United States began to introduce smaller cars into their line-ups. Previously, cars of this size were variously categorized, including as "small cars" or "economy cars".
History:
Several of these small cars were produced in the U.S. in limited volumes, including the 1930 American Austin (later called the American Bantam) and the 1939 Crosley. From the 1950s onwards, various imported small cars were sold in the U.S., including the Nash Metropolitan, Volkswagen Beetle, and various small British cars. The term subcompact did not yet exist, so the Metropolitan was labeled a "compact or economy car" and marketed as a second vehicle for use around town, not as a primary car. The Volkswagen Beetle was marketed with advertising pointing out the car's unconventional features as strengths and urging buyers to "think small." Prompted by the British government to export, Ford was one of the first companies to try to sell inexpensive small cars in volume. From 1948 to 1970, approximately 250,000 economical English Fords were imported to the US, while over 235,000 went to Canada. Models such as the 1960 Ford Anglia were promoted as "The world's most exciting light car." 1970s Due to the increasing popularity of small cars imported from Europe and Japan during the late 1960s, the American manufacturers began releasing competing locally-built models in the early 1970s. The AMC Gremlin was described at its April 1970 introduction as "the first American-built import" and the first U.S.-built subcompact car. Also introduced in 1970 were the Chevrolet Vega and Ford Pinto. Plans for the subcompact AMC Gremlin pre-dated the Vega and Pinto by several years because of AMC's strategy of recognizing emerging market opportunities ahead of the competition. Sales of American-built "low weight cars" (including subcompacts) accounted for more than 30% of total car sales in 1972 and 1973, despite inventory shortages for several models. The Gremlin, Pinto and Vega were all rear-wheel drive and available with four-cylinder engines (the Pinto was also available with a V6 engine, and the Gremlin was also available with I6 and V8 engines).
History:
The Pontiac Astre, the Canadian-originated, re-badged Vega variant, was released in the U.S. in September 1974. Due to falling sales of the larger pony cars (such as the Chevrolet Camaro and first-generation Ford Mustang) in the mid-1970s, the Vega-based Chevrolet Monza was introduced as an upscale subcompact, and the Ford Mustang II was temporarily downsized from the pony car class to become a subcompact car for its second generation. The Monza, with its GM variants Pontiac Sunbird, Buick Skyhawk, and Oldsmobile Starfire, and the Mustang II continued until the end of the decade. The Chevrolet Chevette was GM's new entry-level subcompact, introduced as a 1976 model. It was an 'Americanized' design from Opel, GM's German subsidiary. Additionally, subcompacts that were imported and marketed through domestic manufacturers' dealer networks as captive imports included the Renault Le Car and the Ford Fiesta.
History:
In 1977, the U.S. Environmental Protection Agency (EPA) began to use a new vehicle classification system based on interior volume instead of exterior size. Sedans with up to 100 cubic feet of combined passenger and luggage volume were classified as subcompact. There was no separate subcompact station wagon class; all wagons with up to 130 cubic feet of volume were classified as "small." In 1978, Volkswagen began producing the "Rabbit" version of the Golf, a modern front-wheel drive design, in Pennsylvania. In 1982, American Motors began manufacturing the Renault Alliance, a U.S. version of the Renault 9, in Wisconsin. Both models benefited from European designs, development, and experience.
History:
1980s To replace the aging Chevette in the second half of the 1980s, Chevrolet introduced imported front-wheel drive subcompact cars marketed under its own brand: the Suzuki Cultus (a three-cylinder hatchback, badged as the Chevrolet Sprint) and the Isuzu Gemini (a four-cylinder hatchback/sedan, badged as the Chevrolet Spectrum).
1990s During the 1990s GM offered the Geo brand featuring the Suzuki-built Metro subcompact.
History:
2000s to present Because of consumer demand for fuel-efficient cars during the mid- to late-2000s, subcompact cars became the fastest-growing market category in the U.S. In 2006, three major subcompact models were introduced to the market: the Toyota Yaris, Honda Fit, and Nissan Versa. These models were released by their manufacturers to aim at a group of younger buyers who would otherwise shop for used cars. While fuel prices at the time were increasing, the small cars were planned before fuel prices soared; for example, Honda had announced that it would release a subcompact model as early as 2004. By 2008, sales of subcompact cars had dramatically increased in the wake of a continuing increase in fuel prices. At the same time, sales of pickup trucks and large sport utility vehicles had dropped sharply. By April 2008, sales of Toyota's subcompact Yaris had increased 46 percent, and Honda's Fit had a record month with an increase of 54 percent. However, low fuel prices and the added room in SUVs impacted subcompact sales negatively in the late 2010s. During this period, industry executives and analysts said that the subcompact car market was returning to historical norms after an unusual period when manufacturers had expanded small car lineups in anticipation of rising demand fueled by rising fuel prices, which had since eased. In the United States, the segment experienced a 50 percent drop in sales in the first half of 2020 compared to 2019. In Canada, the subcompact share of the car market shrank to 1.6 percent for the year ending 2020, down from 2.4 percent in 2019. As a result, manufacturers stopped offering subcompact models and focused on larger cars instead, including subcompact SUVs, which offer higher profit margins and a higher average transaction price. Models that were no longer sold in the United States by the end of the decade include the Mazda 2 (discontinued after 2014), Scion xD (2016), Toyota Prius C (2017), Ford Fiesta (2019), Nissan Micra (2019), Smart Fortwo (2019), Fiat 500 (2019), Toyota Yaris (2020), Honda Fit (2020), and Chevrolet Sonic (2020).
**Q-Net**
Q-Net:
The Q-Net is an add-on armor kit developed by QinetiQ to counter the threat of rocket-propelled grenades (RPGs).
Design:
The Q-Net is an armor system that stands off from the vehicle's hull and "catches" the RPG before it hits the outside of the vehicle itself, using metal nodes connecting the net to disrupt the fusing of the warhead. The Q-Net is similar to other add-on systems like slat armor cages, but with additional benefits of being cheaper and 50-60% lighter than slat, allowing it to be integrated on light vehicles that cannot support slat such as Humvees. Because it is lighter, it reduces power train stress and fuel consumption, while giving equal or greater protection than bar armor. It is fully modular with a hook-and-loop installation method, allowing the system to be configured to different platforms. The net gives 360-degree coverage and can even provide overhead protection.
History:
The Q-Net was first sent to combat in Afghanistan in 2010. When soldiers saw the armor system, they were skeptical and concerned about its effectiveness. On September 19, a vehicle patrol equipped with Q-Nets was ambushed. From less than 100 meters away, insurgents fired a volley of RPG rounds, followed by machine gun fire and then another RPG. The soldiers moved forward of the ambush and repelled the attack. Following the firefight, they found that the vehicles had taken three direct hits from RPGs without being penetrated; the rounds detonated at the nets, which stopped them from penetrating the armor. One soldier commented, “All of the soldiers whose vehicles were hit by RPGs are alive today and still in the fight.” Following initial combat performance, 7,500 kits were ordered and produced at a rate of 1,000 per month. In February 2011, the U.S. Army ordered 829 nets for use on the Navistar MaxxPro. In May 2012, QinetiQ received a contract for 420 more Q-Net kits. They have been used on Humvees, RG-31s, M-ATVs, and other armored vehicles. On 6 December 2013, QinetiQ North America was awarded an $18.3 million contract to provide Q-Nets to protect hundreds of M-ATVs.
History:
Q-Net II By September 2012, 11,000 Q-Net kits had been delivered. At the same time, QinetiQ unveiled an improved version, the Q-Net II. Upgrades included the probability of defeating a warhead being further increased and even more weight reductions. It gives improvements to survivability and platform performance. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Regular space**
Regular space:
In topology and related fields of mathematics, a topological space X is called a regular space if every closed subset C of X and a point p not contained in C admit non-overlapping open neighborhoods. Thus p and C can be separated by neighborhoods. This condition is known as Axiom T3. The term "T3 space" usually means "a regular Hausdorff space". These conditions are examples of separation axioms.
Definitions:
A topological space X is a regular space if, given any closed set F and any point x that does not belong to F, there exists a neighbourhood U of x and a neighbourhood V of F that are disjoint. Concisely put, it must be possible to separate x and F with disjoint neighborhoods.
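Stated symbolically, this is only a compact restatement of the definition just given, using standard set-theoretic notation:

```latex
% Regularity: every closed set F and every point x outside F can be separated by open sets.
\forall\, F \subseteq X \ \text{closed},\ \forall\, x \in X \setminus F,\ 
\exists\, U, V \ \text{open in } X:\quad
x \in U,\quad F \subseteq V,\quad U \cap V = \varnothing .
```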
Definitions:
A T3 space or regular Hausdorff space is a topological space that is both regular and a Hausdorff space. (A Hausdorff space or T2 space is a topological space in which any two distinct points are separated by neighbourhoods.) It turns out that a space is T3 if and only if it is both regular and T0. (A T0 or Kolmogorov space is a topological space in which any two distinct points are topologically distinguishable, i.e., for every pair of distinct points, at least one of them has an open neighborhood not containing the other.) Indeed, if a space is Hausdorff then it is T0, and each T0 regular space is Hausdorff: given two distinct points, at least one of them misses the closure of the other one, so (by regularity) there exist disjoint neighborhoods separating one point from (the closure of) the other.
Definitions:
Although the definitions presented here for "regular" and "T3" are not uncommon, there is significant variation in the literature: some authors switch the definitions of "regular" and "T3" as they are used here, or use both terms interchangeably. This article uses the term "regular" freely, but will usually say "regular Hausdorff", which is unambiguous, instead of the less precise "T3". For more on this issue, see History of the separation axioms.
Definitions:
A locally regular space is a topological space where every point has an open neighbourhood that is regular. Every regular space is locally regular, but the converse is not true. A classical example of a locally regular space that is not regular is the bug-eyed line.
Relationships to other separation axioms:
A regular space is necessarily also preregular, i.e., any two topologically distinguishable points can be separated by neighbourhoods.
Since a Hausdorff space is the same as a preregular T0 space, a regular space which is also T0 must be Hausdorff (and thus T3).
In fact, a regular Hausdorff space satisfies the slightly stronger condition T2½.
(However, such a space need not be completely Hausdorff.) Thus, the definition of T3 may cite T0, T1, or T2½ instead of T2 (Hausdorffness); all are equivalent in the context of regular spaces.
Speaking more theoretically, the conditions of regularity and T3-ness are related by Kolmogorov quotients.
A space is regular if and only if its Kolmogorov quotient is T3; and, as mentioned, a space is T3 if and only if it's both regular and T0.
Thus a regular space encountered in practice can usually be assumed to be T3, by replacing the space with its Kolmogorov quotient.
There are many results for topological spaces that hold for both regular and Hausdorff spaces.
Most of the time, these results hold for all preregular spaces; they were listed for regular and Hausdorff spaces separately because the idea of preregular spaces came later.
On the other hand, those results that are truly about regularity generally don't also apply to nonregular Hausdorff spaces.
There are many situations where another condition of topological spaces (such as normality, pseudonormality, paracompactness, or local compactness) will imply regularity if some weaker separation axiom, such as preregularity, is satisfied.
Such conditions often come in two versions: a regular version and a Hausdorff version.
Although Hausdorff spaces aren't generally regular, a Hausdorff space that is also (say) locally compact will be regular, because any Hausdorff space is preregular.
Thus from a certain point of view, regularity is not really the issue here, and we could impose a weaker condition instead to get the same result.
However, definitions are usually still phrased in terms of regularity, since this condition is more well known than any weaker one.
Most topological spaces studied in mathematical analysis are regular; in fact, they are usually completely regular, which is a stronger condition.
Regular spaces should also be contrasted with normal spaces.
Examples and nonexamples:
A zero-dimensional space with respect to the small inductive dimension has a base consisting of clopen sets.
Every such space is regular.
As described above, any completely regular space is regular, and any T0 space that is not Hausdorff (and hence not preregular) cannot be regular.
Most examples of regular and nonregular spaces studied in mathematics may be found in those two articles.
On the other hand, spaces that are regular but not completely regular, or preregular but not regular, are usually constructed only to provide counterexamples to conjectures, showing the boundaries of possible theorems.
Of course, one can easily find regular spaces that are not T0, and thus not Hausdorff, such as an indiscrete space, but these examples provide more insight on the T0 axiom than on regularity. An example of a regular space that is not completely regular is the Tychonoff corkscrew.
Most interesting spaces in mathematics that are regular also satisfy some stronger condition.
Thus, regular spaces are usually studied to find properties and theorems, such as the ones below, that are actually applied to completely regular spaces, typically in analysis.
There exist Hausdorff spaces that are not regular. An example is the set R with the topology generated by sets of the form U − C, where U is an open set in the usual sense and C is any countable subset of U.
Elementary properties:
Suppose that X is a regular space.
Then, given any point x and neighbourhood G of x, there is a closed neighbourhood E of x that is a subset of G.
In fancier terms, the closed neighbourhoods of x form a local base at x.
In fact, this property characterises regular spaces; if the closed neighbourhoods of each point in a topological space form a local base at that point, then the space must be regular.
Taking the interiors of these closed neighbourhoods, we see that the regular open sets form a base for the open sets of the regular space X.
This property is actually weaker than regularity; a topological space whose regular open sets form a base is semiregular. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gcn2**
Gcn2:
GCN2 (general control nonderepressible 2) is a serine/threonine-protein kinase that senses amino acid deficiency through binding to uncharged transfer RNA (tRNA). It plays a key role in modulating amino acid metabolism as a response to nutrient deprivation.
Introduction:
GCN2 is the only known eukaryotic initiation factor 2α kinase (eIF2α) in Saccharomyces cerevisiae. It inactivates eIF2α by phosphorylation at Serine 51 under conditions of amino acid deprivation, resulting in repression of general protein synthesis whilst allowing selected mRNA such as GCN4 to be translated due to regions upstream of the coding sequence. Elevated levels of GCN4 stimulate the expression of amino acid biosynthetic genes, which code for enzymes required to synthesize all 20 major amino acids.
Structure:
Protein kinase GCN2 is a multidomain protein and its C-terminus contains a region homologous to histidyl-tRNA synthetase (HisRS) next to the kinase catalytic moiety. This HisRS-like region forms a dimer, and dimerization is required for GCN2 function. Its crucial contribution to GCN2 function is the promotion of tRNA binding and the stimulation of the kinase domain via physical interaction. Binding of uncharged tRNA to this synthetase-like domain induces a conformational change in which the GCN2 domains rotate 180° normal to the dimerization surface and thereby transpose from their antiparallel to a parallel orientation. Subsequently, GCN2 is activated. GCN2 activation results from a conformation that facilitates ATP binding, leading to autophosphorylation of an activation loop and thereby to maximal GCN2 kinase activity.
Function:
Regulation of translation GCN2 inhibits general translation by phosphorylation of eIF-2α at serine 51 within 15 min of amino acid deprivation, which then increases its affinity for the guanine nucleotide exchange factor eIF2B, sequestering eIF2B and leading to reduced formation of the ternary complex (TC), consisting of eIF2, GTP and the initiator Met-tRNA, that is required for translation initiation.
eIF2 containing a phosphorylated alpha subunit shows an increased affinity for its only GEF, eIF2B, but eIF2B is only able to exchange GDP with GTP from unphosphorylated eIF2. So the recycling of eIF2, needed for TC formation, is inhibited by phosphorylation of eIF-2α, which in the end leads to a reduction of global translation rates.
Function:
An opposing effect of the reduced availability of TC is the induction of GCN4 expression by translational regulation. Four short upstream open reading frames (uORFs) exist in the leader of the GCN4 mRNA. 40S ribosomal subunits scanning the mRNA from the 5' end have TC bound and translate the first uORF. Under non-starving conditions there is enough ternary complex that the subunits rebind it before they reach uORF 4. Translation is initiated again, uORF 2, 3 or 4 is translated, and the 40S subunits subsequently dissociate from the GCN4 mRNA.
Function:
Under starving conditions there is less TC present. Some of the 40S Subunits are not able to rebind TC before they reach uORF 4 but eventually rebind TC before reaching GCN4 coding sequence. Therefore, the reduction in TC formation resulting from GCN2 activation by amino acid starvation leads to the induction of GCN4 translation.
Function:
GCN4 is the primary regulator of the response to amino acid starvation, termed general amino acid control (GAAC). It acts as a transcription factor and activates several genes required for amino acid synthesis. Recently GCN2 has also been implicated in directing eating behavior in mammals by phosphorylating eIF-2α in the anterior piriform cortex (APC) of the brain. The molecular mechanisms governing this function are not yet known, but a basic leucine zipper transcription factor called ATF4 is a possible candidate. ATF4 is related to GCN4.
Function:
Cell cycle control GCN2 also regulates the cell cycle by delaying entry into S phase upon ultraviolet (UV) radiation and exposure to methyl methanesulfonate (MMS). Thereby the cell avoids passing the G1 checkpoint and starting DNA replication when the DNA is damaged. It has been hypothesized that UV induces nitric oxide synthase activation and NO production, which leads to the activation of GCN2, and that the cell cycle regulation by GCN2 is independent of eIF2α phosphorylation. Although the causal relationship between GCN2 and cell cycle delay is still under debate, it was suggested that the formation of the pre-replication complex is deferred by GCN2 upon UV irradiation.
Function:
Lipid metabolism The absence of essential amino acids causes a downregulation of key components of lipid synthesis, such as fatty acid synthase. Following leucine deprivation in mammals, GCN2 decreases the expression of lipogenic genes via SREBP-1c. SREBP-1c acts on genes regulating fatty-acid and triglyceride synthesis, and it is reduced by leucine deprivation in the liver in a GCN2-dependent manner.
Regulation:
Gcn2 is held in its inactive state via several auto-inhibitory molecular interactions until exposed to an activating signal. Binding of uncharged tRNA to the synthetase-like domain results in allosteric rearrangements. This leads to Gcn2 autophosphorylation at specific sites in the activation loop of the kinase domain. This phosphorylation then allows Gcn2 to efficiently phosphorylate eIF2α. In yeast cells, GCN2 is kept inactive via phosphorylation at serine 577, which is thought to depend on the activity of TORC1. Inactivation of TORC1 by rapamycin affects GCN2, at least partly through dephosphorylation of serine 577. This leads to activation of GCN2 even in amino-acid-replete cells, probably by increasing the affinity of GCN2 for uncharged tRNA, so that even basal levels permit tRNA binding. However, this phosphorylation site in Gcn2 is not conserved in fission yeast or in mammalian cells. Another stimulatory input to GCN2 is exerted by a complex of GCN1/GCN20. GCN1/GCN20 shows structural similarity to eEF3, a factor important in the binding of tRNA to ribosomes. The GCN1/GCN20 complex physically interacts with GCN2 by binding to its N-terminus. It is thought that GCN1/GCN20 facilitates the transfer of tRNA from the ribosomal A site to the HisRS-like domain of GCN2. An additional mechanism of regulation of this protein is through the conserved protein IMPACT, which acts as an inhibitor of GCN2 in yeast, nematodes, and mammals.
Homologues:
There are also GCN2 homologues in Neurospora crassa, C. elegans, Drosophila melanogaster and mice. Thus, GCN2 may be the most widespread and founding member of the eIF-2α kinase subfamily. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Multi-layer insulation**
Multi-layer insulation:
Multi-layer insulation (MLI) is thermal insulation composed of multiple layers of thin sheets and is often used on spacecraft and in cryogenics. Also referred to as superinsulation, MLI is one of the main items of spacecraft thermal design, primarily intended to reduce heat loss by thermal radiation. In its basic form, it does not appreciably insulate against other thermal losses such as heat conduction or convection. It is therefore commonly used on satellites and in other vacuum applications where conduction and convection are much less significant and radiation dominates. MLI gives many satellites and other space probes the appearance of being covered with gold foil, which is the effect of the amber-coloured Kapton layer deposited over the silvery aluminized Mylar.
Multi-layer insulation:
For non-spacecraft applications, MLI works only as part of a vacuum insulation system. For use in cryogenics, wrapped MLI can be installed inside the annulus of vacuum jacketed pipes. MLI may also be combined with advanced vacuum insulation for use in high temperature applications.
Function and design:
The principle behind MLI is radiation balance. To see why it works, start with a concrete example: imagine a square meter of a surface in outer space, held at a fixed temperature of 300 K, with an emissivity of 1, facing away from the sun or other heat sources. From the Stefan–Boltzmann law, this surface will radiate 460 W. Now imagine placing a thin (but opaque) layer 1 cm away from the plate, also with an emissivity of 1. This new layer will cool until it is radiating 230 W from each side, at which point everything is in balance. The new layer receives 460 W from the original plate. 230 W is radiated back to the original plate, and 230 W to space. The original surface still radiates 460 W, but gets 230 W back from the new layer, for a net loss of 230 W. So overall, the radiation losses from the surface have been reduced by half by adding the additional layer.
Function and design:
More layers can be added to reduce the loss further. The blanket can be further improved by making the outside surfaces highly reflective to thermal radiation, which reduces both absorption and emission. The performance of a layer stack can be quantified in terms of its overall heat transfer coefficient U, which defines the radiative heat flow rate Q between two parallel surfaces with a temperature difference ΔT and area A as Q=UAΔT.
Function and design:
Theoretically, the heat transfer coefficient between two layers with emissivities $\epsilon_1$ and $\epsilon_2$, at absolute temperatures $T_1$ and $T_2$ under vacuum, is

$$U = \sigma\,(T_1^2 + T_2^2)(T_1 + T_2)\,\frac{1}{1/\epsilon_1 + 1/\epsilon_2 - 1},$$

where $\sigma \approx 5.7 \times 10^{-8}\,\mathrm{W\,m^{-2}\,K^{-4}}$ is the Stefan–Boltzmann constant. If the temperature difference is not too large ($\Delta T \ll (T_1 + T_2)/2$), then a stack of $N$ layers, all with the same emissivity $\epsilon$ on both sides, will have an overall heat transfer coefficient

$$U = \frac{4\sigma T^3}{(N-1)\,(2/\epsilon - 1)},$$

where $T = (T_1 + T_2)/2$ is the average temperature of the layers. Clearly, increasing the number of layers and decreasing the emissivity both lower the heat transfer coefficient, which is equivalent to a higher insulation value. In space, where the apparent outside temperature could be 3 K (cosmic background radiation), the exact U value is different.
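As a quick numerical illustration of these formulas, the short Python sketch below evaluates the radiated power of a bare 300 K surface (the 460 W figure quoted above) and the approximate heat transfer coefficient of an N-layer stack; the emissivity and layer count used here are illustrative values only, not data for any particular blanket.

```python
# Minimal sketch: radiation loss of a bare surface vs. an N-layer MLI stack.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def bare_surface_power(T, emissivity=1.0, area=1.0):
    """Power radiated by a single surface (Stefan-Boltzmann law)."""
    return emissivity * SIGMA * area * T**4

def mli_heat_transfer_coefficient(T1, T2, n_layers, emissivity):
    """Approximate U for a stack of identical layers (small temperature-difference limit)."""
    T_avg = (T1 + T2) / 2
    return 4 * SIGMA * T_avg**3 / ((n_layers - 1) * (2 / emissivity - 1))

print(bare_surface_power(300))             # ~460 W for 1 m^2 at 300 K
U = mli_heat_transfer_coefficient(300, 100, n_layers=40, emissivity=0.05)
print(U, U * 1.0 * (300 - 100))            # U, and the heat flow Q = U*A*dT in W
```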
Function and design:
The layers of MLI can be arbitrarily close to each other, as long as they are not in thermal contact. The separation space only needs to be minute, which is the function of the extremely thin scrim or polyester 'bridal veil' as shown in the photo. To reduce weight and blanket thickness, the internal layers are made very thin, but they must be opaque to thermal radiation. Since they don't need much structural strength, these internal layers are usually made of very thin plastic, about 6 μm (1/4 mil) thick, such as Mylar or Kapton, coated on one or both sides with a thin layer of metal, typically silver or aluminium. For compactness, the layers are spaced as close to each other as possible, though without touching, since there should be little or no thermal conduction between the layers. A typical insulation blanket has 40 or more layers. The layers may be embossed or crinkled, so they only touch at a few points, or held apart by a thin cloth mesh, or scrim, which can be seen in the picture above. The outer layers must be stronger, and are often thicker and stronger plastic, reinforced with a stronger scrim material such as fiberglass.
Function and design:
In satellite applications, the MLI will be full of air at launch time. As the rocket ascends, this air must be able to escape without damaging the blanket. This may require holes or perforations in the layers, even though this reduces their effectiveness. In cryogenics, MLI is the most effective kind of insulation. Therefore, it is commonly used in liquefied gas tanks (e.g. LNG, LN2, LH2, LO2), cryostats, cryogenic pipelines and superconducting devices. Additionally it is valued for its compact size and weight. A blanket composed of 40 layers of MLI has a thickness of about 20 mm and a weight of approximately 1.2 kg/m². Methods tend to vary between manufacturers, with some MLI blankets being constructed primarily using sewing technology. The layers are cut, stacked on top of each other, and sewn together at the edges.
Function and design:
Other more recent methods include the use of Computer-aided design and Computer-aided manufacturing technology to weld a precise outline of the final blanket shape using Ultrasonic welding onto a "pack" (the final set of layers before the external "skin" is added by hand.) Seams and gaps in the insulation are responsible for most of the heat leakage through MLI blankets. A new method is being developed to use polyetheretherketone (PEEK) tag pins (similar to plastic hooks used to attach price tags to garments) to fix the film layers in place instead of sewing to improve the thermal performance.
Additional properties:
Spacecraft also may use MLI as a first line of defence against dust impacts. This normally means spacing it a cm or so away from the surface it is insulating. Also, one or more of the layers may be replaced by a mechanically strong material, such as beta cloth.
In most applications the insulating layers must be grounded, so they cannot build up a charge and arc, causing radio interference. Since the normal construction results in electrical as well as thermal insulation, these applications may include aluminium spacers as opposed to cloth scrim at the points where the blankets are sewn together.
Using similar materials, Single-layer Insulation and Dual-layer insulation (SLI and DLI respectively) are also commonplace on spacecraft. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Format-preserving encryption**
Format-preserving encryption:
In cryptography, format-preserving encryption (FPE), refers to encrypting in such a way that the output (the ciphertext) is in the same format as the input (the plaintext). The meaning of "format" varies. Typically only finite sets of characters are used; numeric, alphabetic or alphanumeric. For example: Encrypting a 16-digit credit card number so that the ciphertext is another 16-digit number.
Format-preserving encryption:
Encrypting an English word so that the ciphertext is another English word.
Encrypting an n-bit number so that the ciphertext is another n-bit number (this is the definition of an n-bit block cipher).For such finite domains, and for the purposes of the discussion below, the cipher is equivalent to a permutation of N integers {0, ... , N−1} where N is the size of the domain.
Motivation:
Restricted field lengths or formats One motivation for using FPE comes from the problems associated with integrating encryption into existing applications, with well-defined data models. A typical example would be a credit card number, such as 1234567812345670 (16 bytes long, digits only).
Motivation:
Adding encryption to such applications might be challenging if data models are to be changed, as it usually involves changing field length limits or data types. For example, output from a typical block cipher would turn a credit card number into a hexadecimal value (e.g. 0x96a45cbcf9c2a9425cde9e274948cb67, 34 bytes, hexadecimal digits) or a Base64 value (e.g. lqRcvPnCqUJc3p4nSUjLZw==, 24 bytes, alphanumeric and special characters), which will break any existing applications expecting the credit card number to be a 16-digit number.
Motivation:
Apart from simple formatting problems, using AES-128-CBC, this credit card number might get encrypted to the hexadecimal value 0xde015724b081ea7003de4593d792fd8b695b39e095c98f3a220ff43522a2df02. In addition to the problems caused by creating invalid characters and increasing the size of the data, data encrypted using the CBC mode of an encryption algorithm also changes its value when it is decrypted and encrypted again. This happens because the random seed value that is used to initialize the encryption algorithm and is included as part of the encrypted value is different for each encryption operation. Because of this, it is impossible to use data that has been encrypted with the CBC mode as a unique key to identify a row in a database.
Motivation:
FPE attempts to simplify the transition process by preserving the formatting and length of the original data, allowing a drop-in replacement of plaintext values with their ciphertexts in legacy applications.
Comparison to truly random permutations:
Although a truly random permutation is the ideal FPE cipher, for large domains it is infeasible to pre-generate and remember a truly random permutation. So the problem of FPE is to generate a pseudorandom permutation from a secret key, in such a way that the computation time for a single value is small (ideally constant, but most importantly smaller than O(N)).
Comparison to block ciphers:
An n-bit block cipher is technically an FPE on the set {0, ..., 2^n − 1}. If an FPE is needed on one of these standard-sized sets (for example, n = 64 for DES and n = 128 for AES), a block cipher of the right size can be used.
However, in typical usage, a block cipher is used in a mode of operation that allows it to encrypt arbitrarily long messages, and with an initialization vector as discussed above. In this mode, a block cipher is not an FPE.
Definition of security:
In cryptographic literature (see most of the references below), the measure of a "good" FPE is whether an attacker can distinguish the FPE from a truly random permutation. Various types of attackers are postulated, depending on whether they have access to oracles or known ciphertext/plaintext pairs.
Algorithms:
In most of the approaches listed here, a well-understood block cipher (such as AES) is used as a primitive to take the place of an ideal random function. This has the advantage that incorporation of a secret key into the algorithm is easy. Where AES is mentioned in the following discussion, any other good block cipher would work as well.
Algorithms:
The FPE constructions of Black and Rogaway Implementing FPE with security provably related to that of the underlying block cipher was first undertaken in a paper by cryptographers John Black and Phillip Rogaway, which described three ways to do this. They proved that each of these techniques is as secure as the block cipher that is used to construct it. This means that if the AES algorithm is used to create an FPE algorithm, then the resulting FPE algorithm is as secure as AES because an adversary capable of defeating the FPE algorithm can also defeat the AES algorithm. Therefore, if AES is secure, then the FPE algorithms constructed from it are also secure. In all of the following, E denotes the AES encryption operation that is used to construct an FPE algorithm and F denotes the FPE encryption operation.
Algorithms:
FPE from a prefix cipher One simple way to create an FPE algorithm on {0, ..., N−1} is to assign a pseudorandom weight to each integer, then sort by weight. The weights are defined by applying an existing block cipher to each integer. Black and Rogaway call this technique a "prefix cipher" and showed it to be provably as good as the block cipher used.
Algorithms:
Thus, to create an FPE on the domain {0,1,2,3}, given a key K, apply AES(K) to each integer, giving, for example, weight(0) = 0x56c644080098fc5570f2b329323dbf62, weight(1) = 0x08ee98c0d05e3dad3eb3d6236f23e7b7, weight(2) = 0x47d2e1bf72264fa01fb274465e56ba20, weight(3) = 0x077de40941c93774857961a8a772650d. Sorting [0,1,2,3] by weight gives [3,1,2,0], so the cipher is F(0) = 3, F(1) = 1, F(2) = 2, F(3) = 0. This method is only useful for small values of N. For larger values, the size of the lookup table and the required number of encryptions to initialize the table gets too big to be practical.
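A toy version of this prefix-cipher construction is sketched below. For self-containment it substitutes HMAC-SHA256 for AES as the keyed pseudorandom function that assigns the weights (an assumption made purely so the example runs with the Python standard library); any secure block cipher keyed with K would serve the same role.

```python
import hmac, hashlib

def prefix_cipher(key: bytes, domain_size: int):
    """Build a tiny FPE permutation on {0, ..., domain_size-1} by sorting the
    integers by a keyed pseudorandom weight (HMAC-SHA256 stands in for AES)."""
    def weight(i: int) -> bytes:
        return hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
    order = sorted(range(domain_size), key=weight)          # integers sorted by their weight
    encrypt = {i: order[i] for i in range(domain_size)}     # F(i) = i-th integer in weight order
    decrypt = {order[i]: i for i in range(domain_size)}
    return encrypt, decrypt

enc, dec = prefix_cipher(b"example key", 4)
print([enc[i] for i in range(4)])                 # the permutation F(0..3)
assert all(dec[enc[i]] == i for i in range(4))    # decryption inverts encryption
```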
Algorithms:
FPE from cycle walking If there is a set M of allowed values within the domain of a pseudorandom permutation P (for example P can be a block cipher like AES), an FPE algorithm can be created from the block cipher by repeatedly applying the block cipher until the result is one of the allowed values (within M).
Algorithms:
CycleWalkingFPE(x) { if P(x) is an element of M then return P(x) else return CycleWalkingFPE(P(x)) } The recursion is guaranteed to terminate. (Because P is one-to-one and the domain is finite, repeated application of P forms a cycle, so starting with a point in M the cycle will eventually terminate in M.) This has the advantage that the elements of M do not have to be mapped to a consecutive sequence {0,...,N-1} of integers. It has the disadvantage, when M is much smaller than P's domain, that too many iterations might be required for each operation. If P is a block cipher of a fixed size, such as AES, this is a severe restriction on the sizes of M for which this method is efficient.
Algorithms:
For example, an application may want to encrypt 100-bit values with AES in a way that creates another 100-bit value. With this technique, AES-128-ECB encryption can be applied until it reaches a value which has all of its 28 highest bits set to 0, which will take an average of 2^28 iterations to happen.
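The cycle-walking recursion described above is easy to express directly. The following sketch uses a toy permutation P (a fixed random shuffle of a small domain) purely as a stand-in for a real block cipher such as AES, so that the example is self-contained and runnable.

```python
import random

# Toy stand-in for a block cipher: a fixed permutation of {0, ..., 255}.
rng = random.Random(0)
P_TABLE = list(range(256))
rng.shuffle(P_TABLE)

def P(x: int) -> int:
    return P_TABLE[x]

def cycle_walking_fpe(x: int, allowed) -> int:
    """Apply P repeatedly until the output lands in the allowed set M."""
    y = P(x)
    while y not in allowed:
        y = P(y)
    return y

M = set(range(100))                   # the restricted domain M (here: values 0..99)
ciphertext = cycle_walking_fpe(42, M)
print(ciphertext, ciphertext in M)    # the result always lies in M
```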
Algorithms:
FPE from a Feistel network It is also possible to make a FPE algorithm using a Feistel network. A Feistel network needs a source of pseudo-random values for the sub-keys for each round, and the output of the AES algorithm can be used as these pseudo-random values. When this is done, the resulting Feistel construction is good if enough rounds are used.One way to implement an FPE algorithm using AES and a Feistel network is to use as many bits of AES output as are needed to equal the length of the left or right halves of the Feistel network. If a 24-bit value is needed as a sub-key, for example, it is possible to use the lowest 24 bits of the output of AES for this value.
Algorithms:
This may not result in the output of the Feistel network preserving the format of the input, but it is possible to iterate the Feistel network in the same way that the cycle-walking technique does to ensure that the format can be preserved. Because it is possible to adjust the size of the inputs to a Feistel network, it is possible to make it very likely that this iteration ends very quickly on average. In the case of credit card numbers, for example, there are 10^15 possible 16-digit credit card numbers (accounting for the redundant check digit), and because 10^15 ≈ 2^49.8, using a 50-bit wide Feistel network along with cycle walking will create an FPE algorithm that encrypts fairly quickly on average.
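A highly simplified sketch of this idea follows: a small balanced Feistel network over 50-bit values, with HMAC-SHA256 standing in for AES as the round function (again an assumption made only so the example runs without third-party libraries), combined with cycle walking so that outputs stay inside the target domain.

```python
import hmac, hashlib

HALF_BITS = 25                      # a 50-bit value split into two 25-bit halves
MASK = (1 << HALF_BITS) - 1

def round_function(key: bytes, round_no: int, half: int) -> int:
    """Keyed round function; HMAC-SHA256 stands in for AES as the PRF."""
    msg = round_no.to_bytes(1, "big") + half.to_bytes(4, "big")
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") & MASK

def feistel_encrypt(key: bytes, x: int, rounds: int = 10) -> int:
    """Balanced Feistel permutation on 50-bit integers."""
    left, right = x >> HALF_BITS, x & MASK
    for r in range(rounds):
        left, right = right, left ^ round_function(key, r, right)
    return (left << HALF_BITS) | right

def encrypt_in_domain(key: bytes, x: int, domain_size: int = 10**15) -> int:
    """Cycle-walk the 50-bit Feistel permutation until the output is < domain_size."""
    y = feistel_encrypt(key, x)
    while y >= domain_size:
        y = feistel_encrypt(key, y)
    return y

print(encrypt_in_domain(b"demo key", 123456789012345))
```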
Algorithms:
The Thorp shuffle A Thorp shuffle is like an idealized card-shuffle, or equivalently a maximally-unbalanced Feistel cipher where one side is a single bit. It is easier to prove security for unbalanced Feistel ciphers than for balanced ones.
VIL mode For domain sizes that are a power of two, and an existing block cipher with a smaller block size, a new cipher may be created using VIL mode as described by Bellare, Rogaway.
Hasty Pudding Cipher The Hasty Pudding Cipher uses custom constructions (not depending on existing block ciphers as primitives) to encrypt arbitrary finite small domains.
The FFSEM/FFX mode of AES The FFSEM mode of AES (specification) that has been accepted for consideration by NIST uses the Feistel network construction of Black and Rogaway described above, with AES for the round function, with one slight modification: a single key is used and is tweaked slightly for each round.
As of February 2010, FFSEM has been superseded by the FFX mode written by Mihir Bellare, Phillip Rogaway, and Terence Spies. (specification, NIST Block Cipher Modes Development, 2010).
Algorithms:
FPE for JPEG 2000 encryption In JPEG 2000 standard, the marker codes (in the range 0xFF90 through 0xFFFF) should not appear in the plaintext and ciphertext. The simple modular-0xFF90 technique cannot be applied to solve the JPEG 2000 encryption problem. For example, the ciphertext words 0x23FF and 0x9832 are valid, but their combination 0x23FF9832 becomes invalid since it introduces the marker code 0xFF98. Similarly, the simple cycle-walking technique cannot be applied to solve the JPEG2000 encryption problem since two valid ciphertext blocks may give invalid ciphertext when they get combined. For example, if the first ciphertext block ends with bytes "...30FF" and the second ciphertext block starts with bytes "9832...", then the marker code "0xFF98" would appear in the ciphertext.
Algorithms:
Two mechanisms for format-preserving encryption of JPEG 2000 were given in the paper "Efficient and Secure Encryption Schemes for JPEG2000" by Hongjun Wu and Di Ma. To perform format-preserving encryption of JPEG 2000, the technique is to exclude the byte "0xFF" in the encryption and decryption. Then a JPEG 2000 encryption mechanism performs modulo-n addition with stream cipher; another JPEG 2000 encryption mechanism performs the cycle-walking technique with block cipher.
Algorithms:
Other FPE constructions Several FPE constructs are based on adding the output of a standard cipher, modulo n, to the data to be encrypted, with various methods of unbiasing the result. The modulo-n addition shared by many of the constructs is the immediately obvious solution to the FPE problem (thus its use in a number of cases), with the main differences being the unbiasing mechanisms used.
Algorithms:
Section 8 of the FIPS 74, Federal Information Processing Standards Publication 1981 Guidelines for Implementing and Using the NBS Data Encryption Standard, describes a way to use the DES encryption algorithm in a manner that preserves the format of the data via modulo-n addition followed by an unbiasing operation. This standard was withdrawn on May 19, 2005, so the technique should be considered obsolete in terms of being a formal standard.
Algorithms:
Another early mechanism for format-preserving encryption was Peter Gutmann's "Encrypting data with a restricted range of values" which again performs modulo-n addition on any cipher with some adjustments to make the result uniform, with the resulting encryption being as strong as the underlying encryption algorithm on which it is based.
The paper "Using Datatype-Preserving Encryption to Enhance Data Warehouse Security" by Michael Brightwell and Harry Smith describes a way to use the DES encryption algorithm in a way that preserves the format of the plaintext. This technique doesn't appear to apply an unbiasing step as do the other modulo-n techniques referenced here.
The paper "Format-Preserving Encryption" by Mihir Bellare and Thomas Ristenpart describes using "nearly balanced" Feistel networks to create secure FPE algorithms.
The paper "Format Controlling Encryption Using Datatype Preserving Encryption" by Ulf Mattsson describes other ways to create FPE algorithms.
An example of FPE algorithm is FNR (Flexible Naor and Reingold).
Acceptance of FPE algorithms by standards authorities:
NIST Special Publication 800-38G, "Recommendation for Block Cipher Modes of Operation: Methods for Format-Preserving Encryption" specifies two methods: FF1 and FF3. Details on the proposals submitted for each can be found at the NIST Block Cipher Modes Development site, including patent and test vector information. Sample values are available for both FF1 and FF3.
Acceptance of FPE algorithms by standards authorities:
FF1 is FFX[Radix], "Format-preserving Feistel-based Encryption Mode", which is also in standards processes under ANSI X9 as X9.119 and X9.124. It was submitted to NIST by Mihir Bellare of the University of California, San Diego, Phillip Rogaway of the University of California, Davis, and Terence Spies of Voltage Security Inc. Test vectors are supplied and parts of it are patented. The draft SP 800-38G Rev 1 requires the minimum domain size of the data being encrypted to be 1 million (previously 100). FF3 is BPS, named after its authors. It was submitted to NIST by Eric Brier, Thomas Peyrin and Jacques Stern of Ingenico, France. The authors declared to NIST that their algorithm is not patented, although the CyberRes Voltage product claims to own patents for the BPS mode as well. On 12 April 2017, NIST concluded that FF3 is "no longer suitable as a general-purpose FPE method" because researchers found a vulnerability.
Acceptance of FPE algorithms by standards authorities:
FF3-1 (DRAFT SP 800-38G Rev 1) replaces FF3 and requires the minimum domain size of the data being encrypted to be 1 million (previously 100). Another mode was included in the draft NIST guidance but was removed before final publication.
Acceptance of FPE algorithms by standards authorities:
FF2 is the VAES3 scheme for FFX: an addendum to "The FFX Mode of Operation for Format-Preserving Encryption", a parameter collection for enciphering strings of arbitrary radix with a subkey operation to lengthen the life of the enciphering key. It was submitted to NIST by Joachim Vance of VeriFone Systems Inc. Test vectors are not supplied separately from FF1 and parts of it are patented. The authors have submitted a modified algorithm as DFF, which is under active consideration by NIST. Korea has also developed FPE standards, FEA-1 and FEA-2.
Acceptance of FPE algorithms by standards authorities:
Implementations Open source implementations of FF1 and FF3 are publicly available in C, Go, Java, Node.js, Python, C#/.NET, and Rust.
**Christmas store**
Christmas store:
A Christmas store is a retail store specializing in Christmas supplies, especially decorations. Many Christmas stores operate only seasonally in the month or two before the Christmas holidays, perhaps set up in otherwise vacant shopping mall space. However, in some places, Christmas stores operate year-round, becoming somewhat of a tourist attraction in their own right.
Examples of items that feature prominently in Christmas stores include nutcrackers, angel figures, and holiday-related stuffed animals. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CyberTracker**
CyberTracker:
CyberTracker is software from a South African non-profit company, CyberTracker Conservation, that develops handheld data capture solutions. The software was first developed as a way to allow illiterate animal trackers to communicate their environmental observations. A prototype was used in 2002 to record details of animals killed in an outbreak of ebola. It has since evolved to become a general-purpose data capture and visualization system. However, it retains the ability to be used by illiterate and low-literate users.
CyberTracker:
CyberTracker's primary user base is wildlife biologists, conservationists and disaster relief workers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HATEOAS**
HATEOAS:
Hypermedia as the engine of application state (HATEOAS) is a constraint of the REST application architecture that distinguishes it from other network application architectures.
HATEOAS:
With HATEOAS, a client interacts with a network application whose application servers provide information dynamically through hypermedia. A REST client needs little to no prior knowledge about how to interact with an application or server beyond a generic understanding of hypermedia. By contrast, clients and servers in Common Object Request Broker Architecture (CORBA) interact through a fixed interface shared through documentation or an interface description language (IDL).
HATEOAS:
The restrictions imposed by HATEOAS decouple client and server. This enables server functionality to evolve independently.
Example:
A user-agent makes an HTTP request to a REST API through an entry point URL. All subsequent requests the user-agent may make are discovered inside the response to each request. The media types used for these representations, and the link relations they may contain, are part of the API. The client transitions through application states by selecting from the links within a representation or by manipulating the representation in other ways afforded by its media type. In this way, RESTful interaction is driven by hypermedia, rather than by out-of-band information. For example, a GET request may fetch an account resource, requesting details in a JSON representation. The response carries the account's current state together with its possible follow-up links: POST a deposit, withdrawal, transfer, or close request (to close the account). A sketch of such an exchange is given below.
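The original request/response listings are not reproduced in this text, so the following is only an illustrative reconstruction in Python: the URL paths, field names, and link relations ("deposits", "withdrawals", "transfers", "close") are hypothetical, chosen to match the prose above rather than any real API.

```python
import json

# Hypothetical hypermedia response for GET /accounts/12345 (illustrative only).
response_body = json.loads("""
{
  "account": {
    "account_number": "12345",
    "balance": {"currency": "usd", "value": 100.0},
    "links": {
      "deposits":    "/accounts/12345/deposits",
      "withdrawals": "/accounts/12345/withdrawals",
      "transfers":   "/accounts/12345/transfers",
      "close":       "/accounts/12345/close"
    }
  }
}
""")

# A HATEOAS client does not hard-code URLs; it follows the links offered by the server.
links = response_body["account"]["links"]
if "withdrawals" in links:
    print("Withdrawal allowed, POST to:", links["withdrawals"])
else:
    print("Account state does not allow withdrawals right now.")
```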
Example:
Later, after the account has been overdrawn, there is a different set of available links, reflecting the account's new state.
Now only one link is available: to deposit more money (by POSTing to deposits). In its current state, the other links are not available. Hence the term Engine of Application State. What actions are possible varies as the state of the resource varies.
A client does not need to understand every media type and communication mechanism offered by the server. The ability to understand new media types may be acquired at run-time through "code-on-demand" provided to the client by the server.
Origins:
The HATEOAS constraint is an essential part of the "uniform interface" feature of REST, as defined in Roy Fielding's doctoral dissertation. Fielding has further described the concept on his blog.The purpose of some of the strictness of this and other REST constraints, Fielding explains, is "software design on the scale of decades: every detail is intended to promote software longevity and independent evolution. Many of the constraints are directly opposed to short-term efficiency. Unfortunately, people are fairly good at short-term design, and usually awful at long-term design".
Implementations:
HAL, JSON-LD, Siren, Collection+JSON, JSON:API, and Hydra.
**Trapped ion quantum computer**
Trapped ion quantum computer:
A trapped ion quantum computer is one proposed approach to a large-scale quantum computer. Ions, or charged atomic particles, can be confined and suspended in free space using electromagnetic fields. Qubits are stored in stable electronic states of each ion, and quantum information can be transferred through the collective quantized motion of the ions in a shared trap (interacting through the Coulomb force). Lasers are applied to induce coupling between the qubit states (for single qubit operations) or coupling between the internal qubit states and the external motional states (for entanglement between qubits).The fundamental operations of a quantum computer have been demonstrated experimentally with the currently highest accuracy in trapped ion systems. Promising schemes in development to scale the system to arbitrarily large numbers of qubits include transporting ions to spatially distinct locations in an array of ion traps, building large entangled states via photonically connected networks of remotely entangled ion chains, and combinations of these two ideas. This makes the trapped ion quantum computer system one of the most promising architectures for a scalable, universal quantum computer. As of April 2018, the largest number of particles to be controllably entangled is 20 trapped ions.
History:
The first implementation scheme for a controlled-NOT quantum gate was proposed by Ignacio Cirac and Peter Zoller in 1995, specifically for the trapped ion system. The same year, a key step in the controlled-NOT gate was experimentally realized at NIST Ion Storage Group, and research in quantum computing began to take off worldwide.
In 2021, researchers from the University of Innsbruck presented a quantum computing demonstrator that fits inside two 19-inch server racks, the world's first quality standards-meeting compact trapped ion quantum computer.
Paul trap:
The electrodynamic quadrupole ion trap currently used in trapped ion quantum computing research was invented in the 1950s by Wolfgang Paul (who received the Nobel Prize for his work in 1989). Charged particles cannot be trapped in 3D by electrostatic forces alone because of Earnshaw's theorem. Instead, an electric field oscillating at radio frequency (RF) is applied, forming a potential with the shape of a saddle spinning at the RF frequency. If the RF field has the right parameters (oscillation frequency and field strength), the charged particle becomes effectively trapped at the saddle point by a restoring force, with the motion described by a set of Mathieu equations. This saddle point is the point of minimized energy magnitude, |E(x)|, for the ions in the potential field. The Paul trap is often described as a harmonic potential well that traps ions in two dimensions (assume x̂ and ŷ without loss of generality) and does not trap ions in the ẑ direction. When multiple ions are at the saddle point and the system is at equilibrium, the ions are only free to move along ẑ. Therefore, the ions will repel each other and create a vertical configuration along ẑ, the simplest case being a linear strand of only a few ions. Coulomb interactions of increasing complexity will create a more intricate ion configuration if many ions are initialized in the same trap. Furthermore, the additional vibrations of the added ions greatly complicate the quantum system, which makes initialization and computation more difficult. Once trapped, the ions should be cooled such that k_B T ≪ ℏω_z (see Lamb–Dicke regime). This can be achieved by a combination of Doppler cooling and resolved sideband cooling. At this very low temperature, vibrational energy in the ion trap is quantized into phonons by the energy eigenstates of the ion strand, which are called the center-of-mass vibrational modes. A single phonon's energy is given by the relation ℏω_z. These quantum states occur when the trapped ions vibrate together and are completely isolated from the external environment. If the ions are not properly isolated, noise can result from ions interacting with external electromagnetic fields, which creates random movement and destroys the quantized energy states.
Requirements for quantum computation:
The full requirements for a functional quantum computer are not entirely known, but there are many generally accepted requirements. David DiVincenzo outlined several of these criteria for quantum computing.
Requirements for quantum computation:
Qubits Any two-level quantum system can form a qubit, and there are two predominant ways to form a qubit using the electronic states of an ion: Two ground state hyperfine levels (these are called "hyperfine qubits") A ground state level and an excited level (these are called the "optical qubits")Hyperfine qubits are extremely long-lived (decay time of the order of thousands to millions of years) and phase/frequency stable (traditionally used for atomic frequency standards). Optical qubits are also relatively long-lived (with a decay time of the order of a second), compared to the logic gate operation time (which is of the order of microseconds). The use of each type of qubit poses its own distinct challenges in the laboratory.
Requirements for quantum computation:
Initialization Ionic qubit states can be prepared in a specific qubit state using a process called optical pumping. In this process, a laser couples the ion to some excited states which eventually decay to one state which is not coupled to the laser. Once the ion reaches that state, it has no excited levels to couple to in the presence of that laser and, therefore, remains in that state. If the ion decays to one of the other states, the laser will continue to excite the ion until it decays to the state that does not interact with the laser. This initialization process is standard in many physics experiments and can be performed with extremely high fidelity (>99.9%).The system's initial state for quantum computation can therefore be described by the ions in their hyperfine and motional ground states, resulting in an initial center of mass phonon state of |0⟩ (zero phonons).
Requirements for quantum computation:
Measurement Measuring the state of the qubit stored in an ion is quite simple. Typically, a laser is applied to the ion that couples to only one of the qubit states. When the ion collapses into this state during the measurement process, the laser will excite it, resulting in a photon being released when the ion decays from the excited state. After decay, the ion is continually excited by the laser and repeatedly emits photons. These photons can be collected by a photomultiplier tube (PMT) or a charge-coupled device (CCD) camera. If the ion collapses into the other qubit state, then it does not interact with the laser and no photon is emitted. By counting the number of collected photons, the state of the ion may be determined with very high accuracy (>99.99%).
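The measurement just described amounts to distinguishing a "bright" outcome (many scattered photons) from a "dark" one (essentially none). A minimal sketch of such threshold discrimination is below; the mean count rates and the threshold are hypothetical numbers chosen for illustration, and the Poisson model ignores backgrounds and off-resonant pumping that real experiments must account for.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_bright, mean_dark = 20.0, 0.2     # hypothetical mean photon counts per detection window
threshold = 5                          # counts above this are classified as the bright state

def classify(counts):
    return np.where(counts > threshold, "bright", "dark")

bright_counts = rng.poisson(mean_bright, 100_000)   # photon counting is roughly Poissonian
dark_counts = rng.poisson(mean_dark, 100_000)
fidelity = 0.5 * ((classify(bright_counts) == "bright").mean()
                  + (classify(dark_counts) == "dark").mean())
print(f"average discrimination fidelity: {fidelity:.5f}")
```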
Requirements for quantum computation:
Arbitrary single qubit rotation One of the requirements of universal quantum computing is to coherently change the state of a single qubit. For example, this can transform a qubit starting out in |0⟩ into any arbitrary superposition of |0⟩ and |1⟩ defined by the user. In a trapped ion system, this is often done using magnetic dipole transitions or stimulated Raman transitions for hyperfine qubits, and electric quadrupole transitions for optical qubits. The term "rotation" alludes to the Bloch sphere representation of a qubit pure state. Gate fidelity can be greater than 99%.
Requirements for quantum computation:
The rotation operators Rx(θ) and Ry(θ) can be applied to individual ions by manipulating the frequency of an external electromagnetic field and exposing the ions to the field for specific amounts of time. These controls create a Hamiltonian of the form H = (ħΩ/2)(S+ e^(iϕ) + S− e^(−iϕ)). Here, S+ and S− are the raising and lowering operators of spin (see Ladder operator). These rotations are the universal building blocks for single-qubit gates in quantum computing. To obtain the Hamiltonian for the ion-laser interaction, apply the Jaynes–Cummings model. Once the Hamiltonian is found, the formula for the unitary operation performed on the qubit can be derived using the principles of quantum time evolution. Although this model utilizes the rotating wave approximation, it proves to be effective for the purposes of trapped-ion quantum computing.
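As a concrete picture of what these rotation operators do on the Bloch sphere, the sketch below builds the standard single-qubit rotation matrices Rx(θ) = exp(−iθX/2) and Ry(θ) = exp(−iθY/2) and applies them to |0⟩. This is generic qubit algebra, not a simulation of any particular ion species or laser configuration.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)      # Pauli matrices
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def Rx(theta):
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

def Ry(theta):
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * Y

ket0 = np.array([1, 0], dtype=complex)
# A pi/2 rotation about y takes |0> to an equal superposition of |0> and |1>.
print(Ry(np.pi / 2) @ ket0)
# Composing Rx and Ry rotations reaches any single-qubit state (up to a global phase).
print(Rx(0.3) @ Ry(1.1) @ ket0)
```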
Requirements for quantum computation:
Two qubit entangling gates Besides the controlled-NOT gate proposed by Cirac and Zoller in 1995, many equivalent, but more robust, schemes have been proposed and implemented experimentally since. Recent theoretical work by J. J. García-Ripoll, Cirac, and Zoller has shown that there are no fundamental limitations to the speed of entangling gates, but gates in this impulsive regime (faster than 1 microsecond) have not yet been demonstrated experimentally. The fidelity of these implementations has been greater than 99%.
Requirements for quantum computation:
Scalable trap designs Quantum computers must be capable of initializing, storing, and manipulating many qubits at once in order to solve difficult computational problems. However, as previously discussed, only a limited number of qubits can be stored in each trap while still maintaining their computational abilities. It is therefore necessary to design interconnected ion traps that are capable of transferring information from one trap to another. Ions can be separated from the same interaction region to individual storage regions and brought back together without losing the quantum information stored in their internal states. Ions can also be made to turn corners at a "T" junction, allowing a two-dimensional trap array design. Semiconductor fabrication techniques have also been employed to manufacture the new generation of traps, making the "ion trap on a chip" a reality. An example is the quantum charge-coupled device (QCCD) designed by D. Kielpinski, Christopher Monroe and David J. Wineland. QCCDs resemble mazes of electrodes with designated areas for storing and manipulating qubits.
Requirements for quantum computation:
The variable electric potential created by the electrodes can both trap ions in specific regions and move them through the transport channels, which negates the necessity of containing all ions in a single trap. Ions in the QCCD's memory region are isolated from any operations and therefore the information contained in their states is kept for later use. Gates, including those that entangle two ion states, are applied to qubits in the interaction region by the method already described in this article.
Requirements for quantum computation:
Decoherence in scalable traps When an ion is being transported between regions in an interconnected trap and is subjected to a nonuniform magnetic field, decoherence can occur in the form of the equation below (see Zeeman effect). This effectively changes the relative phase of the quantum state. The up and down arrows correspond to a general superposition qubit state, in this case the ground and excited states of the ion.
Requirements for quantum computation:
|↑⟩ + |↓⟩ → e^(iα)|↑⟩ + |↓⟩ Additional relative phases could arise from physical movements of the trap or the presence of unintended electric fields. If the user could determine the parameter α, accounting for this decoherence would be relatively simple, as known quantum information processes exist for correcting a relative phase. However, since α from the interaction with the magnetic field is path-dependent, the problem is highly complex. Considering the multiple ways that decoherence of a relative phase can be introduced in an ion trap, reimagining the ion state in a new basis that minimizes decoherence could be a way to eliminate the issue.
Requirements for quantum computation:
One way to combat decoherence is to represent the quantum state in a new basis called a decoherence-free subspace, or DFS, with basis states |↑↓⟩ and |↓↑⟩. The DFS is the subspace of two-ion states such that if both ions acquire the same relative phase, the total quantum state in the DFS is unaffected.
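A small numerical check of this idea is sketched below: applying the same phase e^(iα) to the |↑⟩ level of both ions multiplies the DFS basis states |↑↓⟩ and |↓↑⟩ by the same overall factor, so a logical qubit encoded in them only picks up a harmless global phase. The value of α is arbitrary.

```python
import numpy as np

alpha = 0.7                                   # arbitrary collective phase
P = np.diag([np.exp(1j * alpha), 1.0])        # single-ion dephasing: |up> -> e^{i alpha}|up>
P2 = np.kron(P, P)                            # the same phase acting on both ions

up_down = np.kron([1, 0], [0, 1])             # DFS basis state |up, down>
down_up = np.kron([0, 1], [1, 0])             # DFS basis state |down, up>
logical = (up_down + down_up) / np.sqrt(2)    # a logical superposition stored in the DFS

out = P2 @ logical
# The output equals e^{i alpha} times the input: only a global phase, no decoherence.
print(np.allclose(out, np.exp(1j * alpha) * logical))   # True
```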
Challenges:
Trapped ion quantum computers theoretically meet all of DiVincenzo's criteria for quantum computing, but implementation of the system can be quite difficult. The main challenges facing trapped ion quantum computing are the initialization of the ion's motional states and the relatively brief lifetimes of the phonon states. Decoherence also proves challenging to eliminate; it occurs when the qubits interact undesirably with the external environment.
Challenges:
CNOT gate implementation The controlled NOT gate is a crucial component for quantum computing, as any quantum gate can be created by a combination of CNOT gates and single-qubit rotations. It is therefore important that a trapped-ion quantum computer can perform this operation by meeting the following three requirements.
First, the trapped ion quantum computer must be able to perform arbitrary rotations on qubits, which are already discussed in the "arbitrary single-qubit rotation" section.
Challenges:
The next component of a CNOT gate is the controlled phase-flip gate, or controlled-Z gate (see quantum logic gate). In a trapped ion quantum computer, the state of the center-of-mass phonon functions as the control qubit, and the internal atomic spin state of the ion is the working qubit. The phase of the working qubit will therefore be flipped if the phonon qubit is in the state |1⟩. Lastly, a SWAP gate must be implemented, acting on both the ion state and the phonon state. Two alternate schemes to represent the CNOT gates are presented in Michael Nielsen and Isaac Chuang's Quantum Computation and Quantum Information and Cirac and Zoller's Quantum Computation with Cold Trapped Ions.
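To make the decomposition concrete, the sketch below verifies the textbook identity CNOT = (I ⊗ H) · CZ · (I ⊗ H), i.e. a controlled phase-flip sandwiched between Hadamard rotations on the target qubit. This is generic gate algebra rather than a simulation of the ion–phonon implementation described above.

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard on the target qubit
CZ = np.diag([1, 1, 1, -1])                           # controlled phase-flip (controlled-Z)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

constructed = np.kron(I, H) @ CZ @ np.kron(I, H)      # H on target, then CZ, then H again
print(np.allclose(constructed, CNOT))                 # True
```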
Additional resources:
Wineland, D. J.; Monroe, C.; Itano, W. M.; Leibfried, D.; King, B. E.; Meekhof, D. M. (1998). "Experimental Issues in Coherent Quantum-State Manipulation of Trapped Atomic Ions". Journal of Research of the National Institute of Standards and Technology. 103 (3): 259–328. arXiv:quant-ph/9710025. doi:10.6028/jres.103.019. PMC 4898965. PMID 28009379.
Leibfried, D; Blatt, R; Monroe, C; Wineland, D (2003). "Quantum dynamics of single trapped ions". Reviews of Modern Physics. 75 (1): 281–324. Bibcode:2003RvMP...75..281L. doi:10.1103/revmodphys.75.281.
Steane, A. (1997). "The ion trap quantum information processor". Appl. Phys. B. 64 (6): 623–643. arXiv:quant-ph/9608011. Bibcode:1996ApPhB..64..623S. doi:10.1007/s003400050225. S2CID 2061791.
Monroe, C.; et al. (1995). "Demonstration of a Fundamental Quantum Logic Gate". Phys. Rev. Lett. 75 (25): 4714–4717. Bibcode:1995PhRvL..75.4714M. doi:10.1103/physrevlett.75.4714. PMID 10059979.
Trapped ion computer on arxiv.org
Friedenauer, A.; Schmitz, H.; Glueckert, J. T.; Porras, D.; Schaetz, T. (2008). "Simulating a quantum magnet with trapped ions". Nature Physics. 4 (10): 757–761. Bibcode:2008NatPh...4..757F. doi:10.1038/nphys1032.
Moehring, D. L.; Maunz, P.; Olmschenk, S.; Younge, K. C.; Matsukevich, D. N.; Duan, L.-M.; Monroe, C. (2007). "Entanglement of single-atom quantum bits at a distance". Nature. 449 (7158): 68–71. Bibcode:2007Natur.449...68M. doi:10.1038/nature06118. hdl:2027.42/62780. PMID 17805290. S2CID 19624141.
Stick, D.; Hensinger, W. K.; Olmschenk, S.; Madsen, M. J.; Schwab, K.; Monroe, C. (2006). "Ion trap in a semiconductor chip". Nature Physics. 2 (1): 36–39. arXiv:quant-ph/0601052. Bibcode:2006NatPh...2...36S. doi:10.1038/nphys171. S2CID 5419269.
Leibfried, D.; Knill, E.; Seidelin, S.; Britton, J.; Blakestad, R. B.; Chiaverini, J.; Hume, D. B.; Itano, W. M.; Jost, J. D.; Langer, C.; Ozeri, R.; Reichle, R.; Wineland, D. J. (2005). "Creation of a six-atom 'Schrödinger cat' state". Nature. 438 (7068): 639–642. Bibcode:2005Natur.438..639L. doi:10.1038/nature04251. PMID 16319885. S2CID 4370887.
Häffner, H.; Hänsel, W.; Roos, C. F.; Benhelm, J.; Chek-al-kar, D.; Chwalla, M.; Körber, T.; Rapol, U. D.; Riebe, M.; Schmidt, P. O.; Becher, C.; Gühne, O.; Dür, W.; Blatt, R. (2005). "Scalable multiparticle entanglement of trapped ions". Nature. 438 (7068): 643–646. arXiv:quant-ph/0603217. Bibcode:2005Natur.438..643H. doi:10.1038/nature04279. PMID 16319886. S2CID 4411480.
Chiaverini, J.; Britton, J.; Leibfried, D.; Knill, E.; Barrett, M. D.; Blakestad, R. B.; Itano, W.M.; Jost, J.D.; Langer, C.; Ozeri, R.; Schaetz, T.; Wineland, D.J. (2005). "Implementation of the semiclassical quantum Fourier transform in a scalable system". Science. 308 (5724): 997–1000. Bibcode:2005Sci...308..997C. doi:10.1126/science.1110335. PMID 15890877. S2CID 15550997.
Blinov, B. B.; Moehring, D. L.; Duan, L.- M.; Monroe, C. (2004). "Observation of entanglement between a single trapped atom and a single photon" (PDF). Nature. 428 (6979): 153–157. Bibcode:2004Natur.428..153B. doi:10.1038/nature02377. hdl:2027.42/62924. PMID 15014494. S2CID 4314514.
Chiaverini, J.; Leibried, D.; Schaetz, T.; Barrett, M. D.; Blakestad, R. B.; Britton, J.; Itano, W.M.; Jost, J.D.; Knill, E.; Langer, C.; Ozeri, R.; Wineland, D.J. (2004). "Realization of quantum error correction". Nature. 432 (7017): 602–605. Bibcode:2004Natur.432..602C. doi:10.1038/nature03074. PMID 15577904. S2CID 167898.
Riebe, M.; Häffner, H.; Roos, C. F.; Hänsel, W.; Benhelm, J.; Lancaster, G. P. T.; Körber, T. W.; Becher, C.; Schmidt-Kaler, F.; James, D. F. V.; Blatt, R. (2004). "Deterministic quantum teleportation with atoms". Nature. 429 (6993): 734–737. Bibcode:2004Natur.429..734R. doi:10.1038/nature02570. PMID 15201903. S2CID 4397716.
Barrett, M. D.; Chiaverini, J.; Schaetz, T.; Britton, J.; Itano, W.M.; Jost, J.D.; Knill, E.; Langer, C.; Leibfried, D.; Ozeri, R.; Wineland, D.J. (2004). "Deterministic quantum teleportation of atomic qubits". Nature. 429 (6993): 737–739. Bibcode:2004Natur.429..737B. doi:10.1038/nature02608. PMID 15201904. S2CID 1608775.
Roos, C. F.; Riebe, M.; Häffner, H.; Hänsel, W.; Benhelm, J.; Lancaster, G. P. T.; Becher, C.; Schmidt-Kaler, F.; Blatt, R. (2004). "Control and measurement of three-qubit entangled state". Science. 304 (5676): 1478–1480. Bibcode:2004Sci...304.1478R. doi:10.1126/science.1097522. PMID 15178795. S2CID 12020439. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Applied Nanoscience**
Applied Nanoscience:
Applied Nanoscience is a science journal that publishes original articles on nanotechnology. It covers areas fundamental to building sustainable progress, including water science, advanced materials, energy, electronics, environmental science and medicine.
Abstracting and indexing:
The journal is indexed in the following databases: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Apple IIc Plus**
Apple IIc Plus:
The Apple IIc Plus is the sixth and final model in the Apple II series of personal computers, produced by Apple Computer. The "Plus" in the name was a reference to the additional features it offered over the original portable Apple IIc, such as greater storage capacity (a built-in 3.5-inch floppy drive replacing the classic 5.25-inch drive), increased processing speed, and a general standardization of the system components. In a notable change of direction, the Apple IIc Plus, for the most part, did not introduce new technology or any further evolutionary contributions to the Apple II series, instead merely integrating existing peripherals into the original Apple IIc design. The development of the 8-bit machine was criticized by quarters more interested in the significantly more advanced 16-bit Apple IIGS.
History:
The Apple IIc Plus was introduced on September 16, 1988, at the AppleFest conference in San Francisco, with less fanfare than the Apple IIc had received four years earlier. Described as little more than a "turbocharged version of the IIc with a high-capacity 3½ disk drive" by one magazine review of the time, some users were disappointed. Many IIc users already had add-ons giving them something rather close to what the new model offered.
History:
Before the official release of the machine, it had been rumored to be a slotless version of the Apple IIGS squeezed into the portable case of the Apple IIc. Apple employee John Arkley, one of the engineers working on the Apple IIc Plus project, had devised rudimentary plans for an enhanced Apple IIGS motherboard that would fit in the IIc case, and petitioned management for the go-ahead with such a project; the idea was rejected.
History:
When the project started, the original plan was simply to replace the 5.25-inch floppy drive with a 3.5-inch drive, without modifying the IIc design. Additional features were consequently added as the project progressed. It is believed the Apple IIc Plus design, and its existence at all, was influenced by a third-party Apple IIc-compatible known as the Laser 128. It is no coincidence that the Apple IIc Plus is very similar in design to the Laser 128EX/2 model, released shortly before the Apple IIc Plus. As it was fully backwards-compatible, the Apple IIc Plus replaced the Apple IIc.
History:
Codenames for the machine while under development included: Raisin, Pizza, and Adam Ant.
Overview:
Three major new features The Apple IIc Plus comprised three new features compared to the IIc. The first and most noticeable was the replacement of the 5.25-inch floppy drive with the new 3.5-inch drive. Besides offering nearly six times the storage capacity (800 KB), the new drive had a much faster seek time (three times faster) and button-activated motorized ejection. To accommodate the increased data flow of the new drive, specialized chip circuitry called the MIG, an acronym for "Magic Interface Glue", was designed and added to the motherboard along with a dedicated 2 KB static RAM buffer (the MIG chip is the only exception to there being no new technological developments present in the machine).
Overview:
The second most important feature was a faster 65C02 processor. Running at 4 MHz, it made the computer faster than any other Apple II, including the IIGS. Apple licensed the Zip Chip Apple II accelerator from third-party developer Zip Technologies and added it to the IIc Plus; instead of the all-in-one tall chip design, Apple engineers broke the design out into its core components and integrated them into the motherboard (a 4 MHz CPU, 8 KB of combined static RAM cache, and logic). Apple stated its performance as three times faster (3.3 times according to benchmarks) than any other 8-bit Apple II. The CPU acceleration was a last-minute feature addition, which in turn made the specialized circuitry for the use of a 3.5-inch drive unnecessary at full CPU speed, as the machine was now fast enough to handle the data flow; that circuitry was left in place and put into operation nonetheless to support 1 MHz mode. By default the machine ran at 4 MHz, but holding down the 'ESC' key during a cold or warm boot disabled the acceleration so it could run at standard 1 MHz operation, which was necessary for older software that depended on timing, especially games.
Overview:
The third major change was the internalization of the power supply into the Apple IIc Plus's case, utilizing a new miniature design from Sony and replacing the previous "brick on a leash" external supply design.
Overview:
A new look and minor changes Cosmetic changes were apparent as well. The keyboard layout and style now mirrored that of the Apple IIGS and Macintosh, including an enlarged "Return" key and updated modifier keys (Open and Solid Apple being replaced by "Command" and "Option"). Above the keyboard, the rarely used "40/80" switch was replaced by a sliding volume control (gone was the left side volume-control dial, and as a cost-cutting measure, the audio headphone jack disappeared with it). The case housing and keyboard had been changed to the light-grey Apple platinum color, creating a seamless blend between keyboard and case, making them appear almost as one. The machine, a half pound lighter than the original IIc, weighed in at 7 pounds (3.2 kg).
Overview:
In the rear of the machine the most obvious change was a three-prong AC plug connector and power switch where the voltage converter had once been, a Kensington security slot at the top left corner, and the standardization of the serial port connectors (changed from DIN-5 to mini DIN-8, but still providing an identical signal). All the same built-in Apple II peripheral equivalents and port functionality of the IIc remained, with the one exception being the floppy port. Whereas the previous IIc could only support one external 5.25-inch floppy drive and (in later models) "intelligent" storage devices such as the UniDisk 3.5, the Apple IIc Plus offered backwards port compatibility and more. Support for the external Apple 3.5 Drive used by the Apple IIGS and Macintosh was now present, and up to two external 5.25-inch floppy drives could be added as well.
Overview:
Internally, the new motherboard sported a pin connector for an internal modem; however no products ever utilized it. The same memory expansion socket introduced on late-model IIc's was present, although it was not compatible with memory cards designed for the previous system. The ROM firmware (now labeled revision "5", following in the sequence from the original IIc) remained the same size, as did RAM, meaning the machine continued to ship with only 128 KB of memory.
Overview:
Negative aspects The most criticized aspect of the Apple IIc Plus, even among collectors today, is the lack of an internal 5.25-inch drive. The reason for this is that the vast majority of software for the 8-bit Apple II series shipped on 5.25-inch disks (often hardcoded for the medium), making the machine of limited use unless an external 5.25-inch drive is added. Along the same lines of breaking with standards, most 8-bit Apple II software (particularly games) had been designed to run at 'normal' 1.024 MHz operation, but the IIc Plus ran natively at 4 MHz. While user-adjustable, the IIc Plus had no automated method to lock down or "remember" the CPU speed (e.g. a physical turbo button or software-based Control Panel), meaning it would always default back to 'fast' 4 MHz operation if power-cycled, reset or simply warm-booted. Acceleration could only be temporarily disabled with a special key press, making it inconvenient for users to repeatedly lower the clock speed manually (for example, when booting games on different floppy diskettes). Another unpopular change was the removal of the voltage converter. While the built-in power supply made the IIc Plus a more integrated one-piece unit for desktop use, the downside was the loss of the ability to operate the machine from a battery source. This, in turn, eroded the portability of the IIc series (a main selling point, even despite its lack of a built-in screen), rooting it further to a desktop-only environment. The removal of the audio-out jack used for headphones or a speaker was another feature users missed.
Reception:
inCider in November 1988 found that the Apple IIc Plus was faster than a IIGS, Laser 128EX/2, or Apple IIe with a Zip Chip. It favorably cited the improved keyboard, internal power supply, and Macintosh/IIGS-compatible serial port, but said that the computer "isn't everything it could be", criticizing the lack of change from the IIc's memory capacity ("128K doesn't quite cut it") and difficulty in adding more. The magazine concluded, "It's disappointing that a company as technologically sophisticated as Apple couldn't have gone a step further ... The IIc Plus is a nice system, but it's too little, too late". A separate editorial in the issue began "What if you announced a new computer and nobody cared? Apple Computer could be facing such a dilemma". Even with an accompanying price increase for the IIGS, the magazine stated that "unless you really want a small, easily transportable computer, there's little reason to buy the IIc Plus over the IIGS ... the improvements over the IIc simply aren't that significant". Regarding the 3.5-inch drive the magazine stated, "there are thousands of good, affordable programs that won't be released in 3 1/2-inch format ... bargain hunters will want access to classic educational and entertainment programs that are available only on 5 1/4-inch disks". While praising Apple for continuing to support Apple II owners the editorial criticized the company for announcing "a new product that uses old technology" at a price higher than that of the Laser 128 EX/2 or an inexpensive PC clone, comparing the IIc Plus to the unsuccessful IBM PCjr. It concluded that "the IIc Plus simply clouds the Apple II picture".
Technical specifications:
Microprocessor: 65C02 running at either 1 MHz or 4 MHz (user-selectable); 8 KB SRAM cache (16 KB physical installed; 8 KB for TAG/DATA); 8-bit data bus
Memory: 128 KB RAM built-in, expandable from 128 KB to 1.125 MB RAM; 32 KB ROM built-in
Video: 40 and 80 columns text, with 24 lines¹; Low-Resolution: 40×48 (16 colors); High-Resolution: 280×192 (6 colors)*; Double-Low-Resolution: 80×48 (16 colors); Double-High-Resolution: 560×192 (16 colors)
 *Effectively 140×192 in color, due to pixel placement restrictions
 ¹Text can be mixed with graphic modes, replacing either the bottom 8 or 32 lines, depending on video mode
Audio: Built-in speaker; 1-bit toggling; user-adjustable volume (manual sliding switch)
Built-in storage: Internal 3.5-inch floppy drive, 800 KB, double-sided, motorized ejection/auto-injection
Internal connectors: IIc Plus Memory Expansion Card connector (34-pin); internal modem (7-pin)
Specialized chip controllers: IWM (Integrated Woz Machine) for floppy drives; MIG (Magic Interface Glue) with 2 KB SRAM, for "dumb" 3.5-inch drive support; dual 6551 ACIA chips for serial I/O
External connectors: Joystick/Mouse (DE-9); Printer, serial-1 (mini DIN-8); Modem, serial-2 (mini DIN-8); Video Expansion Port (D-15); Floppy drive SmartPort (D-19); NTSC composite video output (RCA connector)
Notes of interest:
Revisions The Apple IIc Plus had a relatively short product lifespan, produced for only two years (it was officially discontinued in November 1990). Though for many years it was believed that there had been no changes or revisions made to the machine, in 2008 hobbyists discovered the existence of two versions of the motherboard. While the revised board contained several minor differences (mainly different ASIC manufacturers and markings), there were no updates or bug fixes seen in the firmware (which was still identified as ROM version '5').
Notes of interest:
No international versions There were also no international versions of the Apple IIc Plus produced, so the keyboard, unlike the original IIc's, was only manufactured with American English printed keycaps, and the 'Keyboard' switch was used solely for changing between QWERTY and Dvorak layouts (rather than localized keyboard layouts). Consequently, the Apple IIc Plus was only sold in the U.S.; not even Canadian Apple dealers were authorized to distribute or sell it.
End of the line:
Although it wasn't intended to be, fate would have it that the Apple IIc Plus would be the last new Apple II model. But even back in 1988, before this was known, the Apple IIc Plus could be seen as signaling the beginning of the end for the Apple II series, or at the very least, a hint at the direction Apple Computer was taking with the line. In releasing the IIc Plus, Apple management essentially made a statement that the Apple IIGS was no longer considered a top priority, and if anything, gave it a back seat when it was the only possible future for the evolution and continued success of the Apple II line. That, in turn, signified that the Apple II line as a whole, despite its promise and potential, was no longer considered important at Apple headquarters. Consequently, from this point forward, the Apple II was milked for financial gain as much as possible, while at the same time a cap was placed on its evolution and advancement so it wouldn't overshadow and compete with the Macintosh, the company's then-new focus and chosen future. Further proof of this was that, a year after the release of the oddly out-of-place and retro-designed Apple IIc Plus, only a minor maintenance release of the Apple IIGS was introduced (mainly boasting more RAM and improved firmware) rather than any of the desperately needed hardware changes required to keep the machine viable. Prototypes of more advanced Apple IIs (namely in the form of a new IIGS) were delayed and eventually cancelled as the company decided what to do with its Apple II product line. The end result was to allow it to slowly fade into obscurity due to a lack of development or support. The Apple II line carried on until October 1993, when the IIe was discontinued. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Front limber**
Front limber:
A front limber is a gymnastics skill where the gymnast performs a handstand, carries the momentum forward, landing in a bridge, and then pulls their upper body upwards, ending in a standing position. It is related to a front walkover, but it is a variant as both legs are carried forward at once whereas each leg is taken over separately in a walkover.
Front limber:
This is how it is done correctly: Kick up into handstand, and hold it momentarily.
Push shoulders 'out' and arch the back, with feet together and toes pointed.
Bend knees until feet land on the floor.
Immediately push the hips forward and push off the hands to stand up, ending with arms overhead.
Tips:
Look at the hands throughout.
Land with the feet about a foot apart.
Stretch the back before doing a front limber.
If doing it for the first time, do it onto a raised surface (sofa, mat), as the back does not have to arch as much.
Similar gymnastic skills:
Back limber Front walkover, Back walkover Front or back handsprings | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Alpha shape**
Alpha shape:
In computational geometry, an alpha shape, or α-shape, is a family of piecewise linear simple curves in the Euclidean plane associated with the shape of a finite set of points. They were first defined by Edelsbrunner, Kirkpatrick & Seidel (1983). The alpha-shape associated with a set of points is a generalization of the concept of the convex hull, i.e. every convex hull is an alpha-shape but not every alpha shape is a convex hull.
Characterization:
For each real number α, define the concept of a generalized disk of radius 1/α as follows: If α = 0, it is a closed half-plane; if α > 0, it is a closed disk of radius 1/α; if α < 0, it is the closure of the complement of a disk of radius −1/α. Then an edge of the alpha-shape is drawn between two members of the finite point set whenever there exists a generalized disk of radius 1/α containing none of the point set and which has the property that the two points lie on its boundary.
Characterization:
If α = 0, then the alpha-shape associated with the finite point set is its ordinary convex hull.
Alpha complex:
Alpha shapes are closely related to alpha complexes, subcomplexes of the Delaunay triangulation of the point set.
Each edge or triangle of the Delaunay triangulation may be associated with a characteristic radius, the radius of the smallest empty circle containing the edge or triangle. For each real number α, the α-complex of the given set of points is the simplicial complex formed by the set of edges and triangles whose radii are at most 1/α.
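A simplified sketch of this construction in the plane is shown below. It filters Delaunay triangles by their circumradius, which matches the characteristic radius for triangles whose smallest empty circle is the circumcircle; the full definition also handles edges and degenerate cases, which this sketch ignores.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumradius(p, q, r):
    """Circumradius of triangle pqr: R = abc / (4 * area)."""
    a = np.linalg.norm(q - r)
    b = np.linalg.norm(p - r)
    c = np.linalg.norm(p - q)
    area = 0.5 * abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))
    return np.inf if area == 0 else a * b * c / (4.0 * area)

def alpha_complex_triangles(points, alpha):
    """Delaunay triangles whose circumradius is at most 1/alpha (simplified alpha complex)."""
    tri = Delaunay(points)
    return np.array([s for s in tri.simplices
                     if circumradius(*points[s]) <= 1.0 / alpha])

rng = np.random.default_rng(1)
pts = rng.random((60, 2))                       # 60 random points in the unit square
print(len(alpha_complex_triangles(pts, 5.0)), "triangles kept at alpha = 5")
```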
Alpha complex:
The union of the edges and triangles in the α-complex forms a shape closely resembling the α-shape; however it differs in that it has polygonal edges rather than edges formed from arcs of circles. More specifically, Edelsbrunner (1995) showed that the two shapes are homotopy equivalent. (In this later work, Edelsbrunner used the name "α-shape" to refer to the union of the cells in the α-complex, and instead called the related curvilinear shape an α-body.)
Examples:
This technique can be employed to reconstruct a Fermi surface from the electronic Bloch spectral function evaluated at the Fermi level, as obtained from the Green's function in a generalised ab-initio study of the problem. The Fermi surface is then defined as the set of reciprocal space points within the first Brillouin zone, where the signal is highest. The definition has the advantage of covering also cases of various forms of disorder. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Multidimensional art**
Multidimensional art:
Multidimensional art is art that cannot be represented on a two-dimensional flat canvas. Artists create a third dimension with paper or another medium. In multidimensional art an artist can make use of virtually any items (mediums).
Materials used in multidimensional art:
Many artists make use of objects and items they find in nature as well as man-made items. Some artists use paper and others make use of rubber, plastic, or sculpture. Artists also use other man-made items such as textiles, milk cartons, or beads. Japanese-born Nobuhiro Nakanishi puts photos on see-through plastic and orders the photos in chronological order. He then mounts the photos on a wall in a line (stacking them), which gives the viewer a different perspective.
Multi-dimensional artists:
Joseph Csaky Leo Monahan Nnenna Okore | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Boggle**
Boggle:
Boggle is a word game in which players try to find as many words as they can from a grid of lettered dice, within a set time limit. It was invented by Allan Turoff and originally distributed by Parker Brothers. The game is played using a plastic grid of lettered dice, in which players look for words in sequences of adjacent letters.
Rules:
One player begins the game by shaking a covered tray of 16 cubic dice, each with a different letter printed on each of its sides. The dice settle into a 4×4 tray so that only the top letter of each cube is visible. After they have settled into the tray, a three-minute sand timer is started and all players simultaneously begin the main phase of play. Each player searches for words that fit the following criteria: Words must be at least three letters in length.
Rules:
Each letter after the first must be a horizontal, vertical, or diagonal neighbor of the one before it.
No individual letter cube may be used more than once in a word.
No capitalized or hyphenated words are allowed. Multiple forms of the same word are allowed, such as singular/plural forms and other derivations. Each player records all the words they find by writing on a private sheet of paper. After three minutes have elapsed, all players must immediately stop writing and the game enters the scoring phase.
Rules:
In this, each player reads off their list of discovered words. If two or more players wrote the same word, it is removed from all players' lists. Any player may challenge the validity of a word, in which case a previously nominated dictionary is used to verify or refute it. Once all duplicates and invalid words have been eliminated, points are awarded based on the length of each remaining word in a player's list. The winner is the player whose point total is highest, with any ties typically broken by a count of long words.
Rules:
One cube is printed with "Qu". This is because Q is nearly always followed by U in English words (see exceptions), and if there were a Q in Boggle, it would be challenging to use if a U did not, by chance, appear next to it. For the purposes of scoring, Qu counts as two letters; for example, squid would score two points (for a five-letter word) despite being formed from a chain of only four cubes. Early versions of the game had a "Q" without the accompanying "u".
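The search the rules describe (adjacent cells, each cube used at most once, "Qu" counting as two letters) maps naturally onto a depth-first search over the grid. A minimal sketch is below; the four-word dictionary and the sample board are illustrative placeholders, not an official letter distribution, and a real solver would use a full word list with a prefix trie for pruning.

```python
# Minimal Boggle solver: depth-first search over the 4x4 grid, visiting each cube at
# most once and treating the "Qu" cube as the two letters "qu".
BOARD = [["t", "a", "p", "e"],
         ["r", "qu", "i", "s"],
         ["n", "o", "d", "l"],
         ["e", "m", "a", "g"]]
WORDS = {"tarn", "quid", "quips", "game"}          # placeholder dictionary

def solve(board, words):
    found = set()
    rows, cols = len(board), len(board[0])

    def dfs(r, c, prefix, used):
        prefix += board[r][c]                       # "qu" appends two letters at once
        if len(prefix) >= 3 and prefix in words:    # words must be at least three letters
            found.add(prefix)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):                   # horizontal, vertical, diagonal neighbors
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in used:
                    dfs(nr, nc, prefix, used | {(nr, nc)})

    for r in range(rows):
        for c in range(cols):
            dfs(r, c, "", {(r, c)})
    return found

print(solve(BOARD, WORDS))      # finds all four placeholder words on this board
```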
Rules:
Merriam-Webster publishes the Official Scrabble Players Dictionary, which is also suitable for Boggle. This dictionary includes all variant forms of words up to eight letters in length. A puzzle book entitled 100 Boggle Puzzles (Improve Your Game) offering 100 game positions was published in the UK in 2003 but is no longer in print.
Different versions of Boggle have varying distributions of letters. For example, a more modern version in the UK has easier letters, such as only one K, but an older version (with a yellow box, from 1986) has two Ks and a generally more awkward letter distribution.
Rules:
Using the sixteen cubes in a standard Boggle set, the list of longest words that can be formed includes inconsequentially, quadricentennials, and sesquicentennials, all seventeen-letter words made possible by q and u appearing on the same face of one cube. Words within words are allowed, such as "mast" and "aster" within "master". Neither the cubes nor the board may be touched while the timer is running.
Game variants:
Parker Brothers has introduced several licensed variations on the game. As of 2006, only Boggle Junior and Travel Boggle (also marketed as Boggle Folio) continue to be manufactured and marketed in North America alongside the standard Boggle game, apart from a licensed keychain miniature version. Boggle Junior is a much-simplified version intended for young children. Boggle Travel is a car-friendly version of the standard 4×4 set. The compact, zippered case includes pencils and small pads of paper, as well as an electronic timer, and notably, a cover made from a soft plastic that produces much less noise when the board is shaken.
Game variants:
Big Boggle, later marketed as Boggle Master and Boggle Deluxe, featured a 5×5 tray, and disallowed three-letter words. Some editions of the Big Boggle set included an adapter that could convert the larger grid into a standard 4×4 Boggle grid. In the United Kingdom, Hasbro UK released Super Boggle in 2004 (now discontinued), which features both the 4×4 and 5×5 grid and an electronic timer that flashes to indicate the start and finish. Despite the game's popularity in North America, no version of Boggle offering a 5×5 grid was marketed outside Europe for an extended period until 2011, when Winning Moves Games USA revived the Big Boggle name for a new version. Their variant features a two-letter die with popular letter combinations such as Qu, Th and In. In 2008, Parker Brothers released a self-contained version of the game with the dice sealed inside a plastic unit and featuring an integrated timer. Although the older version has been discontinued, some retailers refer to the newer one as "Boggle Reinvention" to avoid confusion.
Game variants:
In 2012, Winning Moves Games USA released a 6×6 version of the game called Super Big Boggle. In addition to the two-letter dice with popular letter combinations, there is also a die containing three faces which are solid squares. These solid squares represent a word stop, which is simply a space that may not be used in any word. The other changes are that the time limit was increased from three minutes to four minutes, three-letter words are no longer allowed, and there is a modified scoring scheme, outlined below.
Game variants:
Other Boggle variants have included: A version of the standard 4×4 set that included a special red "Boggle challenge cube", featuring six relatively uncommon letters. Bonus points are awarded for all words making use of the red cube.
Boggle CD-ROM, a version for Windows, produced and marketed by Hasbro Interactive, including both 4×4 and 5×5 versions, several 3-D versions, and facilities allowing up to four players to compete directly over the Internet.
Body Boggle, which is more akin to Twister than it is to standard Boggle. Two players work together as a team, using their hands and feet to spell words on a large floor mat containing pre-printed Boggle letters.
Boggle Bowl, in which players roll their own dice and compete to build longer words, in order to move their token toward their goal on a (bowl-shaped) playing area. Similar to Scrabble, the play area has special spaces, but here they alter the play for the next round.
A 1998 game show pilot episode hosted by Bill Rafferty that was not picked up for a full production season.
Boggle, an interactive game show hosted by Wink Martindale. It aired on The Family Channel (now ABC Family) in 1994, replacing the interactive version of Trivial Pursuit.
Coggle, which functions similarly to Boggle but involves creating a word to fit a particular theme. Was mainly aimed at the French and Canadian market.
Boggle Flash, an electronic version of Boggle consisting of five tiles with which one to ten players make words by swapping tiles. This product is sold in the United States under the name Scrabble Flash.
Game variants:
Foggle, where the 16 dice have to be used to form valid mathematical equations. Numerous unofficial computer versions and variants of the game are available. By 1989, users of MIT's Project Athena competed in the online game mboggle. In 2013, Ruzzle, a mobile phone game based on Boggle, topped the most-downloaded iPhone apps chart. Other games similar to or influenced by Boggle include Bananagrams, Bookworm, Dropwords, Letterpress, Puzzlage, SpellTower, Word Factory, Wordquest, Word Racer, WordSpot, Word Streak with Friends, WordTwist, and Zip-It.
Club and tournament play:
While not as widely institutionally established as Scrabble, several clubs have been established for the purpose of organizing Boggle play. Official Boggle clubs exist at a number of educational institutions, including the Dartmouth Union of Bogglers at Dartmouth College, the Western Oregon University Boggle Club, the University of Michigan Boggle Club, Berkeley Boggle Club at the University of California, Berkeley, CCA Boggle Club at Canyon Crest Academy, and Grinnell College Boggle Club. Unlike Scrabble, there is no national or international governing or rule-making body for Boggle competition, and no official tournament regulations exist. When Boggle games are created for tournament play, it is usually done with special software designed to generate completely random and fair boards, using word lists often pre-selected by the officiating committee.
Reception:
Games magazine included Boggle in their "Top 100 Games of 1980", praising it as a "fast-moving word game".
Reviews:
Games #1 Jeux & Stratégie #6 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Videogrammetry**
Videogrammetry:
Videogrammetry is a measurement technology in which the three-dimensional coordinates of points on an object are determined by measurements made in two or more video images taken from different angles. Images can be obtained from two cameras which simultaneously view the object or from successive images captured by the same camera with a view of the object. Videogrammetry is typically used in manufacturing and construction. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
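As an illustration of the computation underlying this kind of measurement, the sketch below triangulates a 3-D point from its pixel coordinates in two views using the linear (DLT) method. The two camera projection matrices and the observed image points are made-up values for demonstration, not data from any real videogrammetry system.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2 are 3x4 camera projection matrices; x1, x2 are (u, v) image coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                               # dehomogenize

# Hypothetical cameras: identity intrinsics, the second camera shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0, 1.0])              # a point in front of both cameras
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]             # its projection in each view
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))                    # recovers approximately (0.3, -0.2, 4.0)
```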
**Rhinolith**
Rhinolith:
A rhinolith is a stone present in the nasal cavity. The word is derived from the roots rhino- and -lith, literally meaning "nose stone". It is an uncommon medical phenomenon, not to be confused with dried nasal mucus. A rhinolith usually forms around the nucleus of a small exogenous foreign body, blood clot or secretion by slow deposition of calcium and magnesium carbonate and phosphate salts. Over a period of time, they grow into large irregular masses that fill the nasal cavity. They may cause pressure necrosis of the nasal septum or lateral wall of the nose. Rhinoliths can cause nasal obstruction, epistaxis, headache, sinusitis and epiphora. They can be diagnosed from a history of unilateral foul-smelling blood-stained nasal discharge or by anterior rhinoscopy. On probing, the probe can be passed around all its corners. In both CT and MRI a rhinolith will appear as a radiopaque irregular material. Small rhinoliths can be removed with a foreign-body hook, whereas large rhinoliths can be removed either by crushing with Luc's forceps or by Moore's lateral rhinotomy approach.
Signs and symptoms:
Rhinoliths present as a unilateral nasal obstruction. Foul-smelling, blood-stained discharge is often present. Nosebleed and pain may occur due to the ulceration of surrounding mucosa.
Management:
They are removed under general anaesthesia. Most can be removed through anterior nares. Large ones need to be broken into pieces before removal. Some particularly hard and irregular ones may require lateral rhinotomy. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Intrusion Countermeasures Electronics**
Intrusion Countermeasures Electronics:
Intrusion Countermeasures Electronics (ICE) is a term used in cyberpunk literature to refer to security programs which protect computerized data from being accessed by hackers.
Origin of term:
The term was popularized by William Gibson in his short story "Burning Chrome", which also introduced the term cyberspace, and in his subsequent novel Neuromancer. According to the Jargon File, as well as Gibson's own acknowledgements, the term ICE was originally coined by Tom Maddox.
Description of ICE:
When viewed in a cyberspace virtual reality environment, these constructs are often represented by actual walls of ice, stone, or metal. Black ICE refers to ICE that are capable of killing the intruder if deemed necessary or appropriate; some forms of black ICE may be artificially intelligent.
Real-world usage:
Though real-life firewalls, anti-virus software and similar programs fall under this classification, the term has little real-world significance and remains primarily a science fiction concept. This can be attributed to the fact that using the term "electronics" to describe software products (such as firewalls) is something of a misnomer. On the other hand, there is a strong connection between real-world cybercrime and cyberpunk literature. "The Gibsonian concept of cyberspace [...] fed back into both computer and information systems design and theory," wrote Roger Burrows. The term ICE has occasionally been used for real-world software: BlackICE, an intrusion detection system built by a California company named Network ICE in 1999, acquired by IBM Internet Security Systems, then discontinued in 2007.
Real-world usage:
The ICE cipher, an encryption algorithm, may be inspired by Gibson's ICE, but it is explained as meaning "Information Concealment Engine".
Real-world usage:
The Java bytecode verifier in the Apache ByteCode Engineering Library (BCEL) is called JustIce (see the 'docs' folder for documentation). On April 28, 2009, the Information and Communications Enhancement Act, or ICE Act for short, was introduced to the United States Senate by Senator Tom Carper to make changes to the handling of information security by the federal government, including the establishment of the National Office for Cyberspace.
Usage in fiction:
The term ICE is widely used in cyberpunk fiction.
Anime Cyberpunk: Edgerunners Cyber City Oedo 808 Ghost in the Shell, where ICE is referred to directly by name or else as an 'Attack Barrier'.
Usage in fiction:
Cartoons Phantom 2040, though in it "ICE" stands for "Integrated Cyber Environment", referring to cyberspace, rather than Intrusion Countermeasures Electronics Card games Netrunner, based on Cyberpunk 2020 setting, where the corporate player uses ICE and the runner player uses icebreakers; while corps in Netrunner understand ICE to be an acronym for "Intrusion Countermeasures Electronics", the runner viewpoint is that the acronym should be for "Insidious Cortical Electrocution" Android: Netrunner, an adaptation of the original Netrunner for the Android setting Hacker and Hacker II - The Dark Side, where the players attempt to gain illicit access systems represented by playing cards arranged in a network while avoiding getting zapped by ICE and Black ICE.
Usage in fiction:
Literature Neuromancer, original popularizer of the term Count Zero the second novel of William Gibson's "Sprawl trilogy" Hyperion, wherein black ICE is used to defend the TechnoCore Trouble and Her Friends by Melissa Scott, wherein IC(E) refers to Intrusion Countermeasures (Electronic), solving the problem of implying that the measures are hardware-based Roleplaying games Cyberpunk 2020, upon which the Netrunner card game is based GURPS Cyberpunk Shadowrun, called IC (The setting drops the "electronics" misnomer) but is colloquially named "Ice" by hackers in the setting Shadow of the Beanstalk, a roleplaying game based on Android universe Movies Johnny Mnemonic, mentioned in the opening crawl.
Usage in fiction:
Track Down, wherein a friend of Kevin Mitnick says in a club that he is the hacker known as "IceBreaker" Television Babylon 5, in the episode "Born to the Purple" Max Headroom, in the episode "Security Systems", April 21, 1987 Video games Anarchy Online features an item called "Hacker ICE-Breaker Source", which can be further upgraded to "Intrusion Countermeasure Electronics Upgrade".
Usage in fiction:
AI: The Somnium Files Baldr Sky uses the term to describe the technology protecting the characters' "brain chips" and virtual structures.
BloodNet uses the term to describe the technology the player must overcome when hacking a computer system.
Cyberpunk 2077 uses the term to refer to defensive countermeasures that prevent netrunners and cyberware from hacking a target.
Usage in fiction:
Deus Ex, where the player's hacking program is referred to as an "ICE Breaker" Dystopia, wherein there are security programs called "ICE walls" Mr. Robot, where "ICE" in its RPG part refers to shields or armor that can be attacked by various "ICE breaker"s Midnight Protocol, where "ICE" is an umbrella term for security measures that shield nodes from being accessed Neuromancer, where ICE, BlackICE, and ICE Breaking are highly featured.
Usage in fiction:
Perfect Dark Zero, where players use ICE technology to bypass security.
Project Snowblind, features an ICE pick, to hack enemy cameras, turrets, and robots and use them against enemy forces.
Ripper has the player break into various cyberspace systems, which involves fighting the "ICE" security programs in the form of a rail shooter.
Star Wars: Knights of the Old Republic, an item called "ICE Breaker" can be obtained and used as a hacking tool during a sequence on the Leviathan, in which the player chooses one character to remain behind and attempt to rescue the other captured party members.
StarCrawlers features an ability called Black Ice, which the Hacker character may use.
System Shock, where ICE is represented in cyberspace as both autonomous security programs and ICE protection attached to data or software objects appearing as blue crystal formations.
System Shock 2, where an item that auto-hacks electronics is known as an "ICE-Pick" The Ascent, where items are protected by various levels of ICE that the player must overcome to access.
Invisible, Inc., wherein "ICE" is used intermittently with "firewalls" to reference mainframe defenses which the player-controlled AI program Incognita breaks through to take control of enemy electronics.
Web comics Schlock Mercenary, icewalls are a standard security measure. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ellipsoid**
Ellipsoid:
An ellipsoid is a surface that can be obtained from a sphere by deforming it by means of directional scalings, or more generally, of an affine transformation.
Ellipsoid:
An ellipsoid is a quadric surface; that is, a surface that may be defined as the zero set of a polynomial of degree two in three variables. Among quadric surfaces, an ellipsoid is characterized by either of the two following properties. Every planar cross section is either an ellipse, or is empty, or is reduced to a single point (this explains the name, meaning "ellipse-like"). It is bounded, which means that it may be enclosed in a sufficiently large sphere.
Ellipsoid:
An ellipsoid has three pairwise perpendicular axes of symmetry which intersect at a center of symmetry, called the center of the ellipsoid. The line segments that are delimited on the axes of symmetry by the ellipsoid are called the principal axes, or simply axes of the ellipsoid. If the three axes have different lengths, the figure is a triaxial ellipsoid (rarely scalene ellipsoid), and the axes are uniquely defined.
Ellipsoid:
If two of the axes have the same length, then the ellipsoid is an ellipsoid of revolution, also called a spheroid. In this case, the ellipsoid is invariant under a rotation around the third axis, and there are thus infinitely many ways of choosing the two perpendicular axes of the same length. If the third axis is shorter, the ellipsoid is an oblate spheroid; if it is longer, it is a prolate spheroid. If the three axes have the same length, the ellipsoid is a sphere.
Standard equation:
The general ellipsoid, also known as a triaxial ellipsoid, is a quadratic surface which is defined in Cartesian coordinates as x²/a² + y²/b² + z²/c² = 1, where a, b and c are the lengths of the semi-axes. The points (a, 0, 0), (0, b, 0) and (0, 0, c) lie on the surface. The line segments from the origin to these points are called the principal semi-axes of the ellipsoid, because a, b, c are half the length of the principal axes. They correspond to the semi-major axis and semi-minor axis of an ellipse.
Standard equation:
In a spherical coordinate system for which (x, y, z) = (r sin θ cos φ, r sin θ sin φ, r cos θ), the general ellipsoid is defined as r² sin²θ cos²φ / a² + r² sin²θ sin²φ / b² + r² cos²θ / c² = 1, where θ is the polar angle and φ is the azimuthal angle.
When a=b=c , the ellipsoid is a sphere.
When a=b≠c , the ellipsoid is a spheroid or ellipsoid of revolution. In particular, if a=b>c , it is an oblate spheroid; if a=b<c , it is a prolate spheroid.
Parameterization:
The ellipsoid may be parameterized in several ways, which are simpler to express when the ellipsoid axes coincide with the coordinate axes. A common choice is x = a sin θ cos φ, y = b sin θ sin φ, z = c cos θ, where 0 ≤ θ ≤ π and 0 ≤ φ < 2π.
Parameterization:
These parameters may be interpreted as spherical coordinates, where θ is the polar angle and φ is the azimuth angle of the point (x, y, z) of the ellipsoid. Measuring from the equator rather than a pole, x = a cos θ cos λ, y = b cos θ sin λ, z = c sin θ, where −π/2 ≤ θ ≤ π/2 and 0 ≤ λ < 2π; here θ is the reduced latitude, parametric latitude, or eccentric anomaly and λ is azimuth or longitude.
Parameterization:
Measuring angles directly to the surface of the ellipsoid, not to the circumscribed sphere, x = R cos γ cos λ, y = R cos γ sin λ, z = R sin γ, where R = abc / √(c²(b² cos²λ + a² sin²λ) cos²γ + a²b² sin²γ), −π/2 ≤ γ ≤ π/2 and 0 ≤ λ < 2π.
γ would be geocentric latitude on the Earth, and λ is longitude. These are true spherical coordinates with the origin at the center of the ellipsoid. In geodesy, the geodetic latitude is most commonly used, as the angle between the vertical and the equatorial plane, defined for a biaxial ellipsoid. For a more general triaxial ellipsoid, see ellipsoidal latitude.
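A quick numerical check of the first parameterization is sketched below: points generated from (θ, φ) with the formulas above satisfy the standard equation x²/a² + y²/b² + z²/c² = 1. The semi-axes 4, 3, 2 are arbitrary example values.

```python
import numpy as np

a, b, c = 4.0, 3.0, 2.0                      # arbitrary example semi-axes
theta = np.linspace(0.0, np.pi, 50)          # polar angle
phi = np.linspace(0.0, 2 * np.pi, 100)       # azimuth angle
theta, phi = np.meshgrid(theta, phi)

x = a * np.sin(theta) * np.cos(phi)
y = b * np.sin(theta) * np.sin(phi)
z = c * np.cos(theta)

# Every generated point lies on the ellipsoid surface.
print(np.allclose(x**2 / a**2 + y**2 / b**2 + z**2 / c**2, 1.0))   # True
```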
Volume:
The volume bounded by the ellipsoid is V = (4/3)πabc.
In terms of the principal diameters A, B, C (where A = 2a, B = 2b, C = 2c), the volume is V = (π/6)ABC. This equation reduces to that of the volume of a sphere when all three elliptic radii are equal, and to that of an oblate or prolate spheroid when two of them are equal.
The volume of an ellipsoid is 2/3 the volume of a circumscribed elliptic cylinder, and π/6 the volume of the circumscribed box. The volumes of the inscribed and circumscribed boxes are respectively V_inscribed = 8abc/(3√3) and V_circumscribed = 8abc.
Surface area:
The surface area of a general (triaxial) ellipsoid is S = 2πc² + (2πab / sin φ)(E(φ, k) sin²φ + F(φ, k) cos²φ), where cos φ = c/a, k² = a²(b² − c²) / (b²(a² − c²)), a ≥ b ≥ c, and where F(φ, k) and E(φ, k) are incomplete elliptic integrals of the first and second kind respectively. The surface area of this general ellipsoid can also be expressed using the RF and RD Carlson symmetric forms of the elliptic integrals by simply substituting the above formula into the respective definitions: S = 2πc² + 2πab [RF(c²/a², c²/b², 1) − (1/3)(1 − c²/a²)(1 − c²/b²) RD(c²/a², c²/b², 1)].
Surface area:
Unlike the expression with F(φ, k) and E(φ, k), the variant based on the Carlson symmetric integrals yields valid results for a sphere, and only the axis c must be the smallest; the order between the two larger axes, a and b, can be arbitrary. The surface area of an ellipsoid of revolution (or spheroid) may be expressed in terms of elementary functions: S_oblate = 2πa²(1 + ((1 − e²)/e) artanh e), where e² = 1 − c²/a² and c < a, or equivalently S_oblate = 2πa² + (2πc²/e) artanh e, or S_oblate = 2πa² + (πc²/e) ln((1 + e)/(1 − e)); and S_prolate = 2πa²(1 + (c/(ae)) arcsin e), where e² = 1 − a²/c² and c > a. As follows from basic trigonometric identities, these are equivalent expressions (i.e. the formula for S_oblate can be used to calculate the surface area of a prolate ellipsoid and vice versa). In both cases e may again be identified as the eccentricity of the ellipse formed by the cross section through the symmetry axis. (See ellipse.) Derivations of these results may be found in standard sources, for example MathWorld.
Surface area:
Approximate formula S ≈ 4π ((a^p b^p + a^p c^p + b^p c^p) / 3)^(1/p).
Here p ≈ 1.6075 yields a relative error of at most 1.061%; a value of p = 8/5 = 1.6 is optimal for nearly spherical ellipsoids, with a relative error of at most 1.178%.
In the "flat" limit of c much smaller than a and b, the area is approximately 2πab, equivalent to p = log23 ≈ 1.5849625007.
Plane sections:
The intersection of a plane and a sphere is a circle (or is reduced to a single point, or is empty). Any ellipsoid is the image of the unit sphere under some affine transformation, and any plane is the image of some other plane under the same transformation. So, because affine transformations map circles to ellipses, the intersection of a plane with an ellipsoid is an ellipse or a single point, or is empty. Obviously, spheroids contain circles. This is also true, but less obvious, for triaxial ellipsoids (see Circular section).
Plane sections:
Determining the ellipse of a plane section Given: Ellipsoid x²/a² + y²/b² + z²/c² = 1 and the plane with equation nx x + ny y + nz z = d, which have an ellipse in common.
Wanted: Three vectors f0 (center) and f1, f2 (conjugate vectors), such that the ellipse can be represented by the parametric equation x = f0 + f1 cos t + f2 sin t (see ellipse).
Solution: The scaling u = x/a, v = y/b, w = z/c transforms the ellipsoid onto the unit sphere u² + v² + w² = 1 and the given plane onto the plane with equation (nx a)u + (ny b)v + (nz c)w = d.
Let mu u + mv v + mw w = δ be the Hesse normal form of the new plane and m = (mu, mv, mw) its unit normal vector. Hence e0 = δm is the center of the intersection circle and ρ = √(1 − δ²) its radius (see diagram).
Where mw = ±1 (i.e. the plane is horizontal), let e1 = (ρ, 0, 0), e2 = (0, ρ, 0).
Where mw ≠ ±1, let e1 = (ρ/√(mu² + mv²)) (mv, −mu, 0), e2 = m × e1.
In any case, the vectors e1, e2 are orthogonal, parallel to the intersection plane and have length ρ (the radius of the circle). Hence the intersection circle can be described by the parametric equation u = e0 + e1 cos t + e2 sin t.
The reverse scaling (see above) transforms the unit sphere back to the ellipsoid and the vectors e0, e1, e2 are mapped onto vectors f0, f1, f2, which were wanted for the parametric representation of the intersection ellipse. How to find the vertices and semi-axes of the ellipse is described in ellipse.
Example: The diagrams show an ellipsoid with the semi-axes a = 4, b = 5, c = 3 which is cut by the plane x + y + z = 5.
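A direct transcription of this scaling procedure is sketched below (numpy only); it returns the center and conjugate vectors of the section ellipse for the worked example with semi-axes 4, 5, 3 and the plane x + y + z = 5.

```python
import numpy as np

def ellipsoid_plane_section(a, b, c, n, d):
    """Center f0 and conjugate vectors f1, f2 of the ellipse cut from
    x^2/a^2 + y^2/b^2 + z^2/c^2 = 1 by the plane n.x = d, via the scaling construction."""
    # Scale to the unit sphere: u = x/a, v = y/b, w = z/c.
    m = np.array([n[0] * a, n[1] * b, n[2] * c], dtype=float)
    norm = np.linalg.norm(m)
    m, delta = m / norm, d / norm            # Hesse normal form of the scaled plane
    if abs(delta) >= 1.0:
        return None                          # the plane misses the unit sphere: empty section
    e0 = delta * m                           # center of the intersection circle
    rho = np.sqrt(1.0 - delta**2)            # its radius
    if abs(abs(m[2]) - 1.0) < 1e-12:         # horizontal plane
        e1 = np.array([rho, 0.0, 0.0]); e2 = np.array([0.0, rho, 0.0])
    else:
        e1 = rho / np.hypot(m[0], m[1]) * np.array([m[1], -m[0], 0.0])
        e2 = np.cross(m, e1)
    S = np.diag([a, b, c])                   # reverse scaling back to the ellipsoid
    return S @ e0, S @ e1, S @ e2

# Example from the text: semi-axes 4, 5, 3 and the plane x + y + z = 5.
f0, f1, f2 = ellipsoid_plane_section(4, 5, 3, np.array([1.0, 1.0, 1.0]), 5.0)
# Points f0 + f1*cos(t) + f2*sin(t) lie on both the ellipsoid and the plane.
```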
Pins-and-string construction:
The pins-and-string construction of an ellipsoid is a transfer of the idea of constructing an ellipse using two pins and a string (see diagram).
Pins-and-string construction:
A pins-and-string construction of an ellipsoid of revolution is given by the pins-and-string construction of the rotated ellipse. The construction of points of a triaxial ellipsoid is more complicated. First ideas are due to the Scottish physicist J. C. Maxwell (1868). Main investigations and the extension to quadrics were done by the German mathematician O. Staude in 1882, 1886 and 1898. The description of the pins-and-string construction of ellipsoids and hyperboloids is also contained in the book Geometry and the Imagination by D. Hilbert and S. Cohn-Vossen.
Pins-and-string construction:
Steps of the construction: Choose an ellipse E and a hyperbola H, which are a pair of focal conics, with the vertices S1, S2 and foci F1, F2 of the ellipse, and a string (red in the diagram) of length l.
Pins-and-string construction:
Pin one end of the string to vertex S1 and the other to focus F2. The string is kept tight at a point P with positive y- and z-coordinates, such that the string runs from S1 to P behind the upper part of the hyperbola (see diagram) and is free to slide on the hyperbola. The part of the string from P to F2 runs and slides in front of the ellipse. The string runs through that point of the hyperbola, for which the distance |S1 P| over any hyperbola point is at a minimum. The analogous statement on the second part of the string and the ellipse has to be true, too.
Pins-and-string construction:
Then: P is a point of the ellipsoid with equation x²/rx² + y²/ry² + z²/rz² = 1, where rx, ry, rz are the semi-axes determined below. The remaining points of the ellipsoid can be constructed by suitable changes of the string at the focal conics.
Semi-axes Equations for the semi-axes of the generated ellipsoid can be derived by special choices for point P: Y=(0,ry,0),Z=(0,0,rz).
Pins-and-string construction:
The lower part of the diagram shows that F1 and F2 are the foci of the ellipse in the xy-plane, too. Hence, it is confocal to the given ellipse and the length of the string is l = 2rx + (a − c). Solving for rx yields rx = (l − a + c)/2; furthermore ry² = rx² − c².
Pins-and-string construction:
From the upper diagram we see that S1 and S2 are the foci of the ellipse section of the ellipsoid in the xz-plane and that rz² = rx² − a².
Converse If, conversely, a triaxial ellipsoid is given by its equation, then from the equations in step 3 one can derive the parameters a, b, l for a pins-and-string construction.
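A minimal Python sketch of both directions (the function names are illustrative): the forward formulas from step 3 give the semi-axes from a, c and the string length l, and the converse recovers a, c and l from the semi-axes.

```python
import math

def semiaxes_from_string(a, c, l):
    """Forward direction (step 3): semi-axes (rx, ry, rz) from the focal-conic
    parameters a, c and the string length l; requires l > 3a - c."""
    rx = 0.5 * (l - a + c)
    return rx, math.sqrt(rx ** 2 - c ** 2), math.sqrt(rx ** 2 - a ** 2)

def string_from_semiaxes(rx, ry, rz):
    """Converse: recover a, c (with b^2 = a^2 - c^2) and the string length l."""
    c = math.sqrt(rx ** 2 - ry ** 2)
    a = math.sqrt(rx ** 2 - rz ** 2)
    return a, c, 2 * rx + a - c

# Round trip on an example triaxial ellipsoid with rx > ry > rz
print(semiaxes_from_string(*string_from_semiaxes(4.0, 3.5, 2.0)))  # ~(4.0, 3.5, 2.0)
```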
Pins-and-string construction:
Confocal ellipsoids: If Ē is an ellipsoid confocal to E with the squares of its semi-axes r̄x² = rx² − λ, r̄y² = ry² − λ, r̄z² = rz² − λ, then from the equations of E, rx² − ry² = c², rx² − rz² = a², ry² − rz² = a² − c² = b², one finds that the corresponding focal conics used for the pins-and-string construction have the same semi-axes a, b, c as ellipsoid E. Therefore (analogously to the foci of an ellipse) one considers the focal conics of a triaxial ellipsoid as its (infinitely many) foci and calls them the focal curves of the ellipsoid. The converse statement is true, too: if one chooses a second string of length l̄ and defines λ = rx² − r̄x², then the equations r̄y² = ry² − λ and r̄z² = rz² − λ are valid, which means the two ellipsoids are confocal.
Pins-and-string construction:
Limit case, ellipsoid of revolution: In the case a = c (a spheroid) one gets S1 = F1 and S2 = F2, which means that the focal ellipse degenerates to a line segment and the focal hyperbola collapses to two infinite line segments on the x-axis. The ellipsoid is rotationally symmetric around the x-axis, with rx = l/2 and ry = rz = √(rx² − c²).
Properties of the focal hyperbola: True curve: If one views an ellipsoid from an external point V of its focal hyperbola, then it appears to be a sphere, that is, its apparent shape is a circle. Equivalently, the tangents of the ellipsoid containing point V are the lines of a circular cone, whose axis of rotation is the tangent line of the hyperbola at V. If one allows the center V to recede to infinity, one gets an orthogonal parallel projection with the corresponding asymptote of the focal hyperbola as its direction. The true curve of shape (the tangent points) on the ellipsoid is not a circle. The lower part of the diagram shows on the left a parallel projection of an ellipsoid (with semi-axes 60, 40, 30) along an asymptote and on the right a central projection with center V and main point H on the tangent of the hyperbola at point V. (H is the foot of the perpendicular from V onto the image plane.) For both projections the apparent shape is a circle. In the parallel case the image of the origin O is the circle's center; in the central case the main point H is the center.
Pins-and-string construction:
Umbilical points The focal hyperbola intersects the ellipsoid at its four umbilical points.
Property of the focal ellipse: The focal ellipse together with its inner part can be considered as the limit surface (an infinitely thin ellipsoid) of the pencil of confocal ellipsoids determined by a, b for rz → 0. For the limit case one gets rx = a, ry = b, l = 3a − c.
In general position:
As a quadric: If v is a point and A is a real, symmetric, positive-definite matrix, then the set of points x that satisfy the equation (x − v)ᵀ A (x − v) = 1 is an ellipsoid centered at v. The eigenvectors of A are the principal axes of the ellipsoid, and the eigenvalues of A are the reciprocals of the squares of the semi-axes: a⁻², b⁻² and c⁻². An invertible linear transformation applied to a sphere produces an ellipsoid, which can be brought into the above standard form by a suitable rotation, a consequence of the polar decomposition (see also the spectral theorem). If the linear transformation is represented by a symmetric 3 × 3 matrix, then the eigenvectors of the matrix are orthogonal (due to the spectral theorem) and represent the directions of the axes of the ellipsoid; the lengths of the semi-axes are computed from the eigenvalues. The singular value decomposition and polar decomposition are matrix decompositions closely related to these geometric observations.
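A short NumPy sketch of this eigendecomposition view; the function name is illustrative, and the semi-axes are recovered as the reciprocal square roots of the eigenvalues of A.

```python
import numpy as np

def ellipsoid_axes_from_quadric(A):
    """Principal axes and semi-axes of the ellipsoid (x - v)^T A (x - v) = 1.
    A must be symmetric positive definite; its eigenvalues are a^-2, b^-2, c^-2."""
    eigvals, eigvecs = np.linalg.eigh(A)   # symmetric eigendecomposition, ascending eigenvalues
    semi_axes = 1.0 / np.sqrt(eigvals)
    return semi_axes, eigvecs              # columns of eigvecs are the axis directions

# Axis-aligned example with semi-axes 2, 3, 5
A = np.diag([1 / 4, 1 / 9, 1 / 25])
print(ellipsoid_axes_from_quadric(A)[0])   # -> [5. 3. 2.]
```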
In general position:
Parametric representation: The key to a parametric representation of an ellipsoid in general position is the alternative definition: an ellipsoid is an affine image of the unit sphere. An affine transformation can be represented by a translation with a vector f0 and a regular 3 × 3 matrix A: x ↦ f0 + Ax = f0 + x f1 + y f2 + z f3, where f1, f2, f3 are the column vectors of the matrix A.
In general position:
A parametric representation of an ellipsoid in general position can be obtained from the parametric representation of a unit sphere (see above) and an affine transformation: x(θ, φ) = f0 + f1 cos θ cos φ + f2 cos θ sin φ + f3 sin θ, −π/2 < θ < π/2, 0 ≤ φ < 2π. If the vectors f1, f2, f3 form an orthogonal system, the six points with vectors f0 ± f1,2,3 are the vertices of the ellipsoid and |f1|, |f2|, |f3| are the semi-principal axes.
In general position:
A surface normal vector at point x(θ, φ) is n(θ, φ) = f2 × f3 cos θ cos φ + f3 × f1 cos θ sin φ + f1 × f2 sin θ.
For any ellipsoid there exists an implicit representation F(x, y, z) = 0. If for simplicity the center of the ellipsoid is the origin, f0 = 0, the following equation describes the ellipsoid above: det(x, f2, f3)² + det(f1, x, f3)² + det(f1, f2, x)² − det(f1, f2, f3)² = 0.
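The parametric and implicit forms can be cross-checked numerically. A minimal NumPy sketch, with arbitrary example vectors f1, f2, f3 and using the determinant form reconstructed above:

```python
import numpy as np

def ellipsoid_point(f0, f1, f2, f3, theta, phi):
    """Parametric point x(theta, phi) of the ellipsoid defined by f0, f1, f2, f3."""
    return (f0 + f1 * np.cos(theta) * np.cos(phi)
              + f2 * np.cos(theta) * np.sin(phi)
              + f3 * np.sin(theta))

def implicit_value(x, f1, f2, f3):
    """F(x) from the determinant form above (center assumed at the origin)."""
    d = np.linalg.det
    return (d(np.column_stack([x, f2, f3])) ** 2
            + d(np.column_stack([f1, x, f3])) ** 2
            + d(np.column_stack([f1, f2, x])) ** 2
            - d(np.column_stack([f1, f2, f3])) ** 2)

f0 = np.zeros(3)
f1, f2, f3 = np.array([2.0, 1.0, 0.0]), np.array([0.0, 3.0, 0.0]), np.array([0.5, 0.0, 1.0])
x = ellipsoid_point(f0, f1, f2, f3, 0.4, 1.1)
print(implicit_value(x, f1, f2, f3))   # ~0: the parametric point satisfies the implicit equation
```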
Applications:
The ellipsoidal shape finds many practical applications: Geodesy: Earth ellipsoid, a mathematical figure approximating the shape of the Earth.
Reference ellipsoid, a mathematical figure approximating the shape of planetary bodies in general. Mechanics: Poinsot's ellipsoid, a geometrical method for visualizing the torque-free motion of a rotating rigid body.
Lamé's stress ellipsoid, an alternative to Mohr's circle for the graphical representation of the stress state at a point.
Manipulability ellipsoid, used to describe a robot's freedom of motion.
Jacobi ellipsoid, a triaxial ellipsoid formed by a rotating fluid. Crystallography: Index ellipsoid, a diagram of an ellipsoid that depicts the orientation and relative magnitude of refractive indices in a crystal.
Applications:
Thermal ellipsoid, ellipsoids used in crystallography to indicate the magnitudes and directions of the thermal vibration of atoms in crystal structures. Lighting: Ellipsoidal reflector floodlight, ellipsoidal reflector spotlight. Medicine: Measurements obtained from MRI imaging of the prostate can be used to determine the volume of the gland using the approximation L × W × H × 0.52 (where 0.52 is an approximation for π/6). Dynamical properties: The mass of an ellipsoid of uniform density ρ is m = Vρ = (4/3)πabcρ.
Applications:
The moments of inertia of an ellipsoid of uniform density are Ixx = (1/5)m(b² + c²), Iyy = (1/5)m(c² + a²), Izz = (1/5)m(a² + b²), and Ixy = Iyz = Izx = 0.
For a = b = c these moments of inertia reduce to those for a sphere of uniform density.
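A minimal Python sketch of the mass and the principal moments of inertia given above, for a uniform-density ellipsoid; the function name is illustrative.

```python
import math

def ellipsoid_mass_and_inertia(a, b, c, rho):
    """Mass and principal moments of inertia of a uniform-density ellipsoid."""
    m = 4.0 / 3.0 * math.pi * a * b * c * rho
    inertia = (m * (b ** 2 + c ** 2) / 5.0,   # Ixx
               m * (c ** 2 + a ** 2) / 5.0,   # Iyy
               m * (a ** 2 + b ** 2) / 5.0)   # Izz
    return m, inertia

# a = b = c = r reduces to the solid-sphere value 2/5 m r^2 about every axis
print(ellipsoid_mass_and_inertia(1.0, 1.0, 1.0, 1.0))
```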
Applications:
Ellipsoids and cuboids rotate stably along their major or minor axes, but not along their median axis. This can be seen experimentally by throwing an eraser with some spin. In addition, moment of inertia considerations mean that rotation along the major axis is more easily perturbed than rotation along the minor axis. One practical effect of this is that scalene astronomical bodies such as Haumea generally rotate along their minor axes (as does Earth, which is merely oblate); in addition, because of tidal locking, moons in synchronous orbit such as Mimas orbit with their major axis aligned radially to their planet.
Applications:
A spinning body of homogeneous self-gravitating fluid will assume the form of either a Maclaurin spheroid (oblate spheroid) or Jacobi ellipsoid (scalene ellipsoid) when in hydrostatic equilibrium, and for moderate rates of rotation. At faster rotations, non-ellipsoidal piriform or oviform shapes can be expected, but these are not stable.
Applications:
Fluid dynamics The ellipsoid is the most general shape for which it has been possible to calculate the creeping flow of fluid around the solid shape. The calculations include the force required to translate through a fluid and to rotate within it. Applications include determining the size and shape of large molecules, the sinking rate of small particles, and the swimming abilities of microorganisms.
Applications:
In probability and statistics: The elliptical distributions, which generalize the multivariate normal distribution and are used in finance, can be defined in terms of their density functions. When they exist, the density functions f have the structure f(x) = k · g((x − μ) Σ⁻¹ (x − μ)ᵀ), where k is a scale factor, x is an n-dimensional random row vector with median vector μ (which is also the mean vector if the latter exists), Σ is a positive definite matrix which is proportional to the covariance matrix if the latter exists, and g is a function mapping from the non-negative reals to the non-negative reals giving a finite area under the curve. The multivariate normal distribution is the special case in which g(z) = exp(−z/2) for quadratic form z.
Applications:
Thus the density function is a scalar-to-scalar transformation of a quadric expression. Moreover, the equation for any iso-density surface states that the quadric expression equals some constant specific to that value of the density, and the iso-density surface is an ellipsoid.
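A minimal NumPy/SciPy sketch checking the stated special case: with g(z) = exp(−z/2) and the usual normal-distribution scale factor k = (2π)^(−n/2)|Σ|^(−1/2) (a standard constant, assumed here rather than taken from the text), the elliptical density reproduces the multivariate normal density.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Elliptical form f(x) = k * g((x - mu) Sigma^-1 (x - mu)^T) with g(z) = exp(-z/2);
# k = (2*pi)^(-n/2) * |Sigma|^(-1/2) is the assumed normalizing constant.
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
x = np.array([0.5, -1.0])

n = len(mu)
k = (2 * np.pi) ** (-n / 2) * np.linalg.det(Sigma) ** (-0.5)
z = (x - mu) @ np.linalg.inv(Sigma) @ (x - mu)
print(k * np.exp(-z / 2))                      # elliptical form
print(multivariate_normal(mu, Sigma).pdf(x))   # matches the multivariate normal pdf
```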
In higher dimensions:
A hyperellipsoid, or ellipsoid of dimension n − 1 in a Euclidean space of dimension n, is a quadric hypersurface defined by a polynomial of degree two that has a homogeneous part of degree two which is a positive definite quadratic form. One can also define a hyperellipsoid as the image of a sphere under an invertible affine transformation. The spectral theorem can again be used to obtain a standard equation of the form x₁²/a₁² + x₂²/a₂² + ⋯ + xₙ²/aₙ² = 1.
In higher dimensions:
The volume of an n-dimensional hyperellipsoid can be obtained by replacing Rⁿ by the product of the semi-axes a₁a₂⋯aₙ in the formula for the volume of a hypersphere: V = (π^(n/2)/Γ(n/2 + 1)) a₁a₂⋯aₙ (where Γ is the gamma function).
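A minimal Python sketch of this volume formula (math.prod requires Python 3.8 or later); the function name is illustrative.

```python
import math

def hyperellipsoid_volume(semi_axes):
    """Volume of an n-dimensional hyperellipsoid with the given semi-axes."""
    n = len(semi_axes)
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * math.prod(semi_axes)

print(hyperellipsoid_volume([1.0, 1.0, 1.0]))   # 4/3*pi: the unit ball in three dimensions
print(hyperellipsoid_volume([4.0, 5.0, 3.0]))   # 4/3*pi*4*5*3
```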
**QuickTransit**
QuickTransit:
QuickTransit was a cross-platform virtualization program developed by Transitive Corporation. It allowed software compiled for one specific processor and operating system combination to be executed on a different processor and/or operating system architecture without source code or binary changes.
QuickTransit was an extension of the Dynamite technology developed by the University of Manchester Parallel Architectures and Languages research group, which now forms part of the university's Advanced Processor Technologies research group.
Silicon Graphics announced QuickTransit's first availability in October 2004 on its Prism visualization systems. These systems, based on Itanium 2 processors and the Linux operating system, used QuickTransit to transparently run application binaries compiled for previous SGI systems based on the MIPS processor and IRIX operating system.
This technology was also licensed by Apple Computer in its transition from PowerPC to Intel (x86) CPUs, starting in 2006. Apple marketed this technology as "Rosetta".
In August 2006, IBM announced a partnership with Transitive to run Linux/x86 binaries on its Power ISA-based Power Systems servers. IBM named this software System p AVE during its beta phase, but it was renamed to PowerVM Lx86 upon release.
QuickTransit:
In November 2006, Transitive launched QuickTransit for Solaris/SPARC-to-Linux/x86-64, which enabled unmodified Solaris applications compiled for SPARC systems to run on 64-bit x86-based systems running Linux. This was followed in October 2007 by QuickTransit for Solaris/SPARC-to-Linux/Itanium, which enabled Solaris/SPARC applications to run on Itanium systems running Linux. A third product, QuickTransit for Solaris/SPARC-to-Solaris/x86-64, was released in December 2007, enabling Solaris/SPARC applications to run on 64-bit x86 systems running Solaris.
QuickTransit:
IBM acquired Transitive in June 2009 and merged the company into its Power Systems division. IBM announced in September 2011 it would discontinue marketing for the PowerVM Lx86 product in January of 2012, withdrawing it from sale completely in April 2013. Apple removed Rosetta from Mac OS X starting with Mac OS X Lion in 2011.
Most of the original team now work for the BBC, Apple in California and ARM in Manchester.
**Thickened earlobes-conductive deafness syndrome**
Thickened earlobes-conductive deafness syndrome:
Thickened earlobes-conductive deafness syndrome, also known as Escher-Hirt syndrome, or Schweitzer Kemink Graham syndrome, is a rare genetic disorder which is characterized by ear and jaw abnormalities associated with progressive hearing loss. Two families worldwide have been described with the disorder.
Presentation:
People with the disorder often have the following symptoms: Ear/Auditory Microtia (abnormally small ears) Thick earlobes Conductive hearing loss Congenital auditory ossicle anomalies Jaw Micrognathia
Etiology:
Escher et al. described a family with dominantly inherited conductive deafness caused by ear anomalies in 1968, and Wilmot et al. described another family with the same symptoms and mode of inheritance in 1970; Schweitzer et al. described the symptoms and declared a novel syndrome in 1984.
**KIF2A**
KIF2A:
Kinesin-like protein KIF2A is a protein that in humans is encoded by the KIF2A gene. Kinesins, such as KIF2, are microtubule-associated motor proteins. For background information on kinesins, see MIM 148760 (supplied by OMIM).
**Transport layer**
Transport layer:
In computer networking, the transport layer is a conceptual division of methods in the layered architecture of protocols in the network stack in the Internet protocol suite and the OSI model. The protocols of this layer provide end-to-end communication services for applications.: §1.1.3 It provides services such as connection-oriented communication, reliability, flow control, and multiplexing.
Transport layer:
The details of implementation and semantics of the transport layer of the Internet protocol suite, which is the foundation of the Internet, and the OSI model of general networking are different. The protocols in use today in this layer for the Internet all originated in the development of TCP/IP. In the OSI model the transport layer is often referred to as Layer 4, or L4, while numbered layers are not used in TCP/IP.
Transport layer:
The best-known transport protocol of the Internet protocol suite is the Transmission Control Protocol (TCP). It is used for connection-oriented transmissions, whereas the connectionless User Datagram Protocol (UDP) is used for simpler messaging transmissions. TCP is the more complex protocol, due to its stateful design incorporating reliable transmission and data stream services. Together, TCP and UDP comprise essentially all traffic on the Internet and are the only protocols implemented in every major operating system. Additional transport layer protocols that have been defined and implemented include the Datagram Congestion Control Protocol (DCCP) and the Stream Control Transmission Protocol (SCTP).
Services:
Transport layer services are conveyed to an application via a programming interface to the transport layer protocols. The services may include the following features: Connection-oriented communication: It is normally easier for an application to interpret a connection as a data stream rather than having to deal with the underlying connection-less models, such as the datagram model of the User Datagram Protocol (UDP) and of the Internet Protocol (IP).
Services:
Same order delivery: The network layer doesn't generally guarantee that packets of data will arrive in the same order that they were sent, but often this is a desirable feature. This is usually done through the use of segment numbering, with the receiver passing them to the application in order. This can cause head-of-line blocking.
Services:
Reliability: Packets may be lost during transport due to network congestion and errors. By means of an error detection code, such as a checksum, the transport protocol may check that the data is not corrupted, and verify correct receipt by sending an ACK or NACK message to the sender. Automatic repeat request schemes may be used to retransmit lost or corrupted data.
Services:
Flow control: The rate of data transmission between two nodes must sometimes be managed to prevent a fast sender from transmitting more data than can be supported by the receiving data buffer, causing a buffer overrun. This can also be used to improve efficiency by reducing buffer underrun.
Services:
Congestion avoidance: Congestion control can control traffic entry into a telecommunications network, so as to avoid congestive collapse by attempting to avoid oversubscription of any of the processing or link capabilities of the intermediate nodes and networks and taking resource reducing steps, such as reducing the rate of sending packets. For example, automatic repeat requests may keep the network in a congested state; this situation can be avoided by adding congestion avoidance to the flow control, including slow start. This keeps the bandwidth consumption at a low level in the beginning of the transmission, or after packet retransmission.
Services:
Multiplexing: Ports can provide multiple endpoints on a single node. For example, the name on a postal address is a kind of multiplexing and distinguishes between different recipients of the same location. Computer applications will each listen for information on their own ports, which enables the use of more than one network service at the same time. It is part of the transport layer in the TCP/IP model, but of the session layer in the OSI model.
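As a toy illustration of port-based multiplexing, the following Python sketch binds two UDP sockets to different ports on one host; the port numbers are arbitrary choices for the example, and each datagram is delivered to the socket bound to its destination port.

```python
import socket

# Two services multiplexed on one host via distinct UDP port numbers (9001/9002 are arbitrary)
svc_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
svc_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
svc_a.bind(("127.0.0.1", 9001))
svc_b.bind(("127.0.0.1", 9002))

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"to service A", ("127.0.0.1", 9001))
client.sendto(b"to service B", ("127.0.0.1", 9002))

# The transport layer delivers each datagram to the socket bound to its destination port
print(svc_a.recvfrom(1024)[0])   # b'to service A'
print(svc_b.recvfrom(1024)[0])   # b'to service B'

for s in (svc_a, svc_b, client):
    s.close()
```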
Analysis:
The transport layer is responsible for delivering data to the appropriate application process on the host computers. This involves statistical multiplexing of data from different application processes, i.e. forming data segments, and adding source and destination port numbers in the header of each transport layer data segment. Together with the source and destination IP address, the port numbers constitute a network socket, i.e. an identification address of the process-to-process communication. In the OSI model, this function is supported by the session layer.
Analysis:
Some transport layer protocols, for example TCP, but not UDP, support virtual circuits, i.e. provide connection-oriented communication over an underlying packet-oriented datagram network. A byte-stream is delivered while hiding the packet mode communication for the application processes. This involves connection establishment, dividing of the data stream into packets called segments, segment numbering and reordering of out-of-order data.
Finally, some transport layer protocols, for example TCP, but not UDP, provide end-to-end reliable communication, i.e. error recovery by means of error detecting code and automatic repeat request (ARQ) protocol. The ARQ protocol also provides flow control, which may be combined with congestion avoidance.
UDP is a very simple protocol and does not provide virtual circuits, nor reliable communication, delegating these functions to the application program. UDP packets are called datagrams, rather than segments.
Analysis:
TCP is used for many protocols, including HTTP web browsing and email transfer. UDP may be used for multicasting and broadcasting, since retransmissions are not feasible to a large number of hosts. UDP typically gives higher throughput and shorter latency and is therefore often used for real-time multimedia communication where occasional packet loss can be accepted, for example IP-TV and IP-telephony, and for online computer games.
Analysis:
Many non-IP-based networks, such as X.25, Frame Relay and ATM, implement the connection-oriented communication at the network or data link layer rather than the transport layer. In X.25, in telephone network modems and in wireless communication systems, reliable node-to-node communication is implemented at lower protocol layers.
The OSI connection-mode transport layer protocol specification defines five classes of transport protocols: TP0, providing the least error recovery, to TP4, which is designed for less reliable networks.
Analysis:
Due to protocol ossification, TCP and UDP are the only widely-used transport protocols on the Internet. To avoid middlebox intolerance, new transport protocols may mimic the wire image of a tolerated protocol, or be encapsulated in UDP, accepting some overhead (e.g., due to outer checksums made redundant by inner integrity checks). QUIC takes the latter approach, rebuilding reliable stream transport on top of UDP.
Protocols:
This list shows some protocols that are commonly placed in the transport layers of the Internet protocol suite, the OSI protocol suite, NetWare's IPX/SPX, AppleTalk, and Fibre Channel.
Protocols:
Comparison of Internet transport layer protocols; comparison of OSI transport protocols: ISO/IEC 8073/ITU-T Recommendation X.224, "Information Technology - Open Systems Interconnection - Protocol for providing the connection-mode transport service", defines five classes of connection-mode transport protocols designated class 0 (TP0) to class 4 (TP4). Class 0 contains no error recovery and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. All OSI connection-mode protocol classes provide expedited data and preservation of record boundaries. Detailed characteristics of the classes are shown in the following table: There is also a connectionless transport protocol, specified by ISO/IEC 8602/ITU-T Recommendation X.234.
**Magnetic declination**
Magnetic declination:
Magnetic declination, or magnetic variation, is the angle on the horizontal plane between magnetic north (the direction the north end of a magnetized compass needle points, corresponding to the direction of the Earth's magnetic field lines) and true north (the direction along a meridian towards the geographic North Pole). This angle varies depending on position on the Earth's surface and changes over time.
Magnetic declination:
Somewhat more formally, Bowditch defines variation as "the angle between the magnetic and geographic meridians at any place, expressed in degrees and minutes east or west to indicate the direction of magnetic north from true north. The angle between magnetic and grid meridians is called grid magnetic angle, grid variation, or grivation."By convention, declination is positive when magnetic north is east of true north, and negative when it is to the west. Isogonic lines are lines on the Earth's surface along which the declination has the same constant value, and lines along which the declination is zero are called agonic lines. The lowercase Greek letter δ (delta) is frequently used as the symbol for magnetic declination.
Magnetic declination:
The term magnetic deviation is sometimes used loosely to mean the same as magnetic declination, but more correctly it refers to the error in a compass reading induced by nearby metallic objects, such as iron on board a ship or aircraft.
Magnetic declination should not be confused with magnetic inclination, also known as magnetic dip, which is the angle that the Earth's magnetic field lines make with the downward side of the horizontal plane.
Declination change over time and location:
Magnetic declination varies both from place to place and with the passage of time. As a traveller cruises the east coast of the United States, for example, the declination varies from 16 degrees west in Maine, to 6 in Florida, to 0 degrees in Louisiana, to 4 degrees east in Texas. The declination at London, UK was one degree west (2014), reducing to zero as of early 2020. Reports of measured magnetic declination for distant locations became commonplace in the 17th century, and Edmund Halley made a map of declination for the Atlantic Ocean in 1700. In most areas, the spatial variation reflects the irregularities of the flows deep in the Earth; in some areas, deposits of iron ore or magnetite in the Earth's crust may contribute strongly to the declination. Similarly, secular changes to these flows result in slow changes to the field strength and direction at the same point on the Earth.
Declination change over time and location:
The magnetic declination in a given area may (most likely will) change slowly over time, possibly as little as 2–2.5 degrees every hundred years or so, depending on where it is measured. For a location close to the pole like Ivujivik, the declination may change by 1 degree every three years. This may be insignificant to most travellers, but can be important if using magnetic bearings from old charts or metes (directions) in old deeds for locating places with any precision.
Declination change over time and location:
As an example of how variation changes over time, see the two charts of the same area (western end of Long Island Sound), below, surveyed 124 years apart. The 1884 chart shows a variation of 8 degrees, 20 minutes West. The 2008 chart shows 13 degrees, 15 minutes West.
Determination:
Field measurement The magnetic declination at any particular place can be measured directly by reference to the celestial poles—the points in the heavens around which the stars appear to revolve, which mark the direction of true north and true south. The instrument used to perform this measurement is known as a declinometer.
Determination:
The approximate position of the north celestial pole is indicated by Polaris (the North Star). In the northern hemisphere, declination can therefore be approximately determined as the difference between the magnetic bearing and a visual bearing on Polaris. Polaris currently traces a circle 0.73° in radius around the north celestial pole, so this technique is accurate to within a degree. At high latitudes a plumb-bob is helpful to sight Polaris against a reference object close to the horizon, from which its bearing can be taken.
Determination:
Determination from maps A rough estimate of the local declination (within a few degrees) can be determined from a general isogonic chart of the world or a continent, such as those illustrated above. Isogonic lines are also shown on aeronautical and nautical charts.
Determination:
Larger-scale local maps may indicate current local declination, often with the aid of a schematic diagram. Unless the area depicted is very small, declination may vary measurably over the extent of the map, so the data may be referred to a specific location on the map. The current rate and direction of change may also be shown, for example in arcminutes per year. The same diagram may show the angle of grid north (the direction of the map's north–south grid lines), which may differ from true north.
Determination:
On the topographic maps of the U.S. Geological Survey (USGS), for example, a diagram shows the relationship between magnetic north in the area concerned (with an arrow marked "MN") and true north (a vertical line with a five-pointed star at its top), with a label near the angle between the MN arrow and the vertical line, stating the size of the declination and of that angle, in degrees, mils, or both.
Determination:
Models and software: Worldwide empirical models of the deep flows described above are available for describing and predicting features of the Earth's magnetic field, including the magnetic declination for any given location at any time in a given timespan. One such model is the World Magnetic Model (WMM) of the US and UK. It is built with all the information available to the map-makers at the start of the five-year period it is prepared for. It reflects a highly predictable rate of change, and is usually more accurate than a map, which is likely months or years out of date. For historical data, the IGRF and GUFM models may be used. Tools for using such models include: Web apps hosted by the National Geophysical Data Center, a division of the National Oceanic and Atmospheric Administration of the United States.
Determination:
A C demo program for the WMM is provided by the National Geospatial-Intelligence Agency, along with various other third-party implementations. The WMM, IGRF, and GUFM models only describe the magnetic field as emitted at the core-mantle boundary. In practice, the magnetic field is also distorted by the Earth's crust, a distortion known as a magnetic anomaly. For more precise estimates, a larger crust-aware model such as the Enhanced Magnetic Model may be used. (See the cited page for a comparison of declination contours.)
Using the declination:
Adjustable compasses A magnetic compass points to magnetic north, not geographic north. Compasses of the style commonly used for hiking include a declination adjustment in the form of a bezel which swivels relative to the base plate. To establish a declination the bezel is rotated until the desired number of degrees plus or minus lie between the bezel's designation N (for North) and the direction indicated by the magnetic end of the needle (usually painted red). This allows the user to establish a true bearing for travel or orientation by aligning the embossed red indicator arrow on the base plate with a landmark or heading on a map. A compass thus adjusted can be said to be reading “true north” instead of magnetic north (as long as it remains within an area on the same isogonic line).
Using the declination:
In the image to the left, the bezel's N has been aligned with the direction indicated by the magnetic end of the compass needle, reflecting a magnetic declination of 0 degrees. The arrow on the base plate indicates a bearing of 312 degrees.
Using the declination:
Non-adjustable compasses To work with both true and magnetic bearings, the user of a non-adjustable compass needs to make simple calculations that take into account the local magnetic declination. The example on the left shows how you would convert a magnetic bearing (one taken in the field using a non-adjustable compass) to a true bearing (one that you could plot on a map) by adding the magnetic declination. The declination in the example is 14°E (+14°). If, instead, the declination was 14°W (−14°), you would still “add” it to the magnetic bearing to obtain the true bearing: 40°+ (−14°) = 26°.
Using the declination:
The opposite procedure is used in converting a true bearing to a magnetic bearing. With a local declination of 14°E, a true bearing (perhaps taken from a map) of 54° is converted to a magnetic bearing (for use in the field) by subtracting the declination: 54° – 14° = 40°. If, instead, the declination was 14°W (−14°), you would still “subtract” it from the true bearing to obtain the magnetic bearing: 54°- (−14°) = 68°.
Navigation:
On aircraft or vessels there are three types of bearing: true, magnetic, and compass bearing. Compass error is divided into two parts, namely magnetic variation and magnetic deviation, the latter originating from magnetic properties of the vessel or aircraft. Variation and deviation are signed quantities. As discussed above, positive (easterly) variation indicates that magnetic north is east of geographic north. Likewise, positive (easterly) deviation indicates that the compass needle is east of magnetic north. Compass, magnetic and true bearings are related by the general equation T = C + V + D, where: C is the compass bearing, M is the magnetic bearing, T is the true bearing, V is the magnetic variation, D is the compass deviation; V < 0, D < 0 for westerly variation and deviation, V > 0, D > 0 for easterly variation and deviation. For example, if the compass reads 32°, the local magnetic variation is −5.5° (i.e. west) and the deviation is 0.5° (i.e. east), the true bearing will be 32° + (−5.5°) + 0.5° = 27°. To calculate true bearing from compass bearing (and known deviation and variation): compass bearing + deviation = magnetic bearing; magnetic bearing + variation = true bearing. To calculate compass bearing from true bearing (and known deviation and variation): true bearing − variation = magnetic bearing; magnetic bearing − deviation = compass bearing. These rules are often combined with the mnemonic "West is best, East is least"; that is to say, add W declinations when going from true bearings to magnetic bearings, and subtract E ones.
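A minimal Python sketch of these conversion rules, reproducing the worked example above; the function names are illustrative, with east positive and west negative.

```python
def true_from_compass(compass, variation, deviation):
    """True bearing from a compass bearing: T = C + D + V (east positive, west negative)."""
    return (compass + deviation + variation) % 360

def compass_from_true(true_bearing, variation, deviation):
    """Inverse conversion: subtract variation, then deviation."""
    return (true_bearing - variation - deviation) % 360

# Example from the text: compass 32 deg, variation 5.5 deg W (-5.5), deviation 0.5 deg E (+0.5)
print(true_from_compass(32, -5.5, 0.5))   # 27.0
print(compass_from_true(27, -5.5, 0.5))   # 32.0
```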
Navigation:
Another simple way to remember which way to apply the correction for continental USA is: For locations east of the agonic line (zero declination), roughly east of the Mississippi: the magnetic bearing is always bigger.
Navigation:
For locations west of the agonic line (zero declination), roughly west of the Mississippi: the magnetic bearing is always smaller.Common abbreviations are: TC = true course; V = variation (of the Earth's magnetic field); MC = magnetic course (what the course would be in the absence of local deviation); D = deviation caused by magnetic material (mostly iron and steel) on the vessel; CC = compass course.
Navigation:
Deviation Magnetic deviation is the angle from a given magnetic bearing to the related bearing mark of the compass. Deviation is positive if a compass bearing mark (e.g., compass north) is right of the related magnetic bearing (e.g., magnetic north) and vice versa. For example, if the boat is aligned to magnetic north and the compass' north mark points 3° more east, deviation is +3°. Deviation varies for every compass in the same location and depends on such factors as the magnetic field of the vessel, wristwatches, etc. The value also varies depending on the orientation of the boat. Magnets and/or iron masses can correct for deviation, so that a particular compass accurately displays magnetic bearings. More commonly, however, a correction card lists errors for the compass, which can then be compensated for arithmetically. Deviation must be added to compass bearing to obtain magnetic bearing.
Navigation:
Air navigation Air navigation is based on magnetic directions thus it is necessary to periodically revise navigational aids to reflect the drift in magnetic declination over time. This requirement applies to VOR beacons, runway numbering, airway labeling, and aircraft vectoring directions given by air traffic control, all of which are based on magnetic direction. Runways are designated by a number between 01 and 36, which is generally one tenth of the magnetic azimuth of the runway's heading: a runway numbered 09 points east (90°), runway 18 is south (180°), runway 27 points west (270°) and runway 36 points to the north (360° rather than 0°). However, due to magnetic declination, changes in runway designators have to occur at times to keep their designation in line with the runway's magnetic heading. An exception is made for runways within the Northern Domestic Airspace of Canada; these are numbered relative to true north because proximity to the magnetic North Pole makes the magnetic declination large and changes in it happen at a high pace.
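As a small illustration of the runway-numbering rule just described, the sketch below converts a magnetic heading to a runway designator; it assumes the usual convention of rounding to the nearest ten degrees and mapping 0 to 36.

```python
def runway_designator(magnetic_heading_deg):
    """Runway number: heading rounded to the nearest 10 degrees, divided by 10, with 0 -> 36."""
    n = round((magnetic_heading_deg % 360) / 10) % 36
    return n if n != 0 else 36

print(runway_designator(274))  # 27 (points roughly west)
print(runway_designator(4))    # 36 (points roughly north)
```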
Navigation:
Radionavigation aids located on the ground, such as VORs, are also checked and updated to keep them aligned with magnetic north to allow pilots to use their magnetic compasses for accurate and reliable in-plane navigation.
Navigation:
For simplicity aviation sectional charts are drawn using true north so the entire chart need not be rotated as magnetic declination changes. Instead individual printed elements on the chart (such as VOR compass roses) are updated with each revision of the chart to reflect changes in magnetic declination. For an example refer to the sectional chart slightly west of Winston-Salem, North Carolina in March 2021, magnetic north is 8 degrees west of true north (Note the dashed line marked 8°W).When plotting a course, some small aircraft pilots may plot a trip using true north on a sectional chart (map), then convert the true north bearings to magnetic north for in-plane navigation using the magnetic compass. These bearings are then converted on a pre-flight plan by adding or subtracting the local variation displayed on a sectional chart.
Navigation:
GPS systems used for aircraft navigation also display directions in terms of magnetic north even though their intrinsic coordinate system is based on true north. This is accomplished by means of lookup tables inside the GPS which account for magnetic declination. If flying under visual flight rules it is acceptable to fly with an outdated GPS declination database however if flying IFR the database must be updated every 28 days per FAA regulation.
Navigation:
As a fail-safe, even the most advanced airliner will still have a magnetic compass in the cockpit. When onboard electronics fail, pilots can still rely on paper charts and the magnetic compass, an ancient and highly reliable device.
**Fixed service**
Fixed service:
In telecommunications, a fixed service (or fixed radiocommunication service) is a radiocommunication service between specified fixed points.
Classification:
The ITU Radio Regulations (article 1) classify variations of this radiocommunication service as follows: Fixed service; Fixed-satellite service (article 1.21); Fixed station (article 1.66); Inter-satellite service (article 1.22); Earth exploration-satellite service (article 1.51); Meteorological-satellite service (article 1.52).
Examples:
In line with national regulations there are numerous radio applications in accordance with ITU RR article 1.20 on fixed services. These include: Radio relay Troposcatter radiocommunication Embassy radiocommunication, between fixed point Fixed wireless
Frequency allocation:
The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012). In order to improve harmonisation in spectrum utilisation, the majority of service allocations stipulated in this document were incorporated in national Tables of Frequency Allocations and Utilisations, which are within the responsibility of the appropriate national administration. The allocation might be primary, secondary, exclusive, or shared.
Frequency allocation:
Primary allocation: indicated by writing in capital letters (see example below); secondary allocation: indicated by small letters; exclusive or shared utilization: within the responsibility of administrations. However, military usage, in bands where there is civil usage, will be in accordance with the ITU Radio Regulations. In NATO countries, military fixed utilizations will be in accordance with the NATO Joint Civil/Military Frequency Agreement (NJFA).
Frequency allocation:
An example of frequency allocation in the 8.3–110 kHz range would be:
**2-Bromoanisole**
2-Bromoanisole:
2-Bromoanisole is an organobromide with the formula BrC6H4OCH3. A colorless liquid, it is one of three isomers of bromoanisole, the others being 3-bromoanisole and 4-bromoanisole. It is a standard coupling partner in metal catalyzed coupling reactions. These reactions include Heck reactions, Buchwald-Hartwig coupling, Suzuki couplings, and Ullmann condensations. The corresponding Grignard reagent readily forms. It is a precursor to o-anisaldehyde.
**Jack L. Koenig**
Jack L. Koenig:
Jack L. Koenig (1933-2021) was a chemical engineer noted for pioneering spectroscopic methods of polymer characterization. In particular, he played a significant role in developing characterization methods to provide fundamental structure-property relationships for polymers used in thermoplastic and thermoset systems.Koenig was elected a member of the National Academy of Engineering in 2000 for applications of spectroscopic methods of polymeric materials.
Education:
Koenig earned his B.A. in Chemistry and Mathematics from Yankton College and his M.S. and Ph.D. in theoretical chemistry from the University of Nebraska.
Career:
Before joining the faculty at Case Institute of Technology in 1963, he worked for a short time at DuPont on spectroscopic methods for characterizing polymers.
Early in his career he was mentored by Goodyear medalist Prof. J. Reid Shelton.
Career:
He is known for inventing the infrared method of measuring branches in polyethylene and for the method of determining the molecular weight of insoluble PTFE polymers. Both of his methods are now ASTM standard test methods.Koenig was promoted to the position of J. Donnell Institute Endowed Chair in the Department of Macromolecular Science and Engineering at Case Western Reserve University in Cleveland, OH in 1990. He retired in 2004 as a professor emeritus from Case Western Reserve University.
Honors and awards:
1970 - American Physical Society Fellow 1984 - Pittsburgh Society Spectroscopy Award 1986 - Alexander Von Humboldt Award for Senior U.S. Scientists 2000 - Charles Goodyear Medal 2000 - Elected to the National Academy of Engineering (U.S.) 2006 - Plastics Academy Hall of Fame
Publications:
Koenig published 542 articles. His most highly cited articles are:
Miller-Chou, B.A., Koenig, J.L. "A review of polymer dissolution." Progress in Polymer Science (Oxford), 2003, 28(8), pp. 1223–1270. https://doi.org/10.1016/S0079-6700(03)00045-5
Mathlouthi, M., Koenig, J.L. "Vibrational spectra of carbohydrates." Advances in Carbohydrate Chemistry and Biochemistry, 1987, 44(C), pp. 7–89. https://doi.org/10.1016/S0065-2318(08)60077-3
Tuinstra, F., Koenig, J.L. "Characterization of Graphite Fiber Surfaces with Raman Spectroscopy." Journal of Composite Materials, 1970, 4(4), pp. 492–499. https://doi.org/10.1177/002199837000400405
Chiang, C.-H., Ishida, H., Koenig, J.L. "The structure of γ-aminopropyltriethoxysilane on glass surfaces." Journal of Colloid And Interface Science, 1980, 74(2), pp. 396–404. https://doi.org/10.1016/0021-9797(80)90209-X
Ishida, H., Wellinghoff, S.T., Baer, E., Koenig, J.L. "Spectroscopic Studies of Poly[N,N'-bis(phenoxyphenyl)pyromellitimide]. 1. Structures of the Polyimide and Three Model Compounds." Macromolecules, 1980, 13(4), pp. 826–834. https://doi.org/10.1021/ma60076a011
**Two-lane expressway**
Two-lane expressway:
A two-lane expressway or two-lane freeway is an expressway or freeway with only one lane in each direction, and usually no median barrier. It may be built that way because of constraints, or may be intended for expansion once traffic volumes rise. The term super two is often used by roadgeeks for this type of road, but traffic engineers use that term for a high-quality surface road. Most of these roads are not tolled.
Two-lane expressway:
A somewhat related concept is a "four-lane undivided freeway". This is much rarer; a current example is U.S. Route 101 in California through Humboldt Redwoods State Park.
Two-lane expressway:
In Europe, the concept of an express road encompasses roads which are classified between a motorway and an ordinary road. This concept is recognized both by European Union law and under the UNECE treaty. This type of road is not very standardized, and its geometry may vary from country to country or within the same country. These roads are usually, but not always, reserved for motorized vehicles, with limited access. A European Union regulation considers high-quality roads to be roads "which play an important role in long-distance freight and passenger traffic, integrate the main urban and economic centres, interconnect with other transport modes and link mountainous, remote, landlocked and peripheral NUTS 2 regions to central regions of the Union". According to the same regulation, "High-quality roads shall be specially designed and built for motor traffic, and shall be either motorways, express roads or conventional strategic roads."
Justification:
Two-lane freeways are usually built as a temporary solution due to lack of funds, as an environmental compromise, or as a way around constraints that prevent immediate reconstruction to four or more lanes. If the road is widened, the existing road is typically allocated to traffic going in one direction, and the lanes for the other direction are built as a whole new roadbed adjacent to the existing one. When upgraded in this manner, the road becomes a typical freeway. Many two-lane freeways are built so that when the road is upgraded to a proper divided freeway, the existing overpasses and ramps do not need reconstruction. A super-2 expressway is a high-speed surface road with at-grade intersections, depending on the common usage of the term expressway in the area. By this definition, super-2s can be considered the first stage of a project which is expected to become a full freeway, with the transportation authority owning the land necessary for the future adjacent carriageway. At-grade intersections exist but there is sufficient land to replace them with interchanges. In some US states, a super-2 expressway is simply referred to as a super-2, regardless of whether it is fully controlled-access or not. Highway 410 in Ontario was originally a super-2 before being upgraded to a full freeway. Similarly, most of Highway 102 in Nova Scotia was a super-2 for three decades before being upgraded. Many super-2 expressways are simply short transitional segments between surface streets and four-lane divided freeways.
Justification:
A super-4 expressway is a multi-lane divided highway with at-grade intersections, although the highway will become a full controlled-access freeway if the intersections are replaced with interchanges. A super-4 may have been a super-2 that has been twinned, although such instances of super-4 intermediaries are rare as super-2s are often upgraded right away to full freeways. Highway 40 in Ontario is a super-4 expressway between Highway 402 and Wellington St., and from Indian Rd to Rokeby Line. The remaining sections of Highway 40 are super-2 expressways. Other super-4 expressways include the Hanlon Parkway in Guelph and Black Creek Drive in Toronto, both which have sufficient right of way to allow for interchanges and overpasses to replace the at-grade crossings.
Justification:
When a super-2 expressway is converted to a four-lane divided freeway, conversion artifacts such as double yellow lines, or broken yellow lines in passing zones, are usually removed in favor of more consistent road markings for four-lane divided expressways.
List of two-lane freeways:
Argentina National Route 38, between Famaillá and Juan Bautista Alberdi in Tucumán Province.
Australia In Melbourne, Victoria, the Mornington Peninsula Freeway, is a two-lane freeway between the interchange with Jetty Road and the interchange with Boneo Road at 90 km/h.
In Brisbane, Queensland, the Cunningham Highway, is a two-lane freeway between Warwick Road and Ripley Road, After Ripley Road the Cunningham Highway is grade separated until it meets the Ipswich Motorway, M2.
In Kingston, Tasmania; Kingston Bypass has been constructed as a two-lane expressway, with provision for dual carriageway in the future when needed.
In north-eastern Tasmania, the Bass Highway has some grade-separated interchanges, and the standard rural freeway 110 km/h speed limit, but with some sections having only two lanes.
On the Sunshine Coast, Queensland, the Sunshine Motorway is a two-lane freeway between Kawana Way and Nicklin Way, and again between David Low Way and Emu Mountain Road (although it has a roundabout in this section).
In Townsville, Queensland, the Townsville Ring Road is two-laned.
Canberra's Gungahlin Drive Extension (GDE) was constructed as a two-lane grade-separated freeway for part of its length. The GDE has since been duplicated to four lanes.
The Motorway Link Road, between the Pacific Highway and the Pacific Motorway in the Hunter Valley is an example of a two-way freeway—it has a speed limit of 100 km/h.
Mandjoogoordap Drive in Mandurah, Western Australia, was to be built as a two-lane freeway, before funding was then supplied for duplication.
Canada Many of the 100-series highways in Nova Scotia and arterial highways in New Brunswick are two-lane freeways, with diamond interchanges and grade separations with many intersecting roads.
Some Quebec Autoroutes are also two-lane freeways for some of their length, including Autoroute 50 in Mirabel and Autoroute 20 in Rimouski.
Some sections of the Trans Canada Highway in Newfoundland, Nova Scotia, Ontario, Manitoba, and British Columbia are two-lane freeways.
Airport Parkway located in Ottawa connects the Ottawa Macdonald–Cartier International Airport to downtown. Its speed limit is 80 km/h with no mid line dividers. Talks have suggested to twin the expressway.
In Ottawa, the transit-ways are almost entirely two lane, undivided freeways through the use of grade separations but are for buses and emergency vehicles only.
List of two-lane freeways:
Europe In a few European countries (like Germany and Switzerland), many rural highways have been converted into two-lane freeways. However, most of these have been built with low overpasses wide enough to accommodate only two-lanes, which indicates that there is no intent to widen them into freeways in the foreseeable future. In German this type of road is called an Autostrasse.
List of two-lane freeways:
In Germany rural segments of the A 8 in the Saarland (between Saarlouis and the Luxembourg border), the A 60 from the exit for Prüm to the Belgian border, and the A 62 between the A 6 and the A 8, are two lanes (or, in the former two cases, 2 + 1 with an extra climbing lane). Unlike the Autostrassen previously mentioned, these segments are built to Autobahn standards but with only one carriageway; all of the overpasses, culverts and short bridges, cuttings and earthworks are wide enough for twin carriageways, and only some long bridges would need to be dualized for upgrading to a full 4- or 5-lane Autobahn.
List of two-lane freeways:
In Croatia, the Istrian Y highway complex used to consist of two-lane freeways, which were due to be upgraded to four-lane ones should traffic increase. The complex is classified as consisting of expressways and as such has a general speed limit of 110 km/h (68 mph), although a limit of 100 km/h (62 mph) tends to be more prevalent there. However, as the traffic increases came sooner than predicted, the status of the Istrian Y was changed to semi-highway, as widening to four or six lanes is already in progress.
List of two-lane freeways:
Highway 19 in the Czech Republic is a two-lane expressway between Highway 3 and Zahradka.
Some of the motorways in former German areas in Poland were originally two-lane expressways when built in the 1930s. Currently the road from Elbląg to Kaliningrad in Russia is still two-lane and in Poland is signed as expressway S22.
List of two-lane freeways:
In Sweden and Norway, a large number of two-lane expressways were built in the period 1960–1990 (Sweden) and 1970–2000 (Norway). In addition, some have been built in Denmark. Only a few such roads have been built recently because there were many serious accidents. Many have been widened to four-lane expressways. Those remaining have, in Sweden, been converted to 2+1 roads with a barrier between the directions. In Norway and many other European countries the two-lane expressways are too narrow to convert to 2+1 roads unless they are widened.
List of two-lane freeways:
In Spain, there were few two-lane expressways until the 1980s, when many started being built to provide a faster and safer alternative to old rural roads that cross towns and have long and dangerous alignments around mountains and other obstacles. Although there are fewer accidents on this type of road than on normal roads, those that happen are usually more serious, due to the high speeds they allow (100 km/h, but up to 120 km/h to pass). Some blamed the name "expressway" as being an "apology of speed", so after 2003 it was changed to "automobile way" in most of the country ("high performance way" in some regions). Since the 1990s, many have been widened to four-lane or six-lane expressways, but still few have been converted to 2+1 roads. In 2019, their general speed limit was lowered to 90 km/h.
List of two-lane freeways:
In Romania the total width of an expressway is 21.5 m, while the width of a highway is 26 m. As a result, each lane of an expressway is 0.25 m narrower and there are no emergency lanes. The maximum speed is 10 km/h lower than that of a highway: 120 km/h instead of 130 km/h.
List of two-lane freeways:
Japan While most expressways in Japan are four-lane divided expressways with median barriers, some expressways in rural areas are two-lane expressways, such as some sections of the Hokkaidō Expressway. The two-lane expressways in Japan are built in the same manner as the ordinary four-lane expressways with grade-separated interchanges and full access control, allowing future conversions to full four-lane divided expressways.
List of two-lane freeways:
Malaysia The two-lane expressway is not a new concept in Malaysia, as the Kuala Lumpur–Karak Expressway was initially a two-lane toll expressway before being upgraded to a full expressway in 1997. While the full four-lane divided toll expressways are more favored in recent years due to their higher traffic capacity, a few two-lane expressways do exist, such as the Kempas Highway and the North Klang Straits Bypass. These expressways, however, only have partial access control with at-grade intersections commonly available like most other federal and state roads. Nevertheless, these two-lane highways are still classified as "two-lane expressways" as they are maintained by highway concessionaires, namely PLUS Expressways Berhad (Kempas Highway) and Shapadu (North Klang Straits Bypass).
List of two-lane freeways:
Meanwhile, the South Klang Valley Expressway at Teluk Panglima Garang is a two-lane carriageway making it the first true two-lane expressway in Klang Valley and the second in Malaysia.
The first true two-lane expressway with full access control is the section of the Senai–Desaru Expressway between Cahaya Baru and Penawar.
List of two-lane freeways:
Mexico A new Super-2 bypass of Mexicali (MEX-2D) was completed in summer 2006. It features 1 lane in each direction and is a toll road. Three interchanges exist—one at each end, and one in the middle, providing access to MEX-5 (north to downtown Mexicali and south to San Felipe). The road has complete control of access. According to a toll collector, this Super-2 is scheduled for an upgrade to a full toll freeway (four lanes, two in each direction) by sometime in 2008. Eventually, this freeway may be constructed all the way to San Luis Río Colorado, replacing the existing four lane undivided highway, MEX-2.
List of two-lane freeways:
A Super-2 toll road, MEX-150D and MEX-190D (MEX-150D travels to Veracruz), connects Mexico City and Oaxaca.
A Super-2 bypass of Poza Rica, Veracruz, was finished in 2005. This two-lane toll highway connects MEX 131 north of Poza Rica to MEX 180 east of Papantla.
A Super-2 toll road (MEX-15D) connects most of the distance between Mazatlán and Tepic.
New Zealand Motorways The Wellington Urban Motorway is three lanes undivided (two northbound, one southbound) through the Terrace Motorway Tunnel. A separate southbound tunnel was never built.
Christchurch The Christchurch Northern Motorway is two lanes undivided between its northern terminus at Pineacres and the Lineside Road interchange. This reduces the number of lanes prior to the northern terminus, where the motorway merges onto a two-laned road.
Prior to 2012, the Christchurch Southern Motorway in the southwestern part of the city was two-laned between Barrington Street and Curletts Road. The road has since been duplicated to four lanes divided as part of the motorway's extension from Curletts Road to Halswell Junction Road.
Expressways Linking the cities of Napier and Hastings is the two-laned Hawke's Bay Expressway.
Linking Christchurch with Lyttelton is the limited-access Tunnel Road.
In Dunedin, the Dunedin Southern Motorway has a short one-kilometre section which is only two-laned. Work is underway to duplicate this section to four lanes divided.
Philippines A segment of Metro Manila Skyway from Alabang South Station to Alabang Toll Plaza is a two-lane expressway.
A segment of North Luzon Expressway in Mabalacat, Pampanga from Subic–Clark–Tarlac Expressway (SCTEX) Interchange to Santa Ines (northern terminus) was a two-lane expressway. However, it was announced on 7 November 2013 that the Manila North Tollways Corporation will expand SCTEX-Santa Ines segment from two lanes to four lanes.
A segment of the Southern Tagalog Arterial Road (STAR Tollway) from Lipa to Batangas City (southern terminus) was a two-lane expressway before being upgraded to four lanes; however, the segment from Sabang Bridge to the Batangas Toll Plaza is still a two-lane expressway.
The Subic Freeport Expressway in the provinces of Zambales and Bataan was a two-lane expressway until its expansion in 2020.
South Africa Some sections of two-lane freeway can be found on the N1 and the N2 highways.
United Kingdom The former A6144(M) in Manchester had one lane in each direction, although to motorway standards. It has now been downgraded to an A road.
Part of the A601(M) road in Lancashire was a two-lane motorway between its junction with the M6 and terminus at the B6254. This section was downgraded to a B road in 2020.
The Runcorn Spur Road in Runcorn is a two-lane expressway with grade separations and at-grade intersections (partially motorway-like).
United States Arizona A portion of State Route 80 in the vicinity of Bisbee is a two-lane expressway with an interchange at West Boulevard and Tombstone Canyon Road (Historic US 80).
California U.S. Highway 101 is a two-lane undivided freeway between County Route D8 near the Klamath River and Elhers Avenue in Klamath. US 101 is also a two-lane undivided freeway for most of the segment bypassing Willits.
State Route 108 bypassing Sonora is a two-lane undivided freeway.
A section of State Route 138 at Cleghorn Road in San Bernardino County briefly becomes a two-lane freeway.
State Route 154 is a two-lane undivided freeway over State Route 192 in Santa Barbara.
State Route 255 is a two-lane undivided freeway for the Samoa Bridge segment between Eureka and Samoa, with one interchange at Woodley Island in Eureka.
U.S. Highway 395 in Inyokern is a two-lane freeway east of Inyokern Airport.
Connecticut A one-mile (1.6 km) portion of the Milford Connector from the Wilbur Cross Parkway to Wheelers Farms Road in Milford. This divided two-lane extension of the original connector opened in 1993.
Route 190 between Route 159 in Suffield and the Pearl Street underpass in Enfield is a two-lane freeway. It was originally planned to be a four-lane freeway across northern Connecticut.
Route 2A from the eastbound on-ramp from Mohegan Boulevard to Route 12 (0.8 miles or 1.3 kilometres).
Florida State Road 293 is a two-lane expressway in two separate locations. One is north of the unincorporated community of Bluewater Bay, and the other section is north of Niceville. This parkway mostly serves as a bypass around the community of Niceville, providing quicker access to Bluewater Bay and Destin.
State Road 407, providing a connection between the Beachline Expressway, Interstate 95, and State Road 405 for direct access into Kennedy Space Center, is a two-lane freeway for most of its distance. An at-grade intersection is near its eastern terminus.
State Road 570 (Polk Parkway) was a two-lane freeway along its northernmost six miles (9.7 km) from its opening in 1999 to 2011. It is two lanes for approximately 3.5 miles (5.6 km) from Old Dixie Highway to a point three miles (4.8 km) south of Interstate 4. This is an example of a two-lane toll road.
Kansas U.S. Highway 36 from just west of Hiawatha to Wathena.
U.S. Highway 400 bypasses Neodesha to the south and west. The western portion of this bypass is two-lane, while the eastern section, overlapping U.S. Highway 75, is a conventional four-lane freeway.
K-96/K-14 bypasses Hutchinson from the south at U.S. Highway 50 along the western side of the urban area to just northwest of Yaggy at West 50th Avenue, where it returns to a conventional two-lane highway.
K-10 from its western terminus at Interstate 70 (Kansas Turnpike) to U.S. Highway 59 is a super-2.
K-4 from U.S. Highway 40 at SE 6th Street to U.S. Highway 24 is a super-2 and is known to residents of Topeka as the Oakland Expressway.
U.S. Highway 75 runs as a super-2 from its junction with Interstate 35/U.S. Highway 50 at BETO Junction to K-31 at Melvern Lake.
U.S. Highway 169 runs as a super-2 from its junction with U.S. Route 54 in Iola, Kansas to Forty-Ninth Street in Chanute, Kansas.
List of two-lane freeways:
Kentucky The Hal Rogers Parkway (formerly Daniel Boone Parkway), connecting Hazard and London, is a two-lane freeway for virtually its entire length (approximately 65 miles (105 km)), with occasional truck lanes on hills. The only four-lane section is the northern bypass of London at the road's western end. Originally, the road was tolled from the eastern end of the London bypass to Hazard. Upgrading to four lanes had been considered in the early 21st century as part of a possible extension of Interstate 66, but the I-66 project was officially cancelled in 2015.
List of two-lane freeways:
The Bert T. Combs Mountain Parkway, another road that was originally a toll road but has since ceased toll collection, is a two-lane freeway from exit 46 at Campton to the road's eastern terminus in Salyersville. In 2014, Kentucky Governor Steve Beshear announced a proposal to upgrade the two-lane section to four lanes, and extend the parkway a further 16 miles (26 km) to Prestonsburg.
List of two-lane freeways:
Louisiana On Louisiana Highway 1, the completed elevated section is a two-lane toll bridge from Leeville across Bayou Lafourche to Port Fourchon, with future phases intended to eventually convert the highway from Port Fourchon to Golden Meadow into a four-lane elevated expressway.
Maine Interstate 95 north of Bangor was originally constructed as a two-lane freeway. In 1981 the present divided highway was completed between Bangor and Houlton at the Canadian border.
Maryland Maryland Route 90 (Ocean City Expressway) is a two-lane expressway with one traffic light.
Portions of the Francis Scott Key Bridge approach (exit 42 to exit 44) on the Baltimore Beltway were originally a two-lane freeway when the final section of the beltway opened in 1977. The highway was updated to four lanes in the mid-1990s.
The Hampstead Bypass (Maryland Route 30) in Carroll County is a two-lane expressway. It has a traffic circle with Maryland Route 482.
The Bel Air Bypass (U.S. Route 1) in Harford County includes portions of two-lane freeway along with 2+2 road segments.
Massachusetts U.S. Route 6 on Cape Cod, from exit 78B (Route 134) in Dennis to the Orleans Rotary. This stretch of highway is known to locals as "Suicide Alley" due to the high number of fatalities from head-on collisions. Median construction has alleviated this problem somewhat.
Route 2, from Erving to the eastern U.S. Route 202 interchange in Phillipston.
The Plimoth Plantation Highway, a spur off Route 3 in Plymouth.
U.S. Route 44 from Route 105 in Middleborough to Route 58 in Carver. This stretch of highway has a guard rail that acts as a median barrier.
Route 88, a two-lane expressway in Westport connecting Interstate 195, US 6, and Route 177 to Horseneck Beach. Unlike the other examples in Massachusetts, Route 88 features a mixture of interchanges and signaled intersections.
List of two-lane freeways:
Michigan A rare instance of a two-lane freeway restricted to only one direction of traffic existed in Michigan at one time: U.S. Highway 16 was restricted to eastbound traffic when bypassing Farmington, Michigan. This arrangement permitted passing traffic without the liability of a head-on collision, though it ended when the road was upgraded to a four-lane divided freeway which became I-96, and later M-102, then M-5.
List of two-lane freeways:
M-231 is a two-lane freeway built as a "U.S. 31 emergency route" from Interstate 96 and M-104 to M-45, with one at-grade intersection at Lincoln Street near Robinson. Future plans call for southward extensions and widening of M-231.
Minnesota U.S. Highway 12 bypasses Long Lake, from Wayzata to Orono. Originally built without a median, it received a median in September 2016 in response to the highway's high accident rate.
List of two-lane freeways:
Missouri U.S. Route 54 bypasses Mexico on a two-lane expressway around the city. The two-lane expressway both begins and ends at the original route through Mexico, now signed as Business Loop 54. The divided highway begins just east of the West Mexico Interchange, while the east end is only a set of ramps to eventually be connected to the planned expressway. Another two-lane freeway section is a northern bypass of Bowling Green, with a grade-separated crossing at Business U.S. Route 61 and a diamond interchange at U.S. Route 61 (the at-grade intersections are on Route 54) that has grading for a full cloverleaf interchange.
U.S. Route 65 in Warsaw is a two-lane expressway from Route MM to just north of Main Street. The portion from north of Main Street to North Dam Access Road became a four-lane expressway in 2012.
List of two-lane freeways:
Montana Interstate 15 and Interstate 90 in Butte, Montana, run as two separate two-lane roadways carrying opposite directions of traffic.
List of two-lane freeways:
New Hampshire Interstate 93 (Franconia Notch Parkway) from Lincoln (mile marker 113) to Franconia (mile marker 121), due to fears that building a four-lane highway would destabilize the Old Man of the Mountain, long before the rock formation's natural collapse from erosion occurred on 3 May 2003. The highway was constructed with a median divider. This segment of I-93 is now the only Interstate highway with fewer than four lanes.
List of two-lane freeways:
Laconia Bypass (U.S. Route 3 and New Hampshire Route 11).
New Hampshire Route 101 from Milford to Amherst, and again from exit 13 (New Hampshire Route 27) in Hampton to Highland Avenue at Hampton Beach. A former two-lane freeway section from Raymond to Hampton was widened to four lanes in the mid-1990s.
Spaulding Turnpike from Rochester to Milton.
New Hampshire Route 9 and U.S. Route 202 overlap from Henniker to Hillsborough.
U.S. Route 4 from Sheep Road in Lee to the New Hampshire Route 108 exit in Durham bypasses the University of New Hampshire.
New Jersey The Freehold Bypass of Route 33 is a two-lane freeway between Halls Mill Road (County Route 55) and Brickyard Road. There is a full cloverleaf at Halls Mill, a westbound entrance at Howell Road, and full access from Fairfield Road.
List of two-lane freeways:
New York An example of a two-lane parkway is Bethpage State Parkway on Long Island. This was constructed by Robert Moses as a two-lane freeway in part due to aesthetics. Like most parkways (especially those created by Moses), the road was originally meant to deliver a pleasurable motoring experience, and as such incorporates natural scenery, as well as pedestrian and bicycle trails for those who choose not to drive.
List of two-lane freeways:
New York State Route 85 near Albany contains a section of approximately two miles (3.2 km) of two-lane freeway extending from the Albany city line to the roundabout at Blessing Road. This section, colloquially known as the Slingerlands Bypass, was originally constructed as two lanes of a four-lane freeway when it was designed in the 1940s and 1950s. However, the remaining two lanes were never completed. The unused adjacent land could be used to construct the two lanes originally planned with relatively little effort, because most of the grading and drainage from the original construction work is already in place.
List of two-lane freeways:
New York State Route 5S has a two-lane freeway section between Ilion and its junction with New York State Route 28. The highway is a divided four-lane freeway west of this, extending to Utica.
North Carolina U.S. Highway 1 between Cary and Sanford (exits 70 and 98) was a two-lane freeway until its expansion to four divided lanes in the late 1990s.
U.S. Highway 17 between exits 224 and 229 in the Edenton area was a two-lane freeway until the late 1990s.
U.S. Highway 64 between exits 457 and 463 in the Nashville area was a two-lane freeway until the late 1970s.
U.S. Highway 421 was originally constructed with three two-lane freeway segments: the first between Winston-Salem and Yadkinville, and the remaining sections as parts of a bypass around the towns of North Wilkesboro and Wilkesboro. These sections were converted into four-lane divided freeways between the 1970s and 1990s.
Ohio U.S. Route 33, from Athens to Darwin, and again between State Route 7 and Ravenswood, West Virginia.
In the Delaware urban area, U.S. Route 42 is a two-lane freeway between London Road and U.S. Route 23 and has some grade separations. Its intersection with U.S. Route 23, with which it runs concurrently, is nonetheless at-grade.
U.S. Route 22 is a two-lane expressway between Cadiz and Hopedale for 2.5 miles between two four-lane divided segments.
Oklahoma Chickasaw Turnpike.
Oregon U.S. Route 97 is a two- and three-lane undivided freeway bypass of Wasco. Parts of the Klamath Falls bypass are also two-lane undivided freeway.
U.S. Route 101, from the southern edge of Cannon Beach north to the interchange with U.S. Route 26 south of Seaside.
List of two-lane freeways:
Oregon Route 22 is a four-lane divided freeway from Salem east to just north of Aumsville. It becomes a true freeway for about five miles (8.0 km) through Stayton/Sublimity, then is a two-lane freeway for about another mile east. (The freeway section between Aumsville and Stayton used to be a two-lane freeway.)
Pennsylvania U.S. Route 220, south of the Bedford Fairgrounds interchange to the intersection with Business Route 220.
List of two-lane freeways:
Route 8, just north of the Interstate 80 interchange in Barkeyville, for 1.4 miles before it upgrades to a four-lane divided highway.
Rhode Island Route 78, which starts about 200 yards (180 m) inside Connecticut bypasses the city of Westerly to the north and east, and is a key route for traffic heading between Interstate 95 and the Rhode Island beaches.
Route 138 was originally a two-lane freeway between U.S. Route 1 and the Jamestown Bridge, but was widened to four-lanes following the opening of the Jamestown-Verrazano Bridge in 1992.
Texas State Highway 99 (SH 99, Grand Parkway) in Liberty and Chambers counties (consisting of Segments H and I-1) is a two-lane toll road.
A section of SH 105 south of Cleveland is a two-lane freeway.
SH 249 (Aggie Expressway) within Grimes County is a non-tolled two-lane freeway.
Most of Loop 322 in Abilene was once a two-lane freeway; however, it was later upgraded to a conventional divided highway.
Loop 49 in Tyler is a two-lane tollway.
US Highway 82 (US 82) is a two-lane expressway between Bonham and its merger with SH 56 at Honey Grove.
US 287 was a two-lane expressway between I-45 and Bus. US 287 (Ennis Avenue) near Ennis; however, it has since been upgraded to a four-lane divided highway with frontage roads.
The Chisholm Trail Parkway in Johnson County is a two-lane tollway with intermittent passing lanes.
As of January 2018, the proposed Cibolo Parkway in Cibolo will initially be built as a two-lane tollway.
Both US 190 and SH 9 are briefly two-lane freeways while bypassing Copperas Cove; SH 9 for its entire length except its western terminus, and US 190 from about half a mile east of its intersection with FM 2657 to the point where it merges with Bus. US 190 and SH 9 to become I-14.
Utah Utah State Route 7 is routed around St. George on a two-lane freeway bypass and will be fully upgraded to a divided freeway when demand warrants.
U.S. Route 6 / U.S. Route 191 is routed around Price on a two-lane freeway bypass.
List of two-lane freeways:
Vermont U.S. Route 7 just north of Bennington to just north of Manchester is a four-lane freeway that turns into a two-lane freeway just after exit 2. There are two more exits on this section of U.S. Route 7. Exit 3 serves Vermont Route 7A in the Arlington area, and exit 4 serves Vermont Route 30 and Vermont Route 11 for Manchester.
List of two-lane freeways:
Vermont Route 289 around Burlington. Plans to extend the Super 2 both north and south were cancelled by Vermont Governor Peter Shumlin in May 2011. Mile markers on the constructed segment are based on the entire length as originally planned.
The Bennington Bypass is a two-lane freeway bypass of Bennington. Two segments, from New York Route 7 to U.S. Route 7 and from U.S. Route 7 to Vermont Route 9 are open and signed as Vermont Route 279. The remaining southern portion of the bypass remains to be built and is unfunded.
A short section of Vermont Route 127 in Burlington known as the Winooski Valley Parkway is built as a two-lane expressway.
Virginia The Staunton Loop Road (State Route 262) is a two-lane freeway for most of its length. Grading already exists for this highway to be upgraded to a fully divided highway in the future.
U.S. Route 501 in Lynchburg has a short two-lane expressway segment.
State Route 10 and U.S. Route 258 where they bypass Smithfield.
Washington U.S. Route 2 between Bickford Road and 92nd St S.E. around Snohomish.
List of two-lane freeways:
U.S. Route 101 from the interchange with State Route 3 to the northern city limits of Shelton (half-freeway with two-way traffic on northbound side and no plans for the southbound half being constructed) and a section between Sequim and Port Angeles (half-freeway with two-way traffic on the eastbound side, with some intersection segments upgraded to full freeway and plans for further improvements).
List of two-lane freeways:
U.S. Route 195 has two-lane expressway segments between Spangle and Plaza, and on the Thornton bypass. The Plaza bypass is a two-lane freeway.
Washington State Route 9 is a two-lane expressway from Marsh Road in Snohomish to Arlington, with the exception of a 2-mile four-lane divided section through Lake Stevens; there are plans for more four-laning between Snohomish and Lake Stevens.
State Route 522 after the at-grade intersection with State Route 524 to the Snohomish River near Monroe.
West Virginia The West Virginia Turnpike was a two-lane freeway from its opening in 1954 until it was expanded to four lanes in 1986.
US 52 from Ohio State Route 7 to Interstate 64.
The King Coal Highway segment in Red Jacket is a two-lane expressway as part of the future U.S. Route 52 replacement corridor.
Wisconsin US 14, south of the interchange with County Trunk Highway MM at Oregon, to Wisconsin Highway 138 (WIS 138). This section was expanded to four lanes during the middle of 2009.
A segment of WIS 26 bypassing Fort Atkinson was built as a two-lane limited-access freeway. This section was expanded to four lanes during the middle of 2011.
US 45, from its split from US 41 (now I-41) north of Milwaukee to a point just north of West Bend, Wisconsin, was built as a two-lane freeway, then expanded to four lanes in 1990.
The US 151 bypasses of Beaver Dam and Waupun were originally built as Super 2s during the 1970s to accommodate future expansion; these have since been upgraded as part of the highway's ongoing conversion to a four-lane facility through the entire state.
List of two-lane freeways:
Portions of I-39/US 51, first near Westfield and later near Tomahawk, were built as two-lane freeways; these were expanded in the late 1980s and 1990s respectively. While I-39 ends in Wausau, a portion of US 51 north of Tomahawk remains a Super 2, with a stub allowing for future expansion to four lanes north of the US 8 interchange.
List of two-lane freeways:
WIS 35/WIS 65 on the River Falls Bypass from WIS 29 to where the four-lane section begins.
Vietnam The section from Yen Bai City to Lao Cai City of the Hanoi-Lao Cai Expressway is two-laned.
The section between Cam Lo and Hoa Lien of the North-South expressway is two-laned.
**Serbian Astronomical Journal**
Serbian Astronomical Journal:
The Serbian Astronomical Journal is a biannual peer-reviewed scientific journal covering astronomy. The journal is the successor of the Bulletin Astronomique de Belgrade (1992–1998), which was formed by a merger of the Bulletin de l'Observatoire Astronomique de Belgrade (1936–1991) and the Publications of the Department of Astronomy (1969–1990). It has been published under the present title since 1998. It is published by the Astronomical Observatory Belgrade and the Department of Astronomy (Faculty of Mathematics, University of Belgrade). It publishes invited reviews, original scientific papers, preliminary reports, and professional papers over the entire range of astronomy, astrophysics, astrobiology, and related fields.
Abstracting and indexing:
The Serbian Astronomical Journal is abstracted and indexed by Astrophysics Data System, Chemical Abstracts, Referativni Zhurnal, EBSCO databases, Scopus, and Thomson Reuters products.
**Bose–Einstein statistics**
Bose–Einstein statistics:
In quantum statistics, Bose–Einstein statistics (B–E statistics) describes one of two possible ways in which a collection of non-interacting, indistinguishable particles may occupy a set of available discrete energy states at thermodynamic equilibrium. The aggregation of particles in the same state, which is a characteristic of particles obeying Bose–Einstein statistics, accounts for the cohesive streaming of laser light and the frictionless creeping of superfluid helium. The theory of this behaviour was developed (1924–25) by Satyendra Nath Bose, who recognized that a collection of identical and indistinguishable particles can be distributed in this way. The idea was later adopted and extended by Albert Einstein in collaboration with Bose.
Bose–Einstein statistics:
Bose–Einstein statistics applies only to particles that are not limited to single occupancy of the same state – that is, particles that are not subject to the restrictions of the Pauli exclusion principle. Such particles have integer values of spin and are called bosons. Particles with half-integer spin are called fermions and obey Fermi–Dirac statistics.
Bose–Einstein distribution:
At low temperatures, bosons behave differently from fermions (which obey the Fermi–Dirac statistics) in that an unlimited number of them can "condense" into the same energy state. This apparently unusual property also gives rise to the special state of matter – the Bose–Einstein condensate. Fermi–Dirac and Bose–Einstein statistics apply when quantum effects are important and the particles are "indistinguishable". Quantum effects appear if the concentration of particles satisfies $N/V \geq n_q$, where N is the number of particles, V is the volume, and $n_q$ is the quantum concentration, for which the interparticle distance is equal to the thermal de Broglie wavelength, so that the wavefunctions of the particles are barely overlapping.
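As a rough numerical illustration of this criterion, the Python sketch below computes the quantum concentration $n_q = \lambda_{th}^{-3}$ from the thermal de Broglie wavelength $\lambda_{th} = h/\sqrt{2\pi m k_B T}$ and compares it with a particle concentration N/V. The choice of helium-4, the temperatures, and the number density are assumed example values, not figures from this article.

```python
# Illustrative sketch: compare the particle concentration N/V with the
# quantum concentration n_q for an assumed helium-4 example.
import math

h  = 6.62607015e-34   # Planck constant, J*s
kB = 1.380649e-23     # Boltzmann constant, J/K

def quantum_concentration(mass_kg, temperature_K):
    """n_q = 1 / lambda_th^3, with lambda_th = h / sqrt(2*pi*m*kB*T)."""
    lambda_th = h / math.sqrt(2 * math.pi * mass_kg * kB * temperature_K)
    return lambda_th ** -3

m_he4 = 6.6464731e-27          # mass of a helium-4 atom, kg
n = 2.2e28                     # assumed number density of liquid helium, m^-3
for T in (300.0, 4.2, 0.1):    # room temperature, liquid helium, ultracold
    n_q = quantum_concentration(m_he4, T)
    regime = "quantum statistics needed" if n >= n_q else "classical regime"
    print(f"T = {T:6.1f} K: n/n_q = {n / n_q:.3e} -> {regime}")
```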
Bose–Einstein distribution:
Fermi–Dirac statistics applies to fermions (particles that obey the Pauli exclusion principle), and Bose–Einstein statistics applies to bosons. As the quantum concentration depends on temperature, most systems at high temperatures obey the classical (Maxwell–Boltzmann) limit, unless they also have a very high density, as for a white dwarf. Both Fermi–Dirac and Bose–Einstein become Maxwell–Boltzmann statistics at high temperature or at low concentration.
Bose–Einstein distribution:
B–E statistics was introduced for photons in 1924 by Bose and generalized to atoms by Einstein in 1924–25.
Bose–Einstein distribution:
The expected number of particles in an energy state i for B–E statistics is

$$\bar{n}_i = \frac{g_i}{e^{(\varepsilon_i - \mu)/k_B T} - 1},$$

with $\varepsilon_i > \mu$, where $n_i$ is the occupation number (the number of particles) in state i, $g_i$ is the degeneracy of energy level i, $\varepsilon_i$ is the energy of the i-th state, $\mu$ is the chemical potential (zero for a photon gas), $k_B$ is the Boltzmann constant, and T is the absolute temperature.
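A minimal Python sketch of this formula is given below. The example values (a photon gas with $\mu = 0$ at 5800 K, evaluated at the energy of 500 nm light) are illustrative assumptions.

```python
# Minimal sketch of the Bose-Einstein occupation number given above.
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def bose_einstein(epsilon, mu, T, g=1.0):
    """Expected number of bosons in a state of energy epsilon (epsilon > mu)."""
    if epsilon <= mu:
        raise ValueError("Bose-Einstein statistics requires epsilon > mu")
    return g / (math.exp((epsilon - mu) / (kB * T)) - 1.0)

# Example: photon gas (mu = 0) at T = 5800 K, state at the energy of 500 nm light.
h, c = 6.62607015e-34, 2.998e8
epsilon = h * c / 500e-9
print(bose_einstein(epsilon, mu=0.0, T=5800.0))   # mean photon number per mode
```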
Bose–Einstein distribution:
The variance of this distribution V(n) is calculated directly from the expression above for the average number.
Bose–Einstein distribution:
For comparison, the average number of fermions with energy $\varepsilon_i$ given by the Fermi–Dirac particle-energy distribution has a similar form:

$$\bar{n}_i = \frac{g_i}{e^{(\varepsilon_i - \mu)/k_B T} + 1}.$$

As mentioned above, both the Bose–Einstein distribution and the Fermi–Dirac distribution approach the Maxwell–Boltzmann distribution in the limit of high temperature and low particle density, without the need for any ad hoc assumptions. In the limit of low particle density, $\bar{n}_i = \frac{g_i}{e^{(\varepsilon_i - \mu)/k_B T} \pm 1} \ll 1$, therefore $e^{(\varepsilon_i - \mu)/k_B T} \pm 1 \gg 1$, or equivalently $e^{(\varepsilon_i - \mu)/k_B T} \gg 1$. In that case, $\bar{n}_i \approx \frac{g_i}{e^{(\varepsilon_i - \mu)/k_B T}} = \frac{1}{Z} e^{-\varepsilon_i/k_B T}$, which is the result from Maxwell–Boltzmann statistics.
Bose–Einstein distribution:
In the limit of high temperature, the particles are distributed over a large range of energy values, so the occupancy of each state (especially the high-energy ones with $\varepsilon_i - \mu \gg k_B T$) is again very small, $\bar{n}_i = \frac{g_i}{e^{(\varepsilon_i - \mu)/k_B T} \pm 1} \ll 1$. This again reduces to Maxwell–Boltzmann statistics.

In addition to reducing to the Maxwell–Boltzmann distribution in the limit of high T and low density, B–E statistics also reduces to the Rayleigh–Jeans law for low-energy states with $\varepsilon_i - \mu \ll k_B T$, namely

$$\bar{n}_i = \frac{g_i}{e^{(\varepsilon_i - \mu)/k_B T} - 1} \approx \frac{g_i k_B T}{\varepsilon_i - \mu}.$$
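The two limits can be checked numerically. In the hedged sketch below, everything is expressed in terms of the dimensionless variable $x = (\varepsilon_i - \mu)/k_B T$; the sample values of x are arbitrary choices for illustration.

```python
# Sketch checking the two limits of the Bose-Einstein distribution discussed
# above, in terms of x = (epsilon_i - mu) / (kB * T); the x values are examples.
import math

def n_be(x, g=1.0):            # Bose-Einstein occupation
    return g / (math.exp(x) - 1.0)

def n_mb(x, g=1.0):            # Maxwell-Boltzmann limit, valid for x >> 1
    return g * math.exp(-x)

def n_rj(x, g=1.0):            # Rayleigh-Jeans limit, valid for x << 1
    return g / x

for x in (20.0, 10.0, 1.0, 0.1, 0.01):
    print(f"x={x:6.2f}  BE={n_be(x):.4e}  MB={n_mb(x):.4e}  RJ={n_rj(x):.4e}")
# For large x the BE and MB columns agree; for small x the BE and RJ columns agree.
```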
History:
Władysław Natanson in 1911 concluded that Planck's law requires indistinguishability of "units of energy", although he did not frame this in terms of Einstein's light quanta.While presenting a lecture at the University of Dhaka (in what was then British India and is now Bangladesh) on the theory of radiation and the ultraviolet catastrophe, Satyendra Nath Bose intended to show his students that the contemporary theory was inadequate, because it predicted results not in accordance with experimental results. During this lecture, Bose committed an error in applying the theory, which unexpectedly gave a prediction that agreed with the experiment. The error was a simple mistake—similar to arguing that flipping two fair coins will produce two heads one-third of the time—that would appear obviously wrong to anyone with a basic understanding of statistics (remarkably, this error resembled the famous blunder by d'Alembert known from his Croix ou Pile article). However, the results it predicted agreed with experiment, and Bose realized it might not be a mistake after all. For the first time, he took the position that the Maxwell–Boltzmann distribution would not be true for all microscopic particles at all scales. Thus, he studied the probability of finding particles in various states in phase space, where each state is a little patch having phase volume of h3, and the position and momentum of the particles are not kept particularly separate but are considered as one variable.
History:
Bose adapted this lecture into a short article called "Planck's law and the hypothesis of light quanta" and submitted it to the Philosophical Magazine. However, the referee's report was negative, and the paper was rejected. Undaunted, he sent the manuscript to Albert Einstein requesting publication in the Zeitschrift für Physik. Einstein immediately agreed, personally translated the article from English into German (Bose had earlier translated Einstein's article on the general theory of relativity from German to English), and saw to it that it was published. Bose's theory achieved respect when Einstein sent his own paper in support of Bose's to Zeitschrift für Physik, asking that they be published together. The paper came out in 1924. The reason Bose produced accurate results was that since photons are indistinguishable from each other, one cannot treat any two photons having equal quantum numbers (e.g., polarization and momentum vector) as being two distinct identifiable photons. Bose originally had a factor of 2 for the possible spin states, but Einstein changed it to polarization. By analogy, if in an alternate universe coins were to behave like photons and other bosons, the probability of producing two heads would indeed be one-third, as would the probability of getting one head and one tail (which equals one-half for conventional, classical, distinguishable coins). Bose's "error" leads to what is now called Bose–Einstein statistics.
History:
Bose and Einstein extended the idea to atoms and this led to the prediction of the existence of phenomena which became known as Bose–Einstein condensate, a dense collection of bosons (which are particles with integer spin, named after Bose), which was demonstrated to exist by experiment in 1995.
Derivation:
Derivation from the microcanonical ensemble
In the microcanonical ensemble, one considers a system with fixed energy, volume, and number of particles. We take a system composed of $N = \sum_i n_i$ identical bosons, $n_i$ of which have energy $\varepsilon_i$ and are distributed over $g_i$ levels or states with the same energy $\varepsilon_i$, i.e. $g_i$ is the degeneracy associated with energy $\varepsilon_i$, with total energy $E = \sum_i n_i \varepsilon_i$. Calculation of the number of arrangements of $n_i$ particles distributed among $g_i$ states is a problem of combinatorics. Since particles are indistinguishable in the quantum-mechanical context here, the number of ways of arranging $n_i$ particles in $g_i$ boxes (for the i-th energy level) is

$$w_i = C^{\,n_i + g_i - 1}_{\,n_i} = \frac{(n_i + g_i - 1)!}{n_i!\,(g_i - 1)!},$$

where $C^m_k$ is the k-combination of a set with m elements. The total number of arrangements in an ensemble of bosons is simply the product of the binomial coefficients $C^{\,n_i + g_i - 1}_{\,n_i}$ above over all the energy levels, i.e.

$$W_{BE} = \prod_i \frac{(n_i + g_i - 1)!}{n_i!\,(g_i - 1)!}.$$
Derivation:
The maximum number of arrangements, determining the corresponding occupation numbers $n_i$, is obtained by maximizing the entropy, or equivalently, setting $\mathrm{d}(\ln W_{BE}) = 0$ and taking the subsidiary conditions $N = \sum_i n_i$, $E = \sum_i n_i \varepsilon_i$ into account (via Lagrange multipliers). The result for $n_i \gg 1$, $g_i \gg 1$, $n_i/g_i = O(1)$ is the Bose–Einstein distribution.
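The counting formula used at the start of this derivation is easy to verify by brute force. The sketch below enumerates all ways of placing n indistinguishable particles in g states and checks the result against the binomial coefficient; the small values of n and g are arbitrary examples.

```python
# Sketch verifying the bosonic counting formula used in the microcanonical
# derivation: the number of ways to place n indistinguishable particles in
# g states is C(n + g - 1, n).
from itertools import combinations_with_replacement
from math import comb

def count_by_enumeration(n, g):
    """Enumerate the multisets of size n drawn from g states and count them."""
    return sum(1 for _ in combinations_with_replacement(range(g), n))

for n, g in [(2, 2), (3, 4), (5, 3)]:
    assert count_by_enumeration(n, g) == comb(n + g - 1, n)
    print(f"n={n}, g={g}: {comb(n + g - 1, n)} arrangements")
```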
Derivation from the grand canonical ensemble The Bose–Einstein distribution, which applies only to a quantum system of non-interacting bosons, is naturally derived from the grand canonical ensemble without any approximations. In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperature T and chemical potential µ fixed by the reservoir).
Derivation:
Due to the non-interacting quality, each available single-particle level (with energy $\varepsilon$) forms a separate thermodynamic system in contact with the reservoir. That is, the number of particles within the overall system that occupy a given single-particle state forms a sub-ensemble that is also a grand canonical ensemble; hence, it may be analysed through the construction of a grand partition function.
Derivation:
Every single-particle state is of a fixed energy, $\varepsilon$. As the sub-ensemble associated with a single-particle state varies by the number of particles only, the total energy of the sub-ensemble is directly proportional to the number of particles in the single-particle state; where N is the number of particles, the total energy of the sub-ensemble is then $N\varepsilon$. Beginning with the standard expression for a grand partition function and replacing E with $N\varepsilon$, the grand partition function takes the form

$$\mathcal{Z} = \sum_{N} e^{N(\mu - \varepsilon)/k_B T}.$$

This formula applies to fermionic systems as well as bosonic systems. Fermi–Dirac statistics arises when considering the effect of the Pauli exclusion principle: whilst the number of fermions occupying the same single-particle state can only be either 1 or 0, the number of bosons occupying a single-particle state may be any integer. Thus, the grand partition function for bosons can be considered a geometric series and may be evaluated as such:

$$\mathcal{Z} = \sum_{N=0}^{\infty} \left(e^{(\mu - \varepsilon)/k_B T}\right)^{N} = \frac{1}{1 - e^{(\mu - \varepsilon)/k_B T}}.$$

Note that the geometric series is convergent only if $e^{(\mu - \varepsilon)/k_B T} < 1$, including the case where $\varepsilon = 0$. This implies that the chemical potential for the Bose gas must be negative, i.e. $\mu < 0$, whereas the Fermi gas is allowed to take both positive and negative values for the chemical potential.

The average particle number for that single-particle substate is given by

$$\langle N \rangle = k_B T \,\frac{\partial \ln \mathcal{Z}}{\partial \mu} = \frac{1}{e^{(\varepsilon - \mu)/k_B T} - 1}.$$

This result applies for each single-particle level and thus forms the Bose–Einstein distribution for the entire state of the system.

The variance in particle number, $\sigma_N^2 = \langle N^2 \rangle - \langle N \rangle^2$, is

$$\sigma_N^2 = k_B T \,\frac{\partial \langle N \rangle}{\partial \mu} = \langle N \rangle \left(1 + \langle N \rangle\right).$$

As a result, for highly occupied states the standard deviation of the particle number of an energy level is very large, slightly larger than the particle number itself: $\sigma_N \approx \langle N \rangle$. This large uncertainty is due to the fact that the probability distribution for the number of bosons in a given energy level is a geometric distribution; somewhat counterintuitively, the most probable value for N is always 0. (In contrast, classical particles have a Poisson distribution in particle number for a given state, with a much smaller uncertainty of $\sigma_{N,\mathrm{classical}} = \sqrt{\langle N \rangle}$, and with the most probable value of N being near $\langle N \rangle$.)

Derivation in the canonical approach
It is also possible to derive approximate Bose–Einstein statistics in the canonical ensemble. These derivations are lengthy and only yield the above results in the asymptotic limit of a large number of particles. The reason is that the total number of bosons is fixed in the canonical ensemble. The Bose–Einstein distribution in this case can be derived as in most texts by maximization, but the mathematically best derivation is by the Darwin–Fowler method of mean values, as emphasized by Dingle. See also Müller-Kirsten. The fluctuations of the ground state in the condensed region are, however, markedly different in the canonical and grand-canonical ensembles.
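The grand-canonical results above can be checked by summing the geometric distribution directly. In the sketch below, the dimensionless value $x = (\varepsilon - \mu)/k_B T$ is an arbitrary example and the series is truncated at a large cutoff.

```python
# Sketch checking the grand-canonical results quoted above by direct summation
# of the geometric distribution P(N) ~ z^N, with z = exp((mu - eps)/(kB*T)) < 1.
import math

x = 0.5                      # assumed (eps - mu) / (kB * T), must be > 0
z = math.exp(-x)             # Boltzmann-like factor, < 1 so the series converges

# Direct summation over particle numbers N = 0, 1, 2, ... (truncated)
Ns = range(2000)
Z = sum(z**N for N in Ns)                       # grand partition function (geometric series)
meanN = sum(N * z**N for N in Ns) / Z
meanN2 = sum(N * N * z**N for N in Ns) / Z
varN = meanN2 - meanN**2

# Closed forms from the derivation above
meanN_closed = 1.0 / (math.exp(x) - 1.0)
varN_closed = meanN_closed * (1.0 + meanN_closed)

print(f"<N>  direct {meanN:.6f}  closed {meanN_closed:.6f}")
print(f"var  direct {varN:.6f}  closed {varN_closed:.6f}")
print(f"Z    direct {Z:.6f}  closed {1.0 / (1.0 - z):.6f}")
```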
Interdisciplinary applications:
Viewed as a pure probability distribution, the Bose–Einstein distribution has found application in other fields: In recent years, Bose–Einstein statistics has also been used as a method for term weighting in information retrieval. The method is one of a collection of DFR ("Divergence From Randomness") models, the basic notion being that Bose–Einstein statistics may be a useful indicator in cases where a particular term and a particular document have a significant relationship that would not have occurred purely by chance. Source code for implementing this model is available from the Terrier project at the University of Glasgow.
Interdisciplinary applications:
The evolution of many complex systems, including the World Wide Web, business, and citation networks, is encoded in the dynamic web describing the interactions between the system's constituents. Despite their irreversible and nonequilibrium nature, these networks follow Bose statistics and can undergo Bose–Einstein condensation. Addressing the dynamical properties of these nonequilibrium systems within the framework of equilibrium quantum gases predicts that the "first-mover-advantage", "fit-get-rich" (FGR) and "winner-takes-all" phenomena observed in competitive systems are thermodynamically distinct phases of the underlying evolving networks.
**Built-in breathing system**
Built-in breathing system:
A built-in breathing system is a source of breathing gas installed in a confined space where an alternative to the ambient gas may be required for medical treatment, emergency use, or to minimise a hazard. They are found in diving chambers, hyperbaric treatment chambers, and submarines.
Built-in breathing system:
The use in hyperbaric treatment chambers is usually to supply an oxygen rich treatment gas which if used as the chamber atmosphere, would constitute an unacceptable fire hazard. In this application the exhaust gas is vented outside of the chamber. In saturation diving chambers and surface decompression chamber the application is similar, but a further function is a supply of breathable gas in case of toxic contamination of the chamber atmosphere. This function does not require external venting, but the same equipment is typically used for supply of oxygen enriched gases, so they are generally vented to the exterior.
Built-in breathing system:
In submarines the function is to supply a breathable gas in an emergency, which may be contamination of the ambient internal atmosphere, or flooding. In this application venting to the interior is both acceptable and generally the only feasible option, as the exterior is typically at a higher pressure than the interior, and external venting is not possible by passive means.
Function:
Externally vented BIBS These are systems used to supply breathing gas on demand in a chamber which is at a pressure greater than the ambient pressure outside the chamber. The pressure difference between chamber and external ambient pressure makes it possible to exhaust the exhaled gas to the external environment, but the flow must be controlled so that only exhaled gas is vented through the system, and it does not drain the contents of the chamber to the outside. This is achieved by using a controlled exhaust valve which opens when a slight over-pressure relative to the chamber pressure on the exhaust diaphragm moves the valve mechanism against a spring. When this over-pressure is dissipated by the gas flowing out through the exhaust hose, the spring returns this valve to the closed position, cutting off further flow, and conserving the chamber atmosphere. A negative or zero pressure difference over the exhaust diaphragm will keep it closed. The exhaust diaphragm is exposed to the chamber pressure on one side, and exhaled gas pressure in the oro-nasal mask on the other side. The supply of gas for inhalation is through a demand valve which works on the same principles as a regular diving demand valve second stage. Like any other breathing apparatus, the dead space must be limited to minimise carbon dioxide buildup in the mask.
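As a very simplified illustration of the exhaust logic just described, the sketch below models the valve as opening only when mask pressure exceeds chamber pressure by a small spring preload, so exhaled gas is vented outside without draining the chamber atmosphere. The preload value and the pressures are assumed example figures, not specifications of any real system.

```python
# Illustrative sketch (not a real control system) of the externally vented
# BIBS exhaust behaviour described above: the exhaust valve opens only when
# the pressure in the oro-nasal mask exceeds chamber pressure by a small
# spring preload, so only exhaled gas is vented to the external exhaust hose.
SPRING_PRELOAD_BAR = 0.02   # assumed over-pressure needed to lift the valve

def exhaust_valve_open(mask_pressure_bar, chamber_pressure_bar):
    """Return True while exhaled gas should flow to the external exhaust hose."""
    return (mask_pressure_bar - chamber_pressure_bar) > SPRING_PRELOAD_BAR

chamber = 2.8  # bar absolute, e.g. a treatment depth of about 18 msw
for mask in (2.79, 2.80, 2.81, 2.85):   # mask pressure during the breathing cycle, bar
    state = "venting" if exhaust_valve_open(mask, chamber) else "closed"
    print(f"mask {mask:.2f} bar vs chamber {chamber:.2f} bar -> valve {state}")
```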
Function:
In some cases the outlet suction must be limited and a back-pressure regulator may be required. This would usually be the case for use in a saturation system. Use for oxygen therapy and surface decompression on oxygen would not generally need a back-pressure regulator. When an externally vented BIBS is used at low chamber pressure, a vacuum assist may be necessary to keep the exhalation backpressure down to provide an acceptable work of breathing. The oro-nasal mask may be interchangeable for hygienic use by different people. Some models are rated for pressures up to 450 msw. The major application for this type of BIBS is supply of breathing gas with a different composition to the chamber atmosphere to occupants of a hyperbaric chamber where the chamber atmosphere is controlled, and contamination by the BIBS gas would be a problem. This is common in therapeutic decompression and hyperbaric oxygen therapy, where a higher partial pressure of oxygen in the chamber would constitute an unacceptable fire hazard, and would require frequent ventilation of the chamber to keep the partial pressure within acceptable limits. Frequent ventilation is noisy and expensive, but can be used in an emergency. It is also necessary that the BIBS gas is not contaminated by chamber gas, as this could adversely affect decompression. When this format of BIBS is fitted, it can also be used for emergency breathing gas supply in the event of contaminated chamber atmosphere, though in those cases the contamination by exhaled BIBS gas would usually not be important.
Function:
Locally vented BIBS When contamination of the internal atmosphere is not important, and where the external ambient pressure is higher than in the occupied space, exhaled gas is simply dumped into the internal volume, requiring no special flow control beyond a simple non-return valve. The delivery and exhaust mechanism of a BIBS demand valve for this application is the same as for a scuba or SCBA second stage regulator, and these can be used for this purpose with little or no modification. This type of breathing apparatus may also use a full-face mask for delivery.
Applications:
Hyperbaric oxygen therapy The traditional type of hyperbaric chamber used for therapeutic recompression and hyperbaric oxygen therapy is a rigid shelled pressure vessel. Such chambers can be run at absolute pressures typically about 6 bars (87 psi), 600,000 Pa or more in special cases. Navies, professional diving organizations, hospitals, and dedicated recompression facilities typically operate these. They range in size from semi-portable, one-patient units to room-sized units that can treat eight or more patients. They may be rated for lower pressures if not primarily intended for treatment of diving injuries.
Applications:
In the larger multiplace chambers, patients inside the chamber breathe from either "oxygen hoods" – flexible, transparent soft plastic hoods with a seal around the neck similar to a space suit helmet – or tightly fitting oxygen masks, which supply pure oxygen and may be designed to directly exhaust the exhaled gas from the chamber. During treatment patients breathe 100% oxygen most of the time to maximise the effectiveness of their treatment, but have periodic "air breaks" during which they breathe chamber air (21% oxygen) to reduce the risk of oxygen toxicity. The exhaled treatment gas must be removed from the chamber to prevent the buildup of oxygen, which could present a fire risk. Attendants may also breathe oxygen some of the time to reduce their risk of decompression sickness when they leave the chamber. The pressure inside the chamber is increased by opening valves allowing high-pressure air to enter from storage cylinders, which are filled by an air compressor. Chamber air oxygen content is kept between 19% and 23% to control fire risk (US Navy maximum 25%). If the chamber does not have a scrubber system to remove carbon dioxide from the chamber gas, the chamber must be isobarically ventilated to keep the CO2 within acceptable limits.
Applications:
Therapeutic recompression Hyperbaric oxygen therapy was developed as a treatment for diving disorders involving bubbles of gas in the tissues, such as decompression sickness and gas embolism, and it is still considered the definitive treatment for these conditions. The recompression treats decompression sickness and gas embolism by increasing pressure, which reduces the size of the gas bubbles and improves the transport of blood to downstream tissues. Elimination of the inert component of the breathing gas by breathing oxygen provides a stronger concentration gradient to eliminate dissolved inert gas still in the tissues, and further accelerates bubble reduction by dissolving the gas back into the blood. After elimination of bubbles, the pressure is gradually reduced back to atmospheric levels. The raised oxygen partial pressures in the blood may also help recovery of oxygen-starved tissues downstream of the blockages.
Applications:
Emergency treatment for decompression illness follows schedules laid out in treatment tables. Most treatments recompress to 2.8 bars (41 psi) absolute, the equivalent of 18 metres (60 ft) of water, for 4.5 to 5.5 hours with the casualty breathing pure oxygen, but taking periodic air breaks to reduce oxygen toxicity. For serious cases resulting from very deep dives, the treatment may require a chamber capable of a maximum pressure of 8 bars (120 psi), the equivalent of 70 metres (230 ft) of water, and the ability to supply heliox and nitrox as a breathing gas.
Applications:
Surface decompression Surface decompression is a procedure in which some or all of the staged decompression obligation is done in a decompression chamber instead of in the water. This reduces the time that the diver spends in the water, exposed to environmental hazards such as cold water or currents, which enhances diver safety. The decompression in the chamber is more controlled, in a more comfortable environment, and oxygen can be used at greater partial pressure as there is no risk of drowning and a lower risk of oxygen toxicity convulsions. A further operational advantage is that once the divers are in the chamber, new divers can be supplied from the diving panel, and the operations can continue with less delay. A typical surface decompression procedure is described in the US Navy Diving Manual. If no in-water 40 ft stop is required, the diver is surfaced directly. Otherwise, all required decompression up to and including the 40 ft (12 m) stop is completed in-water. The diver is then surfaced and pressurised in a chamber to 50 fsw (15 msw) within 5 minutes of leaving 40 ft depth in the water. If this "surface interval" from 40 ft in the water to 50 fsw in the chamber exceeds 5 minutes, a penalty is incurred, as this indicates a higher risk of DCS symptoms developing, so longer decompression is required. In the case where the diver is successfully recompressed within the nominal interval, he will be decompressed according to the schedule in the air decompression tables for surface decompression, preferably on oxygen, which is used from 50 fsw (15 msw), a partial pressure of 2.5 bar. The duration of the 50 fsw stop is 15 minutes for the Revision 6 tables. The chamber is then decompressed to 40 fsw (12 msw) for the next stage of up to 4 periods of 30 minutes each on oxygen. A stop may also be done at 30 fsw (9 msw), for further periods on oxygen according to the schedule. Air breaks of 5 minutes are taken at the end of each 30 minutes of oxygen breathing.
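The relationship between the chamber stop depths and the oxygen partial pressures mentioned above can be illustrated with a simple conversion, using the common approximation that 33 fsw corresponds to 1 atm (about 1.013 bar). The depths and gas fractions in the sketch below follow the figures quoted in this section; the helper names are illustrative.

```python
# Sketch converting a chamber stop depth in feet of seawater (fsw) to absolute
# pressure and oxygen partial pressure, using the approximation 33 fsw = 1 atm.
ATM_BAR = 1.013
FSW_PER_ATM = 33.0

def absolute_pressure_bar(depth_fsw):
    return (1.0 + depth_fsw / FSW_PER_ATM) * ATM_BAR

def o2_partial_pressure_bar(depth_fsw, o2_fraction):
    return absolute_pressure_bar(depth_fsw) * o2_fraction

for depth in (50, 40, 30):             # chamber stops used in surface decompression
    ppo2_oxygen = o2_partial_pressure_bar(depth, 1.00)   # breathing 100% O2 on BIBS
    ppo2_air = o2_partial_pressure_bar(depth, 0.21)      # during an air break
    print(f"{depth} fsw: P_abs {absolute_pressure_bar(depth):.2f} bar, "
          f"ppO2 on oxygen {ppo2_oxygen:.2f} bar, on air {ppo2_air:.2f} bar")
# At 50 fsw on oxygen this gives roughly 2.5 bar, matching the figure quoted above.
```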
Applications:
Saturation systems emergency gas supply During decompression from saturation, a pressure will be reached where raising the oxygen concentration further would cause an unacceptable fire hazard, while keeping it at an acceptable level for fire risk would be inefficient for decompression. BIBS supply of breathing gas with higher oxygen content than the chamber atmosphere can solve this problem. If the atmosphere in a saturation habitat is contaminated, the inhabitants can use the available BIBS masks during the emergency and be supplied with non-contaminated breathing gas until the problem has been solved.
Applications:
Submarine emergency gas supply Submarine BIBS systems are intended to provide the crew with diving-quality air or nitrox breathing gas in an emergency escape situation where the interior may be partly or completely flooded, and may be at a significantly higher than atmospheric pressure. The supply gas is provided from a high-pressure storage bank at a pressure automatically compensated for depth and is distributed around the vessel to points where the breathing units can be connected as required.
**Penex**
Penex:
The Penex process is a continuous catalytic process used in the refining of crude oil. It isomerizes light straight run naphtha (C5/C6) into higher-octane, branched C5/C6 molecules. It also reduces the concentration of benzene in the gasoline pool. It was first used commercially in 1958.
Ideally, the isomerization catalyst converts normal pentane (nC5) to isopentane (iC5) and normal hexane (nC6) to 2,2- and 2,3-dimethylbutane. The thermodynamic equilibrium is more favorable at low temperature.
Penex:
The Penex process uses fixed-bed catalysts containing chlorides. A single pass of feedstock with an octane rating of 50-60 through such a bed typically produces an end product rated at 82-86. If the feedstock is subsequently passed through a DIH (deisohexanizer) column, the end product typically has an octane rating of 87-90.5. If the feedstock is subsequently passed through a Molex-technology column, the end product typically has an octane rating of 88-91. If the feedstock is first passed through a DIP (deisopentanizer) column to remove iso-pentanes, then through the Penex bed, and subsequently through the DIH column, the end product typically has an octane rating of 91-93.
Penex:
The Penex process is licensed by the UOP corporation and currently utilized at more than 120 units at petroleum refineries and natural gas liquids plants throughout the world.
**MRPS25**
MRPS25:
28S ribosomal protein S25, mitochondrial is a protein that in humans is encoded by the MRPS25 gene. Mammalian mitochondrial ribosomal proteins are encoded by nuclear genes and help in protein synthesis within the mitochondrion. Mitochondrial ribosomes (mitoribosomes) consist of a small 28S subunit and a large 39S subunit. They have an estimated 75% protein to rRNA composition compared to prokaryotic ribosomes, where this ratio is reversed. Another difference between mammalian mitoribosomes and prokaryotic ribosomes is that the latter contain a 5S rRNA. Among different species, the proteins comprising the mitoribosome differ greatly in sequence, and sometimes in biochemical properties, which prevents easy recognition by sequence homology. This gene encodes a 28S subunit protein. A pseudogene corresponding to this gene is found on chromosome 4.
**Biceps**
Biceps:
The biceps or biceps brachii (Latin: musculus biceps brachii, "two-headed muscle of the arm") is a large muscle that lies on the front of the upper arm between the shoulder and the elbow. Both heads of the muscle arise on the scapula and join to form a single muscle belly which is attached to the upper forearm. While the biceps crosses both the shoulder and elbow joints, its main function is at the elbow, where it flexes and supinates the forearm. Both these movements are used when opening a bottle with a corkscrew: first the biceps screws in the cork (supination), then it pulls the cork out (flexion).
Structure:
The biceps is one of three muscles in the anterior compartment of the upper arm, along with the brachialis muscle and the coracobrachialis muscle, with which the biceps shares a nerve supply. The biceps muscle has two heads, the short head and the long head, distinguished according to their origin at the coracoid process and supraglenoid tubercle of the scapula, respectively. From its origin on the glenoid, the long head remains tendinous as it passes through the shoulder joint and through the intertubercular groove of the humerus. Extending from its origin on the coracoid, the tendon of the short head runs adjacent to the tendon of the coracobrachialis as the conjoint tendon. Unlike the other muscles in the anterior compartment of the arm, the biceps muscle crosses two joints, the shoulder joint and the elbow joint.
Structure:
Both heads of the biceps join in the middle upper arm, usually near the insertion of the deltoid, to form a common muscle belly, although several anatomic studies have demonstrated that the muscle bellies remain distinct structures without confluent fibers. As the muscle extends distally, the two heads rotate 90 degrees externally before inserting onto the radial tuberosity. The short head inserts distally on the tuberosity while the long head inserts proximally, closer to the apex of the tuberosity. The bicipital aponeurosis, also called the lacertus fibrosus, is a thick fascial band that organizes close to the musculotendinous junction of the biceps and radiates over and inserts onto the ulnar part of the antebrachial fascia. The tendon that attaches to the radial tuberosity is partially or completely surrounded by a bursa, the bicipitoradial bursa, which ensures frictionless motion between the biceps tendon and the proximal radius during pronation and supination of the forearm. Two muscles lie underneath the biceps brachii. These are the coracobrachialis muscle, which like the biceps attaches to the coracoid process of the scapula, and the brachialis muscle, which connects to the ulna and along the mid-shaft of the humerus. Besides those, the brachioradialis muscle is adjacent to the biceps and also inserts on the radius bone, though more distally.
Structure:
Variation Traditionally described as a two-headed muscle, biceps brachii is one of the most variable muscles of the human body and has a third head arising from the humerus in 10% of cases (normal variation)—most commonly originating near the insertion of the coracobrachialis and joining the short head—but four, five, and even seven supernumerary heads have been reported in rare cases. One study found a higher than expected number of female cadavers with a third head of biceps brachii, equal incidence between sides of the body, and uniform innervation by the musculocutaneous nerve. The distal biceps tendons are completely separated in 40% and bifurcated in 25% of cases.
Structure:
Nerve supply The biceps shares its nerve supply with the other two muscles of the anterior compartment. The muscles are supplied by the musculocutaneous nerve. Fibers of the fifth, sixth and seventh cervical nerves make up the components of the musculocutaneous nerve which supply the biceps.
Blood supply The blood supply of the biceps is the brachial artery. The distal tendon of the biceps can be useful for palpating the brachial pulse, as the artery runs medial to the tendon in the cubital fossa.
Function:
The biceps works across three joints. Its most important functions are to supinate the forearm and flex the elbow. In addition, the long head of the biceps prevents upward displacement of the head of the humerus. In more detail, the actions are, by joint:
Proximal radioulnar joint of the elbow – The biceps brachii functions as a powerful supinator of the forearm, i.e. it turns the palm upwards. This action, which is aided by the supinator muscle, requires the humeroulnar joint of the elbow to be at least partially flexed. If the humeroulnar joint is fully extended, supination is then primarily carried out by the supinator muscle. The biceps is a particularly powerful supinator of the forearm due to the distal attachment of the muscle at the radial tuberosity, on the opposite side of the bone from the supinator muscle. When flexed, the biceps effectively pulls the radius back into its neutral supinated position in concert with the supinator muscle.: 346–347
Humeroulnar joint of the elbow – The biceps brachii also functions as an important flexor of the forearm, particularly when the forearm is supinated. Functionally, this action is performed when lifting an object, such as a bag of groceries, or when performing a biceps curl. When the forearm is in pronation (the palm faces the ground), the brachialis, brachioradialis, and supinator flex the forearm, with minimal contribution from the biceps brachii. Regardless of forearm position (supinated, pronated, or neutral), the force exerted by the biceps brachii remains the same; however, the brachioradialis shows a much greater change in exertion with position than the biceps during concentric contractions. That is, the biceps can only exert so much force, and as forearm position changes, other muscles must compensate.
Function:
Glenohumeral joint (shoulder joint) – Several weaker functions occur at the glenohumeral joint. The biceps brachii weakly assists in forward flexion of the shoulder joint (bringing the arm forward and upwards). It may also contribute to abduction (bringing the arm out to the side) when the arm is externally (or laterally) rotated. The short head of the biceps brachii also assists with horizontal adduction (bringing the arm across the body) when the arm is internally (or medially) rotated. Finally, the short head of the biceps brachii, due to its attachment to the scapula (or shoulder blade), assists with stabilization of the shoulder joint when a heavy weight is carried in the arm. The tendon of the long head of the biceps also assists in holding the head of the humerus in the glenoid cavity.: 295
Motor units in the lateral portion of the long head of the biceps are preferentially activated during elbow flexion, while motor units in the medial portion are preferentially activated during forearm supination. The biceps is commonly regarded as a symbol of strength in a variety of cultures worldwide.
Clinical significance:
The proximal tendons of the biceps brachii are commonly involved in pathological processes and are a frequent cause of anterior shoulder pain. Disorders of the distal biceps brachii tendon include insertional tendonitis and partial or complete tears of the tendon. Partial tears are usually characterized by pain and enlargement and abnormal contour of the tendon. Complete tears occur as avulsion of the tendinous portion of the biceps away from its insertion on the tuberosity of the radius, and are often accompanied by a palpable, audible "pop" and immediate pain and soft tissue swelling. A soft-tissue mass is sometimes encountered in the anterior aspect of the arm, the so-called Reverse Popeye deformity, which paradoxically leads to decreased strength during flexion of the elbow and supination of the forearm.
Clinical significance:
Tendon rupture Tears of the biceps brachii may occur during athletic activities; however, avulsion injuries of the distal biceps tendon are frequently occupational in nature and sustained during forceful, eccentric contraction of the biceps muscle while lifting. Treatment of a biceps tear depends on the severity of the injury. In most cases, the muscle will heal over time with no corrective surgery. Applying cold pressure and using anti-inflammatory medications will ease pain and reduce swelling. More severe injuries require surgery and post-op physical therapy to regain strength and functionality in the muscle. Corrective surgeries of this nature are typically reserved for elite athletes who rely on a complete recovery.
Clinical significance:
Training The biceps can be strengthened using weight and resistance training. Examples of well known biceps exercises are the chin-up and biceps curl.
Etymology and grammar:
The biceps brachii muscle is the one that gave all muscles their name: it comes from the Latin musculus, "little mouse", because the appearance of the flexed biceps resembles the back of a mouse. The same phenomenon occurred in Greek, in which μῦς, mȳs, means both "mouse" and "muscle". The term biceps brachii is a Latin phrase meaning "two-headed [muscle] of the arm", in reference to the fact that the muscle consists of two bundles of muscle, each with its own origin, sharing a common insertion point near the elbow joint. The proper plural form of the Latin adjective biceps is bicipites, a form not in general English use. Instead, biceps is used in both singular and plural (i.e., when referring to both arms).
Etymology and grammar:
The English form bicep, attested from 1939, is a back formation derived from misinterpreting the s of biceps as the English plural marker -s.
History:
Leonardo da Vinci expressed the original idea of the biceps acting as a supinator in a series of annotated drawings made between 1505 and 1510, in which the principle of the biceps as a supinator, as well as its role as a flexor of the elbow, was set out. However, this function remained undiscovered by the medical community because da Vinci was not regarded as a teacher of anatomy, nor were his results publicly released. It was not until 1713 that this movement was re-discovered by William Cheselden and subsequently recorded for the medical community. It was rewritten several times by different authors wishing to present information to different audiences. The most notable recent expansion upon Cheselden's recordings was written by Guillaume Duchenne in 1867, in a work entitled Physiology of Motion. It remains one of the major references on the supination action of the biceps brachii.
Other species:
Neanderthals In Neanderthals, the radial bicipital tuberosities were larger than in modern humans, which suggests they were probably able to use their biceps for supination over a wider range of pronation-supination. It is possible that they relied more on their biceps for forceful supination without the assistance of the supinator muscle like in modern humans, and thus that they used a different movement when throwing.
Other species:
Horses In the horse, the biceps' function is to extend the shoulder and flex the elbow. It is composed of two short-fibred heads separated longitudinally by a thick internal tendon which stretches from the origin on the supraglenoid tubercle to the insertion on the medial radial tuberosity. This tendon can withstand very large forces when the biceps is stretched. From this internal tendon a strip of tendon, the lacertus fibrosus, connects the muscle with the extensor carpi radialis, an important feature in the horse's stay apparatus (through which the horse can rest and sleep whilst standing.) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CDC5L**
CDC5L:
Cell division cycle 5-like protein is a protein that in humans is encoded by the CDC5L gene.
Function:
The protein encoded by this gene shares a significant similarity with Schizosaccharomyces pombe cdc5 gene product, which is a cell cycle regulator important for G2/M transition. This protein has been demonstrated to act as a positive regulator of cell cycle G2/M progression. It was also found to be an essential component of a non-snRNA spliceosome, which contains at least five additional protein factors and is required for the second catalytic step of pre-mRNA splicing.
Interactions:
CDC5L has been shown to interact with: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Feed ban**
Feed ban:
The term feed ban usually refers to the regulations that have prohibited the feeding of most mammalian-derived proteins to cattle as a method of preventing the spread of bovine spongiform encephalopathy (BSE). Feeding of infected ruminant material back to ruminants is believed to be the most likely means of transmission of the disease.
USA:
Since the 1997 publication of regulations by the Food and Drug Administration (FDA), a feed ban has been in place. Exceptions to the FDA ban have existed for mammalian blood and blood products; gelatin; inspected cooked meat products for humans; milk products; and products containing pork and equine (and avian) proteins. In July 2003 Will Hueston, who was then the director of the University of Minnesota Center for Animal Health and Food Safety in St. Paul, was very concerned about the lack of an FDA feed ban on cattle for Specified Risk Materials. On January 26, 2004, FDA officials said they would expand the feed ban by prohibiting, from ruminant feeds, ruminant blood and blood products, poultry litter, and restaurant plate waste. At issue is whether recommendations by some scientific experts to ban additional products from feed are necessary, as some foreign countries do where BSE is much more widespread. Meanwhile, the USDA issued a ban on SRMs in the food supply as an interim rule in January 2004, about 3 weeks after the first US homegrown BSE case was discovered. In July 2007, "the use of high-pressure cattle-stunning devices that could drive SRM tissue into meat" was banned by the USDA. Several "BSE-related interim rules, including the ban on SRM from human food," were made permanent at this time.
Canada:
In 1992, Canada implemented a national bovine spongiform encephalopathy (BSE) surveillance program. In a 2011 publication, the Canadian Food Inspection Agency (CFIA) was at pains to stress that the "level and design of BSE testing in Canada has always been, and continues to be, in full accordance with the guidelines recommended by the World Organisation for Animal Health (OIE)." In July 2003, a Specified Risk Material feed ban was imposed by the CFIA. This was the first regulatory change to bovine farm practice in Canada after the British BSE disaster. At the time, the CFIA exceeded the caution of the FDA. On 12 July 2007, an "Enhanced Feed Ban" (EFB) was imposed by the CFIA's "Feed Ban Task Force", which was chaired by Freeman Libby. By that date, a total of 10 BSE cases had been discovered, the last one in May 2007, which prompted the EFB rule. The EFB was designed in order to maintain Canada's status as an OIE "controlled risk country". Under the EFB, a CFIA permit is required to transport or receive SRM in any form, and "livestock producers must no longer use any feed products containing SRM." The EFB banned "the use of cattle brains, spinal cords, and certain other body parts from all animal feeds, pet foods, and fertilizer." At the time, the SRMs included the "skull, brain, eyes, tonsils, spinal cord, and certain nerve bundles (trigeminal root ganglia and dorsal root ganglia) in cattle 30 months or older, plus the distal ileum (part of the small intestine) of all cattle." According to a CanWest News report, "SRM must now be removed with special equipment, hauled away in dedicated trucks, processed, and then buried in landfills, burned in high-temperature incinerators, or dumped into composters and bioenergy plants." At the time, the CFIA was firmly convinced that: with the EFB, "BSE is expected to be eliminated from Canadian cattle in about 10 years; without the new rules, eradication was expected to take several decades." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Organ (biology)**
Organ (biology):
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall for example is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
Organ (biology):
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as bacteria, the functional analogue of an organ is known as an organelle. In plants, there are three main organs. In the study of anatomy, viscera (singular: viscus) refers to the internal organs of the abdominal, thoracic, and pelvic cavities. The abdominal organs may be classified as solid organs, or hollow organs. The solid organs are the liver, pancreas, spleen, kidneys, and adrenal glands. The hollow organs of the abdomen are the stomach, intestines, gallbladder, bladder, and rectum. In the thoracic cavity the heart is a hollow, muscular organ. The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals:
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The same is true for the musculoskeletal system because of the relationship between the muscular and skeletal systems.
Animals:
Cardiovascular system: pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Digestive system: digestion and processing food with salivary glands, esophagus, stomach, liver, gallbladder, pancreas, intestines, colon, mesentery, rectum and anus.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroids and adrenals, i.e., adrenal glands.
Excretory system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream, the lymph and the nodes and vessels that transport it including the immune system: defending against disease-causing agents with leukocytes, tonsils, adenoids, thymus and spleen.
Integumentary system: skin, hair and nails of mammals. Also scales of fish, reptiles, and birds, and feathers of birds.
Muscular system: movement with muscles.
Nervous system: collecting, transferring and processing information with brain, spinal cord and nerves.
Reproductive system: the sex organs, such as ovaries, fallopian tubes, uterus, vulva, vagina, testes, vas deferens, seminal vesicles, prostate and penis.
Respiratory system: the organs used for breathing, the pharynx, larynx, trachea, bronchi, lungs and diaphragm.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Animals:
Viscera In the study of anatomy, viscera (singular viscus) refers to the internal organs of the abdominal, thoracic, and pelvic cavities. The abdominal organs may be classed as solid organs, or hollow organs. The solid organs include the liver, pancreas, spleen, kidneys, and adrenal glands. The hollow organs include the stomach, intestines, gallbladder, bladder, and rectum. In the thoracic cavity the heart is a hollow, muscular organ. Splanchnology is the study of the viscera. The term "visceral" is contrasted with the term "parietal", meaning "of or relating to the wall of a body part, organ or cavity". The two terms are often used in describing a membrane or piece of connective tissue, referring to the opposing sides.
Animals:
Origin and evolution The organ level of organisation in animals can be first detected in flatworms and the more derived phyla, i.e. the bilaterians. The less-advanced taxa (i.e. Placozoa, Porifera, Ctenophora and Cnidaria) do not show consolidation of their tissues into organs.
More complex animals are composed of different organs, which have evolved over time. For example, the liver and heart evolved in the chordates about 550-500 million years ago, while the gut and brain are even more ancient, arising in the ancestor of vertebrates, insects, molluscs, and worms about 700-650 million years ago.
Animals:
Given the ancient origin of most vertebrate organs, researchers have looked for model systems, where organs have evolved more recently, and ideally have evolved multiple times independently. An outstanding model for this kind of research is the placenta, which has evolved more than 100 times independently in vertebrates, has evolved relatively recently in some lineages, and exists in intermediate forms in extant taxa. Studies on the evolution of the placenta have identified a variety of genetic and physiological processes that contribute to the origin and evolution of organs, these include the re-purposing of existing animal tissues, the acquisition of new functional properties by these tissues, and novel interactions of distinct tissue types.
Plants:
The study of plant organs is covered in plant morphology. Organs of plants can be divided into vegetative and reproductive. Vegetative plant organs include roots, stems, and leaves. The reproductive organs are variable. In flowering plants, they are represented by the flower, seed and fruit. In conifers, the organ that bears the reproductive structures is called a cone. In other divisions (phyla) of plants, the reproductive organs are called strobili, in Lycopodiophyta, or simply gametophores in mosses. Common organ system designations in plants include the differentiation of shoot and root. All parts of the plant above ground (in non-epiphytes), including the functionally distinct leaf and flower organs, may be classified together as the shoot organ system. The vegetative organs are essential for maintaining the life of a plant. While there can be 11 organ systems in animals, there are far fewer in plants, where some organs perform vital functions such as photosynthesis, while the reproductive organs are essential in reproduction. However, if there is asexual vegetative reproduction, the vegetative organs are those that create the new generation of plants (see clonal colony).
Society and culture:
Many societies have a system for organ donation, in which a living or deceased donor's organ are transplanted into a person with a failing organ. The transplantation of larger solid organs often requires immunosuppression to prevent organ rejection or graft-versus-host disease.
There is considerable interest throughout the world in creating laboratory-grown or artificial organs.
Society and culture:
Organ transplants Beginning in the 20th century, organ transplants began to take place as scientists learned more about the anatomy of organs. They came relatively late because the procedures were often dangerous and difficult. Both the source and method of obtaining the organ to transplant are major ethical issues to consider, and because organs as resources for transplant are always more limited than demand for them, various notions of justice, including distributive justice, are developed in the ethical analysis. This situation continues as long as transplantation relies upon organ donors rather than technological innovation, testing, and industrial manufacturing.
History:
The English word "organ" dates back to the twelfth century and refers to any musical instrument. By the late 14th century, the musical term's meaning had narrowed to refer specifically to the keyboard-based instrument. At the same time, a second meaning arose, in reference to a "body part adapted to a certain function".Plant organs are made from tissue composed of different types of tissue. The three tissue types are ground, vascular, and dermal. When three or more organs are present, it is called an organ system.The adjective visceral, also splanchnic, is used for anything pertaining to the internal organs. Historically, viscera of animals were examined by Roman pagan priests like the haruspices or the augurs in order to divine the future by their shape, dimensions or other factors. This practice remains an important ritual in some remote, tribal societies.
History:
The term "visceral" is contrasted with the term "parietal", meaning "of or relating to the wall of a body part, organ or cavity" The two terms are often used in describing a membrane or piece of connective tissue, referring to the opposing sides.
History:
Antiquity Aristotle used the word frequently in his philosophy, both to describe the organs of plants or animals (e.g. the roots of a tree, the heart or liver of an animal), and to describe more abstract "parts" of an interconnected whole (e.g. his logical works, taken as a whole, are referred to as the Organon).Some alchemists (e.g. Paracelsus) adopted the Hermetic Qabalah assignment between the seven vital organs and the seven classical planets as follows: Chinese traditional medicine recognizes eleven organs, associated with the five Chinese traditional elements and with yin and yang, as follows: The Chinese associated the five elements with the five planets (Jupiter, Mars, Venus, Saturn, and Mercury) similar to the way the classical planets were associated with different metals. The yin and yang distinction approximates the modern notion of solid and hollow organs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Drag coefficient**
Drag coefficient:
In fluid dynamics, the drag coefficient (commonly denoted as cd, cx or cw) is a dimensionless quantity that is used to quantify the drag or resistance of an object in a fluid environment, such as air or water. It is used in the drag equation, in which a lower drag coefficient indicates the object will have less aerodynamic or hydrodynamic drag. The drag coefficient is always associated with a particular surface area. The drag coefficient of any object comprises the effects of the two basic contributors to fluid dynamic drag: skin friction and form drag. The drag coefficient of a lifting airfoil or hydrofoil also includes the effects of lift-induced drag. The drag coefficient of a complete structure such as an aircraft also includes the effects of interference drag.
Definition:
The drag coefficient cd is defined as cd = 2Fd / (ρ u² A), where: Fd is the drag force, which is by definition the force component in the direction of the flow velocity; ρ is the mass density of the fluid; u is the flow speed of the object relative to the fluid; A is the reference area. The reference area depends on what type of drag coefficient is being measured. For automobiles and many other objects, the reference area is the projected frontal area of the vehicle. This may not necessarily be the cross-sectional area of the vehicle, depending on where the cross-section is taken. For example, for a sphere A = πr² (note this is not the surface area, which is 4πr²).
Definition:
For airfoils, the reference area is the nominal wing area. Since this tends to be large compared to the frontal area, the resulting drag coefficients tend to be low, much lower than for a car with the same drag, frontal area, and speed.
Airships and some bodies of revolution use the volumetric drag coefficient, in which the reference area is the square of the cube root of the airship volume (volume to the two-thirds power). Submerged streamlined bodies use the wetted surface area.
Two objects having the same reference area moving at the same speed through a fluid will experience a drag force proportional to their respective drag coefficients. Coefficients for unstreamlined objects can be 1 or more, for streamlined objects much less.
Background:
The drag equation Fd = ½ ρ u² cd A is essentially a statement that the drag force on any object is proportional to the density of the fluid and proportional to the square of the relative flow speed between the object and the fluid. The factor of 1/2 comes from the dynamic pressure of the fluid, which is equal to the kinetic energy density.
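As an illustration of the drag equation above, here is a minimal Python sketch; the numbers used for the drag coefficient, frontal area, speed, and air density are assumptions chosen only for illustration, not values taken from this article.

```python
def drag_force(cd, rho, u, area):
    """Drag equation: Fd = 1/2 * rho * u^2 * cd * A."""
    return 0.5 * rho * u**2 * cd * area

# Assumed example values: a passenger car with cd = 0.30 and a projected
# frontal area of 2.2 m^2, moving at 30 m/s through air (rho ~ 1.225 kg/m^3).
fd = drag_force(cd=0.30, rho=1.225, u=30.0, area=2.2)
print(f"Drag force: {fd:.1f} N")  # roughly 364 N
```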
Background:
The value of cd is not a constant but varies as a function of flow speed, flow direction, object position, object size, fluid density and fluid viscosity. Speed, kinematic viscosity and a characteristic length scale of the object are incorporated into a dimensionless quantity called the Reynolds number Re; cd is thus a function of Re. In a compressible flow, the speed of sound is relevant, and cd is also a function of Mach number Ma. For certain body shapes, the drag coefficient cd only depends on the Reynolds number Re, Mach number Ma and the direction of the flow. For low Mach number Ma, the drag coefficient is independent of Mach number. Also, the variation with Reynolds number Re within a practical range of interest is usually small, while for cars at highway speed and aircraft at cruising speed, the incoming flow direction is also more-or-less the same. Therefore, the drag coefficient cd can often be treated as a constant. For a streamlined body to achieve a low drag coefficient, the boundary layer around the body must remain attached to the surface of the body for as long as possible, causing the wake to be narrow. A high form drag results in a broad wake. The boundary layer will transition from laminar to turbulent if the Reynolds number of the flow around the body is sufficiently great. Larger velocities, larger objects, and lower viscosities contribute to larger Reynolds numbers.
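Because cd is usually tabulated against the Reynolds number, a small sketch of the Re calculation may help; the characteristic length, speed, and kinematic viscosity below are illustrative assumptions.

```python
def reynolds_number(u, length, nu):
    """Re = u * L / nu, with u flow speed (m/s), L a characteristic
    length (m) and nu the kinematic viscosity (m^2/s)."""
    return u * length / nu

# Assumed values: a 4.5 m long car at 30 m/s in air (nu ~ 1.5e-5 m^2/s).
re = reynolds_number(u=30.0, length=4.5, nu=1.5e-5)
print(f"Re = {re:.2e}")  # about 9e6, well into the turbulent regime
```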
Background:
For other objects, such as small particles, one can no longer consider that the drag coefficient cd is constant, but certainly is a function of Reynolds number.
Background:
At a low Reynolds number, the flow around the object does not transition to turbulent but remains laminar, even up to the point at which it separates from the surface of the object. At very low Reynolds numbers, without flow separation, the drag force Fd is proportional to v instead of v²; for a sphere this is known as Stokes' law. The Reynolds number will be low for small objects, low velocities, and high viscosity fluids. A cd equal to 1 would be obtained in a case where all of the fluid approaching the object is brought to rest, building up stagnation pressure over the whole front surface. The top figure shows a flat plate with the fluid coming from the right and stopping at the plate. The graph to the left of it shows equal pressure across the surface. In a real flat plate, the fluid must turn around the sides, and full stagnation pressure is found only at the center, dropping off toward the edges as in the lower figure and graph. Only considering the front side, the cd of a real flat plate would be less than 1; except that there will be suction on the backside: a negative pressure (relative to ambient). The overall cd of a real square flat plate perpendicular to the flow is often given as 1.17. Flow patterns and therefore cd for some shapes can change with the Reynolds number and the roughness of the surfaces.
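For the very-low-Reynolds-number regime mentioned above, the standard form of Stokes' law for a sphere is Fd = 6πμrv. A minimal sketch follows; the particle radius, fall speed, and air viscosity are assumed example values, not figures from this article.

```python
import math

def stokes_drag(mu, radius, v):
    """Stokes' law for a small sphere at very low Reynolds number:
    Fd = 6 * pi * mu * r * v (drag proportional to speed, not speed squared)."""
    return 6.0 * math.pi * mu * radius * v

# Assumed values: a 10-micron droplet moving at 3 mm/s in air
# (dynamic viscosity mu ~ 1.8e-5 Pa*s).
fd = stokes_drag(mu=1.8e-5, radius=10e-6, v=3e-3)
print(f"Stokes drag: {fd:.2e} N")  # on the order of 1e-11 N
```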
Drag coefficient examples:
General In general, cd is not an absolute constant for a given body shape. It varies with the speed of airflow (or more generally with Reynolds number Re ). A smooth sphere, for example, has a cd that varies from high values for laminar flow to 0.47 for turbulent flow. Although the drag coefficient decreases with increasing Re , the drag force increases.
Drag coefficient examples:
Aircraft As noted above, aircraft use their wing area as the reference area when computing cd, while automobiles (and many other objects) use projected frontal area; thus, coefficients are not directly comparable between these classes of vehicles. In the aerospace industry, the drag coefficient is sometimes expressed in drag counts, where 1 drag count = 0.0001 of cd.
Blunt and streamlined body flows:
Concept The force between a fluid and a body, when there is relative motion, can only be transmitted by normal pressure and tangential friction stresses. So, for the whole body, the drag part of the force, which is in-line with the approaching fluid motion, is composed of frictional drag (viscous drag) and pressure drag (form drag). The total drag and the component drag forces can be related through the pressure and shear-stress contributions integrated over the body surface, where: A is the planform area of the body, S is the wetted surface of the body, cp is the pressure drag coefficient, cf is the friction drag coefficient, t^ is the unit vector in the direction of the shear stress acting on the body surface dS, n^ is the unit vector in the direction perpendicular to the body surface dS, pointing from the fluid to the solid, Tw is the magnitude of the shear stress acting on the body surface dS, po is the pressure far away from the body (note that this constant does not affect the final result), p is the pressure at surface dS, and i^ is the unit vector in the direction of the free stream flow. Therefore, when the drag is dominated by a frictional component, the body is called a streamlined body; whereas in the case of dominant pressure drag, the body is called a blunt or bluff body. Thus, the shape of the body and the angle of attack determine the type of drag. For example, an airfoil is considered as a body with a small angle of attack by the fluid flowing across it. This means that it has attached boundary layers, which produce much less pressure drag.
Blunt and streamlined body flows:
The wake produced is very small and drag is dominated by the friction component. Therefore, such a body (here an airfoil) is described as streamlined, whereas for bodies with fluid flow at high angles of attack, boundary layer separation takes place. This mainly occurs due to adverse pressure gradients at the top and rear parts of an airfoil.
Due to this, wake formation takes place, which consequently leads to eddy formation and pressure loss due to pressure drag. In such situations, the airfoil is stalled and has higher pressure drag than friction drag. In this case, the body is described as a blunt body.
Blunt and streamlined body flows:
A streamlined body looks like a fish (Tuna), Oropesa, etc. or an airfoil with small angle of attack, whereas a blunt body looks like a brick, a cylinder or an airfoil with high angle of attack. For a given frontal area and velocity, a streamlined body will have lower resistance than a blunt body. Cylinders and spheres are taken as blunt bodies because the drag is dominated by the pressure component in the wake region at high Reynolds number.
Blunt and streamlined body flows:
To reduce this drag, either the flow separation could be reduced or the surface area in contact with the fluid could be reduced (to reduce friction drag). This reduction is necessary in devices such as cars, bicycles, etc. to avoid vibration and noise production.
Practical example The aerodynamic design of cars has evolved from the 1920s to the end of the 20th century. This change in design from a blunt body to a more streamlined body reduced the drag coefficient from about 0.95 to 0.30. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Posterior intermuscular septum of leg**
Posterior intermuscular septum of leg:
The posterior intermuscular septum of leg, or posterior crural intermuscular septum, is a band of fascia which separates the lateral compartment of the leg from the posterior compartment.
Posterior intermuscular septum of leg:
The deep fascia of leg gives off from its deep surface, on the lateral side of the leg, two strong intermuscular septa, the anterior and posterior peroneal septa, which enclose the Peronæi longus and brevis, and separate them from the muscles of the anterior and posterior crural regions, and several more slender processes which enclose the individual muscles in each region. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Wood industry**
Wood industry:
The wood industry or timber industry (sometimes lumber industry, when referring mainly to sawed boards) is the industry concerned with forestry, logging, timber trade, and the production of primary forest products and wood products (e.g. furniture) and secondary products like wood pulp for the pulp and paper industry. Some of the largest producers are also among the biggest owners of timberland. The wood industry has historically been and continues to be an important sector in many economies.
Distinction:
While in the narrow sense the terms wood, forest, forestry and timber/lumber industry appear to point to different sectors, in the industrialized, internationalized world there is a tendency toward huge integrated businesses that cover the complete spectrum from silviculture and forestry in private primary or secondary forests or plantations, via the logging process, up to wood processing and trading and transport (e.g. timber rafting, forest railways, logging roads). Processing and products differ especially with regard to the distinction between softwood and hardwood. While softwood primarily goes into the production of wood fuel and pulp and paper, hardwood is used mainly for furniture, floors, etc. Both types can be of use for building and (residential) construction purposes (e.g. log houses, log cabins, timber framing).
Top producers:
As of 2019, the top timberland owners in the US were structured as real-estate investment trusts and include Weyerhaeuser Co., Rayonier, and PotlatchDeltic. In 2008 the largest lumber and wood producers in the US were Boise Cascade, North Pacific Group, and Sierra Pacific Industries. As these companies are often publicly traded, their ultimate owners are a diversified group of investors. There are also timber-oriented real-estate investment trusts.
According to sawmilldatabase, the world top producers of sawn wood in 2007 were:
Issues:
Safety Noise Workers within the forestry and logging industry sub-sector fall within the agriculture, forestry, fishing, and hunting (AFFH) industry sector as characterized by the North American Industry Classification System (NAICS). The National Institute for Occupational Safety and Health (NIOSH) has taken a closer look at the AFFH industry's noise exposures and prevalence of hearing loss. While the overall industry sector had a prevalence of hearing loss lower than the overall prevalence of noise-exposed industries (15% v. 19%), workers within forestry and logging exceeded 21%. Thirty-six percent of workers within forest nurseries and gathering of forest products, a sub-sector within forestry and logging, experienced hearing loss, the most of any AFFH sub-sector. Workers within forest nurseries and gathering of forest products are tasked with growing trees for reforestation and gathering products such as rhizomes and barks. Comparatively, non-noise-exposed workers have only a 7% prevalence of hearing loss. Worker noise exposures in the forestry and logging industry have been found to be up to 102 dBA. NIOSH recommends that a worker have an 8-hour time-weighted average noise exposure of no more than 85 dBA. Excessive noise puts workers at an increased risk of developing hearing loss. If a worker were to develop hearing loss as a result of occupational noise exposures, it would be classified as occupational hearing loss. Noise exposures within the forestry and logging industry can be reduced by enclosing engines and heavy equipment, installing mufflers and silencers, and performing routine maintenance on equipment. Noise exposures can also be reduced through the hierarchy of hazard controls, where removal or replacement of noisy equipment serves as the best method of noise reduction.
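To make the exposure figures above concrete, the NIOSH recommended exposure limit of 85 dBA (8-hour time-weighted average) is commonly applied with a 3-dB exchange rate, which implies a maximum recommended duration at any given level. The sketch below assumes that standard formula, T = 8 / 2^((L − 85)/3) hours; it is an illustration, not a substitute for the NIOSH criteria document.

```python
def niosh_max_duration_hours(level_dba, rel_dba=85.0, exchange_rate_db=3.0):
    """Maximum recommended daily exposure duration, in hours, assuming the
    NIOSH 85 dBA REL and a 3-dB exchange rate: T = 8 / 2**((L - REL) / 3)."""
    return 8.0 / (2.0 ** ((level_dba - rel_dba) / exchange_rate_db))

# At the 102 dBA measured in some forestry and logging tasks, the
# recommended daily exposure shrinks to roughly 9-10 minutes.
t = niosh_max_duration_hours(102.0)
print(f"{t * 60:.1f} minutes")  # about 9.4 minutes
```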
Issues:
Injury The Bureau of Labor Statistics (BLS) has found that fatalities of forestry and logging workers have increased from 2013 to 2016, up from 81 to 106 per year. In 2016, there were 3.6 cases of injury and illness per 100 workers within this industry.
Illegal logging
Economy:
The existence of a wood economy, or more broadly, a forest economy (in many countries a bamboo economy predominates), is a prominent matter in many developing countries as well as in many other nations with a temperate climate and especially in those with low temperatures. These are generally the countries with greater forested areas, so conditions allow for development of local forestry to harvest wood for local uses. The uses of wood in furniture, buildings, bridges, and as a source of energy are widely known. Additionally, wood from trees and bushes can be used in a variety of products, such as wood pulp, cellulose in paper, celluloid in early photographic film, cellophane, and rayon (a substitute for silk). At the end of their normal usage, wood products can be burnt to obtain thermal energy or can be used as a fertilizer. The potential environmental damage that a wood economy could cause includes a reduction of biodiversity due to monoculture forestry (the intensive cultivation of very few tree types) and CO2 emissions. However, forests can aid in the reduction of atmospheric carbon dioxide and thus limit climate change.
Economy:
Paper is today the most used wood product.
Economy:
History of use of wood The wood economy was the starting point of civilizations worldwide, since eras preceding the Paleolithic and the Neolithic. It necessarily preceded the ages of metals by many millennia, as the melting of metals was possible only through the discovery of techniques to light fire (usually obtained by the scraping of two very dry wooden rods) and the building of many simple machines and rudimentary tools, such as canes, club handles, bows, arrows, and lances. One of the most ancient handmade articles ever found is a polished wooden spear tip (the Clacton Spear), 250,000 years old (third interglacial period), that was buried under sediments in England, at Clacton-on-Sea. Successive civilizations such as the Egyptians and Sumerians built sophisticated pieces of furniture. Many types of furniture in ivory and valuable woods have survived to our time practically intact because, secluded in inviolate secret tombs, they were protected from decay also by the dry desert environment. Wood was also used in many buildings and parts of these (above all roofs), often of oak, forming structural supports and covering; in means of transport such as boats and ships, and later (with the invention of the wheel) wagons and carriages; and in winches, water-powered flour mills, etc.
Economy:
Dimensions and geography The main source of the lumber used in the world is forests, which can be classified as virgin, semi-virgin and plantations. Much timber is removed for firewood by local populations in many countries, especially in the third world, but this amount can only be estimated, with wide margins of uncertainty. In 1998, the worldwide production of "roundwood" (officially counted wood not used as firewood) was about 1,500,000,000 cubic metres (2.0×109 cu yd), amounting to around 45% of the wood cultivated in the world. Cut logs and branches destined to become elements for building construction accounted for approximately 55% of the world's industrial wood production. 25% became wood pulp (including wood powder), mainly destined for the production of paper and paperboard, and approximately 20% became panels in plywood and valuable wood for furniture and objects of common use (FAO 1998). The world's largest producer and consumer of officially accounted wood is the United States, although the country that possesses the greatest area of forest is Russia. In the 1970s, the countries with the largest forest area were: Soviet Union (approximately 8,800,000 km2), Brazil (5,150,000 km2), Canada (4,400,000 km2), United States (3,000,000 km2), Indonesia (1,200,000 km2) and Democratic Republic of Congo (1,000,000 km2). Other countries with important production and consumption of wood usually have a low density of population in relation to their territorial extension; these include countries such as Argentina, Chile, Finland, Poland, Sweden, and Ukraine. By 2001 the rainforest area of Brazil had been reduced by a fifth (with respect to 1970), to around 4,000,000 km2; the cleared ground was mainly destined for cattle pasture (Brazil is the world's largest exporter of beef, with almost 200,000,000 head of cattle). The booming Brazilian ethanol economy, based upon sugar cane cultivation, is likewise reducing forest area. Canadian forest was reduced by almost 30%, to 3,101,340 km2, over the same period.
Economy:
Importance in limiting climate change Regarding the problem of climate change, it is known that burning forests increases CO2 in the atmosphere, while intact virgin forest or plantations act as sinks for CO2; for these reasons, a wood economy counteracts the greenhouse effect. The amount of CO2 absorbed depends on the type of trees, the land and the climate of the place where the trees naturally grow or are planted. Moreover, at night plants do not photosynthesize and release CO2, which is taken up again the following day. Paradoxically, in summer the oxygen created by photosynthesis in forests near cities and in urban parks interacts with urban air pollution (from cars, etc.) and is transformed by solar radiation into ozone (a molecule of three oxygen atoms), which in the upper atmosphere constitutes a filter against ultraviolet radiation but in the lower atmosphere is a pollutant able to provoke respiratory disturbances. In a low-carbon economy, forestry operations will be focused on low-impact practices and regrowth. Forest managers will make sure that they do not disturb soil-based carbon reserves too much. Specialized tree farms will be the main source of material for many products. Quick maturing tree varieties will be grown on short rotations to maximize output.
Production by country:
In Australia Eucalyptus: a group of about seven hundred tree species from Australia that grow very fast in tropical, sub-tropical and semi-arid climates and are very resistant to forest fires (thanks to their bark) and drought. Their essential oil is used in pharmacology, their wood for building, and their small branches as firewood and pulpwood.
Production by country:
In Brazil Brazil has a long tradition in the harvesting of several types of trees with specific uses. Since the 1960s, imported species of pine tree and eucalyptus have been grown mostly for the plywood and paper pulp industries. Currently, high-level research is being conducted to apply the enzymes of sugar cane fermentation to the cellulose in wood in order to obtain methanol, but the cost is much higher than that of ethanol derived from corn.
Production by country:
Brazilwood: has a dense, orange-red heartwood that takes a high red shine (brasa = ember), and it is the premier wood used for making bows for string instruments of the violin family. These trees soon became the biggest source of red dye, and they were such a large part of the economy and export of the country that it slowly became known as Brazil.
Production by country:
Hevea brasiliensis: is the biggest source of the best latex, which is used to manufacture many rubber objects, for example gloves, condoms, anti-allergic mattresses and tires (vulcanized rubber). Latex has the ability to adjust to the exact shape of the body part, an advantage over polyurethane or polyethylene gloves.
Production by country:
In Canada and the US There is a close relation in the forestry economy between these countries; they have many tree genera in common, and Canada is the main producer of wood and wooden items destined to the US, the biggest consumer of wood and its byproducts in the world. The water systems of the Great Lakes, Erie Canal, Hudson River and Saint Lawrence Seaway to the east coast and the Mississippi River to the central plains and Louisiana allows transportation of logs at very low costs. On the west coast, the basin of the Columbia River has plenty of forests with excellent timber.
Production by country:
Canada The Canada Wood Council calculates that in the year 2005 in Canada, the forest sector employed 930,000 workers (1 job in every 17), generating around $108 billion in goods and services. For many years, products derived from trees in Canadian forests have been the most important export items of the country. In 2011, exports around the world totaled some $64.3 billion – the single largest contributor to the Canadian trade balance. Canada is the world leader in sustainable forest management practices. Only 120,000,000 hectares (1,200,000 km2; 463,320 sq mi) (28% of Canadian forests) are currently managed for timber production, while an estimated 32,000,000 hectares (320,000 km2; 123,550 sq mi) are protected from harvesting by the current legislation. The Canadian timber industry has led to environmental conflict with Indigenous people protecting their land from logging. For example, the Asubpeeschoseewagong First Nation set up the Grassy Narrows road blockade for twenty years beginning in 2002 to prevent clearcutting of their land.
Production by country:
United States Cherry: a hardwood prized for its high quality in grain, width, color, and rich warm glow. The first trees were carried to the lands surrounding Rome (Latium) from Armenia. In the United States, most cherry trees are grown in Washington, Pennsylvania, West Virginia, California and Oregon.
Production by country:
Cedar: this genus is a group of conifers of the family Pinaceae, originating from high mountain areas from the Carpathians, Lebanon and Turkey to the Himalayas. Their scented wood makes them suitable for chests and closet lining. Cedar oil and wood are known to be natural repellents to moths. They are currently planted in the western and southern US, mostly for ornamental purposes, but also for the production of pencils (especially incense-cedar).
Production by country:
Douglas fir: a native tree of the United States west coast and Mountain States, noted for fast growth and great height in a short time. The coast Douglas fir grows in coastal regions up to altitudes of about 1,800 meters; the Rocky Mountain Douglas fir grows farther inland, at altitudes ranging from 800 m to 3,000 m or higher. The wood is used for construction, for homebuilt aircraft, for paper pulp, and also as firewood.
Production by country:
Hybrid poplar is being investigated by Oak Ridge National Laboratory in Tennessee for genetic engineering to obtain a tree with a higher content of cellulose and a lower content in lignin, in such a way that the extraction of bioethanol (useful as a fuel) could be easier and less expensive.
Walnut: a prized furniture and carving hardwood because of its colour, hardness, grain and durability. Walnut wood has been the timber of choice for gun makers for centuries. It remains one of the most popular choices for rifle and shotgun stocks.
Production by country:
In the Caribbean and Central America Mahogany: has a straight grain, usually free of voids and pockets. The most prized species come from Cuba and Honduras. It has a reddish-brown color, which darkens over time, and displays a beautiful reddish sheen when polished. It has excellent workability, is available in big boards, and is very durable. Mahogany is used in the making of many musical instruments, such as drums and the backs and sides of acoustic and electric guitars, as well as luxury headphones.
Production by country:
In Europe Italy The species that are ideal for the many uses in this type of economy are those employed in arboriculture, which are very well known for their characteristics and for their requirements in terms of soil and climate.
Fraxinus: its lightweight wood is easy to transport and, as firewood, burns easily; the tree grows in damp environments like those present in river flooding areas and withstands pollution of water and air.
Larix: in Italy it grows at high altitudes around mountain tops; its timber withstands sudden climatic changes, from icy winds to the high temperatures of sunny summer afternoons, and it is excellent for use in exposed structures such as bridges, roofs, etc.
Production by country:
Stone pine: "Mediterranean pine" could be the noble emblem of many coastal areas in Italy, originally giant forests of pines extended from the mouth of the Tiber river until Liguria and Provence in France, over soils with high salinity, not very apt for agriculture. Its trees produce a vast amount of dry branches that can be burnt, cones (used for Christmas decoration) and needle-like foliage that can be burnt, or used as mulch. Oils and resins can be used in scents and ointments. The pinoli are useful elements in Italian cooking (along with basil are tritured to make pesto sauce). Currently, "progress" has brought to a severe reduction of this magnificent tree extensions, and in many places cheap beach buildings, car-parking and semi-abandoned areas have taken their place.
Production by country:
Poplar: in Italy it is the most important species for tree plantations and is used for several purposes such as plywood manufacture, packing boxes, paper, matches, etc. It needs good quality ground with good drainage, but can be used to protect cultivations if disposed in windbreak lines. More than 70% of Italian poplar cultivations are located in the Pianura Padana. The extent of the cultivation is constantly being reduced, from 650 km2 in the 1980s to the current 350 km2. The yield of poplars is about 1,500 t/km2 of wood every year. The production from poplars is around 45–50% of the total Italian wood production. In the history of art, poplar was the wood of choice for painting surfaces such as panels, as in the Renaissance (the Mona Lisa by Leonardo da Vinci). For this reason, many of the products with the highest added value, extremely expensive, are made with wood from the humble but durable poplar.
Production by country:
Because of the presence of tannic acid, poplar bark was often used in Europe for the tanning of leather.
Production by country:
Portugal Cork oak: a tree with slow growth but a long life, cultivated in warm hill areas (minimum temperature > −5 °C) throughout the western Mediterranean shores. Cork is popular as a material for bulletin boards. Even if the production of stoppers for wine bottles is diminishing in favor of nylon stoppers, for the sake of energy saving granules of cork can be mixed into concrete. These composites have low thermal conductivity, low density and good energy absorption (earthquake resistant). Some of the property ranges of the composites are density (400–1500 kg/m3), compressive strength (1–26 MPa) and flexural strength (0.5–4.0 MPa). Because of this, cork can be used as thermal insulation in buildings (both in its natural form and as a mixture), and it is also useful as sound insulation. In the shoe industry cork is used for soles and insoles. In the world there are 20,000 km2 of cork oak plantations, and every year around 300,000 tons of cork are extracted, 50% of it in Portugal and 15,000 tons in Italy (12,000 on the island of Sardinia). The advantage of this natural industry is that the extraction of cork from the outer layers of the bark does not kill the tree.
Production by country:
In Fennoscandia and Russia In Sweden, Finland and to an extent Norway, much of the land area is forested, and the pulp and paper industry is one of the most significant industrial sectors. Chemical pulping produces an excess of energy, since the organic matter in black liquor, mostly lignin and hemicellulose breakdown products, is burned in the recovery boiler. Thus, these countries have high proportions of renewable energy use (25% in Finland, for instance). Considerable effort is directed towards increasing the value and usage of forest products by companies and by government projects.
Production by country:
Scots pine and Norway spruce: These species comprise most of the boreal forest, and together as a softwood mixture they are converted into chemical pulp for paper.
Production by country:
Birch is a genus with many species of trees in Scandinavia and Russia, excellent for acid soils. These act as pioneer species in the frozen border between taiga and tundra, and are very resistant to periods of drought and icy conditions. The species Betula nana has been identified as the ideal tree for the acid, nutrient-poor soils of mountain slopes, where these trees can be used to restrain landslides, including in southern Europe. Dissolving pulp is produced from birch. Xylitol can be produced by the hydrogenation of xylose, which is a byproduct of chemical birch pulping.
Outputs:
Combustion The burning of wood is currently the largest use of energy derived from a solid fuel biomass. Wood fuel may be available as firewood (e.g. logs, bolts, blocks), charcoal, chips, sheets, pellets and sawdust. Wood fuel can be used for cooking and heating through stoves and fireplaces, and occasionally for fueling steam engines and steam turbines that generate electricity. For many centuries many types of traditional ovens were used to benefit from the heat generated by wood combustion. Now, more efficient and clean solutions have been developed: advanced fireplaces (with heat exchangers), wood-fired ovens, wood-burning stoves and pellet stoves, which are able to filter and separate pollutants (centrifuging ashes with rotating filters), thus eliminating many emissions and also allowing the recovery of a greater quantity of the heat that would otherwise escape with the chimney fumes. The mean energy density of wood has been calculated at around 6–17 megajoules per kilogram, depending on species and moisture content. Combustion of wood is, however, linked to the production of micro-environmental pollutants, such as carbon dioxide (CO2), carbon monoxide (CO) (an invisible gas able to provoke irreversible saturation of the blood's hemoglobin), as well as nanoparticles. In Italy, poplar has been proposed as a tree cultivated to be transformed into biofuels, because of the excellent ratio between the energy extracted from its wood (thanks to poplar's fast growth and capture of atmospheric carbon dioxide) and the small amount of energy needed to cultivate, cut and transport the trees. Populus x canadensis 'I-214' grows so fast that it is able to reach 14 inches (36 cm) in diameter and heights of 100 feet (30 m) in ten years.
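Using the 6–17 MJ/kg range cited above, a short sketch can estimate the heat released by burning a given mass of firewood; the mass and the particular energy density chosen below are illustrative assumptions only.

```python
def wood_heat_mj(mass_kg, energy_density_mj_per_kg):
    """Approximate heat released by complete combustion, in megajoules."""
    return mass_kg * energy_density_mj_per_kg

# Assumed example: 10 kg of seasoned firewood at roughly 15 MJ/kg.
heat_mj = wood_heat_mj(10.0, 15.0)
kwh = heat_mj / 3.6  # 1 kWh = 3.6 MJ
print(f"{heat_mj:.0f} MJ, about {kwh:.0f} kWh")  # 150 MJ, about 42 kWh
```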
Outputs:
Charcoal Charcoal is the dark grey residue consisting of impure carbon obtained by removing water and other volatile constituents from animal and vegetation substances. Charcoal is usually produced by slow pyrolysis, the heating of wood or other substances in the absence of oxygen. Charcoal can then be used as a fuel with a higher combustion temperature.
Outputs:
Wood gasogen Wood gas generator (gasogen): a bulky and heavy, but technically simple, device that transforms burning wood into a mix of molecular hydrogen (H2), carbon monoxide (CO), carbon dioxide (CO2), molecular nitrogen (N2) and water vapor (H2O). This gas mixture, known as "wood gas", "poor gas" or "syngas", is obtained from the combustion of dry wood in a reductive environment (low in oxygen) with a limited amount of atmospheric air, at temperatures of 900 °C, and can fuel an internal combustion engine.
Outputs:
In the period spanning World War I and World War II, because of the lack of oil, in many countries such as Italy, France, Great Britain and Sweden, several gasoline-powered cars were modified with the addition of a wood gas generator (a "gasogen"), a device powered by wood, coal, or burnable waste, able to produce (and purify) gas that could immediately, in the same vehicle, power the slightly modified, low-compression internal combustion engine of a standard car. The carburetor had to be replaced with an air-gas mixer. There were several setbacks, such as the great reduction of maximum speed and the need to drive using low gears while wisely dosing the amount of air. In modern cars modified with a wood gas generator, gas emissions (CO, CO2 and NOx) are lower than those of the same vehicle running on gasoline (keeping the same catalytic converter).
Outputs:
Methanol Methanol (the simplest alcohol) is a liquid at 25 °C, toxic and corrosive, and in basic organic chemistry books it is often called "the spirit of wood", since it has historically been obtained from wood. Rarely, when unwise wine-makers mix small chunks of wood and leaves with grapes, methanol can be found as a pollutant in the blend of water, ethanol and other substances derived from the grapes' fermentation. The best way to obtain methanol from wood is through syngas (CO, CO2, H2) produced by the anhydrous pyrolysis of wood, a method discovered by the ancient Egyptians. Methanol can be used as an oxygen-rich additive for gasoline. However, it is usually much cheaper to produce methanol from methane or from syngas. Methanol is the most important base material for industrial chemistry, where it is often used to make more complex molecules through halogenation and addition reactions.
Outputs:
Gas turbine Tanks The American M1 Abrams main battle tank is powered by a gas turbine of 1,500 hp (1,100 kW), which is also able to function with a 50% mix of wood powder and biodiesel, diesel fuel or kerosene. Its advantages over a turbo-diesel engine are its small size and light weight and the lack of a radiator (which gives an advantage against the effect of gun and cannon shots and missile strikes suffered in battle). A setback is the high fuel consumption, since the turbine engine is not able to work efficiently at low revolutions per minute, and during the march this engine consumes twice as much fuel as a modern turbo-diesel engine with intercooler and direct injection.
Outputs:
Construction Wood is relatively light in weight: its specific weight is less than 500 kg/m3, an advantage when compared with 2,000–2,500 kg/m3 for reinforced concrete or 7,800 kg/m3 for steel. Wood is also strong: for structural purposes, its efficiency (strength relative to weight) is comparable to that of steel.
Bridges, levees, microhydro, piers Wood is used to build bridges (as the Magere bridge in Amsterdam), as well as water and air mills, and microhydro generators for electricity.
Outputs:
Housing Hardwood is used as a material in wooden houses and other structures with a broad range of dimensions. In traditional homes wood is preferred for ceilings, doors, floorings and windows. Wooden frames were traditionally used for home ceilings, but they risk collapse during fires. The development of energy efficient houses, including the "passive house", has revamped the importance of wood in construction, because wood provides acoustic and thermal insulation, with much better results than concrete.
Outputs:
Earthquake resistant buildings In Japan, ancient buildings of relatively great height, such as pagodas, have historically shown an ability to resist earthquakes of high intensity, thanks to traditional building techniques employing elastic joints and to the excellent ability of wooden frames to deform elastically and absorb severe accelerations and compressive shocks. In 2006, Italian scientists from CNR patented a building system that they called "SOFIE", a seven-storey wooden building, 24 meters high, built by the "Istituto per la valorizzazione del legno e delle specie arboree" (Ivalsa) of San Michele all'Adige. In 2007 it was tested with the hardest Japanese antiseismic test for civil structures: a simulation of the Kobe earthquake (7.2 on the Richter scale), with the building placed on an enormous oscillating platform belonging to the NIED Institute, located in the Tsukuba science park, near the city of Miki in Japan. This Italian project employed very thin and flexible panels in glued laminated timber and, according to CNR researchers, could lead to the construction of much safer houses in seismic areas.
Outputs:
Shipbuilding One of the most enduring materials is the lumber of the Virginian southern live oak and of white oak; live oak in particular is 60% stronger than white oak and more resistant to moisture. As an example, the main structural material of USS Constitution, the world's oldest commissioned naval vessel afloat (launched in 1797), is white oak. Woodworking Woodworking is the activity or skill of making items from wood, and includes cabinet making (cabinetry and furniture), wood carving, joinery, carpentry, and woodturning. Millions of people earn a livelihood from woodworking. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Textual variants in the Gospel of John**
Textual variants in the Gospel of John:
Textual variants in the Gospel of John are the subject of the study called textual criticism of the New Testament. Textual variants in manuscripts arise when a copyist makes deliberate or inadvertent alterations to a text that is being reproduced. An abbreviated list of textual variants in this particular book is given in this article below.
Textual variants in the Gospel of John:
Origen, writing in the 3rd century, was one of the first who made remarks about differences between manuscripts of texts that were eventually collected as the New Testament. In John 1:28, he preferred "Bethabara" over "Bethany" as the location where John was baptizing (Commentary on John VI.40 (24)). "Gergeza" was preferred over "Geraza" or "Gadara" (Commentary on John VI.40 (24) – see Matthew 8:28).
Textual variants in the Gospel of John:
Most of the variations are not significant; common alterations include the deletion, rearrangement, repetition, or replacement of one or more words when the copyist's eye returns to a similar word in the wrong location of the original text. If their eye skips to an earlier word, they may create a repetition (an error of dittography). If their eye skips to a later word, they may create an omission. They may also rearrange words to retain the overall meaning without compromising the context. In other instances, the copyist may add text from memory from a similar or parallel text in another location. Otherwise, they may replace some text of the original with an alternative reading. Spellings occasionally change. Synonyms may be substituted. A pronoun may be changed into a proper noun (such as "he said" becoming "Jesus said"). John Mill's 1707 Greek New Testament was estimated to contain some 30,000 variants in its accompanying textual apparatus, which was based on "nearly 100 [Greek] manuscripts." Peter J. Gurry puts the number of non-spelling variants among New Testament manuscripts at around 500,000, though he acknowledges his estimate is higher than all previous ones.
Legend:
A guide to the sigla (symbols and abbreviations) most frequently used in the body of this article. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Microsoft Speech Server**
Microsoft Speech Server:
The Microsoft Speech Server is a product from Microsoft designed to allow the authoring and deployment of IVR applications incorporating Speech Recognition, Speech Synthesis and DTMF. The first version of the server was released in 2004 as Microsoft Speech Server 2004 and supported applications developed for U.S. English-speaking users. A later release (Speech Server 2004 R2) was released in 2005 and added support for North American Spanish and Canadian French as well as additional features and fixes. In August 2006, Microsoft announced that Speech Server 2007, originally slated to be released in May 2007, had been merged with the Microsoft Office Live Communications Server product line[1] to create Microsoft Office Communications Server.
Microsoft Speech Server:
The Speech Server 2007 components of Office Communications Server are also available separately in the free Speech Server 2007 Developers Edition. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chresonym**
Chresonym:
In biodiversity informatics, a chresonym is the cited use of a taxon name, usually a species name, within a publication. The term is derived from the Greek χρῆσις chresis meaning "a use" and refers to published usage of a name.
Chresonym:
The related term synonym technically refers to different names for the same object or concept. As noted by Hobart and Rozella B. Smith, zoological systematists had been using "the term (synonymy) in another sense as well, namely in reference to all occurrences of any name or set of names (usually synonyms) in the literature." Such a "synonymy" could include multiple listings, one for each place the author found a name used, rather than a summarized list of different synonyms. The term "chresonym" was created to distinguish this second sense of the term "synonym." The concept of synonymy is, furthermore, different in the zoological and botanical codes of nomenclature.
Chresonym:
A name that correctly refers to a taxon is further termed an orthochresonym while one that is applied incorrectly for a given taxon may be termed a heterochresonym.
Orthochresonymy:
Species names consist of a genus part and a species part to create a binomial name. Species names often also include a reference to the original publication of the name by including the author and sometimes the year of publication of the name. As an example, the sperm whale, Physeter catodon, was first described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae. Thus, the name may also be referenced as Physeter catodon Linnaeus 1758. That name was also used by Harmer in 1928 to refer to the species in the Proceedings of the Linnean Society of London, and of course it has appeared in numerous other publications since then. Taxonomic catalogues, such as Catalog of Living Whales by Philip Hershkovitz, may reference this usage with a Genus+species+authorship convention that may appear to indicate a new species (a homonym) when in fact it is referencing a particular usage of a species name (a chresonym). Hershkovitz, for example, refers to Physeter catodon Harmer 1928, which can cause confusion as this name+author combination really refers to the same name that Linnaeus first published in 1758.
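A minimal sketch of how a biodiversity-informatics catalogue might keep the original authorship separate from later cited usages, so that an entry such as "Physeter catodon Harmer 1928" is stored as a chresonym rather than mistaken for a new name; the class and field names are hypothetical, not taken from any real catalogue software:

```python
from dataclasses import dataclass, field

@dataclass
class TaxonName:
    """A name as originally published (genus + species + original author and year)."""
    genus: str
    species: str
    author: str
    year: int
    usages: list = field(default_factory=list)  # chresonyms: later cited uses of the name

    def cite(self, citing_author: str, citing_year: int) -> str:
        """Record a later usage of the same name (a chresonym) and return its citation string."""
        self.usages.append((citing_author, citing_year))
        return f"{self.genus} {self.species} {citing_author} {citing_year}"

sperm_whale = TaxonName("Physeter", "catodon", "Linnaeus", 1758)
print(sperm_whale.cite("Harmer", 1928))  # same name, merely a later usage, not a homonym
```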
Heterochresonymy:
Nepenthes rafflesiana, a species of pitcher plant, was described by William Jack in 1835. The name Nepenthes rafflesiana as used by Hugh Low in 1848 is a heterochresonym. Cheek and Jebb (2001) explain the situation thus: Low, ... accidentally, or otherwise, had described what we know as N. rafflesiana as Nepenthes × hookeriana and vice versa in his book "Sarawak, its Inhabitants and Productions" (1848). Masters was the first author to note this in the Gardeners' Chronicle..., where he gives the first full description and illustration of Nepenthes × hookeriana.
Heterochresonymy:
The description that Maxwell Tylden Masters provided in 1881 for the taxon that had previously been known to gardeners as Nepenthes hookeriana (an interchangeable form of the name for the hybrid Nepenthes × hookeriana) differs from Low's description. The International Code of Nomenclature for algae, fungi, and plants does not require that descriptions from so long ago include specification of a type specimen, and types can be chosen later to fit these old names. Since the descriptions differ, Low's and Masters' name have different types. Masters therefore created a later homonym, which, according to the rules of the code is illegitimate. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Electric field NMR**
Electric field NMR:
Electric field NMR (EFNMR) spectroscopy is NMR spectroscopy in which additional information about the sample being probed is obtained from the effect of a strong, externally applied electric field on the NMR signal. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**House call**
House call:
A house call is a medical consultation performed by a doctor or other healthcare professional visiting the home of a patient or client, instead of the patient visiting the doctor's clinic or hospital. In some locations, families used to pay dues to a particular practice to underwrite house calls.
History:
In the early 1930s, house calls by doctors were 40% of doctor-patient meetings. By 1980, it was only 0.6%. Reasons include increased specialization and technology. In the 1990s, team home care, including physician visits, was a small but growing field in health care, for frail older people with chronic illnesses.
History:
The reasons for fewer house calls include concerns about providing low-overhead care in the home, time inefficiency, and inconvenience. Yet there are more and more doctors who like the idea of no office overhead. House calls can also provide safe access to care for people who are ill. Today, house calls may be making a revival among the wealthy through concierge telemedicine and mobile apps.
Canada:
In 2012 as part of its Action Plan for Healthcare the province of Ontario actively expanded funding for access to house calls with its primary focus being on seniors and those with physical limitations making it difficult for travel outside the home. Residents of Ontario with valid Ontario Health Insurance Plan cards are able to take advantage of the house call system, and arrange for appointments with physicians at their home. Currently, this service is only available in Toronto.
United States:
In the United States, leaders such as George Washington were known to receive house calls; on his deathbed in 1799, President Washington received a house call shortly before his death. Presently, the United States' leadership retains a Physician to the President on staff.
Midwifery The US rate of out-of-hospital birth has remained steady at 1% of all births since 1989, with data from 2007 showing that 27.3% of the home births since 1989 took place in a free-standing birth center and 65.4% in a residence.
Outcomes A 2007 randomized trial of in-home palliative care demonstrated increased patient satisfaction and decreased costs.
USSR and post-Soviet Russia:
In the Soviet Union the national government established a nationwide system of free outpatient polyclinics, each health centre covering a part of a city or a neighbourhood, and this system has been preserved in post-Soviet times. Each of the roughly 10 to 20 general practitioners (therapeuts) working in a state outpatient health centre serves his patients on weekdays both during his 3–4 reception hours and during another 3–4 hours of house visits (which become most numerous during flu and cold epidemics and can reach 40 per day) on his assigned block of streets with a standard number of residents. Unlike in Soviet times, however, each patient now has to produce, apart from his citizen ID (a passport with a place-of-residence stamp showing his registration in the clinic's precinct), a uniform policy of obligatory medical insurance, issued by one of a number of medical insurance companies and financed either by employers, for working people, or by the state, through regional funds of medical insurance, for children as well as old-age and disability pensioners.
USSR and post-Soviet Russia:
The purpose of such a visit is primary diagnosis and the prescription of treatment and a mode of conduct, as well as ordering blood, urine and other tests to be carried out at the polyclinic. The doctor also supplies the patient with a sick leave from work or study for a number of days, and the leave is to be closed by the same doctor or his/her substitute and sealed at the clinic on the patient's recovery and checkout. If need be, the GP may arrange a visit to the sick person by one of the specialist physicians from his/her clinic, or by his/her nurse for giving injections.
USSR and post-Soviet Russia:
There are two identical state systems of outpatient clinics running in parallel – one for adults and one for children.
With the rise of private enterprise since 1990, city dwellers may place a phone order for a house call from a private medical facility (to be paid for out of patient's own money). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Creative consumer**
Creative consumer:
A creative consumer is defined as any "individual or group who adapt, modify, or transform a proprietary offering". Traditional consumers simply use and consume products and services; creative consumers do the same but also change them in some way.
Examples include: The hacker George Hotz unlocked the original iPhone and hacked Sony's PlayStation 3, then gave away these hacks for free.
Jose Avila made FedEx furniture for his apartment exclusively from Federal Express boxes.
Creative consumer:
Jim Hill, a devoted Disney fan, designed and delivered guided but unauthorized tours of Disneyland. In 2005, The Economist published an article about the future of innovation, 'The rise of the creative consumer'. This article explained that some companies rely on identifying and leveraging the innovation potential of creative consumers. However, many companies may feel threatened or upset by the actions of creative consumers. Hotz, Avila and Hill all received negative, and in some cases threatening, reactions from the companies whose products and services they had repurposed.
Creative consumer:
Berthon, et al., proposed that companies can take four general stances on creative consumers. These stances are determined by whether the company's actions toward these creative consumers are active or passive and whether the company's attitude towards creative consumers is either positive or negative.
The four stances are: the resist stance (active/negative): restrain consumer creativity; the discourage stance (passive/negative): tolerate or ignore consumer creativity; the encourage stance (passive/positive): don't actively facilitate consumer creativity; and the enable stance (active/positive): actively facilitate consumer creativity | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Green box (container)**
Green box (container):
The Green Box (GB) is a large metal container, designed and utilized for free public disposal and recycling of electronic waste. It is produced and sold by an eponymous California company.
Company history:
Matt Miller of Huntington Beach, California created The Green Box in 2011. It is a 7-by-5-by-5-foot (2.1 m × 1.5 m × 1.5 m) box that is placed on private or public property within cities wherein businesses and residents unload their old and broken electronics 24 hours a day, 7 days a week. Green Boxes were first released to the public for beta-testing in January 2012, using bins manufactured and operated by Orange County-based Green Box Electronic Recyclers, Inc. (Green Box E.R.I.), in their Huntington Beach and other test markets. In March 2012, Green Box E.R.I. began expanding Green Box numbers into many cities throughout southern California including Costa Mesa, Laguna Beach, Westminster, Mission Viejo, Newport Beach, Tustin, Seal Beach and Sunset Beach.
Company history:
The bins are highly recognizable in part because of their trademarked name and electric green coloration. Located in high traffic areas within multiple city regions of southern California, Green Boxes are exclusively serviced and managed by Green Box Electronic Recyclers, Inc. a California Corporation.
Design and function:
The shade of green chosen for Green Boxes is similar to that of a green highlighter marker. CEO Miller named the custom color ‘electric green’ and says his is the first company in the United States to apply this color to an unattended collection box. Green Box E.R.I. possesses a U.S. trademark related to the green coloration.
Design and function:
The deposit opening on a Green Box is large enough to fit most electronic waste including computers, DVD players, flat-screen computer monitors, LCDs, copiers, laptops, cell phones, musical devices such as iPods, household printers, fax machines, mice, image scanners, servers, digital cameras, calculators, electronic boards, cords and cables, CPUs, routers, stereo equipment, medical equipment, video cameras, VCRs, disc players and keyboards. Deposits are made by placing electronic waste on a large platform, and lifting up on the handle. Items fall downward into the box. The graphics on a Green Box include the company’s phone number, website, logo, a data destruction statement, a warning against dumping, a ‘No CRT’ label, a list of items accepted, the company’s Facebook and Twitter handles, the words ‘RECYCLE OLD ELECTRONICS’, and ‘FREE TO THE PUBLIC’, and icons of various electronic gadgets. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lymphotoxin**
Lymphotoxin:
Lymphotoxin is a member of the tumor necrosis factor (TNF) superfamily of cytokines, whose members are responsible for regulating the growth and function of lymphocytes and are expressed by a wide variety of cells in the body. Lymphotoxin plays a critical role in developing and preserving the framework of lymphoid organs and of gastrointestinal immune responses, as well as in the activation signaling of both the innate and adaptive immune responses. Lymphotoxin alpha (LT-α, previously known as TNF-beta) and lymphotoxin beta (LT-β), the two forms of lymphotoxin, each have distinctive structural characteristics and perform specific functions.
Structure and function:
LT-α and LT-β subunits assemble into homotrimers or heterotrimers. LT-α binds with LT-β to form the membrane-bound heterotrimers LT-α1-β2 and LT-α2-β1, which are commonly referred to as lymphotoxin beta. LT-α1-β2 is the most prevalent form of lymphotoxin beta. LT-α also forms a homotrimer, LT-α3, which is secreted by activated lymphocytes as a soluble protein. Lymphotoxin is produced by lymphocytes upon activation and is involved with various aspects of the immune response, including inflammation and activation signaling. Upon binding to the LTβ receptor, LT-αβ transmits signals leading to proliferation, homeostasis and activation of tissue cells in secondary lymphoid organs through induced expression of chemokines, major histocompatibility complex, and adhesion molecules. LT-αβ, which is produced by activated TH1, CD8+ T cells, and natural killer (NK) cells, is known to have a major role in the normal development of Peyer's patches. Studies have found that mice with an inactivated LT-α gene (LTA) lack developed Peyer's patches and lymph nodes. In addition, LT-αβ is necessary for the proper formation of the gastrointestinal immune system.
Structure and function:
Receptor binding and signaling activation In general, lymphotoxin ligands are expressed by immune cells, while their receptors are found on stromal and epithelial cells. The lymphotoxin homotrimer and heterotrimers are specific to different receptors. The LT-αβ complexes are the primary ligands for the lymphotoxin beta receptor (LTβR), which is expressed on tissue cells in multiple lymphoid organs, as well as on monocytes and dendritic cells. The soluble LT-α homotrimer binds to TNF receptors 1 and 2 (TNFR-1 and TNFR-2) and to the herpesvirus entry mediator, expressed on T cells, dendritic cells, macrophages, and epithelial cells. There is also evidence that LTα3 signaling through TNFRI and TNFRII contributes to the regulation of IgA antibody in the gut. Lymphotoxin delivers a variety of activation signals in the innate immune response. LT-α is necessary for the expression of LT-α1-β2 on the cell surface, as LT-α aids in the movement of LT-β to the cell surface to form LT-α1-β2. In the LT-α mediated signaling pathway, LT-α binds with LT-β to form the membrane-bound LT-α1-β2 complex. Binding of LT-α1-β2 to the LT-β receptor on the target cell can activate various signaling pathways in the effector cell, such as the NF-κB pathway, a major signaling pathway that results in the release of additional pro-inflammatory cytokines essential for the innate response. The binding of lymphotoxin to LT-β receptors is essential for the recruitment of B cells and cytotoxic (CD8+) T cells to specific lymphoid sites to allow the clearing of antigen. Signaling of the LT-β receptors can also induce the differentiation of NK (natural killer) and NK-T cells, which are key players in the innate immune defense and in antiviral responses.
Carcinogenic interactions:
Lymphotoxin has cytotoxic properties that can aid in the destruction of tumor cells and promote the death of cancerous cells. The activation of LT-β receptors causes an up-regulation of adhesion molecules and directs B and T cells to specific sites to destroy tumor cells. Studies using mice with an LT-α knockout found increased tumor growth in the absence of LT-αβ. However, some studies using cancer models have found that a high expression of lymphotoxin can lead to increased growth of tumors and cancerous cell lines. The signaling of the LT-β receptor may induce the inflammatory properties of specific cancerous cell lines, and the elimination of LT-β receptors may hinder tumor growth and lower inflammation. Mutations in the regulatory factors involved in lymphotoxin signaling may increase the risk of cancer development. One major instance is the continuous initiation of the NF-κB pathway due to an excessive binding of the LT-α1-β2 complex to LT-β receptors, which can lead to specific cancerous conditions including multiple myeloma and melanoma. As excessive inflammation can result in cell damage and a higher risk of the growth of cancer cells, mutations that affect the regulation of LT-α pro-inflammatory signaling pathways can increase the potential for cancer and tumor cell development. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Furniture retailer**
Furniture retailer:
A furniture retailer, furniture store or furniture shop is a retail business that sells furniture and related products. Furniture retailers usually sell general furniture (such as beds, dining tables, sofas and wardrobes), seats and upholstered suites (such as couches or sofas and chairs), and specialised items produced to commission. They may sell a range of styles to suit different homes and personal tastes, or specialise in particular styles such as retro furniture. Many stores also sell outdoor or garden furniture, such as coffee tables, seats and couches, which are designed to be waterproof, rust-resistant and weather-proof rather than to follow modern indoor design trends. Furniture retail sales directly correlate with the state of the economy and housing market. When interest rates are lower and housing sales are higher, as in the United States in the early 1990s, sales of household and garden furniture increase. When business conditions are positive, as in the United States in the late 1990s, sales of furniture for offices, hotels and restaurants increase.
History:
The sector dates back to the middle of the 19th century, when furniture sellers in North America and Europe began buying furniture from manufacturers at wholesale prices and selling it to consumers in showrooms at higher prices. Many early showrooms had workshops to build specialty items. By the early 20th century, furniture production in the United States was concentrated in major manufacturing centers such as Jamestown, New York, High Point, North Carolina, and Grand Rapids, Michigan. However, hand-crafted items remained in demand and furniture factories remained small. World War II created a global shortage of wood products, hampering the production of furniture. The sale of mass-produced furniture in showrooms became more common in the second half of the twentieth century. The introduction of new materials, machinery, adhesives and finishes made it more difficult to distinguish commercially produced from handcrafted furniture. Many furniture retailers formed exclusive relationships with furniture manufacturers.
By market:
North America (United States, Canada); Europe (United Kingdom, Serbia); Asia (India); Oceania; Middle East and Africa (The One) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Positively separated sets**
Positively separated sets:
In mathematics, two non-empty subsets A and B of a given metric space (X, d) are said to be positively separated if the infimum of distances between their points is strictly positive: inf { d(a, b) : a ∈ A, b ∈ B } > 0.
Positively separated sets:
(Some authors also specify that A and B should be disjoint sets; however, this adds nothing to the definition, since if A and B have some common point p, then d(p, p) = 0, and so the infimum above is clearly 0 in that case.) For example, on the real line with the usual distance, the open intervals (0, 2) and (3, 4) are positively separated, while (3, 4) and (4, 5) are not. In two dimensions, the graph of y = 1/x for x > 0 and the x-axis are not positively separated. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
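As a numerical illustration (not a proof) of the interval examples above, one can approximate the infimum by sampling points from each interval and taking the minimum pairwise distance; the sampling step used here is an arbitrary choice for the illustration:

```python
# Approximate inf { d(a, b) : a in A, b in B } for intervals on the real line
# by sampling and taking the minimum pairwise distance.

def frange(start, stop, step=0.01):
    """Sample points of the open interval (start, stop)."""
    xs, x = [], start + step
    while x < stop:
        xs.append(x)
        x += step
    return xs

def approx_separation(sample_a, sample_b):
    """Smallest distance between two finite samples of points."""
    return min(abs(a - b) for a in sample_a for b in sample_b)

# (0, 2) and (3, 4): the infimum is 1, so the sets are positively separated.
print(approx_separation(frange(0, 2), frange(3, 4)))   # ~1.0

# (3, 4) and (4, 5): points approach 4 from both sides, the infimum is 0,
# so the sets are not positively separated (the estimate shrinks with the step).
print(approx_separation(frange(3, 4), frange(4, 5)))   # ~0.02
```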
**BMPR1B**
BMPR1B:
Bone morphogenetic protein receptor type-1B also known as CDw293 (cluster of differentiation w293) is a protein that in humans is encoded by the BMPR1B gene.
Function:
BMPR1B is a member of the bone morphogenetic protein (BMP) receptor family of transmembrane serine/threonine kinases. The ligands of this receptor are BMPs, which are members of the TGF-beta superfamily. BMPs are involved in endochondral bone formation and embryogenesis. These proteins transduce their signals through the formation of heteromeric complexes of 2 different types of serine (threonine) kinase receptors: type I receptors of about 50-55 kD and type II receptors of about 70-80 kD. Type II receptors bind ligands in the absence of type I receptors, but they require their respective type I receptors for signaling, whereas type I receptors require their respective type II receptors for ligand binding. The BMPR1B receptor plays a role in the formation of middle and proximal phalanges.
Clinical significance:
Mutations in this gene have been associated with primary pulmonary hypertension. In the chick embryo, it has been shown that BMPR1B is found in precartilaginous condensations. BMPR1B is the major transducer of signals in these condensations, as demonstrated in experiments using constitutively active BMPR1B receptors. BMPR1B is a more effective transducer of GDF5 than BMPR1A. Unlike BMPR1A null mice, which die at an early embryonic stage, BMPR1B null mice are viable. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Binaca (breath spray)**
Binaca (breath spray):
Binaca is an American brand of breath spray distributed by Ranir, LLC, a subsidiary of Perrigo. The sprays contain denatured alcohol and isobutane, the latter used as a propellant.
History:
In 1971, Binaca promoted its breath freshener products by selling a recipe booklet titled The Antisocial Cookbook for $1, which contains 150 recipes "extolling the virtues of garlic, onions, cheese [...]" and other ingredients known to cause breath odors; the reasoning for this was that Binaca's breath products would "make you socially acceptable" after eating such dishes. In 1974, Binaca was estimated to be worth $5 million. That year, Air Wick was acquired by Ciba-Geigy, and Binaca was moved into Air Wick's consumer products unit.
Safety:
Alcohol misuse In October 1993, articles in The Boston Globe and The Tribune reported that children and teenagers were supposedly inhaling Binaca in order to induce intoxication. The administration of Los Osos Middle School in Los Osos, California, prohibited students from possessing Binaca, citing safety concerns. Then-principal Greg Pruitt stated, "The kids were misusing it, spraying other kids and just horsing around. [...] Some years it's frogs and butterflies. One year it was Silly String. This year it was Binaca." Some stores and pharmacies in the Los Osos and Boston areas began storing Binaca products behind the counter and refusing to sell them to minors. In the season 4 episode 'The Opera' of the television sitcom Seinfeld, Elaine sprays 'Crazy' Joe Davola's face with Binaca. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Near-close near-front rounded vowel**
Near-close near-front rounded vowel:
The near-close front rounded vowel, or near-high front rounded vowel, is a type of vowel sound, used in some spoken languages.
The symbol in the International Phonetic Alphabet that represents this sound is ⟨ʏ⟩, a small capital version of the Latin letter y, and the equivalent X-SAMPA symbol is Y.
Near-close near-front rounded vowel:
Handbook of the International Phonetic Association defines [ʏ] as a mid-centralized (lowered and centralized) close front rounded vowel (transcribed [y̽] or [ÿ˕]), and the current official IPA name of the vowel transcribed with the symbol ⟨ʏ⟩ is near-close near-front rounded vowel. However, acoustic analysis of cardinal vowels as produced by Daniel Jones and John C. Wells has shown that basically all cardinal front rounded vowels (so not just [y] but also [ø, œ, ɶ]) are near-front (or front-central) in their articulation, so [ʏ] may be just a lowered cardinal [y] ([y˕]), a vowel that is intermediate between cardinal [y] and cardinal [ø]. In many languages that contrast close, near-close and close-mid front rounded vowels, there is no appreciable difference in backness between them. In some transcriptions, the vowel is transcribed with ⟨y⟩ or ⟨ø⟩. When that is the case, this article transcribes it with the symbols ⟨y˕⟩ (a lowered ⟨y⟩) and ⟨ø̝⟩ (a raised ⟨ø⟩), respectively. ⟨ʏ⟩ implies too weak a rounding in some cases (specifically in the case of the vowels that are described as tense in Germanic languages, which are typically transcribed with ⟨øː⟩), which would have to be specified as ⟨ʏ̹⟩.
Near-close near-front rounded vowel:
In some languages, however, ⟨ʏ⟩ is used to transcribe a vowel that is as low as close-mid but still fits the definition of a lowered and centralized (or just lowered) cardinal [y]. It occurs in German Standard German as well as some dialects of English (such as Estuary), and it can be transcribed with the symbol ⟨ʏ̞⟩ (a lowered ⟨ʏ⟩) in narrow transcription. For the close-mid front rounded vowel that is not usually transcribed with the symbol ⟨ʏ⟩ (or ⟨y⟩), see close-mid front rounded vowel.
Near-close near-front rounded vowel:
In most languages, the rounded vowel is pronounced with compressed lips (in an exolabial manner). However, in a few cases, the lips are protruded (in an endolabial manner), such as in Swedish, which contrasts the two types of rounding.
Transcription:
The near-close front rounded vowel is transcribed with ⟨y⟩, ⟨ʏ⟩ and ⟨ø⟩ in the world's languages. However, when the Latin letters ⟨y⟩ or ⟨ø⟩ are used for this vowel, ⟨ʏ⟩ may still be used for phonological reasons for a vowel that is lower than near-close, potentially leading to confusion. This is the case in several Germanic language varieties, as well as in some transcriptions of Shanghainese.
Transcription:
In the following table, the difference between compressed and protruded vowels is ignored, except in the case of Swedish. Short vowels transcribed with ⟨ʉ⟩, ⟨ʏ⟩, ⟨ɵ⟩ and ⟨œ⟩ in broad transcription are assumed to have a weak rounding in most cases.
Because of that, IPA transcriptions of Limburgish dialects on Wikipedia utilize the symbol ⟨ɵ⟩ instead of ⟨ʏ⟩, following the symbol chosen for the corresponding Standard Dutch vowel by Rietveld & Van Heuven (2009).
Near-close front compressed vowel:
The near-close front compressed vowel is typically transcribed in IPA simply as ⟨ʏ⟩, and that is the convention used in this article. There is no dedicated diacritic for compression in the IPA. However, the compression of the lips can be shown with the letter ⟨β̞⟩ as ⟨ɪ͡β̞⟩ (simultaneous [ɪ] and labial compression) or ⟨ɪᵝ⟩ ([ɪ] modified with labial compression). The spread-lip diacritic ⟨ ͍ ⟩ may also be used with a rounded vowel letter ⟨ʏ͍⟩ as an ad hoc symbol, though technically 'spread' means unrounded.
Near-close front compressed vowel:
The close-mid front compressed vowel can be transcribed ⟨ɪ̞͡β̞⟩, ⟨ɪ̞ᵝ⟩ or ⟨ʏ͍˕⟩.
Features Its vowel height is near-close, also known as near-high, which means the tongue is not quite so constricted as a close vowel (high vowel).
Its vowel backness is front, which means the tongue is positioned forward in the mouth without creating a constriction that would be classified as a consonant. Rounded front vowels are often centralized, which means that often they are in fact near-front.
Its roundedness is compressed, which means that the margins of the lips are tense and drawn together in such a way that the inner surfaces are not exposed. The prototypical [ʏ] has a weak compressed rounding, more like [œ] than the neighboring cardinal vowels.
Occurrence Because front rounded vowels are assumed to have compression, and few descriptions cover the distinction, some of the following may actually have protrusion. Vowels transcribed with ⟨y˕⟩ and ⟨ø̝⟩ may have a stronger rounding than the prototypical value of ⟨ʏ⟩.
Near-close front protruded vowel:
Catford notes that most languages with rounded front and back vowels use distinct types of labialization, protruded back vowels and compressed front vowels. However, a few languages, such as Scandinavian languages, have protruded front vowels. One of them, Swedish, even contrasts the two types of rounding in front vowels as well as height and duration. As there are no diacritics in the IPA to distinguish protruded and compressed rounding, the old diacritic for labialization, ⟨◌̫⟩, will be used here as an ad hoc symbol for protruded front vowels. Another possible transcription is ⟨ʏʷ⟩ or ⟨ɪʷ⟩ (a near-close front vowel modified by endolabialization), but that could be misread as a diphthong.
Near-close front protruded vowel:
The close-mid front protruded vowel can be transcribed ⟨ʏ̫˕⟩, ⟨ʏ̞ʷ⟩ or ⟨ɪ̞ʷ⟩.
For the close-mid front protruded vowel that is not usually transcribed with the symbol ⟨ʏ⟩ (or ⟨y⟩), see close-mid front protruded vowel.
Acoustically, this sound is "between" the more typical compressed near-close front vowel [ʏ] and the unrounded near-close front vowel [ɪ].
Features Its vowel height is near-close, also known as near-high, which means the tongue is not quite so constricted as a close vowel (high vowel).
Its vowel backness is front, which means the tongue is positioned forward in the mouth without creating a constriction that would be classified as a consonant. Rounded front vowels are often centralized, which means that often they are in fact near-front.
Its roundedness is protruded, which means that the corners of the lips are drawn together, and the inner surfaces exposed. The prototypical [ʏ] has a weak rounding (though it is compressed, rather than protruded), more like [œ] than the neighboring cardinal vowels.
Occurrence | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Metamucil**
Metamucil:
Metamucil is a fiber supplement. Introduced in 1934 by G. D. Searle & Company, Metamucil was acquired by Procter & Gamble in 1985. The name is a combination of the Greek word for change (meta) and the class of fiber that it utilizes (mucilage). In its early years, Metamucil achieved sporadic drug-store distribution as a "behind the counter" brand. Since 1974, the brand was also marketed to consumers by print and TV advertising and became available in food outlets. Flavored versions were added in 1979.
Products:
The brand is sold as powdered drink mixes, capsules and wafers in a variety of flavors. Metamucil contains psyllium seed husks as the active ingredient. It is manufactured in Phoenix, Arizona, by Procter & Gamble. When first marketed to consumers in 1974, Metamucil was marketed as a laxative. The advertising slogan at that time was "If not nature, then Metamucil". Procter & Gamble sought to make Metamucil a household name by advertising in magazines and on television, using the claim "All fiber is not created equal". The target group was older people who are more likely to suffer from constipation.
Products:
On October 4, 2013, Procter & Gamble partnered with Tony Danza to organize the “Do More Than You Think” contest to promote and fund health and wellness charities. The main prize was the chance to select the charity that would receive a $100,000 donation from Procter & Gamble. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Amber moon**
Amber moon:
An amber moon is a cocktail containing Tabasco sauce, a raw egg, and whiskey or vodka. It is typically considered a "hair of the dog" hangover remedy (an alcoholic drink consumed for the purpose of relieving a hangover), though there is no scientific evidence showing that drinking alcohol is effective as a treatment for a hangover. It is similar to a prairie oyster, another traditional hangover remedy drink made with a raw egg, though a prairie oyster does not typically contain alcohol. The amber moon is featured in the 1974 mystery film Murder on the Orient Express, based on the 1934 novel by Agatha Christie. In the film, the butler Mr. Edward Beddoes, played by John Gielgud, brings this drink in the morning to his employer, Mr. Samuel Ratchett. Beddoes knocks on the door of the dead man's train compartment and announces, "It's me sir, Beddoes, with your pick-me-up. Your Amber Moon, Mr. Ratchett." Beddoes is later questioned about the death of Ratchett by the detective Hercule Poirot and relates, "His breakfast was his Amber Moon. He never rose until it had had its full effect." The amber moon in the film was prepared with vodka instead of whisky.
Amber moon:
Clark Gable makes himself an amber moon in the movie Comrade X.
An amber moon is also prepared for Pubert in the movie Addams Family Values.
Warren Beatty consumes this drink several times as the leading man of the 1971 western McCabe and Mrs. Miller.
An amber moon can also be seen in the streaming television series Russian Doll. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Triopoly (board game)**
Triopoly (board game):
Triopoly is a board game by Reveal Entertainment. It plays in much the same way as Monopoly, except that it has three tracks of properties instead of one, and additional buildings which may be constructed on squares. The tracks of the board are arranged concentrically, the middle two being slightly raised to form a ziggurat.
Triopoly (board game):
The game was invented in 1989 by Jeffrey Berndt, drawing inspiration from a five-hour game of Monopoly and the Tri-Dimensional Chess game he had seen in an episode of Star Trek the following day. The game was co-designed and illustrated by Jeremy Parrish and Chris Hornbaker. The game was licensed to Reveal Entertainment, Inc., a company co-founded by Berndt, Maynard and Judy Gulley and Borden Duffel. The company raised funds to publish the game in 1997. The game won several awards and was named one of the "Best New Games" by Good Housekeeping and Games Magazine. The game can be played on one level, two levels or three levels, allowing players to decide the length of game they wish to play. Travel spaces allow players to move up and down levels with 'mini airline' tickets; or, an elevator allows players to choose the level they desire to play. Players improve properties by building gas stations, shopping malls and skyscrapers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Aminocyclopropanecarboxylate oxidase**
Aminocyclopropanecarboxylate oxidase:
In enzymology, an aminocyclopropanecarboxylate oxidase (EC 1.14.17.4) is an enzyme that catalyzes the chemical reaction 1-aminocyclopropane-1-carboxylate + ascorbate + O2 ⇌ ethylene + cyanide + dehydroascorbate + CO2 + 2 H2O. The 3 substrates of this enzyme are 1-aminocyclopropane-1-carboxylate, ascorbate, and O2, whereas its 5 products are ethylene, cyanide, dehydroascorbate, CO2, and H2O.
Aminocyclopropanecarboxylate oxidase:
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with reduced ascorbate as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is 1-aminocyclopropane-1-carboxylate oxygenase (ethylene-forming). Other names in common use include ACC oxidase and ethylene-forming enzyme.
Structural studies:
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1W9Y and 1WA6.
Reaction Mechanism:
Mechanistic and structural studies support binding of ACC and oxygen to an iron center located in the active site of ACC oxidase. The ring-opening of bound ACC is believed to result in the elimination of ethylene together with an unstable intermediate, cyanoformate ion, which then decomposes to cyanide ion and carbon dioxide. Cyanide ion is a known deactivating agent for iron-containing enzymes, but the cyanoformate ion intermediate is believed to play a vital role to carry potentially toxic cyanide away from the active site of ACC oxidase. Cyanoformate was recently identified in condensed media as a tetraphenylphosphonium salt with a weak carbon-carbon bond. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Formylmethanofuran dehydrogenase**
Formylmethanofuran dehydrogenase:
In enzymology, a formylmethanofuran dehydrogenase (EC 1.2.99.5) is an enzyme that catalyzes the chemical reaction: formylmethanofuran + H2O + acceptor ⇌ CO2 + methanofuran + reduced acceptor. The 3 substrates of this enzyme are formylmethanofuran, H2O, and acceptor, whereas its 3 products are CO2, methanofuran, and reduced acceptor.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the aldehyde or oxo group of donors with other acceptors. The systematic name of this enzyme class is formylmethanofuran:acceptor oxidoreductase. This enzyme is also called formylmethanofuran:(acceptor) oxidoreductase. It participates in folate biosynthesis, and it has two cofactors: molybdenum and pterin.
Discovery and biological occurrence:
Formylmethanofuran (formyl-MFR) dehydrogenase is found in methanogenic archaea which are capable of synthesizing methane using substrates such as carbon dioxide, formate, methanol, methylamines, and acetate. In 1967, a reliable technique for the mass culture of methanogens on hydrogen and carbon dioxide was developed. It became obvious that coenzymes are involved in the biochemistry of methanogens once kilogram-scale quantities of cells could be grown and used for biochemical studies. The reduction of carbon dioxide (CO2) with hydrogen by Methanobacterium thermoautotrophicum is the most studied system, and the metabolism of M. thermoautotrophicum involves almost all of the reactions in methanogenesis. A molybdenum- and tungsten-containing formyl-MFR dehydrogenase was isolated from M. thermoautotrophicum when proteins were purified from cell extracts depleted of soluble cofactors; it was not known to have existed prior to the experiment. MFR was required to generate methane from CO2 in cell extracts depleted of soluble cofactors. Formyl-MFR dehydrogenase was also isolated from Methanosarcina barkeri and Archaeoglobus fulgidus cell extracts. A molybdenum-containing formyl-MFR dehydrogenase was isolated from Methanothermobacter wolfeii.
Structure:
In 2016, the X-ray structure of formylmethanofuran dehydrogenase was determined. Formyl-MFR dehydrogenase contains two heterohexamers of the protein subunits FwdABCDFG, which associate as a symmetric dimer with C2 rotational symmetry. The enzyme also contains 23 and 46 iron-sulfur cubane clusters in the dimer and tetramer forms, respectively. The subunit FwdA contains two zinc atoms, analogous to dihydroorotase. It also contains an N6-carboxylysine, zinc ligands, and an aspartate that is crucial to catalysis. Meanwhile, the subunit FwdF is composed of four similar T-shaped ferredoxin domains. The T-shaped iron-sulfur clusters in the FwdF subunit link up to form a path from the outside edge to the inside core. FwdBD has a redox-active tungsten. The tungsten in FwdBD is coordinated by four dithiolene thiolates; together with the thiolate of Cys118 and an organic sulfide ligand, six sulfurs coordinate the tungsten of the tungstopterin at the active site of FwdB. The tungsten is coordinated in a distorted octahedral geometry. A binding site suitable for carbon dioxide (CO2) is occupied by solvent in the X-ray crystal structure, though not in vivo. The binding site lies between Cys118, His119, Arg228, and the sulfur-tungsten ligand.
Methanogenesis catalysis:
Formyl-MFR dehydrogenase catalyzes the methanogenesis reaction by reducing carbon dioxide (CO2) to form carboxy-MFR. The structural data obtained from the X-ray structure suggests carbon dioxide (CO2) is reduced to formate (E0’ = -430 mV) at FwdBD's tungstopterin active site to carboxy-MFR by a 4Fe-4S ferredoxin (E = ~ – 500 mV) located 12.4 Å away. Then, it reduces the carboxy-MFR to MFR at its tungsten or molybdenum active site.
Methanogenesis catalysis:
Proposed mechanism A 43 Å long hydrophilic tunnel supports the proposed two-step scenario of CO2 reduction and fixation. This hydrophilic tunnel runs between the FwdBD and FwdA active sites and is suited to transporting formic acid and formate [pKa = 3.75]. The tunnel has a bottleneck appearance, consisting of a narrow passage and a wide solvent-filled cavity located in front of each active site. Arg228 of FwdBD and Lys64 of FwdA control gate operation at the bottlenecks. The outer clusters of the two [4Fe-4S] cluster chains in the branched outer arm of the FwdF subunits funnel electrons to the tungsten center. Carbon dioxide is then reduced to formate (while the tungsten is oxidized, its oxidation state going from +4 to +6) when it enters the catalytic compartment through FwdBD's hydrophobic tunnel. Formic acid or formate diffuses to FwdA's active site via the hydrophilic tunnel. Once it has diffused to the active site, it is condensed with MFR at the binuclear zinc center. Pumping formate into the tunnel is proposed as the way to achieve exergonic reduction of CO2 to formate with reduced ferredoxin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MountainsMap**
MountainsMap:
Mountains is an image analysis and surface metrology software platform published by the company Digital Surf. Its core is micro-topography, the science of studying surface texture and form in 3D at the microscopic scale. The software is dedicated to profilometers, 3D light microscopes ("MountainsMap"), scanning electron microscopes ("MountainsSEM") and scanning probe microscopes ("MountainsSPIP").
Integration by instrument manufacturers:
The editor's main distribution channel is OEM, through the integration of MountainsMap by most profiler and microscope manufacturers, usually under their respective brands; it is sold for instance as: Hitachi map 3D on Hitachi's scanning electron microscopes, TopoMAPS on Thermo Fisher Scientific (FEI division) scanning electron microscopes, TalyMap, TalyProfile, or TalyMap Contour on Taylor-Hobson's profilometers, PicoImage on Keysight's AFM's, HommelMap on Jenoptik's profilometers (Hommel-Etamic line of products), MountainsMap - X on Nikon's microscopes, Apex 2D or Apex 3D on KLA-Tencor's profilometers, Leica Map on Leica's microscopes, ConfoMap on Carl Zeiss' microscopes, MCubeMap on Mitutoyo profilometers.
Integration by instrument manufacturers:
Vision 64 Map on Bruker optical profilometers, AttoMap on cathodoluminescence-analysis-dedicated scanning electron microscopes from AttoLight, SmileView Map on JEOL's scanning electron microscopes, SensoMap on Sensofar's optical profilometers.
Compatibility:
Mountains native file format is the SURF format (.SUR extension).
Mountains is compatible with most instruments of the market capable of supplying images or topography.
Mountains complies with the ISO 25178 standard on 3D surface texture evaluation and offers the profile and areal filters defined in ISO 16610.
The metrology reports are generated in proprietary format but can also be exported to PDF and RTF formats.
Mountains is available in English, Brazilian Portuguese, simplified Chinese, French, German, Italian, Japanese, Korean, Polish, Russian and Spanish.
Data types ("studiables") accepted:
Vocabulary: x, y, z refer to space coordinates, t to time, and i to an intensity. A = f(B) means A is a function of B, where B usually refers to space coordinates and A to a scalar.
In Mountains's vocabulary, these data types are referred to as "studiables". Most studiables have a dynamic (time-series) equivalent, e.g., the surface studiable z=f(x,y) used to study topography has an associated studiable Series of Surfaces z=f(x,y,t) used to study the evolution of topography (e.g., heat distortion of a surface).
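A rough illustration of this static/dynamic pairing of studiables, with a height map z=f(x,y) and its time-series counterpart z=f(x,y,t); the type names and callable-based representation below are illustrative assumptions, not the software's actual SUR format or API:

```python
from typing import Callable, List

# A static surface studiable: height z as a function of the space coordinates x, y.
Surface = Callable[[float, float], float]

# Its dynamic (time-series) counterpart: one surface per time step t.
SeriesOfSurfaces = List[Surface]

def bump(x: float, y: float) -> float:
    """Toy topography: a single rounded bump centred at the origin."""
    return 1.0 / (1.0 + x * x + y * y)

# "Heat distortion" toy example: the surface flattens a little at each time step.
series: SeriesOfSurfaces = [
    (lambda x, y, k=k: bump(x, y) * (1.0 - 0.1 * k)) for k in range(3)
]
for t, surf in enumerate(series):
    print(f"t={t}: z(0, 0) = {surf(0.0, 0.0):.2f}")
```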
Mountains analyses the following basic data types:
History of versions:
Digital Surf launched their first (2D) surface analysis software package in 1990 for MS-DOS ("DigiProfil 1.0"), then their first 3D surface analysis package in 1991 for Macintosh II ("DigiSurface 1.0").
Version 1.0 of MountainsMap was launched in September 1996, introducing the change of name after the editor moved to Windows from the MS-DOS and Macintosh platforms.
History of versions:
Version 5.0 introduced the management of multi-layer images. It was a move to confocal microscopy (analysis of topography+color as a single object, as opposed to separate objects in former versions) and to SPM image analysis (analysis of topography+current, topography+phase, topography+force as a single image). Version 6.0 completed the specialization of the platform per instrument type. For Version 6.0 the company teamed with a group of alpinists to launch the new version at the summit of the Makalu mountain. A special logo was created for this marketing event. The expedition was successful, and Alexia Zuberer, a French and Swiss mountaineer, was then the first Swiss woman to reach the summit of the Makalu, Sandrine de Choudens, a French PhD in chemistry, being the first French woman to succeed. Version 7.0 was unveiled in September 2012 at the European Microscopy Congress in Manchester, UK. It expanded the list of instruments supported, in particular with new Scanning electron microscope 3D reconstruction software and hyperspectral data analysis (such as Raman and FT-IR hyperspectral cube analysis).
History of versions:
Version 7.2 (February 2015) introduces near real-time 3D topography reconstruction for scanning electron microscopes. Version 7.3 (January 2016) adds fast colorization of scanning electron microscope images based on object-oriented image segmentation.
History of versions:
Version 7.4 (January 2017) offers 3D reconstruction from a single SEM image, and enhanced 3D printing. Version 8.0 (June 2019) is the successor of both Mountains 7.4 and SPIP 6.7 software packages ("SPIP" standing for "Scanning Probe Image Processor") after the acquisition by Digital Surf of the Danish company Image Metrology A/S, the editor of SPIP. Version 8.0 also introduces the analysis of free form surfaces, called "Shells" in the software.
History of versions:
Version 9.0 (June 2021) completes the "shells" (free form surfaces) with surface texture analysis adapted from the ISO 25178 parameters already calculated on standard surfaces. It also comes with a new product line, "MountainsSpectral", dedicated to the chemical mapping of elements in both 2D (images of chemical composition) and 3D (multi-channel tomography of chemical composition), with applications such as FIB-SEM EDX (X-ray analysis coupled with focused ion beam tomography) or confocal Raman (Raman analysis in confocal microscopy). Version 10.0 (June 2023) completes the list of supported microscopes with light microscopes, and introduces new features such as CAD comparison of free-form surfaces ("shells") and aspheric lens analysis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pax genes**
Pax genes:
In evolutionary developmental biology, Paired box (Pax) genes are a family of genes coding for tissue specific transcription factors containing an N-terminal paired domain and usually a partial, or in the case of four family members (PAX3, PAX4, PAX6 and PAX7), a complete homeodomain to the C-terminus. An octapeptide as well as a Pro-Ser-Thr-rich C terminus may also be present. Pax proteins are important in early animal development for the specification of specific tissues, as well as during epimorphic limb regeneration in animals capable of such.
Pax genes:
The paired domain was initially described in 1987 as the "paired box" in the Drosophila protein paired (prd; P06601).
Groups:
Within the mammalian family, there are four well defined groups of Pax genes: Pax group 1 (Pax 1 and 9), Pax group 2 (Pax 2, 5 and 8), Pax group 3 (Pax 3 and 7) and Pax group 4 (Pax 4 and 6). Two more families, Pox-neuro and Pax-α/β, exist in basal bilaterian species. Orthologous genes exist throughout the Metazoa, including extensive study of the ectopic expression in Drosophila using murine Pax6. The two rounds of whole-genome duplication in vertebrate evolution are responsible for the creation of as many as 4 paralogs for each Pax protein.
Members:
PAX1 has been associated in mice with the development of vertebrae and embryo segmentation, and there is some evidence that this is also true in humans. It transcribes a 440 amino acid protein from 4 exons and 1,323 bps in humans.
PAX2 has been identified with kidney and optic nerve development. It transcribes a 417 amino acid protein from 11 exons and 4,261 bps in humans. Mutation of PAX2 in humans has been associated with renal-coloboma syndrome as well as oligomeganephronia.
PAX3 has been identified with ear, eye and facial development. It transcribes a 479 amino acid protein in humans. Mutations in it can cause Waardenburg syndrome. PAX3 is frequently expressed in melanomas and contributes to tumor cell survival.
PAX4 has been identified with pancreatic islet beta cells. It transcribes a 350 amino acid protein from 9 exons and 2,010 bps in humans.
PAX5 has been identified with neural and spermatogenesis development and b-cell differentiation. It transcribes a 391 amino acid protein from 10 exons and 3,644bps in humans.
PAX6 (eyeless) is the most researched and appears throughout the literature as a "master control" gene for the development of eyes and sensory organs, certain neural and epidermal tissues as well as other homologous structures, usually derived from ectodermal tissues.
PAX7 has been possibly associated with myogenesis. It transcribes a protein of 520 amino acids from 8 exons and 2,260 bps in humans. PAX7 directs postnatal renewal and propagation of myogenic satellite cells, but not their specification.
PAX8 has been associated with thyroid specific expression. It transcribes a protein of 451 amino acids from 11 exons and 2,526bps in humans.
PAX9 has found to be associated with a number of organ and other skeletal developments, particularly teeth. It transcribes a protein of 341 amino acids from 4 exons and 1,644bps in humans. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Packet switching**
Packet switching:
In telecommunications, packet switching is a method of grouping data into packets that are transmitted over a digital network. Packets are made of a header and a payload. Data in the header is used by networking hardware to direct the packet to its destination, where the payload is extracted and used by an operating system, application software, or higher layer protocols. Packet switching is the primary basis for data communications in computer networks worldwide.
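As a toy illustration of the header/payload split described above, the sketch below packs a destination address, source address and payload length into a small binary header; the field layout is invented for the example and does not correspond to any real protocol:

```python
import struct

# Hypothetical 8-byte header: 4-byte destination id, 2-byte source id,
# 2-byte payload length, followed by the payload bytes.
HEADER_FMT = "!IHH"
HEADER_SIZE = struct.calcsize(HEADER_FMT)

def build_packet(dst: int, src: int, payload: bytes) -> bytes:
    """Prepend the header used by networking hardware to route the packet."""
    return struct.pack(HEADER_FMT, dst, src, len(payload)) + payload

def parse_packet(packet: bytes):
    """Split a received packet back into header fields and payload."""
    dst, src, length = struct.unpack(HEADER_FMT, packet[:HEADER_SIZE])
    return dst, src, packet[HEADER_SIZE:HEADER_SIZE + length]

pkt = build_packet(dst=42, src=7, payload=b"hello")
print(parse_packet(pkt))  # (42, 7, b'hello') -- the header routes, the payload is delivered
```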
Packet switching:
During the early 1960s, Polish-American engineer Paul Baran developed a concept he called "distributed adaptive message block switching", with the goal of providing a fault-tolerant, efficient routing method for telecommunication messages as part of a research program at the RAND Corporation, funded by the United States Department of Defense. His ideas contradicted then-established principles of pre-allocation of network bandwidth, exemplified by the development of telecommunications in the Bell System. The new concept found little resonance among network implementers until the independent work of British computer scientist Donald Davies at the National Physical Laboratory in 1965. Davies coined the modern term packet switching and inspired numerous packet switching networks in the decade following, including the incorporation of the concept into the design of the ARPANET in the United States and the CYCLADES network in France. The ARPANET and CYCLADES were the primary precursor networks of the modern Internet.
Concept:
A simple definition of packet switching is: The routing and transferring of data by means of addressed packets so that a channel is occupied during the transmission of the packet only, and upon completion of the transmission the channel is made available for the transfer of other traffic.
Concept:
Packet switching allows delivery of variable bit rate data streams, realized as sequences of packets, over a computer network which allocates transmission resources as needed using statistical multiplexing or dynamic bandwidth allocation techniques. As they traverse networking hardware, such as switches and routers, packets are received, buffered, queued, and retransmitted (stored and forwarded), resulting in variable latency and throughput depending on the link capacity and the traffic load on the network. Packets are normally forwarded by intermediate network nodes asynchronously using first-in, first-out buffering, but may be forwarded according to some scheduling discipline for fair queuing, traffic shaping, or for differentiated or guaranteed quality of service, such as weighted fair queuing or leaky bucket. Packet-based communication may be implemented with or without intermediate forwarding nodes (switches and routers). In case of a shared physical medium (such as radio or 10BASE5), the packets may be delivered according to a multiple access scheme.
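The store-and-forward, first-in first-out buffering described above can be sketched as a single-link queueing simulation; the link rate and packet sizes below are made-up values chosen only to show how queueing delay, and hence latency, grows with traffic load:

```python
from collections import deque

def simulate_fifo(arrivals, link_rate_bps):
    """arrivals: list of (arrival_time_s, size_bits); returns each packet's total delay in seconds."""
    queue = deque(sorted(arrivals))
    link_free_at = 0.0
    delays = []
    while queue:
        arrival, size = queue.popleft()
        start = max(link_free_at, arrival)              # buffered until the link is free (queueing delay)
        link_free_at = start + size / link_rate_bps     # store, then forward at the link rate
        delays.append(link_free_at - arrival)
    return delays

# Three 12,000-bit packets arriving 1 ms apart on a 1 Mbit/s link: delays grow as the queue builds.
print(simulate_fifo([(0.0, 12000), (0.001, 12000), (0.002, 12000)], 1_000_000))
```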
Concept:
Packet switching contrasts with another principal networking paradigm, circuit switching, a method which pre-allocates dedicated network bandwidth specifically for each communication session, each having a constant bit rate and latency between nodes. In cases of billable services, such as cellular communication services, circuit switching is characterized by a fee per unit of connection time, even when no data is transferred, while packet switching may be characterized by a fee per unit of information transmitted, such as characters, packets, or messages.
Concept:
A packet switch has four components: input ports, output ports, routing processor, and switching fabric.
History:
Invention and development The concept of switching small blocks of data was first explored independently by Paul Baran at the RAND Corporation during the early 1960s in the US and Donald Davies at the National Physical Laboratory (NPL) in the UK in 1965. During the late 1950s, the US Air Force established a wide area network for the Semi-Automatic Ground Environment (SAGE) radar defense system. Recognizing vulnerabilities in this network, the Air Force sought a system that might survive a nuclear attack to enable a response, thus diminishing the attractiveness of the first-strike advantage to enemies (see Mutual assured destruction). Baran developed the concept of distributed adaptive message block switching in support of the Air Force initiative. The concept was first presented to the Air Force in the summer of 1961 as briefing B-265, later published as RAND report P-2626 in 1962, and finally in report RM 3420 in 1964. Report P-2626 described a general architecture for a large-scale, distributed, survivable communications network. The work focused on three key ideas: use of a decentralized network with multiple paths between any two points, dividing user messages into message blocks, and delivery of these messages by store and forward switching. Davies independently developed a similar message routing concept and more detailed network design in 1965. He invented the term packet switching, and proposed building a commercial nationwide data network in the UK. He gave a talk on the proposal in 1966, after which a person from the Ministry of Defence (MoD) told him about Baran's work. Roger Scantlebury, a member of Davies' team, presented their work at the 1967 Symposium on Operating Systems Principles and suggested it to Larry Roberts for use in the ARPANET. Davies had chosen some of the same parameters for his original network design as did Baran, such as a packet size of 1024 bits. In 1966, Davies proposed that a network should be built at the laboratory to serve the needs of NPL and prove the feasibility of packet switching. To deal with packet permutations (due to dynamically updated route preferences) and with datagram losses (unavoidable when fast sources send to slow destinations), he assumed that "all users of the network will provide themselves with some kind of error control", thus inventing what came to be known as the end-to-end principle. After a pilot experiment in 1969, the NPL Data Communications Network began service in 1970. Leonard Kleinrock researched the application of queueing theory in the field of message switching for his doctoral dissertation at MIT in 1961–62 and published it as a book in 1964. In 1968, Lawrence Roberts contracted with Kleinrock to carry out theoretical work at UCLA to measure and model the performance of packet switching in the ARPANET, which underpinned the development of the network in the early 1970s. The NPL team also carried out simulation work on packet networks, including datagram networks. The French CYCLADES network was designed by Louis Pouzin in the early 1970s to study internetworking. It was the first to implement the end-to-end principle of Davies, and to make the hosts responsible for the reliable delivery of data on a packet-switched network, rather than this being a service of the network itself.
His team was thus the first to tackle the highly complex problem of providing user applications with a reliable virtual circuit service while using a best-effort service, an early contribution to what would become the Transmission Control Protocol (TCP). In May 1974, Vint Cerf and Bob Kahn described the Transmission Control Program, an internetworking protocol for sharing resources using packet-switching among the nodes. The specifications of the TCP were then published in RFC 675 (Specification of Internet Transmission Control Program), written by Vint Cerf, Yogen Dalal and Carl Sunshine in December 1974. This monolithic protocol was later layered as the Transmission Control Protocol, TCP, atop the Internet Protocol, IP.
History:
In the late 1970s and early 1980s, national and international public data networks emerged based on the X.25 protocol. X.25 is built on the concept of virtual circuits emulating traditional telephone connections. For a period in the late 1980s and early 1990s, the network engineering community was polarized over the implementation of competing protocol suites, commonly known as the Protocol Wars. It was unclear which of the OSI model and the Internet protocol suite would result in the best and most robust computer networks. Complementary metal–oxide–semiconductor (CMOS) VLSI (very-large-scale integration) technology led to the development of high-speed broadband packet switching during the 1980s–1990s.
History:
'Paternity' dispute Beginning in the mid-1990s, Leonard Kleinrock sought to be recognized as the "father of modern data networking". However, Kleinrock's claims that his work in the early 1960s originated the concept of packet switching and that this work was the source of the packet switching concepts used in the ARPANET are disputed by other Internet pioneers, including Robert Taylor, Paul Baran, and Donald Davies. Baran and Davies are recognized by historians and the U.S. National Inventors Hall of Fame for independently inventing the concept of digital packet switching used in modern computer networking including the Internet.
Connectionless and connection-oriented modes:
Packet switching may be classified into connectionless packet switching, also known as datagram switching, and connection-oriented packet switching, also known as virtual circuit switching. Examples of connectionless systems are Ethernet, Internet Protocol (IP), and the User Datagram Protocol (UDP). Connection-oriented systems include X.25, Frame Relay, Multiprotocol Label Switching (MPLS), and the Transmission Control Protocol (TCP).
Connectionless and connection-oriented modes:
In connectionless mode each packet is labeled with a destination address, source address, and port numbers. It may also be labeled with the sequence number of the packet. This information eliminates the need for a pre-established path to help the packet find its way to its destination, but means that more information is needed in the packet header, which is therefore larger. The packets are routed individually, sometimes taking different paths resulting in out-of-order delivery. At the destination, the original message may be reassembled in the correct order, based on the packet sequence numbers. Thus a virtual circuit carrying a byte stream is provided to the application by a transport layer protocol, although the network only provides a connectionless network layer service.
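Because the packets of a connectionless network may arrive out of order, the receiver uses the sequence numbers carried in the headers to reassemble the original message, roughly as in this hypothetical sketch:

```python
def reassemble(datagrams):
    """datagrams: iterable of (sequence_number, payload_bytes), possibly out of order."""
    # Sort by sequence number, then concatenate the payloads to recover the byte stream.
    return b"".join(payload for _, payload in sorted(datagrams))

# Independently routed packets delivered out of order:
received = [(2, b" switching"), (1, b"packet"), (3, b" works")]
print(reassemble(received))   # -> b'packet switching works'
```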
Connectionless and connection-oriented modes:
Connection-oriented transmission requires a setup phase to establish the parameters of communication before any packet is transferred. The signaling protocols used for setup allow the application to specify its requirements and discover link parameters. Acceptable values for service parameters may be negotiated. The packets transferred may include a connection identifier rather than address information and the packet header can be smaller, as it only needs to contain this code and any information, such as length, timestamp, or sequence number, which is different for different packets. In this case, address information is only transferred to each node during the connection setup phase, when the route to the destination is discovered and an entry is added to the switching table in each network node through which the connection passes. When a connection identifier is used, routing a packet requires the node to look up the connection identifier in a table. Connection-oriented transport layer protocols such as TCP provide a connection-oriented service by using an underlying connectionless network. In this case, the end-to-end principle dictates that the end nodes, not the network itself, are responsible for the connection-oriented behavior.
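A minimal sketch of the per-node switching table described above (the identifiers and port numbers are invented for illustration): during setup each node records how an incoming connection identifier maps to an outgoing port and identifier, so data packets need only carry the short identifier rather than full addressing information:

```python
# Built during the connection setup phase: (in_port, in_conn_id) -> (out_port, out_conn_id).
switching_table = {
    (1, 17): (3, 42),   # connection 17 arriving on port 1 leaves on port 3 as connection 42
    (2, 17): (3, 43),   # identifiers are only locally significant, so they may be rewritten per hop
}

def switch(in_port: int, conn_id: int) -> tuple:
    """Forward a connection-oriented packet by table lookup on its connection identifier."""
    return switching_table[(in_port, conn_id)]

print(switch(1, 17))   # -> (3, 42)
```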
Packet switching in networks:
Packet switching is used to optimize the use of the channel capacity available in digital telecommunication networks such as computer networks, to minimize the transmission latency (the time it takes for data to pass across the network), and to increase the robustness of communication.
Packet switching in networks:
Packet switching is used in the Internet and most local area networks. The Internet is implemented by the Internet Protocol Suite using a variety of link layer technologies. For example, Ethernet and Frame Relay are common. Newer mobile phone technologies (e.g., GSM, LTE) also use packet switching. Packet switching is associated with connectionless networking because, in these systems, no connection agreement needs to be established between communicating parties prior to exchanging data.
Packet switching in networks:
X.25 is a notable use of packet switching in that, despite being based on packet switching methods, it provides virtual circuits to the user. These virtual circuits carry variable-length packets. In 1978, X.25 provided the first international and commercial packet switching network, the International Packet Switched Service (IPSS). Asynchronous Transfer Mode (ATM) also is a virtual circuit technology, which uses fixed-length cell relay connection oriented packet switching.
Packet switching in networks:
Technologies such as Multiprotocol Label Switching (MPLS) and the Resource Reservation Protocol (RSVP) create virtual circuits on top of datagram networks. MPLS and its predecessors, as well as ATM, have been called "fast packet" technologies. MPLS, indeed, has been called "ATM without cells". Virtual circuits are especially useful in building robust failover mechanisms and allocating bandwidth for delay-sensitive applications.
Packet-switched networks:
The history of packet-switched networks can be divided into three overlapping eras: early networks before the introduction of X.25; the X.25 era when many postal, telephone, and telegraph (PTT) companies provided public data networks with X.25 interfaces; and the Internet era which initially competed with the OSI model.
Packet-switched networks:
Early networks Research into packet switching at the National Physical Laboratory (NPL) began with a proposal for a wide-area network in 1965, and a local-area network in 1966. ARPANET funding was secured in 1966 by Bob Taylor, and planning began in 1967 when he hired Larry Roberts. The NPL network, ARPANET, and SITA HLN became operational in 1969. Before the introduction of X.25 in 1976, about twenty different network technologies had been developed. Two fundamental differences involved the division of functions and tasks between the hosts at the edge of the network and the network core. In the datagram system, operating according to the end-to-end principle, the hosts have the responsibility to ensure orderly delivery of packets. In the virtual call system, the network guarantees sequenced delivery of data to the host. This results in a simpler host interface but complicates the network. The X.25 protocol suite uses this network type.
Packet-switched networks:
AppleTalk AppleTalk is a proprietary suite of networking protocols developed by Apple in 1985 for Apple Macintosh computers. It was the primary protocol used by Apple devices through the 1980s and 1990s. AppleTalk included features that allowed local area networks to be established ad hoc without the requirement for a centralized router or server. The AppleTalk system automatically assigned addresses, updated the distributed namespace, and configured any required inter-network routing. It was a plug-n-play system. AppleTalk implementations were also released for the IBM PC and compatibles, and the Apple IIGS. AppleTalk support was available in most networked printers, especially laser printers, some file servers and routers. AppleTalk support was terminated in 2009, replaced by TCP/IP protocols.
Packet-switched networks:
ARPANET The ARPANET was a progenitor network of the Internet and one of the first networks, along with ARPA's SATNET, to run the TCP/IP suite using packet switching technologies.
BNRNET BNRNET was a network which Bell-Northern Research developed for internal use. It initially had only one host but was designed to support many hosts. BNR later made major contributions to the CCITT X.25 project.
Cambridge Ring The Cambridge Ring was an experimental ring network developed at the Computer Laboratory, University of Cambridge. It operated from 1974 until the 1980s.
CompuServe CompuServe developed its own packet switching network, implemented on DEC PDP-11 minicomputers acting as network nodes that were installed throughout the US (and later, in other countries) and interconnected. Over time, the CompuServe network evolved into a complicated multi-tiered network incorporating ATM, Frame Relay, Internet Protocol (IP) and X.25 technologies.
Packet-switched networks:
CYCLADES The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. First demonstrated in 1973, it was developed to explore alternatives to the early ARPANET design and to support network research generally. It was the first network to use the end-to-end principle and make the hosts responsible for reliable delivery of data, rather than the network itself. Concepts of this network influenced later ARPANET architecture.
Packet-switched networks:
DECnet DECnet is a suite of network protocols created by Digital Equipment Corporation, originally released in 1975 in order to connect two PDP-11 minicomputers. It evolved into one of the first peer-to-peer network architectures, thus transforming DEC into a networking powerhouse in the 1980s. Initially built with three layers, it later (1982) evolved into a seven-layer OSI-compliant networking protocol. The DECnet protocols were designed entirely by Digital Equipment Corporation. However, DECnet Phase II (and later) were open standards with published specifications, and several implementations were developed outside DEC, including one for Linux.
Packet-switched networks:
DDX-1 DDX-1 was an experimental network from Nippon PTT. It mixed circuit switching and packet switching. It was succeeded by DDX-2.
Packet-switched networks:
EIN The European Informatics Network (EIN), originally called COST 11, was a project beginning in 1971 to link networks in Britain, France, Italy, Switzerland and Euratom. Six other European countries also participated in the research on network protocols. Derek Barber directed the project, and Roger Scantlebury led the UK technical contribution; both were from NPL. The contract for its implementation was awarded to an Anglo-French consortium led by the UK systems house Logica and Sesa and managed by Andrew Karney. Work began in 1973 and it became operational in 1976, including nodes linking the NPL network and CYCLADES. The transport protocol of the EIN was the basis of the one adopted by the International Networking Working Group. EIN was replaced by Euronet in 1979.
Packet-switched networks:
EPSS The Experimental Packet Switched Service (EPSS) was an experiment of the UK Post Office Telecommunications. It was the first public data network in the UK when it began operating in 1977. Ferranti supplied the hardware and software. The handling of link control messages (acknowledgements and flow control) was different from that of most other networks.
GEIS As General Electric Information Services (GEIS), General Electric was a major international provider of information services. The company originally designed a telephone network to serve as its internal (albeit continent-wide) voice telephone network.
In 1965, at the instigation of Warner Sinback, a data network based on this voice-phone network was designed to connect GE's four computer sales and service centers (Schenectady, New York, Chicago, and Phoenix) to facilitate a computer time-sharing service.
After going international some years later, GEIS created a network data center near Cleveland, Ohio. Very little has been published about the internal details of their network. The design was hierarchical with redundant communication links.
IPSANET IPSANET was a semi-private network constructed by I. P. Sharp Associates to serve their time-sharing customers. It became operational in May 1976.
IPX/SPX The Internetwork Packet Exchange (IPX) and Sequenced Packet Exchange (SPX) are Novell networking protocols from the 1980s derived from Xerox Network Systems' IDP and SPP protocols, respectively, which date back to the 1970s. IPX/SPX was used primarily on networks using the Novell NetWare operating systems.
Packet-switched networks:
Merit Network Merit Network, an independent nonprofit organization governed by Michigan's public universities, was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development. With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971 when an interactive host-to-host connection was made between the IBM mainframe systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit. In October 1972, connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host-to-host interactive connections, the network was enhanced to support terminal-to-host connections, host-to-host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet attached hosts, and eventually TCP/IP; additionally, public universities in Michigan joined the network. All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.
Packet-switched networks:
NPL In 1965, Donald Davies of the National Physical Laboratory (United Kingdom) designed and proposed a national commercial data network based on packet switching. The proposal was not taken up nationally but, in 1966, he designed a local network using "interface computers", today known as routers, to serve the needs of NPL and prove the feasibility of packet switching. By 1968 Davies had begun building the NPL network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions. In 1976, 12 computers and 75 terminal devices were attached, and more were added until the network was replaced in 1986. NPL and the ARPANET were the first two networks to use packet switching, and were interconnected in the early 1970s.
Packet-switched networks:
Octopus Octopus was a local network at Lawrence Livermore National Laboratory. It connected sundry hosts at the lab to interactive terminals and various computer peripherals including a bulk storage system.
Philips Research Philips Research Laboratories in Redhill, Surrey developed a packet switching network for internal use. It was a datagram network with a single switching node.
PUP PARC Universal Packet (PUP or Pup) was one of the two earliest internetworking protocol suites; it was created by researchers at Xerox PARC in the mid-1970s. The entire suite provided routing and packet delivery, as well as higher level functions such as a reliable byte stream, along with numerous applications. Further developments led to Xerox Network Systems (XNS).
Packet-switched networks:
RCP RCP was an experimental network created by the French PTT. It was used to gain experience with packet switching technology before the specification of TRANSPAC was frozen. RCP was a virtual-circuit network in contrast to CYCLADES which was based on datagrams. RCP emphasised terminal-to-host and terminal-to-terminal connection; CYCLADES was concerned with host-to-host communication. RCP influenced the X.25 specification, which was deployed on TRANSPAC and other public data networks.
Packet-switched networks:
RETD Red Especial de Transmisión de Datos (RETD) was a network developed by Compañía Telefónica Nacional de España. It became operational in 1972 and thus was the first public network.
Packet-switched networks:
SCANNET "The experimental packet-switched Nordic telecommunication network SCANNET was implemented in Nordic technical libraries in the 1970s, and it included first Nordic electronic journal Extemplo. Libraries were also among first ones in universities to accommodate microcomputers for public use in the early 1980s." SITA HLN SITA is a consortium of airlines. Its High Level Network (HLN) became operational in 1969 at about the same time as ARPANET. It carried interactive traffic and message-switching traffic. As with many non-academic networks, very little has been published about it.
Packet-switched networks:
SRCnet/SERCnet A number of computer facilities serving the Science Research Council (SRC) community in the United Kingdom developed beginning in the early 1970s. Each had its own star network (ULCC London, UMRCC Manchester, Rutherford Appleton Laboratory). There were also regional networks centred on Bristol (on which work was initiated in the late 1960s), followed in the mid-to-late 1970s by Edinburgh, the Midlands and Newcastle. These groups of institutions shared resources to provide better computing facilities than could be afforded individually. The networks were each based on one manufacturer's standards and were mutually incompatible and overlapping. In 1981, the SRC was renamed the Science and Engineering Research Council (SERC). In the early 1980s a standardisation and interconnection effort started, hosted on an expansion of the SERCnet research network and based on the Coloured Book protocols, later evolving into JANET.
Packet-switched networks:
Systems Network Architecture Systems Network Architecture (SNA) is IBM's proprietary networking architecture created in 1974. An IBM customer could acquire hardware and software from IBM and lease private lines from a common carrier to construct a private network.
Packet-switched networks:
Telenet Telenet was the first FCC-licensed public data network in the United States. Telenet was incorporated in 1973 and started operations in 1975. It was founded by Bolt Beranek & Newman with Larry Roberts as CEO as a means of making packet switching technology public. Telenet initially used a proprietary virtual connection host interface, but changed the host interface to X.25 and the terminal interface to X.29. It went public in 1979 and was then sold to GTE.
Packet-switched networks:
Tymnet Tymnet was an international data communications network headquartered in San Jose, CA that utilized virtual call packet switched technology and used X.25, SNA/SDLC, BSC and ASCII interfaces to connect host computers (servers) at thousands of large companies, educational institutions, and government agencies. Users typically connected via dial-up connections or dedicated asynchronous serial connections. The business consisted of a large public network that supported dial-up users and a private network business that allowed government agencies and large companies (mostly banks and airlines) to build their own dedicated networks. The private networks were often connected via gateways to the public network to reach locations not on the private network. Tymnet was also connected to dozens of other public networks in the U.S. and internationally via X.25/X.75 gateways.
Packet-switched networks:
XNS Xerox Network Systems (XNS) was a protocol suite promulgated by Xerox, which provided routing and packet delivery, as well as higher level functions such as a reliable stream, and remote procedure calls. It was developed from PARC Universal Packet (PUP).
Packet-switched networks:
X.25 era There were two kinds of X.25 networks. Some such as DATAPAC and TRANSPAC were initially implemented with an X.25 external interface. Some older networks such as TELENET and TYMNET were modified to provide an X.25 host interface in addition to older host connection schemes. DATAPAC was developed by Bell-Northern Research which was a joint venture of Bell Canada (a common carrier) and Northern Telecom (a telecommunications equipment supplier). Northern Telecom sold several DATAPAC clones to foreign PTTs including the Deutsche Bundespost. X.75 and X.121 allowed the interconnection of national X.25 networks. A user or host could call a host on a foreign network by including the DNIC of the remote network as part of the destination address.
Packet-switched networks:
AUSTPAC AUSTPAC was an Australian public X.25 network operated by Telstra. Started by Telecom Australia in the early 1980s, AUSTPAC was Australia's first public packet-switched data network and supported applications such as on-line betting, financial applications—the Australian Tax Office made use of AUSTPAC—and remote terminal access to academic institutions, who maintained their connections to AUSTPAC up until the mid-to-late 1990s in some cases. Access was via a dial-up terminal to a PAD, or by linking a permanent X.25 node to the network.
Packet-switched networks:
ConnNet ConnNet was a network operated by the Southern New England Telephone Company serving the state of Connecticut. Launched on March 11, 1985, it was the first local public packet-switched network in the United States.
Packet-switched networks:
Datanet 1 Datanet 1 was the public switched data network operated by the Dutch PTT Telecom (now known as KPN). Strictly speaking, Datanet 1 referred only to the network and the users connected via leased lines (using the X.121 DNIC 2041), but the name also referred to the public PAD service Telepad (using the DNIC 2049). And because the main Videotex service used the network and modified PAD devices as infrastructure, the name Datanet 1 was used for these services as well.
Packet-switched networks:
DATAPAC DATAPAC was the first operational X.25 network (1976). It covered major Canadian cities and was eventually extended to smaller centers.
Datex-P Deutsche Bundespost operated the Datex-P national network in Germany. The technology was acquired from Northern Telecom.
Eirpac Eirpac is the Irish public switched data network supporting X.25 and X.28. It was launched in 1984, replacing Euronet. Eirpac is run by Eircom.
Packet-switched networks:
Euronet Nine member states of the European Economic Community contracted with Logica and the French company SESA to set up a joint venture in 1975 to undertake the Euronet development, using X.25 protocols to form virtual circuits. It was to replace EIN and established a network in 1979 linking a number of European countries until 1984 when the network was handed over to national PTTs.
Packet-switched networks:
HIPA-NET Hitachi designed a private network system for sale as a turnkey package to multi-national organizations. In addition to providing X.25 packet switching, message switching software was also included. Messages were buffered at the nodes adjacent to the sending and receiving terminals. Switched virtual calls were not supported, but through the use of logical ports an originating terminal could have a menu of pre-defined destination terminals.
Packet-switched networks:
Iberpac Iberpac is the Spanish public packet-switched network, providing X.25 services. It was based on RETD, which had been operational since 1972. Iberpac was run by Telefonica.
IPSS In 1978, X.25 provided the first international and commercial packet-switching network, the International Packet Switched Service (IPSS).
Packet-switched networks:
JANET JANET was the UK academic and research network, linking all universities, higher education establishments, and publicly funded research laboratories following its launch in 1984. The X.25 network, which used the Coloured Book protocols, was based mainly on GEC 4000 series switches, and ran X.25 links at up to 8 Mbit/s in its final phase before being converted to an IP-based network in 1991. The JANET network grew out of the 1970s SRCnet, later called SERCnet.
Packet-switched networks:
PSS Packet Switch Stream (PSS) was the Post Office Telecommunications (later to become British Telecom) national X.25 network with a DNIC of 2342. British Telecom renamed PSS Global Network Service (GNS), but the PSS name has remained better known. PSS also included public dial-up PAD access, and various InterStream gateways to other services such as Telex.
REXPAC REXPAC was the nationwide experimental packet switching data network in Brazil, developed by the research and development center of Telebrás, the state-owned public telecommunications provider.
TRANSPAC TRANSPAC was the national X.25 network in France. It was developed locally at about the same time as DATAPAC in Canada. The development was done by the French PTT and influenced by the experimental RCP network. It began operation in 1978, and served commercial users and, after Minitel began, consumers.
UNINETT UNINETT was a wide-area Norwegian packet-switched network established through a joint effort between Norwegian universities, research institutions and the Norwegian Telecommunication administration. The original network was based on X.25; Internet protocols were adopted later.
VENUS-P VENUS-P was an international X.25 network that operated from April 1982 through March 2006. At its subscription peak in 1999, VENUS-P connected 207 networks in 87 countries.
Venepaq Venepaq is the national X.25 public network in Venezuela. It is run by Cantv and allows direct and dial-up connections. Venepaq provides nationwide access at low cost. It provides national and international access and allows connection from 19.2 to 64 kbit/s in direct connections, and 1200, 2400 and 9600 bit/s in dial-up connections.
Packet-switched networks:
Internet era When Internet connectivity was made available to anyone who could pay for an Internet service provider subscription, the distinctions between national networks blurred. The user no longer saw network identifiers such as the DNIC. Some older technologies such as circuit switching have resurfaced with new names such as fast packet switching. Researchers have created some experimental networks to complement the existing Internet.
Packet-switched networks:
CSNET The Computer Science Network (CSNET) was a computer network funded by the NSF that began operation in 1981. Its purpose was to extend networking benefits for computer science departments at academic and research institutions that could not be directly connected to ARPANET due to funding or authorization limitations. It played a significant role in spreading awareness of, and access to, national networking and was a major milestone on the path to the development of the global Internet.
Packet-switched networks:
Internet2 Internet2 is a not-for-profit United States computer networking consortium led by members from the research and education communities, industry, and government. The Internet2 community, in partnership with Qwest, built the first Internet2 Network, called Abilene, in 1998 and was a prime investor in the National LambdaRail (NLR) project. In 2006, Internet2 announced a partnership with Level 3 Communications to launch a brand new nationwide network, boosting its capacity from 10 to 100 Gbit/s. In October, 2007, Internet2 officially retired Abilene and now refers to its new, higher capacity network as the Internet2 Network.
Packet-switched networks:
NSFNET The National Science Foundation Network (NSFNET) was a program of coordinated, evolving projects sponsored by the National Science Foundation (NSF) beginning in 1985 to promote advanced research and education networking in the United States. NSFNET was also the name given to several nationwide backbone networks, operating at speeds of 56 kbit/s, 1.5 Mbit/s (T1), and 45 Mbit/s (T3), that were constructed to support NSF's networking initiatives from 1985–1995. Initially created to link researchers to the nation's NSF-funded supercomputing centers, through further public funding and private industry partnerships it developed into a major part of the Internet backbone.
Packet-switched networks:
NSFNET regional networks In addition to the five NSF supercomputer centers, NSFNET provided connectivity to eleven regional networks and through these networks to many smaller regional and campus networks in the United States. The NSFNET regional networks were: BARRNet, the Bay Area Regional Research Network in Palo Alto, California; CERFnet, California Education and Research Federation Network in San Diego, California, serving California and Nevada; CICNet, the Committee on Institutional Cooperation Network via the Merit Network in Ann Arbor, Michigan and later as part of the T3 upgrade via Argonne National Laboratory outside of Chicago, serving the Big Ten Universities and the University of Chicago in Illinois, Indiana, Michigan, Minnesota, Ohio, and Wisconsin; Merit/MichNet in Ann Arbor, Michigan serving Michigan, formed in 1966, still in operation as of 2023; MIDnet in Lincoln, Nebraska serving Arkansas, Iowa, Kansas, Missouri, Nebraska, Oklahoma, and South Dakota; NEARNET, the New England Academic and Research Network in Cambridge, Massachusetts, added as part of the upgrade to T3, serving Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont, established in late 1988, operated by BBN under contract to MIT, BBN assumed responsibility for NEARNET on 1 July 1993; NorthWestNet in Seattle, Washington, serving Alaska, Idaho, Montana, North Dakota, Oregon, and Washington, founded in 1987; NYSERNet, New York State Education and Research Network in Ithaca, New York; JVNCNet, the John von Neumann National Supercomputer Center Network in Princeton, New Jersey, serving Delaware and New Jersey; SESQUINET, the Sesquicentennial Network in Houston, Texas, founded during the 150th anniversary of the State of Texas; SURAnet, the Southeastern Universities Research Association network in College Park, Maryland and later as part of the T3 upgrade in Atlanta, Georgia serving Alabama, Florida, Georgia, Kentucky, Louisiana, Maryland, Mississippi, North Carolina, South Carolina, Tennessee, Virginia, and West Virginia, sold to BBN in 1994; and Westnet in Salt Lake City, Utah and Boulder, Colorado, serving Arizona, Colorado, New Mexico, Utah, and Wyoming.
Packet-switched networks:
National LambdaRail The National LambdaRail (NLR) was launched in September 2003. It was a 12,000-mile high-speed national computer network owned and operated by the US research and education community that ran over fiber-optic lines. It was the first transcontinental 10 Gigabit Ethernet network. It operated with an aggregate capacity of up to 1.6 Tbit/s and a 40 Gbit/s bitrate. NLR ceased operations in March 2014.
Packet-switched networks:
TransPAC, TransPAC2, and TransPAC3 TransPAC2 and TransPAC3 were continuations of the TransPAC project, a high-speed international Internet service connecting research and education networks in the Asia-Pacific region to those in the US. TransPAC is part of the NSF's International Research Network Connections (IRNC) program.
Packet-switched networks:
Very high-speed Backbone Network Service (vBNS) The Very high-speed Backbone Network Service (vBNS) came on line in April 1995 as part of a National Science Foundation (NSF) sponsored project to provide high-speed interconnection between NSF-sponsored supercomputing centers and select access points in the United States. The network was engineered and operated by MCI Telecommunications under a cooperative agreement with the NSF. By 1998, the vBNS had grown to connect more than 100 universities and research and engineering institutions via 12 national points of presence with DS-3 (45 Mbit/s), OC-3c (155 Mbit/s), and OC-12c (622 Mbit/s) links on an all OC-12c backbone, a substantial engineering feat for that time. The vBNS installed one of the first ever production OC-48c (2.5 Gbit/s) IP links in February 1999 and went on to upgrade the entire backbone to OC-48c. In June 1999 MCI WorldCom introduced vBNS+ which allowed attachments to the vBNS network by organizations that were not approved by or receiving support from NSF. After the expiration of the NSF agreement, the vBNS largely transitioned to providing service to the government. Most universities and research centers migrated to the Internet2 educational backbone. In January 2006, when MCI and Verizon merged, vBNS+ became a service of Verizon Business. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nickel selenide**
Nickel selenide:
Nickel selenide is the inorganic compound with the formula NiSe. As for many metal chalcogenides, the phase diagram for nickel(II) selenide is complicated. Two other selenides of nickel are known, NiSe2 with a pyrite structure, and Ni2Se3. Additionally, NiSe is usually nonstoichiometric and is often described with the formula Ni1−xSe, with 0 < x < 0.15. This material is a semi-conducting solid and can be obtained as a fine black powder, or as silvery larger crystals. Nickel(II) selenide is insoluble in all solvents, but can be degraded by strongly oxidizing acids.
Synthesis and structure:
Typically, NiSe is prepared by high temperature reaction of the elements. Such reactions typically afford mixed phase products. Milder methods have also been described using more specialised techniques such as reactions of the elements in liquid ammonia in a pressure vessel. Like many related materials, nickel(II) selenide adopts the nickel arsenide motif. In this structure, nickel is octahedral and the selenides are in trigonal prismatic sites. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Stream thrust averaging**
Stream thrust averaging:
In fluid dynamics, stream thrust averaging is a process used to convert three-dimensional flow through a duct into one-dimensional uniform flow. It makes the assumptions that the flow is mixed adiabatically and without friction. However, due to the mixing process, there is a net increase in the entropy of the system. Although there is an increase in entropy, the stream thrust averaged values are more representative of the flow than a simple average, as a simple average would violate the second law of thermodynamics.
Equations for a perfect gas:
Stream thrust: $F = \int \left(\rho \mathbf{V} \cdot d\mathbf{A}\right)\left(\mathbf{V} \cdot \mathbf{f}\right) + \int p \, d\mathbf{A} \cdot \mathbf{f}$.
Mass flow: $\dot{m} = \int \rho \mathbf{V} \cdot d\mathbf{A}$.
Stagnation enthalpy: $H = \frac{1}{\dot{m}} \int \left(\rho \mathbf{V} \cdot d\mathbf{A}\right)\left(h + \frac{|\mathbf{V}|^{2}}{2}\right)$.
For a perfect gas these averaged quantities combine into a quadratic equation for the stream-thrust-averaged velocity $\bar{U}$: $\bar{U}^{2}\left(1 - \frac{R}{2 C_p}\right) - \bar{U}\,\frac{F}{\dot{m}} + \frac{H R}{C_p} = 0$.
Solutions Solving for $\bar{U}$ yields two solutions. They must both be analyzed to determine which is the physical solution. One will usually be a subsonic root and the other a supersonic root. If it is not clear which value of velocity is correct, the second law of thermodynamics may be applied.
$\bar{\rho} = \frac{\dot{m}}{\bar{U} A}, \qquad \bar{p} = \frac{F}{A} - \bar{\rho}\,\bar{U}^{2}, \qquad \bar{h} = \frac{\bar{p}\, C_p}{\bar{\rho}\, R}$.
Second law of thermodynamics: $\Delta s = C_p \ln\!\left(\frac{\bar{T}}{T_1}\right) - R \ln\!\left(\frac{\bar{p}}{p_1}\right)$, where $\bar{T} = \bar{h}/C_p$.
The values $T_1$ and $p_1$ are unknown and may be dropped from the formulation. The value of entropy is not necessary, only that the value is positive: $\Delta s = C_p \ln\left(\bar{T}\right) - R \ln\left(\bar{p}\right)$.
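As a numerical sketch of this procedure (the inlet integrals below are made-up values, and air-like constants R = 287 J/(kg·K) and Cp = 1004.5 J/(kg·K) are assumed), both roots of the quadratic can be computed together with their entropy function, so the second law can be used to pick the physical state:

```python
import math

def stream_thrust_roots(F, mdot, H, A, R=287.0, cp=1004.5):
    """Return both candidate averaged states as (U, rho, p, h, s_fn).

    s_fn = cp*ln(h/cp) - R*ln(p) is the entropy up to additive constants; the
    physical root is the one whose entropy change relative to the inlet is positive.
    """
    a = 1.0 - R / (2.0 * cp)
    b = -F / mdot
    c = H * R / cp
    disc = math.sqrt(b * b - 4.0 * a * c)
    states = []
    for U in ((-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a)):
        rho = mdot / (U * A)           # continuity
        p = F / A - rho * U ** 2       # stream thrust (momentum)
        h = p * cp / (rho * R)         # perfect-gas state equation
        states.append((U, rho, p, h, cp * math.log(h / cp) - R * math.log(p)))
    return states

# Hypothetical duct-exit integrals: F [N], mass flow [kg/s], stagnation enthalpy [J/kg], area [m^2].
for U, rho, p, h, s_fn in stream_thrust_roots(F=14000.0, mdot=20.0, H=4.0e5, A=0.05):
    print(f"U = {U:6.1f} m/s, rho = {rho:5.3f} kg/m^3, p = {p:9.1f} Pa, h = {h:9.1f} J/kg, s-fn = {s_fn:7.1f}")
```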
One possible unreal solution for the stream thrust averaged velocity yields a negative entropy. Another method of determining the proper solution is to take a simple average of the velocity and determining which value is closer to the stream thrust averaged velocity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Condorcet winner criterion**
Condorcet winner criterion:
An electoral system satisfies the Condorcet winner criterion if it always chooses the Condorcet winner when one exists. The candidate who wins a majority of the vote in every head-to-head election against each of the other candidates – that is, a candidate preferred over each of the others by more voters – is the Condorcet winner, although Condorcet winners do not exist in all cases. It is sometimes simply referred to as the "Condorcet criterion", though it is very different from the "Condorcet loser criterion". Any voting method conforming to the Condorcet winner criterion is known as a Condorcet method. The Condorcet winner is the person who would win a two-candidate election against each of the other candidates in a plurality vote. For a set of candidates, the Condorcet winner is always the same regardless of the voting system in question, and can be discovered by using pairwise counting on voters' ranked preferences.
Condorcet winner criterion:
A Condorcet winner will not always exist in a given set of votes, which is known as Condorcet's voting paradox; however, there will always be a smallest group of candidates such that more voters prefer anyone in the group over anyone outside of the group in a head-to-head matchup, which is known as the Smith set. When voters identify candidates on a 1-dimensional, e.g., left-to-right axis and always prefer candidates closer to themselves, a Condorcet winner always exists. Real political positions are multi-dimensional, however, which can lead to circular societal preferences with no Condorcet winner. These terms are named after the 18th-century mathematician and philosopher Marie Jean Antoine Nicolas Caritat, the Marquis de Condorcet. The concept had previously been proposed by Ramon Llull in the 13th century, though this was not known until the 2001 discovery of his lost manuscripts.
Example:
Suppose a matrix of pairwise preferences exists for an election, in which the vertical axis labels indicate the runner and the horizontal axis labels indicate the opponent; the votes in a pairwise contest can be found by comparing the corresponding runner/opponent cells. For example, to calculate the number of votes won by B in a head-to-head contest against A, the middle cell of the leftmost column indicates that B wins 305 votes against A, while the corresponding top cell in the middle column indicates that A gets 186 votes against B; therefore, B beats A in a two-candidate, pairwise election with a total of 305 votes to 186. In this example matrix, B is the Condorcet winner, because they beat A and C in head-to-head elections.
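A short sketch of pairwise counting on ranked ballots (the helper and the small ballot profile below are illustrative, not the election tabulated above):

```python
from itertools import combinations

def condorcet_winner(ballots, candidates):
    """ballots: full rankings, most preferred first; returns the Condorcet winner or None."""
    pairwise_wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_over_b = sum(1 for ranking in ballots if ranking.index(a) < ranking.index(b))
        if a_over_b * 2 > len(ballots):
            pairwise_wins[a] += 1
        elif a_over_b * 2 < len(ballots):
            pairwise_wins[b] += 1
    # The Condorcet winner beats every other candidate head-to-head.
    return next((c for c in candidates if pairwise_wins[c] == len(candidates) - 1), None)

# B beats both A and C in head-to-head contests, so B is the Condorcet winner.
ballots = [("B", "A", "C")] * 4 + [("A", "B", "C")] * 3 + [("C", "B", "A")] * 2
print(condorcet_winner(ballots, ("A", "B", "C")))   # -> B
```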
Example:
Proof of violation of fairness Note that the quantity of votes need not strongly favor a candidate: to be the Condorcet winner, a candidate only needs to win every head-to-head contest, by however narrow a margin. In the above example, B beats the two other candidates while A beats only one. The total margin by which a Condorcet winner wins is irrelevant: a Condorcet winner could win each of its contests by just one vote, while another candidate might win more total votes but fewer contests. Condorcet-consistent voting systems can also, in rare cases, exhibit a preference cycle or paradox, although the circumstances that would cause this have not been known to occur in a governmental election using ranked ballots.
Relation to other criteria:
The Condorcet criterion implies the majority criterion; that is, any system that satisfies the former will satisfy the latter. It further implies the mutual majority criterion whenever there is a Condorcet winner; the Smith criterion, which is a generalization of the Condorcet criterion, always implies the mutual majority criterion; not all Condorcet methods pass the Smith criterion. The Condorcet criterion is incompatible with the later-no-harm criterion, the favorite betrayal criterion, the participation criterion, and the consistency criterion. The Condorcet criterion satisfies the following criterion with some similarity to independence of irrelevant alternatives: removing losing candidates from the election can't change the result whenever there is a Condorcet winner. In addition, adding candidates who are pairwise beaten by the Condorcet winner can't change the winner when there is a Condorcet winner. (These two properties are related to, and implied by, the Independence of Smith-dominated alternatives criterion.) The Condorcet winner criterion is different from the Condorcet loser criterion. A system complying with the Condorcet loser criterion will never allow a Condorcet loser to win; that is, a candidate who can be defeated in a head-to-head competition against each other candidate.
Compliance of methods:
Complying methods The following methods satisfy the Condorcet criterion: Black, Copeland, Dodgson's method, Kemeny-Young, Minimax, Nanson's method, Baldwin method, Ranked pairs (Tideman), Schulze, Smith/IRV, Smith/minimax, Tideman alternative method, and CPO-STV (see Category:Condorcet methods). Non-complying methods The following methods do not satisfy the Condorcet criterion (this statement requires qualification in some cases: see the individual subsections): Borda count, Bucklin voting, Instant-runoff voting, Majority judgment, Plurality voting, Approval voting, Range voting, and Coombs rule. Borda count Borda count is a voting system in which voters rank the candidates in an order of preference. Points are given for the position of a candidate in a voter's rank order. The candidate with the most points wins.
Compliance of methods:
The Borda count does not comply with the Condorcet criterion in the following case. Consider an election consisting of five voters and three alternatives, in which three voters prefer A to B and B to C, while two of the voters prefer B to C and C to A. The fact that A is preferred by three of the five voters to all other alternatives makes it a Condorcet Winner. However the Borda count awards 2 points for 1st choice, 1 point for second and 0 points for third. Thus, from three voters who prefer A, A receives 6 points (3 × 2), and 0 points from the other two voters, for a total of 6 points. B receives 3 points (3 × 1) from the three voters who prefer A to B to C, and 4 points (2 × 2) from the other two voters who prefer B to C to A. With 7 points, B is the Borda winner.
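The arithmetic above can be checked with a short sketch (illustrative helper; the 2/1/0 point weights are those stated):

```python
def borda_scores(ballots, points=(2, 1, 0)):
    """ballots: rankings, most preferred first; returns the total Borda points per candidate."""
    scores = {}
    for ranking in ballots:
        for place, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + points[place]
    return scores

# Three voters rank A > B > C and two rank B > C > A.
ballots = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2
print(borda_scores(ballots))   # -> {'A': 6, 'B': 7, 'C': 2}: B wins even though A is the Condorcet winner
```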
Compliance of methods:
Bucklin voting Bucklin is a ranked voting method that was used in some elections during the early 20th century in the United States. The election proceeds in rounds, one rank at a time, until a majority is reached. Initially, votes are counted for all candidates ranked in first place; if no candidate has a majority, votes are recounted with candidates in both first and second place. This continues until one candidate has a total number of votes that is more than half the number of voters. Because multiple candidates per vote may be considered at one time, it is possible for more than one candidate to achieve a majority.
Compliance of methods:
Instant-runoff voting Instant-runoff voting (IRV) is a method (like Borda count) which requires each voter to rank the candidates. Unlike the Borda count, IRV uses a process of elimination to assign each voter's ballot to their first choice among a dwindling list of remaining candidates until one candidate receives an outright majority of ballots. It does not comply with the Condorcet criterion. Consider, for example, the following vote count of preferences with three candidates {A, B, C}: A > B > C: 35 C > B > A: 34 B > C > A: 31In this case, B is preferred to A by 65 votes to 35, and B is preferred to C by 66 to 34, hence B is strongly preferred to both A and C. B must then win according to the Condorcet criterion. Using the rules of IRV, B is ranked first by the fewest voters and is eliminated, and then C wins with the transferred votes from B.
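The elimination rounds for this profile can be sketched as follows (an illustrative helper that assumes complete rankings and no exact ties):

```python
from collections import Counter

def irv_winner(ballots):
    """ballots: rankings, most preferred first; returns the instant-runoff winner."""
    remaining = {candidate for ranking in ballots for candidate in ranking}
    while True:
        # Each ballot counts for its highest-ranked candidate still in the race.
        tally = Counter(next(c for c in ranking if c in remaining) for ranking in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        remaining.discard(min(tally, key=tally.get))   # eliminate the candidate with the fewest votes

ballots = [("A", "B", "C")] * 35 + [("C", "B", "A")] * 34 + [("B", "C", "A")] * 31
print(irv_winner(ballots))   # -> C: B is eliminated first, and its votes transfer to C
```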
Compliance of methods:
Note that 65 voters, a majority, prefer either candidate B or C over A; since IRV passes the mutual majority criterion, it guarantees one of B and C must win. If candidate A, an irrelevant alternative under IRV, were not running, a majority of voters would consider B their 1st choice, and IRV's mutual majority compliance would thus ensure B wins; in this way, IRV's failure of the Condorcet criterion here also implies a spoiler effect. In cases where there is a Condorcet winner, and where IRV does not choose that candidate, a simple majority would by definition prefer the Condorcet winner over the IRV winner. This anomalous case was demonstrated in the 2009 mayoral election of Burlington, Vermont.
Compliance of methods:
Majority judgment Majority judgment is a system in which the voter gives all candidates a rating out of a predetermined set (e.g. {"excellent", "good", "fair", "poor"}). The winner of the election would be the candidate with the best median rating.
Consider an election with three candidates A, B, C.
Compliance of methods:
35 voters rate candidate A "excellent", B "fair", and C "poor"; 34 voters rate candidate C "excellent", B "fair", and A "poor"; and 31 voters rate candidate B "excellent", C "good", and A "poor". B is preferred to A by 65 votes to 35, and B is preferred to C by 66 to 34. Hence, B is the Condorcet winner. But B only gets the median rating "fair", while C has the median rating "good", and thereby C is chosen winner by Majority Judgment.
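The median ratings for this profile can be checked with a short sketch (the numeric encoding of the grades is only an implementation convenience):

```python
import statistics

GRADES = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}
NAMES = {v: k for k, v in GRADES.items()}

def median_grade(ratings):
    """ratings: list of grade names given to one candidate; returns the (lower) median grade."""
    return NAMES[statistics.median_low([GRADES[r] for r in ratings])]

profile = {
    "A": ["excellent"] * 35 + ["poor"] * 34 + ["poor"] * 31,
    "B": ["fair"] * 35 + ["fair"] * 34 + ["excellent"] * 31,
    "C": ["poor"] * 35 + ["excellent"] * 34 + ["good"] * 31,
}
print({c: median_grade(r) for c, r in profile.items()})   # -> A: 'poor', B: 'fair', C: 'good'
```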
Compliance of methods:
Plurality voting With plurality voting, the full set of voter preferences is not recorded on the ballot and so cannot be deduced therefrom (e.g. following a real election). Under the assumption that no tactical voting takes place, i.e. that all voters vote for their first preference, it is easy to construct an example which fails the Condorcet criterion.
Compliance of methods:
Consider an election in which 30% of the voters prefer candidate A to candidate B to candidate C and vote for A, 30% of the voters prefer C to A to B and vote for C, and 40% of the voters prefer B to A to C and vote for B. Candidate B would win (with 40% of the vote) even though A would be the Condorcet winner, beating B 60% to 40%, and C 70% to 30%.
Compliance of methods:
The assumption of no tactical voting is also used to evaluate other systems; however, the assumption may be far less plausible with plurality precisely because plurality accommodates no other way for subsidiary preferences to be taken into account.
Approval voting Approval voting is a system in which the voter can approve of (or vote for) any number of candidates on a ballot. Depending on which strategies voters use, the Condorcet criterion may be violated.
Consider an election in which 70% of the voters prefer candidate A to candidate B to candidate C, while 30% of the voters prefer C to B to A. If every voter votes for their top two favorites, Candidate B would win (with 100% approval) even though A would be the Condorcet winner.
Compliance of methods:
Note that this failure of Approval depends upon a particular generalization of the Condorcet criterion, which may not be accepted by all voting theorists. Other generalizations, such as a "votes-only" generalization that makes no reference to voter preferences, may result in a different analysis. Also, if all voters have perfect information about each other's motivations, and a single Condorcet winner exists, then that candidate will win under the Nash equilibrium.
Compliance of methods:
Range voting Range voting is a system in which the voter gives all candidates a score on a predetermined scale (e.g. from 0 to 9). The winner of the election is the candidate with the highest total score.
Compliance of methods:
Range voting doesn't satisfy the Condorcet criterion. Consider an election with three voters and three candidates, with range votes along the lines of the illustrative scores shown below. In pluralistic head-to-head elections, two voters prefer A to B, and all three prefer both A and B to C, making A the Condorcet winner. However, candidate B is the range winner with 12 points compared to 11 points for A.
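One possible set of scores consistent with the totals just described (an illustrative assumption on a 0–5 scale, not the source's original table):
Voter 1: A = 5, B = 4, C = 0
Voter 2: A = 5, B = 4, C = 0
Voter 3: A = 1, B = 4, C = 0
Voters 1 and 2 prefer A to B, and all three prefer both A and B to C, so A is the Condorcet winner, yet B totals 12 points to A's 11.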
Compliance of methods:
Range voting satisfies the Condorcet criterion as long as voters score candidates in the head-to-head elections as they do in the full election. For example, suppose three voters score three candidates (A, B, C) along the lines of the illustrative table below. The second candidate is the Condorcet winner and the winner of the normal election with 12 to 10 and 0 points. In the case where all voters are voting strategically, range voting is equivalent to approval voting, and any Condorcet winner will win because of the Nash equilibrium as mentioned above.
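An illustrative set of scores consistent with the stated totals (again an assumption, not the source's table):
Voter 1: A = 5, B = 4, C = 0
Voter 2: A = 5, B = 4, C = 0
Voter 3: A = 0, B = 4, C = 0
Scored pairwise exactly as in the full election, B beats A 12 to 10 and C 12 to 0, so B is both the Condorcet winner under this rule and the winner of the full election with 12 to 10 and 0 points.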
Compliance of methods:
However, if voters change their voting strategy from honest to strategic only for the head-to-head elections, then range voting does not satisfy Condorcet. For the same example, with strategic scoring in the head-to-head contests involving A (sketched below), A would win both contests; the Condorcet winner is then A, but B still wins the full election. Some, like the authors of rangevoting.org, say that defining the Condorcet criterion in this way makes the criterion not always desirable. If the winners of the head-to-head contests were determined by range voting rules rather than pluralistic voting, range voting would satisfy Condorcet.
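Continuing the illustrative scores above, strategic voters in a two-candidate contest give full marks to their preferred candidate and zero to the other. In A versus B, voters 1 and 2 score A = 5, B = 0 while voter 3 scores A = 0, B = 5, so A wins 10 to 5; in A versus C, A wins 10 to 0 (voter 3 being indifferent). Under strategic pairwise voting A is therefore the Condorcet winner, even though B still wins the full election with 12 points.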
Compliance of methods:
STAR voting STAR voting is a variant of score voting with an additional runoff step, where the most preferred of the top-two rated candidates wins. STAR voting does not satisfy the Condorcet criterion. Nevertheless, provided the Condorcet winner is one of the top two rated candidates, it wins the election by virtue of the runoff step being based solely on the ranked preferences between the two.
Compliance of methods:
The following example is an election with 100 voters and 3 candidates {A, B, C}: 45 voters voting: A=5 B=1 C=0 (Preferences: A>B>C) 40 voters voting: A=0 B=1 C=5 (Preferences: C>B>A) 15 voters voting: A=1 B=5 C=0 (Preferences: B>A>C)Under STAR voting, the total ratings are A=240, B=160, C=200, thus A and C are chosen as finalists and the ballots are read again, taking only preferences between the two finalists into account. From the given ballots, we have that 60% of voters prefer A over C, thus A is the STAR voting winner. However, from the preference information alone, we see that B is preferred over A by 55% of voters, and over C by 60% of voters. This makes B the Condorcet winner, and thus STAR voting has failed to elect it.
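The tallies above can be verified with a short sketch (illustrative helper names; the runoff step simply counts which finalist each voter scored higher):

```python
def star_winner(ballots):
    """ballots: list of dicts mapping candidate -> score; returns the STAR voting winner."""
    candidates = list(ballots[0])
    totals = {c: sum(b[c] for b in ballots) for c in candidates}
    a, b = sorted(candidates, key=totals.get, reverse=True)[:2]   # two highest-scoring finalists
    a_pref = sum(1 for v in ballots if v[a] > v[b])
    b_pref = sum(1 for v in ballots if v[b] > v[a])
    return a if a_pref >= b_pref else b

ballots = ([{"A": 5, "B": 1, "C": 0}] * 45
           + [{"A": 0, "B": 1, "C": 5}] * 40
           + [{"A": 1, "B": 5, "C": 0}] * 15)
print(star_winner(ballots))   # -> A: scores A=240, C=200 reach the runoff, and 60 of 100 voters prefer A to C
```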
Compliance of methods:
Proponents of STAR voting argue that this can be a superior result given the extra information available in rated ballots, as the ranking information alone is insufficient information to distinguish between a second-choice ranking being almost as good as a voter's favorite, or almost as bad as their worst case scenario. The above example illustrates such situation, in which the Condorcet winner can be rated very poorly by 85% of voters. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Effacement (histology)**
Effacement (histology):
Effacement is the shortening, or thinning, of a tissue. It can refer to cervical effacement. It can also refer to a process occurring in podocytes in nephrotic syndrome. In histopathology, it refers to the near obliteration of a tissue, as in the normal parenchyma of tissues in the case of some cancers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Unwired enterprise**
Unwired enterprise:
An unwired enterprise is an organization that extends and supports the use of traditional thick client enterprise applications to a variety of mobile devices and their users throughout the organization. The abiding characteristic is seamless universal mobile access to critical applications and business data.
Use:
By supporting mobile clients alongside more traditional desktop and laptop clients, an unwired enterprise attempts to increase productivity rates and speed the pace of many common business processes through anytime/anywhere accessibility. Furthermore, it is believed that supporting mobile access to enterprise applications can help facilitate cogent decision making by pulling business data in real time from server systems and making it available to the mobile workforce at the decision point. Even though the wireless network is quite ubiquitous, this type of client application requires built-in procedures to deal with any network unavailability seamlessly, without interfering with core application functionality. Pervasive broadband, simplified wireless integration, and common management systems are technology trends driving more organizations toward an unwired enterprise by lowering complexity and increasing ease of use. Unwired enterprises may include office environments in which workers are untethered from traditional desktop clients and conduct all business and communication from a wide variety of wireless devices. In the unwired enterprise, client platform and operating system are deemphasized as focus shifts away from platform homogeneity to fluid and expedient data exchange and technology agnosticism. Open standards industry initiatives such as the Open Handset Alliance are designed to help mobile technology vendors deliver on this promise. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
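As a purely illustrative sketch of the kind of "built-in procedures to deal with network unavailability" described above (nothing here comes from a specific product; the class and method names are hypothetical), a mobile client might buffer failed operations locally and replay them once connectivity returns:

```python
import json
import time
from collections import deque

class OfflineQueue:
    """Hypothetical client-side buffer: operations that fail because the
    network is unavailable are stored locally and replayed later, so the
    application's core functionality is not interrupted."""

    def __init__(self, send_func):
        self._send = send_func      # callable that raises ConnectionError when offline
        self._pending = deque()     # locally buffered operations

    def submit(self, payload):
        """Try to send immediately; fall back to the local queue on failure."""
        try:
            self._send(json.dumps(payload))
        except ConnectionError:
            self._pending.append(payload)   # keep working offline

    def flush(self):
        """Replay buffered operations once connectivity returns."""
        while self._pending:
            try:
                self._send(json.dumps(self._pending[0]))
                self._pending.popleft()
            except ConnectionError:
                time.sleep(1)               # still offline; try again later
                break
```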
**Ubiquitin-binding domain**
Ubiquitin-binding domain:
Ubiquitin-binding domains (UBDs) are protein domains that recognise and bind non-covalently to ubiquitin through protein-protein interactions. As of 2019, a total of 29 types of UBDs had been identified in the human proteome. Most UBDs bind to ubiquitin only weakly, with binding affinities in the low to mid μM range. Proteins containing UBDs are known as ubiquitin-binding proteins or sometimes as "ubiquitin receptors".
Structure:
Most UBDs are of small size (often less than 50 amino acids) and adopt many different protein folds from multiple fold classes, including all-alpha, all-beta, and alpha/beta folds. Many UBDs can be roughly classified into four broad categories: alpha-helical structures (in some cases as small as a single helix, as in the ubiquitin-interacting motif); zinc fingers; pleckstrin homology (PH) domains; and domains similar to those in ubiquitin-conjugating (also known as E2) enzymes. Other UBDs not fitting these categories can be SH3 domains, PFU domains, and other structures. Small helical structures are the most common, and examples include ubiquitin-associated domains (UBA), CUE domains, the ubiquitin-interacting motif (UIM), the motif interacting with ubiquitin (MIU), and the VHS protein domain.
Binding mechanism:
Many UBDs of the UBA family bind to ubiquitin via a hydrophobic patch centred on a particular isoleucine residue (the "Ile44 patch"), although binding to other surface patches has been observed, for example the "Ile36 patch". Zinc finger UBDs have a broader range of binding modes, including interactions with polar residues. Because many UBDs have a common or overlapping ubiquitin interaction surface, their interactions are often mutually exclusive; due to steric clashes, no more than one UBD can physically interact with the same Ile44-centered hydrophobic patch on a single ubiquitin molecule. Most UBDs described to date bind to monoubiquitin and thus do not show a linkage preference for the differently linked ubiquitin chains. There are, however, a handful of known linkage-specific UBDs that can specifically differentiate between the eight different ubiquitin linkages. This is important as the different linkage types are thought to signal for different molecular processes, and linkage-specific recognition of these chains ensures the appropriate cellular response. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Relief army**
Relief army:
A relief army had the task of relieving or freeing a besieged city, town, fortress or castle.
Relief army:
Often relief had to be sought by sending a messenger out through the siege lines to deliver a request for help from allies or friendly forces. Well-known examples include: the Gallic Wars of 52 BC (see Caesar's De Bello Gallico, Vercingetorix, and the Battle of Alesia); the Siege of Paris (885–86); and the Fall of Constantinople, where the relieving fleet arrived too late.
Relief army:
Second Turkish Siege of Vienna (1683) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**2-Amino-4-deoxychorismate synthase**
2-Amino-4-deoxychorismate synthase:
2-amino-4-deoxychorismate synthase (EC 2.6.1.86, ADIC synthase, 2-amino-2-deoxyisochorismate synthase, SgcD) is an enzyme with systematic name (2S)-2-amino-4-deoxychorismate:2-oxoglutarate aminotransferase. This enzyme catalyses the following chemical reaction: (2S)-2-amino-4-deoxychorismate + L-glutamate ⇌ chorismate + L-glutamine. This enzyme requires Mg2+. The reaction occurs physiologically in the reverse direction, i.e. from chorismate and L-glutamine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Management of scoliosis**
Management of scoliosis:
The management of scoliosis is complex and is determined primarily by the type of scoliosis encountered: syndromic, congenital, neuromuscular, or idiopathic. Treatment options for idiopathic scoliosis are determined in part by the severity of the curvature and skeletal maturity, which together help predict the likelihood of progression. Non-surgical treatment (conservative treatment) should be pro-active, with intervention performed early, as "Best results were obtained in 10-25 degrees scoliosis which is a good indication to start therapy before more structural changes within the spine establish." Treatment options have historically been categorized under the following types: observation, bracing, specialized physical therapy, and surgery. For adults, treatment usually focuses on relieving any pain, while physiotherapy and braces usually play only a minor role.
Management of scoliosis:
Painkilling medication, bracing, exercise, and surgery. Treatment for idiopathic scoliosis also depends upon the severity of the curvature, the spine's potential for further growth, and the risk that the curvature will progress.
Management of scoliosis:
Mild scoliosis (less than 30 degrees deviation) has traditionally been treated through observation only. However, the progression of adolescent idiopathic scoliosis has been linked to rapid growth, suggesting that observation alone is inadequate, as progression can rapidly occur during the pubertal growth spurt. Another study has further shown that the peak rate of growth during puberty can actually be higher in individuals with scoliosis than in those without, further exacerbating the issue of rapid worsening of the scoliosis curves. Moderately severe scoliosis (30–45 degrees) in a child who is still growing requires bracing. A 2013 study by Weinstein et al. found that rigid bracing significantly reduces worsening of curves in the 20–45 degree range and found that 58% of children receiving "observation only" progressed to surgical range. Recent guidelines published by the Society on Scoliosis Orthopaedic and Rehabilitation Treatment (SOSORT) in 2016 state that "the use of a brace is recommended in patients with evolutive idiopathic scoliosis above 25° during growth" based on a review of current scientific literature. Severe curvatures that rapidly progress may be treated surgically with spinal rod placement. Thus, early detection and early intervention prior to the pubertal growth spurt provide the greatest correction and prevention of progression to surgical range. In all cases, early intervention offers the best results. A growing body of scientific research testifies to the efficacy of specialized physical therapy treatment programs, which may include bracing.
Physical therapy:
Physical therapists and occupational therapists help those who have experienced an injury or illness regain or maintain the ability to participate in everyday activities. For those with scoliosis, a physical therapist and/or occupational therapist can provide assistance through assessment, intervention, and ongoing evaluation of the condition. This helps them manage physical symptoms and/or use compensatory techniques so that they can participate in daily activities like self-care, productivity, and leisure.
Physical therapy:
One intervention involves bracing. During the past several decades, a large variety of bracing devices have been developed for the treatment of scoliosis. Studies demonstrate that bracing, by counteracting sideways forces across the spinal joints, prevents further curvature of the spine in idiopathic scoliosis, and other studies have shown that braces can be worn by individuals with scoliosis during physical activities. It is important to note that scoliosis is not merely a lateral or sideways deformity but occurs in three dimensions, as a rotational component is often present.
Physical therapy:
Other interventions include postural strategies, such as posture training in sitting, standing, and sleeping positions, and the use of positioning supports such as pillows, wedges, rolls, and corsets. Adaptive and compensatory strategies are also employed to help individuals return to daily activities.
Physical therapy:
Scoliosis-Specific Exercises Scoliosis-specific exercises have been found to improve treatment outcomes when used in addition to bracing and other standards of care. Scoliosis-specific exercises include methods such as Schroth, which specifically aim to correct aesthetic differences and to strengthen muscles and connective tissue that may have atrophied as a result of scoliosis and asymmetric posture. Schroth exercises and other scoliosis-specific exercises should be used in conjunction with bracing and other standards of care, and performed under the guidance of a trained professional to ensure the exercises are effective and target the individual's curve pattern so that the correct muscles are strengthened. Strengthening the spinal muscles is a crucial preventive measure, because the muscles of the back are essential for supporting the spinal column and maintaining the spine's proper shape. Exercises that help improve the strength of the back muscles include rows and leg and arm extensions. Elastic resistance exercise may also help to slow the progression of spinal curvature by equalizing the strength of the torso muscles on each side of the body.
Physical therapy:
Self-care Disability caused by scoliosis, as well as physical limitations during recovery from treatment-related surgery, often affects an individual’s ability to perform self-care activities. One of the first treatments of scoliosis is the attempt to prevent further curvature of the spine. Depending on the size of the curvature, this is typically done in one of three ways: bracing, surgery, or postural positioning through customized cushioning. Stopping the progression of the scoliosis can prevent the loss of function in many activities of daily living by maintaining range of motion, preventing deformity of the rib cage, and reducing pain during activities such as bending or lifting.
Physical therapy:
Occupational therapists are often involved in the process of selection and fabrication of customized cushions. These individualized postural supports are used to maintain the current spinal curvature, or they can be adjusted to assist in the correction of the curvature. This type of treatment can help to maintain mobility for a wheelchair user by preventing the deformity of the rib cage and maintaining an active range of motion in the arms. For other self-care activities (such as dressing, bathing, grooming, personal hygiene, and feeding), several strategies can be used as a part of occupational therapy treatment. Environmental adaptations for bathing could include a bath bench, grab bars installed in the shower area, or a handheld shower nozzle. For activities such as dressing and grooming, various assistive devices and strategies can be used to promote independence. An occupational therapist may recommend a long-handled reacher that can be used to assist self-dressing by allowing a person to avoid painful movements such as bending over; a long-handled shoehorn can be used for putting on and removing shoes. Problems with activities such as cutting meat and eating can be addressed by using specialized cutlery, kitchen utensils, or dishes.
Physical therapy:
Productivity Productive activities include paid or unpaid work, household chores, school, and play. Recent studies in healthcare have led to the development of a variety of treatments to assist in the management of scoliosis, thereby maximizing productivity for people of all ages. Assistive technology has undergone dramatic changes over the past 20 years; the availability and quality of the technology have improved greatly. As a result of using assistive technology, functional changes may include improvements in abilities, performance in daily activities, participation levels, and quality of life. A common assistive technology intervention is specialized seating and postural control. For children with poor postural control, a comfortable seating system that provides them with the support needed to maintain a sitting position can be essential for raising their overall level of well-being. A child's well-being in a productive sense involves the ability to participate in classroom and play activities. Specialized wheelchair seating has been identified as the most common prescription in the management of scoliosis in teenagers with muscular dystrophy. With comfortable wheelchair seating, teenagers are able to participate in classroom activities for longer periods with less fatigue. By tilting the seating position 20° forward (toward the thighs), seating pressure is significantly redistributed, so sitting is more comfortable. If an office worker with scoliosis can sit for longer periods, increased work output is likely to occur and could improve quality of life. Tall, forward-sloping seats (or forward-sloping front sections of seats), paired where possible with a tall desk sloping the opposite way, can in general significantly reduce pain and the need to bend while working or studying, which is particularly important for braced, fragile, or tender backs. An open hip angle can also benefit lung volume and respiration. For those not using a wheelchair, bracing may be used to treat scoliosis, and lifestyle changes are made to accommodate proper use of spine braces.
Physical therapy:
Leisure Physical symptoms such as chest pains, back pains, shortness of breath, and limited spinal movement can hamper or preclude participation in leisure activities of a physical nature. The occupational therapist's role is to facilitate participation by helping the patient manage these symptoms.
Bracing is a common strategy recommended by an occupational therapist, in particular, for individuals engaging in sports and exercise. An OT is responsible for educating an individual on the advantages and disadvantages of different braces, proper ways to wear the brace, and the day-to-day care of the brace.
Physical therapy:
To help a person manage heart and lung symptoms, such as shortness of breath or chest pains, an occupational therapist can teach the individual energy conservation techniques. This includes scheduling routine breaks during the activity, as suitable for the individual. For example, an occupational therapist can recommend that a swimmer take breaks between laps to conserve energy. Adapting or modifying the exercise or sport is another way a person with scoliosis can continue to participate. Adapting the activity may change the difficulty of the sport or exercise; for example, it might mean taking breaks throughout an exercise. If a person with scoliosis is unable to participate in a sport or exercise, an OT can help the individual explore other physical activities that are suitable to his/her interests and capabilities. An occupational therapist and the person with scoliosis can also explore enjoyable and meaningful participation in the sport/exercise in another capacity, such as coaching or refereeing.
Bracing:
Bracing is most effective when the patient has bone growth remaining (is skeletally immature) and should aim both to prevent progression of the curve (and thus progression to surgery) and to reduce the scoliosis curve. Reduction of the curve is important because the natural history of idiopathic scoliosis suggests it can continue to progress at a rate of ~1 degree per year in adulthood, while the treatment results of bracing have been shown to hold over more than 15 years. In some juvenile cases, bracing has reduced curves significantly, from 40 degrees out of the brace to 18 degrees in it. Braces are sometimes prescribed for adults to relieve pain related to scoliosis. Bracing involves fitting the patient with a device that covers the torso; in some cases, it extends to the neck. The most commonly used brace is a TLSO, such as a Chêneau-type brace, a corset-like appliance that fits from armpits to hips and is custom-made from fiberglass or plastic. It is worn upwards of 18–23 hours a day, depending on the doctor's prescription, and applies pressure on the curves in the spine. The effectiveness of the brace depends not only on brace design, orthotist skill, patient compliance, and amount of wear per day, but also on the "stiffness" of the spine resulting from a shortened spinal cord and/or nerve tension, as evidenced by the force necessary (mean ~121 lbs) to physically correct scoliosis during spinal surgery. The typical use of braces for idiopathic scoliosis is to prevent progression to surgical range as well as to reduce the scoliotic curve of the spine, since spinal fusion surgery can reduce mobility due to fusion of the vertebrae while potentially increasing pain in the long term. For non-idiopathic scoliosis (e.g. neuromuscular, congenital) and patients with additional comorbidities (e.g. Marfan syndrome), spinal surgery may be required due to structural changes in the spine.
Bracing:
Indications for Scoliosis Bracing: Scoliosis professionals determine the proper bracing method for a patient after a complete clinical evaluation. The patient's growth potential, age, maturity, and scoliosis (Cobb angle, rotation, and sagittal profile) are also considered. Immature patients who present with Cobb angles less than 20 degrees should be closely monitored and proactively treated based on their risk of progression, as surgery can be prevented with early intervention of conservative treatment. Immature patients who present with Cobb angles of 20 degrees to 29 degrees should be braced according to the risk of progression by considering age, Cobb angle increase over a six-month period, Risser sign, and clinical presentation. Immature patients who present with Cobb angles greater than 30 degrees should be braced. However, these are guidelines and not every patient will fit into this table. For example, an immature patient with a 17-degree Cobb angle and significant thoracic rotation or flatback could be considered for nighttime bracing. On the opposite end of the growth spectrum, a 29-degree Cobb angle and a Risser sign three or four might not need to be braced because there is reduced potential for progression. Surgery is indicated by the Society on Scoliosis Orthopaedic and Rehabilitation Treatment (SOSORT) at 45 degrees to 50 degrees and by the Scoliosis Research Society (SRS) at a Cobb angle of 45 degrees. SOSORT uses the 45-degree to 50-degree threshold as a result of the well-documented, plus or minus five degrees measurement error that can occur while measuring Cobb angles.
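To summarize the Cobb-angle bands described above, here is a minimal, purely illustrative Python sketch (the function name and return strings are made up, and it deliberately ignores the rotation, sagittal profile, Risser sign, and progression-rate factors that real clinical decisions weigh):

```python
def bracing_indication(cobb_angle_deg, skeletally_immature):
    """Illustrative summary of the Cobb-angle bands described above.

    Not a clinical tool: age, Risser sign, rotation, sagittal profile and
    six-month progression are weighed by the treating clinician, and
    individual patients may fall outside these bands.
    """
    if not skeletally_immature:
        return "bracing is generally less effective after skeletal maturity"
    if cobb_angle_deg < 20:
        return "monitor closely and treat proactively based on progression risk"
    if cobb_angle_deg < 30:
        return "brace according to progression risk (age, Risser sign, 6-month change)"
    if cobb_angle_deg < 45:
        return "bracing indicated"
    return "surgical threshold per SRS (45 degrees) / SOSORT (45-50 degrees)"


# Example: an immature patient with a 32-degree curve
print(bracing_indication(32, skeletally_immature=True))  # bracing indicated
```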
Bracing:
Scoliosis braces are usually comfortable for the patient, especially when well designed and well fitted, and after the 7- to 10-day break-in period. A well-fitted and functioning scoliosis brace provides comfort when it supports the deformity and redirects the body into a more corrected and normal physiological position. The Scoliosis Research Society's recommendations for bracing include curves progressing to larger than 25°, curves presenting between 30 and 45°, Risser sign 0, 1, or 2 (an X-ray measurement of a pelvic growth area), and less than six months from the onset of menses in girls.
Bracing:
Progressive scolioses exceeding 25° Cobb angle in the pubertal growth spurt should be treated with a pattern-specific brace like the Chêneau brace and its derivatives, with an average brace-wearing time of 16 hours/day (23 hours/day assures the best possible result).
Bracing:
The current standard for brace construction is CAD/CAM technology. With the help of this technology, it has been possible to standardize pattern-specific brace treatment. Severe mistakes in brace construction are largely ruled out with the help of these systems. This technology also eliminates the need to make a plaster cast for brace construction. Measurements can be taken anywhere and are simple to obtain (not comparable to plaster casting). Available CAD/CAM braces include the Regnier-Chêneau brace, the Rigo-System-Chêneau brace (RSC brace), the Silicon Valley Brace, and the Gensingen brace; braces can and should be customized to fit the individual's curve pattern and reduce the curve as much as possible, as immediate in-brace correction has been shown to be associated with better treatment outcomes. Many patients prefer the "Chêneau light" brace, as it has good in-brace corrections reported in the international literature and is easier to wear than other braces in use today. However, this brace is not available for all curve patterns.
Bracing:
Prior to 2013, the efficacy of bracing had not been definitively demonstrated in randomised clinical studies, with more limited studies giving inconsistent conclusions. In 2013, the Bracing in Adolescent Idiopathic Scoliosis Trial (BrAIST) published results establishing the benefits of bracing in adolescents with idiopathic scoliosis. In the randomized cohort, 72% of the group instructed to wear a brace for 18 hours per day, versus 48% of the observation group, kept curve progression below 50 degrees, the proxy used for not requiring surgery. Additionally, the results suggested that the more a patient wore the brace, the better the result.
Bracing:
Casting In progressive infantile and sometimes juvenile scoliosis, a plaster jacket applied early may be used instead of a brace. It has been proven possible to permanently correct cases of infantile idiopathic scoliosis by applying a series of plaster casts (EDF: elongation, derotation, flexion) on a specialized frame under corrective traction, which helps to "mould" the infant's soft bones and work with their growth spurts. This method was pioneered by UK scoliosis specialist Min Mehta. EDF casting is now the only clinically known nonsurgical method of complete correction in progressive infantile scoliosis. Complete correction may be obtained for curves less than 50° if the treatment begins before the second year of life.
Surgery:
Surgery is usually recommended by orthopedists for curves with a high likelihood of progression (i.e., greater than 45 to 50° of magnitude), curves that would be cosmetically unacceptable as an adult, curves in patients with spina bifida and cerebral palsy that interfere with sitting and care, and curves that affect physiological functions such as breathing. Surgery for scoliosis is performed by a surgeon specializing in spine surgery. For various reasons, it is usually impossible to completely straighten a scoliotic spine, but in most cases, significant corrections are achieved. The two main types of surgery are: Anterior fusion: This surgical approach is through an incision at the side of the chest wall.
Surgery:
Posterior fusion: This surgical approach is through an incision on the back and involves the use of metal instrumentation to correct the curve.One or both of these surgical procedures may be needed. The surgery may be done in one or two stages and, on average, takes four to eight hours. A Cochrane review could not draw conclusions on how effective surgical interventions were when compared to non-surgical interventions in patients with adolescent idiopathic scoliosis.
Surgery:
Spinal fusion with instrumentation Spinal fusion is the most widely performed surgery for scoliosis. In this procedure, bone [either harvested from elsewhere in the body (autograft) or from a donor (allograft)] is grafted to the vertebrae so that when they heal, they form one solid bone mass and the vertebral column becomes rigid. This prevents worsening of the curve, at the expense of some spinal movement. This can be performed from the anterior (front) aspect of the spine by entering the thoracic or abdominal cavities, or more commonly, performed from the back (posterior). A combination may be used in more severe cases, though the modern pedicle screw system has largely negated the need for this. In recent years, all-screw systems have become the gold-standard technique for adolescent idiopathic scoliosis. Pedicle screws achieve better fixation of the vertebral column and have better biomechanical properties than previous techniques, thus enabling greater correction of the curve in all planes. Pedicle screw-only posterior spinal fusion may improve major curve correction at two years among patients with adolescent idiopathic scoliosis (AIS) as compared to hybrid instrumentation (proximal hooks with distal pedicle screws) (65% versus 46%), according to a retrospective, matched-cohort study. The prospective cohorts were matched to the retrospective cohorts according to patient age, fusion levels, Lenke curve type, and operative method. The two groups were not significantly different in regard to age, Lenke AIS curve type, or Risser grade. The numbers of fused vertebrae were significantly different (11.7±1.6 for the pedicle screw group versus 13.0±1.2 for the hybrid group). This study's results may be biased due to the pedicle screw group's being analyzed prospectively versus retrospective analysis of the hybrid instrumentation group.
Surgery:
In general, modern spinal fusions have good outcomes with high degrees of correction and low rates of failure and infection. However, a systematic review of PubMed papers in 2008 concluded that "Scoliosis surgery has a varying but high rate of complications", although the non-standardised data on complications were difficult to assess and incomplete. Patients with fused spines and permanent implants tend to have normal lives with unrestricted activities when they are younger; it remains to be seen whether those who have been treated with the newer surgical techniques develop problems as they age.
Surgery:
Thoracoplasty A complementary surgical procedure a surgeon may recommend is called thoracoplasty (also called costoplasty). This is a procedure to reduce the rib hump that affects most scoliosis patients with a thoracic curve. A rib hump is evidence of some rotational deformity of the spine. Thoracoplasty may also be performed to obtain bone grafts from the ribs instead of the pelvis, regardless of whether a rib hump is present. Thoracoplasty can be performed as part of a spinal fusion or as an entirely separate surgery.
Surgery:
Thoracoplasty is the removal (or resection) of typically four to six segments of adjacent ribs that protrude. Each segment is one to two inches long. The surgeon decides which ribs to resect based on either their prominence or their likelihood to be realigned by correction of the curvature alone. The ribs grow back straight.
Surgery:
Thoracoplasty has risks, such as increased pain in the rib area during recovery or reduced pulmonary function (10–15% is typical) following surgery. This impairment can last anywhere from a few months to two years. Because thoracoplasty may lengthen the duration of surgery, patients may also lose more blood or develop complications from the prolonged anesthesia. A more significant, though far less common, risk is the surgeon might inadvertently puncture the pleura, a protective coating over the lungs. This could cause blood or air to drain into the chest cavity, hemothorax or pneumothorax, respectively.
Surgery:
Surgery without fusion for growing children Implants that aim to delay spinal fusion and allow more spinal growth in young children are the gold standard for surgical treatment of early onset scoliosis. Surgery without fusion can be divided into three principles: distraction of the entire spine, compression of a short segment of the spine, and guided-growth techniques. Distraction-based systems include Vertical Expandable Prosthetic Titanium Ribs (VEPTR) and growing rods. The concept uses distraction to create additional soft-tissue space between the vertebrae for bone to later grow into. Its universal application was driven by traditional growing rods, which required repeated invasive surgeries every 6–12 months to sustain growth via distraction. Nowadays developed countries only use MAGEC (MAGnetic Expansion Control) rods to lengthen the spine non-invasively. In contrast, developing and under-developed countries still use traditional growing rods, which require invasive surgery every 6–12 months, because of the high initial cost associated with procurement of MAGEC rods. Compression-based systems include tethering using a flexible rope-like implant and are relatively new to receive FDA approval. Guided-growth techniques include SHILLA (named after a hotel in Korea, where the concept was initiated). SHILLA has the advantage of being a one-time surgery and is technologically less demanding than the MAGEC rod. However, there are still two major disadvantages of using SHILLA: loss of correction and the need for osteotomies. The failure of most of these standalone techniques has shown that the concept of "one size fits all" is not applicable to the surgical management of EOS. Therefore, newer concepts employing two or more of the above philosophies, i.e. various combinations of distraction-based, guided-growth, and compression-based approaches, might be more suitable and, biomechanically speaking, a more optimal surgical intervention. One such combination currently used for surgery is active apex correction (APC). It is a hybrid of guided-growth and compression-based management of deformity. The technique consists of replacing the apical fusion (of traditional SHILLA) with unilateral compression (via pedicle screws or any other means) on the convex side. The latest clinical results on APC presented by spine researchers Aakash Agarwal and Alaaedldin Azmi Ahmad show good clinical outcomes with no economic barrier to use of the technology.
Surgery:
Complications The risk of undergoing surgery for scoliosis was estimated in 2008 to be variable but with a high rate of complications. Possible complications include inflammation of the soft tissue or deep inflammatory processes, breathing impairments, bleeding, and nerve injuries. It is not yet clear what to expect from spine surgery in the long term. Given that the signs and symptoms of spinal deformity cannot be changed by surgical intervention, surgery remains primarily a cosmetic indication, especially in patients with adolescent idiopathic scoliosis, the most common form of scoliosis, with curves not exceeding 80°. However, the cosmetic effects of surgery are not necessarily stable. For spinal fusion surgery on AIS cases with instrumentation attached using pedicle screws, complication rates were reported in 2011 as transient neurological injuries between 0% and 1.5%, a pedicle fracture rate of 0.24%, screw malposition of 1.5% when assessed by radiography and 6% when assessed by CT scans (though these patients were asymptomatic and did not require screw revision), and screw loosening noted in 0.76% of patients. For surgery without fusion in growing children, a substantial percentage of patients undergoing the SHILLA technique experience loss of correction via crankshafting or adding-on (e.g., distal migration). In addition, the need for osteotomies on the concave side carries the potential for severe complications. For MAGEC rods, higher distraction magnitudes generate higher distraction forces, which, in combination with off-axis loading (exemplified by "growth marks"), result in wear and breakage of the MAGEC rod's components.
Surgery:
After-surgery care Pain medication In the event of surgery to correct scoliosis, pain medications and anesthesia will be administered. Before the surgery, the patient will receive anesthesia. With adults, the anesthesia will be administered through an IV in the antecubital region of the arm. With young children, however, the child will be asked to breathe in nitrous oxide, or laughing gas. Because needles can be frightening for a young child, the nitrous oxide will put them to sleep so the anesthesiologist can then insert the IV in order to give them the anesthesia. After the surgery, the patient will most likely be given morphine. Until the patient is ready to take medicine by mouth, medication will be delivered through an IV. Morphine is the most common pain medicine used after scoliosis surgery, and is often administered through a patient-controlled analgesia (PCA) system. The PCA system allows the patient to push a button when they are feeling pain, and the PCA will deliver the drug into the IV and then into the body. To prevent overdoses, there is a limit on the number of times a patient can push the button; if a patient pushes the button too often, the PCA will reject the request.
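As a purely illustrative toy model of the lockout behaviour just described (the interval, method names, and return messages are invented for this sketch and have nothing to do with real pump configuration), the rejection logic might look like this:

```python
import time

class PCAPump:
    """Toy model of the lockout behaviour described above.

    The lockout interval and the notion of a 'dose' are placeholders for
    illustration; real pumps are configured and supervised clinically.
    """

    def __init__(self, lockout_seconds=600):
        self.lockout_seconds = lockout_seconds
        self._last_dose_time = None   # monotonic timestamp of the last delivered dose

    def press_button(self):
        """Deliver a dose only if the lockout interval has elapsed."""
        now = time.monotonic()
        if (self._last_dose_time is not None
                and now - self._last_dose_time < self.lockout_seconds):
            return "request rejected: lockout interval not yet elapsed"
        self._last_dose_time = now
        return "dose delivered"
```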
Surgery:
Bowel and bladder function For the patient's bladder control, a catheter will be inserted so that a patient can urinate without having to move. A catheter is inserted because the patient will not have much free movement to be able to get up and walk to the bathroom. The most common type of catheter used after major surgeries is an indwelling Foley catheter. The indwelling Foley catheter is most often put in the urethra, with a tube leading into a drainage bag. Once the catheter is inserted into the urethra, a balloon is blown up inside the bladder in order to keep it from falling out. The balloon allows the catheter to remain inside the urethra until the patient is able to get up and go to the bathroom on their own. The drainage bag is connected to the side of the bed, and must be changed or emptied out once it is full.
Surgery:
Bowel control can vary from patient to patient. The combination of no food, very little fluid, and many prescription drugs can cause many patients to become constipated. The body is used to a normal diet and a regular pattern of excreting waste, and interrupting that pattern can cause bowel problems. This constipation can be resolved in a couple of ways. The first and most common is to administer a rectal suppository. A rectal suppository is administered through the anus into the rectum. Suppositories are bullet-shaped and contain medicine that helps restore normal bowel function. Once inserted, the wax-like casing is designed to melt, releasing the medicine into the body. If the suppository does not work, a laxative may be continued at home to keep the bowels functioning.
Surgery:
Diet When first returning home after surgery, a nutritious diet is necessary to keep the body operating correctly. Junk food is not a good idea, as grease and sugar can disrupt bowel regularity. Fruit, vegetables, and juices will be a vital part of the diet. Food and drink will be limited for the patient after surgery. Because the bowels are not fully active after anesthesia, clear water and ice may be the only acceptable things to ingest. After the digestive tract is back up to speed, soft food and drink such as pudding, soup broth, and orange juice are acceptable. Very dark urine with a strong odor means that the person is most likely dehydrated and needs more fluids; for the urine to become pale or clear, the patient will need to drink plenty of water. Juices such as prune juice are a healthy option, and prune juice also helps with constipation, a common problem after surgery. Whole grains should be added to the diet, as they are broken down more easily by the body than processed grains and flour, which also contribute to constipation.
History:
In 1962, Paul Harrington introduced a metal spinal system of instrumentation that assisted with straightening the spine, as well as holding it rigid while fusion took place. The original (now obsolete) Harrington rod operated on a ratchet system, attached by hooks to the spine at the top and bottom of the curvature, that when cranked would distract, or straighten, the curve. The Harrington rod represented a major advance in the field, as it obviated the need for prolonged casting, allowing patients greater mobility in the postoperative period and significantly reducing the quality-of-life burden of fusion surgery. Additionally, as the first system to apply instrumentation directly to the spine, the Harrington rod was the precursor to most modern spinal instrumentation systems. A major shortcoming of the Harrington method was that it failed to produce a posture wherein the skull would be in proper alignment with the pelvis, and it did not address rotational deformity. As a result, unfused parts of the spine would try to compensate for this in the effort to stand up straight. As the person aged, there would be increased wear and tear, early-onset arthritis, disc degeneration, muscular stiffness, and pain, with eventual reliance on painkillers, further surgery, inability to work full-time, and disability. "Flatback" became the medical name for a related complication, especially for those who had lumbar scoliosis. In the 1960s, the gold standard for idiopathic scoliosis was a posterior approach using a single Harrington rod. Post-operative recovery involved bed rest, casts, and braces. Poor results became apparent over time. In the 1970s, an improved technique was developed using two rods and wires attached at each level of the spine. This segmented instrumentation system allowed patients to become mobile soon after surgery. In the 1980s, Cotrel-Dubousset instrumentation improved fixation and addressed sagittal imbalance and rotational defects unresolved by the Harrington rod system. This technique used multiple hooks with rods to give stronger fixation in three dimensions, usually eliminating the need for post-operative bracing. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |