**Life on Titan**
Life on Titan:
Whether there is life on Titan, the largest moon of Saturn, is currently an open question and a topic of scientific assessment and research. Titan is far colder than Earth, but of all the places in the Solar System, Titan is the only place besides Earth known to have liquids in the form of rivers, lakes, and seas on its surface. Its thick atmosphere is chemically active and rich in carbon compounds. On the surface there are small and large bodies of both liquid methane and ethane, and it is likely that there is a layer of liquid water under its ice shell. Some scientists speculate that these liquid mixes may provide prebiotic chemistry for living cells different from those on Earth.
Life on Titan:
In June 2010, scientists analyzing data from the Cassini–Huygens mission reported anomalies in the atmosphere near the surface which could be consistent with the presence of methane-producing organisms, but may alternatively be due to non-living chemical or meteorological processes. The Cassini–Huygens mission was not equipped to look directly for micro-organisms or to provide a thorough inventory of complex organic compounds.
Chemistry:
Titan's consideration as an environment for the study of prebiotic chemistry or potentially exotic life stems in large part from the diversity of the organic chemistry that occurs in its atmosphere, driven by photochemical reactions in its outer layers. The following chemicals have been detected in Titan's upper atmosphere by Cassini's mass spectrometer: As mass spectrometry identifies the atomic mass of a compound but not its structure, additional research is required to identify the exact compound that has been detected. Where the compounds have been identified in the literature, their chemical formula has been replaced by their name above. The figures in Magee (2009) involve corrections for high-pressure background. Other compounds believed to be indicated by the data and associated models include ammonia, polyynes, amines, ethylenimine, deuterium hydride, allene, 1,3-butadiene and any number of more complex chemicals in lower concentrations, as well as carbon dioxide and limited quantities of water vapour.
Surface temperature:
Due to its distance from the Sun, Titan is much colder than Earth. Its surface temperature is about 94 K (−179 °C, or −290 °F). At these temperatures, water ice—if present—does not melt, evaporate or sublime, but remains solid. Because of the extreme cold and also because of the lack of carbon dioxide (CO2) in the atmosphere, scientists such as Jonathan Lunine have viewed Titan less as a likely habitat for extraterrestrial life than as an experiment for examining hypotheses on the conditions that prevailed prior to the appearance of life on Earth. Even though the usual surface temperature on Titan is not compatible with liquid water, calculations by Lunine and others suggest that meteor strikes could create occasional "impact oases"—craters in which liquid water might persist for hundreds of years or longer, which would enable water-based organic chemistry. However, Lunine does not rule out life in an environment of liquid methane and ethane, and has written about what the discovery of such a life form (even if very primitive) would imply about the prevalence of life in the universe.
Surface temperature:
Past hypotheses about the temperature: In the 1970s, astronomers found unexpectedly high levels of infrared emissions from Titan. One possible explanation was that the surface was warmer than expected, due to a greenhouse effect. Some estimates of the surface temperature even approached temperatures in the cooler regions of Earth. There was, however, another possible explanation for the infrared emissions: Titan's surface was very cold, but the upper atmosphere was heated due to absorption of ultraviolet light by molecules such as ethane, ethylene and acetylene. In September 1979, Pioneer 11, the first space probe to conduct fly-by observations of Saturn and its moons, sent data showing Titan's surface to be extremely cold by Earth standards, and far below the temperatures generally associated with planetary habitability.
Surface temperature:
Future temperature: Titan may become warmer in the future. Five to six billion years from now, as the Sun becomes a red giant, surface temperatures could rise to ~200 K (−70 °C), high enough for stable oceans of a water–ammonia mixture to exist on its surface. As the Sun's ultraviolet output decreases, the haze in Titan's upper atmosphere will be depleted, lessening the anti-greenhouse effect on its surface and enabling the greenhouse effect created by atmospheric methane to play a far greater role. These conditions together could create an environment agreeable to exotic forms of life, and would persist for several hundred million years. That was sufficient time for simple life to evolve on Earth, although the presence of ammonia on Titan could cause the same chemical reactions to proceed more slowly.
Absence of surface liquid water:
The lack of liquid water on Titan's surface was cited by NASA astrobiologist Andrew Pohorille in 2009 as an argument against life there. Pohorille considers that water is important not only as the solvent used by "the only life we know" but also because its chemical properties are "uniquely suited to promote self-organization of organic matter". He has questioned whether prospects for finding life on Titan's surface are sufficient to justify the expense of a mission that would look for it.
Possible subsurface liquid water:
Laboratory simulations have led to the suggestion that enough organic material exists on Titan to start a chemical evolution analogous to what is thought to have started life on Earth. While the analogy assumes the presence of liquid water for longer periods than is currently observable, several hypotheses suggest that liquid water from an impact could be preserved under a frozen isolation layer. It has also been proposed that ammonia oceans could exist deep below the surface; one model suggests an ammonia–water solution as much as 200 km deep beneath a water ice crust, conditions that, "while extreme by terrestrial standards, are such that life could indeed survive". Heat transfer between the interior and upper layers would be critical in sustaining any sub-surface oceanic life. Detection of microbial life on Titan would depend on its biogenic effects. For example, the atmospheric methane and nitrogen could be examined for biogenic origin. Data published in 2012, obtained from NASA's Cassini spacecraft, have strengthened evidence that Titan likely harbors a layer of liquid water under its ice shell.
Formation of complex molecules:
Titan is the only known natural satellite (moon) in the Solar System that has a fully developed atmosphere that consists of more than trace gases. Titan's atmosphere is thick, chemically active, and is known to be rich in organic compounds; this has led to speculation about whether chemical precursors of life may have been generated there. The atmosphere also contains hydrogen gas, which is cycling through the atmosphere and the surface environment, and which living things comparable to Earth methanogens could combine with some of the organic compounds (such as acetylene) to obtain energy.
Formation of complex molecules:
The Miller–Urey experiment and several following experiments have shown that with an atmosphere similar to that of Titan and the addition of UV radiation, complex molecules and polymer substances like tholins can be generated. The reaction starts with dissociation of nitrogen and methane, forming hydrogen cyanide and acetylene. Further reactions have been studied extensively. In October 2010, Sarah Hörst of the University of Arizona reported finding the five nucleotide bases—building blocks of DNA and RNA—among the many compounds produced when energy was applied to a combination of gases like those in Titan's atmosphere. Hörst also found amino acids, the building blocks of protein. She said it was the first time nucleotide bases and amino acids had been found in such an experiment without liquid water being present. In April 2013, NASA reported that complex organic chemicals could arise on Titan, based on studies simulating the atmosphere of Titan. In June 2013, polycyclic aromatic hydrocarbons (PAHs) were detected in the upper atmosphere of Titan. Research has suggested that polyimine could readily function as a building block in Titan's conditions. Titan's atmosphere produces significant quantities of hydrogen cyanide, which readily polymerizes into forms that can capture light energy in Titan's surface conditions. What happens to Titan's hydrogen cyanide is not yet known: while it is abundant in the upper atmosphere where it is created, it is depleted at the surface, suggesting that some reaction is consuming it.
Hypotheses:
Hydrocarbons as solvents: Although all living things on Earth (including methanogens) use liquid water as a solvent, it is conceivable that life on Titan might instead use a liquid hydrocarbon, such as methane or ethane. Water is a stronger solvent than hydrocarbons; however, water is more chemically reactive, and can break down large organic molecules through hydrolysis. A life-form whose solvent was a hydrocarbon would not face the risk of its biomolecules being destroyed in this way. Titan appears to have lakes of liquid ethane or liquid methane on its surface, as well as rivers and seas, which some scientific models suggest could support hypothetical non-water-based life.
Hypotheses:
It has been speculated that life could exist in the liquid methane and ethane that form rivers and lakes on Titan's surface, just as organisms on Earth live in water. Such hypothetical creatures would take in H2 in place of O2, react it with acetylene instead of glucose, and produce methane instead of carbon dioxide. By comparison, some methanogens on Earth obtain energy by reacting hydrogen with carbon dioxide, producing methane and water.
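The metabolisms being compared can be written as balanced reactions; the stoichiometry below is standard chemistry rather than something given explicitly in the text.

```latex
\mathrm{C_2H_2} + 3\,\mathrm{H_2} \longrightarrow 2\,\mathrm{CH_4} \quad\text{(hypothetical Titan metabolism)} \\
\mathrm{CO_2} + 4\,\mathrm{H_2} \longrightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O} \quad\text{(terrestrial methanogenesis)}
```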
Hypotheses:
In 2005, astrobiologists Chris McKay and Heather Smith predicted that if methanogenic life is consuming atmospheric hydrogen in sufficient volume, it will have a measurable effect on the mixing ratio in the troposphere of Titan. The effects predicted included a level of acetylene much lower than otherwise expected, as well as a reduction in the concentration of hydrogen itself. Evidence consistent with these predictions was reported in June 2010 by Darrell Strobel of Johns Hopkins University, who analysed measurements of hydrogen concentration in the upper and lower atmosphere. Strobel found that the hydrogen concentration in the upper atmosphere is so much larger than near the surface that the physics of diffusion leads to hydrogen flowing downwards at a rate of roughly 10^25 molecules per second. Near the surface the downward-flowing hydrogen apparently disappears. Another paper released the same month showed very low levels of acetylene on Titan's surface. Chris McKay agreed with Strobel that the presence of life, as suggested in McKay's 2005 article, is a possible explanation for the findings about hydrogen and acetylene, but also cautioned that other explanations are currently more likely: namely the possibility that the results are due to human error, to a meteorological process, or to the presence of some mineral catalyst enabling hydrogen and acetylene to react chemically. He noted that such a catalyst, one effective at −178 °C (95 K), is presently unknown and would in itself be a startling discovery, though less startling than the discovery of an extraterrestrial life form. The June 2010 findings gave rise to considerable media interest, including a report in the British newspaper The Telegraph, which spoke of clues to the existence of "primitive aliens".
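For scale, the downward flux quoted above corresponds to only a few tens of grams of molecular hydrogen per second; the short conversion below is plain arithmetic rather than a figure from the source.

```python
# Convert the quoted downward H2 flux into a mass rate (simple unit conversion).
AVOGADRO = 6.022e23          # molecules per mole
M_H2_G_PER_MOL = 2.016       # molar mass of molecular hydrogen

flux_molecules_per_s = 1e25  # figure quoted above
mass_rate_g_per_s = flux_molecules_per_s / AVOGADRO * M_H2_G_PER_MOL
print(f"~{mass_rate_g_per_s:.0f} g of H2 per second")   # ~33 g/s
```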
Hypotheses:
Cell membranes: A hypothetical cell membrane capable of functioning in liquid methane was modeled in February 2015. The proposed chemical base for these membranes is acrylonitrile, which has been detected on Titan. Called an "azotosome" ('nitrogen body'), a name formed from "azote", the French word for nitrogen, and "soma", Greek for body, it lacks the phosphorus and oxygen found in phospholipids on Earth but contains nitrogen. Despite the very different chemical structure and external environment, its properties are surprisingly similar, including self-assembly into sheets, flexibility, and stability. However, according to computer simulations, azotosomes could not form under the weather conditions found on Titan. An analysis of Cassini data, completed in 2017, confirmed substantial amounts of acrylonitrile in Titan's atmosphere.
Hypotheses:
Comparative habitability: In order to assess the likelihood of finding any sort of life on various planets and moons, Dirk Schulze-Makuch and other scientists have developed a planetary habitability index which takes into account factors including characteristics of the surface and atmosphere, availability of energy, solvents and organic compounds. Using this index, based on data available in late 2011, the model suggests that Titan has the highest current habitability rating of any known world other than Earth.
Hypotheses:
Titan as a test case: While the Cassini–Huygens mission was not equipped to provide evidence for biosignatures or complex organics, it showed an environment on Titan that is similar, in some ways, to ones theorized for the primordial Earth. Scientists think that the atmosphere of early Earth was similar in composition to the current atmosphere on Titan, with the important exception of a lack of water vapor on Titan. Many hypotheses have been developed that attempt to bridge the step from chemical to biological evolution.
Hypotheses:
Titan is presented as a test case for the relation between chemical reactivity and life in a 2007 report on life's limiting conditions prepared by a committee of scientists under the United States National Research Council. The committee, chaired by John Baross, considered that "if life is an intrinsic property of chemical reactivity, life should exist on Titan. Indeed, for life not to exist on Titan, we would have to argue that life is not an intrinsic property of the reactivity of carbon-containing molecules under conditions where they are stable..." David Grinspoon, one of the scientists who in 2005 proposed that hypothetical organisms on Titan might use hydrogen and acetylene as an energy source, has mentioned the Gaia hypothesis in the context of discussions about life on Titan. He suggests that, just as Earth's environment and its organisms have evolved together, the same thing is likely to have happened on other worlds with life on them. In Grinspoon's view, worlds that are "geologically and meteorologically alive are much more likely to be biologically alive as well".
Hypotheses:
Panspermia or independent origin: An alternative explanation for life's hypothetical existence on Titan has been proposed: if life were to be found on Titan, it could have originated from Earth in a process called panspermia. It is theorized that large asteroid and cometary impacts on Earth's surface have caused hundreds of millions of fragments of microbe-laden rock to escape Earth's gravity. Calculations indicate that a number of these would encounter many of the bodies in the Solar System, including Titan. On the other hand, Jonathan Lunine has argued that any living things in Titan's cryogenic hydrocarbon lakes would need to be so different chemically from Earth life that it would not be possible for one to be the ancestor of the other. In Lunine's view, the presence of organisms in Titan's lakes would mean a second, independent origin of life within the Solar System, implying that life has a high probability of emerging on habitable worlds throughout the cosmos.
Planned and proposed missions:
The proposed Titan Mare Explorer mission, a Discovery-class lander that would splash down in a lake, "would have the possibility of detecting life", according to astronomer Chris Impey of the University of Arizona. The planned Dragonfly rotorcraft mission is intended to land on solid ground and relocate many times. Dragonfly will be New Frontiers program Mission #4. Its instruments will study how far prebiotic chemistry may have progressed. Dragonfly will carry equipment to study the chemical composition of Titan's surface, and to sample the lower atmosphere for possible biosignatures, including hydrogen concentrations.
**Cohesion (chemistry)**
Cohesion (chemistry):
In chemistry and physics, cohesion (from Latin cohaesiō 'cohesion, unity'), also called cohesive attraction or cohesive force, is the action or property of like molecules sticking together, being mutually attractive. It is an intrinsic property of a substance that is caused by the shape and structure of its molecules, which makes the distribution of surrounding electrons irregular when molecules get close to one another, creating electrical attraction that can maintain a microscopic structure such as a water drop. Cohesion allows for surface tension, creating a "solid-like" state upon which light-weight or low-density materials can be placed.
Cohesion (chemistry):
Water, for example, is strongly cohesive as each molecule may make four hydrogen bonds to other water molecules in a tetrahedral configuration. This results in a relatively strong Coulomb force between molecules. In simple terms, the polarity (a state in which a molecule is oppositely charged on its poles) of water molecules allows them to be attracted to each other. The polarity is due to the electronegativity of the oxygen atom: oxygen is more electronegative than the hydrogen atoms, so the electrons they share through the covalent bonds are more often closer to oxygen than to hydrogen. These are called polar covalent bonds, covalent bonds between atoms that thus become oppositely charged. In the case of a water molecule, the hydrogen atoms carry positive charges while the oxygen atom has a negative charge. This charge polarization within the molecule allows it to align with adjacent molecules through strong intermolecular hydrogen bonding, rendering the bulk liquid cohesive. Van der Waals gases such as methane, however, have weak cohesion due only to van der Waals forces that operate by induced polarity in non-polar molecules.
Cohesion (chemistry):
Cohesion, along with adhesion (attraction between unlike molecules), helps explain phenomena such as meniscus, surface tension and capillary action.
Cohesion (chemistry):
Mercury in a glass flask is a good example of the effects of the ratio between cohesive and adhesive forces. Because of its high cohesion and low adhesion to the glass, mercury does not spread out to cover the bottom of the flask, and if enough is placed in the flask to cover the bottom, it exhibits a strongly convex meniscus, whereas the meniscus of water is concave. Mercury will not wet the glass, unlike water and many other liquids, and if the glass is tipped, it will 'roll' around inside.
**Consensus error grid**
Consensus error grid:
The consensus error grid (also known as the Parkes error grid) was developed as a new tool for evaluating the accuracy of a blood glucose meter. In recent times, the consensus error grid has been used increasingly by blood glucose meter manufacturers in their clinical studies. It was published in August 2000 by Joan L. Parkes, Stephen L. Slatin, Scott Pardo, and Barry H. Ginsberg. The guidelines for ISO15197:2013 specify the usage of the consensus error grid for evaluation of blood glucose monitoring systems.
**Hesperadin**
Hesperadin:
Hesperadin is an aurora kinase inhibitor.
Hesperadin:
The small molecule inhibits chromosome alignment and segregation by limiting the function of the mitotic kinases Aurora B and Aurora A. Hesperadin causes cells to enter anaphase much faster, sometimes before the chromosomes are properly bi-oriented. Hesperadin, like other mitotic inhibitors, limits and sometimes can stop the process of mitosis in cells. For this reason, some have considered hesperadin's potential as a cancer-preventing drug. Hesperadin works as an inhibitor, attaching to the active sites of the Aurora A and Aurora B kinases.
**Proton-coupled folate transporter**
Proton-coupled folate transporter:
The proton-coupled folate transporter is a protein that in humans is encoded by the SLC46A1 gene. The major physiological roles of PCFTs are in mediating the intestinal absorption of folate (Vitamin B9), and its delivery to the central nervous system.
Structure:
PCFT is located on chromosome 17q11.2 and consists of five exons encoding a protein with 459 amino acids and a MW of ~50 kDa. PCFT is highly conserved, sharing 87% identity with the mouse and rat PCFT and retaining more than 50% amino acid identity with the frog (XP415815) and zebrafish (AAH77859) proteins. Structurally, there are twelve transmembrane helices with the N- and C-termini directed to the cytoplasm and a large internal loop that divides the molecule in half. There are two glycosylation sites (N58, N68) and a disulfide bond connecting residue C66 in the 1st external loop to C298 in the 4th. Neither glycosylation nor the disulfide bond is essential for function. Residues have been identified that play a role in proton-coupling, proton binding, folate binding and oscillation of the carrier between its conformational states. PCFT forms oligomers and some of the linking residues have been identified.
Regulation:
PCFT-mediated transport into cells is optimal at pH 5.5. The low-pH activity and the structural specificity of PCFT (high affinity for folic acid, and low affinity for PT523 - a non-polyglutamable analog of aminopterin) distinguish this transporter functionally from the other major folate transporter, the reduced folate carrier (optimal activity at pH 7.4, very low affinity for folic acid and very high affinity for PT523), another member (SLC19A1) of the superfamily of solute transporters. Influx mediated by PCFT is electrogenic and can be assessed by current, cellular acidification, and radiotracer uptake. Influx has a Km range of 0.5 to 3 µM for most folates and antifolates at pH 5.5. The influx Km rises and the influx Vmax falls as the pH is increased, least so for the antifolate pemetrexed. The transporter is specific for the monoglutamyl forms of folates. A variety of organic anions inhibit PCFT-mediated transport at extremely high ratios of inhibitor to folate; the most potent are sulfobromophthalein, p-aminobenzylglutamate, and sulfasalazine. This may have pharmacological relevance in terms of the inhibitory effect of these agents on the intestinal absorption of folates. The PCFT minimal promoter has been defined and contains an NRF1 response element. There is also evidence for a role of vitamin D in the regulation of PCFT, with a VDR response element upstream of the minimal promoter. PCFT mRNA was reported to be increased in folate-deficient mice.
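The kinetic quantities mentioned above (Km and Vmax) follow the usual Michaelis–Menten form. The sketch below is purely illustrative: the Vmax values and the neutral-pH Km are placeholders chosen only to mimic the stated trend of a rising Km and falling Vmax at higher pH.

```python
# Illustrative Michaelis-Menten view of PCFT-mediated folate influx.

def influx(substrate_uM: float, vmax: float, km_uM: float) -> float:
    """Michaelis-Menten influx rate (arbitrary units)."""
    return vmax * substrate_uM / (km_uM + substrate_uM)

folate_uM = 1.0   # assumed extracellular folate concentration
print("pH 5.5:", influx(folate_uM, vmax=1.0, km_uM=1.0))    # Km within the quoted 0.5-3 uM range
print("pH 7.0:", influx(folate_uM, vmax=0.3, km_uM=10.0))   # placeholder values: higher Km, lower Vmax
```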
Tissue distribution:
PCFT is expressed in the proximal jejunum with a lower level of expression elsewhere in the intestine. Expression is localized to the apical membrane of intestinal and polarized MDCK dog kidney cells. PCFT is also expressed at the basolateral membrane of the choroid plexus. In view of the low levels of folate in the cerebrospinal fluid (CSF) in PCFT-null humans, PCFT must play a role in transport of folates across the choroid plexus into the CSF; however, the underlying mechanism for this has not been established. PCFT is expressed at the sinusoidal (basolateral) membrane of the hepatocyte, the apical brush-border membrane of the proximal tubule of the kidney, the basolateral membrane of the retinal pigment epithelium and the placenta. There is a prominent low-pH folate transport activity in the cells and/or membrane vesicles derived from these tissues which, in some cases, has been shown to be indicative of a proton-coupled folate transport process. However, it is unclear as to the extent that PCFT contributes to folate transport across these epithelia.
Loss-of-function:
The physiological role of PCFT is known based upon the phenotype of subjects with loss-of-function mutations of this gene – the rare autosomal hereditary disorder, hereditary folate malabsorption (HFM). These subjects have two major abnormalities: (i) severe systemic folate deficiency and (ii) a defect in the transport of folates from blood across the choroid plexus into the CSF with very low CSF folate levels even when the blood folate level is corrected or supranormal. Severe anemia, usually macrocytic, always accompanies the folate deficiency. Sometimes there is pancytopenia and/or hypogammaglobulinemia and/or T-cell dysfunction which can result in infections such as Pneumocystis jirovecii pneumonia. There can be GI signs including diarrhea and mucositis. The CNS folate deficiency is associated with a variety of neurological findings including developmental delays and seizures. The phenotype of the PCFT-null mouse has been reported and mirrors many of the findings in humans. PCFT was initially reported to be a low-affinity heme transporter. However, a role for PCFT in heme and iron homeostasis is excluded by the observation that humans or mice with loss-of-function PCFT mutations are not iron or heme deficient and the anemia, and all other systemic consequences of the loss of this transporter, are completely corrected with high-dose oral, or low-dose, parenteral folate.
As a drug target:
Because of the Warburg effect, and a compromised blood supply, human epithelial cancers grow within an acidic milieu, as lactate is produced during anaerobic glycolysis. Because PCFT activity is optimal at low pH, and its expression and a prominent low-pH transport activity are present in human cancers, there is interest in exploiting these properties by the development of antifolates that have a high affinity for this transporter and a very low affinity for the reduced folate carrier which delivers antifolates to normal tissues and thereby mediates the toxicity of these agents. A novel class of inhibitors of one carbon incorporation into purines is being developed with these properties. Pemetrexed, an antifolate inhibitor primarily of thymidylate synthase, is a good substrate for PCFT even at neutral pH as compared to other antifolates and folates.
**Cyclotomic field**
Cyclotomic field:
In number theory, a cyclotomic field is a number field obtained by adjoining a complex root of unity to Q, the field of rational numbers.
Cyclotomic field:
Cyclotomic fields played a crucial role in the development of modern algebra and number theory because of their relation with Fermat's Last Theorem. It was in the process of his deep investigations of the arithmetic of these fields (for prime n) – and more precisely, because of the failure of unique factorization in their rings of integers – that Ernst Kummer first introduced the concept of an ideal number and proved his celebrated congruences.
Definition:
For n ≥ 1, let ζn = e2πi/n ∈ C; this is a primitive nth root of unity. Then the nth cyclotomic field is the extension Q(ζn) of Q generated by ζn.
Properties:
The nth cyclotomic polynomial Φn(x) = ∏_{1 ≤ k ≤ n, gcd(k, n) = 1} (x − ζn^k) is irreducible, so it is the minimal polynomial of ζn over Q. The conjugates of ζn in C are therefore the other primitive nth roots of unity: ζn^k for 1 ≤ k ≤ n with gcd(k, n) = 1.
The degree of Q(ζn) is therefore [Q(ζn) : Q] = deg Φn = φ(n), where φ is Euler's totient function.
The roots of x^n − 1 are the powers of ζn, so Q(ζn) is the splitting field of x^n − 1 (or of Φn(x)) over Q.
Therefore Q(ζn) is a Galois extension of Q.
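These facts are easy to check computationally; the following is a small illustrative verification with SymPy (not part of the original article) that deg Φn = φ(n).

```python
# Check that the degree of the nth cyclotomic polynomial equals Euler's totient phi(n).
from sympy import symbols, cyclotomic_poly, totient, degree

x = symbols('x')
for n in [3, 4, 5, 8, 12, 15]:
    phi_n = cyclotomic_poly(n, x)          # the nth cyclotomic polynomial Phi_n(x)
    assert degree(phi_n, x) == totient(n)  # its degree is phi(n)
    print(n, phi_n, totient(n))
```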
Properties:
The Galois group Gal(Q(ζn)/Q) is naturally isomorphic to the multiplicative group (Z/nZ)×, which consists of the invertible residues modulo n, i.e., the residues a mod n with 1 ≤ a ≤ n and gcd(a, n) = 1. The isomorphism sends each σ ∈ Gal(Q(ζn)/Q) to a mod n, where a is an integer such that σ(ζn) = ζn^a.
Properties:
The ring of integers of Q(ζn) is Z[ζn].
For n > 2, the discriminant of the extension Q(ζn) / Q is (−1)^(φ(n)/2) · n^(φ(n)) / ∏_{p|n} p^(φ(n)/(p−1)).
In particular, Q(ζn) / Q is unramified above every prime not dividing n.
If n is a power of a prime p, then Q(ζn) / Q is totally ramified above p.
If q is a prime not dividing n, then the Frobenius element Frob_q ∈ Gal(Q(ζn)/Q) corresponds to the residue of q in (Z/nZ)×.
The group of roots of unity in Q(ζn) has order n or 2n, according to whether n is even or odd.
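The discriminant formula can be spot-checked in the same way. The sketch below (again SymPy, again illustrative) verifies it for n = 5, where the polynomial discriminant of Φ5 coincides with the field discriminant because Z[ζ5] is the full ring of integers.

```python
# Check the discriminant formula (-1)^(phi(n)/2) * n^phi(n) / prod_{p|n} p^(phi(n)/(p-1)) for n = 5.
from sympy import symbols, cyclotomic_poly, discriminant, totient, primefactors, Rational

x = symbols('x')
n = 5
phi = totient(n)
formula = (-1)**(phi // 2) * n**phi
for p in primefactors(n):
    formula /= Rational(p)**Rational(phi, p - 1)

poly_disc = discriminant(cyclotomic_poly(n, x), x)
print(poly_disc, formula)        # both equal 125
assert poly_disc == formula
```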
Properties:
The unit group Z[ζn]× is a finitely generated abelian group of rank φ(n)/2 – 1, for any n > 2, by the Dirichlet unit theorem. In particular, Z[ζn]× is finite only for n ∈ {1, 2, 3, 4, 6}. The torsion subgroup of Z[ζn]× is the group of roots of unity in Q(ζn), which was described in the previous item. Cyclotomic units form an explicit finite-index subgroup of Z[ζn]×.
Properties:
The Kronecker–Weber theorem states that every finite abelian extension of Q in C is contained in Q(ζn) for some n. Equivalently, the union of all the cyclotomic fields Q(ζn) is the maximal abelian extension Qab of Q.
Relation with regular polygons:
Gauss made early inroads in the theory of cyclotomic fields, in connection with the problem of constructing a regular n-gon with a compass and straightedge. His surprising result that had escaped his predecessors was that a regular 17-gon could be so constructed. More generally, for any integer n ≥ 3, the following are equivalent: a regular n-gon is constructible; there is a sequence of fields, starting with Q and ending with Q(ζn), such that each is a quadratic extension of the previous field; φ(n) is a power of 2; n = 2^a · p1⋯pr for some integers a, r ≥ 0 and distinct Fermat primes p1, …, pr. (A Fermat prime is an odd prime p such that p − 1 is a power of 2. The known Fermat primes are 3, 5, 17, 257, 65537, and it is likely that there are no others.) Small examples: n = 3 and n = 6: The equations ζ3 = (−1 + √−3)/2 and ζ6 = (1 + √−3)/2 show that Q(ζ3) = Q(ζ6) = Q(√−3), which is a quadratic extension of Q. Correspondingly, a regular 3-gon and a regular 6-gon are constructible.
Relation with regular polygons:
n = 4: Similarly, ζ4 = i, so Q(ζ4) = Q(i), and a regular 4-gon is constructible.
n = 5: The field Q(ζ5) is not a quadratic extension of Q, but it is a quadratic extension of the quadratic extension Q(√5 ), so a regular 5-gon is constructible.
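The criterion "φ(n) is a power of 2" makes the constructible n easy to enumerate; the snippet below (illustrative, not from the source) lists them up to 100.

```python
# A regular n-gon is constructible exactly when Euler's totient phi(n) is a power of two.
from sympy import totient

def is_power_of_two(k: int) -> bool:
    return k > 0 and (k & (k - 1)) == 0

constructible = [n for n in range(3, 101) if is_power_of_two(int(totient(n)))]
print(constructible)
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, 60, 64, 68, 80, 85, 96]
```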
Relation with Fermat's Last Theorem:
A natural approach to proving Fermat's Last Theorem is to factor the binomial x^n + y^n, where n is an odd prime, appearing in one side of Fermat's equation x^n + y^n = z^n, as follows: x^n + y^n = (x + y)(x + ζy)⋯(x + ζ^(n−1)y). Here x and y are ordinary integers, whereas the factors are algebraic integers in the cyclotomic field Q(ζn). If unique factorization holds in the cyclotomic integers Z[ζn], then it can be used to rule out the existence of nontrivial solutions to Fermat's equation.
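The factorization identity can be sanity-checked numerically for small odd n; the snippet below is an illustrative check at randomly chosen points, not part of the original argument.

```python
# Numerically verify x^n + y^n = prod_{k=0}^{n-1} (x + zeta^k * y) for odd n,
# where zeta = exp(2*pi*i/n) is a primitive nth root of unity.
import cmath, math, random

def check_factorization(n: int, trials: int = 5) -> None:
    zeta = cmath.exp(2j * math.pi / n)
    for _ in range(trials):
        x = complex(random.uniform(-2, 2), random.uniform(-2, 2))
        y = complex(random.uniform(-2, 2), random.uniform(-2, 2))
        product = complex(1.0)
        for k in range(n):
            product *= (x + zeta**k * y)
        assert abs(product - (x**n + y**n)) < 1e-6

for n in (3, 5, 7):
    check_factorization(n)
print("factorization identity verified for n = 3, 5, 7")
```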
Relation with Fermat's Last Theorem:
Several attempts to tackle Fermat's Last Theorem proceeded along these lines, and both Fermat's proof for n = 4 and Euler's proof for n = 3 can be recast in these terms. The complete list of n for which Q(ζn) has unique factorization is 1 through 22, 24, 25, 26, 27, 28, 30, 32, 33, 34, 35, 36, 38, 40, 42, 44, 45, 48, 50, 54, 60, 66, 70, 84, 90. Kummer found a way to deal with the failure of unique factorization. He introduced a replacement for the prime numbers in the cyclotomic integers Z[ζn], measured the failure of unique factorization via the class number hn, and proved that if the class number hp is not divisible by p (such p are called regular primes) then Fermat's theorem is true for the exponent n = p. Furthermore, he gave a criterion to determine which primes are regular, and established Fermat's theorem for all prime exponents p less than 100, except for the irregular primes 37, 59, and 67. Kummer's work on the congruences for the class numbers of cyclotomic fields was generalized in the twentieth century by Iwasawa in Iwasawa theory and by Kubota and Leopoldt in their theory of p-adic zeta functions.
List of class numbers of cyclotomic fields:
(sequence A061653 in the OEIS), or OEIS: A055513 or OEIS: A000927 for the h⁻ part (for prime n)
**Super-resolution photoacoustic imaging**
Super-resolution photoacoustic imaging:
Super-resolution photoacoustic imaging is a set of techniques used to enhance spatial resolution in photoacoustic imaging. Specifically, these techniques primarily break the optical diffraction limit of the photoacoustic imaging system. It can be achieved through a variety of mechanisms, such as blind structured illumination, multi-speckle illumination, or photo-imprint photoacoustic microscopy, as illustrated in Figure 1.
Photoacoustic (PA) imaging:
This particular biomedical imaging modality is a combination of optical imaging and ultrasound imaging. In other words, a photoacoustic (PA) image can be viewed as an ultrasound image in which the contrast depends on optical properties, such as the optical absorption of biomolecules like hemoglobin, water, melanin, lipids, and collagen. The advantages of photoacoustic imaging are that it gives higher specificity than conventional ultrasound imaging and greater penetration depth than conventional ballistic optical imaging modalities. Photoacoustic imaging works by irradiating the target with a short-pulsed laser, or alternatively an intensity-modulated laser. The target absorbs this optical energy, which is converted into heat; in most cases the heat is a fraction of the absorbed energy, approximately (1 − fluorescence quantum yield). The heat is further converted into a pressure rise via thermoelastic expansion, and the pressure rise behaves as an ultrasonic wave, called a "photoacoustic wave", that propagates through the target's surroundings. The photoacoustic wave is then detected by ultrasonic transducers, and these signals are used by an image processor and computer to reconstruct an image of the target.
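The conversion chain just described (absorption, heating, thermoelastic pressure rise) is commonly summarized by a simple proportionality. The relation below is a standard photoacoustics expression rather than something stated in this article, so the symbols should be read as conventional notation:

```latex
p_0 = \Gamma \, \eta_{\mathrm{th}} \, \mu_a \, F
```

Here p0 is the initial pressure rise, Γ the Grüneisen parameter, η_th the fraction of absorbed energy converted to heat (roughly 1 − fluorescence quantum yield, as noted above), μa the optical absorption coefficient, and F the local optical fluence.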
Super-resolution imaging and photoacoustics:
Recently, several techniques have broken the diffraction limit of light, enabling the observation of individual cellular structures, sub-cellular structures, and processes at the nanometer level, structures that were previously unresolvable by conventional microscopes because they are finer than the optical diffraction limit (~250 nm in the lateral direction at high optical NA). However, as super-resolution optical imaging generally relies on fluorophores, only fluorescence imaging, using multiple lasers or chemical manipulation of fluorophores, is possible. This results in complex configurations for the imaging system and limits its use to fluorescent targets. Photoacoustic tomography is able to complement these super-resolution techniques, achieving a much greater imaging depth and removing the need for fluorescent molecules.
Super resolution techniques:
Photoacoustics relies on optical excitation of targets and detection of acoustic emission. Since the frequencies of electromagnetic waves (light) are significantly higher than those of acoustic waves, the optical excitation generally sets the absolute resolution. This absolute resolution is well known in optics and is called the diffraction limit of light. This diffraction limit, shown below, represents the minimum distance that can be resolved between two objects (or, similarly, the minimum separation between two objects excited by a laser).
Super resolution techniques:
d = λ/(2·NA)
Super-resolution techniques break this limit generally by using nonlinear effects such as saturation, switchable quenching, or multiphoton absorption. Photoacoustics directly benefits from these techniques: the excitation can be confined to a tighter spot, so the detected acoustic waves also originate from a tighter spot, ultimately improving resolution.
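For a sense of scale, the limit can be evaluated directly; the wavelength and numerical aperture below are example values rather than parameters taken from this article.

```python
# Optical diffraction limit d = lambda / (2 * NA) for an example configuration.
wavelength_nm = 532.0      # a green pulsed laser wavelength (example value)
numerical_aperture = 1.0   # example high-NA objective

d_nm = wavelength_nm / (2 * numerical_aperture)
print(f"diffraction-limited resolution ~{d_nm:.0f} nm")   # ~266 nm
```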
Optical saturation and nonlinear thermal expansion: Photoacoustics primarily relies on the pressure wave generated by the absorbing material. At higher excitation energy levels, the excitation becomes strongly nonlinear, leading to different effects on the generated pressure wave. The two equations below give the resulting pressure waves from either nonlinear thermal expansion or optical saturation.
(1) p(r − r0, z − z0) = (1/κ)·{β1·T(r − r0, z − z0) + (1/2)·β2·[T(r − r0, z − z0)]²}
(2) p0(r0, z0, Ep) = Σ_{n=1..∞} cn(r0, z0)·(Ep)^n
Equation (1) above is the pressure wave generated by the non-linear thermal expansion. Equation (2) is the resulting wave generated from optical saturation.
Super resolution techniques:
The key idea is that the sample is pumped with high levels of energy such that all of its electrons are saturated into the excited state. In this saturated state, the additional energy will be utilized in different ways, such as nonlinear thermal expansion. From these pressure waves, images corresponding to the absorption of different molecules can be reconstructed. In Figures 2a and 2b, two 100 nm gold nanoparticles are resolved with super-resolution-enhanced photoacoustics and verified by atomic force microscopy, a different modality that can break the diffraction-limited resolution since it does not rely on light.
Super resolution techniques:
Figure 2: Comparison of different imaging modalities on two 100 nm gold nanoparticles.
This can be extended to whole images with scanning techniques, and Figure 3 shows the dramatic difference with the added super-resolution technique. With super resolution, the smaller details of the pictured melanoma cell are visible.
Figure 3: Before and after super resolution techniques applied to photoacoustic phase microscopy.
Super resolution techniques:
Photoswitchable probes: Reversibly photoswitchable proteins can switch back and forth between two optically separated states, making them useful for high-contrast and high-resolution PA imaging. Yao et al. demonstrated differential imaging of the reversibly switchable phytochrome BphP1 in in vivo experiments (terming the technology RS-PAM). The two states of the BphP1 molecule are the Pfr and Pr states, referred to as the ON and OFF states, respectively. In its stable form, BphP1 is in the ON state. Upon illumination with a 780 nm laser pulse train, BphP1 molecules in the ON state gradually switch to the OFF state. As a result, the amplitude of the generated PA signals decreases. The decay rate is proportional to the local excitation intensity; thus, the PA signal in the center of the excitation beam will decay faster than in the surrounding region. Taking the difference image between the two states gives a high-contrast image of the molecules (Figure 4), and sub-diffraction-limited lateral and axial resolutions can be achieved. In the lateral direction, the dependence of the decay on the local excitation intensity results in a smaller FWHM of the decay PSF. The higher the order of the nonlinear decay, the greater the resolution enhancement. The effective lateral PSF for this imaging technique was shown to be 0.51·λ0/(NA·√(1 + bm)), where b is the power dependence of the switching-off rate on the excitation intensity (for BphP1, b = 1) and m is the order of the polynomial fitting. This technique therefore achieves finer resolution by a factor of √(1 + bm) compared to conventional PAM.
Super resolution techniques:
In the axial direction, the nonlinear optical effect determines the resolution (since only in-focus molecules contribute to the signal decay), thus making it independent of the acoustic detection. This gives the technology optical sectioning capability. For point targets, the achievable axial resolution is 1.8·√(2^(1/(1+bm)) − 1)×(λ0/NA²), whereas for large targets it is 1.8·√(2^(1/(bm)) − 1)×(λ0/NA²).
Figure 4: PA images of BphP1-expressing U87 cells and HbO2 in scattering media. The differential image effectively removes the background signal, increasing the contrast of the cell area.
As a demonstration (Figure 5), this technique showed much finer lateral and axial resolutions compared to conventional PAM. The lateral and axial resolutions were quantified to be ~141 nm and ~400 nm, respectively, which were about 2 and 75 times better than those of conventional PAM.
Figure 5: Comparison of lateral and axial resolution of conventional PAM and RS-PAM in imaging.
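The resolution scaling just described can be turned into a rough number. The sketch below takes the lateral-PSF expression above at face value; λ0 = 780 nm is the wavelength mentioned in the text, while the numerical aperture and the fit order m are illustrative assumptions, so the output is only an order-of-magnitude check.

```python
# Illustrative RS-PAM lateral-resolution estimate, assuming the scaling
# FWHM = 0.51 * lambda0 / (NA * sqrt(1 + b*m)) quoted above.
import math

lambda0_nm = 780.0   # switching/excitation wavelength for BphP1 (from the text)
NA = 1.4             # assumed high-NA objective
b, m = 1, 3          # b = 1 for BphP1 (from the text); m = 3 is an assumed fit order

conventional_nm = 0.51 * lambda0_nm / NA
rs_pam_nm = conventional_nm / math.sqrt(1 + b * m)
print(f"conventional PAM ~{conventional_nm:.0f} nm, RS-PAM ~{rs_pam_nm:.0f} nm")
```

With these assumed values the estimate comes out near the ~141 nm figure quoted above, but the agreement depends entirely on the assumed NA and m.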
Super resolution techniques:
Two-photon photoacoustic mechanism: Non-radiative two-photon absorption can be utilized to achieve high 3D resolution in in vivo experiments. In conventional PAM, temporal or spatial filtering is needed to eliminate background signal and achieve high axial resolution. In 2PAM, however, only the area within the focal spot generates nonlinear photoacoustic signals (Figure 6). Based on the two-photon absorption PSF, the lateral (Δx2PAM) and axial (Δz2PAM) resolutions of the 2PAM system are determined to be
(3) Δx2PAM = 0.64·λex/(√2·NA) for NA ≤ 0.7, and 0.65·λex/(√2·NA^0.91) for NA > 0.7
(4) Δz2PAM = 1.064·(λex/√2)·[1/(n − √(n² − NA²))]
where λex is the excitation wavelength and n is the medium refractive index. An important property of the axial resolution is its independence from the frequency of the photoacoustic waves; therefore, lower-frequency ultrasound waves can be used for deeper detection.
Figure 6: Comparison between PAM and 2PAM.
Although two-photon excitation can potentially give a smaller PSF, one-photon absorption is achieved easily by molecules, so there will be a prominent 1PA signal from the area around the focal spot, making the detection of 2PA signals difficult. The 2PA signal can be separated from the 1PA background signal using a lock-in detection system, as demonstrated by Lee et al. After amplitude modulation of the input laser pulse train (modulation frequency = f), nonlinear absorption of molecules within the focal excitation spot will generate high harmonics of the modulation frequency (2f, 3f, ...). Photoacoustic signals from the two-photon absorption can be extracted by locking in at the second harmonic of the modulation frequency (2f). In this paper, lateral and axial resolutions of 0.51 μm and 2.41 μm were achieved in in vivo experiments, demonstrating the sub-femtoliter resolution (0.49 μm³ with NA = 0.8) of the 2PAM imaging system. 1PA images had only 0.71 μm lateral resolution. The two-photon nature of the signals contributes to the factor-of-1.4 improvement of the 2PAM lateral resolution over 1PAM.
Figure 7: Comparison between PAM and 2PAM images of melanin distribution.
Super resolution techniques:
Structured illumination: Structured illumination is an imaging technique that, when applied to microscopy, can double the spatial resolution of conventional fluorescence microscopy by using the moiré interference pattern, the coarse pattern produced when two finer patterns are overlapped, which is easier to view than either original pattern. Structured illumination relies on the interaction of a three-dimensional modulated illumination pattern with high-frequency variations in the sample fluorescence caused by small structures; this interaction produces a lower-frequency moiré pattern that carries otherwise non-resolvable structures into the observed image. When these moiré patterns are imaged at different positions and subsequently computationally post-processed, the sub-diffraction sample information can be algorithmically decoded and reconstructed. When information from above and below the focal plane is added, spatial resolution is enhanced and normally non-resolvable sample structures become resolvable. One advantage of structured illumination is that it can be used with any conventional fluorophore; one disadvantage is its image acquisition speed, which requires complex imaging and therefore compromises the temporal resolution needed for live cell imaging.
Blind illumination: Blind structured illumination photoacoustic microscopy (BSIPAM) was employed as a feedback-free imaging method that uses random optical speckle patterns as structured illumination to enhance the spatial resolution of PA imaging within scattering media. Unlike structured illumination microscopy, where the spatial resolution enhancement is limited to a factor of 2, BSIPAM can achieve a higher spatial resolution. BSIPAM operates on the key principle of recovering the absorber distribution ρ at a spatial resolution close to the speckle size.
Super resolution techniques:
Principle: If there are M different speckle patterns, Φ1,...,ΦM, and the assumption holds that the speckle patterns and the absorber distribution are represented by discrete vectors Φm, ρ ∈ ℜ^N, an expression for the measured PA data can be written.
ym = h ∗ [Φm ⋅ ρ] + εm, where m = 1,...,M. Here h ∈ ℜ^N is the PSF of the photoacoustic imaging system in discrete form, [a ⋅ b](xi) = a(xi)b(xi) is the pointwise multiplication, [a ∗ b](xi) = Σ_{j=1..N} a(xi − xj)b(xj) is the discrete convolution, and εm is the noise in the data.
Super resolution techniques:
The goal was to recover the absorber distribution ρ and the speckle patterns Φm from the data via the expression above. As the intensity distributions and speckle patterns are unknown, the expression yields MN equations with (M+1)N unknown scalar parameters. The sound sources pm = Φm ⋅ ρ appearing in the expression are uniquely determined by the deconvolution equations, but the deconvolution is ill-conditioned, so the authors use block-sparsity to obtain high-resolution reconstructions.
Super resolution techniques:
It is well known that using sparsity enables super-resolution signal recovery, as sparse-recovery algorithms assume that the signal is a superposition of a few elements from a large set of high-frequency and low-frequency information. But this approach misses the main point of the reconstruction problem: that all products come from the same density distribution ρ. A joint-sparsity term ‖p‖2,1 = Σ_{i=1..N} √(Σ_{m=1..M} |pm(xi)|²) is implemented as a regularization term on p = (p1,...,pM), and a reconstruction algorithm, block-FISTA, is developed to realize this joint-sparsity approach for solving the expression.
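To make the notation concrete, here is a minimal numerical sketch of the forward model ym = h ∗ [Φm ⋅ ρ] + εm and of the joint-sparsity term ‖p‖2,1. The grid size, PSF width, and speckle statistics are arbitrary illustrations, and the block-FISTA reconstruction itself is not reproduced.

```python
# Minimal 1D sketch of the BSIPAM forward model and the joint-sparsity norm.
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 20                      # grid points and number of speckle realizations

rho = np.zeros(N)                   # sparse absorber distribution (two thin "lines")
rho[100], rho[140] = 1.0, 1.0

x = np.arange(N) - N // 2
h = np.exp(-x**2 / (2 * 10.0**2))   # Gaussian acoustic PSF (sigma = 10 samples)

speckles = rng.exponential(1.0, size=(M, N))   # crude stand-in for speckle intensities
p = speckles * rho                             # PA sources p_m = Phi_m . rho
y = np.array([np.convolve(pm, h, mode="same") for pm in p])   # measured data (noise omitted)

# Joint-sparsity term ||p||_{2,1}: sum over positions of the l2 norm across realizations.
joint_sparsity = np.sum(np.sqrt(np.sum(p**2, axis=0)))
print(y.shape, round(float(joint_sparsity), 2))
```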
Super resolution techniques:
Experimental setup: In the experimental setup of BSIPAM (Figure 4), the sample is placed in a water-filled tank with transparent walls, and light from a pulsed Nd:YAG laser (λ = 532 nm) passes through a glass diffuser and is focused onto the sample by a lens with focal length f = 50 mm. The speckle size at the focal plane of the lens is 1.22λf/d, where λ is the wavelength of the light source and d is the aperture diameter. Ultrasonic signals were recorded by an ultrasonic transducer connected to an ultrasonic receiver and sampled with a 12-bit digital oscilloscope. Photoacoustic signals were then collected with a step size of 10 μm over a scan length of 1.0 mm, the time traces were processed by the Fourier transform, and the magnitude between 15 and 85 MHz was used to determine the net photoacoustic response at each spatial location.
Figure 4: Schematic of the experimental setup of BSIPAM. Figure courtesy of.
Super resolution techniques:
Research: BSIPAM was tested with a simple one-dimensional sample in which the absorber distribution ρ was a series of 8 lines of 10 μm thickness, the distance between the lines varied from 40 to 150 μm, and the absorbers were illuminated with a speckle pattern with a speckle size of 25 μm (Figure 5). The photoacoustic response distribution was determined by the product of the speckle intensity and the absorber distribution. Next, it was assumed the absorber was present in the focal plane of an ultrasonic transducer with a Gaussian point spread function (PSF) h with a FWHM of 100 μm. The photoacoustic response was determined by the convolution of the source distribution with the transducer PSF, and the experiment was repeated for M = 100 random speckle patterns, a new one each time.
Figure 5: The results from running BSIPAM with the simple 1D sample. (a) The sample, (b) the speckle pattern, (c) the photoacoustic source distribution from the product of speckle intensity and absorber distribution, and (d) the photoacoustic response from the convolution of the PA source distribution and the transducer PSF. Figure courtesy of.
Figure 6 shows the results of six line scans out of the M = 100 total line scans, each of them from a different random speckle pattern.
Figure 6: (a) Results from the photoacoustic PA response measured from line scans over the absorber distribution subjected to six different random illumination patterns. (b) Object and reconstructions of the object using the 100 speckle patterns whose responses are shown. Figure courtesy of.
Based on Figure 6, structured illumination via random speckle patterns influences the line scan response; this variation is needed for super-resolution imaging. Also, in Figure 6(b), a set of different techniques were applied to compare their photoacoustic responses in reconstructing the absorber distribution. The mean response was obtained by averaging all line scans, the variance response was obtained by taking the square root of the signal variance at each spatial position, the Richardson-Lucy deconvolution (RLD) response was obtained by performing the deconvolution of the mean response with the given Gaussian PSF, and the BSIPAM response was obtained through the block-FISTA algorithm. In comparison to the other responses, the BSIPAM response shows that the smallest feature spacing (40 μm) is resolved, giving a resolution advantage over the other approaches.
Super resolution techniques:
In another example, BSIPAM was tested with a two-dimensional sample, a star-shaped absorber composed of multiple lines. The sample has a size of 256 μm², the distance between the lines is 17 μm, and the PSF is a two-dimensional Gaussian kernel with a FWHM of 35 μm. The BSIPAM experiment was repeated M = 200 times, each with a random speckle pattern, and the speckles each have a size of 9 μm. Figure 7 displays the results. When BSIPAM is applied, the resolution is improved by a factor of 2.4.
Figure 7: The results with a 2D sample and 200 random speckle patterns. (a) The object. The reconstructed object using (b) the mean PA response, (c) regularized deconvolution, and (d) BSIPAM are displayed. Figure courtesy of.
Super resolution techniques:
Speckled illumination: Multiple optical speckle illumination was used as a source of fluctuations to produce super-resolution PA imaging; specifically, a second-order analysis of optical speckle-induced PA fluctuations is used to form PA images beyond the acoustic diffraction limit.
Principle: Multiple optical speckle illumination makes use of the principle of super-resolution optical fluctuation imaging (SOFI). SOFI is based on the principle that a higher-order statistical analysis of temporal fluctuations caused by fluorescence blinking helps resolve uncorrelated fluorophores in the same diffraction spot.
Super resolution techniques:
In principle, it is assumed that the reconstructed PA quantity, A(r), can be written in the form: A(r) = [μa(r) × I(r)] ∗ h(r), where μa is the optical absorption distribution, I is the optical intensity pattern, and h is the point spread function (PSF). If the region of interest is illuminated by multiple speckle patterns Ik(r) with ensemble mean ⟨I(r)⟩ = I0, then the mean PA image can be calculated by averaging the PA images produced from the individual realizations Ik(r) of the speckle illumination: ⟨A⟩(r) = I0 × [μa(r) ∗ h(r)]. This shows that the resolution is determined by the spatial frequency content of h(r). If it is assumed that the speckle size is much smaller than the features in h(r), then the variance image is given by: σ²[A](r) ∝ μa²(r) ∗ h²(r). The squared PSF has a higher frequency content than the PSF itself, and therefore the variance image has a higher resolution than the mean image.
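The mean-versus-variance argument can be reproduced with a toy simulation. In the sketch below the grid, PSF, and speckle model are invented purely for illustration; the point is only that the variance image separates two absorbers that the mean image does not.

```python
# Toy 1D demonstration: speckle-induced fluctuations sharpen the variance image.
import numpy as np

rng = np.random.default_rng(1)
N, K = 400, 2000                          # grid points and speckle realizations

mu_a = np.zeros(N)                        # two absorbers closer than the PSF can resolve
mu_a[185], mu_a[215] = 1.0, 1.0

L = 121
x = np.arange(L) - L // 2
sigma = 15.0
h = np.exp(-x**2 / (2 * sigma**2))        # acoustic PSF; h**2 is narrower by sqrt(2)

images = np.array([
    np.convolve(mu_a * rng.exponential(1.0, size=N), h, mode="same")
    for _ in range(K)                     # fully developed speckle: exponential intensity
])

mean_img = images.mean(axis=0)            # approx. mu_a convolved with h (diffraction-limited)
var_img = images.var(axis=0)              # approx. mu_a^2 convolved with h^2 (sharper)

# Dip between the absorbers: <= 0 means unresolved, > 0 means a visible valley.
print("mean-image dip:    ", 1 - mean_img[200] / mean_img[185])
print("variance-image dip:", 1 - var_img[200] / var_img[185])
```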
Super resolution techniques:
Experimental setup: As shown in Figure 8, a 5 ns pulsed laser beam was focused onto a ground-glass rotating diffuser, and the light was scattered onto two-dimensional absorbing samples embedded in an agarose gel block. The speckle grain size was approximately 30 μm, set by the sample's distance of 5 cm from the diffuser. The absorbing samples were placed on an ultrasound transducer array connected to an ultrasound scanner; polyethylene beads of 50–100 μm in diameter were used as absorbing samples acting as isotropic emitters. Finally, the diffuser was rotated to produce a series of PA images for 100 different speckle patterns, and the mean and variance images were computed.
Figure 8: Schematic of the experimental setup of multiple optical speckle illumination. Figure courtesy of.
Super resolution techniques:
Research: When multiple optical speckle illumination was tested on a sample of randomly distributed 100 μm diameter absorbing beads, the variance image clearly displayed the contributions of each bead, the images displayed approximations of the point spread function (PSF) and its square, and the resolution was enhanced by a factor of 1.4 in the variance image as compared with the mean image. The variance image appears as the convolution of the sample with the squared PSF, and the results in the figure below (Figure 9) clearly demonstrate the ability of SOFI to produce super-resolution PA imaging with multiple-speckle illumination.
Figure 9: (a) A photograph of the sample of 100 μm diameter beads, (b) the mean PA image of the sample over 100 speckle realizations, (c) the variance image of the sample. Insets in both (b) and (c) are images of a single bead. Figure courtesy of.
Future directions:
Super-resolution PA imaging has several potential future directions. The BSIPAM algorithm has the potential to reconstruct structures from signals acquired with other modalities, such as photothermal imaging or optical coherence tomography. Multiple-speckle illumination can be applied to fluctuations of the absorption caused by blinking or switchable contrast agents, instead of relying only on tissue-induced temporal decorrelation of speckle patterns.
**POLR2J2**
POLR2J2:
DNA directed RNA polymerase II polypeptide J-related gene, also known as POLR2J2, is a human gene. This gene is a member of the RNA polymerase II subunit 11 gene family, which includes three genes in a cluster on chromosome 7q22.1 and a pseudogene on chromosome 7p13. The founding member of this family, DNA directed RNA polymerase II polypeptide J, has been shown to encode a subunit of RNA polymerase II, the polymerase responsible for synthesizing messenger RNA in eukaryotes. This locus produces multiple, alternatively spliced transcripts that potentially express isoforms with distinct C-termini compared to DNA directed RNA polymerase II polypeptide J. Most or all variants are spliced to include additional non-coding exons at the 3' end, which makes them candidates for nonsense-mediated decay (NMD). Consequently, it is not known if this locus expresses a protein or proteins in vivo.
**Speed shop**
Speed shop:
Speed shops are local brick-and-mortar businesses which typically purvey aftermarket automotive accessories intended to increase the performance of automobiles. They came into existence in the 1940s in North America as a result of the then-rising popularity of hot rod culture. The term has recently broadened to encompass motorcycle performance as well.
Examples:
Austin Speed Shop, owned by Jesse James
Rocco and Cheater's Speed Shop
So-Cal Speed Shop
**Chord (astronomy)**
Chord (astronomy):
In the field of astronomy the term chord typically refers to a line crossing an object which is formed during an occultation event. By taking accurate measurements of the start and end times of the event, in conjunction with the known location of the observer and the object's orbit, the length of the chord can be determined, giving an indication of the size of the occulting object. By combining observations made from several different locations, multiple chords crossing the occulting object can be determined, giving a more accurate shape and size model. This technique of using multiple observers during the same event has been used to derive more sophisticated shape models for asteroids, whose shapes can be highly irregular. A notable example of this occurred in 2002 when the asteroid 345 Tercidina underwent a stellar occultation of a very bright star as seen from Europe. During this event a team of at least 105 observers recorded 75 chords across the asteroid's surface, allowing for a very accurate size and shape determination. In addition to using a known orbit to determine an object's size, the reverse process can also be used. In this usage the occulting object's size is taken to be known, and the occultation time can be used to determine the length of the chord the background object traced across the foreground object. Knowing this chord and the foreground object's size, a more precise orbit for the object can be determined.
Chord (astronomy):
This usage of the term "chord" is similar to the geometric concept (see: Chord (geometry)). The difference being that in the geometric sense a chord refers to a line segment whose ends lie on a circle, whereas in the astronomical sense the occulting shape is not necessarily circular.
Observation process:
Because an occultation event for an individual object is quite rare, the process of observing occultation events begins with the creation of a list of candidate targets. The list is generated by computer analysis of the orbital motions of a large collection of objects with known orbital parameters. Once a candidate event has been chosen whose ground track passes over the site of an observer, the preparations for the observation begin. A few minutes before the event is expected to happen the observing telescope is pointed at the target star and the star's lightcurve is recorded. The recording of the lightcurve continues during and for a short time after the predicted event. This extra recording time is due in part to uncertainties in the occulting object's orbit, but also to the possibility of detecting other objects orbiting the primary object (for example, a companion in a binary asteroid system; the ring system around the planet Uranus was also detected this way).
Observation process:
The exact method of lightcurve determination is dependent on the specific equipment available to the observer and the goals of the observation; however, in all occultation events accurate timing is an essential component of the observation process. The exact time that the foreground object eclipses the other can be used to work out a very precise position along the occulting object's orbit. Also, since the duration of the drop in the measured lightcurve gives the object's size and since occultation events typically only last somewhere on the order of a few seconds, very fast integration times are required to allow for high temporal resolution along the lightcurve. A second method of achieving very high temporal accuracy is to use a long exposure and allow the target star to drift across the CCD during the exposure. This method, known as the trailed image method, produces a streak along the photograph whose thickness corresponds to the brightness of the target star, with the distance along the streak indicating time; this allows for very high temporal accuracy even when the target star may be too dim for the method described above using high-frequency short exposures. With high enough temporal resolution even the angular size of the background star can be determined. Once the lightcurve has been recorded, the chord across the occulting object can be determined via calculation. By using the start and end times of the occultation event, the position in space of both the observer and the occulting object can be worked out (a process complicated by the fact that both the object and the observer are moving). Knowing these two locations, combined with the direction to the background object, the two endpoints of the chord can be determined using simple geometry. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
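As a rough illustration of the timing arithmetic, the chord length is simply the occultation duration multiplied by the relative sky-plane velocity of the shadow; all numbers below are hypothetical, and real analyses use the observer's geodetic position and the occulting body's ephemeris.

```python
# Hypothetical timings for one observer of a single occultation event.
t_start = 0.0    # disappearance time of the star (s)
t_end = 5.8      # reappearance time (s)
v_sky = 16.4     # relative sky-plane velocity of the shadow (km/s)

chord_length_km = (t_end - t_start) * v_sky
print(f"chord length ~ {chord_length_km:.1f} km")

# Combining several such chords, each reduced to an offset from a
# reference track, constrains the occulting body's size and shape.
```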
**Solid sorbents for carbon capture**
Solid sorbents for carbon capture:
Solid sorbents for carbon capture include a diverse range of porous, solid-phase materials, including mesoporous silicas, zeolites, and metal-organic frameworks. These have the potential to function as more efficient alternatives to amine gas treating processes for selectively removing CO2 from large, stationary sources including power stations. While the technology readiness level of solid adsorbents for carbon capture varies between the research and demonstration levels, solid adsorbents have been demonstrated to be commercially viable for life-support and cryogenic distillation applications. While solid adsorbents suitable for carbon capture and storage are an active area of research within materials science, significant technological and policy obstacles limit the availability of such technologies.
Overview:
The combustion of fossil fuels generates over 13 gigatons of CO2 per year. Concern over the effects of CO2 with respect to climate change and ocean acidification led governments and industries to investigate the feasibility of technologies that capture the resultant CO2 and prevent it from entering the carbon cycle. For new power plants, technologies such as pre-combustion and oxy-fuel combustion may simplify the gas separation process.
Overview:
However, existing power plants require the post-combustion separation of CO2 from the flue gas with a scrubber. In such a system, fossil fuels are combusted with air and CO2 is selectively removed from a gas mixture also containing N2, H2O, O2 and trace sulphur, nitrogen and metal impurities. While exact separation conditions are fuel and technology dependent, in general CO2 is present at low concentrations (4–15% v/v) in gas mixtures near atmospheric pressure and at temperatures of approximately 40–60 °C. Sorbents for carbon capture are regenerated using temperature, pressure or vacuum, so that CO2 can be collected for sequestration or utilization and the sorbent can be reused.
Overview:
The most significant impediment to carbon capture is the large amount of electricity required. Without policy or tax incentives, the production of electricity from such plants is not competitive with other energy sources. The largest operating cost for power plants with carbon capture is the reduction in the amount of electricity produced, because energy in the form of steam is diverted from making electricity in the turbines to regenerating the sorbent. Thus, minimizing the amount of energy required for sorbent regeneration is the primary goal behind much carbon capture research.
Metrics:
Significant uncertainty exists around the total cost of post-combustion CO2 capture because full-scale demonstrations of the technology have yet to come online. Thus, individual performance metrics are generally relied upon when comparisons are made between different adsorbents.
Regeneration energy—Generally expressed as energy consumed per weight of CO2 captured (e.g. 3,000 kJ/kg). These values, if calculated directly from the latent and sensible heat components of regeneration, measure the total amount of energy required for regeneration.
Parasitic energy—Similar to regeneration energy, but measures how much usable energy is lost. Owing to the imperfect thermal efficiency of power plants, not all of the heat required to regenerate the sorbent would actually have produced electricity.
Adsorption capacity—The amount of CO2 adsorbed onto the material under the relevant adsorption conditions.
Metrics:
Working capacity—The amount of CO2 that can be expected to be captured by a specified amount of adsorbent during one adsorption–desorption cycle. This value is generally more relevant than the total adsorption capacity.
Selectivity—The calculated ability of an adsorbent to preferentially adsorb one gas over another gas. Multiple methods of reporting selectivity have been reported and in general values from one method are not comparable to values from another method. Similarly, values are highly correlated to temperature and pressure.
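To illustrate how working capacity differs from total adsorption capacity, the sketch below evaluates a single-site Langmuir isotherm at an assumed adsorption pressure and an assumed regeneration pressure. The isotherm parameters and pressures are invented for the example and do not describe any particular sorbent.

```python
def langmuir_loading(pressure_bar, q_sat=4.0, b=1.5):
    """Single-site Langmuir isotherm, mol CO2 per kg sorbent.
    q_sat (saturation loading) and b (affinity) are made-up values."""
    return q_sat * b * pressure_bar / (1.0 + b * pressure_bar)

# Assumed CO2 partial pressures: ~0.15 bar during adsorption from flue gas
# and ~0.01 bar after regeneration.
working_capacity = langmuir_loading(0.15) - langmuir_loading(0.01)
print(f"working capacity ~ {working_capacity:.2f} mol CO2 per kg sorbent")
```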
Comparison to aqueous amine absorbents:
Aqueous amine solutions absorb CO2 via the reversible formation of ammonium carbamate, ammonium carbonate and ammonium bicarbonate. The formation of these species and their relative concentration in solution is dependent upon the specific amine or amines as well as the temperature and pressure of the gas mixture. At low temperatures, CO2 is preferentially absorbed by the amines and at high temperatures CO2 is desorbed. While liquid amine solutions have been used industrially to remove acid gases for nearly a century, amine scrubber technology is still under development at the scale required for carbon capture.
Comparison to aqueous amine absorbents:
Advantages Multiple advantages of solid sorbents have been reported. Unlike amines, solid sorbents can selectively adsorb CO2 without the formation of chemical bonds (physisorption). The significantly lower heat of adsorption for solids requires less energy for the CO2 to desorb from the material surface. Also, two primary or secondary amine molecules are generally required to absorb a single CO2 molecule in liquids. For solid surfaces, large capacities of CO2 can be adsorbed. For temperature swing adsorption processes, the lower heat capacity of solids has been reported to reduce the sensible energy required for sorbent regeneration. Many environmental concerns over liquid amines can be eliminated by the use of solid adsorbents.
Comparison to aqueous amine absorbents:
Disadvantages Manufacturing costs are expected to be significantly greater than the cost of simple amines. Because flue gas contains trace impurities that degrade sorbents, solid sorbents may prove to be prohibitively expensive. Significant engineering challenges must be overcome. Sensible energy required for sorbent regeneration cannot be effectively recovered if solids are used, offsetting their significant heat capacity savings. Additionally, heat transfer through a solid bed is slow and inefficient, making it difficult and expensive to cool the sorbent during adsorption and heat it during desorption. Lastly, many promising solid adsorbents have been measured only under ideal conditions, which ignores the potentially significant effects H2O can have on working capacity and regeneration energy.
Physical adsorbents:
Carbon dioxide adsorbs in appreciable quantities onto many porous materials through van der Waals interactions. Compared to N2, CO2 adsorbs more strongly because the molecule is more polarizable and possesses a larger quadrupole moment. However, stronger adsorptives including H2O often interfere with the physical adsorption mechanism. Thus, discovering porous materials that can selectively bind CO2 under flue gas conditions using only a physical adsorption mechanism is an active research area.
Physical adsorbents:
Zeolites Zeolites, a class of porous aluminosilicate solids, are currently used in a wide variety of industrial and commercial applications including CO2 separation. The capacities and selectivities of many zeolites are among the highest for adsorbents that rely upon physisorption. For example, zeolite Ca-A (5A) has been reported to display both a high capacity and selectivity for CO2 over N2 under conditions relevant for carbon capture from coal flue gas, although it has not been tested in the presence of H2O. Industrially, CO2 and H2O can be co-adsorbed on a zeolite, but high temperatures and a dry gas stream are required to regenerate the sorbent.
Physical adsorbents:
Metal-organic frameworks Metal-organic frameworks (MOFs) are promising adsorbents. Sorbents displaying a diverse set of properties have been reported. MOFs with extremely large surface areas are generally not among the best for CO2 capture compared to materials with at least one adsorption site that can polarize CO2. For example, MOFs with open metal coordination sites function as Lewis acids and strongly polarize CO2. Owing to CO2's greater polarizability and quadrupole moment, CO2 is preferentially adsorbed over many flue gas components such as N2. However, flue gas contaminants such as H2O often interfere. MOFs with specific pore sizes, tuned to preferentially adsorb CO2, have been reported. Studies from 2015 using dolomite-based solid sorbents and MgO- or CaO-based sorbents showed high capability and durability at elevated temperatures and pressures.
Chemical adsorbents:
Amine impregnated solids Frequently, porous adsorbents with large surface areas, but only weak adsorption sites, lack sufficient capacity for CO2 under realistic conditions. To increase low pressure CO2 adsorption capacity, adding amine functional groups to highly porous materials has been reported to result in new adsorbents with higher capacities. This strategy has been analyzed for polymers, silicas, activated carbons and metal-organic frameworks. Amine impregnated solids utilize the well-established acid-base chemistry of CO2 with amines, but dilute the amines by containing them within the pores of solids rather than as H2O solutions. Amine impregnated solids are reported to maintain their adsorption capacity and selectivity under humid test conditions better than alternatives. For example, a 2015 study of 15 solid adsorbent candidates for CO2 capture found that under multicomponent equilibrium adsorption conditions simulating humid flue gas, only adsorbents functionalized with alkylamines retained a significant capacity for CO2. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Filet lace**
Filet lace:
Filet lace is the general word used for all the different techniques of embroidery on knotted net (or in French broderie sur filet noué). It is a handmade needlework created by weaving or embroidering with a long blunt needle and thread on a ground of knotted net lace (filet work) made of square or diagonal meshes of the same or of different sizes. Lacis uses the same technique but is made on a ground of leno (a woven fabric) or small canvas (not a knotted lace).
History:
Filet lace is a form of decorative netting and as such can be presumed to have derived at some point from the fishnet that a community would require for fishing, hunting, transporting, etc. and not necessarily because they were living close to the water.
History:
The Latin word filatorium was used to describe filet lace; Jourdain (1904) quotes a reference to Exeter Cathedral possessing four pieces of filet lace in 1327. Filatorium means a place for spinning, from filare, to spin, from the Latin filum, a thread. (See filatory.) Ingram (1922) states that there was a "cushion of net-work in St. Paul's Cathedral so [sic] early as 1295." Such work, in the 14th century, was also described as "opus araneum". Filet-work is the result of knotting a fabric of diagonal or square meshes to create an open fabric called lace. The tools used to make a knotted net lace are a shuttle-needle and a gauge stick to measure the meshes.
History:
The book Renaissance Patterns for Lace, Embroidery and Needlepoint, an unabridged facsimile of the Singuliers et nouveaux pourtraicts of 1587 by Federico de Vinciolo contains approximately 50 beautiful and well designed patterns which are suitable for filet lace-embroidery on knotted net using the linen stitch.
Technique:
As mentioned above, filet lace is created by doing embroidery stitches on a knotted net lace. The knotted ground lace can either be made by the lacemaker or, as of 2003, purchased commercially in handmade or machine-made varieties.
Technique:
Making the net by hand with a shuttle needle and a gauge involves anchoring the piece, using either a heavy cushion (which Carità (1909) recommends be made of lead, though sand or a C clip should be used instead), a chair or a stirrup around the worker's foot. With a secure anchor against which to maintain tension, a square net is made by starting from one corner and adding a new mesh on each row until the desired size is reached, and then decreasing. The individual meshes are formed on a gauge, which helps ensure a uniform size, and are created by knotting to a loop in the previous round: square mesh, diagonal mesh, circular, free form. By using a very fine thread and different sizes of gauge one can create a beautiful and delicate lacework.
Technique:
The knotted lace is then stretched on a frame and embroidery stitches are added using a long blunt needle and a thread. Patterns are designed on a grid with a mark for the meshes to be filled with the thread. A path (or direction) is traced on this pattern and this path is then followed with the needle on the ground lace. When a particular group of stitches is used, the technique takes a name: filet Guipure, filet Richelieu, filet Soutache, linen stitch (point de toile), darning stitch (point de reprise); and then, when a region recognizes it, it may become French filet, filet di Bosa, filet Italien, filet de Gruyère, Russian filet Guipure, etc.
Technique:
Many designs involve weaving the main design in linen stitch; indeed, some designs consist entirely of linen stitch. This creates solid and open areas on the piece. A geometrical design or a sampler can use several different stitches, whereas a figural design will use very few stitches or only the linen stitch.
Filet lace is often seen in a single color of thread, usually white or ecru, but countries all over the world have used colored thread, precious metal threads, wool, feathers, etc. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**N-Terminal domain antiandrogen**
N-Terminal domain antiandrogen:
N-Terminal domain antiandrogens are a novel type of antiandrogen that bind to the N-terminal domain of the androgen receptor (AR) instead of the ligand-binding domain (where all currently-available antiandrogens bind) and disrupt interactions between the AR and its coregulatory binding partners, thereby blocking AR-mediated gene transcription. They are being investigated for the treatment of prostate cancer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Wilson operation**
Wilson operation:
In topological graph theory, the Wilson operations are a group of six transformations on graph embeddings. They are generated by two involutions on embeddings, surface duality and Petrie duality, and have the group structure of the symmetric group on three elements. They are named for Stephen E. Wilson, who published them for regular maps in 1979; they were extended to all cellular graph embeddings (embeddings all of whose faces are topological disks) by Lins (1982).The operations are: identity, duality, Petrie duality, Petrie dual of dual, dual of Petrie dual, and dual of Petrie dual of dual or equivalently Petrie dual of dual of Petrie dual. Together they constitute the group S3.
Wilson operation:
These operations are characterized algebraically as the only outer automorphisms of certain group-theoretic representations of embedded graphs.
Wilson operation:
Via their action on dessins d'enfants, they can be used to study the absolute Galois group of the rational numbers. One can also define corresponding operations on the edges of an embedded graph, the partial dual and partial Petrie dual, such that performing the same operation on all edges simultaneously is equivalent to taking the surface dual or Petrie dual. These operations generate a larger group, the ribbon group, acting on the embedded graphs. As an abstract group, it is isomorphic to S3^m, the m-fold product of copies of the three-element symmetric group. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
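The six operations can be pictured as the permutations of the three structural roles of an embedded graph (vertices, faces and Petrie polygons), with surface duality swapping vertices and faces and Petrie duality swapping faces and Petrie polygons. The sketch below is an illustration of the group structure only, not of the operations on an actual embedding; it closes the two involutions under composition and recovers exactly six elements.

```python
def compose(f, g):
    """Composition of role permutations: apply g, then f."""
    return {x: f[g[x]] for x in "VFP"}

identity = {x: x for x in "VFP"}
dual = {"V": "F", "F": "V", "P": "P"}      # surface duality
petrie = {"V": "V", "F": "P", "P": "F"}    # Petrie duality

# Close the set generated by the two involutions under composition.
ops = {tuple(sorted(identity.items()))}
frontier = [identity]
while frontier:
    h = frontier.pop()
    for g in (dual, petrie):
        new = compose(g, h)
        key = tuple(sorted(new.items()))
        if key not in ops:
            ops.add(key)
            frontier.append(new)

print(len(ops))   # 6, i.e. the symmetric group S3 on the three roles
```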
**Word art**
Word art:
Word art or text art is a form of art that includes text, forming words or phrases, as its main component; it is a combination of language and visual imagery.
Overview:
There are two main types of word art: One uses words or phrases because of their ideological meaning, their status as an icon, or their use in well-known advertising slogans; in this type, the content is of paramount importance, and the approach is seen in some of the work of Barbara Kruger and On Kawara, and in Jenny Holzer's projection artwork "For the City" (2005) in Manhattan.
Overview:
In the other kind of word art, as exemplified by the word paintings of Christopher Wool, text forms the actual artistic component of the work. The style has been used since the 1950s by artists classified as postmodern, partly as a reaction to abstract art of the time. Word art has been used in painting, sculpture, lithography, screen-printing and projection mapping, and applied to T-shirts and other practical items. Artists often use words from sources such as advertising, political slogans and graphic design, and use them for various effects from serious to comical.
Artists:
Other artists whose work is known for using text include Jasper Johns, Robert Indiana, Shepard Fairey, Mel Bochner, Kay Rosen, Lawrence Weiner, Ed Ruscha and the collective Guerrilla Girls, whose work conveys political messages in the tradition of protest art. Australian artists include Abdul Abdullah, Kate Just, Anastasia Klose, Sue Kneebone, and Vernon Ah Kee. Hong Kong artist Tsang Kin-Wah's work, which includes video installations, incorporates word art to express emotions and ideas, for example in Untitled-Hong Kong (2003-2004), which mixes bad language with pretty floral patterns based on William Morris designs.
Exhibitions:
A 2018 exhibition held simultaneously at Subliminal Projects (which was co-founded by Fairey) in Los Angeles and Faction Art Projects in New York featured the word art of Holzer, Ruscha, Guerrilla Girls and Betty Tompkins as well as younger artists like Ramsey Dau and Scott Albrecht. Also in 2018, an exhibition called Word in the Hugo Mitchell Gallery in Adelaide, South Australia, featured the work of Just, Abdullah, Klose, Kneebone, Alice Lang, Richard Lewer, Sera Waters, and many others. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Homo faber**
Homo faber:
Homo faber (Latin for 'Man the Maker') is the concept that human beings are able to control their fate and their environment as a result of the use of tools.
Original phrase:
In Latin literature, Appius Claudius Caecus uses this term in his Sententiæ, referring to the ability of man to control his destiny and what surrounds him: Homo faber suae quisque fortunae ("Every man is the artifex of his destiny").
In older anthropological discussions, Homo faber, as the "working man", is confronted with Homo ludens, the "playing man", who is concerned with amusements, humor, and leisure. It is also used in George Kubler's book, The Shape of Time as a reference to individuals who create works of art.
Modern usage:
The classic homo faber suae quisque fortunae was "rediscovered" by humanists in the 14th century and was central in the Italian Renaissance.
In the 20th century, Max Scheler and Hannah Arendt made the philosophical concept central again.
Henri Bergson also referred to the concept in Creative Evolution (1907), defining intelligence, in its original sense, as the "faculty to create artificial objects, in particular tools to make tools, and to indefinitely variate its makings." Homo Faber is the title of an influential novel by the Swiss author Max Frisch, published in 1957. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Barcode of Life Data System**
Barcode of Life Data System:
The Barcode of Life Data System (commonly known as BOLD or BOLDSystems) is a web platform specifically devoted to DNA barcoding. It is a cloud-based data storage and analysis platform developed at the Centre for Biodiversity Genomics in Canada. It consists of four main modules: a data portal, an educational portal, a registry of BINs (putative species), and a data collection and analysis workbench which provides an online platform for analyzing DNA sequences. Since its launch in 2005, BOLD has been extended to provide a range of functionality including data organization, validation, visualization and publication. The most recent version of the system, version 4, launched in 2017, brings a set of improvements supporting data collection and analysis but also includes novel functionality improving data dissemination, citation, and annotation. As of November 16, 2020, BOLD contained barcode sequences for 318,105 formally described species covering animals, plants, fungi and protists (with ~8.9 million specimens). BOLD is freely available to any researcher with interests in DNA barcoding. By providing specialized services, it aids in the publication of records that meet the standards needed to gain BARCODE designation in the international nucleotide sequence databases. Because of its web-based delivery and flexible data security model, it is also well positioned to support projects that involve broad research alliances. Data releases on BOLD largely originated from the BARCODE 500K project executed by the International Barcode of Life (iBOL) Consortium from 2010 to 2015, which aimed to acquire DNA barcode records for 5 million specimens representing 500,000 species. Specimen collection, sequence assignment and information sorting were contributed by a large number of scientists, collaborators and facilities from nations around the world. The accumulation of data increases the accuracy of DNA barcode identification and furthers the goal of barcoding all life. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Norrin**
Norrin:
Norrin, also known as Norrie disease protein or X-linked exudative vitreoretinopathy 2 protein (EVR2), is a protein that in humans is encoded by the NDP gene. Mutations in the NDP gene are associated with Norrie disease.
Function:
Signaling induced by the protein Norrin regulates vascular development of vertebrate retina and controls important blood vessels in the ear. Norrin binds with high affinity to Frizzled 4, and Frizzled 4 knockout mice exhibit abnormal vascular development of the retina.
Clinical significance:
NDP is the genetic locus identified as harboring mutations that result in Norrie disease. Norrie disease is a rare genetic disorder characterized by bilateral congenital blindness that is caused by a vascularized mass behind each lens due to a maldeveloped retina (pseudoglioma). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Reverse Polish notation**
Reverse Polish notation:
Reverse Polish notation (RPN), also known as reverse Łukasiewicz notation, Polish postfix notation or simply postfix notation, is a mathematical notation in which operators follow their operands, in contrast to Polish notation (PN), in which operators precede their operands. It does not need any parentheses as long as each operator has a fixed number of operands. The description "Polish" refers to the nationality of logician Jan Łukasiewicz, who invented Polish notation in 1924. The first computer to use postfix notation, though it long remained essentially unknown outside of Germany, was Konrad Zuse's Z3 in 1941, as well as his Z4 in 1945. The reverse Polish scheme was again proposed in 1954 by Arthur Burks, Don Warren, and Jesse Wright and was independently reinvented by Friedrich L. Bauer and Edsger W. Dijkstra in the early 1960s to reduce computer memory access and use the stack to evaluate expressions. The algorithms and notation for this scheme were extended by the Australian philosopher and computer scientist Charles L. Hamblin in the mid-1950s. During the 1970s and 1980s, Hewlett-Packard used RPN in all of their desktop and hand-held calculators, and has continued to use it in some models into the 2020s. In computer science, reverse Polish notation is used in stack-oriented programming languages such as Forth, STOIC, PostScript, RPL, and Joy.
Explanation:
In reverse Polish notation, the operators follow their operands. For example, to add 3 and 4 together, the expression is 3 4 + rather than 3 + 4. The expression 3 − 4 + 5 in conventional notation is 3 4 − 5 + in reverse Polish notation: 4 is first subtracted from 3, then 5 is added to it.
Explanation:
The concept of a stack, a last-in/first-out construct, is integral to the left-to-right evaluation of RPN. In the example 3 4 -, first the 3 is put onto the stack, then the 4; the 4 is now on top and the 3 below it. The subtraction operator removes the top two items from the stack, performs 3 - 4, and puts the result of -1 onto the stack.
Explanation:
The common terminology is that added items are pushed on the stack and removed items are popped.
The advantage of reverse Polish notation is that it removes the need for order of operations and parentheses that are required by infix notation and can be evaluated linearly, left-to-right. For example, the infix expression (3 × 4) + (5 × 6) becomes 3 4 × 5 6 × + in reverse Polish notation.
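A stack-based evaluator for reverse Polish expressions takes only a few lines. The following sketch keeps the operator set and tokenization deliberately minimal and evaluates the examples given above; it is an illustration, not a complete calculator.

```python
def eval_rpn(tokens):
    """Evaluate a reverse Polish expression given as a list of tokens."""
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # top of the stack is the second operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("3 4 - 5 +".split()))      # 4.0, i.e. (3 - 4) + 5
print(eval_rpn("3 4 * 5 6 * +".split()))  # 42.0, i.e. (3 * 4) + (5 * 6)
```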
Practical implications:
In comparison testing of reverse Polish notation with algebraic notation, reverse Polish has been found to lead to faster calculations, for two reasons. The first reason is that reverse Polish calculators do not need expressions to be parenthesized, so fewer operations need to be entered to perform typical calculations. Additionally, users of reverse Polish calculators made fewer mistakes than users of other types of calculators. Later research clarified that the increased speed from reverse Polish notation may be attributed to the smaller number of keystrokes needed to enter this notation, rather than to a smaller cognitive load on its users. However, anecdotal evidence suggests that reverse Polish notation is more difficult for users who previously learned algebraic notation.
Converting from infix notation:
Edsger W. Dijkstra invented the shunting-yard algorithm to convert infix expressions to postfix expressions (reverse Polish notation), so named because its operation resembles that of a railroad shunting yard.
There are other ways of producing postfix expressions from infix expressions. Most operator-precedence parsers can be modified to produce postfix expressions; in particular, once an abstract syntax tree has been constructed, the corresponding postfix expression is given by a simple post-order traversal of that tree.
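A minimal sketch of the shunting-yard conversion follows; it handles only the four left-associative arithmetic operators and parentheses, and assumes the input has already been split into tokens.

```python
def to_postfix(tokens):
    """Convert an infix token list to postfix (RPN) via shunting-yard."""
    prec = {"+": 1, "-": 1, "*": 2, "/": 2}
    output, stack = [], []
    for tok in tokens:
        if tok in prec:
            # Pop operators of greater or equal precedence (left-assoc).
            while stack and stack[-1] in prec and prec[stack[-1]] >= prec[tok]:
                output.append(stack.pop())
            stack.append(tok)
        elif tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack and stack[-1] != "(":
                output.append(stack.pop())
            stack.pop()                      # discard the "("
        else:                                # operand
            output.append(tok)
    while stack:
        output.append(stack.pop())
    return output

print(" ".join(to_postfix("( 3 * 4 ) + ( 5 * 6 )".split())))  # 3 4 * 5 6 * +
```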
Implementations:
History The first computer implementing a form of reverse Polish notation (but without the name) was Konrad Zuse's Z3, which he started to construct in 1938 and demonstrated publicly on 12 May 1941. In dialog mode, it allowed operators to enter two operands followed by the desired operation. It was destroyed on 21 December 1943 in a bombing raid. With Zuse's help a first replica was built in 1961. The 1945 Z4 also added a stack. Other early computers to implement architectures enabling reverse Polish notation were the English Electric Company's KDF9 machine, which was announced in 1960 and commercially available in 1963, and the Burroughs B5000, announced in 1961 and also delivered in 1963: presumably, the KDF9 designers drew ideas from Hamblin's GEORGE (General Order Generator), an autocode programming system written for a DEUCE computer installed at the University of Sydney, Australia, in 1957. One of the designers of the B5000, Robert S. Barton, later wrote that he developed reverse Polish notation independently of Hamblin sometime in 1958, after reading a 1954 textbook on symbolic logic by Irving Copi, where he found a reference to Polish notation, which made him read the works of Jan Łukasiewicz as well, and before he was aware of Hamblin's work.
Implementations:
Friden introduced reverse Polish notation to the desktop calculator market with the EC-130, designed by Robert "Bob" Appleby Ragen, supporting a four-level stack in June 1963. The successor EC-132 added a square root function in April 1965. Around 1966, the Monroe Epic calculator supported an unnamed input scheme resembling RPN as well.
Implementations:
Hewlett-Packard Hewlett-Packard engineers designed the 9100A Desktop Calculator in 1968 with reverse Polish notation with only three stack levels with working registers X ("keyboard"), Y ("accumulate") and visible storage register Z ("temporary"), a reverse Polish notation variant later referred to as three-level RPN. This calculator popularized reverse Polish notation among the scientific and engineering communities. The HP-35, the world's first handheld scientific calculator, introduced the classical four-level RPN with its specific ruleset of the so-called operational (memory) stack (later also called automatic memory stack) in 1972. In this scheme, the Enter ↑ key duplicates values into Y under certain conditions, and the top register gets duplicated on drops in order to ease some calculations and to save keystrokes. HP used reverse Polish notation on every handheld calculator it sold, whether scientific, financial, or programmable, until it introduced the HP-10 adding machine calculator in 1977. By this time, HP was the leading manufacturer of calculators for professionals, including engineers and accountants.
Implementations:
Later calculators with LCD displays in the early 1980s, such as the HP-10C, HP-11C, HP-15C, HP-16C, and the financial HP-12C calculator also used reverse Polish notation. In 1988, Hewlett-Packard introduced a business calculator, the HP-19B, without reverse Polish notation, but its 1990 successor, the HP-19BII, gave users the option of using algebraic or reverse Polish notation again.
Implementations:
Around 1987, HP introduced RPL, an object-oriented successor to reverse Polish notation. It deviates from classical reverse Polish notation by using a stack only limited by the amount of available memory (instead of three or four fixed levels) and which could hold all kinds of data objects (including symbols, strings, lists, matrices, graphics, programs, etc.) instead of just numbers. It also changed the behaviour of the stack to no longer duplicate the top register on drops (since in an unlimited stack there is no longer a top register) and the behaviour of the Enter ↑ key so that it no longer duplicated values into Y, which had shown to sometimes cause confusion among users not familiar with the specific properties of the automatic memory stack. From 1990 to 2003, HP manufactured the HP-48 series of graphing RPL calculators, and in 2006 introduced the HP 50g.
Implementations:
As of 2011, Hewlett-Packard was offering the calculator models 12C, 12C Platinum, 17bII+, 20b, 30b, 33s, 35s, 48gII (RPL) and 50g (RPL) which support reverse Polish notation. While calculators emulating classical models continue to support classical reverse Polish notation, new reverse Polish notation models feature a variant of reverse Polish notation, where the Enter ↑ key behaves as in RPL. This latter variant is sometimes known as entry RPN. In 2013, the HP Prime introduced a 128-level form of entry RPN called advanced RPN. In late 2017, the list of active models supporting reverse Polish notation included only the 12C, 12C Platinum, 17bii+, 35s and Prime. On 1 November 2021, Moravia Consulting spol. s r.o. (for all markets but the Americas) and Royal Consumer Information Products, Inc. (for the Americas) became the licensees of HP Development Company, L.P. to continue the development, production, distribution, marketing and support of HP-branded calculators. By July 2023, only the 12C, 12C Platinum, the freshly released HP 15C Collector's Edition, and the Prime remain active models supporting RPN, with a potentially new version of the 35s vaguely announced.
Implementations:
WP 31S and WP 34S The community-developed calculators WP 31S and WP 34S, which are based on the HP 20b/HP 30b hardware platform, support Hewlett-Packard-style classical reverse Polish notation with either a four- or an eight-level stack. A seven-level stack had been implemented in the MITS 7400C scientific desktop calculator in 1972 and an eight-level stack was already suggested by John A. Ball in 1978.
Implementations:
Sinclair Radionics In Britain, Clive Sinclair's Sinclair Scientific and Scientific Programmable models used reverse Polish notation.
Implementations:
Commodore In 1974, Commodore produced the Minuteman *6 (MM6) without an enter key and the Minuteman *6X (MM6X) with an enter key, both implementing a form of two-level RPN. The SR4921 RPN came with a variant of four-level RPN with stack levels named X, Y, Z, and W (rather than T) and an Ent key (for "entry"). In contrast to Hewlett-Packard's reverse Polish notation implementation, W filled with 0 instead of its contents being duplicated on stack drops.
Implementations:
Prinztronic Prinz and Prinztronic were own-brand trade names of the British Dixons photographic and electronic goods stores retail chain, later rebranded as Currys Digital stores, and became part of DSG International. A variety of calculator models was sold in the 1970s under the Prinztronic brand, all made for them by other companies.
Among these was the PROGRAM Programmable Scientific Calculator which featured reverse Polish notation.
Heathkit The Aircraft Navigation Computer Heathkit OC-1401/OCW-1401 used five-level RPN in 1978.
Soviet Union Soviet programmable calculators (MK-52, MK-61, B3-34 and earlier B3-21 models) used reverse Polish notation for both automatic mode and programming. Modern Russian calculators MK-161 and MK-152, designed and manufactured in Novosibirsk since 2007 and offered by Semico, are backwards compatible with them. Their extended architecture is also based on reverse Polish notation.
Implementations:
Other Existing implementations using reverse Polish notation include:
Stack-oriented programming languages such as Forth, STOIC, Factor, the PostScript page description language, BibTeX, Befunge, Joy and IPTSCRAE
Lotus 1-2-3 and Lotus Symphony formulas
RPL (aka Reverse Polish Language), a programming language for the Commodore PET around 1979/1981
RPL (aka Reverse Polish Lisp), a programming language for Hewlett-Packard calculators between 1984 and 2015
RPNL (Reverse Polish Notation Language)
Hardware calculators: some Hewlett-Packard science/engineering and business/finance calculators, Semico calculators, SwissMicros calculators, and some APF calculators as well
Software calculators: the Mac OS X Calculator; several Apple iPhone applications, e.g. "reverse polish notation calculator"; several Android applications, e.g. "RealCalc"; several Windows 10 Mobile applications, e.g. "RPN9"; the Unix system calculator program dc; the Emacs lisp library package calc; the Xorg calculator (xcalc); ARPCalc, a powerful scientific/engineering RPN calculator for Windows, Linux and Android that also has a web-browser based version; grpn, a scientific/engineering calculator using the GIMP Toolkit (GTK+); F-Correlatives in MultiValue dictionary items; RRDtool, a widely used tabulating and graphing software; grdmath, a program for algebraic operations on NetCDF grids, part of the Generic Mapping Tools (GMT) suite; galculator, a GTK desktop calculator; Mouseless Stack-Calculator, a scientific/engineering calculator including complex numbers; rpCalc, a simple reverse Polish notation calculator written in Python for Linux and MS Windows and published under the GNU GPLv2 license; orpie, an RPN calculator for the terminal for real or complex numbers or matrices; Qalculate!, a powerful and versatile cross-platform desktop calculator
Class libraries: TRURL, a class library for the construction of RPN calculators in Object Pascal | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BREST (reactor)**
BREST (reactor):
The BREST reactor is a Russian concept for a lead-cooled fast reactor aiming at the standards of a generation IV reactor. Two designs are planned, the BREST-300 (300 MWe) and the BREST-1200 (1200 MWe). The main characteristics of the BREST reactor are passive safety and a closed fuel cycle. The reactor uses nitride uranium-plutonium fuel, is a breeder reactor and can burn long-lived radioactive waste. Lead is chosen as the coolant because it is high-boiling, radiation-resistant, only weakly activated, and can be used at atmospheric pressure.
BREST-300:
The construction of the BREST-300-OD in Seversk (near Tomsk) was approved in August 2016. The preparatory construction work commenced in May 2020. Construction started on 8 June 2021. The first BREST-300 will be a demonstration unit, as forerunner to the BREST-1200.
BREST-300:
The combination of a heat-conducting nitride fuel and the properties of the lead coolant allow for complete plutonium breeding inside the core. This results in a small operating reactivity margin and enables power operation without prompt-neutron power excursions. In simpler terms, the uranium-238 in the core is converted to plutonium, which itself will undergo effective fission in the fast spectrum. This is in contrast to other fast reactor designs, where an outside blanket of uranium is required; placing too much uranium in the core section would lead to subcritical operation. A substantial number of neutrons is required for this breeding, which in turn implies that during reactor operation there are "just enough" neutrons to operate, with no excess present.
Technical data:
Thermal power: 700 MW
Electrical power: 300 MW
Average lead coolant temperature: 540 °C (1,004 °F) on entry, 340 °C (644 °F) on exit of the steam generator
Loop number: 4
Core height: 1,100 millimetres (43 in)
Fuel load: 20.6 short tons (18.7 t)
Fuel campaign: 5 years | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fillrate**
Fillrate:
In computer graphics, a video card's pixel fillrate refers to the number of pixels that can be rendered on the screen and written to video memory in one second. Pixel fillrates are given in megapixels per second or in gigapixels per second (in the case of newer cards), and are obtained by multiplying the number of render output units (ROPs) by the clock frequency of the graphics processing unit (GPU) of a video card. A similar concept, texture fillrate, refers to the number of texture map elements (texels) the GPU can map to pixels in one second. Texture fillrate is obtained by multiplying the number of texture mapping units (TMUs) by the clock frequency of the GPU. Texture fillrates are given in mega or gigatexels per second.
Fillrate:
However, there is no full agreement on how to calculate and report fillrates. Another possible method is to multiply the number of pixel pipelines by the GPU's clock frequency.
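Both formulas amount to a single multiplication. The figures below are hypothetical and serve only to show the arithmetic and the unit conversion to gigapixels and gigatexels per second.

```python
# Hypothetical GPU figures, for illustration only.
rops = 32          # render output units
tmus = 64          # texture mapping units
clock_mhz = 1500   # GPU core clock in MHz

pixel_fillrate_gpix = rops * clock_mhz / 1000      # gigapixels per second
texture_fillrate_gtex = tmus * clock_mhz / 1000    # gigatexels per second
print(pixel_fillrate_gpix, texture_fillrate_gtex)  # 48.0 GP/s, 96.0 GT/s
```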
Fillrate:
The results of these multiplications correspond to a theoretical number. The actual fillrate depends on many other factors. In the past, the fillrate has been used as an indicator of performance by video card manufacturers such as ATI and NVIDIA; however, the importance of the fillrate as a measurement of performance has declined as the bottleneck in graphics applications has shifted. For example, today, the number and speed of unified shader processing units has gained attention. Scene complexity can be increased by overdrawing, which happens when an object is drawn to the frame buffer, and another object (such as a wall) is then drawn on top of it, covering it up. The time spent drawing the first object is thus wasted because it isn't visible. When a sequence of scenes is extremely complex (many pixels have to be drawn for each scene), the frame rate for the sequence may drop. When designing graphics intensive applications, one can determine whether the application is fillrate-limited (or shader limited) by seeing if the frame rate increases dramatically when the application runs at a lower resolution or in a smaller window. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TMEM219**
TMEM219:
Transmembrane protein 219 also known as insulin-like growth factor-binding protein 3 receptor or IGFBP-3R is a protein that in humans is encoded by the TMEM219 gene. IGFBP-3R acts as a cell death receptor for IGFBP3. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nu Boötis**
Nu Boötis:
The Bayer designation Nu Boötis (ν Boo / ν Boötis) is shared by two star systems in the constellation Boötes: ν¹ Boötis and ν² Boötis. They are separated by 0.17° on the sky. They have almost identical visual magnitudes but contrasting colours: ν¹ is a yellow giant star, while ν² is a close binary with two white main-sequence stars.
Both stars were members of the asterism 七公 (Qī Gōng), Seven Excellencies, in the Heavenly Market enclosure. Ptolemy considered Nu Boötis to be shared with Hercules, and Bayer assigned it a designation in both constellations: Nu Boötis (ν Boo) and Psi Herculis (ψ Her). When the modern constellation boundaries were fixed in 1930, the latter designation dropped from use. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Grammar systems theory**
Grammar systems theory:
Grammar systems theory is a field of theoretical computer science that studies systems of finite collections of formal grammars generating a formal language. Each grammar works on a string, a so-called sequential form that represents an environment. Grammar systems can thus be used as a formalization of decentralized or distributed systems of agents in artificial intelligence. Let A be a simple reactive agent moving on a table and trying not to fall off it, with two reactions, t for turning and f for moving forward. The set of possible behaviors of A can then be described as the formal language L_A = {(f^m t^n f^r)^+ : 1 ≤ m ≤ k; 1 ≤ n ≤ ℓ; 1 ≤ r ≤ k}, where f can be performed at most k times and t at most ℓ times, considering the dimensions of the table.
Grammar systems theory:
Let G_A be a formal grammar which generates the language L_A. The behavior of A is then described by this grammar. Suppose that A has a subsumption architecture; each component of this architecture can then be represented as a formal grammar too, and the final behavior of the agent is then described by this system of grammars. The schema on the right describes such a system of grammars which shares a common string representing an environment. The shared sequential form is sequentially rewritten by each grammar, which can represent either a component or, more generally, an agent.
Grammar systems theory:
If grammars communicate together and work on a shared sequential form, it is called a Cooperating Distributed (DC) grammar system. Shared sequential form is a similar concept to the blackboard approach in AI, which is inspired by an idea of experts solving some problem together while they share their proposals and ideas on a shared blackboard.
Grammar systems theory:
Each grammar in a grammar system can also work on its own string and communicate with other grammars in the system by sending them its sequential form on request. Such a grammar system is then called a Parallel Communicating (PC) grammar system. PC and DC are inspired by distributed AI. If there is no communication between grammars, the system is close to the decentralized approaches in AI. These kinds of grammar systems are sometimes called colonies or Eco-Grammar systems, depending (among other things) on whether the environment is changing on its own (Eco-Grammar system) or not (colonies). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
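A toy sketch of a cooperating distributed system with two components rewriting one shared sentential form follows; the rules, the start symbol and the resulting behaviour string are invented purely for illustration.

```python
def apply_first(rules, form):
    """Apply the first applicable context-free rule A -> w, or return None."""
    for lhs, rhs in rules:
        if lhs in form:
            return form.replace(lhs, rhs, 1)
    return None

# Two components sharing one sentential form: the first emits forward
# moves 'f', the second a turn 't' (rules invented for this example).
component_1 = [("S", "ffT")]    # S -> ffT : move forward twice, hand over
component_2 = [("T", "tf")]     # T -> tf  : turn once, move forward, finish

form = "S"
changed = True
while changed:
    changed = False
    for rules in (component_1, component_2):
        rewritten = apply_first(rules, form)
        if rewritten is not None:
            form, changed = rewritten, True

print(form)   # "fftf", a behaviour string over the reactions f and t
```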
**Ilizarov apparatus**
Ilizarov apparatus:
In medicine, the Ilizarov apparatus is a type of external fixation apparatus used in orthopedic surgery to lengthen or to reshape the damaged bones of an arm or a leg; used as a limb-sparing technique for treating complex fractures and open bone fractures; and used to treat an infected non-union of bones, which cannot be surgically resolved. The Ilizarov apparatus corrects angular deformity in a leg, corrects differences in the lengths of the legs of the patient, and resolves osteopathic non-unions; further development of the Ilizarov apparatus led to the Taylor Spatial Frame. Dr. Gavriil Abramovich Ilizarov developed the Ilizarov apparatus as a limb-sparing surgical remedy for the treatment of the osteopathic non-unions of patients with unhealed broken limbs. Consequent to a patient lengthening, rather than shortening, the adjustable-rod frame of his external-fixation apparatus, Dr. Ilizarov observed the formation of a fibrocartilage callus at and around the site of the bone fracture, and so discovered the phenomenon of distraction osteogenesis, the regeneration of bone and soft tissues that culminates in the creation of new bone. In 1987, Dr. Victor Frankel introduced to U.S. medicine the Ilizarov apparatus and Dr. Ilizarov's surgical techniques for repairing the broken bones of damaged limbs. The mechanical functions of the Ilizarov apparatus derive from the mechanics of the shaft bow harness for a horse.
The apparatus:
The Ilizarov apparatus is a specialized external fixator of modular construction, composed of rings (stainless steel, titanium) that are transfixed to healthy bone with Kirschner wires and pins of heavy-gauge stainless steel, and immobilised in place with additional rings and threaded rods that are attached with and through adjustable nuts. The circular construction of the apparatus, the rods, and the controlled tautness of the Kirschner wires immobilise the damaged limb to allow healing. The mechanical functions of the Ilizarov apparatus are based upon the principles of tension (pulling force), wherein the controlled application of mechanical tension to the damaged limb immobilises the broken bones, and so facilitates the biological process of distraction osteogenesis (the regeneration of bone and soft tissue) in a reliable and reproducible manner. Moreover, external fixation with the apparatus allows the damaged limb to bear weight early in the medical treatment. Once emplaced onto the limb, the top rings of the apparatus transfer mechanical force to the bottom ring through the rods, and so by-pass the site of the fractured bone; thus the Ilizarov apparatus immobilizes the damaged limb and relieves mechanical stresses from the wound, which then allows the patient to move the entire limb. The middle rings stiffen the support rods and hold the bone fragments in place, whilst supporting the immobilised limb. In by-passing the site of the bone fracture, the top and bottom rings bear the critical load by transferring mechanical force from the area of healthy bone above the fracture to the area of healthy bone below the fracture.
Clinical application:
The Ilizarov surgical method of distraction osteogenesis (regeneration of bone and soft tissues) for repairing complex fractures of the bones of the limbs is the preferred treatment for cases featuring a high risk of bacterial infection; and for cases wherein the extent and severity of the fracture precludes using internal fixators to immobilise the damaged bone for proper repair. In 1968, Dr. Ilizarov successfully treated the non-union osteopathy of Valeriy Brumel, a Soviet athlete, who suffered a broken ankle and a broken shinbone (tibia) of the right leg, had undergone more than twenty failed bone-repair surgeries in three years, and yet his broken leg-bones had not healed and the leg was shorter than before the motorcycle accident in 1965. By way of distraction osteogenesis and an external-fixation apparatus, Dr. Ilizarov resolved Brumel's osteopathic non-union, by growing new leg bone, which extended the athlete's leg 3.5 cm (1.4 in) to its normal length. In 1980, Ilizarov successfully treated the osteopathic non-union of Carlo Mauri, a journalist and an explorer, who, ten years earlier, had broken the distal end of a tibia in an Alpine accident, yet his broken leg-bone had yet to heal. During an expedition in the Atlantic Ocean, Mauri's leg wound reopened; a concerned teammate, a Russian doctor, recommended that Mauri consult with Dr. Ilizarov for proper diagnosis, surgical repair, and treatment in the city of Kurgan, Russia. In 2013, consequent to a PTSD-induced fall that broke his left leg, the British war correspondent Ed Vulliamy underwent limb-sparing medical treatment that featured surgeries and an Ilizarov apparatus to repair and heal the severely fractured bones in his left leg.
Clinical application:
Clinical exampleThe photographs and radiographs illustrate the application and emplacement of an external fixator, an Ilizarov apparatus, to repair the open fracture of the lower left leg of a man. The photographs were taken four weeks after the patient fractured the shinbone (tibia) and the calfbone (fibula) of his left leg, and two weeks after the surgical emplacement of the Ilizarov apparatus to immobilise the leg and isolate the wound and fracture site to facilitate healing.
Bone work:
The Ilizarov apparatus corrects deformed bones by way of the process of distraction osteogenesis, which reproduces bone tissues. After an initial surgery, during which the bone to be repaired is fractured and the apparatus is attached to the patient's limb, and once the fracture has been immobilised, the bone tissues begin to grow and eventually bridge the fracture with new bone. In the course of the osteogenesis process, the bone grows and the physician extends the rods of the Ilizarov apparatus to increase the space between the rings at each end of the apparatus. As the rings are installed at and connected to the opposite ends of the fracture site, the adjustment, done four times a day, separates the healing fracture by approximately one millimetre per day; in due course, the millimetric adjustments lengthen the bone of the damaged limb. Upon completing the bone-lengthening phase of treatment, the Ilizarov apparatus remains in place for a period of osteopathic consolidation, the ossification of the regenerated bone tissues. Using crutches, the patient is able to bear weight on the damaged limb; once healed, the patient undergoes a second surgery to remove the Ilizarov apparatus from the repaired limb. The result of the Ilizarov surgical treatment is a limb that is much longer than before the medical treatment.
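Using the figures above (about one millimetre of distraction per day, applied in four adjustments), a rough timeline for a lengthening such as the 3.5 cm case mentioned earlier works out as in the sketch below; this is arithmetic illustration only, not clinical guidance.

```python
# Rough distraction-phase arithmetic based on the figures given above.
target_lengthening_mm = 35      # e.g. the 3.5 cm lengthening described earlier
rate_mm_per_day = 1.0           # approximate distraction rate
adjustments_per_day = 4

distraction_days = target_lengthening_mm / rate_mm_per_day
mm_per_adjustment = rate_mm_per_day / adjustments_per_day
print(distraction_days, mm_per_adjustment)   # 35.0 days, 0.25 mm per adjustment
```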
Bone work:
In the case of lengthening a leg bone, an additional surgery will lengthen the Achilles tendon to accommodate the longer length of the treated bone. The therapeutic advantage of the Ilizarov treatment is that the patient can be physically active whilst awaiting the bone to repair. The Ilizarov apparatus also is used to treat and resolve a structural defect in a long bone, by transporting a segment of bone whilst simultaneously lengthening and regenerating the bone to reduce the defect, and so produce a single bone. Installing the Ilizarov apparatus requires minimally invasive surgery, and is not free of medical complications, such as inflammation, muscle transfixion, and contracture of the affected joint. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Volatilisation**
Volatilisation:
Volatilization is the process whereby a dissolved sample is vaporised. In atomic spectroscopy this is usually a two-step process. The analyte is turned into small droplets in a nebuliser which are entrained in a gas flow which is in turn volatilised in a high temperature flame in the case of AAS or volatilised in a gas plasma torch in the case of ICP spectroscopy.
Herbicide volatilisation:
Herbicide volatilisation refers to evaporation or sublimation of a volatile herbicide. The effect of the gaseous chemical is lost at its intended place of application, and it may move downwind and affect other plants not intended to be affected, causing crop damage. Herbicides vary in their susceptibility to volatilisation. Prompt incorporation of the herbicide into the soil may reduce or prevent volatilisation. Wind, temperature, and humidity also affect the rate of volatilisation, with humidity reducing it. 2,4-D and dicamba are commonly used chemicals that are known to be subject to volatilisation, but there are many others. Application of herbicides later in the season to protect herbicide-resistant genetically modified plants increases the risk of volatilisation, as the temperature is higher and incorporation into the soil impractical. Herbicide applied as a powder or a mist can also drift in the wind, in solid form as dust or in liquid form as tiny drops. However, a transformation of known herbicides, such as glyphosate, dicamba or MCPA, into the form of herbicidal ionic liquids has proved to be a solution to this particular problem, since herbicidal ionic systems exhibit lower susceptibility to volatilisation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nifekalant**
Nifekalant:
Nifekalant (INN) is a class III antiarrhythmic agent approved in Japan for the treatment of arrhythmias and ventricular tachycardia. It has the brand name Shinbit. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**RMIT School of Applied Sciences**
RMIT School of Applied Sciences:
The RMIT School of Applied Sciences was an Australian tertiary education school within the College of Science, Engineering and Health of RMIT University. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pull to par**
Pull to par:
Pull to Par is the effect in which the price of a bond converges to par value as time passes. At maturity the price of a debt instrument in good standing should equal its par (or face) value. Another name for this effect is reduction of maturity.
It results from the difference between the market interest rate and the nominal yield on the bond.
The Pull to Par effect is one of two factors that influence the market value of the bond and its volatility (the second one is the level of market interest rates). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
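A small sketch of the effect, assuming a simple annual-coupon bond and a constant market yield (all figures hypothetical): as maturity approaches, the discounted price converges to the face value of 100.

```python
def bond_price(face, coupon_rate, market_rate, years_to_maturity):
    """Present value of an annual-coupon bond (simplified illustration)."""
    coupon = face * coupon_rate
    price = sum(coupon / (1 + market_rate) ** t
                for t in range(1, years_to_maturity + 1))
    price += face / (1 + market_rate) ** years_to_maturity
    return price

# A 5% coupon bond priced at a 7% market yield trades below par,
# but its price rises toward 100 as the time to maturity shrinks.
for years_left in (10, 5, 2, 1, 0):
    print(years_left, round(bond_price(100, 0.05, 0.07, years_left), 2))
```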
**EIF4G**
EIF4G:
Eukaryotic translation initiation factor 4 G (eIF4G) is a protein involved in eukaryotic translation initiation and is a component of the eIF4F cap-binding complex. Orthologs of eIF4G have been studied in multiple species, including humans, yeast, and wheat. However, eIF4G is exclusively found in domain Eukarya, and not in domains Bacteria or Archaea, which do not have capped mRNA. As such, eIF4G structure and function may vary between species, although the human EIF4G1 has been the focus of extensive studies. (Other human paralogs are EIF4G2 and EIF4G3.) Across species, eIF4G strongly associates with eIF4E, the protein that directly binds the mRNA cap. Together with the RNA helicase protein eIF4A, these form the eIF4F complex.
EIF4G:
Within the cell eIF4G is found primarily in the cytoplasm, usually bound to eIF4E; however, it is also found in the nucleus, where its function is unknown. It may have a role in nonsense-mediated decay.
History:
eIF4G stands for eukaryotic initiation factor 4 gamma (typically gamma is now replaced by G in the literature). It was initially isolated by fractionation, found present in fraction 4 gamma, and was involved in eukaryotic translation initiation.
Binding partners:
eIF4G has been found to associate with many other proteins besides those of the eIF4F complex, including MNK-1, CBP80, CBP20, PABP, and eIF3. eIF4G also directly binds mRNA and has multiple positively charged regions for this function. Several IRESs also bind eIF4G directly, as do BTE CITEs.
In translation initiation:
eIF4G is an important scaffold for the eIF4F complex and aids in recruiting the 40S ribosomal subunit to mRNA.
In translation initiation:
There are three mechanisms that the 40S ribosome can come to recognize the start codon: scanning, internal entry, and shunting. In scanning, the 40S ribosome slides along the RNA until it recognizes a start site (typically an AUG sequence in "good context"). In internal entry, the 40S ribosome does not start from the beginning (5' end) of the mRNA but instead starts from somewhere in the middle. In shunting, after the 40S ribosome starts sliding along the mRNA it "jumps" or skips large sections; the mechanism for this is still unclear. eIF4G is required for most types of initiation, except in special cases such as internal initiation at the HCV IRES or Cripavirus IRES.
In translation initiation:
eIF4G is an initiation factor involved in the assembly of the 43S and 48S translation initiation complexes. This particular initiation factor binds to PABPI (poly(A)-binding protein I), which in turn binds the messenger RNA's poly(A) tail, and to eIF3, which is bound to the incoming small ribosomal subunit (40S).
In disease:
eIF4G has been implicated in breast cancer. It appears in increased levels in certain types of breast cancer and increases production of mRNAs that contain IRESs; these mRNAs produce hypoxia- and stress-related proteins that encourage blood vessel invasion (which is important for tumorigenesis).
Role in aging:
Regulation of translation initiation by eIF4G is vital for protein synthesis in developing organisms, for example yeast and nematodes. Deletion of eIF4G is lethal in yeast. In the roundworm C. elegans, knockout of eIF4G leads to animals that cannot develop past the early larval stage (L2) of development. The critical role of eIF4G in development appears to be reversed in adulthood, when eIF4G dysregulation negatively impacts lifespan and increases susceptibility to certain aging-related diseases (see eIF4G in disease above). Inhibiting eIF4G during adulthood in C. elegans drastically extends lifespan, comparable to the lifespan increase exhibited during dietary restriction. In addition, inhibiting eIF4G reduces overall protein translation while preferentially translating mRNAs of genes important for responding to stress over those associated with growth and reproduction. Thus eIF4G appears to control differential mRNA translation during periods of growth and stress, which may ultimately contribute to age-related decline.
Importance in virology:
As previously mentioned, eIF4G is bound by certain IRESs, which were initially discovered in viruses. Some viral IRESs directly bind eIF4G and co-opt it to gain access to the ribosome. Some cellular mRNAs also contain IRESs (including the mRNA of eIF4G itself). Some viral proteases cleave off the part of eIF4G that contains the eIF4E binding region. This has the effect of preventing most cellular mRNAs from binding eIF4G; however, a few cellular mRNAs with IRESs are still translated under these conditions.
Importance in virology:
One example of an eIF4G binding site in a viral IRES is in the EMCV IRES (nucleotides 746–949).
**Cesbronite**
Cesbronite:
Cesbronite is a copper-tellurium oxysalt mineral with the chemical formula Cu3Te6+O4(OH)4 (IMA 17-C). It is colored green and its crystals are orthorhombic dipyramidal. Cesbronite is rated 3 on the Mohs Scale. It is named after Fabien Cesbron (born 1938), a French mineralogist.
Occurrence:
It was first found in the Bambollita ("La Oriental") mine in the Mexican state of Sonora. It also occurs in the Tombstone District of Cochise County, Arizona and the Tintic District of the East Tintic Mountains, Juab County, Utah. It is often associated with argentian gold, teineite, carlfriesite, xocomecatlite, utahite, leisingite, jensenite and hematite.
**Motorola Minitor**
Motorola Minitor:
The Motorola Minitor is a portable, analog, receive only, voice pager typically carried by fire, rescue, and EMS personnel (both volunteer and career) to alert of emergencies. The Minitor, slightly smaller than a pack of cigarettes, is carried on a person and usually left in selective call mode. When the unit is activated, the pager sounds a tone alert, followed by an announcement from a dispatcher alerting the user of a situation. After activation, the pager remains in monitor mode much like a scanner, and monitors transmissions on that channel until the unit is reset back into selective call mode either manually, or automatically after a set period of time, depending on programming.
Purpose and History:
In the times before modern radio communications, it was difficult for emergency services such as volunteer fire departments to alert their members to an emergency, since the members were not based at the station. The earliest methods of sounding an alarm would typically be by ringing a bell either at the fire station or the local church. As electricity became available, most fire departments used fire sirens or whistles to summon volunteers (many fire departments still use outdoor sirens and horns along with pagers to alert volunteers). Other methods included specialized phones placed inside the volunteer firefighter's home or business or by base radios or scanners. "Plectron" radio receivers were very popular, but were limited to 120VAC or 12VDC operation, limiting their use to a house/building or mounted in a vehicle. There was a great need and desire for a portable radio small enough to be worn by a person and only activated when needed. Thus, Motorola answered this call in the 1970s and released the very first Minitor pager.
Purpose and History:
There are six versions of Minitor pagers. The first was the original Minitor, followed by the Minitor II (1992), Minitor III (1999), Minitor IV, and the Minitor V, released in late 2005. The Minitor VI was released in early 2014. The Minitor III, IV, and V used the same basic design, while the original Minitor and Minitor II used their own rectangular proprietary case designs. Similar voice pagers released by Motorola were the Keynote and Director pagers. They were essentially stripped-down versions of the Minitor and never gained widespread use, though the Keynotes were much more common in Europe because they could decode 5/6-tone alert patterns in addition to the more popular two-tone sequential format used in the United States.
Purpose and History:
Although the Minitor is primarily used by emergency personnel, other agencies such as utilities and private contractors also use the pager. Unlike conventional alphanumeric pagers and cell phones, Minitors operate on an RF network that is generally restricted to a particular agency in a given geographical area. The Minitor is the most common voice pager used by emergency services in the United States. However, digital two-way pagers that can display alphanumeric characters overcome some of the limitations of voice-only pagers and are now starting to replace Minitor pagers in certain applications.
Activation:
Minitor pagers, depending on the model and application, can operate in the VHF Low Band, VHF High Band, and UHF frequency ranges. They are alerted by using two-tone sequential Selective calling, generally following the Motorola Quick Call II standard. In other words, the pager will activate when a particular series of audible tones are sent over the frequency (commonly referred to as a "page") the pager is set to. For example, if a Minitor is programmed on VHF frequency channel 155.295 MHz and set to alert for 879 Hz & 358.6 Hz, it will disregard any other tone sequences transmitted on that frequency, only alerting when the proper sequence has been received. The pager may be reset back into its selective call mode by pressing the reset button, or it can be programmed to reset back into selective call mode automatically after a predetermined amount of time, to conserve battery power.
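As a rough illustration of this selective-call matching, the sketch below compares a detected tone pair against the programmed pair; the tolerance, the timing-free model and the reuse of the example frequencies above are assumptions for illustration, not the Quick Call II specification.

```python
# Illustrative two-tone sequential matching (assumed tolerance; not a
# faithful Quick Call II implementation).
PROGRAMMED_PAIR = (879.0, 358.6)   # Hz, tone A then tone B (example values)
TOLERANCE = 0.02                   # accept tones within +/-2% (assumption)

def tone_matches(detected_hz: float, programmed_hz: float) -> bool:
    """True if a detected tone is within the allowed tolerance of the programmed tone."""
    return abs(detected_hz - programmed_hz) <= programmed_hz * TOLERANCE

def should_alert(detected_pair: tuple) -> bool:
    """Alert only when both tones match the programmed pair, in order."""
    return all(tone_matches(d, p) for d, p in zip(detected_pair, PROGRAMMED_PAIR))

print(should_alert((878.5, 359.0)))   # True: matches the programmed sequence
print(should_alert((912.0, 358.6)))   # False: first tone belongs to another unit
```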
Activation:
Older Minitor pagers (both the Minitor I and Minitor II series) have tone reeds or filters that are tuned to a specific audible tone frequency, and must physically be replaced if alert tones are changed. For two-tone sequential paging, there are two reeds, the first tone passes through the first reed, and the second tone passes through the second reed, thereby activating the pager. Beginning with the Minitor III series, these physical reeds or filters are no longer necessary, as the pagers now feature all solid-state electronics, and various tone sequences can be programmed via computer software.
Activation:
Newer Minitor pagers can scan two channels by selecting that function via a rotary knob on the pager; in this mode when using a Minitor III or IV the user will hear all traffic, even without the correct tones being sent. If the activation tones are transmitted in this monitor mode, the pager alerts as normal. Minitor Vs have the option to remain in monitor mode or in selective call mode when scanning two channels. Minitor IIIs and IVs only have the option to remain in monitor mode when scanning two channels.
Activation:
The range of the Minitor's operating distance depends on the strength ("wattage") of the paging transmitter. A repeater is often used to improve paging coverage, as it can be located for better range than the dispatch center where the page originates from. Weather conditions, low battery, and even atmospheric conditions can affect the Minitor's ability to receive transmissions. In fact, a remote transmitter hundreds, even thousands of miles away belonging to a separate agency, can activate a Minitor (and also block it) unknowingly if the atmospheric conditions let the signal propagate that far. This is commonly known as radio skip.
Activation:
The Minitor is a receive-only unit, not a transceiver, and thus cannot transmit.
Features:
Note: most of the features below refer to the Minitor III and later pagers; the original Minitor and Minitor II may lack some of the listed features. Newer-generation Minitor pagers can simultaneously scan up to two channels and have multiple activation tones. This can be very helpful if a user belongs to several emergency services, or if the emergency service has different alarms for different emergencies.
Features:
Alert tones - The default, and most common, alert is a continuous beeping ("beep-beep-beep-beep..."). Other alarms can include a steady high-pitched tone, and the newest Minitor Vs can even use musical tones for general non-emergency announcements.
VIBRA-Page - For silent alarm activation, most Minitor pagers can also vibrate without sounding an alarm tone. This is particularly useful in churches, schools, meetings, etc. where a loud noise would be disruptive. This feature is known as "VIBRA-Page".
Voice Record - Many Minitor pagers can also record (up to 8 minutes, depending on the model and options) of voice/transmission after the pager activates.
Controls - Physical controls (specifically on the Minitor III) include an "A,B,C,D" function knob, a power/volume knob, reset button, voice playback button, external speaker jack, and an amber and red LED. Depending on the model, the selection on the function knobs may do different things.
Features:
Control examples - For example, function A may be selective call mode, while function B is the vibrate function. Function C monitors channel 2. D is the mode that is similar to a scanner. When the pager is turned on, eight short beeps are heard along with flashing of both LEDs. Holding down the reset button in selective call mode will monitor the channel for any transmission on that channel at that time, or pure static as the squelch is bypassed.
Features:
Field Programmable - Some models have field-programmable options such as Non-Priority Scan, Alert Duration, Priority Alert, On/Off Duty, Reset Options, and Push-To-Listen. Many Minitor pagers can be connected to a computer with a special cable so that these options can be changed.
Durability - Unlike older models, the Minitor V is "rainproof" as it meets "Military Standard 810, Procedure 1 for driving rain".
Belt Clip - A spring-loaded clip is attached to the back of each Minitor to allow the user to clip the pager onto a pocket or belt. Carrying cases and covers are also made to protect the pager.
Charging - Minitor pagers come standard with a charging stand and two rechargeable batteries.
Features:
Amplified base unit - An optional "Charger/Amplifier" base can be bought. Bigger than the standard charging stand, the "Charger/Amplifier" base not only charges the pager, but has an external antenna for increased reception, an amplified audio out jack to drive a stand-alone speaker, and some models even incorporate a relay to activate external devices along with the pager. Some uses for this relay include: Turning on lights in a building such as a fire station, activating an external audio/visual alarm, etc.
Features:
Accessories - Official Motorola accessories for the Minitor pagers include (including some listed above): Desktop Battery Charger, Desktop Battery Charger/Amplifier with Antenna and Relay, Vehicular Charger-Amp with Relay, Earpieces, Extra Loud Lapel Speaker, and Nylon Carrying Case.
Disadvantages:
The audible alarm on the Minitor lasts only for the duration of the second activation tone. If reception is poor, the pager may only sound a quick beep and the user may not be alerted properly. This can be changed by editing the codeplug's "Alert Duration" from STD to Fixed; the user can then set the alert duration longer than the second tone. The user must be cautious, however, as setting the alert tone duration too high may cover some of the voice announcement. Also, some units may have the volume knob set to control the sound output of the audible alert as well; the user may have the volume turned down to an undetectable level, either by accident or by carelessness, and thus miss the page. A factory option for "Fixed Alert" (the only option on the earlier Minitor I), however, lets the alert tone override the volume setting and sound at maximum volume regardless of the volume knob's position. It is possible to program the pager to always vibrate when an alert is received, giving the possibility of either a silent (vibrating) alert or combined audible and vibrating alerts (the Minitor I and II do not have vibrating capabilities as standard).
Disadvantages:
The vibrating motor in the newer (IV and V) Minitor pagers is quite strong in order to be felt in varying conditions, such as when performing heavy work. It is not uncommon for the vibrating motor in a pager, placed in a charger overnight and left in vibrate mode, to "walk" the pager and charger off of a table or nightstand.
Disadvantages:
Minitor pagers are powered by a battery which will eventually run down if not charged (a flashing red LED and an audible alarm warn of low battery power). As the Minitor is portable, its electronics are not as sensitive as those of desktop or base radios, and it is usually less able to pick up weak or distant signals.
**Multiple Access with Collision Avoidance for Wireless**
Multiple Access with Collision Avoidance for Wireless:
Multiple Access with Collision Avoidance for Wireless (MACAW) is a slotted medium access control (MAC) protocol widely used in ad hoc networks. Furthermore, it is the foundation of many other MAC protocols used in wireless sensor networks (WSN). The IEEE 802.11 RTS/CTS mechanism is adopted from this protocol. It uses an RTS-CTS-DS-DATA-ACK frame sequence for transferring data, sometimes preceded by an RTS-RRTS frame sequence, in order to provide a solution to the hidden node problem. Although protocols based on MACAW, such as S-MAC, use carrier sense in addition to the RTS/CTS mechanism, MACAW does not make use of carrier sense.
Principles of operation:
Assume that node A has data to transfer to node B. Node A initiates the process by sending a Request to Send frame (RTS) to node B. The destination node (node B) replies with a Clear To Send frame (CTS). After receiving CTS, node A sends data. After successful reception, node B replies with an acknowledgement frame (ACK). If node A has to send more than one data fragment, it has to wait a random time after each successful data transfer and compete with adjacent nodes for the medium using the RTS/CTS mechanism.

Any node overhearing an RTS frame (for example node F or node E in the illustration) refrains from sending anything until a CTS is received, or after waiting a certain time. If the captured RTS is not followed by a CTS, the maximum waiting time is the RTS propagation time and the destination node turnaround time.

Any node (node C and node E) overhearing a CTS frame refrains from sending anything for the time until the data frame and ACK should have been received (solving the hidden terminal problem), plus a random time. Both the RTS and CTS frames contain information about the length of the DATA frame. Hence a node uses that information to estimate the time for the data transmission completion.

Before sending a long DATA frame, node A sends a short Data-Sending frame (DS), which provides information about the length of the DATA frame. Every station that overhears this frame knows that the RTS/CTS exchange was successful. An overhearing station (node F), which might have received RTS and DS but not CTS, defers its transmissions until after the ACK frame should have been received plus a random time.

To sum up, a successful data transfer (A to B) consists of the following sequence of frames: 1) "Request To Send" frame (RTS) from A to B, 2) "Clear To Send" frame (CTS) from B to A, 3) "Data Sending" frame (DS) from A to B, 4) DATA fragment frame from A to B, and 5) acknowledgement frame (ACK) from B to A.

MACAW is a non-persistent slotted protocol, meaning that after the medium has been busy, for example after a CTS message, the station waits a random time after the start of a time slot before sending an RTS. This results in fair access to the medium. If, for example, nodes A, B and C have data fragments to send after a busy period, they will have the same chance to access the medium since they are in transmission range of each other.
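The sketch below (an illustration only; node names are taken from the description above, and real frame durations and timing are ignored) traces the basic exchange and shows how a bystander node decides why it must defer.

```python
# Minimal sketch of the MACAW exchange and deferral rules (illustrative only).

def exchange(sender: str, receiver: str):
    """Return the frame sequence of one successful MACAW data transfer."""
    return [
        ("RTS",  sender, receiver),
        ("CTS",  receiver, sender),
        ("DS",   sender, receiver),
        ("DATA", sender, receiver),
        ("ACK",  receiver, sender),
    ]

def defer_reason(overheard: set) -> str:
    """Decide why a bystander node must stay silent, based on overheard frames."""
    if "CTS" in overheard:
        return "defer until DATA and ACK should have finished (hidden-terminal rule)"
    if "DS" in overheard:
        return "RTS/CTS succeeded elsewhere: defer until after the expected ACK"
    if "RTS" in overheard:
        return "defer until a CTS is heard or a short timeout expires"
    return "free to contend for the medium"

for frame in exchange("A", "B"):
    print(frame)
print("node C overhears CTS ->", defer_reason({"CTS"}))
print("node F overhears RTS, DS ->", defer_reason({"RTS", "DS"}))
```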
Principles of operation:
RRTS: Node D is unaware of the ongoing data transfer between node A and node B. Node D has data to send to node C, which is in the transmission range of node B. Node D initiates the process by sending an RTS frame to node C. Node C has already deferred its transmission until the completion of the current data transfer between node A and node B (to avoid co-channel interference at node B). Hence, even though it receives the RTS from node D, it does not reply with a CTS. Node D assumes that its RTS was unsuccessful because of a collision and hence proceeds to back off (using an exponential backoff algorithm).
Principles of operation:
If node A has multiple data fragments to send, the only instant when node D can successfully initiate a data transfer is during the small gap between node A completing one data transfer and node B sending its next CTS (in response to node A's next data transfer request). However, because of node D's backoff period, the probability of capturing the medium during this small interval is not high. To increase per-node fairness, MACAW introduces a new control message called "Request for Request to Send" (RRTS).
Principles of operation:
Now, when node C, which cannot reply earlier due to ongoing transmission between node A and node B, sends an RRTS message to node D during next contention period, the recipient of the RRTS (node D) immediately responds with an RTS and the normal message exchange is commenced. Other nodes overhearing an RRTS defer for two time slots, long enough to hear if a successful RTS–CTS exchange occurs.
Principles of operation:
To summarize, a transfer may in this case consist of the following sequence of frames between node D and node C: 1) "Request To Send" frame (RTS) from D to C, 2) "Request for Request to Send" frame (RRTS) from C to D (after a short delay), 3) "Request To Send" frame (RTS) from D to C, 4) "Clear To Send" frame (CTS) from C to D, 5) "Data Sending" frame (DS) from D to C, 6) DATA fragment frame from D to C, and 7) acknowledgement frame (ACK) from C to D.
Ongoing research:
Additional back-off algorithms have been developed and researched to improve performance. The basic principle is either to use sequencing techniques, where each node in the wireless network maintains a counter that limits the number of attempts to at most the sequence number, or to use wireless channel states to control the access probabilities, so that a node with a good channel state has a higher probability of contention success. This reduces the number of collisions.
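For illustration, a generic counter-limited exponential back-off loop is sketched below; this is an assumed, simplified scheme for exposition, not the specific algorithms studied in the literature or MACAW's own back-off rules, and the slot time and retry limit are invented values.

```python
import random

# Generic counter-limited exponential back-off (illustrative; not MACAW's
# exact scheme). Constants below are assumptions for the example.
SLOT_TIME_MS = 1.0
MAX_ATTEMPTS = 5

def backoff_delay(attempt: int) -> float:
    """Pick a random delay from a contention window that doubles per attempt."""
    window = 2 ** attempt            # slots available on this attempt
    return random.randrange(window) * SLOT_TIME_MS

def send_with_backoff(try_send) -> bool:
    """Retry a transmission, waiting a growing random delay after each failure."""
    for attempt in range(MAX_ATTEMPTS):
        if try_send():
            return True              # RTS answered by CTS: success
        delay = backoff_delay(attempt)
        print(f"attempt {attempt} failed, backing off {delay:.1f} ms")
    return False                     # give up once the counter limit is reached

# Example: a channel on which an attempt succeeds 30% of the time.
print(send_with_backoff(lambda: random.random() < 0.3))
```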
Ongoing research:
Unsolved problems: MACAW does not generally solve the exposed terminal problem. Assume that node G has data to send to node F in our example. Node G has no information about the ongoing data transfer from A to B. It initiates the process by sending an RTS signal to node F. Node F is in the transmission range of node A and cannot hear the RTS from node G, since it is exposed to co-channel interference. Node G assumes that its RTS was unsuccessful because of a collision and hence backs off before trying again. In this case, the solution provided by the RRTS mechanism will not improve the situation much, since the DATA frames sent from B are rather long compared to the other frames, and the probability that F is exposed to transmission from A is rather high. Node F has no idea about any node interested in initiating a data transfer to it until G happens to transmit an RTS between transmissions from A.
Ongoing research:
Furthermore, MACAW might not behave normally in multicasting.
**Virtual finite-state machine**
Virtual finite-state machine:
A virtual finite-state machine (VFSM) is a finite-state machine (FSM) defined in a Virtual Environment. The VFSM concept provides a software specification method to describe the behaviour of a control system using assigned names of input control properties and output actions.
The VFSM method introduces an execution model and facilitates the idea of an executable specification. This technology is mainly used in complex machine control, instrumentation, and telecommunication applications.
Why:
Implementing a state machine necessitates the generation of logical conditions (state transition conditions and action conditions). In the hardware environment, where state machines found their original use, this is trivial: all signals are Boolean. In contrast, state machines specified and implemented in software require logical conditions that are inherently multivalued. For example: temperature could be Low, OK or High; commands may have several values such as Init, Start, Stop, Break or Continue; and in a hierarchical control system the subordinate state machines can have many states that are used as conditions of the superior state machine. In addition, input signals can be unknown due to errors or malfunctions, meaning that even digital input signals (considered as classical Boolean values) in fact have three values: Low, High and Unknown.
Why:
A Positive Logical Algebra solves this problem via virtualization, by creating a Virtual Environment which allows specification of state machines for software using multivalued variables.
Control Properties:
A state variable in the VFSM environment may have one or more values which are relevant for the Control—in such a case it is an input variable. Those values are the control properties of this variable. Control properties are not necessarily specific data values but are rather certain states of the variable. For instance, a digital variable could provide three control properties: TRUE, FALSE and UNKNOWN according to its possible boolean values. A numerical (analog) input variable has control properties such as: LOW, HIGH, OK, BAD, UNKNOWN according to its range of desired values. A timer can have its OVER state (time-out occurred) as its most significant control value; other values could be STOPPED or RUNNING.
Actions:
Other state variables in the VFSM environment may be activated by actions—in such a case each is an output variable. For instance, a digital output has two actions: True and False. A numerical (analog) output variable has one action: Set. A timer, which is both an input and an output variable, can be triggered by actions such as Start, Stop or Reset.
Virtual Environment:
The virtual environment characterises the runtime environment in which a virtual machine operates. It is defined by three sets of names: input names, which represent the control properties of all available variables; output names, which represent the available actions on the variables; and state names, as defined for each of the states of the FSM. The input names build virtual conditions to perform state transitions or input actions. The virtual conditions are built using the positive logic algebra. The output names trigger actions: entry actions, exit actions, input actions or transition actions.
Positive Logic Algebra:
The rules to build a virtual condition are as follows.
Input Names and Virtual Input: a state of an input is described by Input Names, which create a set:
input A: Anames = {A1, A2, A3}
input B: Bnames = {B1, B2}
input C: Cnames = {C1, C2, C3, C4, C5}
etc.
Positive Logic Algebra:
Virtual Input: VI is a set of mutually exclusive elements of input names. A VI always contains the element always:
VI = {always}
VI = {always, A1}
VI = {always, A1, B2, C4}
Logical operations on Input Names:
& (AND) operation is a set of input names: A1 & B3 & C2 => {A1, B3, C2}
| (OR) operation is a table of sets of input names: A1 | B3 | C2 => [{A1}{B3}{C2}]
~ (Complement) is the complement of a set of input names: ~A2 = {A1, A3}
Logical expression: a logical expression is an OR-table of AND-sets (a disjunctive normal form): A1 & B3 | A1 & B2 & C4 | C2 => [{A1B3}{A1B2C4}{C2}]. Logical expressions are used to express any logical function.
Positive Logic Algebra:
Evaluation of a logical expression: the logical value (true, false) of a logical expression is calculated by testing whether any of the AND-sets in the OR-table is a subset of VI.
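A minimal sketch of this evaluation, with sets of input names modelled as Python sets (the expression and the Virtual Inputs below reuse the examples above purely for illustration):

```python
# Evaluate an OR-table of AND-sets against a Virtual Input (illustrative).
def evaluate(or_table, vi):
    """True if any AND-set in the OR-table is a subset of the current Virtual Input."""
    return any(and_set <= vi for and_set in or_table)

# Expression A1 & B3 | A1 & B2 & C4 | C2 as an OR-table of AND-sets.
expr = [{"A1", "B3"}, {"A1", "B2", "C4"}, {"C2"}]

print(evaluate(expr, {"always", "A1", "B3"}))   # True: the first AND-set holds
print(evaluate(expr, {"always", "A1", "B2"}))   # False: no AND-set is a subset
```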
Output Names and Virtual Output: a state of an output is described by Output Names, which create a set:
output X: Xnames = {X1, X2}
output Y: Ynames = {Y1, Y2, Y3}
Virtual Output VO is a set of mutually exclusive elements of output names.
Virtual Environment: the Virtual Input and Virtual Output, completed by State Names, create the Virtual Environment VE in which the behaviour is specified.
VFSM Execution Model:
A subset of all defined input names, which can exist only in a certain situation, is called virtual input or VI. For instance temperature can be either "too low", "good" or "too high". Although there are three input names defined, only one of them can exist in a real situation. This one builds the VI.
A subset of all defined output names, which can exist only in a certain situation is called virtual output or VO. This is built by the current action(s) of the VFSM.
The behaviour specification is built by a state table which describes all details of all states of the VFSM.
The VFSM executor is triggered by VI and the current state of the VFSM. In consideration of the behaviour specification of the current state, the VO is set.
Figure 2 shows one possible implementation of a VFSM executor. Based on this implementation, typical behaviour characteristics must be considered.
State Table:
A state table defines all details of the behaviour of a state of a VFSM. It consists of three columns: the first column names the state, the second lists virtual conditions built out of input names using the positive logic algebra, and the third column contains the output names. Read the table as follows: the first two lines define the entry and exit actions of the current state; the following lines, which do not provide a next state, represent the input actions; finally, the lines providing a next state represent the state transition conditions and transition actions. All fields are optional. A purely combinatorial VFSM is possible in the case where only input actions are used but no state transitions are defined. The transition action can be replaced by the proper use of other actions.
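Putting these pieces together, a minimal VFSM execution step might look like the following sketch; the state names, conditions and actions are invented for illustration, and entry, exit and input actions are omitted for brevity.

```python
# Minimal VFSM executor sketch (illustrative state table and names).
# Each transition row: (OR-table of AND-sets over input names, actions, next state).
STATE_TABLE = {
    "Idle":    [([{"Cmd_Start"}],             {"Heater_On"},  "Heating")],
    "Heating": [([{"Temp_OK"}, {"Cmd_Stop"}], {"Heater_Off"}, "Idle")],
}

def condition_true(or_table, vi):
    """A virtual condition holds if any AND-set is a subset of the Virtual Input."""
    return any(and_set <= vi for and_set in or_table)

def step(state, vi):
    """Return (next_state, virtual_output) for the current state and Virtual Input."""
    for or_table, actions, next_state in STATE_TABLE.get(state, []):
        if condition_true(or_table, vi):
            return next_state, actions
    return state, set()              # no transition fires: stay put, no actions

state = "Idle"
for vi in [{"always", "Cmd_Start"}, {"always", "Temp_Low"}, {"always", "Temp_OK"}]:
    state, vo = step(state, vi)
    print(vi, "->", state, vo)
```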
Tools:
StateWORKS: an implementation of the VFSM concept. PlayMaker: implements the VFSM concept as a method of "visual scripting" in the Unity game engine.
**Network UPS Tools**
Network UPS Tools:
Network UPS Tools (NUT) is a suite of software components designed to monitor power devices, such as uninterruptible power supplies, power distribution units, solar controllers and server power supply units. Many brands and models are supported and exposed via a network protocol and standardized interface.
Network UPS Tools:
It follows a three-tier model: dozens of NUT device-driver daemons communicate with power-related hardware devices over selected media using vendor-specific protocols; the NUT server upsd represents the drivers on the network (defaulting to the IANA-registered port 3493/tcp) using the standardized NUT protocol; and NUT clients (running on the same localhost as the server, or on remote systems) manage the power devices and query their power states and other metrics for any applications, typically ranging from historic graphing and graceful shutdowns to orchestrated power failover and VM migration.
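For illustration, a minimal client could query upsd over TCP with the text-based NUT protocol, as in the sketch below; the host, the UPS name "myups" and the presence of the ups.status variable are assumptions that depend on local configuration, and error handling is omitted.

```python
import socket

# Minimal sketch of talking to upsd with the text-based NUT protocol.
# Host, UPS name and variable are assumptions; error handling is omitted.
HOST, PORT, UPS = "localhost", 3493, "myups"

def nut_query(command: str) -> str:
    """Send one protocol line to upsd and return the first response line."""
    with socket.create_connection((HOST, PORT), timeout=5) as conn:
        conn.sendall((command + "\n").encode("ascii"))
        return conn.makefile("r", encoding="ascii").readline().strip()

# Expected reply looks like: VAR myups ups.status "OL" (when the UPS is online).
print(nut_query(f"GET VAR {UPS} ups.status"))
```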
Network UPS Tools:
Based on NUT design and protocol, the project community authored "UPS management protocol", Informational RFC 9271, which was published by IETF in August 2022, and the IANA port number registry was updated to reflect it (even though this RFC is not formally an Internet Standard).
Network UPS Tools:
Clients maintained in the NUT codebase include upsc, upsrw and upscmd for command-line actions, upsmon for relatively simple monitoring and graceful shutdowns (considering the amount of minimally required vs. total available power source units in the current server), upssched for complex monitoring scenarios, upscgi for a simple web interface, a NUT-Monitor X11 desktop client, as well as C, C++ and Python libraries for third-party clients. Community projects include more clients and bindings for other languages.
Network UPS Tools:
Being a cross-platform project, NUT works on most Unix, BSD and Linux platforms with various system architectures, from embedded systems to venerable Solaris, HP-UX and AIX servers. There were also native Windows builds based on an earlier stable NUT release line, the last being 2.6.5. This effort was revived after the NUT 2.8.0 release, becoming part of the main codebase in September 2022 (at this time there are areas of the codebase documented in the project as placeholders and not yet ported to the Windows platform, and packages are not yet produced by the project).
History:
Pavel Korensky's original apcd provided the inspiration for pursuing the APC Smart-UPS protocol in 1996. This is the same software that Apcupsd derived from, according to the Debian maintainer of the latter. Russell Kroll, the original NUT author and coordinator, released the initial package, named smartupstools, in 1998. The design already provided for two daemons, upsd (which serves data) and upsmon (which protects systems), a set of drivers and examples, a number of CGI modules and client integrations, and a set of client CLI tools (upsc, upsrw and upscmd) for interfacing the system with a specific UPS of a given model. Evgeny "Jim" Klimov, the current project leader since 2020, focuses first on automated testing and quality assurance of the existing codebase to ensure minimal breakage from new contributions, on cleaning up older technical debts and inconsistencies highlighted by modern lint and coverage tools, and on issuing a long-overdue new official release. Over its two-decade history, the open-source project became the de facto standard solution for UPS monitoring provided with OS distributions and embedded into many NAS solutions, some converged hypervisor set-ups, and other appliances, and enjoyed contributions and support from numerous end-users as well as representatives of power hardware vendors providing protocol specifications, sample hardware, and in many cases new NUT driver code and subsequent fixes based on NUT community feedback.
**COG8**
COG8:
Conserved oligomeric Golgi complex subunit 8 is a protein that in humans is encoded by the COG8 gene. Multiprotein complexes are key determinants of Golgi apparatus structure and its capacity for intracellular transport and glycoprotein modification. Several complexes have been identified, including the Golgi transport complex (GTC), the LDLC complex, which is involved in glycosylation reactions, and the SEC34 complex, which is involved in vesicular transport. These three complexes are identical and have been termed the conserved oligomeric Golgi (COG) complex, which includes COG8 (Ungar et al., 2002). [supplied by OMIM]
**Wendy Boss**
Wendy Boss:
Wendy Farmer Boss is an American botanist and the current William Neal Reynolds Distinguished Professor Emeritus at North Carolina State University. Her research focuses on plant physiology and phosphoinositide-mediated signalling in plants. Phosphoinositides are derived from the phospholipids found in the plasma membrane of the cell and are known to be key molecules in signal transduction pathways. The role of these molecules in plants is, however, not well understood, and Boss' research has contributed significantly to understanding this topic.
Early life and education:
Boss received her Bachelor of Science degree from Wake Forest University in 1968. Subsequently, she completed a Master of Science in 1970 from University of Washington. She was awarded a Doctorate of Philosophy from Indiana University Bloomington in 1977.
Career and research:
The Boss lab works on phosphoinositide metabolism in plants. Primarily, the research focuses on the role of the chemicals phosphatidyl-inositol-4P and phosphatidyl-inositol-4,5P2 in signal transduction as plants adapt to environmental changes. In 2001 Boss received grants from NASA, the National Science Foundation and the United States Department of Agriculture's Binational Agricultural Research and Development (BARD) program to study the role of these chemicals in plants grown in space. The research measured the chemical surges occurring in plant cells moments after a plant is reoriented and the response time required by plants to adapt to the reorientation.
Honors and awards:
Inaugural fellow of the American Society of Plant Biologists, 2007. Charles Reid Barnes Life Membership Award, 2015, awarded by the American Society of Plant Biologists. William Neal Reynolds Distinguished Professor Emeritus at North Carolina State University. Pioneer Member of the American Society of Plant Biologists.
**Dental radiography**
Dental radiography:
Dental radiographs, commonly known as X-rays, are radiographs used to diagnose hidden dental structures, malignant or benign masses, bone loss, and cavities.
Dental radiography:
A radiographic image is formed by a controlled burst of X-ray radiation which penetrates oral structures at different levels, depending on varying anatomical densities, before striking the film or sensor. Teeth appear lighter because less radiation penetrates them to reach the film. Dental caries, infections and other changes in the bone density, and the periodontal ligament, appear darker because X-rays readily penetrate these less dense structures. Dental restorations (fillings, crowns) may appear lighter or darker, depending on the density of the material.
Dental radiography:
The dosage of X-ray radiation received by a dental patient is typically small (around 0.150 mSv for a full mouth series), equivalent to a few days' worth of background environmental radiation exposure, or similar to the dose received during a cross-country airplane flight (concentrated into one short burst aimed at a small area). Incidental exposure is further reduced by the use of a lead shield, lead apron, sometimes with a lead thyroid collar. Technician exposure is reduced by stepping out of the room, or behind adequate shielding material, when the X-ray source is activated.
Dental radiography:
Once photographic film has been exposed to X-ray radiation, it needs to be developed, traditionally using a process where the film is exposed to a series of chemicals in a dark room, as the films are sensitive to normal light. This can be a time-consuming process, and incorrect exposures or mistakes in the development process can necessitate retakes, exposing the patient to additional radiation. Digital X-rays, which replace the film with an electronic sensor, address some of these issues, and are becoming widely used in dentistry as the technology evolves. They may require less radiation and are processed much more quickly than conventional radiographic films, often instantly viewable on a computer. However digital sensors are extremely costly and have historically had poor resolution, though this is much improved in modern sensors.
Dental radiography:
It is possible for both tooth decay and periodontal disease to be missed during a clinical exam, and radiographic evaluation of the dental and periodontal tissues is a critical segment of the comprehensive oral examination. The photographic montage at right depicts a situation in which extensive decay had been overlooked by a number of dentists prior to radiographic evaluation.
Intraoral radiographic views:
Placing the radiographic film or sensor inside the mouth produces an intraoral radiographic view.
Intraoral radiographic views:
Periapical view: Periapical radiographs are taken to evaluate the periapical area of the tooth and the surrounding bone. For periapical radiographs, the film or digital receptor should be placed parallel, vertically, to the full length of the teeth being imaged. The main indications for periapical radiography are: detection of apical inflammation or infection, including cystic changes; assessment of periodontal problems; trauma (fractures of the tooth and/or surrounding bone); pre- and post-operative assessment for apical surgery or extraction; pre-extraction planning for any developmental anomalies and root morphology; and post-extraction radiographs for any root fragments and other collateral damage.
Intraoral radiographic views:
Detection of the presence or position of unerupted teeth. Endodontics: for any endodontic treatment, a pre-treatment radiograph is taken to measure the working length of the canals, and this measurement is confirmed with an electronic apex locator. A 'cone fit' radiograph is used when the master apical cone is placed in the wet canal to the corrected working length to achieve a frictional fit apically. Next, an obturation verification radiograph is indicated after the canal space is fully filled with the master cone, sealer and accessory cones. In the end, a final radiograph is taken after a definitive restoration is placed to check the final outcome of the root canal treatment.
Intraoral radiographic views:
Evaluation of implants. Intraoral periapical radiographs are widely used preoperatively due to their simple technique, low cost and low radiation exposure, and because they are widely available in clinical settings.
Intraoral radiographic views:
Bitewing view The bitewing view is taken to visualize the crowns of the posterior teeth and the height of the alveolar bone in relation to the cementoenamel junctions, which are the demarcation lines on the teeth which separate tooth crown from tooth root. Routine bitewing radiographs are commonly used to examine for interdental caries and recurrent caries under existing restorations. When there is extensive bone loss, the films may be situated with their longer dimension in the vertical axis so as to better visualize their levels in relation to the teeth. Because bitewing views are taken from a more or less perpendicular angle to the buccal surface of the teeth, they more accurately exhibit the bone levels than do periapical views. Bitewings of the anterior teeth are not routinely taken.
Intraoral radiographic views:
The name bitewing refers to a little tab of paper or plastic situated in the center of the X-ray film, which when bitten on, allows the film to hover so that it captures an even amount of maxillary and mandibular information.
Intraoral radiographic views:
Occlusal view The occlusal view reveals the skeletal or pathologic anatomy of either the floor of the mouth or the palate. The occlusal film, which is about three to four times the size of the film used to take a periapical or bitewing, is inserted into the mouth so as to entirely separate the maxillary and mandibular teeth, and the film is exposed either from under the chin or angled down from the top of the nose. Sometimes, it is placed in the inside of the cheek to confirm the presence of a sialolith in Stenson's duct, which carries saliva from the parotid gland. The occlusal view is not included in the standard full mouth series.
Intraoral radiographic views:
1. Anterior oblique occlusal mandible – 45°. Technique: the collimator is positioned in the midline, through the chin, aiming at an angle of 45° to the image receptor, which is placed centrally in the mouth on the occlusal surface of the lower arch.
Indications: 1) Periapical status of lower incisor teeth for patients who cannot tolerate periapical radiographs.
Intraoral radiographic views:
2) Assess the size of lesions such as cysts or tumours in the anterior region of the mandible. 2. Lateral oblique occlusal mandible – 45°. Technique: the collimator is positioned from below and behind the angle of the mandible and parallel to the lingual surface of the mandible, aiming upwards and forwards at the image receptor, which is placed centrally in the mouth on the occlusal surface of the lower arch. Patients must turn their head away from the side under investigation.
Intraoral radiographic views:
Indications: 1) Detection of any sialoliths in the submandibular salivary glands. 2) Demonstration of unerupted lower 8s. 3) Assessment of the size of lesions such as cysts or tumours in the posterior body and angle of the mandible.

Full mouth series: A full mouth series is a complete set of intraoral X-rays taken of a patient's teeth and adjacent hard tissue. This is often abbreviated as either FMS or FMX (or CMRS, meaning Complete Mouth Radiographic Series). The full mouth series is composed of 18 films, taken the same day: four bitewings – two molar bitewings (left and right) and two premolar bitewings (left and right); eight posterior periapicals – two maxillary molar periapicals (left and right), two maxillary premolar periapicals (left and right), two mandibular molar periapicals (left and right), and two mandibular premolar periapicals (left and right); and six anterior periapicals – two maxillary canine-lateral incisor periapicals (left and right), two mandibular canine-lateral incisor periapicals (left and right), and two central incisor periapicals (maxillary and mandibular). The Faculty of General Dental Practice of the Royal College of Surgeons of England publication Selection Criteria in Dental Radiography holds that, given current evidence, full mouth series are to be discouraged due to the large number of radiographs involved, many of which will not be necessary for the patient's treatment. An alternative approach using bitewing screening with selected periapical views is suggested as a method of minimising radiation dose to the patient while maximizing diagnostic yield. Contrary to advice that emphasises only conducting radiographs when in the patient's interest, recent evidence suggests that they are used more frequently when dentists are paid under fee-for-service.
Intra-oral radiographic techniques:
Accurate positioning is of utmost importance to produce diagnostic radiographs and to avoid retakes, hence minimizing the radiation exposure of the patient. The requirements for ideal positioning include: the tooth and image receptor (film packet or digital sensor) should be parallel to one another; the long axis of the image receptor is vertical for incisors and canines, and horizontal for premolars and molars; and there should be enough receptor beyond the apices of the teeth to record the apical tissues.
Intra-oral radiographic techniques:
The X-ray beam from the tube head should meet the tooth and the image receptor at right angles in both the vertical and horizontal planes; positioning should be reproducible; and the tooth under investigation and the image receptor should be in contact, or as close together as possible. However, the anatomy of the oral cavity makes it challenging to satisfy these ideal positioning requirements. Two different techniques have hence been developed for undertaking an intra-oral radiograph – the paralleling technique and the bisected angle technique. It is generally accepted that the paralleling technique offers more advantages than disadvantages and gives a more representative image than the bisecting angle technique.
Intra-oral radiographic techniques:
Paralleling technique: This can be used for both periapical and bitewing radiographs. The image receptor is placed in a holder and positioned parallel to the long axis of the tooth being imaged. The X-ray tube head is aimed at right angles, both vertically and horizontally, to both the tooth and the image receptor. This positioning has the potential to satisfy four out of the five above requirements – the tooth and image receptor cannot be in contact whilst they are parallel. Because of this separation, a long focus-to-skin distance is required to prevent magnification. This technique is advantageous as the teeth are viewed exactly parallel with the central ray and therefore there are minimal levels of object distortion. With this technique, the positioning can be duplicated with the use of film holders. This makes recreation of the image possible, which allows for future comparison. There is some evidence that the use of the paralleling technique reduces the radiation hazard to the thyroid gland compared to the bisecting angle technique. This technique, however, may be impossible in some patients due to their anatomy, e.g. a shallow/flat palate.
Intra-oral radiographic techniques:
Bisecting angle technique: The bisecting angle technique is an older method for periapical radiography. It can be a useful alternative when the ideal receptor placement using the paralleling technique cannot be achieved, for reasons such as anatomical obstacles, e.g. tori, a shallow palate, a shallow floor of mouth, or a narrow arch width. This technique is based on the principle of aiming the central ray of the X-ray beam at 90° to an imaginary line which bisects the angle formed by the long axis of the tooth and the plane of the receptor. The image receptor is placed as close as possible to the tooth under investigation, without bending the packet. Applying the geometrical principle of similar triangles, the length of the tooth on the image will then be the same as the actual length of the tooth in the mouth. The many inherent variables can inevitably result in image distortion, and reproducible views are not possible with this technique. An incorrect vertical tube head angulation will result in foreshortening or elongation of the image, while an incorrect horizontal tube head angulation will cause overlapping of the crowns and roots of teeth. Frequent errors that arise from the bisecting angle technique include improper film positioning, incorrect vertical angulation, cone-cutting, and incorrect horizontal angulation.
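A simplified way to see why the bisecting rule reproduces the true tooth length, and why vertical angulation errors distort the image, is the parallel-beam projection model sketched below; the parallel beam, the tooth apex touching the receptor and the example angles are all simplifying assumptions made for illustration.

```python
import math

# Simplified parallel-beam model of the bisecting angle technique (illustrative).
# theta: angle between the tooth's long axis and the receptor plane.
# beta:  angle between the central ray and the receptor plane.
def image_length(tooth_length: float, theta_deg: float, beta_deg: float) -> float:
    """Projected tooth length on the receptor for a parallel X-ray beam."""
    theta, beta = math.radians(theta_deg), math.radians(beta_deg)
    return tooth_length * math.sin(theta + beta) / math.sin(beta)

theta = 30.0                         # tooth-to-receptor angle (example value)
ideal_beta = 90.0 - theta / 2        # beam perpendicular to the bisector

print(image_length(20.0, theta, ideal_beta))   # ~20.0 mm: true length reproduced
print(image_length(20.0, theta, 90.0))         # ~17.3 mm: too steep -> foreshortened
print(image_length(20.0, theta, 60.0))         # ~23.1 mm: too shallow -> elongated
```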
Extraoral radiographic views:
Placing the photographic film or sensor outside the mouth, on the opposite side of the head from the X-ray source, produces an extra-oral radiographic view.
A lateral cephalogram is used to evaluate dentofacial proportions and clarify the anatomic basis for a malocclusion, and an antero-posterior radiograph provides a face-forward view.
Extraoral radiographic views:
Lateral cephalometric radiography: Lateral cephalometric radiography (LCR) is a standardized and reproducible form of skull radiography taken from the side of the face with precise positioning. It is used primarily in orthodontics and orthognathic surgery to assess the relationship of the teeth to the jaws, and the jaws to the rest of the facial skeleton. LCR is analyzed using cephalometric tracing or digitizing to obtain maximum clinical information. Indications for LCR include: diagnosis of skeletal and/or soft tissue abnormalities; treatment planning; a baseline for monitoring treatment progress; appraisal of orthodontic treatment and orthognathic surgery results; assessment of unerupted, malformed, or misplaced teeth; assessment of upper incisor root length; and clinical teaching and research.

Panoramic films: Panoramic films are extraoral films, in which the film is exposed while outside the patient's mouth; they were developed by the United States Army as a quick way to get an overall view of a soldier's oral health. Exposing eighteen films per soldier was very time consuming, and it was felt that a single panoramic film could speed up the process of examining and assessing the dental health of the soldiers, as soldiers with toothache were incapacitated from duty. It was later discovered that while panoramic films can prove very useful in detecting and localizing mandibular fractures and other pathologic entities of the mandible, they were not very good at assessing periodontal bone loss or tooth decay.
Computed tomography:
There is increasing use of CT (computed tomography) scans in dentistry, particularly to plan dental implants; there may be significant levels of radiation and potential risk. Specially designed CBCT (cone beam CT) scanners can be used instead, which produce adequate imaging with a stated tenfold reduction in radiation. Although computed tomography offers high quality images and accuracy, the radiation dose of the scans is higher than the other conventional radiography views, and its use should be justified. Controversy surrounds the degree of radiation reduction though as the highest quality cone beam scans use radiation doses not dissimilar to modern conventional CT scans.
Computed tomography:
Cone beam computed tomography Cone beam computed tomography (CBCT), also known as digital volume tomography (DVT), is a special type of X-ray technology that generates 3D images. In the recent years, CBCT has been developed specifically for its use in the dental and maxillofacial areas to overcome the limitations of 2D imaging such as buccolingual superimposition. It is becoming the imaging modality of choice in certain clinical scenarios although clinical research justifies its limited use.
Computed tomography:
Indications for CBCT, according to the SEDENTEXCT (Safety and Efficacy of a New and Emerging Dental X-ray Modality) guidelines, include:

Developing dentition: assessment of unerupted and/or impacted teeth; assessment of external resorption; assessment of cleft palate; treatment planning for complex maxillofacial skeletal abnormalities.

Restoration of the dentition (if conventional imaging is inadequate): assessment of infra-bony defects and furcation lesions; assessment of root canal anatomy in multi-rooted teeth; treatment planning of surgical endodontic procedures and complex endodontic treatments; assessment of dental trauma.

Surgical: assessment of lower third molars where an intimate relationship with the inferior dental canal is suspected; assessment of unerupted teeth; prior to implant placement; assessment of pathological lesions of the jaws (cysts, tumours, giant cell lesions, etc.); assessment of facial fractures; treatment planning of orthognathic surgery; assessment of the bony elements of the maxillary antra and TMJ.

Research: A cross-sectional diagnostic study compared and correlated bone sounding and open bone measurements with conventional radiographs and CBCT for periodontal disease. The study did not find any superior result of CBCT over the conventional techniques, except for lingual measurements.
Localisation techniques:
The concept of parallax was first introduced by Clark in 1909. It is defined as "the apparent displacement or difference in apparent direction of an object as seen from two different points not on a straight line with the object". It is used to overcome the limitations of the 2D image in the assessment of relationships of structures in a 3D object.
Localisation techniques:
It is mostly used to ascertain the position of an unerupted tooth in relation to the erupted ones (i.e. if the unerupted tooth is buccally / palatally placed / in line of the arch). Other indications for radiographic localization include: separating the multiple roots/canals of teeth in endodontics, assessing the displacement of fractures, or determining the expansion or destruction of bone.
Localisation techniques:
Horizontal parallax involves taking two radiographs at different horizontal angles with the same vertical angulation (e.g. two intra-oral periapical radiographs). Based on the rule of parallax, the more distant object will appear to move in the same direction as the tube shift, while the object nearer to the tube will appear to move in the opposite direction (the Same Lingual Opposite Buccal, or SLOB, rule). Vertical parallax involves taking two radiographs at different vertical angulations (e.g. one periapical and one maxillary anterior occlusal, or one maxillary anterior occlusal and one panoramic). MBD rule: commonly employed in endodontics, the MBD rule states that when an exposure is angled (about 5–7°) from the mesial surface, the buccal root or canal will lie to the distal on the image. With the rise of 3D radiographic techniques, CBCT can be used to replace parallax radiographs, overcoming the limitations of the 2D radiographic technique. In cases of impacted teeth, the image obtained via CBCT can determine the buccal-palatal position and angulation of the impacted tooth, as well as its proximity to the roots of adjacent teeth and the degree of root resorption, if any.
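A small sketch of applying the SLOB rule to horizontal parallax is shown below; the direction labels are simplified, and the "apparent movement" judgement, normally made by comparing the two films clinically, is assumed to be supplied.

```python
# Applying the SLOB (Same Lingual, Opposite Buccal) rule - illustrative only.
def localise(tube_shift: str, apparent_movement: str) -> str:
    """Classify an object as lingual/palatal or buccal from two radiographs.

    tube_shift: direction the tube head was moved between exposures
                ("mesial" or "distal").
    apparent_movement: direction the object appears to move relative to a
                fixed reference tooth on the second film.
    """
    if apparent_movement == tube_shift:
        return "lingual/palatal (moved the SAME way as the tube)"
    return "buccal (moved OPPOSITE to the tube shift)"

print(localise("mesial", "mesial"))   # object lies lingually/palatally
print(localise("mesial", "distal"))   # object lies buccally
```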
Faults:
Dental radiographs are an essential component to aid in diagnosis. Alongside an efficient clinical examination, a high-quality dental radiograph can show essential diagnostic information crucial for ongoing treatment planning for a patient. When a dental radiograph is recorded, however, many faults may arise. These are highly variable due to differing image receptor types, X-ray equipment, levels of training, processing materials, etc.
Faults:
General faults: As previously stated, a major difference in dental radiography is the versatile use of film versus digital radiography. This in itself leads to a long list of faults associated with each type of image receptor. Some typical film faults are discussed below, with a variety of reasons as to why each fault may have occurred.
Faults:
Dark film – overexposure of the image from the use of faulty X-ray equipment and/or incorrect exposure time; overdevelopment due to excessive time in the developing agent; developer being too hot and/or too concentrated; fogging due to poor storage conditions; use of old stock; faulty processing unit; thin patient tissues (the differences in the tissues' atomic number determine the different attenuation of the X-ray beam, and the penetrative power itself is a component in achieving adequate contrast).
Pale image – underexposure due to faulty X-ray equipment and/or incorrect exposure time; underdevelopment due to inadequate time in the developing agent; developer being too cold, too dilute or exhausted; developer contaminated by fixing agent; excessively thick patient tissues. A film packet placed back to front also results in a pale image, accompanied by an embossed appearance from the lead pattern inside the image receptor packet.
Faults:
Inadequate or low contrast may be due to: processing error (under- or overdevelopment); developer contaminated by fixer; inadequate fixing time; or exhausted fixer solution. Fogging may be due to: poor storage conditions; poor stock control or out-of-date film; faulty cassettes; a faulty processing unit; or exposure to white light. Lack of sharpness and clarity may be due to: movement of the patient or equipment during exposure; excessive bending of the film packet during exposure; poor film/screen contact within a cassette; the speed of the intensifying screens (the faster the screen, the poorer the detail); overexposure causing burn-out of the edges of thin objects (cervical burnout); or poor positioning in panoramic radiography. A marked film may be due to: bends or crimps in the film (dark lines); careless handling of the film in the darkroom, leading to fingerprints and nail marks; splashes of chemicals before processing; the patient biting too hard onto the film; dirty intensifying screens; or static electricity, which causes a black starburst appearance. A green tint to the film is due to insufficient fixing. Double exposure occurs when two images are superimposed as a result of the receptor being used twice. A partial image may be due to: failure to direct the collimator to the centre of the image receptor; or, in manual processing, a developer level that is too low so that the film is only partially submerged. Exclusively digital faults: as film and digital receptors work and are handled very differently, their faults inevitably also differ. Typical digital faults, which again vary with the type of digital image receptor used, include: thin white lines on a PSP image (scratched phosphor plate); white areas on the edge of a PSP image (phosphor coating de-bonding); areas of white "burn out" (PSP underexposed, or plate exposed to light before processing); a grainy digital image (underexposure); a fine zig-zag line through the image (dust in the PSP scanner at the level of the laser); a white curved area at the corner of the image (PSP corner folded forward in the mouth); a paler, finger-shaped area on the image (fingerprint on the PSP surface); a curved darker area at the corner of a CCD image (damage to photocells in the solid-state sensor); a paler portion of the image (caused by a bend across the PSP); and a "marble effect" to the image (PSP exposed to excessive heat).
Faults:
Faults in processing. The potential faults associated with the choice of image receptor have been covered, but faults elsewhere in the process of producing an ideal diagnostic radiograph can also occur. Most of these have already been mentioned, but processing inaccuracies alone may cause: a blank or clear film due to the wrong sequence of solutions (the correct sequence is developer, wash, then fixer); dark spots due to developer drips on the film before processing; white or blank spots due to fixer drops on the film before processing; a black or dark film due to an improper safelight or a solution that is too warm; a partial image due to processing solutions being low, the film not being completely covered by solution, or films touching the sides of tanks and/or each other on the belt; a stained-glass effect (reticulation) due to a large temperature difference between solution baths; yellow-brownish stains due to an improper water bath; stains from old solutions, particularly the developer; and the risk of retaking the image on the same receptor, causing a double exposure with its implications for patient health. Faults in technique. Inadequate training of staff can lead to discrepancies in almost any aspect of achieving a diagnostic radiographic image. Examples include: foreshortening of the image (causing the structures on the radiograph to appear too short), which is due to an excessive vertical angulation of the X-ray tube while taking the radiograph.
Faults:
Elongation of the image refers to a lengthening effect on the structures in the radiograph and is due to a decreased vertical angulation of the X-ray tube; a bend in the film can sometimes produce an elongation effect on just a few teeth rather than the whole image. Overlapping of proximal surfaces is an error of improper horizontal angulation of the image receptor, either too far forward or backward with respect to the X-ray beam. Slanting of the occlusal plane occurs when the film has been improperly placed in the patient's mouth, as the occlusal plane should be parallel to the margin of the film. Other technique faults include the apical region not being visible; a blurred or distorted image due to movement; a cone-cut appearance, which may occur when the X-ray beam is not centred over the film; double exposure, when two images are taken on one radiograph; a reversed film; crimp marks; and images that are too light or too dark. Image geometry comprises the X-ray beam, the object and the image receptor, all of which depend on a specific relationship to each other: the object and film should be in contact or as close together as possible, the object and film should be parallel to one another, and the X-ray tube head should be positioned so that the beam meets the object and the film at right angles.
Faults:
Characteristics of the X-ray beam: the ideal beam should sufficiently penetrate the film emulsion to produce good contrast, be parallel, and have a focal trough. Image quality scale: it is inevitable that some faults will occur despite efforts at prevention, so a set of criteria for what constitutes an acceptable image has been created. This is implemented so that the amount of re-exposure a patient needs in order to obtain a diagnostic image is minimal, and to improve the manner in which radiographs are taken in practice.
Faults:
When considering the quality of a radiographic image there are many factors to take into account. These can be split into sub-categories such as radiographic technique, the type of image receptor (film or digital) and the processing of the image. A combination of all these factors is considered alongside the quality of the image itself to assign a specific grade and determine whether the image is of a standard suitable for diagnostic use.
Faults:
The following grades have since been updated but may still be used in the literature and by some clinicians: Grade 1 is given for an excellent-quality image with no errors of patient preparation, exposure, positioning, processing or film handling.
Grade 2 is given for a diagnostically acceptable image with some errors of patient preparation, exposure, positioning, processing or film handling; although these errors are present, they do not detract from the diagnostic utility of the radiograph.
Faults:
Grade 3 is given when there are significant errors in patient preparation, exposure, film handling, processing and/or positioning which render the radiograph diagnostically unacceptable. In 2020, the FGDP updated its guidance with a simplified system for image quality rating and analysis. The new system has the following grades: Diagnostically Acceptable (A), meaning no errors or minimal errors in patient preparation, exposure, positioning or image processing, and sufficient image quality to answer the clinical question; and Diagnostically Not Acceptable (N), meaning errors in patient preparation, exposure, positioning or image processing which render the image diagnostically unacceptable. The targets for Grade A radiographs are no less than 95% for digital imaging and no less than 90% for film imaging; correspondingly, the targets for Grade N radiographs are no more than 5% for digital and no more than 10% for film imaging.
Faults:
Film reject analysis. To maintain a high standard of images, each radiograph should be examined and appropriately graded. In simple terms, as described by the World Health Organisation, this should be "a well designed quality assurance programme which should be comprehensive but inexpensive to operate and maintain." The aim of quality assurance is to achieve diagnostic radiographs of a consistently high standard, reducing the number of repeat radiographs by identifying all sources of error so that they can be corrected. This, in turn, reduces the exposure to the patient, keeping doses as low as reasonably possible, as well as keeping costs low.
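To make the bookkeeping behind such an audit concrete, here is a minimal Python sketch, assuming the simplified A/N grading and the percentage targets quoted above; the function name, the input format and the example figures are illustrative only and are not taken from any guidance document.

```python
def reject_analysis(grades, imaging_type="digital"):
    """Compare the share of diagnostically acceptable (Grade A) images
    against the quoted targets: at least 95% for digital, 90% for film."""
    target = 0.95 if imaging_type == "digital" else 0.90
    grade_a_share = sum(1 for g in grades if g == "A") / len(grades)
    return {
        "grade_a_share": round(grade_a_share, 3),
        "grade_n_share": round(1 - grade_a_share, 3),
        "meets_target": grade_a_share >= target,
    }

# Illustrative audit: 97 acceptable and 3 unacceptable digital radiographs.
print(reject_analysis(["A"] * 97 + ["N"] * 3, imaging_type="digital"))
```

In a real reject analysis, the Grade N images would additionally be sorted by the cause of the fault (exposure, positioning, processing, handling), since identifying the dominant cause is what allows corrective action.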
Faults:
Quality assurance consists of close monitoring of image quality on a day-to-day basis, comparing each radiograph to one of a high standard. If a film does not reach this standard it goes through the process of film reject analysis, in which diagnostically unacceptable radiographs are examined to determine the reason for their faults and to ensure the same mistakes are not made again. The X-ray equipment itself should also be monitored to ensure it remains compliant with the current regulations.
Regulations:
There are numerous risks associated with the taking of dental radiographs. Even though the dose to the patient is small, the collective dose must also be considered, so it is incumbent on the operator and the prescriber to be aware of their responsibilities when exposing a patient to ionizing radiation. Dental radiography has been implicated as a risk factor for cancer of the salivary gland and for intracranial tumours where radiation protection is inadequate. Children are believed to be more at risk from these effects of radiographic examination because of their higher rate of cellular division, and also because of the number of dental radiographs taken during adolescence. The United Kingdom has two sets of regulations related to the taking of X-rays: the Ionizing Radiations Regulations of 2017 (IRR17) and the Ionizing Radiations Medical Exposures Regulations of 2018 (IRMER18). IRR17 principally relates to the protection of workers and the public, along with equipment standards, while IRMER18 is specific to patient protection. These regulations replace the previous versions which had been followed for many years (IRR99 and IRMER2000). This change came primarily from the Basic Safety Standards Directive 2013 (BSSD, also known as European Council Directive 2013/59/Euratom), which all European Union member states were legally required to transpose into their national laws by 2018. The above regulations are specific to the United Kingdom; the EU and USA are principally governed by Directive 2013/59/Euratom and the Federal Guidance for Radiation Protection, respectively. The goal of all these standards, and of those governing other countries, is primarily to protect patients and operators, maintain safe equipment and ensure quality assurance. The UK's Health and Safety Executive (HSE) has also published an accompanying Approved Code of Practice (ACoP) and associated guidance, which gives practical advice on how to comply with the law. Following the ACoP is not obligatory, but compliance with it can be very beneficial for the legal person if they face allegations of negligence or non-compliance with the law, as it will confirm that the legal person has been implementing good practice. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Taurine dioxygenase**
Taurine dioxygenase:
In enzymology, a taurine dioxygenase (EC 1.14.11.17) is an enzyme that catalyzes the chemical reaction.
Taurine dioxygenase:
taurine + 2-oxoglutarate + O2 ⇌ sulfite + aminoacetaldehyde + succinate + CO2. The three substrates of this enzyme are taurine, 2-oxoglutarate and O2, whereas its four products are sulfite, aminoacetaldehyde, succinate and CO2. This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, where the oxygen incorporated need not be derived from O2, with 2-oxoglutarate as one donor and incorporation of one atom of oxygen into each donor. The systematic name of this enzyme class is taurine, 2-oxoglutarate:O2 oxidoreductase (sulfite-forming). Other names in common use include 2-aminoethanesulfonate dioxygenase and alpha-ketoglutarate-dependent taurine dioxygenase. This enzyme participates in taurine and hypotaurine metabolism. It has three cofactors: iron, ascorbate and Fe2+.
Structural studies:
As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1GQW, 1GY9, 1OS7, and 1OTJ.
Mechanism:
Initiating steps: in the decomposition of taurine, it has been shown that molecular oxygen is activated by the Fe(II) centre in the coordination complex of taurine dioxygenase. The enzyme, together with Fe(II) and 2-oxoglutarate, maintains non-covalent bonds through electrostatic interactions and coordinates a nucleophilic attack by dioxygen on carbon 2 of 2-oxoglutarate. This leads to two oxidations, one of 2-oxoglutarate and another of taurine, each of one electron. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DotProject**
DotProject:
dotProject is a web-based, multi-user, multi-language project management application. It is free and open source software, and is maintained by an open community of volunteer programmers.
History:
dotProject was originally developed by Will Ezell at dotmarketing, Inc. to be an open source replacement for Microsoft Project, using a very similar user interface but including project management functionality. Begun in 2000, the project was moved to SourceForge in October 2001 and, from version 2.1.8 onwards, is hosted on GitHub. The project stalled in late 2002 when the original team moved to dotCMS. Subsequently, Andrew Eddie and Adam Donnison, two of the more active developers, were granted administration rights to the project. Andrew continued to work on the project until he moved on to Mambo and later Joomla; Adam remains an administrator. In late 2007, the new dotProject team began a major redevelopment using the Zend Framework, with version 3 (dP3) intended as the release that would use it. A fork called web2project was initiated at the same time.
History:
Since 2018, the dotProject core team has focused its efforts on keeping dotProject compatible with the latest versions of PHP and MySQL/MariaDB and on updating its dependent packages; the overall look and feel remains noticeably similar to what it was in the late 2000s.
Overview of the main features:
dotProject is mostly a task-oriented project management system, predating contemporary tools addressing methodologies such as Agile software development. Instead, it uses the "waterfall" model to manage tasks, sequentially and/or in parallel, assigned to different members of a team or teams, and establishing dependencies between tasks and milestones. It can display such relationships visually using Gantt charts.
Overview of the main features:
It is not specifically designed for software project management but can be used by most kinds of project-oriented service companies (such as design studios, architects, media producers, lawyer offices, and the like), all of which organise their work conceptually in similar ways. Unlike most contemporary software project management tools, dotProject cannot be easily integrated with the usual constellation of 'business tools'; instead, it is a complete, standalone application, not requiring anything else besides a platform that supports PHP (it is web server agnostic) and MySQL/MariaDB. Except for drawing Gantt graphics, it has a reasonably small footprint in terms of memory and disk space requirements.
Overview of the main features:
In spite of its conceptual simplicity, dotProject can nevertheless be extended or integrated with other tools. It comes with a series of plugins, most of which are pre-activated; there is also a repository of independently maintained 'mods' (or plugins) available on SourceForge, which includes, among others, a Risks management module released in late 2020.
While dotProject is self-contained in terms of user authentication and management, it can also integrate with an external LDAP server, as well as synchronise its users with a phpNuke installation. Further authentication methods can be developed separately but are currently not part of the core software.
Overview of the main features:
The core of dotProject focuses on Companies, which may have subunits known as Departments, which, in turn, have Users. Companies can be internal or external; thus, a project can be shared/viewed by customers, by giving them access via a special Role. Roles have a reasonably complex permissions system, allowing a certain degree of fine-tuning of what kind of information can be viewed and/or edited by the users. There is even the possibility of having a 'public' role with no access to any information but nevertheless able to submit tickets via the integrated ticketing system.
Overview of the main features:
Projects, in turn, are linked to one company and (optionally) one or more departments in that company; users assigned to a specific project, however, may come from any company or department — thus allowing cross-company development, or the involvement of external users (independent consultants, freelancers, or even the clients and their intermediaries).
Overview of the main features:
Projects are divided into Tasks, which can have all sorts of dependencies between them; tasks can also have subtasks, and they can be assigned to specific milestones. This allows the establishment of complex relationships between the team members, the many projects they might be involved in, and the amount of work to be distributed among all. As is common with other project management tools, tasks can be created as mere stubs and completed later; assigned and reassigned to team members; or even moved across projects (or becoming subtasks of other tasks).
Overview of the main features:
Team members are expected to register the amount of time they spend on each task, which is accomplished via Logs. These are often one-line comments with an estimate of the time consumed (but can optionally have much more information); dotProject will take those logs into account when calculating the workload, the overall cost of the project so far (and compare it to the budget), as well as figuring out what tasks are being completed in due time or are overdue. Depending on the company style and its level of activity tracking — according to their business culture — time-tracking can be as simple as just closing a task, or it might involve several logs until a supervisor deems that the task can be safely closed.
Overview of the main features:
All these activities are tracked and made part of the overall project history. Optionally, dotProject can send emails to the involved parties, triggered by special conditions — such as a task being overdue, or having been completed so that a customer can be invoiced. While dotProject is not a fully-fledged invoicing system, it can produce enough data output to send reasonably detailed invoices to customers. At the same time, via its reporting facility, the management or the board can get properly formatted reports about ongoing projects, besides having access to the Gantt charts.
Overview of the main features:
Communication between team members can be as simple as leaving comments on tasks and/or logs, but dotProject also includes a minimalistic Forum facility. These are usually assigned to a single project (but each project can have several separate forums, with separate moderators, serving different purposes).
Overview of the main features:
And while dotProject is not a sophisticated document management system, it nevertheless allows files to be uploaded to a special directory, also assigned to specific projects/tasks, and under control of the permission system (file names get hashed, and only someone with the proper permission will be able to retrieve those files). There is a very simple built-in file management system to allow for file uploading and categorising with metadata. The file folder can theoretically be mounted on an external file system on a cloud storage provider — so long as this is achieved at the operating system level; dotProject, by itself, does not connect directly to any storage provider. dotProject also includes a very simple versioning system.
Overview of the main features:
Tasks and milestones are also integrated into the built-in Calendar module, which is usually the preset entry point of the user — allowing them to keep up with the tasks they're involved in, or those that they supervise. There is some flexibility in how the information is presented. It is unknown if there is a way to automatically subscribe to a specific calendar; by contrast, Contacts, a module that allows editing the data related to each user, also permits exports using the vCard format.
Support and community:
As of 2021, the dotProject community mostly volunteers time to reply to dotProject's GitHub issues; no other form of support is available.
As of May 2013, there were over 50,210 registered users in the dotProject forums and an average of 500–700 downloads each day.As of April 2021, the original website mentioned before — which included a rich community of users — does not exist any longer, although https://dotproject.net/ is still actively maintained and points to some key resources (mostly on GitHub). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**K-factor (marketing)**
K-factor (marketing):
In viral marketing, the K-factor can be used to describe the growth rate of websites, apps, or a customer base. The formula is roughly as follows: let i be the number of invites sent by each customer (e.g. if each new customer invites five friends, i = 5) and c the conversion rate of each invite (e.g. if one in five invitees converts to a new user, c = 0.2); then k = i × c. This usage is borrowed from the basic reproduction number in the medical field of epidemiology, in which a virus with a k-factor of 1 is in a "steady" state of neither growth nor decline, while a k-factor greater than 1 indicates exponential growth and a k-factor less than 1 indicates exponential decline. In epidemiology, the k-factor is derived from the rates of distribution and infection for a disease: "distribution" (i) measures the average number of people a host will contact while still infectious, and "infection" (c) measures how likely an average person is to become infected after contact with an infectious host. In the context of viral marketing, while a higher k-factor is desirable, a k-factor below 1 can still contribute substantially to growth; in their early days, Dropbox and WhatsApp boasted k-factors of 0.7 and 0.4, respectively, which were major contributors to their financial success. K-factor is limited to measuring how directly effective word-of-mouth or member invitation schemes are, but there are other ways for a user to come to a new app. Social K-factor defined: with the advent of social media, a new evolution of the K-factor concept has emerged. The Social K-factor is an indicator of how viral a website is when content is shared from the website onto social media. It is a function of the Social Coefficient, which determines how fast content is spreading through social sharing, and the Sharing Ratio, a measure of how often the content is likely to be shared. As visitors to a website share its content on their social networks, the content can go viral because the social media posts attract new visitors who then share more content. The Social K-factor measures the lift delivered from social sharing. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
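As a rough illustration of the arithmetic described above, the following Python sketch computes k from i and c and projects a customer base over several invitation cycles; the function names and the simplifying assumption that each cohort sends its invites exactly once per cycle are mine, not part of the definition.

```python
def k_factor(invites_per_customer: float, conversion_rate: float) -> float:
    """k = i * c."""
    return invites_per_customer * conversion_rate

def project_growth(initial_customers: float, k: float, cycles: int) -> list:
    """Total customer base after each cycle, assuming every new cohort
    sends its invites exactly once and then stops inviting."""
    totals, cohort = [initial_customers], initial_customers
    for _ in range(cycles):
        cohort = cohort * k                  # new customers gained this cycle
        totals.append(totals[-1] + cohort)
    return [round(t) for t in totals]

print(project_growth(1000, k_factor(5, 0.2), 5))   # k = 1.0: steady state
print(project_growth(1000, 1.2, 5))                # k > 1: compounding growth
print(project_growth(1000, 0.7, 5))                # k < 1: each cohort smaller than the last
```

With k = 0.7, for example, 1000 seed users still bring in roughly 700, then 490, then 343 additional users, which is why a sub-1 k-factor can still be a meaningful growth channel even though it is not self-sustaining.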
**Black point compensation**
Black point compensation:
Black point compensation is a technique used in digital photography printing. It is a method of creating adjustments between the maximum black levels of digital files and the black capabilities of various digital devices.
External Links:
Setting Custom White Balance - Levels Part 4 - ronbigelow.com | Wayback Machine | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Localized heat contact urticaria**
Localized heat contact urticaria:
Localized heat contact urticaria is a cutaneous condition, one of the rarest forms of urticaria, where within minutes of contact with heat from any source, itching and wheals occur at the precise site of contact, lasting up to 1 hour. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**TURBOMOLE**
TURBOMOLE:
TURBOMOLE is an ab initio computational chemistry program that implements various quantum chemistry methods. It was initially developed by the group of Prof. Reinhart Ahlrichs at the University of Karlsruhe.
TURBOMOLE:
In 2007, TURBOMOLE GmbH, founded by R. Ahlrichs, F. Furche, C. Hättig, W. Klopper, M. Sierka, and F. Weigend, took over responsibility for coordinating the scientific development of the TURBOMOLE program, for which the company holds all copyright and intellectual property rights. In 2018 David P. Tew joined TURBOMOLE GmbH. Since 1987, the program has been a useful tool in many fields of research, including heterogeneous and homogeneous catalysis, organic and inorganic chemistry, spectroscopy, and biochemistry. This can be illustrated by the citation record of Ahlrichs' 1989 publication, which had been cited more than 6,700 times as of 18 July 2020. A second Turbomole article was published in 2014. The number of citations of both papers indicates that Turbomole's user base is expanding.
General features:
Turbomole was first developed in 1987 and grew into a mature program system under the direction of Reinhart Ahlrichs and his collaborators. Turbomole can perform large-scale quantum chemical simulations of molecules, clusters and, later, periodic solids. Gaussian basis sets are used in Turbomole. The functionality of the program concentrates on electronic structure methods with effective cost-performance characteristics, such as density functional theory, second-order Møller-Plesset theory and coupled cluster theory. Aside from energies and structures, an assortment of optical, electrical, and magnetic properties is available from analytical energy derivatives for electronic ground and excited states. Up to the year 2000, however, Turbomole was limited to calculations of molecules in the gas phase; COSMO was therefore implemented in Turbomole in a cooperative initiative of BASF AG and Bayer AG. Turbomole version 6.5, released in 2013, added post-Kohn-Sham calculations within the random-phase approximation. Other significant additions include nonadiabatic molecular dynamics, highly efficient higher-order CC methods, new density functionals and periodic calculations. TmoleX is available as a graphical user interface for Turbomole, allowing the user to perform the entire workflow of a quantum chemical investigation, from building an initial structure to interpreting the results.
Version history:
The current version of Turbomole is V7.3, released in July 2018. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Difluoride**
Difluoride:
Difluorides are chemical compounds with two fluorine atoms per molecule (or per formula unit).
Metal difluorides are all ionic. Despite being highly ionic, the alkaline earth metal difluorides generally have extremely high lattice stability and are thus insoluble in water; the exception is beryllium difluoride. By contrast, many transition metal difluorides are water-soluble.
Calcium difluoride is a notable compound. In the form of the mineral fluorite it is the major source of commercial fluorine. It also has an eponymous crystal structure, which is an end member of the spectrum starting from bixbyite and progressing through pyrochlore.
List of the difluorides:
Examples of the difluorides include: Alkaline earth metal difluorides The alkaline earth metals all exhibit the oxidation state +2, and form difluorides. The difluoride of radium is however not well established due to the element's high radioactivity.
List of the difluorides:
Beryllium difluoride Magnesium fluoride Calcium fluoride Strontium difluoride Barium fluoride Radium fluoride Lanthanide difluorides Samarium difluoride Europium difluoride Ytterbium difluoride Transition metal difluorides Compounds of the form MF2: Cadmium difluoride Chromium(II) fluoride Cobalt difluoride Copper(II) fluoride Iron(II) fluoride Manganese(II) fluoride Mercury difluoride Nickel difluoride Palladium difluoride Silver difluoride Zinc difluoride Post-transition metal difluorides Lead difluoride Tin(II) fluoride Nonmetal and metalloid difluorides Dinitrogen difluoride Oxygen difluoride Dioxygen difluoride Selenoyl difluoride Sulfur difluoride Disulfur difluoride Thionyl difluoride Germanium difluoride Noble gas difluorides Helium difluoride (hypothetical) Argon difluoride (predicted) Krypton difluoride Xenon difluoride Radon difluoride Bifluorides The bifluorides contain the two fluorine atoms in a covalently bound HF2− polyatomic ion rather than as F− anions.
List of the difluorides:
Ammonium bifluoride Potassium bifluoride Sodium bifluoride Organic difluorides Ethanedioyl difluoride Ethylidene difluoride Carbonyl difluoride Carbon dibromide difluoride (dibromodifluoromethane) Carbon dichloride difluoride (dichlorodifluoromethane) Methyl difluoride Methylphosphonyl difluoride Polyvinylidene difluoride | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DBLCI Optimum Yield Index**
DBLCI Optimum Yield Index:
In May 2006, Deutsche Bank launched a new set of commodity index products called the Deutsche Bank Liquid Commodities Indices Optimum Yield, or DBLCI-OY. The DBLCI-OY indices are available for 24 commodities drawn from the energy, precious metals, industrial metals, agricultural and livestock sectors. A DBLCI-OY index based on the DBLCI benchmark weights is also available, and the optimum yield technology has also been applied to the energy, precious metals, industrial metals and agricultural sector indices. Like the DBLCI, the DBLCI-OY is available in USD, EUR, GBP and JPY on a hedged and un-hedged basis. The DBLCI-OY is rebalanced on the fifth index business day of November, when each commodity is adjusted to its base weight. The DBLCI-OY is also listed as an exchange-traded fund (ETF) on the American Stock Exchange.
Methodology:
The rationale of the Optimum Yield technology was to address the dynamic nature of commodity forward curves. Unstable forward curves have meant that the traditional approach employed by commodity indices, namely rolling futures contracts on a predefined schedule (e.g. monthly), has, in Deutsche Bank's view, become an inferior strategy for passive commodity index investing. The DBLCI-OY indices are designed to select, from the list of tradeable futures that expire in the next 13 months, the futures contract that either maximises the positive roll yield in backwardated term structures or minimises the negative roll yield in contangoed markets.
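As an illustration of the selection rule just described, here is a minimal Python sketch of an "optimum yield" style roll. The annualised implied-roll-yield formula, the data layout and the prices are assumptions made for the example; they are not taken from Deutsche Bank's published methodology.

```python
from datetime import date

def implied_roll_yield(front_price, front_expiry, candidate_price, candidate_expiry):
    """Annualised yield implied by rolling from the expiring contract into a
    candidate: positive when deferred contracts are cheaper (backwardation),
    negative when they are more expensive (contango)."""
    days = (candidate_expiry - front_expiry).days
    return (front_price / candidate_price) ** (365.0 / days) - 1.0

def pick_optimum_contract(front_price, front_expiry, candidates, horizon_days=13 * 30):
    """Among contracts expiring within the horizon, choose the one that maximises
    the implied roll yield (maximum positive carry, or least negative carry)."""
    eligible = [(expiry, price) for expiry, price in candidates
                if 0 < (expiry - front_expiry).days <= horizon_days]
    return max(eligible, key=lambda c: implied_roll_yield(front_price, front_expiry, c[1], c[0]))

# Invented example: a WTI-style curve in backwardation.
curve = [(date(2024, 8, 20), 81.4), (date(2024, 11, 20), 78.5), (date(2025, 5, 20), 78.0)]
print(pick_optimum_contract(81.5, date(2024, 7, 20), curve))
```

With these invented prices the November contract is selected, because the price discount per unit of time is largest there; the rule therefore rolls further along the curve than a fixed monthly schedule would, and in a contangoed curve it would simply pick the contract with the smallest annualised premium.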
Methodology:
The changing pattern in commodity term structures has important implications for commodity index investing. Historically the engine room of performance within a commodity index has derived from the positive roll return generated in the energy sector due to the tendency for forward curves in this part of the commodity complex to be downward sloping or backwardated. However, the appearance of contango in the crude oil term structure over the past three years has meant the benefits of a positive roll return have disappeared and have been replaced by a negative roll return.
Characteristics:
Six commodities: WTI crude oil, heating oil, aluminium, gold, corn and wheat.
Index rolls to the futures contract that generates the maximum implied roll return from the list of tradable futures that expire in the next 14 months.
Commodity weights are re-balanced annually.
The DBLCI-OY is listed as an exchange-traded fund on the American Stock Exchange. Total and excess returns data are available from December 2, 1988. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Runting-stunting syndrome in broilers**
Runting-stunting syndrome in broilers:
Runting-stunting syndrome in broilers is a syndrome described in broilers since the 1940s, but often under specific etiological appellations (viral enteritis, malabsorption syndrome, brittle bone disease, infectious proventriculitis, helicopter disease and pale bird syndrome). It consists of stunted growth in birds, which is clearly visible in the second month of growth (30–42 days).
Symptoms:
The mortality of the flock is unaffected, but a certain proportion of birds (1 to 10 percent) show decreased body weights ("runts") and elevated feed conversion. This leads to reduced uniformity of the flock.
Aetiology:
Causing agents may include: viruses: reovirus (often considered a unique cause), adenoviruses, enteroviruses, rotaviruses, parvoviruses.
bacteria like Escherichia coli, Proteus mirabilis, Enterococcus faecium, Staphylococcus cohnii, Clostridium perfringens, Bacteroides fragilis and Bacillus licheniformis, often isolated in affected birds.
Control:
Reovirus vaccines are advocated (in dams or in broilers) but do not entirely solve the problem.
General hygiene and correct breeding conditions (especially correct brooding temperatures) may be effective, but the disease often disappears as suddenly as it appeared, which makes it difficult to assess the effectiveness of control measures. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Protein structure reconstruction**
Protein structure reconstruction:
Protein structure reconstruction refers to constructing an atomic-resolution model of a protein structure from incomplete coarse-grained representations like, for example, protein contact maps, positions of alpha carbon atoms only or backbone chain atoms only. There are many computational tools for protein structure reconstruction that are usually focused on specific reconstruction tasks which include: backbone reconstruction from alpha carbons, side-chains reconstruction from backbone chain atoms, hydrogen atoms reconstruction from heavy atoms positions and recovery of protein structure from contact maps.
Software:
Backbone reconstruction: Pulchra, BBQ, PD2. Side-chain reconstruction: Pulchra, SCWRL. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vulnerability management**
Vulnerability management:
Vulnerability management is the "cyclical practice of identifying, classifying, prioritizing, remediating, and mitigating" software vulnerabilities. Vulnerability management is integral to computer security and network security, and must not be confused with vulnerability assessment. Vulnerabilities can be discovered with a vulnerability scanner, which analyzes a computer system in search of known vulnerabilities, such as open ports, insecure software configurations, and susceptibility to malware infections. They may also be identified by consulting public sources, such as the NVD or vendor-specific security updates, or by subscribing to a commercial vulnerability alerting service. Unknown vulnerabilities, such as a zero-day, may be found with fuzz testing, which can identify certain kinds of vulnerabilities, such as a buffer overflow, with relevant test cases. Such analysis can be facilitated by test automation. In addition, antivirus software capable of heuristic analysis may discover undocumented malware if it finds software behaving suspiciously (such as attempting to overwrite a system file).
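As a toy illustration of the identify-and-prioritize steps described above (the package names, versions, advisory identifiers and CVSS scores below are invented for the example and are not drawn from the NVD or any real feed), a basic scan can be approximated by matching an installed-software inventory against a list of known-vulnerable versions and sorting the matches by severity.

```python
# Hypothetical inventory of installed software: package name -> installed version.
installed = {"openssl": "1.1.1k", "nginx": "1.18.0", "libxml2": "2.9.10"}

# Hypothetical advisory feed: (package, vulnerable version, advisory id, CVSS score).
advisories = [
    ("openssl", "1.1.1k", "EXAMPLE-2021-0001", 7.5),
    ("nginx",   "1.16.0", "EXAMPLE-2020-0042", 5.3),
    ("libxml2", "2.9.10", "EXAMPLE-2021-0107", 9.8),
]

def find_and_prioritize(installed, advisories):
    """Identify: report advisories whose vulnerable version matches an installed one.
    Prioritize: return the most severe findings first so they are remediated first."""
    hits = [(pkg, adv_id, score)
            for pkg, bad_version, adv_id, score in advisories
            if installed.get(pkg) == bad_version]
    return sorted(hits, key=lambda hit: hit[2], reverse=True)

for pkg, adv_id, score in find_and_prioritize(installed, advisories):
    print(f"{pkg}: {adv_id} (CVSS {score}) -> schedule patch, reconfiguration or mitigation")
```

A production scanner additionally has to understand version ranges, backported fixes and the context in which a package is used, which is why prioritisation in practice weighs exploitability and asset criticality alongside the raw severity score.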
Vulnerability management:
Correcting vulnerabilities may variously involve the installation of a patch, a change in network security policy, reconfiguration of software, or educating users about social engineering.
Project vulnerability management:
Project vulnerability is the project's susceptibility to being subject to negative events, the analysis of their impact, and the project's capability to cope with negative events. Based on Systems Thinking, project systemic vulnerability management takes a holistic vision, and proposes the following process: 1. Project vulnerability identification.
2. Vulnerability analysis.
3. Vulnerability response planning.
4. Vulnerability controlling – which includes implementation, monitoring, control, and lessons learned.
Project vulnerability management:
Coping with negative events is done, in this model, through resistance (the static aspect, referring to the capacity to withstand instantaneous damage) and resilience (the dynamic aspect, referring to the capacity to recover in time). Redundancy is a specific method to increase resistance and resilience in vulnerability management. Antifragility is a concept introduced by Nassim Nicholas Taleb to describe the capacity of systems not only to resist or recover from adverse events, but also to improve because of them. Antifragility is similar to the concept of positive complexity proposed by Stefan Morcov. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Semiotics of photography**
Semiotics of photography:
Semiotics is the study of meaning-making on the basis of signs. Semiotics of photography is the observation of symbolism used within photography or "reading" the picture. This article refers to realistic, unedited photographs not those that have been manipulated in any way.
Roland Barthes was one of the first people to study the semiotics of images. He developed a way to understand the meaning of images. Most of Barthes' studies related to advertising, but his concepts can apply to photography as well.
Denotation:
Denotation refers to the literal meaning carried by symbols or images: a denotation is "what we see" in the picture, what is actually "there" in the picture. According to author Clive Scott, this is another way of saying that a photograph has both a signified and a referent, and is both coded and encoded; this re-emphasizes the co-existence of the iconic and the indexical. In photography the photo itself is the signifier, and the signified is what the image is or represents, the literal meaning of the image.
Connotation:
Connotation (Semiotics) is arbitrary in that the meanings brought to the image are based on rules or conventions that the reader has learnt. Connotation attaches additional meaning to the first signifier, which is why the first signifier is often described in multiple words that include things like camera angle, color, lighting, etc. It is the immediate cultural meaning from what is seen in the picture, but not what is actually there. Connotation is what is implied by the image.
Coded iconic:
According to Roland Barthes the coded iconic message is the story that the image portrays. This message is easily understood and the images represent a clear relationship. The "reader" of the image applies their knowledge to the encoding of the photo. An image of a bowl of fruit for example might imply still life, freshness or market stalls.
Noncoded iconic:
Noncoded iconic is another part of Barthes' theory of understanding images. The noncoded message has nothing to do with the emotions evoked by the image as a whole; it is the "literal" denotation, the recognition of identifiable objects in the photograph, irrespective of the larger societal code. Using the bowl of fruit example, the photograph is just that: a bowl of fruit. A noncoded iconic message has no deeper meaning; the image is exactly what it shows. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cholesterol sulfate**
Cholesterol sulfate:
Cholesterol sulfate, or cholest-5-en-3β-ol sulfate, is an endogenous steroid and the C3β sulfate ester of cholesterol. It is formed from cholesterol by steroid sulfotransferases (SSTs) such as SULT2B1b (also known as cholesterol sulfotransferase) and is converted back into cholesterol by steroid sulfatase (STS). Accumulation of cholesterol sulfate in the skin is implicated in the pathophysiology of X-linked ichthyosis, a congenital disorder in which STS is non-functional and the body cannot convert cholesterol sulfate back into cholesterol. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Concrete hinge**
Concrete hinge:
Concrete hinges are hinges made of concrete, with no or almost no steel in the hinge neck, which allows rotation without a significant bending moment. These high rotations result from controlled tensile cracks as well as creep. Concrete hinges are mostly used in bridge engineering as a monolithic, simple and economic alternative to steel hinges, which would need regular maintenance. Concrete hinges are also used in tunnel engineering. A concrete hinge consists of the hinge neck, which has a reduced cross section, and the hinge heads, which are strongly reinforced.
History and guidelines:
Freyssinet invented the concrete hinge. Leonhardt introduced guidelines in the 1960s which remained in use into the 2010s.
Janßen introduced the application of concrete hinges in tunnel engineering.
Gladwell developed another guideline for narrowed cross sections, which predicts a stiffer behaviour than the Leonhardt/Janßen model. Marx and Schacht translated Leonhardt's guidelines for the first time into the semi-probabilistic safety concept used today.
Schlappal, Kalliauer and coworkers introduced for the first time both limit cases: serviceability limit states (SLS) and ultimate limit states (ULS).
Kaufmann, Markić and Bimschas did further studies on concrete hinges.
Stresses, rotational capacity, bearing capacity:
Due to triaxial compression, the strength in the neck region is much higher than under uniaxial compression, because lateral expansion is restricted. For typical dimensions, Eurocode 2 suggests a compressive strength equal to about twice the uniaxial compressive strength. The concrete hinge neck has no, or almost no, reinforcement, but the hinge heads need a dense reinforcement cage because of tensile splitting.
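To put a number on that enhancement, the sketch below evaluates the Eurocode 2 rule for partially loaded areas (EN 1992-1-1, clause 6.7), which is one provision that allows an increased bearing stress over a narrowed area; whether this is exactly the clause the paragraph has in mind is an assumption on my part, and the dimensions and design strength are invented for illustration.

```python
import math

def bearing_resistance_ec2(a_c0_mm2, a_c1_mm2, f_cd_mpa):
    """Partially loaded areas, EN 1992-1-1 clause 6.7:
    F_Rdu = A_c0 * f_cd * sqrt(A_c1 / A_c0), capped at 3.0 * f_cd * A_c0.
    Areas in mm^2, f_cd in MPa, result in N."""
    f_rdu = a_c0_mm2 * f_cd_mpa * math.sqrt(a_c1_mm2 / a_c0_mm2)
    return min(f_rdu, 3.0 * f_cd_mpa * a_c0_mm2)

# Invented example: a 400 mm x 100 mm hinge neck inside a 400 mm x 400 mm head, f_cd = 20 MPa.
a_c0 = 400 * 100          # loaded (neck) area
a_c1 = 400 * 400          # surrounding distribution (head) area
f_rdu = bearing_resistance_ec2(a_c0, a_c1, 20.0)
print(f"F_Rdu = {f_rdu / 1e3:.0f} kN, i.e. {f_rdu / (a_c0 * 20.0):.1f} x the uniaxial design resistance")
```

For this geometry the enhancement factor is sqrt(A_c1/A_c0) = 2.0, consistent with the "about twice the uniaxial compressive strength" figure quoted above; the rule presumes that the load spreads into a concentric, geometrically similar area, and the dense reinforcement in the hinge heads resists the associated tensile splitting forces, as noted above.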
Literature:
Fritz Leonhardt: Vorlesungen über Massivbau - Teil 2: Sonderfälle der Bemessung im Stahlbetonbau. Springer-Verlag, Berlin 1986, ISBN 3-540-16746-3, S. 123–132. (in German)
Concrete hinges: test report, recommendations for structural design.
Critical stress states of concrete under multiaxial static short-term loading.
VPI: Der Prüfingenieur. Ausgabe April 2010, S. 15–26, (bvpi.de PDF; 2,3 MB). (in German) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pickands–Balkema–De Haan theorem**
Pickands–Balkema–De Haan theorem:
The Pickands–Balkema–De Haan theorem gives the asymptotic tail distribution of a random variable, when its true distribution is unknown. It is often called the second theorem in extreme value theory. Unlike the first theorem (the Fisher–Tippett–Gnedenko theorem), which concerns the maximum of a sample, the Pickands–Balkema–De Haan theorem describes the values above a threshold.
The theorem owes its name to mathematicians James Pickands, Guus Balkema, and Laurens de Haan.
Conditional excess distribution function:
For an unknown distribution function F of a random variable X, the Pickands–Balkema–De Haan theorem describes the conditional distribution function F_u of the variable X above a certain threshold u. This is the so-called conditional excess distribution function, defined as

F_u(y) = P(X − u ≤ y | X > u) = (F(u + y) − F(u)) / (1 − F(u))

for 0 ≤ y ≤ x_F − u, where x_F is either the finite or infinite right endpoint of the underlying distribution F. The function F_u describes the distribution of the excess value over the threshold u, given that the threshold is exceeded.
Statement:
Let (X_1, X_2, …) be a sequence of independent and identically distributed random variables, and let F_u be their conditional excess distribution function. Pickands, Balkema and De Haan showed that for a large class of underlying distribution functions F, and for large u, F_u is well approximated by the generalized Pareto distribution; that is, F_u(y) → G_{k,σ}(y) as u → ∞, where

G_{k,σ}(y) = 1 − (1 + ky/σ)^(−1/k), if k ≠ 0,
G_{k,σ}(y) = 1 − e^(−y/σ), if k = 0.
Statement:
Here σ > 0, and y ≥ 0 when k ≥ 0, while 0 ≤ y ≤ −σ/k when k < 0. The special cases of the generalized Pareto distribution are: the exponential distribution with mean σ, if k = 0; the uniform distribution on [0, σ], if k = −1; and the Pareto distribution, if k > 0. Since a special case of the generalized Pareto distribution is a power law, the Pickands–Balkema–De Haan theorem is sometimes used to justify the use of a power law for modeling extreme events.
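In applied peaks-over-threshold modelling, the theorem is used by fitting a generalized Pareto distribution to the observed excesses over a high threshold. The following Python sketch illustrates this with SciPy; the simulated data, the threshold choice (the empirical 99th percentile) and the variable names are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_t(df=4, size=100_000)   # heavy-tailed sample standing in for "unknown F"

u = np.quantile(x, 0.99)                 # high threshold
excesses = x[x > u] - u                  # y = X - u for the exceedances

# Fit G_{k,sigma} to the excesses; loc is fixed at 0 because excesses start at zero.
k_hat, _, sigma_hat = stats.genpareto.fit(excesses, floc=0)
print(f"u = {u:.3f}, shape k = {k_hat:.3f}, scale sigma = {sigma_hat:.3f}")

# Tail estimate implied by the approximation: P(X > u + y) = P(X > u) * (1 - G_{k,sigma}(y)).
p_u = np.mean(x > u)
y = 2.0
print("P(X > u + 2) is approximately", p_u * stats.genpareto.sf(y, k_hat, loc=0, scale=sigma_hat))
```

For a Student t distribution with 4 degrees of freedom, the fitted shape parameter should come out near k = 1/4, the reciprocal of the tail index, which is the behaviour the theorem predicts for distributions in the Fréchet domain of attraction.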
Statement:
The theorem has been extended to include a wider range of distributions. While the extended versions cover, for example, the normal and log-normal distributions, there still exist continuous distributions that are not covered. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gyttja**
Gyttja:
Gyttja (sometimes gytta, from Swedish gyttja) is a mud formed from the partial decay of peat. It is black and has a gel-like consistency. Aerobic digestion of the peat by bacteria forms humic acid and reduces the peat in the first oxygenated metre (generally 0.5 metre) of the peat column. As the peat is buried under new peat or soil the oxygen is reduced, often by waterlogging, and further degradation by anaerobic microbes, anaerobic digestion can produce gyttja. The gyttja then slowly drains to the bottom of the column. It pools at the bottom of the peat column, about 10 metres (33 ft) below the surface or wherever it is stopped by e.g. compacted soil/peat, bedrock, or permafrost. Gyttja accumulates as long as new material is added to the top of the column and the conditions are right for anaerobic degradation of the peat. Gyttja can form in layers reflecting changes in the environment as with other sedimentary rock. Gyttja is the part of peat that forms coal, but it must be buried under thousands of meters for coalification to occur because it has to be hot enough to drive off the water it contains (see dopplerite). A good documented example of gyttja occurrence and its coverage change in time is the cultural heritage site in Puck Bay. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dalcroze eurhythmics**
Dalcroze eurhythmics:
Dalcroze eurhythmics, also known as the Dalcroze method or simply eurhythmics, is one of several developmental approaches including the Kodály method, Orff Schulwerk and Suzuki Method used to teach music to students. Eurhythmics was developed in the early 20th century by Swiss musician and educator Émile Jaques-Dalcroze. Dalcroze eurhythmics teaches concepts of rhythm, structure, and musical expression using movement, and is the concept for which Dalcroze is best known. It focuses on allowing the student to gain physical awareness and experience of music through training that takes place through all of the senses, particularly kinesthetic.
Dalcroze eurhythmics:
Eurhythmics often introduces a musical concept through movement before the students learn about its visual representation. This sequence translates to heightened body awareness and an association of rhythm with a physical experience for the student, reinforcing concepts kinesthetically. Eurhythmics has wide-ranging applications and benefits and can be taught to a variety of age groups. Eurhythmics classes for all ages share a common goal – to provide the music student with a solid rhythmic foundation through movement in order to enhance musical expression and understanding.
Émile Jaques-Dalcroze and the origins of eurhythmics:
Jaques-Dalcroze was appointed Professor of Harmony at the Conservatoire of Geneva in 1892, early in his career. As he taught his classes, he noticed that his students deeply needed an approach to learning music that included a kinesthetic component. He believed that in order to enhance and maximize musical expression, students needed to be trained early on to listen and appreciate music using both their minds and bodies. This coordination of mind and physical instincts formed the basis of his method.
Émile Jaques-Dalcroze and the origins of eurhythmics:
Ready to develop and employ an improved, integrated style of music education at the Conservatoire, Dalcroze discovered some obstacles. He found that students with innate rhythmic abilities were rare, just as are those with absolute, or "perfect," pitch. In response to his observations, he asserted that in order to develop rhythmic ability in his students, he must first, and as early as possible in their development, train them in exercises that utilized the entire body. Only when the student's muscles and motor skills were developed could they be properly equipped to interpret and understand musical ideas. As he mentioned in the foreword of his "Rhythm, Music, and Education," he sought the "connection between instincts for pitch and movement…time and energy, dynamics, and space, music and character, music and temperament, [and] finally the art of music and the art of dancing.” Because of the nature of his goals in expanding music education, his ideas are readily applicable to young students. An objective of his was to "musicalize" young children in order to prepare them for musical expression in future instrumental studies. He believed exposure to music, an expanded understanding of how to listen, and the training of gross and fine motor skills would yield faster progress later on in students’ musical studies. Related to this was his goal to sow the seeds of musical appreciation for future generations.
Émile Jaques-Dalcroze and the origins of eurhythmics:
As stated concisely by Claire-Lise Dutoit in her "Music Movement Therapy," successful eurhythmics lessons have the following three attributes in common: “The vital enjoyment of rhythmic movement and the confidence that it gives; the ability to hear, understand and express music in movement; [and] the call made on the pupil to improvise and develop freely his own ideas.”
Influences on the development of eurhythmics:
Before taking a post teaching theory, Émile Jaques-Dalcroze spent a year as a conductor in Algiers, where he was exposed to a rhythmic complexity that helped influence him to pay special attention to rhythmic aspects of music.
Jaques-Dalcroze also had an important friendship with Édouard Claparède, the renowned psychologist. In particular, their collaboration resulted in eurhythmics often employing games of change and quick reaction in order to focus attention and increase learning.
Current applications:
General education Eurhythmics classes are often offered as an addition to general education programs, whether in preschools, grade schools, or secondary schools. In this setting, the objectives of eurhythmics classes are to introduce students with a variety of musical backgrounds to musical concepts through movement without a specific performance-related goal.
Current applications:
For younger students, eurhythmics activities often imitate play. Games include musical storytelling, which associates different types of music with corresponding movements of the characters in a story. The youngest of students, who are typically experiencing their first exposure to musical knowledge in a eurhythmics class, learn to correlate types of notes with familiar movement; for example the quarter note is represented as a "walking note." As they progress, their musical vocabulary is expanded and reinforced through movement.
Current applications:
Performance-based applications While eurhythmics classes can be taught to general populations of students, they are also effective when geared toward music schools, either preparing students to begin instrumental studies or serving as a supplement to students who have already begun musical performance.
Aspects of a rhythmic curriculum:
Vocabulary Eurhythmics classes for students in elementary school through college and beyond can benefit from a rhythmic curriculum that explores rhythmic vocabulary. This vocabulary can be introduced and utilized in a number of different ways, but the primary objective of this component is to familiarize students with rhythmic possibilities and expand their horizons. Activities such as rhythmic dictation, composition, and the performance of rhythmic canons and polyrhythms can accommodate a wide range of meters and vocabulary. In particular, vocabulary can be organized according to number of subdivisions of the pulse.
Aspects of a rhythmic curriculum:
Movement A key component of a rhythmic education, movement provides another way of reinforcing rhythmic concepts - kinesthetic learning serves as a supplement to visual and aural learning. While the study of traditional classroom music theory reinforces concepts visually and encourages students to develop aural skills, the study of eurhythmics solidifies these concepts through movement. In younger students, the movement aspect of a rhythmic curriculum also develops musculature and gross motor skills. Ideally, most activities that are explored in eurhythmics classes should include some sort of kinesthetic reinforcement.
Aspects of a rhythmic curriculum:
Meter and Syncopation Another element of a rhythmic curriculum is the exploration of meter and syncopation. In particular, the study of meter should incorporate an organization of pulses and subdivisions. This organization can be expressed in a "meter chart," which can include both equal-beat and unequal-beat meters.
Aspects of a rhythmic curriculum:
The study of syncopation, a broad term that can involve a variety of rhythms that fall unexpectedly or somehow displace the pulse, is also essential in a rhythmic education. Eurhythmics classes can incorporate various activities to explore syncopation, including complex rhythmic dictations, the performance of syncopated rhythms, the exploration of syncopated rhythms in canon, and a general discussion of syncopated vocabulary.
Sample activities:
Ages 3–6: Warm-up activities: The students isolate and shake each body part, each one accompanied by different music.
Notes: Students learn about musical notation through associated movements. For example, quarter notes would be taught as “walking notes”. After familiarity with associated movements, note names are then introduced.
Storytelling: The teacher invents a story or uses a familiar storyline to incorporate rhythmic concepts Ball games: Students pass a ball around in different ways, exploring naturally occurring rhythm and developing motor skills Games with sticks: The students jump across a series of sticks on the floor, learning to coordinate body parts and their associated rhythm.
Drum activities: The students participate with small drums, getting to reproduce rhythm in an instrumental context. Ages 7+ (activities can be adapted to different age groups): Swings: The teacher plays music improvised in a preset metrical pattern. The students use prescribed body motions to determine the pattern.
Rhythmic dictation: The teacher plays a number of measures of music repeatedly, the rhythm of which the students dictate.
Rhythms: Students clap or step a predetermined rhythmic pattern. The teacher can experiment with augmentation and diminution.
Small group activities: Students work together in small groups to accomplish rhythmic tasks, encouraging cooperation.
Ball games: Students pass a ball around in different ways, exploring naturally occurring rhythm and developing motor skills.
Tempos: Students work to discover different tempos that can be applied to classical repertoire, familiar songs, or everyday movements. The teacher can also lead in experimenting with tempo relationships and adjustment.
Polyrhythms: The teacher establishes two rhythms to be performed at once, one in the hands and one in the feet.
Cross rhythms: Students produce one even rhythm in the hands against another even rhythm in the feet. The teacher prompts them to switch which rhythm is produced in each body part.
“Cosmic Whole Note”: Students listen to a slow pulse (an example would be 6 beats per minute), subdivide the space between sounds, and predict when the next pulse sounds by clapping.
Canon: Students listen to rhythmic vocabulary performed by the teacher and step this vocabulary in canon. This activity can be executed in a variety of meters.
“Microbeats”: Students learn syllables to represent 1-9 subdivisions of a beat. Associated activities could include performing microbeats in prescribed patterns, at varying tempi, in canon, or as sight-reading.
Effectiveness of Dalcroze eurhythmics:
A group of 72 pre-school children were tested on their rhythmic ability; half of the children had free play (35–40 min.) twice a week for a 10-week period, while the other half had rhythmic movement classes for the same amount of time. The group that had classes (the experimental group) did significantly better than the group that just had free play (the control group): in the final test, the experimental group scored four or more points higher than the control group in every area tested. This suggests that eurhythmics classes can benefit a child's sense of rhythm.
Higher education course offerings in eurhythmics:
Baldwin Wallace University offers a Solfege / Eurythmics course as part of its conservatory program (https://www.bw.edu/schools/conservatory-music/). Longy School of Music of Bard College has an extensive program, including Dalcroze certificate and license training. Carnegie Mellon University offers eurhythmics as part of the Martha Sanchez Dalcroze Training Center. Cleveland Institute of Music offers a Eurhythmics program, which includes degree programs for both intermediate (college-level) eurhythmics and children's eurhythmics.
Higher education course offerings in eurhythmics:
Hope College offers various Dalcroze eurythmics courses for music and dance majors/minors. Ohio State University offers courses for music and dance majors/minors. Colorado State University, Oberlin Conservatory of Music, and Stony Brook University also offer eurhythmics courses. University of Cincinnati – College-Conservatory of Music offers eurhythmics as part of their Percussion pedagogy [1] | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nipkow disk**
Nipkow disk:
A Nipkow disk (sometimes Anglicized as Nipkov disk; patented in 1884), also known as scanning disk, is a mechanical, rotating, geometrically operating image scanning device, patented by Paul Gottlieb Nipkow in Berlin. This scanning disk was a fundamental component in mechanical television, and thus the first televisions, through the 1920s and 1930s.
Operation:
The device is a mechanically spinning disk of any suitable material (metal, plastic, cardboard, etc.), with a series of equally-distanced circular holes of equal diameter drilled in it. The holes may also be square for greater precision. These holes are positioned to form a single-turn spiral starting from an external radial point of the disk and proceeding to the center of the disk. When the disk rotates, the holes trace circular ring patterns, with inner and outer diameter depending on each hole's position on the disk and thickness equal to each hole's diameter. The patterns may or may not partially overlap, depending on the exact construction of the disk. A lens projects an image of the scene in front of it directly onto the disk. Each hole in the spiral takes a "slice" through the image which is picked up as a temporal pattern of light and dark by a sensor. If the sensor is made to control a light behind a second Nipkow disk rotating synchronously at the same speed and in the same direction, the image will be reproduced line-by-line. The size of the reproduced image is again determined by the size of the disc; a larger disc produces a larger image.
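A rough geometric sketch of this layout may help (not from Nipkow's patent; the hole count, radii, and pitch below are arbitrary assumed values). It places the holes on a single-turn spiral and shows why each hole sweeps exactly one ring, i.e. one scanline:

```python
import math

def nipkow_hole_layout(n_holes=30, r_outer=200.0, line_pitch=2.0):
    """Place n_holes on a single-turn spiral (hypothetical dimensions in mm).

    Hole i sits at angle i * (360 / n_holes) degrees and at a radius that
    decreases by one line pitch per hole, so as the disk rotates each hole
    sweeps its own concentric ring, which becomes one scanline of the image.
    """
    holes = []
    for i in range(n_holes):
        angle = 2 * math.pi * i / n_holes      # angular position of hole i
        radius = r_outer - i * line_pitch      # radial position of hole i
        holes.append((radius * math.cos(angle),  # x coordinate on the disk
                      radius * math.sin(angle),  # y coordinate on the disk
                      radius))                   # ring radius = scanline position
    return holes

if __name__ == "__main__":
    for x, y, r in nipkow_hole_layout()[:3]:
        print(f"hole at ({x:7.1f}, {y:7.1f}) mm sweeps a ring of radius {r:.1f} mm")
```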
Operation:
When spinning the disk while observing an object "through" the disk, preferably through a relatively small circular sector of the disk (the viewport), for example, an angular quarter or eighth of the disk, the object seems "scanned" line by line, first by length or height or even diagonally, depending on the exact sector chosen for observation. By spinning the disk rapidly enough, the object seems complete and capturing of motion becomes possible. This can be intuitively understood by covering all of the disk but a small rectangular area with black cardboard (which stays fixed), spinning the disk and observing an object through the small area.
Advantages:
One of the advantages of using a Nipkow disk is that the image sensor (that is, the device converting light to electric signals) can be as simple as a single photocell or photodiode, since at each instant only a very small area is visible through the disk (and viewport), and so decomposing an image into lines is done almost by itself with little need for scanline timing, and very high scanline resolution. A simple acquisition device can be built by using an electrical motor driving a Nipkow disk, a small box containing a single light-sensitive (electric) element and a conventional image focusing device (lens, dark box, etc.).
Advantages:
Another advantage is that the receiving device is very similar to the acquisition device, except that the light-sensitive device is replaced by a variable light source, driven by the signal provided by the acquisition device. Some means of synchronizing the disks on the two devices must also be devised (several options are possible, ranging from manual to electronic control signals).
Advantages:
These facts helped immensely in building the first mechanical television accomplished by the Scottish inventor John Logie Baird, as well as the first "TV-Enthusiasts" communities and even experimental image radio broadcasts in the 1920s.
Disadvantages:
The resolution along a Nipkow disk's scanline is potentially very high, being an analogue scan. However, the maximum number of scanlines is much more limited, being equal to the number of holes on the disk, which in practice ranged from 30 to 100, with rare 200-hole disks tested.
Another drawback of the Nipkow disk as an image scanning device: the scanlines are not straight lines, but rather curves.
So the ideal Nipkow disk should have either a very large diameter, which means smaller curvature, or a very narrow angular opening of its viewport. Another way to produce acceptable images would be to drill smaller holes (millimeter or even micrometer scale) closer to the outer sectors of the disk, but technological evolution favoured electronic means of image acquisition.
Disadvantages:
Another significant disadvantage lay with reproducing images at the receiving end of the transmission, which was also accomplished with a Nipkow disk. The images were typically very small, only as large as the surface swept during scanning, which, in practical implementations of mechanical television, was about the size of a postage stamp for a disk 30 to 50 cm in diameter.
Disadvantages:
Further disadvantages include the non-linear geometry of the scanned images, and the impractical size of the disk, at least in the past. The Nipkow disks used in early TV receivers were roughly 30 cm to 50 cm in diameter, with 30 to 50 holes. The devices using them were also noisy and heavy with very low picture quality and a great deal of flickering. The acquisition part of the system was not much better, requiring very powerful lighting of the subject.
Disadvantages:
Disk scanners share a major limitation with the Farnsworth image dissector. Light is conveyed into the sensing system only as the small aperture scans over the field of view: at any instant, light is gathered through a very small aperture, so the net yield is only a microscopic percentage of the incident energy.
Iconoscopes (and their successors) accumulate energy on the target continuously, thereby integrating energy over time. The scanning system simply "picks off" the accumulated charge as it sweeps past each site on the target. Simple calculations show that, for equally sensitive photosensitive receptors, the iconoscope is hundreds to thousands of times more sensitive than the disk or the Farnsworth scanner.
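A back-of-the-envelope illustration of this light-gathering penalty (the numbers below are assumed for illustration, not historical measurements): for a 30-line picture roughly 40 hole-widths wide, the instantaneous aperture covers only about one part in 1,200 of the frame, so a non-integrating sensor discards well over 99.9% of the light that an integrating target could accumulate between readouts.

```python
# Hypothetical numbers: a 30-line mechanical TV picture, ~40 hole-widths wide.
lines = 30                                # scanlines = holes in the disk
cols = 40                                 # picture width, in units of the hole diameter
aperture_fraction = 1 / (lines * cols)    # fraction of the frame seen at any instant
print(f"instantaneous aperture covers {aperture_fraction:.4%} of the frame")
# -> roughly 0.08 %: the disk sensor throws away >99.9 % of the frame's light,
#    whereas an integrating target (iconoscope) accumulates it between readouts.
```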
The scanning disk can be replaced by a polygonal mirror, but this suffers from the same problem – lack of integration over time.
Applications:
Apart from the aforementioned mechanical television, which did not become popular for the practical reasons mentioned above, a Nipkow disk is used in one type of confocal microscope, a powerful optical microscope. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Quantum cohomology**
Quantum cohomology:
In mathematics, specifically in symplectic topology and algebraic geometry, a quantum cohomology ring is an extension of the ordinary cohomology ring of a closed symplectic manifold. It comes in two versions, called small and big; in general, the latter is more complicated and contains more information than the former. In each, the choice of coefficient ring (typically a Novikov ring, described below) significantly affects its structure, as well.
Quantum cohomology:
While the cup product of ordinary cohomology describes how submanifolds of the manifold intersect each other, the quantum cup product of quantum cohomology describes how subspaces intersect in a "fuzzy", "quantum" way. More precisely, they intersect if they are connected via one or more pseudoholomorphic curves. Gromov–Witten invariants, which count these curves, appear as coefficients in expansions of the quantum cup product.
Quantum cohomology:
Because it expresses a structure or pattern for Gromov–Witten invariants, quantum cohomology has important implications for enumerative geometry. It also connects to many ideas in mathematical physics and mirror symmetry. In particular, it is ring-isomorphic to symplectic Floer homology.
Throughout this article, X is a closed symplectic manifold with symplectic form ω.
Novikov ring:
Various choices of coefficient ring for the quantum cohomology of X are possible. Usually a ring is chosen that encodes information about the second homology of X. This allows the quantum cup product, defined below, to record information about pseudoholomorphic curves in X. For example, let $H_2(X) = H_2(X,\mathbb{Z})/\text{torsion}$ be the second homology modulo its torsion. Let R be any commutative ring with unit and Λ the ring of formal power series of the form
$$\lambda = \sum_{A \in H_2(X)} \lambda_A e^A,$$
where the coefficients $\lambda_A$ come from R, the $e^A$ are formal variables subject to the relation $e^A e^B = e^{A+B}$, and, for every real number C, only finitely many A with $\omega(A) \le C$ have nonzero coefficients $\lambda_A$. The variable $e^A$ is considered to be of degree $2c_1(A)$, where $c_1$ is the first Chern class of the tangent bundle TX, regarded as a complex vector bundle by choosing any almost complex structure compatible with ω. Thus Λ is a graded ring, called the Novikov ring for ω. (Alternative definitions are common.)
Small quantum cohomology:
Let $H^*(X) = H^*(X,\mathbb{Z})/\text{torsion}$ be the cohomology of X modulo its torsion. Define the small quantum cohomology with coefficients in Λ to be
$$QH^*(X,\Lambda) = H^*(X) \otimes_{\mathbb{Z}} \Lambda.$$
Its elements are finite sums of the form $\sum_i a_i \otimes \lambda_i$.
The small quantum cohomology is a graded R-module with $\deg(a_i \otimes \lambda_i) = \deg(a_i) + \deg(\lambda_i)$.
The ordinary cohomology H*(X) embeds into QH*(X, Λ) via $a \mapsto a \otimes 1$, and QH*(X, Λ) is generated as a Λ-module by H*(X).
For any two cohomology classes a, b in H*(X) of pure degree, and for any A in $H_2(X)$, define $(a*b)_A$ to be the unique element of H*(X) such that
$$\int_X (a*b)_A \smile c \;=\; GW^{X,A}_{0,3}(a,b,c) \quad \text{for every } c \in H^*(X).$$
(The right-hand side is a genus-0, 3-point Gromov–Witten invariant.) Then define
$$a * b \;:=\; \sum_{A \in H_2(X)} (a*b)_A \otimes e^A.$$
This extends by linearity to a well-defined Λ-bilinear map QH∗(X,Λ)⊗QH∗(X,Λ)→QH∗(X,Λ) called the small quantum cup product.
Geometric interpretation:
The only pseudoholomorphic curves in class A = 0 are constant maps, whose images are points. It follows that GW0,3X,0(a,b,c)=∫Xa⌣b⌣c; in other words, (a∗b)0=a⌣b.
Thus the quantum cup product contains the ordinary cup product; it extends the ordinary cup product to nonzero classes A.
Geometric interpretation:
In general, the Poincaré dual of (a∗b)A corresponds to the space of pseudoholomorphic curves of class A passing through the Poincaré duals of a and b. So while the ordinary cohomology considers a and b to intersect only when they meet at one or more points, the quantum cohomology records a nonzero intersection for a and b whenever they are connected by one or more pseudoholomorphic curves. The Novikov ring just provides a bookkeeping system large enough to record this intersection information for all classes A.
Example:
Let X be the complex projective plane with its standard symplectic form (corresponding to the Fubini–Study metric) and complex structure. Let $\ell \in H^2(X)$ be the Poincaré dual of a line L. Then
$$H^*(X) \cong \mathbb{Z}[\ell]/(\ell^3).$$
The only nonzero Gromov–Witten invariants are those of class A = 0 or A = L. It turns out that
$$\int_X (\ell^i * \ell^j)_0 \smile \ell^k = GW^{X,0}_{0,3}(\ell^i, \ell^j, \ell^k) = \delta_{i+j+k,\,2}$$
and
$$\int_X (\ell^i * \ell^j)_L \smile \ell^k = GW^{X,L}_{0,3}(\ell^i, \ell^j, \ell^k) = \delta_{i+j+k,\,5},$$
where δ is the Kronecker delta. Therefore
$$\ell * \ell = \ell^2 e^0 + 0\,e^L = \ell^2, \qquad \ell * \ell^2 = 0\,e^0 + 1\,e^L = e^L.$$
In this case it is convenient to rename $e^L$ as q and use the simpler coefficient ring $\mathbb{Z}[q]$. This q is of degree $6 = 2c_1(L)$. Then
$$QH^*(X, \mathbb{Z}[q]) \cong \mathbb{Z}[\ell, q]/(\ell^3 = q).$$
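Restating the computation above as a single chain (no new input, only the classes and conventions already fixed in this example), the quantum triple product of ℓ deforms the classical relation $\ell^3 = 0$ into $\ell^3 = q$:

```latex
% Classical cup product in H^*(CP^2): \ell^3 = 0, since degree 6 exceeds real dimension 4.
% Quantum cup product: the A = L term contributes, giving \ell^{*3} = q.
\[
\ell * \ell * \ell \;=\; \ell^{2} * \ell
\;=\; \underbrace{(\ell^{2} \smile \ell)}_{=\,0}\, e^{0}
\;+\; \underbrace{(\ell^{2} * \ell)_{L}}_{=\,1}\, e^{L}
\;=\; q,
\]
% consistent with QH^*(X, Z[q]) = Z[\ell, q]/(\ell^3 = q) as stated above.
```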
Properties of the small quantum cup product:
For a, b of pure degree, $\deg(a*b) = \deg(a) + \deg(b)$, and $a * b = (-1)^{\deg(a)\deg(b)}\, b * a$.
The small quantum cup product is distributive and Λ-bilinear. The identity element 1∈H0(X) is also the identity element for small quantum cohomology.
The small quantum cup product is also associative. This is a consequence of the gluing law for Gromov–Witten invariants, a difficult technical result. It is tantamount to the fact that the Gromov–Witten potential (a generating function for the genus-0 Gromov–Witten invariants) satisfies a certain third-order differential equation known as the WDVV equation.
An intersection pairing $QH^*(X,\Lambda) \otimes QH^*(X,\Lambda) \to R$ is defined by
$$\Big\langle \sum_i a_i \otimes \lambda_i,\; \sum_j b_j \otimes \mu_j \Big\rangle \;=\; \sum_{i,j} (\lambda_i)_0 (\mu_j)_0 \int_X a_i \smile b_j.$$
(The subscripts 0 indicate the A = 0 coefficient.) This pairing satisfies the associativity property ⟨a∗b,c⟩=⟨a,b∗c⟩.
Dubrovin connection:
When the base ring R is C, one can view the evenly graded part H of the vector space QH*(X, Λ) as a complex manifold. The small quantum cup product restricts to a well-defined, commutative product on H. Under mild assumptions, H with the intersection pairing ⟨,⟩ is then a Frobenius algebra.
The quantum cup product can be viewed as a connection on the tangent bundle TH, called the Dubrovin connection. Commutativity and associativity of the quantum cup product then correspond to zero-torsion and zero-curvature conditions on this connection.
Big quantum cohomology:
There exists a neighborhood U of 0 ∈ H such that ⟨,⟩ and the Dubrovin connection give U the structure of a Frobenius manifold. Any a in U defines a quantum cup product $*_a : H \otimes H \to H$ by the formula
$$\langle x *_a y, z \rangle \;:=\; \sum_{n} \sum_{A} \frac{1}{n!}\, GW^{X,A}_{0,n+3}(x, y, z, a, \ldots, a).$$
Collectively, these products on H are called the big quantum cohomology. All of the genus-0 Gromov–Witten invariants are recoverable from it; in general, the same is not true of the simpler small quantum cohomology.
Big quantum cohomology:
Small quantum cohomology contains the information of 3-point Gromov–Witten invariants only, whereas big quantum cohomology contains the information of all n-point (n ≥ 4) Gromov–Witten invariants as well. To obtain enumerative geometric information for some manifolds, we need to use big quantum cohomology. Small quantum cohomology would correspond to 3-point correlation functions in physics, while big quantum cohomology would correspond to all n-point correlation functions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Journal of Mixed Methods Research**
Journal of Mixed Methods Research:
The Journal of Mixed Methods Research is a peer-reviewed academic journal that publishes papers in the field of Research Methods. The journal's editors are Michael D. Fetters (Department of Family Medicine, Michigan Medicine, University of Michigan, United States) and Jose F. Molina-Azorin (University of Alicante, Alicante, Spain). It has been in publication since 2007 and is currently published by SAGE Publications.
Scope:
The Journal of Mixed Methods Research publishes empirical, methodological, and theoretical articles about mixed methods research across the social and behavioral sciences. The interdisciplinary journal aims to highlight where mixed methods research may be used more effectively, as well as issues of design and procedure.
Abstracting and indexing:
The Journal of Mixed Methods Research is abstracted and indexed in, among other databases: SCOPUS, and the Social Sciences Citation Index. According to the Journal Citation Reports, its 2018 impact factor is 3.524, ranking it first out of 98 journals in the category ‘Social Sciences, Interdisciplinary’. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cristina G. Fernandes**
Cristina G. Fernandes:
Cristina Gomes Fernandes is Professor of Computer Science at the University of São Paulo.
Cristina G. Fernandes:
Fernandes has a BSc in Computer Science from the University of São Paulo (1987), an MSc in Applied Mathematics from the University of São Paulo (1992), and a PhD in Computer Science from the Georgia Institute of Technology (1997); her thesis was titled Approximation Algorithms for Planar and Highly Connected Subgraphs. Her research focuses on combinatorial optimization, with emphasis on approximation algorithms, algorithm analysis, and computational complexity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fibrolytic bacterium**
Fibrolytic bacterium:
Fibrolytic bacteria constitute a group of microorganisms that are able to process complex plant polysaccharides thanks to their capacity to synthesize cellulolytic and hemicellulolytic enzymes. Polysaccharides are present in plant cell walls in a compact fibrous form, where they are mainly composed of cellulose and hemicellulose.
Fibrolytic enzymes, which are classified as cellulases, can hydrolyze the β(1→4) bonds in plant polysaccharides. Cellulase and hemicellulase (also known as xylanase) are the two main representatives of these enzymes.
Biological characteristics:
Fibrolytic bacteria use glycolysis and the pentose phosphate pathway as the main metabolic routes to catabolize carbohydrates in order to obtain energy and carbon backbones. They use ammonia as the major and practically exclusive source of nitrogen, and they require several B-vitamins for their development.
They often depend on other microorganisms to obtain some of their nutrients. Although their growth rate is considered slow, it can be enhanced in the presence of considerable amounts of short-chain fatty acids (isobutyric and isovaleric). These compounds are normally generated as a product of the amino acid fermentative activity of other microorganisms.
Because of their habitat conditions, most fibrolytic bacteria are anaerobic.
Cellulolytic communities:
Most fibrolytic bacteria are classified as Bacteroidota or Bacillota and include several bacterial species with diverse morphological and physiological characteristics.
They are normally commensal species which have a symbiotic relationship with different insect and mammal species, constituting one of the main components of their gastrointestinal flora. In fact, in herbivores each milliliter of ruminal content can contain about 50 million bacteria from a great variety of genera and species.
Given the importance of industrial processing of plant fibers in different fields, the genomic analysis of fibrolytic communities in the gastrointestinal tract of different animals may provide new biotechnological tools for the transformation of complex polysaccharides (including lignocellulosic biomass).
Applications:
So far, most applications are performed using enzymatic aqueous solutions containing one or more types of cellulases. Enzyme production for industrial use has its origins at the end of the nineteenth century in Denmark and Japan. An enzyme is a cellular product which can be obtained from animal and vegetable tissues, or through the biological activity of selected microorganisms. Enzymes are then used in different industrial processes.
Applications:
In order to produce enzymatic solutions for industrial applications, it is first necessary to obtain them in huge amounts and then purify them to a certain extent; this makes the production process long and expensive. One possible alternative would be working with microbial communities, which makes the process shorter and cheaper. However, process control is much more difficult when working with bacterial communities than when applying enzymatic solutions.
General applications:
In the early 1980s, enzymes produced by fibrolytic bacteria were incorporated into cattle feed. This allowed the animals to obtain more energy from the forage they fed on, thanks to the partial digestion of lignocellulosic material.
General applications:
They have been gaining importance in the food processing industry, in the filtration of fruit and vegetable juices, in edible oil extraction, in baking, etc. Furthermore, the use of these kinds of enzymes has progressively extended to the textile and laundry industry, where they are used to fade the intense blue of fabrics and to give them a more faded appearance.
General applications:
In the chemical industry, these enzymes have allowed the development of new detergents and washing-up liquids; in the paper industry they play a very important role in bleaching processes, minimizing toxicity and being more economical; and in biotechnological research, the use of the cellulose binding domains from fibrolytic enzymes has allowed the purification of recombinant proteins.
Energy applications:
Fibrolytic bacteria are expected to play an important role in renewable energy production through biomass degradation.
Energy applications:
One of the main objectives of biotechnology is biofuel production, with the aim of reducing CO2 emissions, because biofuels obtained from plant material contribute no net atmospheric input of CO2: the gas emitted during the combustion of biofuels of cellulolytic origin is reabsorbed during plant growth, which is why their environmental impact is less negative.
Discovery of fibrolytic genes and fibrolytic bacteria:
Probably the best studied fibrolytic community is the one in the rumen of ruminants. However, there are other organisms that are able to degrade vegetable fibres, from insects to mollusks; all of them can do so thanks to the activity of different microbial symbionts.
In order to improve the industrial transformation processes of vegetable fibres and related applications it is necessary to discover new and efficient enzymes and specialized bacterial communities.
Next we describe the main steps in the discovery of genes and genomes from fibrolytic bacteria.
The first step that can be followed to obtain fibrolytic bacteria from gastrointestinal cavities in ruminants is the culture of the target communities inside the rumen of a cow by introducing a nylon bag containing a forage with a high cellulose content (for example, Panicum virgatum).
An opening is made surgically so that the rumen is accessible from the outside through a plug that prevents the fistula from closing. The nylon bag is incubated in the rumen for 72 hours.
After incubation it is important to separate the microorganisms adhered to the vegetable fibres from the ones that are in suspension in the ruminal fluid.
Analysis of microbial community specificity:
To analyze the specificity of the community in the sample, one can compare the diversity of small-subunit ribosomal RNA sequences in the sample with that of a reference sample.
Analysis of microbial community specificity:
After extraction and purification of the sample DNA, emulsion PCR is used to amplify the small-ribosomal-subunit genes. Each amplicon is then sequenced by pyrosequencing. Once the sequences are obtained, they are compared and grouped according to their degree of similarity to define OTUs (Operational Taxonomic Units), which are groups of sequences belonging to phylogenetically close organisms.
Analysis of microbial community specificity:
By comparing the OTUs of the two samples, the differences between the two microbial communities can be assessed.
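A toy sketch of the grouping step may make it concrete (purely illustrative: the 97% threshold, the naive position-wise identity measure, and the made-up reads are assumptions, not the method used in the cited studies). Greedy clustering assigns each read to the first OTU whose representative sequence it matches closely enough:

```python
def identity(a: str, b: str) -> float:
    """Naive per-position identity over the overlapping length of two reads."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    return sum(x == y for x, y in zip(a[:n], b[:n])) / n

def greedy_otus(reads, threshold=0.97):
    """Group reads into OTUs: each read joins the first OTU whose representative
    it matches at >= threshold identity, otherwise it founds a new OTU."""
    otus = []  # list of (representative_sequence, [member_reads])
    for read in reads:
        for rep, members in otus:
            if identity(read, rep) >= threshold:
                members.append(read)
                break
        else:
            otus.append((read, [read]))
    return otus

# Toy 16S-like fragments (hypothetical): two near-identical reads and one distinct read.
reads = ["ACGTACGTACGTACGTACGT", "ACGTACGTACGTACGTACGA", "TTTTGGGGCCCCAAAATTTT"]
for rep, members in greedy_otus(reads):
    print(f"OTU represented by {rep}: {len(members)} read(s)")
```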
Metagenomic sequencing:
In order to obtain the sequences of lignocellulolytic genes, a metagenomic analysis is performed: sequencing and assembly of the whole DNA of the sample yields the sample's metagenome.
Identification of carbohydrate active genes:
The identification of genes that encode proteins with fibrolytic activity is done in two steps.
First, a bioinformatic analysis is performed. The sequences obtained in the metagenomic analysis are compared with the gene sequences of known fibrolytic proteins (for example, the sequences in the Carbohydrate-Active enZYmes database, CAZy).
This first step reduces the number of candidate genes considerably; these candidates are then used in the following step.
In the second step, a protein-expression library is built. Expression vectors are introduced into E. coli, and after these bacteria are grown, the supernatant is tested for biochemical activity on different substrates.
Identification of fibrolytic microorganisms:
To identify which microorganisms the identified enzymes belong to, and to check whether the metagenome assembly was correct, the different bacterial species in the sample can be separated by flow cytometry.
The use of specific antibodies labelled with fluorochromes makes it possible to separate the different cell types in the sample that belong to different phylogenetic groups. This technique is called fluorescence-activated cell sorting (FACS).
Once the different species of bacteria are separated, their genomes are sequenced and the metagenomic analysis can be validated. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Heat exchanger**
Heat exchanger:
A heat exchanger is a system used to transfer heat between a source and a working fluid. Heat exchangers are used in both cooling and heating processes. The fluids may be separated by a solid wall to prevent mixing or they may be in direct contact. They are widely used in space heating, refrigeration, air conditioning, power stations, chemical plants, petrochemical plants, petroleum refineries, natural-gas processing, and sewage treatment. The classic example of a heat exchanger is found in an internal combustion engine in which a circulating fluid known as engine coolant flows through radiator coils and air flows past the coils, which cools the coolant and heats the incoming air. Another example is the heat sink, which is a passive heat exchanger that transfers the heat generated by an electronic or a mechanical device to a fluid medium, often air or a liquid coolant.
Flow arrangement:
There are three primary classifications of heat exchangers according to their flow arrangement. In parallel-flow heat exchangers, the two fluids enter the exchanger at the same end, and travel in parallel to one another to the other side. In counter-flow heat exchangers the fluids enter the exchanger from opposite ends. The counter-current design is the most efficient, in that it can transfer the most heat from the heat (transfer) medium per unit mass, because the average temperature difference along any unit length is higher. See countercurrent exchange. In a cross-flow heat exchanger, the fluids travel roughly perpendicular to one another through the exchanger.
Flow arrangement:
For efficiency, heat exchangers are designed to maximize the surface area of the wall between the two fluids, while minimizing resistance to fluid flow through the exchanger. The exchanger's performance can also be affected by the addition of fins or corrugations in one or both directions, which increase surface area and may channel fluid flow or induce turbulence.
The driving temperature across the heat transfer surface varies with position, but an appropriate mean temperature can be defined. In most simple systems this is the "log mean temperature difference" (LMTD). Sometimes direct knowledge of the LMTD is not available and the NTU method is used.
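A minimal numerical sketch of the LMTD calculation follows (the temperatures, overall coefficient U, and area A below are made-up values for illustration, not data from any particular exchanger). The LMTD is the logarithmic mean of the two terminal temperature differences, and the duty follows from Q = U·A·LMTD:

```python
import math

def lmtd(dT1: float, dT2: float) -> float:
    """Log mean temperature difference from the two terminal temperature differences."""
    if dT1 == dT2:
        return dT1                      # limit case: equal terminal differences
    return (dT1 - dT2) / math.log(dT1 / dT2)

# Hypothetical counter-flow example: hot stream 150 -> 90 degC, cold stream 30 -> 70 degC.
dT_hot_end = 150 - 70                   # temperature difference at the hot-inlet end
dT_cold_end = 90 - 30                   # temperature difference at the hot-outlet end
dT_lm = lmtd(dT_hot_end, dT_cold_end)   # ~69.5 K

U = 500.0                               # assumed overall heat transfer coefficient, W/(m^2*K)
A = 12.0                                # assumed heat transfer area, m^2
Q = U * A * dT_lm                       # duty, W (Q = U * A * LMTD)
print(f"LMTD = {dT_lm:.1f} K, duty = {Q/1000:.0f} kW")
```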
Types:
Double pipe heat exchangers are the simplest exchangers used in industries. On one hand, these heat exchangers are cheap for both design and maintenance, making them a good choice for small industries. On the other hand, their low efficiency coupled with the high space occupied in large scales, has led modern industries to use more efficient heat exchangers like shell and tube or plate. However, since double pipe heat exchangers are simple, they are used to teach heat exchanger design basics to students as the fundamental rules for all heat exchangers are the same.
Types:
Double-pipe heat exchanger (a) One fluid flows through the smaller, inner pipe while the other fluid flows through the annular gap between the two pipes; the flow may be counter flow or parallel flow in a double pipe heat exchanger. (b) Parallel flow, where the hot and cold fluids enter at the same end, flow in the same direction and exit at the same end. (c) Counter flow, where the hot and cold fluids enter at opposite ends, flow in opposite directions and exit at opposite ends. Under comparable conditions, more heat is transferred in the counter-flow arrangement than in the parallel-flow heat exchanger. The temperature profiles of the two heat exchangers show two significant disadvantages of the parallel-flow design: the large temperature difference at the inlet produces high thermal stress, and the outlet temperature of the cold fluid can never exceed that of the hot fluid, which is a distinct disadvantage if the design is intended to raise the cold fluid temperature as far as possible. The parallel flow configuration is beneficial only where the two fluids are expected to be brought to nearly the same temperature. The counter flow heat exchanger has more significant advantages than the parallel flow design: it can reduce thermal stress and produce a more uniform rate of heat transfer. 2.
Types:
Shell-and-tube heat exchanger In a shell-and-tube heat exchanger, two fluids at different temperatures flow through the heat exchanger. One of the fluids flows through the tube side and the other fluid flows outside the tubes, but inside the shell (shell side).
Types:
Baffles are used to support the tubes, direct the fluid flow to the tubes in an approximately natural manner, and maximize the turbulence of the shell fluid. There are many various kinds of baffles, and the choice of baffle form, spacing, and geometry depends on the allowable flow rate of the drop in shell-side force, the need for tube support, and the flow-induced vibrations. There are several variations of shell-and-tube exchangers available; the differences lie in the arrangement of flow configurations and details of construction.
Types:
In application to cool air with shell-and-tube technology (such as intercooler / charge air cooler for combustion engines), fins can be added on the tubes to increase the heat transfer area on the air side and create a tubes & fins configuration. 3.
Types:
Plate Heat Exchanger A plate heat exchanger contains a number of thin, shaped heat transfer plates bundled together. The gasket arrangement of each pair of plates provides two separate channel systems. Each pair of plates forms a channel through which the fluid can flow; the pairs are attached by welding and bolting methods. In single channels the configuration of the gaskets enables flow through, which allows the main and secondary media to flow in counter-current. A gasketed plate heat exchanger has a heat transfer region formed from corrugated plates. The gaskets function as seals between plates and are located between the frame and pressure plates. Fluid flows in a counter-current direction throughout the heat exchanger, which produces efficient thermal performance. Plates are produced in different depths, sizes and corrugated shapes. Different types of plates are available, including plate and frame, plate and shell and spiral plate heat exchangers. The distribution area guarantees the flow of fluid to the whole heat transfer surface, helping to prevent stagnant areas that can cause accumulation of unwanted material on solid surfaces. High flow turbulence between plates results in a greater transfer of heat and a decrease in pressure. 4.
Types:
Condensers and Boilers Heat exchangers using a two-phase heat transfer system include condensers, boilers and evaporators. Condensers are devices that take hot gas or vapor and cool it to the point of condensation, transforming the gas into a liquid. The point at which liquid transforms into gas is called vaporization, and the reverse is called condensation. The surface condenser is the most common type of condenser, and it includes a water supply device. In a two-pass surface condenser, the pressure of the steam at the turbine outlet is low, the steam density is very low, and the volumetric flow rate is very high. To prevent a drop in pressure as steam moves from the turbine to the condenser, the condenser unit is placed beneath and connected to the turbine. Inside the tubes the cooling water runs in parallel passes, while steam enters through the wide opening at the top, moves vertically downward and passes over the tubes. Boilers were among the earliest applications of heat exchangers; the term steam generator is regularly used for a boiler unit in which a hot liquid stream, rather than combustion products, is the source of heat. Boilers are manufactured in a range of dimensions and configurations; some produce only hot fluid, while others are designed for steam production.
Types:
Shell and tube Shell and tube heat exchangers consist of a series of tubes which contain fluid that must be either heated or cooled. A second fluid runs over the tubes that are being heated or cooled so that it can either provide the heat or absorb the heat required. A set of tubes is called the tube bundle and can be made up of several types of tubes: plain, longitudinally finned, etc. Shell and tube heat exchangers are typically used for high-pressure applications (with pressures greater than 30 bar and temperatures greater than 260 °C), because they are robust due to their shape. Several thermal design features must be considered when designing the tubes in shell and tube heat exchangers, as outlined below. There can be many variations on the shell and tube design. Typically, the ends of each tube are connected to plenums (sometimes called water boxes) through holes in tubesheets. The tubes may be straight or bent in the shape of a U, called U-tubes.
Types:
Tube diameter: Using a small tube diameter makes the heat exchanger both economical and compact. However, it is more likely for the heat exchanger to foul up faster and the small size makes mechanical cleaning of the fouling difficult. To prevail over the fouling and cleaning problems, larger tube diameters can be used. Thus to determine the tube diameter, the available space, cost and fouling nature of the fluids must be considered.
Types:
Tube thickness: The thickness of the wall of the tubes is usually determined to ensure: enough allowance for corrosion; resistance to flow-induced vibration; axial strength; availability of spare parts; hoop strength (to withstand internal tube pressure); and buckling strength (to withstand overpressure in the shell). Tube length: heat exchangers are usually cheaper when they have a smaller shell diameter and a long tube length. Thus, typically there is an aim to make the heat exchanger as long as physically possible whilst not exceeding production capabilities. However, there are many limitations for this, including space available at the installation site and the need to ensure tubes are available in lengths that are twice the required length (so they can be withdrawn and replaced). Also, long, thin tubes are difficult to take out and replace.
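A rough sizing sketch shows how these choices interact (the duty, overall coefficient, LMTD, and tube dimensions below are assumed values for illustration, not design guidance): the required heat-transfer area and tube count follow from Q = U·A·ΔT_lm and the outside surface area of a single tube.

```python
import math

# Hypothetical design inputs
Q = 400e3          # duty, W
U = 500.0          # assumed overall heat transfer coefficient, W/(m^2*K)
dT_lm = 70.0       # assumed log mean temperature difference, K
d_outer = 0.019    # tube outside diameter, m (about 3/4 in)
length = 4.0       # tube length, m

A_required = Q / (U * dT_lm)                 # total outside area needed, m^2
area_per_tube = math.pi * d_outer * length   # outside area of one tube, m^2
n_tubes = math.ceil(A_required / area_per_tube)

print(f"required area ~ {A_required:.1f} m^2 -> about {n_tubes} tubes of {length} m")
# Longer tubes or a larger diameter reduce the tube count, echoing the trade-offs above.
```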
Types:
Tube pitch: when designing the tubes, it is practical to ensure that the tube pitch (i.e., the centre-centre distance of adjoining tubes) is not less than 1.25 times the tubes' outside diameter. A larger tube pitch leads to a larger overall shell diameter, which leads to a more expensive heat exchanger.
Tube corrugation: this type of tubes, mainly used for the inner tubes, increases the turbulence of the fluids and the effect is very important in the heat transfer giving a better performance.
Types:
Tube Layout: refers to how tubes are positioned within the shell. There are four main types of tube layout, which are, triangular (30°), rotated triangular (60°), square (90°) and rotated square (45°). The triangular patterns are employed to give greater heat transfer as they force the fluid to flow in a more turbulent fashion around the piping. Square patterns are employed where high fouling is experienced and cleaning is more regular.
Types:
Baffle Design: baffles are used in shell and tube heat exchangers to direct fluid across the tube bundle. They run perpendicularly to the shell and hold the bundle, preventing the tubes from sagging over a long length. They can also prevent the tubes from vibrating. The most common type of baffle is the segmental baffle. The semicircular segmental baffles are oriented at 180 degrees to the adjacent baffles, forcing the fluid to flow upward and downward between the tube bundle. Baffle spacing is of large thermodynamic concern when designing shell and tube heat exchangers: baffles must be spaced with consideration for the trade-off between pressure drop and heat transfer. For thermo-economic optimization it is suggested that the baffles be spaced no closer than 20% of the shell's inner diameter. Having baffles spaced too closely causes a greater pressure drop because of flow redirection; conversely, having the baffles spaced too far apart means that there may be cooler spots in the corners between baffles. It is also important to ensure the baffles are spaced close enough that the tubes do not sag. The other main type of baffle is the disc and doughnut baffle, which consists of two concentric baffles. An outer, wider baffle looks like a doughnut, whilst the inner baffle is shaped like a disk. This type of baffle forces the fluid to pass around each side of the disk then through the doughnut baffle, generating a different type of fluid flow.
Types:
Tubes & fins Design: in application to cool air with shell-and-tube technology (such as intercooler / charge air cooler for combustion engines), the difference in heat transfer between air and cold fluid can be such that there is a need to increase heat transfer area on the air side. For this function fins can be added on the tubes to increase heat transfer area on the air side and create a tubes & fins configuration. Fixed tube liquid-cooled heat exchangers especially suitable for marine and harsh applications can be assembled with brass shells, copper tubes, brass baffles, and forged brass integral end hubs. (See: Copper in heat exchangers).
Types:
Plate Another type of heat exchanger is the plate heat exchanger. These exchangers are composed of many thin, slightly separated plates that have very large surface areas and small fluid flow passages for heat transfer. Advances in gasket and brazing technology have made the plate-type heat exchanger increasingly practical. In HVAC applications, large heat exchangers of this type are called plate-and-frame; when used in open loops, these heat exchangers are normally of the gasket type to allow periodic disassembly, cleaning, and inspection. There are many types of permanently bonded plate heat exchangers, such as dip-brazed, vacuum-brazed, and welded plate varieties, and they are often specified for closed-loop applications such as refrigeration. Plate heat exchangers also differ in the types of plates that are used, and in the configurations of those plates. Some plates may be stamped with "chevron", dimpled, or other patterns, where others may have machined fins and/or grooves.
Types:
When compared to shell and tube exchangers, the stacked-plate arrangement typically has lower volume and cost. Another difference between the two is that plate exchangers typically serve low to medium pressure fluids, compared to medium and high pressures of shell and tube. A third and important difference is that plate exchangers employ more countercurrent flow rather than cross current flow, which allows lower approach temperature differences, high temperature changes, and increased efficiencies.
Types:
Plate and shell A third type of heat exchanger is a plate and shell heat exchanger, which combines plate heat exchanger with shell and tube heat exchanger technologies. The heart of the heat exchanger contains a fully welded circular plate pack made by pressing and cutting round plates and welding them together. Nozzles carry flow in and out of the platepack (the 'Plate side' flowpath). The fully welded platepack is assembled into an outer shell that creates a second flowpath (the 'Shell side'). Plate and shell technology offers high heat transfer, high pressure, high operating temperature, compact size, low fouling and close approach temperature. In particular, it dispenses entirely with gaskets, which provides security against leakage at high pressures and temperatures.
Types:
Adiabatic wheel A fourth type of heat exchanger uses an intermediate fluid or solid store to hold heat, which is then moved to the other side of the heat exchanger to be released. Two examples of this are adiabatic wheels, which consist of a large wheel with fine threads rotating through the hot and cold fluids, and fluid heat exchangers.
Types:
Plate fin This type of heat exchanger uses "sandwiched" passages containing fins to increase the effectiveness of the unit. The designs include crossflow and counterflow coupled with various fin configurations such as straight fins, offset fins and wavy fins.
Types:
Plate and fin heat exchangers are usually made of aluminum alloys, which provide high heat transfer efficiency. The material enables the system to operate at a lower temperature difference and reduce the weight of the equipment. Plate and fin heat exchangers are mostly used for low temperature services such as natural gas, helium and oxygen liquefaction plants, air separation plants and transport industries such as motor and aircraft engines.
Types:
Advantages of plate and fin heat exchangers: high heat transfer efficiency, especially in gas treatment; a larger heat transfer area; approximately 5 times lighter in weight than a comparable shell and tube heat exchanger.
Types:
Able to withstand high pressure. Disadvantages of plate and fin heat exchangers: might cause clogging, as the pathways are very narrow; the pathways are difficult to clean; aluminium alloys are susceptible to mercury liquid embrittlement failure. Finned tube The usage of fins in a tube-based heat exchanger is common when one of the working fluids is a low-pressure gas, and is typical for heat exchangers that operate using ambient air, such as automotive radiators and HVAC air condensers. Fins dramatically increase the surface area with which heat can be exchanged, which improves the efficiency of conducting heat to a fluid with very low thermal conductivity, such as air. The fins are typically made from aluminium or copper since they must conduct heat from the tube along the length of the fins, which are usually very thin.
Types:
The main construction types of finned tube exchangers are: A stack of evenly-spaced metal plates act as the fins and the tubes are pressed through pre-cut holes in the fins, good thermal contact usually being achieved by deformation of the fins around the tube. This is typical construction for HVAC air coils and large refrigeration condensers.
Fins are spiral-wound onto individual tubes as a continuous strip, the tubes can then be assembled in banks, bent in a serpentine pattern, or wound into large spirals.
Types:
Zig-zag metal strips are sandwiched between flat rectangular tubes, often being soldered or brazed together for good thermal and mechanical strength. This is common in low-pressure heat exchangers such as water-cooling radiators. Regular flat tubes will expand and deform if exposed to high pressures but flat microchannel tubes allow this construction to be used for high pressures. Stacked-fin or spiral-wound construction can be used for the tubes inside shell-and-tube heat exchangers when high efficiency thermal transfer to a gas is required.
Types:
In electronics cooling, heat sinks, particularly those using heat pipes, can have a stacked-fin construction.
Types:
Pillow plate A pillow plate heat exchanger is commonly used in the dairy industry for cooling milk in large direct-expansion stainless steel bulk tanks. Nearly the entire surface area of a tank can be integrated with this heat exchanger, without gaps that would occur between pipes welded to the exterior of the tank. Pillow plates can also be constructed as flat plates that are stacked inside a tank. The relatively flat surface of the plates allows easy cleaning, especially in sterile applications.
Types:
The pillow plate can be constructed using either a thin sheet of metal welded to the thicker surface of a tank or vessel, or two thin sheets welded together. The surface of the plate is welded with a regular pattern of dots or a serpentine pattern of weld lines. After welding the enclosed space is pressurised with sufficient force to cause the thin metal to bulge out around the welds, providing a space for heat exchanger liquids to flow, and creating a characteristic appearance of a swelled pillow formed out of metal.
Types:
Waste heat recovery units A waste heat recovery unit (WHRU) is a heat exchanger that recovers heat from a hot gas stream while transferring it to a working medium, typically water or oils. The hot gas stream can be the exhaust gas from a gas turbine or a diesel engine or a waste gas from industry or refinery.
Large systems with high volume and temperature gas streams, typical in industry, can benefit from steam Rankine cycle (SRC) in a waste heat recovery unit, but these cycles are too expensive for small systems. The recovery of heat from low temperature systems requires different working fluids than steam.
An organic Rankine cycle (ORC) waste heat recovery unit can be more efficient at low temperature range using refrigerants that boil at lower temperatures than water. Typical organic refrigerants are ammonia, pentafluoropropane (R-245fa and R-245ca), and toluene.
Types:
The refrigerant is boiled by the heat source in the evaporator to produce super-heated vapor. This fluid is expanded in the turbine to convert thermal energy to kinetic energy, that is converted to electricity in the electrical generator. This energy transfer process decreases the temperature of the refrigerant that, in turn, condenses. The cycle is closed and completed using a pump to send the fluid back to the evaporator.
Types:
Dynamic scraped surface Another type of heat exchanger is called "(dynamic) scraped surface heat exchanger". This is mainly used for heating or cooling with high-viscosity products, crystallization processes, evaporation and high-fouling applications. Long running times are achieved due to the continuous scraping of the surface, thus avoiding fouling and achieving a sustainable heat transfer rate during the process.
Types:
Phase-change In addition to heating up or cooling down fluids in just a single phase, heat exchangers can be used either to heat a liquid to evaporate (or boil) it or used as condensers to cool a vapor and condense it to a liquid. In chemical plants and refineries, reboilers used to heat incoming feed for distillation towers are often heat exchangers. Distillation set-ups typically use condensers to condense distillate vapors back into liquid.
Types:
Power plants that use steam-driven turbines commonly use heat exchangers to boil water into steam. Heat exchangers or similar units for producing steam from water are often called boilers or steam generators.
Types:
In the nuclear power plants called pressurized water reactors, special large heat exchangers pass heat from the primary (reactor plant) system to the secondary (steam plant) system, producing steam from water in the process. These are called steam generators. All fossil-fueled and nuclear power plants using steam-driven turbines have surface condensers to convert the exhaust steam from the turbines into condensate (water) for re-use. To conserve energy and cooling capacity in chemical and other plants, regenerative heat exchangers can transfer heat from a stream that must be cooled to another stream that must be heated, such as distillate cooling and reboiler feed pre-heating.
Types:
This term can also refer to heat exchangers that contain a material within their structure that has a change of phase. This is usually a solid to liquid phase due to the small volume difference between these states. This change of phase effectively acts as a buffer because it occurs at a constant temperature but still allows for the heat exchanger to accept additional heat. One example where this has been investigated is for use in high power aircraft electronics.
Types:
Heat exchangers functioning in multiphase flow regimes may be subject to the Ledinegg instability.
Types:
Direct contact Direct contact heat exchangers involve heat transfer between hot and cold streams of two phases in the absence of a separating wall. Thus such heat exchangers can be classified as: gas–liquid; immiscible liquid–liquid; and solid–liquid or solid–gas. Most direct contact heat exchangers fall under the gas–liquid category, where heat is transferred between a gas and a liquid in the form of drops, films or sprays. Such types of heat exchangers are used predominantly in air conditioning, humidification, industrial hot water heating, water cooling and condensing plants.
Types:
Microchannel Microchannel heat exchangers are multi-pass parallel flow heat exchangers consisting of three main elements: manifolds (inlet and outlet), multi-port tubes with hydraulic diameters smaller than 1 mm, and fins. All the elements are usually brazed together using a controllable-atmosphere brazing process. Microchannel heat exchangers are characterized by high heat transfer ratios, low refrigerant charges, compact size, and lower air-side pressure drops compared to finned tube heat exchangers. Microchannel heat exchangers are widely used in the automotive industry as car radiators, and as condenser, evaporator, and cooling/heating coils in the HVAC industry.
Types:
Micro heat exchangers, micro-scale heat exchangers, or microstructured heat exchangers are heat exchangers in which (at least one) fluid flows in lateral confinements with typical dimensions below 1 mm. The most typical such confinements are microchannels, which are channels with a hydraulic diameter below 1 mm. Microchannel heat exchangers can be made from metal or ceramics. They can be used in many applications, including high-performance aircraft gas turbine engines, heat pumps, microprocessor and microchip cooling, and air conditioning.
HVAC and refrigeration air coils:
One of the widest uses of heat exchangers is for refrigeration and air conditioning. This class of heat exchangers is commonly called air coils, or just coils due to their often-serpentine internal tubing, or condensers in the case of refrigeration, and are typically of the finned tube type. Liquid-to-air, or air-to-liquid HVAC coils are typically of modified crossflow arrangement. In vehicles, heat coils are often called heater cores.
HVAC and refrigeration air coils:
On the liquid side of these heat exchangers, the common fluids are water, a water-glycol solution, steam, or a refrigerant. For heating coils, hot water and steam are the most common, and this heated fluid is supplied by boilers, for example. For cooling coils, chilled water and refrigerant are most common. Chilled water is supplied from a chiller that is potentially located very far away, but refrigerant must come from a nearby condensing unit. When a refrigerant is used, the cooling coil is the evaporator, and the heating coil is the condenser in the vapor-compression refrigeration cycle. HVAC coils that use this direct expansion of refrigerants are commonly called DX coils. Some DX coils are "microchannel" type. On the air side of HVAC coils a significant difference exists between those used for heating, and those for cooling. Due to psychrometrics, air that is cooled often has moisture condensing out of it, except with extremely dry air flows. Heating some air increases that airflow's capacity to hold water. So heating coils need not consider moisture condensation on their air-side, but cooling coils must be adequately designed and selected to handle their particular latent (moisture) as well as the sensible (cooling) loads. The water that is removed is called condensate.
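A minimal numerical sketch of the sensible/latent split on a cooling coil follows (the air states below are assumed for illustration, using the common approximations Q_sensible ≈ ṁ·c_p·ΔT and Q_latent ≈ ṁ·h_fg·Δw):

```python
# Hypothetical cooling-coil air-side conditions
m_dot = 1.0        # dry-air mass flow, kg/s
cp_air = 1.006     # specific heat of dry air, kJ/(kg*K)
h_fg = 2501.0      # latent heat of vaporization of water, kJ/kg (approx.)

T_in, T_out = 27.0, 13.0          # air dry-bulb temperatures, degC
w_in, w_out = 0.0112, 0.0084      # humidity ratios, kg water / kg dry air

Q_sensible = m_dot * cp_air * (T_in - T_out)    # ~14.1 kW of sensible cooling
Q_latent = m_dot * h_fg * (w_in - w_out)        # ~7.0 kW removed as condensed moisture
print(f"sensible: {Q_sensible:.1f} kW, latent: {Q_latent:.1f} kW")
# A heating coil sees essentially no latent load, since warming air cannot condense moisture.
```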
HVAC and refrigeration air coils:
For many climates, water or steam HVAC coils can be exposed to freezing conditions. Because water expands upon freezing, these somewhat expensive and difficult to replace thin-walled heat exchangers can easily be damaged or destroyed by just one freeze. As such, freeze protection of coils is a major concern of HVAC designers, installers, and operators.
HVAC and refrigeration air coils:
The introduction of indentations placed within the heat exchange fins controlled condensation, allowing water molecules to remain in the cooled air. The heat exchangers in direct-combustion furnaces, typical in many residences, are not 'coils'. They are, instead, gas-to-air heat exchangers that are typically made of stamped steel sheet metal. The combustion products pass on one side of these heat exchangers, and the air to be heated on the other. A cracked heat exchanger is therefore a dangerous situation that requires immediate attention because combustion products may enter living space.
Helical-coil:
Although double-pipe heat exchangers are the simplest to design, the helical-coil heat exchanger (HCHE) is the better choice in the cases listed below. The main advantage of the HCHE, like that of the spiral heat exchanger (SHE), is its highly efficient use of space, especially when space is limited and not enough straight pipe can be laid.
Under conditions of low flow rates (or laminar flow), where typical shell-and-tube exchangers have low heat-transfer coefficients and become uneconomical.
When there is low pressure in one of the fluids, usually from accumulated pressure drops in other process equipment.
Helical-coil:
When one of the fluids has components in multiple phases (solids, liquids, and gases), which tends to create mechanical problems during operations, such as plugging of small-diameter tubes. Cleaning of helical coils for these multiple-phase fluids can prove to be more difficult than for their shell and tube counterparts; however, the helical coil unit would require cleaning less often. These have been used in the nuclear industry as a method for exchanging heat in a sodium system for large liquid metal fast breeder reactors since the early 1970s, using an HCHE device invented by Charles E. Boardman and John H. Germer. There are several simple methods for designing HCHEs for all types of manufacturing industries, such as the Ramachandra K. Patil (et al.) method from India and the Scott S. Haraburda method from the United States. However, these are based upon assumptions of estimating the inside heat transfer coefficient, predicting flow around the outside of the coil, and upon constant heat flux.
Spiral:
A modification to the perpendicular flow of the typical HCHE involves the replacement of the shell with another coiled tube, allowing the two fluids to flow parallel to one another; this requires the use of different design calculations. These are the spiral heat exchangers (SHE), which may refer to a helical (coiled) tube configuration; more generally, the term refers to a pair of flat surfaces that are coiled to form the two channels in a counter-flow arrangement. Each of the two channels has one long curved path. A pair of fluid ports are connected tangentially to the outer arms of the spiral, and axial ports are common, but optional. The main advantage of the SHE is its highly efficient use of space. This attribute is often leveraged and partially reallocated to gain other improvements in performance, according to well known tradeoffs in heat exchanger design. (A notable tradeoff is capital cost vs operating cost.) A compact SHE may be used to have a smaller footprint and thus lower all-around capital costs, or an oversized SHE may be used to have less pressure drop, less pumping energy, higher thermal efficiency, and lower energy costs.
Spiral:
Construction The distance between the sheets in the spiral channels is maintained by using spacer studs that were welded prior to rolling. Once the main spiral pack has been rolled, alternate top and bottom edges are welded and each end closed by a gasketed flat or conical cover bolted to the body. This ensures no mixing of the two fluids occurs. Any leakage is from the periphery cover to the atmosphere, or to a passage that contains the same fluid.
Spiral:
Self cleaning Spiral heat exchangers are often used in the heating of fluids that contain solids and thus tend to foul the inside of the heat exchanger. The low pressure drop lets the SHE handle fouling more easily. The SHE uses a “self cleaning” mechanism, whereby fouled surfaces cause a localized increase in fluid velocity, thus increasing the drag (or fluid friction) on the fouled surface, thus helping to dislodge the blockage and keep the heat exchanger clean. "The internal walls that make up the heat transfer surface are often rather thick, which makes the SHE very robust, and able to last a long time in demanding environments." They are also easily cleaned, opening out like an oven where any buildup of foulant can be removed by pressure washing.
Spiral:
Self-cleaning water filters are used to keep the system clean and running without the need to shut down or replace cartridges and bags.
Flow arrangements
There are three main types of flows in a spiral heat exchanger:
Counter-current flow: Fluids flow in opposite directions. These are used for liquid-liquid, condensing and gas cooling applications. Units are usually mounted vertically when condensing vapour and mounted horizontally when handling high concentrations of solids.
Spiral:
Spiral Flow/Cross Flow: One fluid is in spiral flow and the other in a cross flow. Spiral flow passages are welded at each side for this type of spiral heat exchanger. This type of flow is suitable for handling low density gas, which passes through the cross flow, avoiding pressure loss. It can be used for liquid-liquid applications if one liquid has a considerably greater flow rate than the other.
Spiral:
Distributed Vapour/Spiral flow: This design is that of a condenser, and is usually mounted vertically. It is designed to cater for the sub-cooling of both condensate and non-condensables. The coolant moves in a spiral and leaves via the top. Hot gases that enter leave as condensate via the bottom outlet.
Applications
The spiral heat exchanger is good for applications such as pasteurization, digester heating, heat recovery, pre-heating (see: recuperator), and effluent cooling. For sludge treatment, SHEs are generally smaller than the other types of heat exchangers used to transfer the heat.
Selection:
Due to the many variables involved, selecting optimal heat exchangers is challenging. Hand calculations are possible, but many iterations are typically needed. As such, heat exchangers are most often selected via computer programs, either by system designers, who are typically engineers, or by equipment vendors.
To select an appropriate heat exchanger, the system designers (or equipment vendors) would firstly consider the design limitations for each heat exchanger type.
Selection:
Though cost is often the primary criterion, several other selection criteria are important:
High/low pressure limits
Thermal performance
Temperature ranges
Product mix (liquid/liquid, particulates or high-solids liquid)
Pressure drops across the exchanger
Fluid flow capacity
Cleanability, maintenance and repair
Materials required for construction
Ability and ease of future expansion
Material selection, such as copper, aluminium, carbon steel, stainless steel, nickel alloys, ceramic, polymer, and titanium.
Small-diameter coil technologies are becoming more popular in modern air conditioning and refrigeration systems because they have better rates of heat transfer than conventionally sized condenser and evaporator coils with round copper tubes and aluminum or copper fins that have been the standard in the HVAC industry. Small-diameter coils can withstand the higher pressures required by the new generation of environmentally friendlier refrigerants. Two small-diameter coil technologies are currently available for air conditioning and refrigeration products: copper microgroove and brazed aluminum microchannel.
Choosing the right heat exchanger (HX) requires some knowledge of the different heat exchanger types, as well as the environment where the unit must operate. Typically in the manufacturing industry, several differing types of heat exchangers are used for just one process or system to derive the final product. For example, a kettle HX for pre-heating, a double pipe HX for the 'carrier' fluid and a plate and frame HX for final cooling. With sufficient knowledge of heat exchanger types and operating requirements, an appropriate selection can be made to optimise the process.
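As an illustration of the screening step that selection programs typically perform first (checking duty conditions against each type's design limitations, as described above), the following is a minimal sketch under stated assumptions: the candidate types, limit values, and function names are illustrative, not vendor data or any particular tool's API.

```python
# Minimal sketch of a first-pass screening step: filter candidate exchanger types
# against a duty's pressure, temperature and solids requirements before any detailed
# thermal rating. All limit values below are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class ExchangerType:
    name: str
    max_pressure_bar: float
    max_temperature_c: float
    handles_particulates: bool

CANDIDATES = [
    ExchangerType("plate and frame", 25, 180, False),
    ExchangerType("shell and tube", 300, 500, True),
    ExchangerType("spiral", 15, 400, True),
]

def feasible(duty_pressure_bar, duty_temperature_c, has_solids):
    """Return the candidate types whose (placeholder) design limits cover the duty."""
    return [
        c.name for c in CANDIDATES
        if c.max_pressure_bar >= duty_pressure_bar
        and c.max_temperature_c >= duty_temperature_c
        and (c.handles_particulates or not has_solids)
    ]

print(feasible(duty_pressure_bar=10, duty_temperature_c=150, has_solids=True))
# -> ['shell and tube', 'spiral'] with these placeholder limits
```

A real selection tool would follow such a screening pass with detailed thermal rating, pressure-drop calculation and costing of the surviving candidates.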
Monitoring and maintenance:
Online monitoring of commercial heat exchangers is done by tracking the overall heat transfer coefficient. The overall heat transfer coefficient tends to decline over time due to fouling.
By periodically calculating the overall heat transfer coefficient from exchanger flow rates and temperatures, the owner of the heat exchanger can estimate when cleaning the heat exchanger is economically attractive.
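A minimal sketch of that periodic calculation is shown below, assuming one stream's flow rate, specific heat and terminal temperatures are measured; the stream values, area and names are illustrative, not data from any particular plant.

```python
# Estimate the duty from one stream's flow and temperature change, then back out the
# overall heat transfer coefficient U from the log mean temperature difference.
from math import log

def lmtd(dt_hot_end, dt_cold_end):
    """Log mean temperature difference for the two terminal temperature approaches."""
    if abs(dt_hot_end - dt_cold_end) < 1e-9:
        return dt_hot_end
    return (dt_hot_end - dt_cold_end) / log(dt_hot_end / dt_cold_end)

def overall_u(m_dot, cp, t_in, t_out, area, dt_hot_end, dt_cold_end):
    """U in W/(m^2 K) from measured flow rate (kg/s), cp (J/kg K) and temperatures."""
    duty = m_dot * cp * abs(t_in - t_out)          # W
    return duty / (area * lmtd(dt_hot_end, dt_cold_end))

# Counter-current example: hot stream 150 -> 90 degC, cold stream 30 -> 80 degC.
u_now = overall_u(m_dot=12.0, cp=2100.0, t_in=150.0, t_out=90.0,
                  area=85.0, dt_hot_end=150 - 80, dt_cold_end=90 - 30)
print(f"U = {u_now:.0f} W/(m2 K)")  # trend this value over time to schedule cleaning
```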
The integrity of plate and tubular heat exchangers can be tested in situ by the conductivity or helium gas methods. These methods confirm the integrity of the plates or tubes to prevent any cross contamination, and the condition of the gaskets.
Mechanical integrity monitoring of heat exchanger tubes may be conducted through nondestructive methods such as eddy current testing.
Fouling
Fouling occurs when impurities deposit on the heat exchange surface.
Monitoring and maintenance:
Deposition of these impurities can decrease heat transfer effectiveness significantly over time. Deposits are caused by:
Low wall shear stress
Low fluid velocities
High fluid velocities
Reaction product solid precipitation
Precipitation of dissolved impurities due to elevated wall temperatures
The rate of heat exchanger fouling is determined by the rate of particle deposition less re-entrainment/suppression. This model was originally proposed in 1959 by Kern and Seaton.
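A hedged sketch of the Kern and Seaton picture follows: when the removal term grows with the deposit already present, the deposition-minus-re-entrainment balance integrates to an asymptotic fouling resistance curve. The asymptote and time constant below are placeholders, not fitted values.

```python
# Asymptotic fouling resistance of the Kern-Seaton form:
#   Rf(t) = Rf_inf * (1 - exp(-t / theta))
# Rf_inf (m^2 K / W) and theta (hours) are illustrative placeholders.
import math

def kern_seaton_rf(t_hours, rf_inf=4e-4, theta_hours=600.0):
    """Fouling resistance after t_hours of service under the asymptotic model."""
    return rf_inf * (1.0 - math.exp(-t_hours / theta_hours))

for t in (0, 200, 600, 2000):
    print(t, f"{kern_seaton_rf(t):.2e}")
```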
Monitoring and maintenance:
Crude Oil Exchanger Fouling. In commercial crude oil refining, crude oil is heated from 21 °C (70 °F) to 343 °C (649 °F) prior to entering the distillation column. A series of shell and tube heat exchangers typically exchange heat between crude oil and other oil streams to heat the crude to 260 °C (500 °F) prior to heating in a furnace. Fouling occurs on the crude side of these exchangers due to asphaltene insolubility. The nature of asphaltene solubility in crude oil was successfully modeled by Wiehe and Kennedy. The precipitation of insoluble asphaltenes in crude preheat trains has been successfully modeled as a first order reaction by Ebert and Panchal who expanded on the work of Kern and Seaton.
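The following is only a schematic rate expression of the threshold-fouling type associated with Ebert and Panchal: an Arrhenius-style deposition term competing with a wall-shear removal term. The exponents and constants here are placeholders for illustration, not the published crude-oil correlation.

```python
# Schematic threshold-fouling rate in the spirit of Ebert and Panchal: deposition
# (Arrhenius in film temperature, decreasing with Reynolds number) minus a removal
# term proportional to wall shear stress. Every constant is a placeholder.
import math

R_GAS = 8.314  # J/(mol K)

def fouling_rate(reynolds, film_temp_k, wall_shear_pa,
                 alpha=1.0, beta=-0.66, activation_j_mol=4.0e4, gamma=1.0e-9):
    """d(Rf)/dt: positive -> fouling accumulates, <= 0 -> below the fouling threshold."""
    deposition = alpha * reynolds**beta * math.exp(-activation_j_mol / (R_GAS * film_temp_k))
    removal = gamma * wall_shear_pa
    return deposition - removal

# Higher velocity (higher Re and wall shear) pushes the rate toward the threshold.
print(fouling_rate(reynolds=10_000, film_temp_k=520, wall_shear_pa=5.0))
print(fouling_rate(reynolds=40_000, film_temp_k=520, wall_shear_pa=60.0))
```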
Monitoring and maintenance:
Cooling water systems are susceptible to fouling. Cooling water typically has a high total dissolved solids content and suspended colloidal solids. Localized precipitation of dissolved solids occurs at the heat exchange surface due to wall temperatures higher than the bulk fluid temperature. Low fluid velocities (less than 3 ft/s) allow suspended solids to settle on the heat exchange surface. Cooling water is typically on the tube side of a shell and tube exchanger because it is easier to clean. To prevent fouling, designers typically ensure that cooling water velocity is greater than 0.9 m/s and bulk fluid temperature is maintained below 60 °C (140 °F). Other approaches to fouling control combine the "blind" application of biocides and anti-scale chemicals with periodic lab testing.
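The design guidance in this paragraph can be captured in a trivial check; the thresholds come directly from the text, while the function itself is only an illustrative sketch.

```python
# Keep tube-side cooling water above ~0.9 m/s (about 3 ft/s) and below ~60 degC
# bulk temperature to limit settling and scaling, per the guidance above.
def cooling_water_fouling_risk(velocity_m_s, bulk_temp_c):
    warnings = []
    if velocity_m_s < 0.9:
        warnings.append("velocity < 0.9 m/s: suspended solids may settle")
    if bulk_temp_c > 60.0:
        warnings.append("bulk temperature > 60 degC: scaling of dissolved solids likely")
    return warnings or ["within the stated design guidance"]

print(cooling_water_fouling_risk(0.6, 65.0))
print(cooling_water_fouling_risk(1.2, 45.0))
```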
Monitoring and maintenance:
Maintenance
Plate and frame heat exchangers can be disassembled and cleaned periodically. Tubular heat exchangers can be cleaned by such methods as acid cleaning, sandblasting, high-pressure water jets, bullet cleaning, or drill rods.
In large-scale cooling water systems for heat exchangers, water treatment such as purification, addition of chemicals, and testing, is used to minimize fouling of the heat exchange equipment. Other water treatment is also used in steam systems for power plants, etc. to minimize fouling and corrosion of the heat exchange and other equipment.
A variety of companies have started using water-borne oscillation technology to prevent biofouling. Without the use of chemicals, this type of technology has also helped maintain a low pressure drop in heat exchangers.
Design and manufacturing regulations:
The design and manufacturing of heat exchangers has numerous regulations, which vary according to the region in which they will be used.
Design and manufacturing codes include: ASME Boiler and Pressure Vessel Code (US); PD 5500 (UK); BS 1566 (UK); EN 13445 (EU); CODAP (French); Pressure Equipment Safety Regulations 2016 (PER) (UK); Pressure Equipment Directive (EU); NORSOK (Norwegian); TEMA; API 12; and API 560.
In nature:
Humans
The human nasal passages serve as a heat exchanger, with cool air being inhaled and warm air being exhaled. Its effectiveness can be demonstrated by putting the hand in front of the face and exhaling, first through the nose and then through the mouth. Air exhaled through the nose is substantially cooler. This effect can be enhanced with clothing, by, for example, wearing a scarf over the face while breathing in cold weather.
In nature:
In species that have external testes (such as humans), the artery to the testis is surrounded by a mesh of veins called the pampiniform plexus. This cools the blood heading to the testes, while reheating the returning blood.
In nature:
Birds, fish, marine mammals
"Countercurrent" heat exchangers occur naturally in the circulation systems of fish, whales and other marine mammals. Arteries to the skin carrying warm blood are intertwined with veins from the skin carrying cold blood, causing the warm arterial blood to exchange heat with the cold venous blood. This reduces overall heat loss in cold water. Heat exchangers are also present in the tongue of baleen whales, as large volumes of water flow through their mouths. Wading birds use a similar system to limit heat losses from their body through their legs into the water.
In nature:
Carotid rete
The carotid rete is a counter-current heat exchanging organ in some ungulates. The blood ascending the carotid arteries on its way to the brain flows via a network of vessels where heat is discharged to the veins of cooler blood descending from the nasal passages. The carotid rete allows Thomson's gazelle to maintain its brain almost 3 °C (5.4 °F) cooler than the rest of the body, and therefore aids in tolerating bursts in metabolic heat production, such as those associated with outrunning cheetahs (during which the body temperature exceeds the maximum temperature at which the brain could function).
In industry:
Heat exchangers are widely used in industry both for cooling and heating large scale industrial processes. The type and size of heat exchanger used can be tailored to suit a process depending on the type of fluid, its phase, temperature, density, viscosity, pressures, chemical composition and various other thermodynamic properties.
In industry:
In many industrial processes, energy is wasted or a heat stream is exhausted; heat exchangers can be used to recover this heat and put it to use by heating a different stream in the process. This practice saves a lot of money in industry, as the heat supplied to other streams from the heat exchangers would otherwise come from an external source that is more expensive and more harmful to the environment.
In industry:
Heat exchangers are used in many industries, including:
Waste water treatment
Refrigeration
Wine and beer making
Petroleum refining
Nuclear power
In waste water treatment, heat exchangers play a vital role in maintaining optimal temperatures within anaerobic digesters to promote the growth of microbes that remove pollutants. Common types of heat exchangers used in this application are the double pipe heat exchanger as well as the plate and frame heat exchanger.
In aircraft:
In commercial aircraft, heat exchangers are used to take heat from the engine's oil system to heat cold fuel. This improves fuel efficiency and reduces the possibility of water entrapped in the fuel freezing in components.
Current market and forecast:
Estimated at US$17.5 billion in 2021, global demand for heat exchangers is expected to experience robust growth of about 5% annually over the coming years. The market value is expected to reach US$27 billion by 2030. With an expanding desire for environmentally friendly options and increased development of offices, retail sectors, and public buildings, the market is expected to continue growing.
A model of a simple heat exchanger:
A simple heat exchanger might be thought of as two straight pipes with fluid flow, which are thermally connected. Let the pipes be of equal length L, carrying fluids with heat capacity $C_i$ (energy per unit mass per unit change in temperature), and let the mass flow rate of the fluids through the pipes, both in the same direction, be $j_i$ (mass per unit time), where the subscript i applies to pipe 1 or pipe 2.
A model of a simple heat exchanger:
Temperature profiles for the pipes are $T_1(x)$ and $T_2(x)$, where x is the distance along the pipe. Assume a steady state, so that the temperature profiles are not functions of time. Assume also that the only transfer of heat from a small volume of fluid in one pipe is to the fluid element in the other pipe at the same position, i.e., there is no transfer of heat along a pipe due to temperature differences in that pipe. By Newton's law of cooling, the rate of change in energy of a small volume of fluid is proportional to the difference in temperatures between it and the corresponding element in the other pipe:

$$\frac{du_1}{dt} = \gamma\,(T_2 - T_1), \qquad \frac{du_2}{dt} = \gamma\,(T_1 - T_2)$$

(this is for parallel flow in the same direction and opposite temperature gradients, but for counter-flow heat exchange (countercurrent exchange) the sign is opposite in the second equation in front of $\gamma\,(T_1 - T_2)$), where $u_i(x)$ is the thermal energy per unit length and $\gamma$ is the thermal connection constant per unit length between the two pipes. This change in internal energy results in a change in the temperature of the fluid element. The time rate of change for the fluid element being carried along by the flow is:

$$\frac{du_1}{dt} = J_1 \frac{dT_1}{dx}, \qquad \frac{du_2}{dt} = J_2 \frac{dT_2}{dx}$$

where $J_i = C_i j_i$ is the "thermal mass flow rate". The differential equations governing the heat exchanger may now be written as:

$$J_1 \frac{\partial T_1}{\partial x} = \gamma\,(T_2 - T_1), \qquad J_2 \frac{\partial T_2}{\partial x} = \gamma\,(T_1 - T_2).$$
A model of a simple heat exchanger:
Note that, since the system is in a steady state, there are no partial derivatives of temperature with respect to time, and since there is no heat transfer along the pipe, there are no second derivatives in x as are found in the heat equation. These two coupled first-order differential equations may be solved to yield:

$$T_1 = A - \frac{B k_1}{k}\, e^{-kx}, \qquad T_2 = A + \frac{B k_2}{k}\, e^{-kx}$$

where $k_1 = \gamma / J_1$, $k_2 = \gamma / J_2$, $k = k_1 + k_2$ (this is for parallel flow, but for counter-flow the sign in front of $k_2$ is negative, so that if $k_2 = k_1$, for the same "thermal mass flow rate" in both opposite directions, the gradient of temperature is constant and the temperatures linear in position x with a constant difference $(T_2 - T_1)$ along the exchanger, explaining why the counter-current design (countercurrent exchange) is the most efficient), and A and B are two as yet undetermined constants of integration. Let $T_{10}$ and $T_{20}$ be the temperatures at $x = 0$ and let $T_{1L}$ and $T_{2L}$ be the temperatures at the end of the pipe at $x = L$. Define the average temperatures in each pipe as:

$$\bar T_1 = \frac{1}{L} \int_0^L T_1(x)\,dx, \qquad \bar T_2 = \frac{1}{L} \int_0^L T_2(x)\,dx.$$
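Carrying out these integrals on the exponential profiles (parallel-flow case) gives the following worked step, derived here directly from the solutions above:

```latex
% Average temperatures obtained by integrating
%   T_1 = A - (B k_1/k) e^{-kx},   T_2 = A + (B k_2/k) e^{-kx}   over 0 <= x <= L.
\bar T_1 = A - \frac{B k_1}{k^2 L}\left(1 - e^{-kL}\right), \qquad
\bar T_2 = A + \frac{B k_2}{k^2 L}\left(1 - e^{-kL}\right), \qquad
\bar T_2 - \bar T_1 = \frac{B}{kL}\left(1 - e^{-kL}\right).
```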
A model of a simple heat exchanger:
Using the solutions above (and the averages evaluated in the worked integration), choosing any two of the temperatures above eliminates the constants of integration, letting us find the other four temperatures. We find the total energy transferred by integrating the expressions for the time rate of change of internal energy per unit length:

$$J_1\,(T_{1L} - T_{10}) = \gamma L\,(\bar T_2 - \bar T_1), \qquad J_2\,(T_{2L} - T_{20}) = \gamma L\,(\bar T_1 - \bar T_2).$$
By the conservation of energy, the sum of the two energies is zero. The quantity $\bar T_2 - \bar T_1$ is known as the log mean temperature difference, and is a measure of the effectiveness of the heat exchanger in transferring heat energy.
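A short numerical sketch of the parallel-flow solution above, used to check the stated energy balance; the parameter values (thermal mass flow rates, coupling constant, inlet temperatures) are illustrative only.

```python
# Evaluate the parallel-flow profiles T1(x), T2(x) and verify numerically that
# J1*(T1L - T10) = gamma*L*(T2bar - T1bar), as stated in the model above.
import math

def parallel_flow_profiles(J1, J2, gamma, L, T10, T20):
    k1, k2 = gamma / J1, gamma / J2
    k = k1 + k2
    B = T20 - T10                       # from the boundary values at x = 0
    A = (k2 * T10 + k1 * T20) / k
    T1 = lambda x: A - (B * k1 / k) * math.exp(-k * x)
    T2 = lambda x: A + (B * k2 / k) * math.exp(-k * x)
    avg_factor = (1 - math.exp(-k * L)) / (k * k * L)   # from the closed-form integral
    T1bar = A - B * k1 * avg_factor
    T2bar = A + B * k2 * avg_factor
    return T1, T2, T1bar, T2bar

J1, J2, gamma, L = 2000.0, 1500.0, 40.0, 10.0   # W/K, W/K, W/(K*m), m
T1, T2, T1bar, T2bar = parallel_flow_profiles(J1, J2, gamma, L, T10=20.0, T20=90.0)

lhs = J1 * (T1(L) - T1(0.0))          # heat picked up by stream 1
rhs = gamma * L * (T2bar - T1bar)     # gamma*L times the mean temperature difference
print(f"stream 1 duty = {lhs:.1f} W, gamma*L*(T2bar - T1bar) = {rhs:.1f} W")
```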
**Automatic transmission fluid**
Automatic transmission fluid:
Automatic transmission fluid (ATF) is a type of hydraulic fluid used in vehicles with automatic transmissions. It is typically coloured red or green to distinguish it from motor oil and other fluids in the vehicle.
The fluid is optimized for the special requirements of a transmission, such as valve operation, brake band friction, and the torque converter, as well as gear lubrication.
ATF is also used as a hydraulic fluid in some power steering systems, as a lubricant in some 4WD transfer cases, and in some modern manual transmissions.
Modern use:
Modern ATF consists of a base oil plus an additive package containing a wide variety of chemical compounds intended to provide the required properties of a particular ATF specification. Most ATFs contain some combination of additives that improve lubricating qualities, such as anti-wear additives, rust and corrosion inhibitors, detergents, dispersants and surfactants (which protect and clean metal surfaces); kinematic viscosity and viscosity index improvers and modifiers, seal swell additives and agents (which extend the rotational speed range and temperature range of the additives' application); anti-foam additives and anti-oxidation compounds to inhibit oxidation and "boil-off" (which extends the life of the additives' application); cold-flow improvers, high-temperature thickeners, gasket conditioners, pour point depressants and petroleum dye. All ATFs contain friction modifiers, except for those ATFs specified for some Ford transmissions and the John Deere J-21A specification; the Ford ESP (or ESW) - M2C-33 F specification Type F ATF (Ford-O-Matic) and Ford ESP (or ESW) - M2C-33 G specification Type G ATF (1980s Ford Europe and Japan) specifically exclude the addition of friction modifiers. According to the same oil distributor, the M2C-33 G specification requires fluids which provide improved shear resistance and oxidation protection, better low-temperature fluidity, better EP (extreme pressure) properties and additional seal tests over and above M2C-33 F quality fluids.
Modern use:
Note that a friction modifier only means that the fluid sticks to the surface of the metal a little more strongly, and therefore only helps to prevent early wear. It would be up to Ford or BorgWarner to prove that their transmissions are somehow harmed by friction modifiers. In many countries, Ford has said that the modern Dex3 fluid is fine for the same transmissions that it previously said required the older standard. There are many specifications for ATF, such as the General Motors (GM) DEXRON and the Ford MERCON series, and the vehicle manufacturer will identify the ATF specification appropriate for each vehicle. The vehicle's owner's manual will typically list the ATF specification(s) that are recommended by the manufacturer.
Modern use:
Automatic transmission fluids have many performance-enhancing chemicals added to meet the demands of each transmission. Some ATF specifications are open to competing brands, such as the common DEXRON specification, where different manufacturers use different chemicals to meet the same performance specification. These products are sold under license from the OEM responsible for establishing the specification. Some vehicle manufacturers will require "genuine" or Original Equipment Manufacturer (OEM) ATF. Most ATF formulations are open to third-party licensing and certification by the automobile manufacturer.
Modern use:
Each manufacturer has specific ATF requirements. Incorrect transmission fluid may result in transmission malfunction or severe damage; however, this mainly occurs where the viscosity is extremely different.
Current fluids:
DEXRON ULV - 2017 and above GM 10L90 10-speed automatic transmissions
MERCON ULV - 2017 and above Ford 10R80 10-speed automatic transmissions
DEXRON HP - 2013 and above GM 8L90 8-speed RWD automatic transmissions
Mopar ATF+4 - Most Dodge, Jeep, Chrysler, and Plymouth; replaces ATF+3, ATF+2, ATF+
DEXRON III/MERCON - Most pre-2006 GM and Ford, Mercury, Lincoln, pre-2004 Toyota products, many Asian vehicles, some Asian power steering fluid applications, some Ford/Mazda manual transmissions. It is generally less expensive than DEXRON VI/MERCON V.
Current fluids:
DEXRON VI - Most after 2006 GM, some Ford applications, replaces DEXRON III in GM automatic transmissions.
MERCON V - Most Ford, Mercury, Lincoln, Mazda B-Series, 2001-08 Mazda Tribute, Tribute Hybrid.
MERCON LV - Some Ford(DuratecHE), 2009-11 Mazda Tribute, Mazda in Europe or Asia.
Mercon SP - For the Ford 6R transmission
Toyota ATF Type T-IV (T4) - Some older Toyota, Lexus (including "Gen 1" hybrid CVT), some Mazda. Replaces Type T and Type T-II (there was no Type T-III).
Toyota ATF WS - Most new models introduced with model year 2004 Toyota and Lexus including "Gen 2" and later hybrid CVT (except non-hybrid CVT); Volvo. It is not applicable in applications requiring ATF Type T-IV.
Honda DW-1 - All Honda and Acura (except continuously variable transmissions (CVT)); replaces Z1 specification fluid
Diamond SP-III (or SP3) - Older Mitsubishi Motors (including older CVTs); Hyundai and Kia 4-speed automatic transmissions.
Diamond SP-IV (or SP4) - All Hyundai and Kia 6-speed automatic transmission.
DiaQueen ATF-J3 - Most Mitsubishi Motors 6-speed automatic transmissions.
Nissan Matic fluids - For Nissan and Infiniti vehicles: Matic D is for 3- and 4-speed transmissions, Matic K is for 6-speed front-wheel-drive transmissions, Matic J is for 5-speed rear-wheel-drive transmissions, Matic-S fluid supersedes Matic-J fluid.
ATF-HP - For 2005 and later Subaru vehicles, except CVTs. 2004 and earlier Subaru vehicles use DEXRON III.
Mazda M5 (MV) fluid - For the Mazda FN4A-EL/Ford 4F27E and Mazda FS5A-EL/Ford FNR5. Also sold as Ford FNR5 fluid. Genuine Mazda M5 is made by Idemitsu Kosan, available as Idemitsu Type-M. This fluid is NOT MERCON V.
Mazda FZ fluid - For the SKYACTIV-Drive. The color of this fluid is blue.
Synthetic ATF is available in modern OEM and aftermarket brands, offering better performance and service life for certain applications (such as frequent trailer towing).
Current fluids:
The use of a lint-free white rag to wipe the dipstick on automatic transmissions is advised so that the color of the fluid can be checked. Dark brown or black ATF can be an indicator of a transmission problem, vehicle abuse, or fluid that has far exceeded its useful life. Over-used ATF often has reduced lubrication properties and abrasive friction materials (from clutches and brake bands) suspended in it; failure to replace such fluid will accelerate transmission wear and could eventually ruin an otherwise healthy transmission. However, color alone is not a completely reliable indication of the service life of ATF as most ATF products will darken with use. The manufacturer's recommended service interval is a more reliable measure of ATF life. In the absence of service or repair records, fluid color is a common means of gauging ATF service life.
Current fluids:
CVTs and dual-clutch transmissions often use specialized fluids.
Transfer cases and differentials in four-wheel-drive/all-wheel-drive vehicles sometimes require specialized fluids, such as Honda Dual Pump-II, Honda VTM-4, Jeep Quadra-Trac, etc.
History:
The history of automatic transmission fluids parallels the history of automatic transmission technology. The world's first mass-produced automatic transmission, the Hydra-Matic 4-speed, was developed by General Motors (GM) for the 1940 model year. The Hydra-Matic transmission required a special lubricant that GM called Transmission Fluid No. 1 for the Hydra-Matic Drive. This transmission fluid was only available at Oldsmobile, Pontiac, and Cadillac dealerships. Subsequent automatic transmission and fluid coupling technologies, and difficulties with fluids in cold and hot temperature extremes, led to a need for longer lasting, higher quality transmission fluids. Additionally, a better system of automatic transmission fluid distribution and marketing was necessary for the long term success of the automatic transmission.
History:
In 1949, GM released a new Type "A" fluid specification in an attempt to make GM automatic transmission fluid available at retailers and service garages everywhere. Every automatic transmission produced by any vehicle manufacturer used GM Type "A" transmission fluid from 1949 to 1958.
History:
In 1959, Ford began releasing their own automatic transmission fluid specifications; see MERCON for more information. From 1958 to 1968, many vehicle manufacturers continued to use the next GM automatic transmission fluid specification, the Type "A" Suffix "A" fluid, in their transmissions. In 1966, Chrysler began releasing their own automatic transmission fluid specifications; see Mopar ATF for more information. GM ATF was the same color as engine oil through 1967. Aftermarket ATF was available with red dye as an aid in fluid leak detection. Dexron (B) was the first GM ATF to require red dye.
History:
In the 1940s, 1950s, 1960s, and early 1970s, ATF contained whale oil as a rust and corrosion inhibitor. A moratorium on whale oil at that time prevented the continued production of older ATF such as the original 1967 DEXRON formulation (Type B), and the fluids which preceded it. Vintage GM (1940-1967), Ford (1951-1967), and Chrysler products (1953-1966) used GM Type A fluid or GM Type A Suffix A fluids; these fluids are no longer produced. GM recommends Dexron-VI fluid, Ford recommends Mercon V fluid, and Chrysler recommends ATF+4 fluids for vintage transmission use.
History:
Through the late 1970s, Ford transmissions were factory filled with a fluid identified as ESW M2C33-F. To provide a fluid that would be available to the general public for service fill, oil companies and suppliers other than the factory-fill suppliers were allowed to develop fluids meeting the ESW M2C33-F specification and market these fluids under their own brand names, identified as Type F.
History:
The second generation of transmission fluid was released in 1974 as the factory fill specification, ESW M2C138-CJ. This fluid was developed to modify the vehicle shifting characteristics and to provide considerable improvement in the oxidation resistance and anti-wear performance.
No service fluids were developed and for a short time, DEXRON fluids approved by General Motors were considered acceptable.
History:
With continuing changes and improvements in transmission design, a centrifugal lock-up torque converter clutch was introduced into the C5 transmission to smooth engine vibrations sensed by the occupant of the vehicle. An associated shudder problem forced the introduction of the factory fill specification ESP M2C166-H. Servicing transmissions with DEXRON fluids was unacceptable since not all DEXRON fluids were capable of eliminating the shudder phenomenon. The fluids that could be used were a subset of the DEXRON fluids. The advent of Type H as factory fill necessitated the development of a service fluid specification to match the performance expected from Type H. This resulted in the release of the MERCON specification in 1987.
History:
One major revision occurred in September 1992, when low-temperature viscosity requirements, volatility requirements, viscosity change limits after high-temperature exposure and improved oxidation limits were introduced. These changes raised the performance of MERCON fluids above ESP M2C166-H levels.
The development of modulating and continuous slipping clutch converters has prompted the need to develop the MERCON V specification. Included are requirements to verify the anti-wear capabilities and anti-shudder characteristics of the fluid.
History:
The MERCON V specification was further modified some time prior to 2007 to make it backward-compatible with MERCON. Ford has terminated, or is terminating, all license agreements for the manufacture and sale of MERCON in favor of MERCON V. Toyota continued using GM ATF, including Dexron (B) and Dexron-II(D), in most of their automatic transmissions until 2003. In 1988, Toyota began releasing their own automatic transmission fluid specifications; see Toyota ATF for more information.
"Lifetime" fluids:
In 1967, Ford produced the Type-F fluid specification. The Type-F specification was intended to produce a "lifetime" fluid which would never need to be changed. This was the first of many Ford "lifetime" fluids. The 1974 Ford Car Shop Manual reads: "The automatic transmission is filled at the factory with "lifetime" fluid. If it is necessary to add or replace fluid, use only fluids which meet Ford Specification M2C33F." Many other transmission manufacturers have followed with their own "lifetime" automatic transmission fluids.
"Lifetime" fluids:
How ATF Can Last a "Lifetime"
To understand how a fluid can last a "lifetime", a study of the 1939 Chrysler Fluid Drive fluid is helpful. The lesson learned by Chrysler with its fluid drives is applicable to modern automatic transmissions as well. The November 1954 edition of Lubrication Magazine (published by The Texas Company, later known as Texaco) featured a story called "Evolution of the Chrysler PowerFlite Automatic Transmission". This article described the fluid used in the 1939 Chrysler Fluid Drive and its subsequent revisions and enhancements through 1954. The fluid drive's fluid coupling is partially filled with Mopar Fluid Drive Fluid, a special, highly refined straight mineral oil with a viscosity of about 185 SUS at 100 °F, excellent inherent oxidation stability, a high viscosity index (100), excellent ability to rapidly reject air, a very low natural pour point (−25 °F), the ability to adequately lubricate the pilot ball bearing and seal surface, and neutrality towards the seal bellows.
"Lifetime" fluids:
The fluid operates under almost ideal conditions in what is essentially a hermetically sealed case; the small amount of atmospheric oxygen initially present is removed by a harmless reaction with the fluid so as to leave a residual inert (nitrogen) atmosphere. As a consequence, it has not been necessary to drain and replace the fluid, and the level-check recommendation has been successively extended from the original 2,500 miles to 15,500 miles and finally to "never" - or the life of the car.
"Lifetime" fluids:
Since drains and level checks were not only unnecessary but frequently harmful (through the introduction of more air and seal-destroying dirt), Chrysler eventually left off the tempting level inspection plugs. This mechanism is, therefore, one of the very few that are actually lubricated for the life of the car. There are now myriad examples of couplings that have operated well over 100,000 miles without any attention whatsoever and were still in perfect condition when the car was retired.
"Lifetime" fluids:
On European-type cars, a "lifetime" means 180,000 km or 112,000 miles as the lifetime of a vehicle or transmission. Service intervals of newer-type cars are from 80,000 to 120,000 km, which equals 50,000 to 75,000 miles. Flushing or refilling the fluid on lifetime-filled transmissions requires equipment to fill from below, engaging the transmission's torque converter, or using an external pump.
"Lifetime" fluids:
Sealed Transmissions
Any automatic transmission fluid will last longer if the transmission case can be hermetically sealed, but transmissions typically have two potential entry points for air:
The Dipstick Tube. Any transmission with a dipstick tube has the potential to let additional oxygen into the transmission through a dipstick that is not fully seated in the tube, or a dipstick tube plug that is not fully seated. Even the process of checking the fluid level with a dipstick can allow additional oxygen and dirt into the transmission. Many modern transmissions do not have a dipstick; they have sealed transmission fluid level check plugs instead. By removing the traditional dipstick, the transmission manufacturer has also removed a potential entry point for oxygen; this reduces the potential for fluid oxidation. A sealed transmission will typically have longer transmission fluid life than a non-sealed transmission.
"Lifetime" fluids:
The Transmission Vent. Transmissions need vents to compensate for internal air pressure changes that occur with fluctuating fluid temperatures and fluctuating fluid levels during transmission operation. Without those vents, pressure could build resulting in seal and gasket leaks. Before the use of better quality base oil in ATF in the late 1990s, some older transmission breather vents contained a Transmission Air Breathing Suppressor (TABS) valve to prevent oxygen and water ingestion into their transmissions. Oxygen reacts with high-temperature transmission fluid and can cause oxidation, rust, and corrosion. Automatic transmission fluids using lower quality base oil oxidized more easily than fluids using higher quality base oils. Transmission manufacturers now use smaller, remote mounted, breather vents specially designed to keep out water, but allow a small amount of air movement through the breather as necessary.
"Lifetime" fluids:
Sealed ATF Containers
Any automatic transmission fluid will last longer if it comes from an unopened container.
Use Sealed Containers. Containers storing automatic transmission fluid (ATF) should always be sealed; if exposed to the atmosphere, ATF may absorb moisture and potentially cause shift concerns.
Use New Fluid Only. When performing repairs on ATF equipped transmissions, it is important to use only new, clean ATF when refilling the transmission. Never reuse ATF.
"Lifetime" fluids:
Example Maintenance Schedule
Lifetime automatic transmission fluids made from higher quality base oil and an additive package are more chemically stable, less reactive, and do not experience oxidation as easily as lower quality fluids made from lower quality base oil and an additive package. Therefore, higher quality transmission fluids can last a long time in normal driving conditions (typically 100,000 miles (160,000 km) or more).
"Lifetime" fluids:
The definition of 'Lifetime Fluid" differs from transmission manufacturer to transmission manufacturer. Always consult the vehicle maintenance guide for the proper service interval for the fluid in your transmission and your driving conditions.
Chevrolet Colorado Example: According to the Scheduled Maintenance Guide, a 2018 Chevrolet Colorado with "Lifetime Fluid" could have two different fluid service intervals depending upon how the vehicle is driven:
1. Normal Driving
Carry passengers and cargo within recommended limits on the Tire and Loading Information label
Driven on reasonable road surfaces within legal driving limits.
2. Severe Driving
Mainly driven in heavy city traffic in hot weather
Mainly driven in hilly or mountainous terrain
Frequently towing a trailer
Used for high speed or competitive driving
Used for taxi, police, or delivery service.
Under "Severe" driving conditions, replace the automatic transmission fluid and filter every 45,000 mi (72,420 km).
Aftermarket Automatic Transmission Fluids:
For over 70 years, the oil aftermarket has produced both licensed and non-licensed formulations of automatic transmission fluids (ATF). Today, aftermarket fluids asserted by their manufacturers to be compatible for use in various brands of automatic transmissions continue to be sold under names such as Multi-Purpose and Multi-Vehicle fluids. Non-licensed fluid is typically less expensive; these fluids are not regulated or endorsed by the vehicle manufacturer for use in their transmissions. Vehicle manufacturer approved and licensed fluids must have the license number printed on the product information label of the container or on the container housing. Non-licensed fluids do not show a license number. Make sure the fluid to be installed into a transmission matches the recommended fluid in the specifications section of the vehicle's owner's manual.
Aftermarket Automatic Transmission Fluids:
Mislabeled or Misleading Labeling on ATF Containers
ATF which has been mislabeled, has misleading labeling, or is fraudulently bottled as another product is an ongoing problem. Some of these fluids have led to multiple transmission failures. The three organizations shown below are trying to stop this problem in the United States.
Aftermarket Automatic Transmission Fluids:
United States Laws: The U.S. Department of Commerce, National Institute of Standards and Technology (NIST), Handbook 130, 2019 Edition, contains Uniform Laws and Regulations in the Areas of Legal Metrology and Fuel Quality. Section IV.G.3.14 defines laws regulating the Labeling and Identification of Transmission Fluid. Paragraph IV.G.3.14.1.1, Container Labeling, reads: "The label on a container of transmission fluid shall not contain any information that is false or misleading."
Aftermarket Automatic Transmission Fluids:
California Laws: The State of California has developed additional Laws in an attempt to prevent mislabeled and misleading labeling. Statutes: California Business and Professions Code, Division 5, Chapters 6, 14, 14.5, and 15. Regulations: California Code of Regulation, Title 4, Division 9, Chapters 6 and 7.
American Petroleum Institute (API) Monitoring: The American Petroleum Institute (API) maintains a list of invalid labeling of petroleum products. This real-time list includes motor oils and ATF.
**The UNIX-HATERS Handbook**
The UNIX-HATERS Handbook:
The UNIX-HATERS Handbook is a semi-humorous edited compilation of messages to the UNIX-HATERS mailing list. The book was edited by Simson Garfinkel, Daniel Weise and Steven Strassmann and published in 1994.
Contents:
The book concerns the frustrations of users of the Unix operating system. Many users had come from systems that they felt were far more sophisticated in features and usability, and they were frustrated by the perceived "worse is better" design philosophy that they felt Unix and much of its software encapsulated.
The book is based on messages sent to the UNIX-HATERS mailing list between 1988 and 1993, and contains a foreword by the human factors guru Don Norman and an "anti-foreword" by Dennis Ritchie, one of the creators of the operating system.
Many of the book's complaints about the Unix operating system are based on design decisions and anomalies in the command-line interface.
The front-matter page's dedication says: "To Ken and Dennis, without whom this book would not have been possible.", referring to Ken Thompson and Dennis Ritchie, the creators of Unix.
Release:
This book was printed as a trade paperback. Its front cover was designed to be similar to The Scream. An air sickness bag, printed with the phrase "UNIX barf bag", was inserted into the inside back cover of every copy by the publisher.
The book was made available to download for free in electronic format in 2003.
Reception:
Later reviewers of the book have noted that some issues were subsequently resolved, such as the development of the ext2 filesystem addressing the discussed lack of block storage.
**Stony Clove Sandstone**
Stony Clove Sandstone:
The Stony Clove Sandstone is a geologic formation in New York. It preserves fossils dating back to the Devonian period.
**Linola**
Linola:
Linola is the trademark name of solin, cultivated forms of flax (Linum usitatissimum) bred for producing linseed oil with a low alpha-linolenic acid content. Linola was developed in the early 1990s by the Commonwealth Scientific and Industrial Research Organisation (CSIRO). It was developed and released in Australia in 1992 and first commercially grown in 1994. Australian Linola varieties are named after Australian lakes.
Genesis:
This variety was developed to provide a source of edible linseed oil with a low alpha-linolenic acid (ALA) content of approximately 2%, as compared to 50% in the wild-type variety. This was done to improve the storage quality of linseed when used as a bulk livestock feed. Linseed's previous main use had been linseed oil as a paint ingredient, with the ALA (an omega-3 fatty acid) being a quick-drying component. With the advent of "plastic" water-based paints, the linseed market fell into decline, but when marketed as a stock feed, the omega-3 content also deteriorated quickly in storage. Compared to normal linseed, linola has a lower level of ALA, which increases the oxidative stability of the oil/seed, meaning it remains edible much longer when stored. Linola has a correspondingly higher content of linoleic acid, an omega-6 fatty acid, at around 65% to 75%. The seed colour was also changed from the wild-type dark brown seed to a light yellow seed, which consequently gives an oil of a light colour, easily distinguished from the darker linseed oil.
Health Claims:
Linola is Generally Recognized as Safe (GRAS) by the U.S. Food and Drug Administration. Linola is claimed to be especially helpful against neurodermatitis. (No published scientific evidence was found to support this claim.)
Agricultural Distribution:
Linola substitutes for flax in cropping rotations; it is claimed to have lower production costs than canola, but brings prices comparable to canola or other edible oils. Linola is produced in Australia, Canada, the U.K. and in the U.S. states of Washington and Idaho. All Canadian cultivars of Linola were deregistered for sale and use as of August 1, 2013.
**Variational vector field**
Variational vector field:
In the mathematical fields of the calculus of variations and differential geometry, the variational vector field is a certain type of vector field defined on the tangent bundle of a differentiable manifold which gives rise to variations along a vector field in the manifold itself.
Variational vector field:
Specifically, let $X$ be a vector field on $M$. Then $X$ generates a one-parameter group of local diffeomorphisms $\mathrm{Fl}^X_t$, the flow along $X$. The differential of $\mathrm{Fl}^X_t$ gives, for each $t$, a mapping $d\,\mathrm{Fl}^X_t : TM \to TM$, where $TM$ denotes the tangent bundle of $M$. This is a one-parameter group of local diffeomorphisms of the tangent bundle. The variational vector field of $X$, denoted by $T(X)$, is the tangent to the flow of $d\,\mathrm{Fl}^X_t$.
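In symbols, and introducing a point $\xi \in TM$ purely for illustration, the definition can be restated as follows; this construction is also known as the tangent (or complete) lift of $X$:

```latex
% T(X) is the generator of the flow d Fl^X_t on TM: for a point \xi \in TM,
T(X)_{\xi} \;=\; \left.\frac{d}{dt}\right|_{t=0} d\,\mathrm{Fl}^X_t(\xi) \;\in\; T_{\xi}(TM),
\qquad\text{so that}\qquad
\mathrm{Fl}^{T(X)}_t \;=\; d\,\mathrm{Fl}^X_t .
```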
**Batteries & Supercaps**
Batteries & Supercaps:
Batteries & Supercaps is a monthly peer-reviewed scientific journal covering electrochemical energy storage and its applications. It is published by Wiley-VCH on behalf of Chemistry Europe.
According to the Journal Citation Reports, the journal has a 2021 impact factor of 6.043.
**Neopragmatism**
Neopragmatism:
Neopragmatism, sometimes called post-Deweyan pragmatism, linguistic pragmatism, or analytic pragmatism, is the philosophical tradition that holds that the meaning of words is a result of how they are used, rather than the objects they represent.
Neopragmatism:
The Blackwell Dictionary of Western Philosophy (2004) defines "neo-pragmatism" as "A postmodern version of pragmatism developed by the American philosopher Richard Rorty and drawing inspiration from authors such as John Dewey, Martin Heidegger, Wilfrid Sellars, W. V. O. Quine, and Jacques Derrida". It is a contemporary term for a philosophy which reintroduces many concepts from pragmatism. While traditional pragmatism focuses on experience, Rorty centers on language. The self is regarded as a "centerless web of beliefs and desires". It repudiates the notions of universal truth, epistemological foundationalism, representationalism, and epistemic objectivity. It is a nominalist approach that denies that natural kinds and linguistic entities have substantive ontological implications. Rorty denies that the subject-matter of the human sciences can be studied in the same ways as we study the natural sciences. It has been associated with a variety of other thinkers including Hilary Putnam, W. V. O. Quine, and Donald Davidson, though none of these figures have called themselves "neopragmatists". The following contemporary philosophers are also often considered to be neopragmatists: Nicholas Rescher (a proponent of methodological pragmatism and pragmatic idealism), Jürgen Habermas, Susan Haack, Robert Brandom, and Cornel West.
Background:
"Anglo-analytic" influences Neopragmatists, particularly Rorty and Putnam, draw on the ideas of classical pragmatists such as Charles Sanders Peirce, William James, and John Dewey. Putnam, in Words and Life (1994) enumerates the ideas in the classical pragmatist tradition, which newer pragmatists find most compelling. To paraphrase Putnam: Complete skepticism (the notion that a belief in philosophical skepticism requires as much justification as other beliefs); Fallibilism (the view that there are no metaphysical guarantees against the need to revise a belief); Antidualism about "facts" and "values"; That practice, properly construed, is primary in philosophy. (WL 152)Neopragmatism is distinguished from classical pragmatism (the pragmatism of James, Dewey, Peirce, and Mead) primarily due to the influence of the linguistic turn in philosophy that occurred in the early and mid-twentieth century. The linguistic turn in philosophy reduced talk of mind, ideas, and the world to language and the world. Philosophers stopped talking about the ideas or concepts one may have present in one's mind and started talking about the "mental language" and terms used to employ these concepts. In the early twentieth century philosophers of language (e.g. A.J. Ayer, Bertrand Russell, G.E. Moore) thought that analyzing language would bring about the arrival of meaning, objectivity, and ultimately, truth concerning external reality. In this tradition, it was thought that truth was obtained when linguistic terms stood in a proper correspondence relation to non-linguistic objects (this can be called "representationalism"). The thought was that in order for a statement or proposition to be true it must give facts which correspond to what is actually present in reality. This is called the correspondence theory of truth and is to be distinguished from a neo-pragmatic conception of truth.
Background:
There were many philosophical inquiries during the mid-twentieth century which began to undermine the legitimacy of the methodology of the early Anglo-analytic philosophers of language. W. V. O. Quine in Word and Object, originally published in 1960, attacked the notion of our concepts having any strong correspondence to reality. Quine argued for ontological relativity which attacked the idea that language could ever describe or paint a purely non-subjective picture of reality. More specifically, ontological relativity is the thesis that the things we believe to exist in the world are wholly dependent on our subjective, "mental languages". A 'mental language' is simply the way words which denote concepts in our minds are mapped to objects in the world.
Background:
Quine's argument for ontological relativity is roughly as follows: All ideas and perceptions concerning reality are given to our minds in terms of our own mental language.
Mental languages specify how objects in the world are to be constructed from our sense data.
Different mental languages will specify different ontologies (different objects existing in the world).
There is no way to perfectly translate between two different mental languages; there will always be several, consistent ways in which the terms in each language can be mapped onto the other.
Reality apart from our perceptions of it can be thought of as constituting a true, object language, that is, the language which specifies how things actually are.
There is no difference in translating between two mental languages and translating between the object language of reality and one's own mental language.
Therefore, just as there is no objective way of translating between two mental languages (no one-to-one mapping of terms in one to terms in the other) there is no way of objectively translating (or fitting) the true, object language of reality into our own mental language.
And therefore, there are many ontologies (possibly an infinite number) that can be consistently held to represent reality (see Chapter 2 of Word and Object).
The above argument is reminiscent of the theme in neopragmatism against the picture theory of language, the idea that the goal of inquiry is to represent reality correctly with one's language.
Background:
A second critically influential philosopher for the neopragmatists is Thomas Kuhn, who argued that our languages for representing reality, or what he called "paradigms", are only as good as the possible future experiments and observations they produce. Kuhn, being a philosopher of science, argued in The Structure of Scientific Revolutions that "scientific progress" was a kind of misnomer; for Kuhn, we make progress in science whenever we throw off old scientific paradigms with their associated concepts and methods in favor of new paradigms which offer novel experiments to be done and new scientific ontologies. For Kuhn, 'electrons' exist just insofar as they are useful in providing us with novel experiments which will allow us to uncover more about the new paradigm we have adopted. Kuhn believes that different paradigms posit different things to exist in the world and are therefore incommensurable with each other. Another way of viewing this is that paradigms describe new languages, which allow us to describe the world in new ways. Kuhn was a fallibilist; he believed that all scientific paradigms (e.g. classical Newtonian mechanics, Einsteinian relativity) should be assumed to be, on the whole, false but good for a time as they give scientists new ideas to play around with. Kuhn's fallibilism, holism, emphasis on incommensurability, and ideas concerning objective reality are themes which often show up in neopragmatist writings.
Background:
Wilfrid Sellars argued against foundationalist justification in epistemology and was therefore also highly influential to the neopragmatists, especially Rorty.
"Continental" influences Philosophers such as Derrida and Heidegger and their views on language have been highly influential to neopragmatist thinkers like Richard Rorty. Rorty has also emphasised the value of "historicist" or "genealogical" methods of philosophy typified by Continental thinkers such as Foucault.
Background:
Wittgenstein and language games
The "later" Ludwig Wittgenstein in the Philosophical Investigations argues, contrary to his earlier views in the Tractatus Logico-Philosophicus, that the role of language is not to describe reality but rather to perform certain actions in communities. The language-game is the concept Wittgenstein used to emphasize this. Wittgenstein believed roughly that:
Languages are used to obtain certain ends within communities.
Background:
Each language has its own set of rules and objects to which it refers.
Just as board games have rules guiding what moves may be made so do languages within communities where the moves to be made within a language game are the types of objects that may be talked about intelligibly.
Two people participating in two different language-games cannot be said to communicate in any relevant way.
Many of the themes found in Wittgenstein are found in neopragmatism. Wittgenstein's emphasis on the importance of "use" in language to accomplish communal goals, and the problems associated with trying to communicate between two different language games, find much traction in neopragmatist writings.
Richard Rorty and anti-representationalism:
Richard Rorty was influenced by James, Dewey, Sellars, Quine, Kuhn, Wittgenstein, Derrida, and Heidegger. He found common implications in the writings of many of these philosophers, as he believed they were all, in one way or another, trying to arrive at the thesis that our language does not represent things in reality in any relevant way. Rather than situating our language so as to get things right or correct, Rorty says in the introduction to the first volume of his philosophical papers that we should regard beliefs as only habits with which we react and adapt to the world. To Rorty, getting things right as they are "in themselves" is useless if not downright meaningless.
Richard Rorty and anti-representationalism:
In 1995, Rorty wrote: "I linguisticize as many pre-linguistic-turn philosophers as I can, in order to read them as prophets of the utopia in which all metaphysical problems have been dissolved, and religion and science have yielded their place to poetry." This "linguistic turn" strategy aims to avoid what Rorty sees as the essentialisms ("truth," "reality," "experience") still extant in classical pragmatism. Rorty wrote: "Analytic philosophy, thanks to its concentration on language, was able to defend certain crucial pragmatist theses better than James and Dewey themselves. [...] By focusing our attention on the relation between language and the rest of the world rather than between experience and nature, post-positivistic analytic philosophy was able to make a more radical break with the philosophical tradition."
**Bit pairing**
Bit pairing:
In telecommunication, bit pairing is the practice of establishing, within a code set, a number of subsets that have an identical bit representation except for the state of a specified bit. Note: An example of bit pairing occurs in the International Alphabet No. 5 and the American Standard Code for Information Interchange (ASCII), where the upper case letters are related to their respective lower case letters by the state of bit six.
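A small illustration of this pairing in ASCII, where the bit with value 0x20 (bit six when the seven bits are numbered 1 to 7) separates the upper-case and lower-case subsets:

```python
# In ASCII, each upper-case letter differs from its lower-case counterpart only in
# the bit worth 0x20; toggling that single bit flips between the paired subsets.
CASE_BIT = 0x20

for ch in "AZ":
    upper = ord(ch)
    lower = upper | CASE_BIT        # set the pairing bit -> lower-case letter
    print(f"{ch} = {upper:07b}  {chr(lower)} = {lower:07b}")

assert chr(ord('q') ^ CASE_BIT) == 'Q'   # toggling the bit restores the pair
```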
**Circle-throw vibrating machine**
Circle-throw vibrating machine:
A circle-throw vibrating machine is a screening machine employed in processes involving particle separation. In particle processes, screening refers to separation of larger from smaller particles in a given feed, using only the materials' physical properties. Circle-throw machines have a simple structure with high screening efficiency and volume. However, they have limitations on the types of feed that can be processed smoothly. Some characteristics of circle-throw machines, such as frequency, vibration amplitude and angle of the inclined deck, also affect output.
Applications:
They are widely used for screening quarry stone stock and classifying products in mining, sand, gold, energy and chemical industrial processes. The targeted substance is predominantly finer particles, which can then be directed into a separation unit such as a hydrocyclone, or materials that can be removed and used. Removed materials are often formed intentionally and are classified by their shape, size and physical properties. For example, construction wastes are sorted and sieved by a circular vibrating screen into coarse and fine particles. The particles are taken to make concrete, architectural bricks and road base materials.
Competitive processes:
Circle-throw vibrating screens operate on an inclined surface. A deck moves in a circle. It operates with a continuous feed rather than in batches, leading to much greater output. The incline allows the feed to move through the device.
Circle-throw machines are larger than others, and may require greater space than other screening units. Fine, wet, sticky materials require a water spray to wash fine materials under spray bars. Circle-throws have a large stroke, which allows heavy components to circulate and interfere with the screen box. A powerful motor is needed, whereas other separators may not require one.
Competitive processes:
Circle throw separation does not produce a separate waste stream. The feed is separated into multiple streams, with the number of exit streams matching the number of decks. Circle throw separation usually follows a grinding process. The coarser upper deck stream(s) can be directly re-fed into the grinding units due to continuous operation, thus reducing transport time, costs and storage.
Design:
The standard unit is a single-shaft, double-bearing unit constructed with a sieving box, mesh, vibration exciter and damper spring. The screen framing is steel side plates and cross-members that brace static and dynamic forces. At the center of the side plates, two roller bearings with counterweights are connected to run the drive. Four sets of springs are fixed on the base of the unit to overcome the lengthwise or crosswise tension from sieves and panels and to dampen movement. An external vibration exciter (motor) is mounted on the lateral (side) plate of the screen box with a cylindrical eccentric shaft and stroke adjustment unit. At the screen outlet, the flows are changed in direction, usually to 90 degrees or alternate directions, which reduces the exiting stream speed. Strong, ring-grooved lock bolts connect components.
Design:
Variations in this design regard the positioning of the vibration components. One alternative is top-mounted vibration, in which the vibrators are attached to the top of the unit frame and produce an elliptical stroke. This decreases efficiency in favor of increased capacity by increasing the rotational speed, which is required for rough screening procedures where a high flow rate must be maintained. A refinement adds counter-flow top-mounted vibration, in which the sieving is more efficient because the material bed is deeper and the material stays on the screen for a longer time. It is employed in processes where higher separation efficiency per pass is required.
Design:
A dust hood or enclosure can be added to handle particularly loose particles. Water sprays may be attached above the top deck and the separation can be converted into a wet screening process.
Characteristics:
Screen deck inclination angle. The circle-throw vibrating screen generates a rotating acceleration vector, and the screen must maintain a steep enough throwing angle to prevent simple transportation along the screen deck. The deck is commonly constructed with an angle in the range of 10° to 18° in order to develop adequate particle movement. An increase in deck angle speeds particle motion, in proportion to particle size. This decreases residence time and size stratification along the mesh screen. However, if the angle is greater than 20°, efficiency decreases due to a reduction of the effective mesh area. The effect of deck angle on efficiency is also influenced by particle density. In mining the optimal inclination angle is about 15°; exceptions are dewatering screens at 3° to 5° and steep screens at 20° to 40°.
Characteristics:
Short distribution time. On average, 1.5 seconds is required for the screening process to reach a steady state and for particles to cover the screen. This is induced by the circular motion. The rotary acceleration has a loosening effect on the particles on the deck, and centrifugal forces spread particles across the screen. Combined with the gravitational component, this improves the efficiency with which small particles pass through the apertures, while large particles are carried forward towards the discharge end.
Characteristics:
Vibration separation. Under vibration, particles of different sizes segregate (the Brazil nut effect). Vibration lifts and segregates particles on the inclined screen. When the vibration amplitude is within the range of 3 to 3.5 mm, the equipment segregates the large and small particles with the best efficiency. If the amplitude is too high, the contact area between particles and the screen surface is reduced and energy is wasted; if too low, particles block the apertures, causing poor separation. Higher vibration frequency improves component stratification along the screen and leads to better separation efficiency. Circle-throw gear is designed for 750 to 1050 rpm, which screens large materials. However, frequencies that are too high vibrate particles excessively, and the effective contact area between the mesh surface and the particles decreases.
Characteristics:
Characteristics of feed. Moisture in the feed forms larger particles by coagulating small particles, which reduces sieve efficiency. However, the centrifugal force and vibration act to prevent aperture blockage and agglomerated particle formation. Feed particles are classified as fine, near-sized and oversized; most fines and near-sized particles pass through the apertures rapidly. The ratio of fine and near-size particles to oversize should be maximized to obtain high screening rates. Feed rate is proportional to the efficiency and capacity of the screen; a high feed rate reaches steady state and results in better screening rates. However, an optimum bed thickness should be maintained for consistently high efficiency.
Characteristics:
Stable efficiency. Steady-state screening efficiency is sensitive to the vibration amplitude; good screening performance usually occurs at an amplitude of 3–3.5 mm. Particle velocity should be no more than 0.389 m/s; if the speed is too high, poor segregation and low efficiency follow. Eo denotes the efficiency of undersize removal from the oversize stream at steady state.
Characteristics:
$E_o = 100\,\dfrac{\text{mass flow rate of solids coarser than screen size in feed stream}}{\text{mass flow rate of solids in the oversize stream}} = 100\,\dfrac{F(1-f_x)}{O} = 100\,(1-o_x)$, where $F$ is stph (short tons per hour) of feed ore, $O$ is stph of oversize solids discharging as screen oversize, $f_x$ is the cumulative weight fraction of feed finer than $x$, and $o_x$ is the cumulative weight fraction of oversize finer than $x$.
Characteristics:
Eu shows the efficiency of undersize recovery. U is mass rate of solids in the undersize stream.
$E_u = 100\,\dfrac{\text{mass flow rate of solids in the undersize stream}}{\text{mass flow rate of solids finer than screen size in feed stream}} = 100\,\dfrac{U}{F f_x}$. Thus $E_u = 100\left[\dfrac{f_x-o_x}{f_x(1-o_x)}\right]$ or, with both efficiencies expressed as fractions, $E_u = \dfrac{E_o-(1-f_x)}{E_o\,f_x}$.
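As a rough illustration of these efficiency relations, the sketch below evaluates Eo and Eu from an assumed feed rate, oversize rate and size analysis; all numerical values are hypothetical and only serve to show the arithmetic.

```python
# Minimal sketch of the steady-state screen efficiency relations above.
# All numbers are illustrative, not taken from the article.

def oversize_efficiency(F, O, f_x):
    """E_o: efficiency of undersize removal from the oversize stream (percent).

    F   - feed rate (stph)
    O   - oversize discharge rate (stph)
    f_x - cumulative weight fraction of feed finer than the cut size x
    """
    return 100.0 * F * (1.0 - f_x) / O

def undersize_efficiency(f_x, o_x):
    """E_u: efficiency of undersize recovery (percent).

    o_x - cumulative weight fraction of oversize finer than the cut size x
    """
    return 100.0 * (f_x - o_x) / (f_x * (1.0 - o_x))

if __name__ == "__main__":
    F, O = 120.0, 54.5        # assumed stph of feed and of oversize discharge
    f_x, o_x = 0.60, 0.12     # assumed cumulative fractions finer than the cut size
    Eo = oversize_efficiency(F, O, f_x)
    Eu = undersize_efficiency(f_x, o_x)
    print(f"E_o = {Eo:.1f} %, E_u = {Eu:.1f} %")
```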
Design heuristics:
Vibration design. Circle-throw vibrating units rely on operating the screen component at a resonant frequency to sieve efficiently. While properly selected vibration frequencies drastically improve filtration, a deflection factor arises because the vibration displaces smaller particles, which then do not pass properly through the screen due to excess movement. This is a property of the system's natural frequency. The natural frequency is $F_n = 188\sqrt{1/d}$ cycles per minute, equivalently $d = (188/F_n)^2$ inches, where $d$ is the static deflection corresponding to this frequency. Vibration isolation is a control principle employed to mitigate transmission. On circle-throw vibrating screens, passive vibration isolation in the form of mechanical springs and suspension is employed at the base of the unit, which provides stability and control of motor vibration. A rule of thumb for the amount of static deflection that should be targeted with respect to operating RPM is provided in the table below.
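A small sketch of the natural-frequency relation above, assuming the 188-factor rule linking natural frequency (cycles per minute) and static deflection (inches); the target isolation ratio used here is an assumption for illustration only.

```python
import math

# Sketch of the natural-frequency / static-deflection relation F_n = 188*sqrt(1/d),
# with d in inches and F_n in cycles per minute.

def natural_frequency_cpm(static_deflection_in):
    return 188.0 * math.sqrt(1.0 / static_deflection_in)

def static_deflection_in(natural_frequency_cpm):
    return (188.0 / natural_frequency_cpm) ** 2

if __name__ == "__main__":
    operating_rpm = 900.0
    # Assumed design target: keep the natural frequency well below the operating
    # speed (here a factor of 3) so the isolators attenuate rather than amplify.
    target_fn = operating_rpm / 3.0
    d = static_deflection_in(target_fn)
    print(f"target F_n = {target_fn:.0f} cpm -> required static deflection ~ {d:.2f} in")
    print(f"check: F_n({d:.2f} in) = {natural_frequency_cpm(d):.0f} cpm")
```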
Design heuristics:
Critical installations refer to roof-mounted units. Weight, loading and weight distribution are all elements which must be considered.
Design heuristics:
Roller bearing design. A circle-throw vibrating screen with a shaft and bearing system requires consideration of the loading the unit will undergo. The extra loading on the screen box created by the centrifugal force, due to the circular motion of the load as it passes through the unit, is also a factor, and the bearings must be designed to accommodate the extra stress. The bearing load due to the screen box centrifugal force is $F_r = m\,r\left(\tfrac{\pi n}{30}\right)^2$, where $m$ is the vibrating mass, $r$ is the radius of the circular stroke and $n$ is the operating speed in rpm. A supplementary factor $F_z \approx 1.2$ is used to account for unfavourable dynamic stressing: $P = F_z \times F_r$. The index of dynamic stressing $F_L$ and the speed factor $F_n$ are used to calculate the minimum required dynamic load rating (kN): $C = P \times \tfrac{F_L}{F_n}$. $F_L$ is generally taken between 2.5 and 3, corresponding to a nominal fatigue life of 11,000–20,000 hours as part of a usual design.
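The following sketch works through that bearing-sizing chain (centrifugal load, dynamic stressing factor, minimum dynamic load rating); the mass, stroke radius, speed and factors are chosen purely for illustration and are not values from the article.

```python
import math

def centrifugal_load_kN(mass_kg, stroke_radius_m, speed_rpm):
    """F_r = m * r * omega^2, with omega = pi*n/30 rad/s for n in rpm."""
    omega = math.pi * speed_rpm / 30.0
    return mass_kg * stroke_radius_m * omega ** 2 / 1000.0  # N -> kN

if __name__ == "__main__":
    m, r, n = 2500.0, 0.004, 900.0     # assumed vibrating mass, stroke radius, rpm
    Fr = centrifugal_load_kN(m, r, n)
    Fz = 1.2                           # supplementary factor for dynamic stressing
    P = Fz * Fr                        # equivalent dynamic bearing load
    FL, Fn = 2.8, 0.32                 # assumed index of dynamic stressing and speed factor
    C = P * FL / Fn                    # minimum required dynamic load rating
    print(f"F_r = {Fr:.1f} kN, P = {P:.1f} kN, required C = {C:.0f} kN")
```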
Design heuristics:
Structural support of vibrating equipment. The unit's processing ability is related to its vibration, so care is required in the design of the structural and support elements. An inadequate structural design cannot stabilize the unit and produces excess vibration, leading to higher deflection and reduced effectiveness.
Design heuristics:
Total static force applied and spring stiffness give the static deflection: $\delta_{static} = \dfrac{F_{static}}{\text{stiffness}} = \dfrac{F_{static}}{2k}$. When the dynamic forces of the loading are considered, an amplitude magnification factor (MF) must be included: $\text{Amplitude} = MF \times \dfrac{F_{dynamic}}{2k}$. An estimate of the magnification factor for a system with one degree of freedom is $MF = \dfrac{1}{\sqrt{\left[1-(f_d/f_n)^2\right]^2 + \left[2\zeta\,(f_d/f_n)\right]^2}}$. Most structural mechanical systems are lightly damped; if the damping term is neglected, $MF = \dfrac{1}{1-(f_d/f_n)^2}$, where $f_d/f_n$ is the frequency ratio (frequency of the dynamic force, $f_d$, to the natural frequency of the unit, $f_n$).
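A brief numerical sketch of the magnification factor, comparing the damped and undamped single-degree-of-freedom expressions above; the damping ratio and frequency ratios are assumed values chosen only to show the behaviour away from resonance.

```python
import math

def magnification_factor(freq_ratio, damping_ratio=0.0):
    """MF for a single-degree-of-freedom system driven at f_d with natural frequency f_n."""
    r = freq_ratio
    return 1.0 / math.sqrt((1.0 - r**2) ** 2 + (2.0 * damping_ratio * r) ** 2)

if __name__ == "__main__":
    for r in (0.5, 0.9, 2.0, 3.0):           # assumed operating-to-natural frequency ratios
        damped = magnification_factor(r, damping_ratio=0.05)
        undamped = magnification_factor(r)
        print(f"f_d/f_n = {r:>3}: MF = {undamped:6.2f} (undamped), {damped:6.2f} (zeta = 0.05)")
```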
Screen length/width:
Once the area of the unit is known, the screen's length and width must be chosen so that a length (L) to width (W) ratio of 2–3:1 is maintained. Capacity is controlled by adjusting the width, and efficiency by the length.
Correction factors for bulk density and deck inclination angle enter the capacity calculation. The bed depth D must be less than or equal to a limit proportional to the desired cut size $X_s$, with correction factors applied for bulk density and inclination angle.
Length $= A/W$ (ft), where $A$ is the required screen area. A starting deck angle can be estimated from the quantity $15.5\,F/W$, where $F$ is the ideal oversize flowrate. Standard widths for circle-throw machines are 24, 36, 48, 60, 72, 84 and 96 inches; measurements should be matched to available "on-shelf" units to reduce capital cost.
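A short sketch of that sizing heuristic: given a required screen area, pick the smallest standard width whose implied length keeps L/W near the 2–3:1 range. The input area is hypothetical and the selection rule is an assumption for illustration.

```python
STANDARD_WIDTHS_IN = [24, 36, 48, 60, 72, 84, 96]

def size_screen(area_ft2):
    """Pick a standard width (inches) and a length (ft) so that L/W stays in the 2-3 range."""
    for width_in in STANDARD_WIDTHS_IN:
        width_ft = width_in / 12.0
        length_ft = area_ft2 / width_ft          # Length = A / W
        ratio = length_ft / width_ft
        if 2.0 <= ratio <= 3.0:
            return width_in, length_ft, ratio
    return None

if __name__ == "__main__":
    area = 100.0                                  # assumed required screen area, ft^2
    choice = size_screen(area)
    if choice:
        w, L, r = choice
        print(f"width = {w} in, length = {L:.1f} ft, L/W = {r:.2f}")
    else:
        print("no standard width gives L/W in the 2-3 range; consider multiple decks")
```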
Screen length/width:
Aperture size and shape. At a fixed screen capacity, efficiency is likely to decrease as aperture size decreases. In general, particles are not required to be separated precisely at their aperture size, but efficiency improves if the screen is designed to cut as close to the intended cut size as possible. The selection of aperture type is generalized by the table below. Bearings. Most processes have employed two-bearing screens; two-bearing circular vibrating screens with a screen box weight of 35 kN and a speed of 1200 RPM were common. The centroid axis of the screen box and unbalanced load does not change during rotation.
Screen length/width:
A four-bearing vibrating screen (F-Class) was developed to meet demands especially in the iron ore, phosphate and limestone production industries. The F-Class features a HUCK-bolted screen body for extra strength and rigidity, and carbon steel is used for the side plates to give high strength. The shaft is strengthened with a reinforcing plate, which attaches to the side plate and screen panels.
Screen length/width:
Four-bearing screens provide much greater unit stability, so higher vibration amplitudes and/or frequencies may be used without excessive isolation or damping, and overall plant noise emission is reduced. The new design gives accurate, fast sizing classification for materials ranging in cut size from 0.15 to 9.76 inches, with a high tonnage output that can process up to 5000 tons per hour.
**Ombrabulin**
Ombrabulin:
Ombrabulin was an experimental drug candidate discovered by Ajinomoto and further developed by Sanofi-Aventis. Ombrabulin is a combretastatin A-4 derivative that exerts its antitumor effect by disrupting the formation of blood vessels needed for tumor growth. It was granted orphan drug status by the European Medicines Agency in April 2011. In January 2013, Sanofi said it had discontinued development of ombrabulin after disappointing results from phase III clinical trials.
**Close collar minting**
Close collar minting:
Close collar minting is a method of coin manufacture that is used almost exclusively today. With close collar minting, the planchet is centred within a solid metal collar during the minting process. This restraining collar prevented the expansion of the planchet sideways and outwards and thus made it possible to mint completely round coins for the first time. These could also have a slightly raised edge (edge bar) and an edge inscription without additional milling. The edge minting made possible by the new technology is not only difficult to forge; it also increases the circulation security of the coins, since coin clipping is very easily noticed. A pearl circle often adjoins the edge bar on the inside.
Close collar minting:
Close collar minting is an invention of the French medalist and engraver Jean-Pierre Droz (1746–1823). His prototype of a functional minting machine had a six-part minting ring.
Close collars were used for the first time in the new Soho Mint. In Germany, Prussia systematically promoted such coinage via the German Customs Union from the middle of the 19th century.
Literature:
Ewald Junge (1977): Droz, Jean-Pierre. "Circular minting". In: Tyll Kroha (main author) Lexikon der Numismatik. Bertelsmann Lexikonverlag, Gütersloh. p. 121.
Gerhard Welter (1977): "Circular minting". In: Tyll Kroha (main author) Lexikon der Numismatik. Bertelsmann Lexikonverlag, Gütersloh. p. 370.
**Chlorophenol**
Chlorophenol:
A chlorophenol is any organochloride of phenol that contains one or more covalently bonded chlorine atoms. There are five basic types of chlorophenols (mono- to pentachlorophenol) and 19 different chlorophenols in total when positional isomerism is taken into account. Chlorophenols are produced by electrophilic halogenation of phenol with chlorine.Most chlorophenols are solid at room temperature. They have a strong, medicinal taste and smell. Chlorophenols are commonly used as pesticides, herbicides, and disinfectants.
List of chlorophenols:
There is a total of 19 chlorophenols, corresponding to the different ways in which chlorine atoms can be attached to the five carbons in the benzene ring of the phenol molecule, excluding the carbon atom to which the hydroxy group is attached. Monochlorophenols have three isomers because there is only one chlorine atom that can occupy one of three ring positions on the phenol molecule; 2-chlorophenol, for example, is the isomer that has a chlorine atom in the ortho position. Pentachlorophenol, by contrast, has only one isomer because all five available ring positions on the phenol are fully chlorinated.
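The count of 19 follows from the symmetry of the phenol ring: the five substitutable positions (2–6) are related by the mirror plane through C1 and C4, so patterns such as 2-chloro and 6-chloro describe the same compound. A small enumeration sketch under that assumption reproduces the count.

```python
from itertools import combinations

# Positions 2-6 of phenol can carry chlorine; the mirror symmetry of the ring
# maps 2<->6 and 3<->5 (position 4 lies on the mirror plane), so two
# substitution patterns related by that reflection are the same chlorophenol.
MIRROR = {2: 6, 3: 5, 4: 4, 5: 3, 6: 2}

def canonical(pattern):
    reflected = tuple(sorted(MIRROR[p] for p in pattern))
    return min(tuple(sorted(pattern)), reflected)

isomers = set()
for n_cl in range(1, 6):                       # mono- through pentachlorophenol
    for pattern in combinations([2, 3, 4, 5, 6], n_cl):
        isomers.add(canonical(pattern))

print(len(isomers))                            # -> 19
by_count = {}
for iso in isomers:
    by_count.setdefault(len(iso), []).append(iso)
for n_cl, members in sorted(by_count.items()):
    print(n_cl, len(members))                  # 3, 6, 6, 3, 1 positional isomers
```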
List of chlorophenols:
Monochlorophenol (3 positional isomers): 2-Chlorophenol, 3-Chlorophenol, 4-Chlorophenol
Dichlorophenol (6 positional isomers): 2,3-Dichlorophenol, 2,4-Dichlorophenol, 2,5-Dichlorophenol, 2,6-Dichlorophenol, 3,4-Dichlorophenol, 3,5-Dichlorophenol
Trichlorophenol (6 positional isomers): 2,3,4-Trichlorophenol, 2,3,5-Trichlorophenol, 2,3,6-Trichlorophenol, 2,4,5-Trichlorophenol, 2,4,6-Trichlorophenol, 3,4,5-Trichlorophenol
Tetrachlorophenol (3 positional isomers): 2,3,4,5-Tetrachlorophenol, 2,3,4,6-Tetrachlorophenol, 2,3,5,6-Tetrachlorophenol
Pentachlorophenol (1 positional isomer)
**Minimum-distance estimation**
Minimum-distance estimation:
Minimum-distance estimation (MDE) is a conceptual method for fitting a statistical model to data, usually the empirical distribution. Often-used estimators such as ordinary least squares can be thought of as special cases of minimum-distance estimation.
While consistent and asymptotically normal, minimum-distance estimators are generally not statistically efficient when compared to maximum likelihood estimators, because they omit the Jacobian usually present in the likelihood function. This, however, substantially reduces the computational complexity of the optimization problem.
Definition:
Let $X_1,\ldots,X_n$ be an independent and identically distributed (iid) random sample from a population with distribution $F(x;\theta)$, $\theta\in\Theta$, where $\Theta\subseteq\mathbb{R}^k$ $(k\geq 1)$. Let $F_n(x)$ be the empirical distribution function based on the sample.
Let $\hat\theta$ be an estimator for $\theta$; then $F(x;\hat\theta)$ is an estimator for $F(x;\theta)$. Let $d[\cdot,\cdot]$ be a functional returning some measure of "distance" between its two arguments. The functional $d$ is also called the criterion function.
If there exists a $\hat\theta\in\Theta$ such that $d[F(x;\hat\theta),F_n(x)] = \inf\{d[F(x;\theta),F_n(x)];\ \theta\in\Theta\}$, then $\hat\theta$ is called the minimum-distance estimate of $\theta$ (Drossos & Philippou 1980, p. 121).
Statistics used in estimation:
Most theoretical studies of minimum-distance estimation, and most applications, make use of "distance" measures which underlie already-established goodness of fit tests: the test statistic used in one of these tests is used as the distance measure to be minimised. Below are some examples of statistical tests that have been used for minimum-distance estimation.
Chi-square criterion The chi-square test uses as its criterion the sum, over predefined groups, of the squared difference between the increases of the empirical distribution and the estimated distribution, weighted by the increase in the estimate for that group.
Cramér–von Mises criterion The Cramér–von Mises criterion uses the integral of the squared difference between the empirical and the estimated distribution functions (Parr & Schucany 1980, p. 616).
Kolmogorov–Smirnov criterion The Kolmogorov–Smirnov test uses the supremum of the absolute difference between the empirical and the estimated distribution functions (Parr & Schucany 1980, p. 616).
Anderson–Darling criterion The Anderson–Darling test is similar to the Cramér–von Mises criterion except that the integral is of a weighted version of the squared difference, where the weighting relates to the variance of the empirical distribution function (Parr & Schucany 1980, p. 616).
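As a concrete illustration of the definition above, the sketch below fits a single parameter by numerically minimising the Kolmogorov–Smirnov distance between the empirical distribution function and a model CDF; the exponential model and the simulated data are assumptions made only for this example.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon

rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=500)     # simulated data (assumed model)
x_sorted = np.sort(sample)
n = len(sample)
ecdf_hi = np.arange(1, n + 1) / n                 # empirical CDF just after each point
ecdf_lo = np.arange(0, n) / n                     # empirical CDF just before each point

def ks_distance(scale):
    """Kolmogorov-Smirnov criterion d[F(x; theta), F_n(x)] for an exponential model."""
    model_cdf = expon.cdf(x_sorted, scale=scale)
    return max(np.max(np.abs(model_cdf - ecdf_hi)),
               np.max(np.abs(model_cdf - ecdf_lo)))

result = minimize_scalar(ks_distance, bounds=(0.1, 10.0), method="bounded")
print(f"minimum-distance estimate of the scale: {result.x:.3f}")
print(f"KS distance at the estimate: {result.fun:.4f}")
```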
Theoretical results:
The theory of minimum-distance estimation is related to that for the asymptotic distribution of the corresponding statistical goodness of fit tests. Often the cases of the Cramér–von Mises criterion, the Kolmogorov–Smirnov test and the Anderson–Darling test are treated simultaneously by treating them as special cases of a more general formulation of a distance measure. Examples of the theoretical results that are available are: consistency of the parameter estimates; the asymptotic covariance matrices of the parameter estimates.
**Hawaiian grammar**
Hawaiian grammar:
This article summarizes grammar in the Hawaiian language.
Syntax:
Hawaiian is a predominantly verb–subject–object language. However, word order is flexible, and the emphatic word can be placed first in the sentence.: p28 Hawaiian largely avoids subordinate clauses,: p.27 and often uses a possessive construction instead.: p.41 Hawaiian, unlike English, is a pro-drop language, meaning pronouns may be omitted when the meaning is clear from context.: p.108 The typical detailed word order is given by the following,: p.19 with most items optional: Tense/aspect signs: i, ua, e, etc.
Syntax:
Verb Qualifying adverb: mau, wale, ole, pu, etc.
Passive sign: ʻia Verbal directives: aku, mai, etc.
Syntax:
Locatives nei or lā, or particles ana or ai Strengthening particle: nō Subject Object or predicate noun Exceptions to VSO word order If the sentence has a negative mood and the subject is a pronoun, word order is subject–verb–object following the negator ʻaʻole, as in: Another exception is when an emphatic adverbial phrase begins the sentence. In this case, a pronoun subject precedes the verb.: p.29 Interrogatives Yes–no questions can be unmarked and expressed by intonation,: p.32 or they can be marked by placing anei after the leading word of the sentence.: p.23 Examples of question-word questions include: See also Hawaiian Language: Syntax and other resources Archived 2010-05-22 at the Wayback Machine.
Nouns:
As Hawaiian does not particularly discern between word types, any verb can be nominalized by preceding it with the definite article; however, some words that are used as nouns are rarely or never used as verbs.: p.37 Within the noun phrase, adjectives follow the noun (e.g. ka hale liʻiliʻi "the house small", "the small house"), while possessors precede it (e.g. kou hale "your house"). Numerals precede the noun in the absence of the definite article, but follow the noun if the noun is preceded by the definite article.: p.31 Articles Every noun is preceded by an article (ka‘i). The three main ones are: ke and ka – definite singular – ke for words starting with the letters k, e, a and o (usually memorised as the ke ao "the cloud" rule); exceptions include words called nā kūʻēlula "the rule defiers", e.g. ke pākaukau "the table", ke ʻō "the fork" and ke mele "the song". For all other words ka is used.
Nouns:
he – indefinite singular nā – plural (definite or indefinite) Number In noun phrases, two numbers (singular and plural) are distinguished. The singular articles ke and ka and the plural article nā are the only articles that mark number: ka pu‘u "the hill" vs. nā pu‘u "the hills"In the absence of these articles, plurality is usually indicated by inserting the pluralizing particle mau immediately before the noun: he hale "a house" vs. he mau hale "houses" ko‘u hoaaloha "my friend" vs. ko‘u mau hoaaloha "my friends"Most nouns do not change when pluralized; however, some nouns referring to people exhibit a lengthened vowel in the third syllable from the end in the plural: he wahine "a woman" vs. he mau wāhine "women" ka ‘elemakule "the old man" vs. nā ‘elemākule "the old men" ia kahuna "the aforementioned priest" vs. ia mau kāhuna "the aforementioned priests" Gender In Hawaiian, there is no gender distinction in the third person. The word for third person (he, she, it) is ia. It is commonly preceded by ʻo as in ʻo ia and, following standard modern orthographical rules, is written as two words, but it can be seen as one when written by older speakers and in historical documents.
Nouns:
Hawaiian nouns belong to one of two genders, this gender system is not based on biological sex. The two genders are known as the kino ʻō (o-class) and the kino ʻā (a-class). These classes are only taken into account when using the genitive case (see table of personal pronouns below).
Nouns:
Kino ʻō nouns, in general, are nouns whose creation cannot be controlled by the subject, such as inoa "name", puʻuwai "heart", and hale "house". Specific categories for o-class nouns include: modes of transportation (e.g. kaʻa "car" and lio "horse"), things that you can go into, sit on or wear (e.g., lumi "room", noho "chair", ʻeke "bag", and lole "clothes"), and people in your generation (e.g., siblings, cousins) and previous generations (e.g. makuahine "mother").
Nouns:
Kino ʻā nouns, in general, are those whose creation can be controlled, such as waihoʻoluʻu "color", as in kaʻu waihoʻoluʻu punahele "my favorite color". Specific categories include: your boyfriend or girlfriend (ipo), spouse, friends, and future generations in your line (all of your descendants).
Nouns:
The change of preposition of o "of" (kino ʻō) to a "of "(kino ʻā) is especially important for prepositional and subordinate phrases: ka mea "the thing" kona mea "his thing (nonspecific)" kāna mea "his thing (which he created or somehow chose)" ka mea āna i ʻike ai "the thing that he saw" kāna (mea) i ʻike ai "what he saw" kēia ʻike ʻana āna "this thing that he saw (purposefully)" kēia ʻike ʻana ona "this thing that he saw (purportedly)" where the seeing isn't much import
Verbs:
Tense, aspect, and mood Verbs can be analytically marked with particles to indicate tense, aspect and mood. Separate verb markers are used in relative clauses, after the negation word ʻaʻole, and in some other situations.
The marker ala/lā implies greater spatial or temporal distance from the speaker than nei or ana.
Verbs:
In his "Introduction to Hawaiian Grammar," W.D. Alexander proposed that Hawaiian has a pluperfect tense as follows: ua + verb + ʻē: pluperfect tense/aspect (ua hana ʻē au "I had worked")However, this is debatable since ʻē simply means "beforehand, in advance, already". Andrews [Gram. 1.4] suggested the same thing that Alexander forwards. However, Ua hana ʻē au could mean both "I have already worked", "I already worked", and (depending on the temporal context) "I had worked previous to that moment." "Already" is the operative unifier for these constructions as well as the perfective quality denoted by ua. ʻĒ therefore is acting like a regular Hawaiian adverb, following the verb it modifies: Ua hana paha au. Perhaps I worked.
Verbs:
Ua hana mālie au. I worked steadily, without disruption.
Ua hana naʻe au. I even worked.
Verbs:
Passive Voice Transitive verbs can be passivized with the particle ʻia, which follows the verb but precedes tense/aspect/mood markers. The agent, if specified, is marked with the preposition e, usually translated as "by" in English: Ke kūkulu ʻia lā ka hale e mākou. The house is being built by us Equative sentences Hawaiian does not have a copula verb meaning "to be" nor does it have a verb meaning "to have". Equative sentences are used to convey this group of ideas. All equative sentences in Hawaiian are zero-tense/mood (i.e., they cannot be modified by verbal markers, particles or adverbs).
Verbs:
Pepeke ʻAike He "A is a B" Pepeke ʻAike He is the name for the simple equative sentence "A is a(n) B". The pattern is "He B (ʻo) A." ʻO marks the third person singular pronoun ia (which means "he/she/it") and all proper nouns.
He kaikamahine ʻo Mary. Mary is a girl.
He kaikamahine ʻo ia. She is a girl.
He Hawaiʻi kēlā kaikamahine. That girl is (a) Hawaiian.
He haumana ke keiki. The child is a student.
Pepeke ʻAike ʻO Pepeke ʻAike ʻO is the name for the simple equative sentence "A is B." The pattern is " ʻO A (ʻo) B," where the order of the nouns is interchangeable and where ʻo invariably marks the third person singular pronoun ia and all proper nouns (regardless of where it is in the utterance).
ʻO Mary ʻo ia. ʻO ia ʻo Mary. She is Mary.
ʻO Mary nō ia. ʻO ia nō ʻo Mary. It's Mary.
ʻO wau ʻo Mary. ʻO Mary wau. I'm Mary.
ʻO ʻoe ʻo Mary. ʻO Mary ʻoe. You are Mary.
ʻO Mary ke kaikamahine. ʻO ke kaikamahine ʻo Mary. Mary is the girl. The girl is Mary.
ʻO ka haumana ke keiki. ʻO ke keiki ka haumana. The student is the child. The child is the student.
Pepeke Henua (Locational equative) Pepeke Henua is the name for the simple equative sentence "A is located (in/on/at/etc. B)." The pattern is "Aia (ʻo) A..." Aia ʻo Mary ma Hilo. Mary is in Hilo.
Aia ʻo ia maloko o ka wai. He/she/it is inside (of) the water.
Aia ka haumana mahea? Aia mahea ka haumana? Where is the student? Pepeke ʻAike Na Pepeke ʻAike Na is the name of the simple equative sentence "A belongs to B." The pattern is "Na (B) A." The singular pronouns undergo predictable changes.
Pepeke ʻAike Na Examples: Naʻu ke kaʻa. The car belongs to me. That's my car.
Na Mary ke keiki. The child is Mary's. It's Mary's child.
Nāna ka penikala. The pencil belongs to him/her/it.
Nāu nō au. I belong to you. I'm yours.
Note: ʻO kēia ke kaʻa nāu. This is the car I'm giving to you.
He makana kēlā na ke aliʻi. This is a present for the chief.
Verbs:
Other verbal particles Other post-verbal markers include: pp.228–231 verb + mai: "toward the speaker" verb + aku: "away from the speaker" verb + iho: "down" verb + aʻe: "up", "adjacent" stative verb + iā + agent: agent marker Causative verb creation Causative verbs can be created from nouns and adjectives by using the prefix ho'o-, as illustrated in the following:: p.24 nani "pretty"; hoʻonani "to beautify" nui "large"; hoʻonui "to enlarge" hui "club"; hoʻohui "to form a club"
Reduplication:
Reduplication: p.23 can emphasize or otherwise alter the meaning of a word. Examples are: ʻau "to swim"; ʻauʻau "to bathe" haʻi "to say"; haʻihaʻi "to speak back and forth" maʻi "sick"; maʻimaʻi "chronically sick"
**Growing teeth**
Growing teeth:
Growing teeth is a bioengineering technology with the ultimate goal to create new full molars in a person or an animal.
Chronology:
2002 – British scientists learned how to grow almost whole, though feeble, teeth from single cells.
2007 – Japanese scientists grew almost fully formed new teeth in mice, but without roots.
2009 – Full teeth were grown in mice from stem cells, and for the first time a tooth root was grown as well; however, the grown teeth were slightly smaller than native teeth.
2013 – Chinese scientists grew human teeth in mice using stem cells taken from human urine.
2015 – Growing New Teeth in the Mouth Using Stem-Cell Dental Implants.
2018 – Protein disorder–order interplay to guide the growth of hierarchical mineralized structures.
Methods:
Outer – the tooth is grown separately and implanted in the patient.
Inner – the tooth is grown directly into the patient's mouth.
Regenerative Research:
2012 – Indian researchers found a way to cure and regenerate an infected root canal through stem cell activation. This replaces the old method of removing the tooth nerve.
2013 - Swiss researchers regenerate tooth enamel of early cavities using a peptide-based biomaterial.
Links:
Alligators Inspire New Way for Growing Teeth Archived 13 March 2016 at the Wayback Machine
**Eruptive pseudoangiomatosis**
Eruptive pseudoangiomatosis:
Eruptive pseudoangiomatosis is a cutaneous condition characterized by the sudden appearance of 2- to 4-mm blanchable red papules.: 399 It can appear in children or adults. The papules appear similar to hemangiomas. Viruses found in patients include Echovirus 25 and 32, coxsackie B, Epstein–Barr virus, and cytomegalovirus.
**Isovaleramide**
Isovaleramide:
Isovaleramide is an organic compound with the formula (CH3)2CHCH2C(O)NH2. The amide derived from isovaleric acid, it is a colourless solid.
Occurrence and biological activity:
Isovaleramide is a constituent of valerian root.
Occurrence and biological activity:
In humans, it acts as a mild anxiolytic at lower doses and as a mild sedative at higher doses. Isovaleramide has been shown to be non-cytotoxic and does not act as a CNS stimulant. It inhibits liver alcohol dehydrogenases and has a reported LD50 of greater than 400 mg/kg when administered intraperitoneally in mice. It is a positive allosteric modulator of the GABAA receptor, similarly to isovaleric acid.
**GTF3C4**
GTF3C4:
General transcription factor 3C polypeptide 4 is a protein that in humans is encoded by the GTF3C4 gene.
Interactions:
GTF3C4 has been shown to interact with GTF3C2, GTF3C1, POLR3C and GTF3C5.
**Jacob's staff**
Jacob's staff:
The term Jacob's staff is used to refer to several things, also known as a cross-staff, a ballastella, a fore-staff, a ballestilla, or a balestilha. In its most basic form, a Jacob's staff is a stick or pole with length markings; most staffs are much more complicated than that, and usually contain a number of measurement and stabilization features. The two most frequent uses are: in astronomy and navigation, as a simple device to measure angles, later replaced by the more precise sextants; in surveying (and scientific fields that use surveying techniques, such as geology and ecology), as a vertical rod that penetrates or sits on the ground and supports a compass or other instrument. The simplest use of a Jacob's staff is to make qualitative judgements of the height and angle of an object relative to the user of the staff.
In astronomy and navigation:
In navigation the instrument is also called a cross-staff and was used to determine angles, for instance the angle between the horizon and Polaris or the sun to determine a vessel's latitude, or the angle between the top and bottom of an object to determine the distance to said object if its height is known, or the height of the object if its distance is known, or the horizontal angle between two visible locations to determine one's point on a map.
In astronomy and navigation:
The Jacob's staff, when used for astronomical observations, was also referred to as a radius astronomicus. With the demise of the cross-staff, in the modern era the name "Jacob's staff" is applied primarily to the device used to provide support for surveyor's instruments.
In astronomy and navigation:
Etymology The origin of the name of the instrument is not certain. Some refer to the Biblical patriarch Jacob, specifically in the Book of Genesis (Gen 32:11). It may also take its name after its resemblance to Orion, referred to by the name of Jacob on some medieval star charts. Another possible source is the Pilgrim's staff, the symbol of St James (Jacobus in Latin). The name cross staff simply comes from its cruciform shape.
In astronomy and navigation:
History The original Jacob's staff was developed as a single pole device, in the 14th century, that was used in making astronomical measurements. It was first described by the French-Jewish mathematician Levi ben Gerson of Provence, in his "Book of the Wars of the Lord" (translated in Latin as well as Hebrew). He used a Hebrew name for the staff that translates to "Revealer of Profundities", while the term "Jacob's staff" was used by his Christian contemporaries. Its invention was likely due to fellow French-Jewish astronomer Jacob ben Makir, who also lived in Provence in the same period. Attribution to 15th century Austrian astronomer Georg Purbach is less likely, because Purbach was not born until 1423. (Such attributions may refer to a different instrument with the same name.) Its origins may be traced to the Chaldeans around 400 BC.
In astronomy and navigation:
Although it has become quite accepted that ben Gerson first described Jacob's staff, the British Sinologist Joseph Needham theorizes that the Song Dynasty Chinese scientist Shen Kuo (1031–1095), in his Dream Pool Essays of 1088, described a Jacob's staff. Shen was an antiquarian interested in ancient objects; after he unearthed an ancient crossbow-like device from a home's garden in Jiangsu, he realized it had a sight with a graduated scale that could be used to measure the heights of distant mountains, likening it to how mathematicians measure heights by using right-angle triangles. He wrote that when one viewed the whole breadth of a mountain with it, the distance on the instrument was long; when viewing a small part of the mountainside, the distance was short; this, he wrote, was due to the cross piece that had to be pushed further away from the eye, while the graduation started from the further end. Needham does not mention any practical application of this observation.During the medieval European Renaissance, the Dutch mathematician and surveyor Adriaan Metius developed his own Jacob's staff; Dutch mathematician Gemma Frisius made improvements to this instrument. In the 15th century, the German mathematician Johannes Müller (called Regiomontanus) made the instrument popular in geodesic and astronomical measurements.
In astronomy and navigation:
Construction In the original form of the cross-staff, the pole or main staff was marked with graduations for length. The cross-piece (BC in the drawing to the right), also called the transom or transversal, slides up and down on the main staff. On older instruments, the ends of the transom were cut straight across. Newer instruments had brass fittings on the ends, with holes in the brass for observation. (In marine archaeology, these fittings are often the only components of a cross-staff that survive.) It was common to provide several transoms, each with a different range of angles it would measure; three transoms were common. In later instruments, separate transoms were switched in favour of just one with pegs to indicate the ends. These pegs were mounted in one of several pairs of holes symmetrically located on either side of the transom. This provided the same capability with fewer parts. The transom on Frisius' version had a sliding vane on the transom as an end point.
In astronomy and navigation:
Usage The user places one end of the main staff against their cheek, just below the eye. By sighting the horizon at the end of the lower part of the transom (or through the hole in the brass fitting) [B], then adjusting the cross arm on the main arm until the sun is at the other end of the transom [C], the altitude can be determined by reading the position of the cross arm on the scale on the main staff. This value was converted to an angular measurement by looking up the value in a table.
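The table lookup amounts to simple trigonometry: with a transom of half-length h set at a distance s from the eye along the staff, the observed altitude is 2·arctan(h/s). The sketch below tabulates that relation for an assumed transom length; the dimensions are illustrative, not those of any historical instrument.

```python
import math

TRANSOM_LENGTH_CM = 60.0          # assumed full length of the cross-piece

def altitude_deg(staff_distance_cm, transom_length_cm=TRANSOM_LENGTH_CM):
    """Angle subtended between the two ends of the transom as seen from the eye."""
    half = transom_length_cm / 2.0
    return 2.0 * math.degrees(math.atan(half / staff_distance_cm))

if __name__ == "__main__":
    # One row per graduation mark on the main staff (distances in cm from the eye).
    for s in (40, 60, 80, 100, 120, 140):
        print(f"transom at {s:>3} cm -> altitude ~ {altitude_deg(s):5.1f} deg")
```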
In astronomy and navigation:
Cross-staff for navigation The original version was not reported to be used at sea until the Age of Discoveries. Its use was reported by João de Lisboa in his Treatise on the Nautical Needle of 1514. Johannes Werner suggested the cross-staff be used at sea in 1514, and improved instruments were introduced for use in navigation. John Dee introduced it to England in the 1550s. In the improved versions, the rod was graduated directly in degrees. This variant of the instrument is not correctly termed a Jacob's staff but is a cross-staff. The cross-staff was difficult to use. In order to get consistent results, the observer had to position the end of the pole precisely against his cheek. He had to observe the horizon and a star in two different directions while not moving the instrument when he shifted his gaze from one to the other. In addition, observations of the sun required the navigator to look directly at the sun. This could be an uncomfortable exercise and made it difficult to obtain an accurate altitude for the sun. Mariners took to mounting smoked glass on the ends of the transoms to reduce the glare of the sun. As a navigational tool, this instrument was eventually replaced, first by the backstaff or quadrant, neither of which required the user to stare directly into the sun, and later by the octant and the sextant. Perhaps influenced by the backstaff, some navigators modified the cross-staff to operate more like the former. Vanes were added to the ends of the longest cross-piece and another to the end of the main staff. The instrument was reversed so that the shadow of the upper vane on the cross-piece fell on the vane at the end of the staff. The navigator held the instrument so that he would view the horizon lined up with the lower vane and the vane at the end of the staff. By aligning the horizon with the shadow of the sun on the vane at the end of the staff, the elevation of the sun could be determined. This actually increased the accuracy of the instrument, as the navigator no longer had to position the end of the staff precisely on his cheek.
In astronomy and navigation:
Another variant of the cross-staff was a spiegelboog, invented in 1660 by the Dutchman, Joost van Breen.
In astronomy and navigation:
Ultimately, the cross-staff could not compete with the backstaff in many countries. In terms of handling, the backstaff was found to be easier to use. However, several authors have shown that in terms of accuracy the cross-staff was superior to the backstaff. Backstaffs were no longer allowed on board Dutch East India Company vessels from 1731, with octants not permitted until 1748.
In surveying:
In surveying, the term jacob staff refers to a monopod, a single straight rod or staff made of nonferrous material, pointed and metal-clad at the bottom for penetrating the ground. It also has a screw base and occasionally a ball joint on the mount, and is used for supporting a compass, transit, or other instrument. The term cross-staff may also have a different meaning in the history of surveying. While the astronomical cross-staff was used in surveying for measuring angles, two other devices referred to as a cross-staff were also employed.
In surveying:
Cross-head, cross-sight, surveyor's cross or cross - a drum or box shaped device mounted on a pole. It had two sets of mutually perpendicular sights. This device was used by surveyors to measure offsets. Sophisticated versions had a compass and spirit levels on the top. The French versions were frequently eight-sided rather than round.
In surveying:
Optical square - an improved version of the cross-head, the optical square used two silvered mirrors at 45° to each other. This permitted the surveyor to see along both axes of the instrument at once. In the past, many surveyor's instruments were used on a Jacob's staff. These include: the cross-head (cross-sight, surveyor's cross or cross), graphometer, circumferentor, Holland circle, miner's dial, optical square, surveyor's sextant, surveyor's target and Abney level. Some devices, such as the modern optical targets for laser-based surveying, are still in common use on a Jacob's staff.
In surveying:
In geology In geology, the Jacob's staff is mainly used to measure stratigraphic thicknesses in the field, especially when bedding is not visible or unclear (e.g., covered outcrop) and when, due to the configuration of an outcrop, the apparent and true thicknesses of beds diverge, making the use of a tape measure difficult. A certain level of error is to be expected when using this tool, due to the lack of an exact reference for measuring stratigraphic thickness. High-precision designs include a laser able to slide vertically along the staff and to rotate on a plane parallel to bedding.
**Roald Dahl short stories bibliography**
Roald Dahl short stories bibliography:
Roald Dahl short stories bibliography is a comprehensive annotated list of short stories written by Roald Dahl.
Collections:
— (1946). Over to You: Ten Stories of Flyers and Flying. USA: Reynal & Hitchcock.
— (1973). Over to You: Ten Stories of Flyers and Flying (Reprinted ed.). London: Penguin. ISBN 978-0140035742.
— (1953). Someone Like You. Knopf.
— (1984). Someone Like You (Rev. and expanded ed.). Harmondsworth: Penguin. ISBN 978-0140030747.
— (1960). Kiss Kiss. Knopf.
— (1962). Kiss Kiss. Harmondsworth, Middlesex, England: Penguin Books.
— (1974). Switch Bitch. New York: Alfred A. Knopf. ISBN 978-0394494739.
— (2012). Switch Bitch (Reissue ed.). London: Michael Joseph. ISBN 978-0241955727.
— (1977). The Wonderful Story of Henry Sugar and Six More. London: Jonathan Cape. ISBN 978-0224015479.
— (1980). Tales of the Unexpected (Repr. ed.). Harmondsworth, Middlesex, England: Penguin. ISBN 0-14-005131-7.
— (1980). More Tales of the Unexpected (First ed.). London: Penguin Books. ISBN 978-0140056068.
— (1986). Two Fables. With illustrations by Graham Dean (First ed.). Harmondsworth, Middlesex, England: Viking/Penguin. ISBN 978-0670815302.
— (1989). Ah, Sweet Mystery of Life. London: Michael Joseph. ISBN 978-0140118476.
— (2001). The great automatic grammatizator and other stories. London: Puffin. ISBN 978-0141311500.
— (2002). Skin and Other Stories. New York: Puffin. ISBN 978-0141310343.
Omnibus editions — (1978). The Best of Roald Dahl: stories from Over to You, Someone Like You, Kiss Kiss, Switch Bitch. New York: Vintage Books. ISBN 978-0394725499.
— (1986). The Roald Dahl Omnibus. New York: Dorset Press. ISBN 978-0880291248.
— (1991). The Collected Short Stories of Roald Dahl. London: Michael Joseph. ISBN 978-0708987421.
— (2006). Roald Dahl: Collected Stories (Reissue from 1991 ed.). New York: Alfred A. Knopf. ISBN 978-0307264909.
— (1997). The Roald Dahl Treasury. London: Jonathan Cape. ISBN 978-0224046916.
— (2013). The Complete Short Stories: Volume One (1944–1953). London: Michael Joseph. ISBN 978-1405910101.
— (2013). The Complete Short Stories: Volume Two (1954–1988). London: Michael Joseph. ISBN 978-1405910118.
**Lee helm**
Lee helm:
There are two different meanings of the term 'lee helm', depending on whether one is discussing sailboats or motorized ships.
Sailboats::
Lee helm is the tendency of a sailboat to turn away from the wind while under sail. It is the opposite of weather helm (the tendency of a sailboat to "round up" into the wind). A boat with lee helm will be difficult to sail close hauled, and tacking may be difficult.
Description:
Lee helm is considered dangerous in a sailboat. While sailing, an undirected boat with lee helm will bear (turn) away from the wind, accelerate, and perform an accidental or uncontrolled gybe, perhaps repeatedly. In an uncontrolled gybe, the force of the wind moves the sails and boom from one side of the boat to the other. In a strong wind, this movement will be very fast and forceful, and can damage the boat or the sails, injure the crew, or cause the boat to broach (lay over on its side).
Description:
The cause of lee helm is that the center of pressure exerted by the wind on the sails falls too far forward of the center of resistance of the hull, the natural point about which the hull tries to pivot. This tends to push the bow of the boat away from the wind. It can be due to poor design, for example with the mast too far forward.
Description:
A small amount of lee helm can be counteracted with the rudder, but this introduces significant drag in the water and slows the boat. A small amount of lee helm can also be cured by raking the mast backward (which moves the center of pressure aft), reducing the size of the jib on a sloop-rigged boat, or increasing the size of the mizzen sail on a yawl or a ketch. Large amounts of lee helm can only be corrected by altering the placement of the mast(s) or keel/centerboard, a non-trivial venture.
Description:
Ideally, a sailboat's sail plan should allow for a neutral helm, avoiding weather helm's tendency to round up too strongly (and perhaps place the boat 'in irons') and lee helm's tendency to make the boat fall off from the wind, possibly in an uncontrolled and dangerous gybe. How this balance is achieved will differ from sail plan to sail plan, depending on where the sails are carried relative to the center of effort of the hull.
Lee helm on U.S. Naval Vessels:
Traditionally, two stations are on the bridge of a ship for controlling the vessel's maneuvers: the helm, which uses a wheel (or touchscreen equivalent) to send signals to control the position of the rudder or rudders, and the lee helm, which traditionally inputs speed commands by operating an engine order telegraph to send engine commands to the engineering personnel below decks. On modern US Navy surface vessels using gas turbine or diesel propulsion, the lee helm directly controls the ship's speed via throttle (either a manual throttle lever or a touch screen on some ships). The bridge throttle directly manipulates propeller pitch and/or engine RPM any time the bridge station is in control, or switches to a traditional order telegraph during those rare situations when the bridge station relinquishes control or the control system fails. The lee helm on vessels with nuclear propulsion (carriers and submarines) and conventional steam propulsion (amphibious assault ships prior to LHD-8) uses a telegraph. One origin for the lee helm designation for the engine-order telegraph: large sailing ships had a helmsman at the wheel, plus an auxiliary helmsman positioned on the leeward side of the helm (hence the term lee helm) to assist the helmsman in controlling the wheel while coming about or while the vessel was laboring in high winds or an extreme sea state.
Lee helm on U.S. Naval Vessels:
The US Navy's Integrated Bridge and Navigation System (IBNS) uses two touchscreens and a steering wheel for steering. The helm is on the left, and the lee helm is on the right. In August 2017, the destroyer USS John S. McCain was involved in a collision with the merchant vessel Alnic MC, resulting in massive casualties, including the deaths of ten American sailors. The NTSB accident report found that an unreliable IBNS and a lack of training were the causes of the accident.
**Balloon effect**
Balloon effect:
The balloon effect is a criticism of United States drug policy. The name draws an analogy between efforts to eradicate the production of illegal drugs in South American countries and squeezing a balloon: If a balloon is squeezed the air is moved, but does not disappear, instead moving into another area of less resistance.
Examples:
Examples of this displacement in drug traffic include: Fumigation of marijuana in Mexico caused production to migrate to Colombia.
Marijuana crackdowns in the Sierra Nevada de Santa Marta moved activity to Cauca.
In the late 1990s, coca was largely eradicated in Peru and Bolivia, only to be replaced by new crops in Colombia.
Examples:
Recently, with the intense spraying in the Colombian Putumayo Department, coca has been planted in other departments including Arauca, Cauca, Caquetá, Guaviare, Huila, Meta, Nariño, and Santander. As described in The Economist: Drug-policy geeks call this the "balloon effect": pushing down on drug production in one region causes it to bulge somewhere else. Latin Americans have a better phrase: the efecto cucaracha, or cockroach effect. You can chase the pests out of one corner of your house, but they have an irritating habit of popping up somewhere else.
Examples:
Brazil and the Southern Cone (Chile, Uruguay, Paraguay, Argentina) neglected their respective drug trafficking issues and due to the concentration on the Andean region, these were neglected by the United States as well. These nations ignored the problem primarily due to its slow introduction and penetration into their society, the insistence from the U.S. that the sources of the drugs was the only problem and because the governments at the time were more concerned with foreign debt, inflation, economic growth, civil-military relations and political survival. The United States continued to increase their anti-drug operations in the Andean region resulting in displacement. This means that the U.S. tactics forced the drug traffickers to search for safer areas with less government pressure to eliminate the flow of narcotics. The drug traffickers took advantage of the neglected Southern Cone and began shifting their routes, locations for cocaine laboratories and money laundering centres. These shifts have also created growing drug consumption issues among the Southern Cone countries. While the role of the Southern Cone had been that of a transhipment point for cocaine produced in the Andean region, further evidence appeared to indicate that in fact since 1984 the region had been used extensively by Colombian and Bolivian drug traffickers. Cocaine labs were found in Northern and Western Brazil and in Argentina. It was also found that Uruguay and Chile had become major financial centres for money laundering after the invasion of Panama. Uruguay was particularly attractive as it has one of the most open banking systems in the Western hemisphere and the government has always put great emphasis on having tight bank secrecy laws.
Other contexts:
This also describes the offsetting behavior in health care when costs shift from hospital care to home care.
As a software development colloquialism, it is often used to describe the effect of fixing a bug or problem in one area of the system, where the fix itself then causes another problem to occur; fixing this subsequent issue then results in further problems, ad infinitum.
This term is also used in business to describe situations where changes made in one area of a business lead to unforeseen and adverse effects in other parts of the business.
**(2R)-2-Methylpent-4-enoic acid**
(2R)-2-Methylpent-4-enoic acid:
(2R)-2-Methylpent-4-enoic acid is an organic acid with the chemical formula C6H10O2. Other names for this molecule include (R)-2-methyl-4-pentenoic acid, (R)-(−)-2-methyl-4-pentenoic acid, and methylallylacetic acid.
Synthesis:
(R)-2-Methylpent-4-enoic acid can be synthesized using a chiral auxiliary such as an oxazolidinone derivative, popularized by David Evans. One route of synthesis consists of three steps: acylation of the oxazolidinone, using triethylamine as a base and DMAP as an acyl-transfer catalyst; addition of the allyl group via enolate alkylation, using sodium bis(trimethylsilyl)amide as a base and allyl iodide as the allyl donor; and cleavage of the oxazolidinone with LiOH in hydrogen peroxide solution, followed by sulfite to reduce the residual peroxide and give the free acid.
Uses:
(R)-2-Methylpent-4-enoic acid can also be used in the synthesis of other chiral compounds. For example, it has been used in the synthesis of the drug sacubitril as a reagent for introducing a chiral center into the molecule.
**T Tauri wind**
T Tauri wind:
The T Tauri wind, so named because of the young star currently in this stage, is a phenomenon indicative of the phase of stellar development between the accretion of material from the slowly rotating material of a solar nebula and the ignition of the hydrogen that has agglomerated into the protostar.
T Tauri wind:
The protostar at first only has about 1% of its final mass. But the envelope of the star continues to grow as infalling material is accreted. After 10,000–100,000 years, thermonuclear fusion begins in its core, then a strong stellar wind is produced which stops the infall of new mass. The protostar is now considered a young star since its mass is fixed, and its future evolution is now set.
The evolutionary picture of low mass protostars:
Initially there is a random amount of interstellar gaseous matter, mainly hydrogen, containing traces of dust (ices, carbon, rocks). The T Tauri stars, with masses less than twice the mass of the Sun, are thought to follow this process: initially, the clouds which collapse are thought to be very slowly rotating. The dense cores collapse faster than the less dense outer regions of the cloud; this follows from the free-fall time ~ 1/√(G × density). The initial collapse of the core is quite fast: time ~ 1/√(6.7×10⁻⁸ cm³ g⁻¹ s⁻² × 10⁻¹⁸ g/cm³) ~ 50,000–100,000 years or so. The lower-density envelope takes longer to collapse and accrete onto the protostar: time ~ millions of years or so. Roughly speaking, the Sun formed in this way.
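The order-of-magnitude arithmetic in that timescale estimate can be checked directly; the sketch below evaluates 1/√(Gρ), together with the more precise spherical free-fall time √(3π/(32Gρ)), for the quoted core density of 10⁻¹⁸ g/cm³.

```python
import math

G_CGS = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
SECONDS_PER_YEAR = 3.156e7

def freefall_estimate_years(density_g_cm3):
    """Order-of-magnitude collapse time t ~ 1/sqrt(G * rho)."""
    return 1.0 / math.sqrt(G_CGS * density_g_cm3) / SECONDS_PER_YEAR

def freefall_exact_years(density_g_cm3):
    """Free-fall time of a uniform, pressureless sphere: sqrt(3*pi / (32*G*rho))."""
    return math.sqrt(3.0 * math.pi / (32.0 * G_CGS * density_g_cm3)) / SECONDS_PER_YEAR

if __name__ == "__main__":
    rho = 1e-18   # g/cm^3, the dense-core density quoted in the text
    print(f"1/sqrt(G*rho)        ~ {freefall_estimate_years(rho):,.0f} yr")
    print(f"sqrt(3*pi/(32*G*rho)) ~ {freefall_exact_years(rho):,.0f} yr")
```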
The evolutionary picture of low mass protostars:
The inside-out collapse leads to the formation of the forming star in the center of the cloud which then slowly builds up its mass by accreting the outer layers of the cloud.
The evolutionary picture of low mass protostars:
Another noteworthy aspect of this later stage of formation is that before the star actually gets hot enough to ignite nuclear fusion, an intense stellar wind is generated. Often, because the cloud was slowly rotating, a disk of material forms around the star. The disk collimates the intense stellar wind into two oppositely directed beams, producing what is referred to as a bipolar flow, which can cause the forming star to lose up to 0.4 solar masses and can start to disrupt the cloud.
The evolutionary picture of low mass protostars:
Even though it takes several millions of years for the cloud to accrete onto the protostar, because the protostars are relatively low mass, it takes even longer to slowly contract and approach starhood. For the most part, the cloud has a chance to accrete onto the protostar before the violent stages of evolution begin.
The character of accretion and stellar wind parameters of T Tauri stars:
The main portion of the emission continuum of classical T Tauri stars is formed outside the accretion shock, which means a great deal of accreting matter falls onto the star in a nearly horizontal direction. This gas decelerates in a turbulent layer near the stellar surface. Two scenarios have been suggested to explain this character of accretion: two-stream accretion (through the boundary layer and the magnetosphere), and magnetospheric accretion by way of streams in which the bulk of the matter falls onto the star in a nearly horizontal direction.
The character of accretion and stellar wind parameters of T Tauri stars:
Observations have provided quantitative parameters of the disk wind, derived from the analysis of optical and UV spectra of CTTS. Matter outflows are observed from a disk region with an outer radius of < 0.5 AU. The outflowing matter initially moves almost along the disk until it is accelerated up to V > 100 km/s and only afterwards begins to collimate. The inner region of the wind is collimated into the jet at a distance < 3 AU from the disk midplane. The Vz gas velocity component in the jet decreases with increasing distance from the jet axis. The gas temperature at the base of the jet is less than 20,000 kelvins.
**SCAMP1**
SCAMP1:
Secretory carrier-associated membrane protein 1 is a protein that in humans is encoded by the SCAMP1 gene.
Function:
This gene product belongs to the SCAMP family of proteins which are secretory carrier membrane proteins. They function as carriers to the cell surface in post-golgi recycling pathways. Different family members are highly related products of distinct genes, and are usually expressed together. These findings suggest that the SCAMPs may function at the same site during vesicular transport rather than in separate pathways.
Interactions:
SCAMP1 has been shown to interact with ITSN1 and AP1GBP1.
**Direct market access**
Direct market access:
Direct market access (DMA) is a term used in financial markets to describe electronic trading facilities that give investors wishing to trade in financial instruments a way to interact with the order book of an exchange. Normally, trading on the order book is restricted to broker-dealers and market making firms that are members of the exchange. Using DMA, investment companies (also known as buy side firms) and other private traders use the information technology infrastructure of sell side firms such as investment banks and the market access that those firms possess, but control the way a trading transaction is managed themselves rather than passing the order over to the broker's own in-house traders for execution. Today, DMA is often combined with algorithmic trading, giving access to many different trading strategies. Certain forms of DMA, most notably "sponsored access", have raised substantial regulatory concerns because of the possibility that a malfunction by an investor could cause widespread market disruption.
History:
As financial markets moved on from traditional open outcry trading on exchange trading floors towards decentralized electronic, screen-based trading and information technology improved, the opportunity for investors and other buy side traders to trade for themselves rather than handing orders over to brokers for execution began to emerge. The implementation of the FIX protocol gave market participants the ability to route orders electronically to execution desks. Advances in the technology enabled more detailed instructions to be submitted electronically with the underlying order.
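To make the order-routing mechanics concrete, here is a minimal sketch that assembles a FIX 4.2 NewOrderSingle (MsgType 35=D) message, including the body-length and checksum fields; the session identifiers, symbol and price are hypothetical, and a production DMA gateway would use a full FIX engine (such as the open-source QuickFIX) rather than hand-built strings.

```python
SOH = "\x01"  # FIX field delimiter

def fix_message(body_fields):
    """Assemble a FIX 4.2 message: prepend BeginString/BodyLength, append CheckSum."""
    body = SOH.join(f"{tag}={value}" for tag, value in body_fields) + SOH
    head = f"8=FIX.4.2{SOH}9={len(body)}{SOH}"
    checksum = sum(bytearray((head + body).encode())) % 256
    return f"{head}{body}10={checksum:03d}{SOH}"

# Hypothetical limit order: buy 100 shares of XYZ at 10.50.
new_order_single = fix_message([
    (35, "D"),            # MsgType = NewOrderSingle
    (49, "BUYSIDEFIRM"),  # SenderCompID (assumed)
    (56, "DMAGATEWAY"),   # TargetCompID (assumed)
    (34, 1),              # MsgSeqNum
    (52, "20240101-12:00:00"),
    (11, "ORDER-0001"),   # ClOrdID
    (55, "XYZ"),          # Symbol
    (54, 1),              # Side: 1 = buy
    (38, 100),            # OrderQty
    (40, 2),              # OrdType: 2 = limit
    (44, 10.50),          # Price
])
print(new_order_single.replace(SOH, "|"))
```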
History:
The logical conclusion to this, enabling investors to work their own orders directly on the order book without recourse to market makers, was first facilitated by electronic communication networks such as Instinet. Recognising the threat to their own businesses, investment banks began acquiring these companies (e.g. the purchase of Instinet in 2007 by Nomura Holdings) and developing their own DMA technologies. Most major sell-side brokers now provide DMA services to their clients alongside their traditional 'worked' orders and algorithmic trading solutions giving access to many different trading strategies.
Benefits:
There are several reasons why a trader may choose to use DMA rather than alternative forms of order placement: DMA usually offers lower transaction costs, because only the technology is being paid for and not the usual order management and oversight responsibilities that come with an order passed to a broker for execution.
Orders are handled directly by the originator giving them more control over the final execution and the ability to exploit liquidity and price opportunities more quickly.
Information leakage is minimised because the trading is done anonymously using the DMA provider's identity as a cover. DMA systems are also generally shielded from other trading desks within the provider's organisation by a Chinese wall.
Direct market access allows a user to 'trade the spread' of a stock. This is facilitated by allowing orders to be entered directly onto the 'Level 2' order book, effectively removing the need to pass through a broker or dealer.
Ultra-low latency direct market access (ULLDMA):
Advanced trading platforms and market gateways are essential to the practice of high-frequency trading. Order flow can be routed directly to the line handler, where it undergoes a strict set of risk filters before hitting the execution venue(s). Typically, ULLDMA systems built specifically for HFT can currently handle high volumes of orders and incur no delay greater than 500 microseconds. One area in which low-latency systems can contribute to best execution is with functionality such as direct strategy access (DSA) and smart order routing.
Sponsored access:
Following the Flash Crash, it has become difficult for a trading participant to get a true form of direct market access in a sponsored access arrangement with a broker. This is due to changes to the net capital rule, Rule 15c3-1, that the US Securities and Exchange Commission adopted in July 2013, which amended the regulatory capital requirements for US-regulated broker-dealers and required sponsored access trades to go through the sponsoring broker's pre-trade risk layer.
Foreign exchange direct market access:
Foreign exchange direct market access (FX DMA) refers to electronic facilities that match foreign exchange orders from individual investors and buy-side or sell-side firms with each other. FX DMA infrastructures, provided by independent FX agency desks or exchanges, consist of front-end, API or FIX trading interfaces that disseminate order and available quantity data from all participants and enable buy-side traders, both institutions in the interbank market and individuals trading retail forex, to trade in a low-latency environment.
Foreign exchange direct market access:
Other defining criteria of FX DMA: Trades are matched solely on a price/time protocol. There are no re-quotes.
Platforms display the full range (0–9) of one-tenth-pip (fractional percentage-in-point) pricing, consistent with professional FX market quotation protocols, rather than half-pip pricing (0 or 5).
Anonymous platforms ensure neutral prices reflecting global FX market conditions, not a dealer's knowledge or familiarity with a client's trading methods, strategies, tactics or current position(s).
Enhanced control of trade execution is provided through live, executable price and quantity data, enabling a trader to see exactly the price at which they can trade the full amount of a transaction.
Foreign exchange direct market access:
Orders are facilitated by agency brokers. The broker is not a market maker or liquidity destination on the DMA platform it provides to clients. Market structures show variable spreads related to interbank market conditions, including volatility, pending or recently released news, as well as market maker trading flows. By definition, FX DMA market structures cannot show fixed spreads, which are indicative of dealer platforms.
Foreign exchange direct market access:
Fees are a fixed markup applied to the client's dealing price, a commission, or a combination of both. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Motive (law)**
Motive (law):
A motive is the cause that moves a person to commit a certain act. In criminal law, motive in itself is not an element of any given crime; however, the legal system typically allows motive to be proven in order to make the accused's reasons for committing a crime plausible, at least when those reasons would otherwise be obscure or hard to identify. A motive is not, however, required to reach a verdict. Motives are also used in other aspects of a specific case, for instance when police are initially investigating. The law technically distinguishes between motive and intent. "Intent" in criminal law is synonymous with mens rea, the mental state required to establish liability for a crime. "Motive" describes instead the reasons in the accused's background and station in life that are supposed to have induced the crime. Motives are often broken down into three categories: biological, social and personal.
Objections:
There are two objections to considering motive in punishment. The first is the volitional objection: the argument that a person cannot manage his or her own motives and therefore cannot be punished for them. The second is the neutrality objection, based on the idea that society holds contrasting political opinions and that a government's preference among them should therefore be limited.
Pertinence:
There are four different ways a defendant's motive can be pertinent to his or her criminal liability: a motive can be fully inculpatory, fully exculpatory, or only partially inculpatory or exculpatory. When acting with a specific motive turns otherwise lawful behavior into a crime, the motive is fully inculpatory. If a particular motive means the defendant is not held responsible for otherwise illegal activity, that motive is fully exculpatory. When a motive supplies an inadequate defense to a crime, the motive is partially exculpatory. When a motive bears on the kind of offense for which the defendant is held responsible, the motive is partially inculpatory. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fusarubin**
Fusarubin:
Fusarubin is a naphthoquinone antibiotic produced by the fungus Fusarium solani. Fusarubin has the molecular formula C15H14O7. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alpha Cygni variable**
Alpha Cygni variable:
Alpha Cygni variables are variable stars which exhibit non-radial pulsations, meaning that some portions of the stellar surface contract at the same time as other parts expand. They are supergiant stars of spectral types B or A. Variations in brightness on the order of 0.1 magnitudes are associated with the pulsations, which often seem irregular due to the beating of multiple pulsation periods. The pulsations typically have periods of several days to several weeks.
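As a simple illustration of such beating (added here for clarity; it is not part of the original text), two pulsation modes with nearby periods $P_1$ and $P_2$ combine to modulate the total amplitude on a much longer beat period:

$$P_{\mathrm{beat}} = \frac{1}{\left|\,1/P_1 - 1/P_2\,\right|} = \frac{P_1 P_2}{\left|P_1 - P_2\right|}.$$

For example, hypothetical modes with $P_1 = 10$ days and $P_2 = 12$ days would produce a 60-day beat cycle, so the brightness variations can look irregular on the timescale of individual pulsation periods.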
Alpha Cygni variable:
The prototype of these stars, Deneb (α Cygni), exhibits fluctuations in brightness between magnitudes +1.21 and +1.29. Small-amplitude rapid variations have been known in many early-type supergiant stars, but they were not formally grouped into a class until the 4th edition of the General Catalogue of Variable Stars was published in 1985. It used the acronym ACYG for Alpha Cygni variable stars. Many luminous blue variables (LBVs) show Alpha Cygni-type variability during their quiescent (hot) phases, but the LBV classification is generally used in these cases.
Alpha Cygni variable:
A large number (32) were discovered by Christoffel Waelkens and colleagues analysing Hipparcos data in a 1998 study.
Pulsations:
The pulsations of Alpha Cygni variable stars are not fully understood. They are not confined to a narrow range of temperatures and luminosities in the way that most pulsating stars are. Instead, most luminous A and B supergiants, and possibly also O and F stars, show some type of unpredictable small-scale pulsations. Non-adiabatic strange-mode radial pulsations are predicted, but only for the most luminous supergiants. Pulsations have also been modelled for less luminous supergiants by assuming they are low-mass post-red-supergiant stars, but most Alpha Cygni variables do not appear to have passed through the red supergiant stage. The pulsations are likely induced by the kappa mechanism, caused by iron opacity variations, with strange modes producing the observed short periods for both radial and non-radial pulsations. Non-adiabatic g-modes may produce longer-period variations, but these have not been observed in Alpha Cygni variables.
Sources:
Samus N.N., Durlevich O.V., et al. Combined General Catalog of Variable Stars (GCVS4.2, 2004 Ed.) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**International Society for Developmental Psychobiology**
International Society for Developmental Psychobiology:
The International Society for Developmental Psychobiology (ISDP) promotes research on the behavioral development of all species, including humans. It is an international non-profit organization. Its official scientific journal is Developmental Psychobiology, published by John Wiley & Sons. It conducts annual meetings during which research on developmental psychobiology is presented, and abstracts are published in Developmental Psychobiology. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Natural-language understanding**
Natural-language understanding:
Natural-language understanding (NLU) or natural-language interpretation (NLI) is a subtopic of natural-language processing in artificial intelligence that deals with machine reading comprehension. Natural-language understanding is considered an AI-hard problem. There is considerable commercial interest in the field because of its application to automated reasoning, machine translation, question answering, news-gathering, text categorization, voice-activation, archiving, and large-scale content analysis.
History:
The program STUDENT, written in 1964 by Daniel Bobrow for his PhD dissertation at MIT, is one of the earliest known attempts at natural-language understanding by a computer. Eight years after John McCarthy coined the term artificial intelligence, Bobrow's dissertation (titled Natural Language Input for a Computer Problem Solving System) showed how a computer could understand simple natural language input to solve algebra word problems.
History:
A year later, in 1965, Joseph Weizenbaum at MIT wrote ELIZA, an interactive program that carried on a dialogue in English on any topic, the most popular being psychotherapy. ELIZA worked by simple parsing and substitution of key words into canned phrases and Weizenbaum sidestepped the problem of giving the program a database of real-world knowledge or a rich lexicon. Yet ELIZA gained surprising popularity as a toy project and can be seen as a very early precursor to current commercial systems such as those used by Ask.com. In 1969, Roger Schank at Stanford University introduced the conceptual dependency theory for natural-language understanding. This model, partially influenced by the work of Sydney Lamb, was extensively used by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner.
History:
In 1970, William A. Woods introduced the augmented transition network (ATN) to represent natural language input. Instead of phrase structure rules ATNs used an equivalent set of finite state automata that were called recursively. ATNs and their more general format called "generalized ATNs" continued to be used for a number of years.
History:
In 1971, Terry Winograd finished writing SHRDLU for his PhD thesis at MIT. SHRDLU could understand simple English sentences in a restricted world of children's blocks to direct a robotic arm to move items. The successful demonstration of SHRDLU provided significant momentum for continued research in the field. Winograd continued to be a major influence in the field with the publication of his book Language as a Cognitive Process. At Stanford, Winograd would later advise Larry Page, who co-founded Google.
History:
In the 1970s and 1980s, the natural language processing group at SRI International continued research and development in the field. A number of commercial efforts based on the research were undertaken, e.g., in 1982 Gary Hendrix formed Symantec Corporation originally as a company for developing a natural language interface for database queries on personal computers. However, with the advent of mouse-driven graphical user interfaces, Symantec changed direction. A number of other commercial efforts were started around the same time, e.g., Larry R. Harris at the Artificial Intelligence Corporation and Roger Schank and his students at Cognitive Systems Corp. In 1983, Michael Dyer developed the BORIS system at Yale which bore similarities to the work of Roger Schank and W. G. Lehnert. The third millennium saw the introduction of systems using machine learning for text classification, such as IBM Watson. However, experts debate how much "understanding" such systems demonstrate: e.g., according to John Searle, Watson did not even understand the questions. John Ball, cognitive scientist and inventor of Patom Theory, supports this assessment. Natural language processing has made inroads for applications to support human productivity in service and ecommerce, but this has largely been made possible by narrowing the scope of the application. There are thousands of ways to request something in a human language that still defies conventional natural language processing. "To have a meaningful conversation with machines is only possible when we match every word to the correct meaning based on the meanings of the other words in the sentence – just like a 3-year-old does without guesswork."
Scope and context:
The umbrella term "natural-language understanding" can be applied to a diverse set of computer applications, ranging from small, relatively simple tasks such as short commands issued to robots, to highly complex endeavors such as the full comprehension of newspaper articles or poetry passages. Many real-world applications fall between the two extremes, for instance text classification for the automatic analysis of emails and their routing to a suitable department in a corporation does not require an in-depth understanding of the text, but needs to deal with a much larger vocabulary and more diverse syntax than the management of simple queries to database tables with fixed schemata.
Scope and context:
Throughout the years various attempts at processing natural language or English-like sentences presented to computers have taken place at varying degrees of complexity. Some attempts have not resulted in systems with deep understanding, but have helped overall system usability. For example, Wayne Ratliff originally developed the Vulcan program with an English-like syntax to mimic the English-speaking computer in Star Trek. Vulcan later became the dBase system whose easy-to-use syntax effectively launched the personal computer database industry. Systems with an easy-to-use or English-like syntax are, however, quite distinct from systems that use a rich lexicon and include an internal representation (often as first-order logic) of the semantics of natural language sentences.
Scope and context:
Hence the breadth and depth of "understanding" aimed at by a system determine both the complexity of the system (and the implied challenges) and the types of applications it can deal with. The "breadth" of a system is measured by the sizes of its vocabulary and grammar. The "depth" is measured by the degree to which its understanding approximates that of a fluent native speaker. At the narrowest and shallowest, English-like command interpreters require minimal complexity, but have a small range of applications. Narrow but deep systems explore and model mechanisms of understanding, but they still have limited application. Systems that attempt to understand the contents of a document such as a news release beyond simple keyword matching and to judge its suitability for a user are broader and require significant complexity, but they are still somewhat shallow. Systems that are both very broad and very deep are beyond the current state of the art.
Components and architecture:
Regardless of the approach used, most natural-language-understanding systems share some common components. The system needs a lexicon of the language and a parser and grammar rules to break sentences into an internal representation. The construction of a rich lexicon with a suitable ontology requires significant effort, e.g., the Wordnet lexicon required many person-years of effort. The system also needs a semantic theory to guide the comprehension. The interpretation capabilities of a language-understanding system depend on the semantic theory it uses. Competing semantic theories of language have specific trade-offs in their suitability as the basis of computer-automated semantic interpretation. These range from naive semantics or stochastic semantic analysis to the use of pragmatics to derive meaning from context. Semantic parsers convert natural-language texts into formal meaning representations. Advanced applications of natural-language understanding also attempt to incorporate logical inference within their framework. This is generally achieved by mapping the derived meaning into a set of assertions in predicate logic, then using logical deduction to arrive at conclusions. Therefore, systems based on functional languages such as Lisp need to include a subsystem to represent logical assertions, while logic-oriented systems such as those using the language Prolog generally rely on an extension of the built-in logical representation framework. The management of context in natural-language understanding can present special challenges. A large variety of examples and counter examples have resulted in multiple approaches to the formal modeling of context, each with specific strengths and weaknesses. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
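To illustrate the inference step described above, here is a toy Python sketch, written for this text rather than taken from any particular NLU system. It assumes a sentence such as "Socrates is a man" has already been mapped to the assertion man(socrates), and then applies a single hand-written rule by forward chaining to derive a conclusion.

```python
# Toy illustration: predicate-logic assertions plus forward-chaining deduction.
# The predicates, constants and the single rule are invented for this example.

facts = {("man", "socrates")}                 # meaning derived from a parsed sentence
rules = [(("man", "X"), ("mortal", "X"))]     # rule: if man(X) then mortal(X)

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new assertions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (premise_pred, _), (conclusion_pred, _) in rules:
            for fact_pred, fact_arg in list(derived):
                if fact_pred == premise_pred:          # premise matches a known fact
                    conclusion = (conclusion_pred, fact_arg)
                    if conclusion not in derived:
                        derived.add(conclusion)
                        changed = True
    return derived

print(forward_chain(facts, rules))
# {('man', 'socrates'), ('mortal', 'socrates')}
```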
**Künneth theorem**
Künneth theorem:
In mathematics, especially in homological algebra and algebraic topology, a Künneth theorem, also called a Künneth formula, is a statement relating the homology of two objects to the homology of their product. The classical statement of the Künneth theorem relates the singular homology of two topological spaces X and Y and their product space X×Y . In the simplest possible case the relationship is that of a tensor product, but for applications it is very often necessary to apply certain tools of homological algebra to express the answer.
Künneth theorem:
A Künneth theorem or Künneth formula is true in many different homology and cohomology theories, and the name has become generic. These many results are named for the German mathematician Hermann Künneth.
Singular homology with coefficients in a field:
Let X and Y be two topological spaces. In general one uses singular homology; but if X and Y happen to be CW complexes, then this can be replaced by cellular homology, because that is isomorphic to singular homology. The simplest case is when the coefficient ring for homology is a field F. In this situation, the Künneth theorem (for singular homology) states that for any integer k,

$$\bigoplus_{i+j=k} H_i(X; F) \otimes H_j(Y; F) \cong H_k(X \times Y; F).$$

Furthermore, the isomorphism is a natural isomorphism. The map from the sum to the homology group of the product is called the cross product. More precisely, there is a cross product operation by which an i-cycle on X and a j-cycle on Y can be combined to create an (i+j)-cycle on X × Y, so that there is an explicit linear mapping defined from the direct sum to $H_k(X \times Y)$. A consequence of this result is that the Betti numbers, the dimensions of the homology with $\mathbb{Q}$ coefficients, of X × Y can be determined from those of X and Y. If $p_Z(t)$ is the generating function of the sequence of Betti numbers $b_k(Z)$ of a space Z, then

$$p_{X \times Y}(t) = p_X(t)\, p_Y(t).$$
Singular homology with coefficients in a field:
Here when there are finitely many Betti numbers of X and Y, each of which is a natural number rather than ∞ , this reads as an identity on Poincaré polynomials. In the general case these are formal power series with possibly infinite coefficients, and have to be interpreted accordingly. Furthermore, the above statement holds not only for the Betti numbers but also for the generating functions of the dimensions of the homology over any field. (If the integer homology is not torsion-free, then these numbers may differ from the standard Betti numbers.)
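As a quick worked example of the field-coefficient statement (added here for illustration; it is not in the original text), take $X = Y = S^1$, the circle, whose Poincaré polynomial is $p_{S^1}(t) = 1 + t$. The Künneth formula then gives the Betti numbers of the torus $T^2 = S^1 \times S^1$:

$$p_{T^2}(t) = p_{S^1}(t)\, p_{S^1}(t) = (1+t)^2 = 1 + 2t + t^2,$$

so $b_0 = 1$, $b_1 = 2$ and $b_2 = 1$; that is, $H_0(T^2; F) \cong F$, $H_1(T^2; F) \cong F^2$ and $H_2(T^2; F) \cong F$ for any field F.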
Singular homology with coefficients in a principal ideal domain:
The above formula is simple because vector spaces over a field have very restricted behavior. As the coefficient ring becomes more general, the relationship becomes more complicated. The next simplest case is the case when the coefficient ring is a principal ideal domain. This case is particularly important because the integers Z are a PID.
In this case the equation above is no longer always true. A correction factor appears to account for the possibility of torsion phenomena. This correction factor is expressed in terms of the Tor functor, the first derived functor of the tensor product.
When R is a PID, the correct statement of the Künneth theorem is that for any topological spaces X and Y there are natural short exact sequences

$$0 \to \bigoplus_{i+j=k} H_i(X; R) \otimes_R H_j(Y; R) \to H_k(X \times Y; R) \to \bigoplus_{i+j=k-1} \operatorname{Tor}_1^R\bigl(H_i(X; R), H_j(Y; R)\bigr) \to 0.$$
Furthermore, these sequences split, but not canonically.
Singular homology with coefficients in a principal ideal domain:
Example. The short exact sequences just described can easily be used to compute the homology groups with integer coefficients of the product $\mathbb{RP}^2 \times \mathbb{RP}^2$ of two real projective planes, in other words, $H_k(\mathbb{RP}^2 \times \mathbb{RP}^2; \mathbb{Z})$. These spaces are CW complexes. Denoting the homology group $H_i(\mathbb{RP}^2; \mathbb{Z})$ by $h_i$ for brevity's sake, one knows from a simple calculation with cellular homology that

$$h_0 \cong \mathbb{Z}, \qquad h_1 \cong \mathbb{Z}/2\mathbb{Z}, \qquad h_i = 0 \ \text{for all other values of } i.$$

The only non-zero Tor group (torsion product) which can be formed from these values of $h_i$ is

$$\operatorname{Tor}_1^{\mathbb{Z}}(h_1, h_1) \cong \operatorname{Tor}_1^{\mathbb{Z}}(\mathbb{Z}/2\mathbb{Z}, \mathbb{Z}/2\mathbb{Z}) \cong \mathbb{Z}/2\mathbb{Z}.$$

Therefore, the Künneth short exact sequence reduces in every degree to an isomorphism, because there is a zero group in each case on either the left or the right side in the sequence. The result is

$$H_0(\mathbb{RP}^2 \times \mathbb{RP}^2; \mathbb{Z}) \cong h_0 \otimes h_0 \cong \mathbb{Z}$$
$$H_1(\mathbb{RP}^2 \times \mathbb{RP}^2; \mathbb{Z}) \cong h_0 \otimes h_1 \oplus h_1 \otimes h_0 \cong \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$$
$$H_2(\mathbb{RP}^2 \times \mathbb{RP}^2; \mathbb{Z}) \cong h_1 \otimes h_1 \cong \mathbb{Z}/2\mathbb{Z}$$
$$H_3(\mathbb{RP}^2 \times \mathbb{RP}^2; \mathbb{Z}) \cong \operatorname{Tor}_1^{\mathbb{Z}}(h_1, h_1) \cong \mathbb{Z}/2\mathbb{Z}$$

and all the other homology groups are zero.
The Künneth spectral sequence:
For a general commutative ring R, the homology of X and Y is related to the homology of their product by a Künneth spectral sequence

$$E^2_{pq} = \bigoplus_{q_1 + q_2 = q} \operatorname{Tor}^R_p\bigl(H_{q_1}(X; R), H_{q_2}(Y; R)\bigr) \Rightarrow H_{p+q}(X \times Y; R).$$
In the cases described above, this spectral sequence collapses to give an isomorphism or a short exact sequence.
Relation with homological algebra, and idea of proof:
The chain complex of the space X × Y is related to the chain complexes of X and Y by a natural quasi-isomorphism

$$C_*(X \times Y) \simeq C_*(X) \otimes C_*(Y).$$
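For reference (added for clarity; not part of the original text), the tensor product of chain complexes appearing here is the complex whose degree-n piece and differential are

$$\bigl(C_*(X) \otimes C_*(Y)\bigr)_n = \bigoplus_{i+j=n} C_i(X) \otimes C_j(Y), \qquad d(x \otimes y) = dx \otimes y + (-1)^{i}\, x \otimes dy \quad \text{for } x \in C_i(X),$$

and it is the homology of this complex that the Künneth formula computes.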
Relation with homological algebra, and idea of proof:
For singular chains this is the theorem of Eilenberg and Zilber. For cellular chains on CW complexes, it is a straightforward isomorphism. Then the homology of the tensor product on the right is given by the spectral Künneth formula of homological algebra. The freeness of the chain modules means that in this geometric case it is not necessary to use any hyperhomology or total derived tensor product.
Relation with homological algebra, and idea of proof:
There are analogues of the above statements for singular cohomology and sheaf cohomology. For sheaf cohomology on an algebraic variety, Alexander Grothendieck found six spectral sequences relating the possible hyperhomology groups of two chain complexes of sheaves and the hyperhomology groups of their tensor product.
Künneth theorems in generalized homology and cohomology theories:
There are many generalized (or "extraordinary") homology and cohomology theories for topological spaces. K-theory and cobordism are the best-known. Unlike ordinary homology and cohomology, they typically cannot be defined using chain complexes. Thus Künneth theorems cannot be obtained by the above methods of homological algebra. Nevertheless, Künneth theorems in just the same form have been proved in very many cases by various other methods. The first were Michael Atiyah's Künneth theorem for complex K-theory and Pierre Conner and Edwin E. Floyd's result in cobordism. A general method of proof emerged, based upon a homotopical theory of modules over highly structured ring spectra. The homotopy category of such modules closely resembles the derived category in homological algebra. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Interstitial collagenase**
Interstitial collagenase:
Interstitial collagenase, also known as fibroblast collagenase, and matrix metalloproteinase-1 (MMP-1) is an enzyme that in humans is encoded by the MMP1 gene. The gene is part of a cluster of MMP genes which localize to chromosome 11q22.3. MMP-1 was the first vertebrate collagenase both purified to homogeneity as a protein, and cloned as a cDNA. MMP-1 has an estimated molecular mass of 54 kDa.
Structure:
MMP-1 has an archetypal structure consisting of a pre-domain, a pro-domain, a catalytic domain, a linker region and a hemopexin-like domain. The primary structure of MMP-1 was first published by Goldberg et al. Two main nomenclatures for the primary structure are currently in use: the original one, in which numbering starts from the first amino acid of the signalling peptide, and a second one, in which numbering starts from the first amino acid of the prodomain (proenzyme nomenclature).
Structure:
Catalytic domain The catalytic domains of MMPs share very similar characteristics, having the general shape of an oblate ellipsoid with a diameter of ~40 Å. Despite the similarity of the catalytic domains of MMPs, this entry focuses only on the structural features of the MMP-1 catalytic domain.
Structure:
Overall structural characteristics The catalytic domain of MMP-1 is composed of five highly twisted β-strands (sI–sV), three α-helices (hA–hC) and a total of eight loops, enclosing a total of five metal ions: three Ca2+ and two Zn2+, one of which has a catalytic role. The catalytic domain (CAT) of MMP-1 starts with F100 (non-truncated CAT) as the first amino acid of the N-terminal loop of the CAT domain. The first published X-ray structure of the CAT domain was representative of the truncated form of this domain, in which the first 7 amino acids are not present. After the initial loop, the sequence continues to the first and longest β-strand (sI). A second loop precedes the large "amphipathic α-helix" (hA) that spans the protein longitudinally. The β-strands sII and sIII follow, separated by their respective loops, loop 4 being commonly designated the "short loop" bridging sII to sIII. Following the sIII strand the sequence meets the "S-shaped double loop", which is of primary importance for the peptide structure and catalytic activity (see further), as it extends to the cleft-side "bulge", continuing to the only antiparallel β-strand, sIV, which is of prime importance for binding peptidic substrates or inhibitors by forming main-chain hydrogen bonds. Following sIV, the loop Gln186–Gly192 and β-strand sV are responsible for contributing many ligands to the several metal ions present in the protein (read further). A large open loop follows sV, which has proven importance in substrate specificity within the MMP family. A specific region, (183)RWTNNFREY(191), has been identified as a critical segment of matrix metalloproteinase 1 for the expression of collagenolytic activity. On the C-terminal part of the CAT domain, the hB α-helix, known as the "active-site helix", encompasses part of the "zinc-binding consensus sequence" HEXXHXXGXXH that is characteristic of the Metzincin superfamily. The α-helix hB finishes abruptly at Gly225, where the last loop of the domain starts. This last loop contains the "specificity loop", which is the shortest in the MMP family. The catalytic domain ends at Gly261 with α-helix hC.
Function:
MMPs are involved in the breakdown of extracellular matrix in normal physiological processes, such as embryonic development, reproduction, and tissue remodeling, as well as in disease processes, such as arthritis and metastasis. Specifically, MMP-1 breaks down the interstitial collagens, types I, II, and III.
Regulation:
Mechanical force may increase the expression of MMP1 in human periodontal ligament cells.
Interactions:
MMP1 has been shown to interact with CD49b. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Brood parasite**
Brood parasite:
Brood parasitism is a subclass of parasitism and a behavioural pattern of certain animals, known as brood parasites, that rely on others to raise their young. The strategy appears among birds, insects and fish. The brood parasite manipulates a host, either of the same or of another species, to raise its young as if it were its own, usually using egg mimicry, with eggs that resemble the host's.
Brood parasite:
The evolutionary strategy relieves the parasitic parents from the investment of rearing young. This benefit comes at the cost of provoking an evolutionary arms race between parasite and host as they coevolve: many hosts have developed strong defenses against brood parasitism, such as recognizing and ejecting parasitic eggs, or abandoning parasitized nests and starting over. It is less obvious why most hosts do care for parasite nestlings, given that for example cuckoo chicks differ markedly from host chicks in size and appearance. One explanation, the mafia hypothesis, proposes that parasitic adults retaliate by destroying host nests where rejection has occurred; there is experimental evidence to support this. Intraspecific brood parasitism also occurs, as in many duck species. Here there is no visible difference between host and parasite eggs, which may be why the parasite eggs are so readily accepted. In eider ducks, the first and second eggs in a nest are especially subject to predation, perhaps explaining why they are often laid in another eider nest.
Evolutionary strategy:
Brood parasitism is an evolutionary strategy that relieves the parasitic parents from the investment of rearing young or building nests for the young by getting the host to raise their young for them. This enables the parasitic parents to spend more time on other activities such as foraging and producing further offspring.
Evolutionary strategy:
Adaptations for parasitism Among specialist avian brood parasites, mimetic eggs are a nearly universal adaptation. The generalist brown-headed cowbird may have evolved an egg coloration mimicking a number of their hosts. Size may also be important for the incubation and survival of parasitic species; it may be beneficial for parasitic eggs to be similar in size to the eggs of the host species. The eggshells of brood parasites are often thicker than those of the hosts. For example, two studies of cuckoos parasitizing great reed warblers reported thickness ratios of 1.02 : 0.87 and 1.04 : 0.81. The function of this thick eggshell is debated. One hypothesis, the puncture resistance hypothesis, states that the thicker eggshells serve to prevent hosts from breaking the eggshell, thus killing the embryo inside. This is supported by a study in which marsh warblers damaged their own eggs more often when attempting to break cuckoo eggs, but incurred less damage when trying to puncture great reed warbler eggs put in the nest by researchers. Another hypothesis is the laying damage hypothesis, which postulates that the eggshells are adapted to damage the eggs of the host when the former is being laid, and prevent the parasite's eggs from being damaged when the host lays its eggs. In support of this hypothesis, eggs of the shiny cowbird parasitizing the house wren and the chalk-browed mockingbird and the brown-headed cowbird parasitizing the house wren and the red-winged blackbird damaged the host's eggs when dropped, and sustained little damage when host eggs were dropped on them. Most avian brood parasites have very short egg incubation periods and rapid nestling growth. In many brood parasites, such as cuckoos and honeyguides, this short egg incubation period is due to internal incubation periods up to 24 hours longer in cuckoos than hosts. Some non-parasitic cuckoos also have longer internal incubation periods, suggesting that this longer internal incubation period was not an adaptation following brood parasitism, but predisposed birds to become brood parasites. This is likely facilitated by a heavier yolk in the egg providing more nutrients. Being larger than the hosts on hatching is a further adaptation to being a brood parasite.
Evolutionary strategy:
Evolutionary arms race Bird parasites mitigate the risk of egg loss by distributing eggs amongst a number of different hosts. As such behaviours damage the host, they often result in an evolutionary arms race between parasite and host as they coevolve.
Evolutionary strategy:
Some host species have strong rejection defenses, forcing the parasitic species to evolve excellent mimicry. In other species, hosts do not defend against parasites, and the parasitic mimicry is poor. Intraspecific brood parasitism among coots significantly increases the reproductive fitness of the parasite, but only about half of the eggs laid parasitically in other coot nests survive. This implies that coots have somewhat effective anti-parasitism strategies. Similarly, the parasitic offspring of bearded reedlings, compared to offspring in non-parasitic nests, tend to develop much more slowly and often do not reach full maturity. Given that the cost to the host of egg removal by the parasite is unrecoverable, the best strategy for hosts is to avoid parasitism in the first place. This can take several forms, including selecting nest sites which are difficult to parasitize, starting incubation early so they are already sitting on the nests when parasites visit them early in the morning, and aggressively defending their territory. Once a parasitic egg has arrived in a host's nest, the next most optimal defense is to eject the parasitic egg. This requires the host to distinguish which eggs are not theirs, by identifying pattern differences or changes in the number of eggs. Eggs may be ejected by grasping, if the host has a large enough beak, or by puncturing. When the parasitic eggs are mimetic, hosts may mistake one of their own eggs for a parasite's. A host might also damage their own eggs while trying to eject a parasite's egg. Among hosts that do not eject parasitic eggs, some abandon parasitized nests and start over again. However, at high enough parasitism frequencies, this becomes maladaptive as the new nest will most likely also be parasitized. Some host species modify their nests to exclude the parasitic egg, either by weaving over the egg or by rebuilding a new nest over the existing one. For instance, American coots may kick the parasites' eggs out, or build a new nest beside the brood nests where the parasites' chicks starve to death. In the western Bonelli's warbler, a small host, small dummy parasitic eggs were always ejected, whilst with large dummy parasitic eggs, nest desertion was more frequent.
Evolutionary strategy:
Mafia hypothesis There is a question as to why the majority of the hosts of brood parasites care for the nestlings of their parasites. Not only do these brood parasites usually differ significantly in size and appearance, but it is also highly probable that they reduce the reproductive success of their hosts. The "mafia hypothesis" proposes that when a brood parasite discovers that its egg has been rejected, it destroys the host's nest and injures or kills the nestlings. The threat of such a response may encourage compliant behavior from the host. Mafia-like behavior occurs in the brown-headed cowbird of North America, and the great spotted cuckoo of Europe. The great spotted cuckoo lays most of its eggs in the nests of the European magpie. It repeatedly visits nests it has parasitised, a precondition for the mafia hypothesis. Experimentally, nests from which the parasite's egg has been removed are destroyed by the cuckoo, supporting the hypothesis. An alternative explanation is that the destruction encourages the magpie host to build a new nest, giving the cuckoo another opportunity for parasitism. Similarly, the brown-headed cowbird parasitises the prothonotary warbler. Experimentally, 56% of egg-ejected nests were predated upon, against 6% of non-ejected nests. 85% of parasitized nests rebuilt by hosts were destroyed. Hosts that ejected parasite eggs produced 60% fewer young than those that accepted the cowbird eggs.
Evolutionary strategy:
Similarity hypothesis Common cuckoo females have been proposed to select hosts with similar egg characteristics to her own. The hypothesis suggests that the female monitors a population of potential hosts and chooses nests from within this group. Study of museum nest collections shows a similarity between cuckoo eggs and typical eggs of the host species. A low percentage of parasitized nests were shown to contain cuckoo eggs not corresponding to the specific host egg morph. In these mismatched nests a high percent of the cuckoo eggs were shown to correlate to the egg morph of another host species with similar nesting sites. This has been pointed to as evidence for selection by similarity. The hypothesis has been criticised for providing no mechanism for choosing nests, nor identifying cues by which they might be recognised.
Evolutionary strategy:
Hosts raise offspring Sometimes hosts are completely unaware that they are caring for a bird that is not their own. This most commonly occurs because the host cannot differentiate the parasitic eggs from their own. It may also occur when hosts temporarily leave the nest after laying the eggs. The parasites lay their own eggs into these nests so their nestlings share the food provided by the host. It may occur in other situations. For example, female eiders would prefer to lay eggs in the nests with one or two existing eggs of others because the first egg is the most vulnerable to predators. The presence of others' eggs reduces the probability that a predator will attack her egg when a female leaves the nest after laying the first egg. Sometimes, the parasitic offspring kills the host nest-mates during competition for resources. For example, parasitic cowbird chicks kill the host nest-mates if food intake for each of them is low, but not if the food intake is adequate.
Taxonomic range:
Birds Intraspecific In many socially monogamous bird species there are extra-pair matings, resulting in males outside the pair bond siring offspring; such matings allow males to escape the parental investment of raising their offspring. In duck species such as the goldeneye, this form of cuckoldry is taken a step further, as females often lay their eggs in the nests of other individuals. Intraspecific brood parasitism has been recorded in 234 bird species, including 74 Anseriformes, 66 Passeriformes, 32 Galliformes, 19 Charadriiformes, 8 Gruiformes, 6 Podicipediformes, and small numbers of species in other orders.
Taxonomic range:
Interspecific Interspecific brood-parasites include the indigobirds, whydahs, and honeyguides in Africa, cowbirds, Old World cuckoos, black-headed ducks, and some New World cuckoos in the Americas. Seven independent origins of obligate interspecific brood parasitism in birds have been proposed. While there is still some controversy over when and how many origins of interspecific brood parasitism have occurred, recent phylogenetic analyses suggest two origins in Passeriformes (once in New World cowbirds: Icteridae, and once in African Finches: Viduidae); three origins in Old World and New World cuckoos (once in Cuculinae, Phaenicophaeinae, and in Neomorphinae-Crotophaginae); a single origin in Old World honeyguides (Indicatoridae); and in a single species of waterfowl, the black-headed duck (Heteronetta atricapilla). Most avian brood parasites are specialists which parasitize only a single host species or a small group of closely related host species, but four out of the five parasitic cowbirds (all except the screaming cowbird) are generalists which parasitize a wide variety of hosts; the brown-headed cowbird has 221 known hosts. They usually lay only one egg per nest, although in some cases, particularly the cowbirds, several females may use the same host nest. The common cuckoo presents an interesting case in which the species as a whole parasitizes a wide variety of hosts, including the reed warbler and dunnock, but individual females specialize in a single species. Genes regulating egg coloration appear to be passed down exclusively along the maternal line, allowing females to lay mimetic eggs in the nest of the species they specialize in. Females generally parasitize nests of the species which raised them. Male common cuckoos fertilize females of all lines, which maintains sufficient gene flow among the different maternal lines to prevent speciation. The mechanisms of host selection by female cuckoos are somewhat unclear, though several hypotheses have been suggested in an attempt to explain the choice. These include genetic inheritance of host preference, host imprinting on young birds, returning to place of birth and subsequently choosing a host randomly ("natal philopatry"), choice based on preferred nest site (nest-site hypothesis), and choice based on preferred habitat (habitat-selection hypothesis). Of these hypotheses the nest-site selection and habitat selection have been most supported by experimental analysis.
Taxonomic range:
Fish Mouthbrooding parasites A mochokid catfish of Lake Tanganyika, Synodontis multipunctatus, is a brood parasite of several mouthbrooding cichlid fish. The catfish eggs are incubated in the host's mouth, and—in the manner of cuckoos—hatch before the host's own eggs. The young catfish eat the host fry inside the host's mouth, effectively taking up virtually the whole of the host's parental investment.
Taxonomic range:
Nest parasites A cyprinid minnow, Pungtungia herzi, is a brood parasite of the percichthyid freshwater perch Siniperca kawamebari, which lives in the south of the Japanese islands of Honshu, Kyushu and Shikoku, and in South Korea. Host males guard territories against intruders during the breeding season, creating a patch of reeds as a spawning site or "nest". Females (one or more per site) visit the site to lay eggs, which the male then defends. The parasite's eggs are smaller and stickier than the host's. In one study area, 65.5% of host sites were parasitised.
Taxonomic range:
Insects Kleptoparasites There are many different types of cuckoo bees, all of which lay their eggs in the nest cells of other bees, but they are normally described as kleptoparasites (Greek: klepto-, to steal), rather than as brood parasites, because the immature stages are almost never fed directly by the adult hosts. Instead, they simply take food gathered by their hosts. Examples of cuckoo bees are Coelioxys rufitarsis, Melecta separata, Nomada and Epeoloides. Kleptoparasitism in insects is not restricted to bees; several lineages of wasp including most of the Chrysididae, the cuckoo wasps, are kleptoparasites. The cuckoo wasps lay their eggs in the nests of other wasps, such as those of the potters and mud daubers.
Taxonomic range:
True brood parasites True brood parasitism is rare among insects. Cuckoo bumblebees (the subgenus Psithyrus) are among the few insects which, like cuckoos and cowbirds, are fed by adult hosts. Their queens kill and replace the existing queen of a colony of the host species, and then use the host workers to feed their brood.
Taxonomic range:
One of only four true brood-parasitic wasps is Polistes semenowi. This paper wasp has lost the ability to build its own nest, and relies on its host, P. dominula, to raise its brood. The adult host feeds the parasite larvae directly, unlike typical kleptoparasitic insects. Such insect social parasites are often closely related to their hosts, an observation known as Emery's rule. Host insects are sometimes tricked into bringing offspring of another species into their own nests, as with the parasitic butterfly, Phengaris rebeli, and the host ant Myrmica schencki. The butterfly larvae release chemicals that confuse the host ant into believing that the P. rebeli larvae are actually ant larvae. Thus, the M. schencki ants bring back the P. rebeli larvae to their nests and feed them, much like the chicks of cuckoos and other brood-parasitic birds. This is also the case for the parasitic butterfly, Niphanda fusca, and its host ant Camponotus japonicus. The butterfly releases cuticular hydrocarbons that mimic those of the host male ant. The ant then brings the third instar larvae back into its own nest and raises them until pupation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Methamphetamine**
Methamphetamine:
Methamphetamine (contracted from N-methylamphetamine) is a potent central nervous system (CNS) stimulant that is mainly used as a recreational drug and less commonly as a second-line treatment for attention deficit hyperactivity disorder and obesity. Methamphetamine was discovered in 1893 and exists as two enantiomers: levo-methamphetamine and dextro-methamphetamine. Methamphetamine properly refers to a specific chemical substance, the racemic free base, which is an equal mixture of levomethamphetamine and dextromethamphetamine in their pure amine forms, but the hydrochloride salt, commonly called crystal meth, is widely used. Methamphetamine is rarely prescribed over concerns involving human neurotoxicity and potential for recreational use as an aphrodisiac and euphoriant, among other concerns, as well as the availability of safer substitute drugs with comparable treatment efficacy such as Adderall and Vyvanse. Dextromethamphetamine is a stronger CNS stimulant than levomethamphetamine.
Methamphetamine:
Both racemic methamphetamine and dextromethamphetamine are illicitly trafficked and sold owing to their potential for recreational use. The highest prevalence of illegal methamphetamine use occurs in parts of Asia and Oceania, and in the United States, where racemic methamphetamine and dextromethamphetamine are classified as schedule II controlled substances. Levomethamphetamine is available as an over-the-counter (OTC) drug for use as an inhaled nasal decongestant in the United States. Internationally, the production, distribution, sale, and possession of methamphetamine is restricted or banned in many countries, owing to its placement in schedule II of the United Nations Convention on Psychotropic Substances treaty. While dextromethamphetamine is a more potent drug, racemic methamphetamine is illicitly produced more often, owing to the relative ease of synthesis and regulatory limits of chemical precursor availability.
Methamphetamine:
In low to moderate doses, methamphetamine can elevate mood, increase alertness, concentration and energy in fatigued individuals, reduce appetite, and promote weight loss. At very high doses, it can induce psychosis, breakdown of skeletal muscle, seizures and bleeding in the brain. Chronic high-dose use can precipitate unpredictable and rapid mood swings, stimulant psychosis (e.g., paranoia, hallucinations, delirium, and delusions) and violent behavior. Recreationally, methamphetamine's ability to increase energy has been reported to lift mood and increase sexual desire to such an extent that users are able to engage in sexual activity continuously for several days while binging the drug. Methamphetamine is known to possess a high addiction liability (i.e., a high likelihood that long-term or high dose use will lead to compulsive drug use) and high dependence liability (i.e. a high likelihood that withdrawal symptoms will occur when methamphetamine use ceases). Withdrawal from methamphetamine after heavy use may lead to a post-acute-withdrawal syndrome, which can persist for months beyond the typical withdrawal period. Methamphetamine is neurotoxic to human midbrain dopaminergic neurons and, to a lesser extent, serotonergic neurons at high doses. Methamphetamine neurotoxicity causes adverse changes in brain structure and function, such as reductions in grey matter volume in several brain regions, as well as adverse changes in markers of metabolic integrity.
Methamphetamine:
Methamphetamine belongs to the substituted phenethylamine and substituted amphetamine chemical classes. It is related to the other dimethylphenethylamines as a positional isomer of these compounds, which share the common chemical formula C10H15N.
Uses:
Medical In the United States, methamphetamine hydrochloride, under the trade name Desoxyn, has been approved by the FDA for treating ADHD and obesity in both adults and children; however, the FDA also indicates that the limited therapeutic usefulness of methamphetamine should be weighed against the inherent risks associated with its use. To avoid toxicity and the risk of side effects, FDA guidelines recommend an initial methamphetamine dose of 5–10 mg/day for ADHD in adults and children over six years of age, which may be increased in weekly increments of 5 mg, up to 25 mg/day, until the optimum clinical response is found; the usual effective dose is around 20–25 mg/day. Methamphetamine is sometimes prescribed off label for narcolepsy and idiopathic hypersomnia. In the United States, methamphetamine's levorotatory form is available in some over-the-counter (OTC) nasal decongestant products. As methamphetamine is associated with a high potential for misuse, the drug is regulated under the Controlled Substances Act and is listed under Schedule II in the United States. Methamphetamine hydrochloride dispensed in the United States is required to include a boxed warning regarding its potential for recreational misuse and addiction liability. Desoxyn and Desoxyn Gradumet are both pharmaceutical forms of the drug. The latter is no longer produced; it was an extended-release form of the drug, flattening the curve of the drug's effect while extending its duration.
Uses:
Recreational Methamphetamine is often used recreationally for its effects as a potent euphoriant and stimulant, as well as for its aphrodisiac qualities. According to a National Geographic TV documentary on methamphetamine, an entire subculture known as party and play is based around sexual activity and methamphetamine use. Participants in this subculture, which consists almost entirely of homosexual male methamphetamine users, will typically meet up through internet dating sites and have sex. Because of its strong stimulant and aphrodisiac effects and inhibitory effect on ejaculation, with repeated use, these sexual encounters will sometimes occur continuously for several days on end. The crash following the use of methamphetamine in this manner is very often severe, with marked hypersomnia (excessive daytime sleepiness). The party and play subculture is prevalent in major US cities such as San Francisco and New York City.
Contraindications:
Methamphetamine is contraindicated in individuals with a history of substance use disorder, heart disease, or severe agitation or anxiety, or in individuals currently experiencing arteriosclerosis, glaucoma, hyperthyroidism, or severe hypertension. The FDA states that individuals who have experienced hypersensitivity reactions to other stimulants in the past or are currently taking monoamine oxidase inhibitors should not take methamphetamine. The FDA also advises individuals with bipolar disorder, depression, elevated blood pressure, liver or kidney problems, mania, psychosis, Raynaud's phenomenon, seizures, thyroid problems, tics, or Tourette syndrome to monitor their symptoms while taking methamphetamine. Owing to the potential for stunted growth, the FDA advises monitoring the height and weight of growing children and adolescents during treatment.
Adverse effects:
Physical The physical effects of methamphetamine can include loss of appetite, hyperactivity, dilated pupils, flushed skin, excessive sweating, increased movement, dry mouth and teeth grinding (leading to "meth mouth"), headache, irregular heartbeat (usually as accelerated heartbeat or slowed heartbeat), rapid breathing, high blood pressure, low blood pressure, high body temperature, diarrhea, constipation, blurred vision, dizziness, twitching, numbness, tremors, dry skin, acne, and pale appearance. Long-term meth users may have sores on their skin; these may be caused by scratching due to itchiness or the belief that insects are crawling under their skin, and the damage is compounded by poor diet and hygiene. Numerous deaths related to methamphetamine overdoses have been reported.
Adverse effects:
Meth mouth Methamphetamine users and addicts may lose their teeth abnormally quickly, regardless of the route of administration, from a condition informally known as meth mouth. The condition is generally most severe in users who inject the drug, rather than swallow, smoke, or inhale it. According to the American Dental Association, meth mouth "is probably caused by a combination of drug-induced psychological and physiological changes resulting in xerostomia (dry mouth), extended periods of poor oral hygiene, frequent consumption of high-calorie, carbonated beverages and bruxism (teeth grinding and clenching)". As dry mouth is also a common side effect of other stimulants, which are not known to contribute to severe tooth decay, many researchers suggest that methamphetamine-associated tooth decay is more due to users' other choices. They suggest the side effect has been exaggerated and stylized to create a stereotype of current users as a deterrence for new ones.
Adverse effects:
Sexually transmitted infection Methamphetamine use was found to be related to higher frequencies of unprotected sexual intercourse in both HIV-positive and unknown casual partners, an association more pronounced in HIV-positive participants. These findings suggest that methamphetamine use and engagement in unprotected anal intercourse are co-occurring risk behaviors, behaviors that potentially heighten the risk of HIV transmission among gay and bisexual men. Methamphetamine use allows users of both sexes to engage in prolonged sexual activity, which may cause genital sores and abrasions as well as priapism in men. Methamphetamine may also cause sores and abrasions in the mouth via bruxism, increasing the risk of sexually transmitted infection. Besides the sexual transmission of HIV, it may also be transmitted between users who share a common needle. The level of needle sharing among methamphetamine users is similar to that among other drug injection users.
Adverse effects:
Psychological The psychological effects of methamphetamine can include euphoria, dysphoria, changes in libido, alertness, apprehension and concentration, decreased sense of fatigue, insomnia or wakefulness, self-confidence, sociability, irritability, restlessness, grandiosity and repetitive and obsessive behaviors. Peculiar to methamphetamine and related stimulants is "punding", persistent non-goal-directed repetitive activity. Methamphetamine use also has a high association with anxiety, depression, amphetamine psychosis, suicide, and violent behaviors.
Adverse effects:
Neurotoxic and neuroimmunological Methamphetamine is directly neurotoxic to dopaminergic neurons in both lab animals and humans. Excitotoxicity, oxidative stress, metabolic compromise, UPS dysfunction, protein nitration, endoplasmic reticulum stress, p53 expression and other processes contribute to this neurotoxicity. In line with its dopaminergic neurotoxicity, methamphetamine use is associated with a higher risk of Parkinson's disease. In addition to its dopaminergic neurotoxicity, a review of evidence in humans indicated that high-dose methamphetamine use can also be neurotoxic to serotonergic neurons. It has been demonstrated that a high core temperature is correlated with an increase in the neurotoxic effects of methamphetamine. Withdrawal of methamphetamine in dependent persons may lead to post-acute withdrawal which persists months beyond the typical withdrawal period. Magnetic resonance imaging studies on human methamphetamine users have also found evidence of neurodegeneration, or adverse neuroplastic changes in brain structure and function. In particular, methamphetamine appears to cause hyperintensity and hypertrophy of white matter, marked shrinkage of hippocampi, and reduced gray matter in the cingulate cortex, limbic cortex, and paralimbic cortex in recreational methamphetamine users. Moreover, evidence suggests that adverse changes in the level of biomarkers of metabolic integrity and synthesis occur in recreational users, such as a reduction in N-acetylaspartate and creatine levels and elevated levels of choline and myoinositol. Methamphetamine has been shown to activate TAAR1 in human astrocytes and generate cAMP as a result. Activation of astrocyte-localized TAAR1 appears to function as a mechanism by which methamphetamine attenuates membrane-bound EAAT2 (SLC1A2) levels and function in these cells. Methamphetamine binds to and activates both sigma receptor subtypes, σ1 and σ2, with micromolar affinity. Sigma receptor activation may promote methamphetamine-induced neurotoxicity by facilitating hyperthermia, increasing dopamine synthesis and release, influencing microglial activation, and modulating apoptotic signaling cascades and the formation of reactive oxygen species.
Adverse effects:
Addictive Current models of addiction from chronic drug use involve alterations in gene expression in certain parts of the brain, particularly the nucleus accumbens. The most important transcription factors that produce these alterations are ΔFosB, cAMP response element binding protein (CREB), and nuclear factor kappa B (NFκB). ΔFosB plays a crucial role in the development of drug addictions, since its overexpression in D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for most of the behavioral and neural adaptations that arise from addiction. Once ΔFosB is sufficiently overexpressed, it induces an addictive state that becomes increasingly severe with further increases in ΔFosB expression. It has been implicated in addictions to alcohol, cannabinoids, cocaine, methylphenidate, nicotine, opioids, phencyclidine, propofol, and substituted amphetamines, among others. ΔJunD, a transcription factor, and G9a, a histone methyltransferase enzyme, both directly oppose the induction of ΔFosB in the nucleus accumbens (i.e., they oppose increases in its expression). Sufficiently overexpressing ΔJunD in the nucleus accumbens with viral vectors can completely block many of the neural and behavioral alterations seen in chronic drug use (i.e., the alterations mediated by ΔFosB). ΔFosB also plays an important role in regulating behavioral responses to natural rewards, such as palatable food, sex, and exercise. Since both natural rewards and addictive drugs induce expression of ΔFosB (i.e., they cause the brain to produce more of it), chronic acquisition of these rewards can result in a similar pathological state of addiction. ΔFosB is the most significant factor involved in both amphetamine addiction and amphetamine-induced sex addictions, which are compulsive sexual behaviors that result from excessive sexual activity and amphetamine use. These sex addictions (i.e., drug-induced compulsive sexual behaviors) are associated with a dopamine dysregulation syndrome which occurs in some patients taking dopaminergic drugs, such as amphetamine or methamphetamine.
Adverse effects:
Epigenetic factors Methamphetamine addiction is persistent for many individuals, with 61% of individuals treated for addiction relapsing within one year. About half of those with methamphetamine addiction continue with use over a ten-year period, while the other half reduce use starting at about one to four years after initial use. The frequent persistence of addiction suggests that long-lasting changes in gene expression may occur in particular regions of the brain, and may contribute importantly to the addiction phenotype. In 2014, a crucial role was found for epigenetic mechanisms in driving lasting changes in gene expression in the brain. A review in 2015 summarized a number of studies involving chronic methamphetamine use in rodents. Epigenetic alterations were observed in the brain reward pathways, including areas such as the ventral tegmental area, nucleus accumbens, and dorsal striatum, the hippocampus, and the prefrontal cortex. Chronic methamphetamine use caused gene-specific histone acetylations, deacetylations and methylations. Gene-specific DNA methylations in particular regions of the brain were also observed. The various epigenetic alterations caused downregulations or upregulations of specific genes important in addiction. For instance, chronic methamphetamine use caused methylation of the lysine in position 4 of histone 3 located at the promoters of the c-fos and the C-C chemokine receptor 2 (ccr2) genes, activating those genes in the nucleus accumbens (NAc). c-fos is well known to be important in addiction. The ccr2 gene is also important in addiction, since mutational inactivation of this gene impairs addiction. In methamphetamine-addicted rats, epigenetic regulation through reduced acetylation of histones, in brain striatal neurons, caused reduced transcription of glutamate receptors. Glutamate receptors play an important role in regulating the reinforcing effects of misused illicit drugs. Administration of methamphetamine to rodents causes DNA damage in their brain, particularly in the nucleus accumbens region. During repair of such DNA damage, persistent chromatin alterations may occur, such as methylation of DNA or the acetylation or methylation of histones at the sites of repair. These alterations can be epigenetic scars in the chromatin that contribute to the persistent epigenetic changes found in methamphetamine addiction.
Adverse effects:
Treatment and management A 2018 systematic review and network meta-analysis of 50 trials involving 12 different psychosocial interventions for amphetamine, methamphetamine, or cocaine addiction found that combination therapy with both contingency management and community reinforcement approach had the highest efficacy (i.e., abstinence rate) and acceptability (i.e., lowest dropout rate). Other treatment modalities examined in the analysis included monotherapy with contingency management or community reinforcement approach, cognitive behavioral therapy, 12-step programs, non-contingent reward-based therapies, psychodynamic therapy, and other combination therapies involving these. As of December 2019, there is no effective pharmacotherapy for methamphetamine addiction. A systematic review and meta-analysis from 2019 assessed the efficacy of 17 different pharmacotherapies used in randomized controlled trials (RCTs) for amphetamine and methamphetamine addiction; it found only low-strength evidence that methylphenidate might reduce amphetamine or methamphetamine self-administration. There was low- to moderate-strength evidence of no benefit for most of the other medications used in RCTs, which included antidepressants (bupropion, mirtazapine, sertraline), antipsychotics (aripiprazole), anticonvulsants (topiramate, baclofen, gabapentin), naltrexone, varenicline, citicoline, ondansetron, prometa, riluzole, atomoxetine, dextroamphetamine, and modafinil.
Adverse effects:
Dependence and withdrawal Tolerance is expected to develop with regular methamphetamine use and, when used recreationally, this tolerance develops rapidly. In dependent users, withdrawal symptoms are positively correlated with the level of drug tolerance. Depression from methamphetamine withdrawal lasts longer and is more severe than that of cocaine withdrawal. According to the current Cochrane review on drug dependence and withdrawal in recreational users of methamphetamine, "when chronic heavy users abruptly discontinue [methamphetamine] use, many report a time-limited withdrawal syndrome that occurs within 24 hours of their last dose". Withdrawal symptoms in chronic, high-dose users are frequent, occurring in up to 87.6% of cases, and persist for three to four weeks with a marked "crash" phase occurring during the first week. Methamphetamine withdrawal symptoms can include anxiety, drug craving, dysphoric mood, fatigue, increased appetite, increased or decreased movement, lack of motivation, sleeplessness or sleepiness, and vivid or lucid dreams. Methamphetamine that is present in a mother's bloodstream can pass through the placenta to a fetus and be secreted into breast milk. Infants born to methamphetamine-abusing mothers may experience a neonatal withdrawal syndrome, with symptoms involving abnormal sleep patterns, poor feeding, tremors, and hypertonia. This withdrawal syndrome is relatively mild and only requires medical intervention in approximately 4% of cases.
Adverse effects:
Neonatal Unlike with many other drugs, babies with prenatal exposure to methamphetamine do not show immediate signs of withdrawal. Instead, cognitive and behavioral problems start emerging when the children reach school age. A prospective cohort study of 330 children showed that at the age of 3, children with methamphetamine exposure showed increased emotional reactivity, as well as more signs of anxiety and depression; and at the age of 5, children showed higher rates of externalizing and attention deficit/hyperactivity disorders.
Overdose:
A methamphetamine overdose may result in a wide range of symptoms. A moderate overdose of methamphetamine may induce symptoms such as: abnormal heart rhythm, confusion, difficult and/or painful urination, high or low blood pressure, high body temperature, over-active and/or over-responsive reflexes, muscle aches, severe agitation, rapid breathing, tremor, urinary hesitancy, and an inability to pass urine. An extremely large overdose may produce symptoms such as adrenergic storm, methamphetamine psychosis, substantially reduced or no urine output, cardiogenic shock, bleeding in the brain, circulatory collapse, hyperpyrexia (i.e., dangerously high body temperature), pulmonary hypertension, kidney failure, rapid muscle breakdown, serotonin syndrome, and a form of stereotypy ("tweaking"). A methamphetamine overdose will likely also result in mild brain damage owing to dopaminergic and serotonergic neurotoxicity. Death from methamphetamine poisoning is typically preceded by convulsions and coma.
Overdose:
Psychosis Use of methamphetamine can result in a stimulant psychosis which may present with a variety of symptoms (e.g., paranoia, hallucinations, delirium, and delusions). A Cochrane Collaboration review on treatment for amphetamine, dextroamphetamine, and methamphetamine use-induced psychosis states that about 5–15% of users fail to recover completely. The same review asserts that, based upon at least one trial, antipsychotic medications effectively resolve the symptoms of acute amphetamine psychosis. Amphetamine psychosis may also develop occasionally as a treatment-emergent side effect.
Overdose:
Emergency treatment Acute methamphetamine intoxication is largely managed by treating the symptoms, and treatments may initially include administration of activated charcoal and sedation. There is not enough evidence on hemodialysis or peritoneal dialysis in cases of methamphetamine intoxication to determine their usefulness. Forced acid diuresis (e.g., with vitamin C) will increase methamphetamine excretion but is not recommended as it may increase the risk of aggravating acidosis, or cause seizures or rhabdomyolysis. Hypertension presents a risk for intracranial hemorrhage (i.e., bleeding in the brain) and, if severe, is typically treated with intravenous phentolamine or nitroprusside. Blood pressure often drops gradually following sufficient sedation with a benzodiazepine and providing a calming environment. Antipsychotics such as haloperidol are useful in treating agitation and psychosis from methamphetamine overdose. Beta blockers with lipophilic properties and CNS penetration such as metoprolol and labetalol may be useful for treating CNS and cardiovascular toxicity. The mixed alpha- and beta-blocker labetalol is especially useful for treatment of concomitant tachycardia and hypertension induced by methamphetamine. The phenomenon of "unopposed alpha stimulation" has not been reported with the use of beta-blockers for treatment of methamphetamine toxicity.
Interactions:
Methamphetamine is metabolized by the liver enzyme CYP2D6, so CYP2D6 inhibitors will prolong the elimination half-life of methamphetamine. Methamphetamine also interacts with monoamine oxidase inhibitors (MAOIs), since both MAOIs and methamphetamine increase plasma catecholamines; therefore, concurrent use of both is dangerous. Methamphetamine may decrease the effects of sedatives and depressants and increase the effects of antidepressants and other stimulants as well. Methamphetamine may counteract the effects of antihypertensives and antipsychotics owing to its effects on the cardiovascular system and cognition respectively. The pH of gastrointestinal content and urine affects the absorption and excretion of methamphetamine. Specifically, acidic substances will reduce the absorption of methamphetamine and increase urinary excretion, while alkaline substances do the opposite. Owing to the effect pH has on absorption, proton pump inhibitors, which reduce gastric acid, are known to interact with methamphetamine.
Pharmacology:
Pharmacodynamics Methamphetamine has been identified as a potent full agonist of trace amine-associated receptor 1 (TAAR1), a G protein-coupled receptor (GPCR) that regulates brain catecholamine systems. Activation of TAAR1 increases cyclic adenosine monophosphate (cAMP) production and either completely inhibits or reverses the transport direction of the dopamine transporter (DAT), norepinephrine transporter (NET), and serotonin transporter (SERT). When methamphetamine binds to TAAR1, it triggers transporter phosphorylation via protein kinase A (PKA) and protein kinase C (PKC) signaling, ultimately resulting in the internalization or reverse function of monoamine transporters. Methamphetamine is also known to increase intracellular calcium, an effect which is associated with DAT phosphorylation through a Ca2+/calmodulin-dependent protein kinase (CAMK)-dependent signaling pathway, in turn producing dopamine efflux. TAAR1 has been shown to reduce the firing rate of neurons through direct activation of G protein-coupled inwardly-rectifying potassium channels. TAAR1 activation by methamphetamine in astrocytes appears to negatively modulate the membrane expression and function of EAAT2, a type of glutamate transporter. In addition to its effect on the plasma membrane monoamine transporters, methamphetamine inhibits synaptic vesicle function by inhibiting VMAT2, which prevents monoamine uptake into the vesicles and promotes their release. This results in the outflow of monoamines from synaptic vesicles into the cytosol (intracellular fluid) of the presynaptic neuron, and their subsequent release into the synaptic cleft by the phosphorylated transporters. Other transporters that methamphetamine is known to inhibit are SLC22A3 and SLC22A5. SLC22A3 is an extraneuronal monoamine transporter that is present in astrocytes, and SLC22A5 is a high-affinity carnitine transporter. Methamphetamine is also an agonist of the alpha-2 adrenergic receptors and sigma receptors with a greater affinity for σ1 than σ2, and inhibits monoamine oxidase A (MAO-A) and monoamine oxidase B (MAO-B). Sigma receptor activation by methamphetamine may facilitate its central nervous system stimulant effects and promote neurotoxicity within the brain. Dextromethamphetamine is a stronger psychostimulant, but levomethamphetamine has stronger peripheral effects, a longer half-life, and longer perceived effects among addicts. At high doses, both enantiomers of methamphetamine can induce similar stereotypy and methamphetamine psychosis, but levomethamphetamine has shorter psychodynamic effects.
Pharmacology:
Pharmacokinetics The bioavailability of methamphetamine is 67% orally, 79% intranasally, 67 to 90% via inhalation (smoking), and 100% intravenously. Following oral administration, methamphetamine is well-absorbed into the bloodstream, with peak plasma methamphetamine concentrations achieved in approximately 3.13–6.3 hours post ingestion. Methamphetamine is also well absorbed following inhalation and following intranasal administration. Because of its high lipophilicity, methamphetamine moves through the blood–brain barrier more readily than other stimulants, and within the brain it is more resistant to degradation by monoamine oxidase. The amphetamine metabolite peaks at 10–24 hours. Methamphetamine is excreted by the kidneys, with the rate of excretion into the urine heavily influenced by urinary pH. When taken orally, 30–54% of the dose is excreted in urine as methamphetamine and 10–23% as amphetamine. Following IV doses, about 45% is excreted as methamphetamine and 7% as amphetamine. The elimination half-life of methamphetamine varies with a range of 5–30 hours, but it is on average 9 to 12 hours in most studies. The elimination half-life of methamphetamine does not vary by route of administration, but is subject to substantial interindividual variability. CYP2D6, dopamine β-hydroxylase, flavin-containing monooxygenase 3, butyrate-CoA ligase, and glycine N-acyltransferase are the enzymes known to metabolize methamphetamine or its metabolites in humans. The primary metabolites are amphetamine and 4-hydroxymethamphetamine; other minor metabolites include: 4-hydroxyamphetamine, 4-hydroxynorephedrine, 4-hydroxyphenylacetone, benzoic acid, hippuric acid, norephedrine, and phenylacetone, the metabolites of amphetamine. Among these metabolites, the active sympathomimetics are amphetamine, 4‑hydroxyamphetamine, 4‑hydroxynorephedrine, 4-hydroxymethamphetamine, and norephedrine. Methamphetamine is a CYP2D6 inhibitor. The main metabolic pathways involve aromatic para-hydroxylation, aliphatic alpha- and beta-hydroxylation, N-oxidation, N-dealkylation, and deamination. Detection in biological fluids Methamphetamine and amphetamine are often measured in urine or blood as part of a drug test for sports, employment, poisoning diagnostics, and forensics. Chiral techniques may be employed to help distinguish the source of the drug to determine whether it was obtained illicitly or legally via prescription or prodrug. Chiral separation is needed to assess the possible contribution of levomethamphetamine, which is an active ingredient in some OTC nasal decongestants, toward a positive test result. Dietary zinc supplements can mask the presence of methamphetamine and other drugs in urine.
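The reported half-life implies first-order (exponential) elimination; as a rough illustrative calculation, not a figure from the source, assuming a representative half-life of 10 hours:

```latex
C(t) = C_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}},
\qquad
\frac{C(30\,\mathrm{h})}{C_0} = \left(\tfrac{1}{2}\right)^{30/10} = \tfrac{1}{8} \approx 12.5\%
```

Given the 5–30 hour range quoted above, the fraction remaining at any fixed time after a dose varies widely between individuals.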
Chemistry:
Methamphetamine is a chiral compound with two enantiomers, dextromethamphetamine and levomethamphetamine. At room temperature, the free base of methamphetamine is a clear and colorless liquid with an odor characteristic of geranium leaves. It is soluble in diethyl ether and ethanol as well as miscible with chloroform. In contrast, the methamphetamine hydrochloride salt is odorless with a bitter taste. It has a melting point between 170 and 175 °C (338 and 347 °F) and, at room temperature, occurs as white crystals or a white crystalline powder. The hydrochloride salt is also freely soluble in ethanol and water. The crystal structure of either enantiomer is monoclinic with P21 space group; at 90 K (−183.2 °C; −297.7 °F), it has lattice parameters a = 7.10 Å, b = 7.29 Å, c = 10.81 Å, and β = 97.29°.
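For a monoclinic cell, the quoted lattice parameters determine the unit-cell volume via V = abc sin β; as a derived check (not stated in the source):

```latex
V = a\,b\,c\,\sin\beta
  = 7.10 \times 7.29 \times 10.81 \times \sin(97.29^\circ)\ \text{\AA}^3
  \approx 555\ \text{\AA}^3
```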
Chemistry:
Degradation A 2011 study into the destruction of methamphetamine using bleach showed that effectiveness is correlated with exposure time and concentration. A year-long study (also from 2011) showed that methamphetamine in soils is a persistent pollutant. In a 2013 study of bioreactors in wastewater, methamphetamine was found to be largely degraded within 30 days under exposure to light.
Chemistry:
Synthesis Racemic methamphetamine may be prepared starting from phenylacetone by either the Leuckart or reductive amination methods. In the Leuckart reaction, one equivalent of phenylacetone is reacted with two equivalents of N-methylformamide to produce the formyl amide of methamphetamine plus carbon dioxide and methylamine as side products. In this reaction, an iminium cation is formed as an intermediate which is reduced by the second equivalent of N-methylformamide. The intermediate formyl amide is then hydrolyzed under acidic aqueous conditions to yield methamphetamine as the final product. Alternatively, phenylacetone can be reacted with methylamine under reducing conditions to yield methamphetamine.
History, society, and culture:
Amphetamine, discovered before methamphetamine, was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu who named it phenylisopropylamine. Shortly after, methamphetamine was synthesized from ephedrine in 1893 by Japanese chemist Nagai Nagayoshi. Three decades later, in 1919, methamphetamine hydrochloride was synthesized by pharmacologist Akira Ogata via reduction of ephedrine using red phosphorus and iodine.
History, society, and culture:
From 1938, methamphetamine was marketed on a large scale in Germany as a nonprescription drug under the brand name Pervitin, produced by the Berlin-based Temmler pharmaceutical company. It was used by all branches of the combined armed forces of the Third Reich, for its stimulant effects and to induce extended wakefulness. Pervitin became colloquially known among the German troops as "Stuka-Tablets" (Stuka-Tabletten) and "Hermann-Göring-Pills" (Hermann-Göring-Pillen), as a snide allusion to Göring's widely-known addiction to drugs. However, the side effects, particularly the withdrawal symptoms, were so serious that the army sharply cut back its usage in 1940. By 1941, usage was restricted to a doctor's prescription, and the military tightly controlled its distribution. Soldiers would only receive a couple of tablets at a time, and were discouraged from using them in combat. Historian Łukasz Kamieński says, "A soldier going to battle on Pervitin usually found himself unable to perform effectively for the next day or two. Suffering from a drug hangover and looking more like a zombie than a great warrior, he had to recover from the side effects." Some soldiers turned violent, committing war crimes against civilians; others attacked their own officers. At the end of the war, it was used as part of a new drug: D-IX.
History, society, and culture:
Obetrol, patented by Obetrol Pharmaceuticals in the 1950s and indicated for treatment of obesity, was one of the first brands of pharmaceutical methamphetamine products. Because of the psychological and stimulant effects of methamphetamine, Obetrol became a popular diet pill in America in the 1950s and 1960s. Eventually, as the addictive properties of the drug became known, governments began to strictly regulate the production and distribution of methamphetamine. For example, during the early 1970s in the United States, methamphetamine became a schedule II controlled substance under the Controlled Substances Act. Currently, methamphetamine is sold under the trade name Desoxyn, trademarked by the Danish pharmaceutical company Lundbeck. As of January 2013, the Desoxyn trademark had been sold to Italian pharmaceutical company Recordati.
Trafficking:
The Golden Triangle (Southeast Asia), specifically Shan State, Myanmar, is the world's leading producer of methamphetamine, as production has shifted there to yaba and crystalline methamphetamine, including for export to the United States and across East and Southeast Asia and the Pacific. Concerning the accelerating synthetic drug production in the region, the Cantonese Chinese syndicate Sam Gor, also known as The Company, is understood to be the main international crime syndicate responsible for this shift. It is made up of members of five different triads. Sam Gor is primarily involved in drug trafficking, earning at least $8 billion per year. Sam Gor is alleged to control 40% of the Asia-Pacific methamphetamine market, while also trafficking heroin and ketamine. The organization is active in a variety of countries, including Myanmar, Thailand, New Zealand, Australia, Japan, China, and Taiwan. Sam Gor previously produced meth in Southern China and is now believed to manufacture mainly in the Golden Triangle, specifically Shan State, Myanmar, and is considered responsible for much of the massive surge in crystal meth around 2019. The group is understood to be headed by Tse Chi Lop, a gangster born in Guangzhou, China, who also holds a Canadian passport.
Trafficking:
Liu Zhaohua was another individual involved in the production and trafficking of methamphetamine until his arrest in 2005. It was estimated that over 18 tonnes of methamphetamine were produced under his watch.
Legal status:
The production, distribution, sale, and possession of methamphetamine is restricted or illegal in many jurisdictions. Methamphetamine has been placed in schedule II of the United Nations Convention on Psychotropic Substances treaty.
Research:
It has been suggested, based on animal research, that calcitriol, the active metabolite of vitamin D, can provide significant protection against the DA- and 5-HT-depleting effects of neurotoxic doses of methamphetamine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PTPRN**
PTPRN:
Receptor-type tyrosine-protein phosphatase-like N, also called "IA-2", is an enzyme that in humans is encoded by the PTPRN gene.
Overview:
The IA-2 protein encoded by the PTPRN gene is a member of the protein tyrosine phosphatase (PTP) family and the PTPRN subfamily. PTPs are known to be signaling molecules that regulate a variety of cellular processes including cell growth, differentiation, the mitotic cycle, and oncogenic transformation. This PTP possesses an extracellular region, a single transmembrane region, and a single catalytic domain, and thus represents a receptor-type PTP. This PTP was found to be an autoantigen that is reactive with insulin-dependent diabetes mellitus (IDDM) patient sera, and thus may be a potential target of autoimmunity in diabetes mellitus.
Overview:
Structure IA-2 and IA-2b belong to the family of protein tyrosine phosphatase-like (PTP) molecules. IA-2 is a transmembrane protein with 979 amino acids encoded by a gene on human chromosome 2q35. Similarly, IA-2b has 986 amino acids, and its gene is located on human chromosome 7q36. IA-2 is synthesised as a pro-protein of 110 kDa, which is then converted by post-translational modifications into a 130 kDa protein.
Overview:
IA-2 and IA-2b share 74% identity within their intracellular domains, but only 27% in their extracellular domains.
The IA-2 protein is expressed mainly in cells of neuroendocrine origin, such as pancreatic islets and the brain. It is localised in the membrane of secretory granules of pancreatic β-cells.
Function Even though IA-2/b has a similar structure to other PTPs, there is a critical amino acid replacement at position 911 (Asp for Ala) at a site required for enzymatic activity. These proteins thus fail to show enzymatic activity, and their function remains unclear. They could play a role in insulin secretory pathways, in protein sorting, or in regulating other PTPs.
Overview:
Autoantigen in Type 1 Diabetes IA-2 is a second major autoantigen in Type 1 Diabetes. IA-2 autoantibodies are found in 78% of type 1 diabetics at the time of diagnosis. It has been shown that the autoantibodies react exclusively with the intracellular (juxtamembrane) domain, but not with the extracellular domain, of IA-2/b. It is suggested that IA-2, and not IA-2b, is the primary PTP-like autoantigen in Type 1 Diabetes. The juxtamembrane region of IA-2 is probably the early antibody target, followed by multiple epitope spreading, which is believed to take place early in the development of the disease.
Overview:
Autoantibodies in Type 1 Diabetes Autoantibodies targeting pancreatic islet cells can occur years before hyperglycaemia is established; therefore, these autoantibodies are used in the prediction of Type 1 Diabetes.
Overview:
Islet cell autoantibodies are detected in serum, including ICA (islet cell cytoplasmic autoantibodies), IAA (autoantibodies to insulin), GAD (glutamic acid decarboxylase), IA-2 (insulinoma-associated protein 2), and ZnT8 (zinc transporter of islet beta cells). However, it is not clear whether a primary autoantigen exists, with the immune reaction against other molecules resulting from secondary antigen spreading, or whether multiple molecules represent a primary target. The first autoimmune responses are usually directed against insulin or GAD, and it is unusual to observe IA-2 or ZnT8 as the first autoantibodies. What sets off the appearance of the first β-cell-targeting autoantibody is unclear. The IAA autoantibody usually appears early in life, at a median age of 1.49 years. Presence of GAD as the first autoantibody is spread more widely in age, with a median of 4.04 years. It is relatively rare to see IA-2 as the primary autoantibody (median age 3.03 years). Interestingly, secondary autoantibodies follow different patterns depending on which autoantibody appeared first. If the primary autoantibody is IAA, then GAD typically appears soon after, peaking at around 2 years of age; secondary IAA usually occurs after GAD, with its age distribution spread over a wide range. It is unknown whether the appearance of autoantibodies corresponds to the insulitis process in the pancreas and, if so, in what combination the autoantibodies occur.
Interactions:
PTPRN has been shown to interact with SPTBN4. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Orthology (language)**
Orthology (language):
Orthology is the study of correct speech and the right use of words in language. The word comes from Greek ortho- ("correct") and -logy ("science of"). It is a field where psychology, philosophy, linguistics and many other areas of learning come together. The most noted use of orthology was the selection of words for the language of Basic English by the Orthological Institute.
Orthology (language):
The Meaning of Meaning, by C.K. Ogden and I.A. Richards, is an important book dealing with orthology. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sinclair BASIC**
Sinclair BASIC:
Sinclair BASIC is a dialect of the programming language BASIC used in the 8-bit home computers from Sinclair Research and Timex Sinclair. The Sinclair BASIC interpreter was made by Nine Tiles Networks Ltd.
History:
Sinclair BASIC was originally developed in 1979 for the ZX80 by Nine Tiles. The programmers were John Grant, the owner of Nine Tiles, and Steve Vickers.
History:
It was initially an incomplete implementation of the 1978 American National Standards Institute (ANSI) minimal BASIC standard with integer arithmetic only, termed 4K BASIC (for its ROM size) for the ZX80. It evolved through the floating-point 8K BASIC for the ZX81 and T/S 1000 (which was also available as an upgrade for the ZX80), and became an almost complete version in the 16 KB ROM ZX Spectrum (known as 48K BASIC). It is present in all ZX Spectrum compatibles and clones, with more advanced systems also offering expanded versions like 128K BASIC, +3 BASIC, T/S 2000 BASIC, BASIC64 or Timex Extended Basic. As of 2015, interpreters exist for modern and older operating systems that allow Sinclair BASIC to be used easily.
Syntax:
Keywords On the 16K/48K ZX Spectrum (48K BASIC), there are 88 keywords in Sinclair BASIC, denoting commands (of which there are 50), functions and logical operators (31), and other keywords (16, including 9 which are also commands or functions).
Keyword entry In 48K models and older, the keywords are entered via Sinclair's unique keyword entry system, as indicated on the table. The most common commands need one keystroke only; for example, pressing only P at the start of a line on a Spectrum produces the full command PRINT. Less frequent commands require more complex key sequences: BEEP (for example) is keyed by pressing CAPS SHIFT plus SYMBOL SHIFT to access extended mode (later models include an EXTENDED MODE key), keeping SYMBOL SHIFT held down and pressing Z. Keywords are colour-coded on the original Spectrum keyboard to indicate which mode is required:
White: key only
Red on the key itself: SYMBOL SHIFT plus the key
Green above the key: EXTENDED MODE followed by the key
Red below the key: EXTENDED MODE followed by SYMBOL SHIFT plus the key
The ZX81 8K BASIC used the shorter forms GOTO, GOSUB, CONT and RAND, whereas the Spectrum 48K BASIC used the longer forms GO TO, GO SUB, CONTINUE and RANDOMIZE. The ZX80 4K BASIC also used these longer forms but differed by using the spelling RANDOMISE. The ZX81 8K BASIC was the only version to use FAST, SCROLL, SLOW and UNPLOT. The ZX80 4K BASIC had the exclusive function TL$(); it was equivalent to the string operator (2 TO ) in later versions.
Syntax:
Unique code points are assigned in the ZX80 character set, ZX81 character set and ZX Spectrum character set for each keyword or multi-character operator, i.e. <=, >=, <>, "" (tokenized on the ZX81 only), ** (replaced with ↑ on the Spectrum). These are expanded by referencing a token table in ROM. Thus, a keyword uses one byte of memory only, a significant saving over traditional letter-by-letter storage. This also meant that the BASIC interpreter could quickly determine any command or function by evaluating one byte, and that the keywords need not be reserved words like in other BASIC dialects or other programming languages, e.g., it is allowed to define a variable named PRINT and output its value with PRINT PRINT. This is also related to the syntax requirement that every line start with a command keyword, and pressing the one keypress for that command at the start of a line changes the editor from command mode to letter mode. Thus, variable assignment requires LET (i.e., LET a=1 not only a=1). This practice is also different from other BASIC dialects. Further, it meant that unlike other BASIC dialects, the interpreter needed no parentheses to identify functions; SIN x was sufficient, no SIN(x) needed (though the latter was allowed). The 4K BASIC ROM of the ZX80 had a short list of exceptions to this: the functions CHR$(), STR$(), TL$(), PEEK(), CODE(), RND(), USR() and ABS() did not have one-byte tokens but were typed in letter-by-letter and required the parentheses. They were listed as the INTEGRAL FUNCTIONS on a label above and to the right of the keyboard.
128 BASIC, present on ZX Spectrum 128, +2, +3, +2A, and +2B, stored keywords internally in one-byte code points, but used a conventional letter-by-letter BASIC input system. It also introduced two new commands:
PLAY, which operated the 128k models' General Instrument AY-3-8910 music chip
SPECTRUM, which switched the 128k Spectrum into a 48k Spectrum compatibility mode
The original Spanish ZX Spectrum 128 included four additional BASIC editor commands in Spanish, one of which was undocumented:
EDITAR (to edit a line number or invoke the full screen string editor)
NUMERO (to renumber the program lines)
BORRAR (to delete program lines)
ANCHO (to set the column width of the RS-232 device, but undocumented as the code was broken)
Unlike the LEFT$(), MID$() and RIGHT$() functions used in the ubiquitous Microsoft BASIC dialects for home computers, parts of strings in Sinclair BASIC are accessed by numeric range. For example, a$(5 TO 10) gives a substring starting with the 5th and ending with the 10th character of the variable a$. Thus, it is possible to replace the LEFT$() and RIGHT$() commands by simply omitting the left or right array position respectively; for example a$( TO 5) is equivalent to LEFT$(a$,5). Further, a$(5) alone is enough to replace MID$(a$,5,1).
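A short, hypothetical 48 BASIC listing (not taken from the source) illustrating the range-based string slicing described above, and the fact that keywords are not reserved words:

```basic
10 LET a$="SINCLAIR BASIC"
20 PRINT a$( TO 8): REM "SINCLAIR", like LEFT$(a$,8)
30 PRINT a$(10 TO ): REM "BASIC", like RIGHT$(a$,5)
40 PRINT a$(5 TO 8): REM "LAIR", like MID$(a$,5,4)
50 PRINT a$(5): REM "L", like MID$(a$,5,1)
60 LET print=2: PRINT print: REM keywords are not reserved; prints 2
```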
Syntax:
Variable names Variables holding numeric values may be any length, while string and array variable names must consist of only one alphabetical character. Thus, LET a=5, LET Apples=5, LET a$="Hello", DIM a(10) and DIM a$(10) are all good, while LET Apples$="Fruit", DIM Apples(10) and DIM Apples$(10) are not.
The long variable names allowed for numeric variables can include alphanumeric characters after the first character, so LET a0=5 is allowed but not LET 0a=5. Long variable names can also include spaces, which are ignored, so LET number of apples = 5 is the same as LET numberofapples = 5.
Official versions:
4K BASIC 4K BASIC (so named for residing in 4 KiB of read-only memory (ROM)) was developed by John Grant of Nine Tiles for the ZX80. It has integer-only arithmetic.
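A minimal illustrative program, assumed for this article rather than taken from the source, using only facilities described for the integer-only 4K BASIC (a FOR/TO/NEXT loop, PRINT, and integer arithmetic; the statement list appears below):

```basic
10 FOR I=1 TO 3
20 PRINT I*I
30 NEXT I
```

Run with RUN, it prints 1, 4 and 9; because 4K BASIC is integer-only, expressions cannot yield fractional results.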
Official versions:
System Commands: NEW RUN LIST LOAD SAVE
Control Statements: GOTO IF THEN GOSUB STOP RETURN FOR TO NEXT CONTINUE
Input/Output Statements: PRINT INPUT
Assignment Statement: LET
Other Statements: CLEAR CLS DIM REM RANDOMIZE POKE
8K BASIC 8K BASIC is the ZX81 BASIC (also available as an upgrade for the ZX80), updated with floating-point arithmetic by Steve Vickers, so named for residing in 8 KiB ROM.
Official versions:
Statements: PRINT RAND LET CLEAR RUN LIST GOTO CONT INPUT NEW REM PRINT STOP BREAK IF STOP FOR NEXT TO STEP SLOW FAST GOSUB RETURN SAVE LOAD CLS SCROLL PLOT UNPLOT PAUSE LPRINT LLIST COPY DIM POKE NEW
Functions: ABS SGN SIN COS TAN ASN ACS ATN LN EXP SQR INT PI RND LEN VAL STR$ NOT CODE CHR$ INKEY$ AT TAB INKEY$ PEEK USR
48 BASIC 48 BASIC is the BASIC for the original 16/48 KB RAM ZX Spectrum (and clones), with colour and more peripherals added by Steve Vickers and John Grant. It resides in 16 KB ROM and began to be called 48 BASIC with the introduction of the ZX Spectrum 128, at which time the 16 KB Spectrum was no longer sold and most existing ones in use had been upgraded to 48 KB.
128 BASIC 128 BASIC is the BASIC for the ZX Spectrum 128. It offers extra commands and uses letter-by-letter input.
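An illustrative fragment, assumed for this article rather than taken from the source, that would run under 48 BASIC and later versions, combining the floating-point functions (SIN, PI) and graphics statement (PLOT) listed above:

```basic
10 REM plot one cycle of a sine wave across the 256-pixel-wide screen
20 FOR x=0 TO 255
30 PLOT x,88+80*SIN (2*PI*x/256)
40 NEXT x
```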
Official versions:
New commands: LOAD ! SAVE ! MERGE ! ERASE ! PLAY SPECTRUM
+3 BASIC +3 BASIC is the BASIC with disk support for the ZX Spectrum +3 and +2A. New commands: FORMAT COPY
T/S 2000 BASIC T/S 2000 BASIC is used on the Spectrum-compatible Timex Sinclair 2068 (T/S 2068) and adds the following six new keywords:
DELETE deletes BASIC program line ranges.
Official versions:
FREE is a function that gives the amount of free RAM. PRINT FREE will show how much RAM is free.
ON ERR is an error-handling function mostly used as ON ERR GO TO or ON ERR CONT.
RESET can be used to reset the behaviour of ON ERR. It was also intended to reset peripherals.
SOUND controls the AY-3-8912 sound chip.
STICK is a function that gives the position of the internal joystick (Timex Sinclair 2090).
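A hypothetical fragment, constructed only from the keyword descriptions above rather than from source code, showing how FREE, ON ERR and RESET might be combined in a T/S 2000 BASIC program:

```basic
10 REM install an error handler, then report free memory
20 ON ERR GO TO 100
30 PRINT "Free RAM: ";FREE
40 STOP
100 PRINT "An error was trapped": RESET
```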
Official versions:
BASIC64 BASIC64 by Timex of Portugal is a software extension to allow better BASIC programming with the 512×192 and dual display area graphic modes available only on Timex Sinclair computers. This extension adds commands and does a complete memory remap to avoid the system overwriting the extended screen memory area. Two versions exist due to different memory maps - a version for TC 2048 and a version for T/S 2068 and TC 2068.
Official versions:
PRINT # Prints to a specific output channel.
LIST # Lists the program to a specific output channel.
CLS* Clears both display areas.
INK* Sets ink colour for both display areas.
PAPER* Sets paper colour for both display areas.
SCREEN$ Selects the high / normal resolution modes.
PLOT* Plots a pixel and updates the drawing position.
LINE Draws a line from the previous PLOT position, supporting arc drawing.
CIRCLE* Draws a circle or oval, depending on screen mode.
Official versions:
Timex Extended Basic Timex Extended Basic by Timex of Portugal is used on the Timex Computer 3256, adding TEC (Timex Extended Commands) commands supporting the AY-3-8912 sound chip, the RS-232 network and the 512x192 pixel high-resolution graphic mode.
RAM drive commands: LOAD! SAVE! CAT! MERGE! ERASE! CLEAR!
RS-232 commands: FORMAT! LPRINT LLIST
AY-3-8912 commands: BEEP!
512x192 resolution commands: SCREEN$ DRAW! PLOT! CIRCLE!
Other versions, extensions, derivatives and successors:
Interpreters for the ZX Spectrum family Several ZX Spectrum interpreters exist.
Beta BASIC by Dr. Andy Wright, was originally a BASIC extension, but became a full interpreter.
YS MegaBasic by Mike Leaman.
ZebraOS by Zebra Systems in New York, a cartridge version of T/S 2000 BASIC that used the 512×192 screen mode.
Sea Change ROM by Steve Vickers and Ian Logan, modified by Geoff Wearmouth, a replacement ROM with an enhanced Sinclair BASIC.
Gosh Wonderful by Geoff Wearmouth, a replacement ROM that fixes bugs and adds a tokenizer, stream lister, delete and renumber commands.
OpenSE BASIC (formerly SE BASIC) by Andrew Owen, a replacement ROM with bug fixes and many enhancements including ULAplus support, published as open source in 2011.
Compilers for the ZX Spectrum family Several ZX Spectrum compilers exist.
Other versions, extensions, derivatives and successors:
HiSoft COLT Compiler (a.k.a. HiSoft COLT Integer Compiler)
HiSoft BASIC (a.k.a. HiSoft BASIC Compiler), an integer and floating-point capable compiler
Laser Compiler
Softek 'IS' Integer Compiler (successor to Softek Integer Compiler)
Softek 'FP' Full Compiler
ZIP Compiler
Derivatives and successors for other computers
SuperBASIC, a much more advanced BASIC dialect introduced with the Sinclair QL personal computer, with some similarities to the earlier Sinclair BASICs
SAM Basic, the BASIC on the SAM Coupé, generally considered a ZX Spectrum clone
ROMU6 by Cesar and Juan Hernandez - MSX
Spectrum 48 by Whitby Computers - Commodore 64
Sparky eSinclair BASIC by Richard Kelsh, an operating system loosely based on ZX Spectrum BASIC - Zilog eZ80
Sinbas by Pavel Napravnik - DOS
Basic (and CheckBasic) by Philip Kendall - Unix
BINSIC by Adrian McMenamin, a reimplementation in Groovy closely modelled on ZX81 BASIC - Java
BASin by Paul Dunn, a complete Sinclair BASIC integrated development environment (IDE) based on a ZX Spectrum emulator - Windows
SpecBAS (a.k.a. SpecOS) by Paul Dunn, an integrated development environment (IDE) providing an enhanced superset of Sinclair BASIC - Windows, Linux, Pandora, and Raspberry Pi
ZX-Basicus by Juan-Antonio Fernández-Madrigal, a synthesizer, analyzer, optimizer, interpreter and debugger of Sinclair BASIC 48K for PCs, freely downloadable for Linux and Windows. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Eucryptite**
Eucryptite:
Eucryptite is a lithium-bearing aluminium silicate mineral with formula LiAlSiO4. It crystallizes in the trigonal - rhombohedral crystal system. It typically occurs as granular to massive in form and may pseudomorphically replace spodumene. It has a brittle to conchoidal fracture and indistinct cleavage. It is transparent to translucent and varies from colorless to white to brown. It has a Mohs hardness of 6.5 and a specific gravity of 2.67. Optically it is uniaxial positive with refractive index values of nω = 1.570 - 1.573 and nε = 1.583 - 1.587.
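The quoted refractive indices imply a small birefringence, and the fact that nε exceeds nω is what makes the mineral uniaxial positive; as a derived value (not stated in the source):

```latex
\delta = n_\varepsilon - n_\omega \approx 1.583 - 1.570 = 0.013
\quad\text{to}\quad
1.587 - 1.573 = 0.014
```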
Eucryptite:
Its typical occurrence is in lithium-rich pegmatites in association with albite, spodumene, petalite, amblygonite, lepidolite and quartz. It occurs as a secondary alteration product of spodumene. It was first described in 1880 for an occurrence at its type locality, Branchville, Connecticut. Its name was from the Greek for well concealed, for its typical occurrence embedded in albite. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |