| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
352,905 | https://en.wikipedia.org/wiki/R-process | In nuclear astrophysics, the rapid neutron-capture process, also known as the r-process, is a set of nuclear reactions that is responsible for the creation of approximately half of the atomic nuclei heavier than iron, the "heavy elements", with the other half produced by the p-process and s-process. The r-process usually synthesizes the most neutron-rich stable isotopes of each heavy element. The r-process can typically synthesize the heaviest four isotopes of every heavy element; of these, the heavier two are called r-only nuclei because they are created exclusively via the r-process. Abundance peaks for the r-process occur near mass numbers A ≈ 80 (elements Se, Br, and Kr), A ≈ 130 (elements Te, I, and Xe) and A ≈ 195 (elements Os, Ir, and Pt).
The r-process entails a succession of rapid neutron captures (hence the name) by one or more heavy seed nuclei, typically beginning with nuclei in the abundance peak centered on 56Fe. The captures must be rapid in the sense that the nuclei must not have time to undergo radioactive decay (typically via β− decay) before another neutron arrives to be captured. This sequence can continue up to the limit of stability of the increasingly neutron-rich nuclei (the neutron drip line) to physically retain neutrons as governed by the short range nuclear force. The r-process therefore must occur in locations where there exists a high density of free neutrons.
Early studies theorized that 10²⁴ free neutrons per cm³ would be required, for temperatures of about 1 GK, in order to match the waiting points, at which no more neutrons can be captured, with the mass numbers of the abundance peaks for r-process nuclei. This amounts to on the order of a gram of free neutrons in every cubic centimeter, an astonishing number requiring extreme locations. Traditionally this suggested the material ejected from the reexpanded core of a core-collapse supernova, as part of supernova nucleosynthesis, or decompression of neutron star matter thrown off by a binary neutron star merger in a kilonova. The relative contribution of each of these sources to the astrophysical abundance of r-process elements is a matter of ongoing research.
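As a quick order-of-magnitude check on the figures above, the quoted neutron number density can be converted into a mass density using the neutron rest mass. The short sketch below is an added illustration, not part of the article; the constants are standard values.

```python
# Convert the quoted free-neutron number density into a mass density.
NEUTRON_MASS_G = 1.675e-24        # neutron rest mass in grams
n_per_cm3 = 1e24                  # free neutrons per cubic centimetre, as quoted above

mass_density_g_per_cm3 = n_per_cm3 * NEUTRON_MASS_G
print(f"{mass_density_g_per_cm3:.2f} g of free neutrons per cm^3")   # ~1.7 g/cm^3
```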
A limited r-process-like series of neutron captures occurs to a minor extent in thermonuclear weapon explosions. These led to the discovery of the elements einsteinium (element 99) and fermium (element 100) in nuclear weapon fallout.
The r-process contrasts with the s-process, the other predominant mechanism for the production of heavy elements, which is nucleosynthesis by means of slow captures of neutrons. In general, isotopes involved in the s-process have half-lives long enough to enable their study in laboratory experiments, but this is not typically true for isotopes involved in the r-process. The s-process primarily occurs within ordinary stars, particularly AGB stars, where the neutron flux is sufficient to cause neutron captures to recur every 10–100 years, much too slow for the r-process, which requires 100 captures per second. The s-process is secondary, meaning that it requires pre-existing heavy isotopes as seed nuclei to be converted into other heavy nuclei by a slow sequence of captures of free neutrons. The r-process scenarios create their own seed nuclei, so they might proceed in massive stars that contain no heavy seed nuclei. Taken together, the r- and s-processes account for almost the entire abundance of chemical elements heavier than iron. The historical challenge has been to locate physical settings appropriate to their time scales.
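To make the timescale contrast concrete, the toy sketch below (an added illustration, not from the article) applies the capture-versus-decay criterion: a nucleus stays on the s-process path when beta decay outpaces neutron capture, and on the r-process path when captures arrive much faster than decays. The capture intervals are the rough figures quoted above; the beta half-life of a few days is an assumed, hypothetical value.

```python
# Toy criterion: compare the mean time between neutron captures with the
# beta-decay half-life of an unstable nucleus on the capture path.
SECONDS_PER_YEAR = 3.156e7

def dominant_path(capture_interval_s: float, beta_half_life_s: float) -> str:
    """Return which process wins for a given unstable nucleus (toy criterion)."""
    if beta_half_life_s < capture_interval_s:
        return "s-process path: decay happens before the next capture"
    return "r-process path: captures outrun beta decay"

beta_half_life_s = 5 * 24 * 3600                                # assumed half-life of ~5 days
print(dominant_path(30 * SECONDS_PER_YEAR, beta_half_life_s))   # ~1 capture per 30 years
print(dominant_path(1 / 100, beta_half_life_s))                 # ~100 captures per second
```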
History
Following pioneering research into the Big Bang and the formation of helium in stars, an unknown process responsible for producing heavier elements found on Earth from hydrogen and helium was suspected to exist. One early attempt at explanation came from Subrahmanyan Chandrasekhar and Louis R. Henrich who postulated that elements were produced at temperatures between 6×10⁹ and 8×10⁹ K. Their theory accounted for elements up to chlorine, though there was no explanation for elements of atomic weight heavier than 40 amu at non-negligible abundances.
This became the foundation of a study by Fred Hoyle, who hypothesized that conditions in the core of collapsing stars would enable nucleosynthesis of the remainder of the elements via rapid capture of densely packed free neutrons. However, there remained unanswered questions about equilibrium in stars that was required to balance beta-decays and precisely account for abundances of elements that would be formed in such conditions.
The need for a physical setting providing rapid neutron capture, which was known to almost certainly have a role in element formation, was also seen in a table of abundances of isotopes of heavy elements by Hans Suess and Harold Urey in 1956. Their abundance table revealed larger than average abundances of natural isotopes containing magic numbers of neutrons as well as abundance peaks about 10 amu lighter than stable nuclei containing magic numbers of neutrons which were also in abundance, suggesting that radioactive neutron-rich nuclei having the magic neutron numbers but roughly ten fewer protons were formed. These observations also implied that rapid neutron capture occurred faster than beta decay, and the resulting abundance peaks were caused by so-called waiting points at magic numbers. This process, rapid neutron capture by neutron-rich isotopes, became known as the r-process, whereas the s-process was named for its characteristic slow neutron capture. A table apportioning the heavy isotopes phenomenologically between s-process and r-process isotopes was published in 1957 in the B2FH review paper, which named the r-process and outlined the physics that guides it. Alastair G. W. Cameron also published a smaller study about the r-process in the same year.
The stationary r-process as described by the B2FH paper was first demonstrated in a time-dependent calculation at Caltech by Phillip A. Seeger, William A. Fowler and Donald D. Clayton, who found that no single temporal snapshot matched the solar r-process abundances, but that, when superposed, the snapshots did achieve a successful characterization of the r-process abundance distribution. Shorter-time distributions emphasize abundances at lower atomic weights, whereas longer-time distributions emphasize those at higher atomic weights. Subsequent treatments of the r-process reinforced those temporal features. Seeger et al. were also able to construct a more quantitative apportionment between s-process and r-process of the abundance table of heavy isotopes, thereby establishing a more reliable abundance curve for the r-process isotopes than B2FH had been able to define. Today, the r-process abundances are determined using their technique of subtracting the more reliable s-process isotopic abundances from the total isotopic abundances and attributing the remainder to r-process nucleosynthesis. That r-process abundance curve (vs. atomic weight) has provided for many decades the target for theoretical computations of abundances synthesized by the physical r-process.
The creation of free neutrons by electron capture during the rapid collapse to high density of a supernova core, along with quick assembly of some neutron-rich seed nuclei, makes the r-process a primary nucleosynthesis process, a process that can occur even in a star initially of pure H and He. This is in contrast to the B2FH designation of the r-process as a secondary process building on preexisting iron. Primary stellar nucleosynthesis begins earlier in the galaxy than does secondary nucleosynthesis. Alternatively, the high density of neutrons within neutron stars would be available for rapid assembly into r-process nuclei if a collision were to eject portions of a neutron star, which then rapidly expands freed from confinement. That sequence could also begin earlier in galactic time than would s-process nucleosynthesis; so each scenario fits the earlier growth of r-process abundances in the galaxy. Each of these scenarios is the subject of active theoretical research.
Observational evidence of the early r-process enrichment of interstellar gas and of subsequent newly formed stars, as applied to the abundance evolution of the galaxy of stars, was first laid out by James W. Truran in 1981. He and subsequent astronomers showed that the pattern of heavy-element abundances in the earliest metal-poor stars matched that of the shape of the solar r-process curve, as if the s-process component were missing. This was consistent with the hypothesis that the s-process had not yet begun to enrich interstellar gas when these young stars missing the s-process abundances were born from that gas, for it requires about 100 million years of galactic history for the s-process to get started whereas the r-process can begin after two million years. These s-process–poor, r-process–rich stellar compositions must have been born earlier than any s-process, showing that the r-process emerges from quickly evolving massive stars that become supernovae and leave neutron-star remnants that can merge with another neutron star. The primary nature of the early r-process thereby derives from observed abundance spectra in old stars that had been born early, when the galactic metallicity was still small, but that nonetheless contain their complement of r-process nuclei.
Either interpretation, though generally supported by supernova experts, has yet to achieve a totally satisfactory calculation of r-process abundances because the overall problem is numerically formidable. However, existing results are supportive; in 2017, new data about the r-process was discovered when the LIGO and Virgo gravitational-wave observatories discovered a merger of two neutron stars ejecting r-process matter. See Astrophysical sites below.
Noteworthy is that the r-process is responsible for our natural cohort of radioactive elements, such as uranium and thorium, as well as the most neutron-rich isotopes of each heavy element.
Nuclear physics
There are three natural candidate sites for r-process nucleosynthesis where the required conditions are thought to exist: low-mass supernovae, Type II supernovae, and neutron star mergers.
Immediately after the severe compression of electrons in a Type II supernova, beta-minus decay is blocked. This is because the high electron density fills all available free electron states up to a Fermi energy which is greater than the energy of nuclear beta decay. However, nuclear capture of those free electrons still occurs, and causes increasing neutronization of matter. This results in an extremely high density of free neutrons which cannot decay, on the order of 10²⁴ neutrons per cm³, and high temperatures. As this material re-expands and cools, neutron capture by still-existing heavy nuclei occurs much faster than beta-minus decay. As a consequence, the r-process runs up along the neutron drip line and highly unstable neutron-rich nuclei are created.
Three processes which affect the climbing of the neutron drip line are a notable decrease in the neutron-capture cross section in nuclei with closed neutron shells, the inhibiting process of photodisintegration, and the degree of nuclear stability in the heavy-isotope region. Neutron captures in r-process nucleosynthesis lead to the formation of neutron-rich, weakly bound nuclei with neutron separation energies as low as 2 MeV. At this stage, closed neutron shells at N = 50, 82, and 126 are reached, and neutron capture is temporarily paused. These so-called waiting points are characterized by increased binding energy relative to heavier isotopes, leading to low neutron capture cross sections and a buildup of semi-magic nuclei that are more stable toward beta decay. In addition, nuclei beyond the shell closures are susceptible to quicker beta decay owing to their proximity to the drip line; for these nuclei, beta decay occurs before further neutron capture. Waiting point nuclei are then allowed to beta decay toward stability before further neutron capture can occur, resulting in a slowdown or freeze-out of the reaction.
Decreasing nuclear stability terminates the r-process when its heaviest nuclei become unstable to spontaneous fission, when the total number of nucleons approaches 270. The fission barrier may be low enough before 270 such that neutron capture might induce fission instead of continuing up the neutron drip line. After the neutron flux decreases, these highly unstable radioactive nuclei undergo a rapid succession of beta decays until they reach more stable, neutron-rich nuclei. While the s-process creates an abundance of stable nuclei having closed neutron shells, the r-process, in neutron-rich predecessor nuclei, creates an abundance of radioactive nuclei about 10 amu below the s-process peaks. These abundance peaks correspond to stable isobars produced from successive beta decays of waiting point nuclei having N = 50, 82, and 126—which are about 10 protons removed from the line of beta stability.
The r-process also occurs in thermonuclear weapons, and was responsible for the initial discovery of neutron-rich almost stable isotopes of actinides like plutonium-244 and the new elements einsteinium and fermium (atomic numbers 99 and 100) in the 1950s. It has been suggested that multiple nuclear explosions would make it possible to reach the island of stability, as the affected nuclides (starting with uranium-238 as seed nuclei) would not have time to beta decay all the way to the quickly spontaneously fissioning nuclides at the line of beta stability before absorbing more neutrons in the next explosion, thus providing a chance to reach neutron-rich superheavy nuclides like copernicium-291 and -293 which may have half-lives of centuries or millennia.
Astrophysical sites
The most probable candidate site for the r-process has long been suggested to be core-collapse supernovae (spectral types Ib, Ic and II), which may provide the necessary physical conditions for the r-process. However, the very low abundance of r-process nuclei in the interstellar gas limits the amount each supernova can have ejected. It requires either that only a small fraction of supernovae eject r-process nuclei to the interstellar medium, or that each supernova ejects only a very small amount of r-process material. The ejected material must be relatively neutron-rich, a condition which has been difficult to achieve in models, so that astrophysicists remain uneasy about their adequacy for successful r-process yields.
In 2017, new astronomical data about the r-process was discovered in data from the merger of two neutron stars. Using the gravitational wave data captured in GW170817 to identify the location of the merger, several teams observed and studied optical data of the merger, finding spectroscopic evidence of r-process material thrown off by the merging neutron stars. The bulk of this material seems to consist of two types: hot blue masses of highly radioactive r-process matter of lower-mass-range heavy nuclei (such as strontium) and cooler red masses of higher-mass-number r-process nuclei rich in actinides (such as uranium, thorium, and californium). When released from the huge internal pressure of the neutron star, these ejecta expand and form seed heavy nuclei that rapidly capture free neutrons, and radiate detected optical light for about a week. Such duration of luminosity would not be possible without heating by internal radioactive decay, which is provided by r-process nuclei near their waiting points. Two distinct mass regions for the r-process yields, one of lighter and one of heavier nuclei, have been known since the first time-dependent calculations of the r-process. Because of these spectroscopic features it has been argued that such nucleosynthesis in the Milky Way has been primarily ejecta from neutron-star mergers rather than from supernovae.
These results offer a new possibility for clarifying six decades of uncertainty over the site of origin of r-process nuclei. Confirming their relevance to the r-process, it is radiogenic power from the radioactive decay of r-process nuclei that maintains the visibility of these spun-off r-process fragments; otherwise they would dim quickly. Such alternative sites were first seriously proposed in 1974 as decompressing neutron star matter. It was proposed that such matter is ejected from neutron stars merging with black holes in compact binaries. In 1989 (and 1999) this scenario was extended to binary neutron star mergers (a binary star system of two neutron stars that collide). After preliminary identification of these sites, the scenario was confirmed in GW170817. Current astrophysical models suggest that a single neutron star merger event may have generated between 3 and 13 Earth masses of gold.
See also
HD 222925
Notes
References
Concepts in astrophysics
Neutron
Nuclear physics
Nucleosynthesis
Supernovae | R-process | [
"Physics",
"Chemistry",
"Astronomy"
] | 3,442 | [
"Supernovae",
"Nuclear fission",
"Concepts in astrophysics",
"Astronomical events",
"Astrophysics",
"Nucleosynthesis",
"Explosions",
"Nuclear physics",
"Nuclear fusion"
] |
352,908 | https://en.wikipedia.org/wiki/S-process | The slow neutron-capture process, or s-process, is a series of reactions in nuclear astrophysics that occur in stars, particularly asymptotic giant branch stars. The s-process is responsible for the creation (nucleosynthesis) of approximately half the atomic nuclei heavier than iron.
In the s-process, a seed nucleus undergoes neutron capture to form an isotope with one higher atomic mass. If the new isotope is stable, a series of increases in mass can occur, but if it is unstable, then beta decay will occur, producing an element of the next higher atomic number. The process is slow (hence the name) in the sense that there is sufficient time for this radioactive decay to occur before another neutron is captured. A series of these reactions produces stable isotopes by moving along the valley of beta-decay stable isobars in the table of nuclides.
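The capture-then-decay bookkeeping described above can be sketched as a short walk on the chart of nuclides. The snippet below is only an illustration of the control flow, not a physical model; the small stability table lists the stable isotopes on the first few steps above 56Fe, and each unstable product is assumed to undergo a single β− decay before the next capture.

```python
# Toy s-process walk: capture one neutron at a time; if the product is unstable,
# let it beta-decay (neutron -> proton) before the next neutron arrives.
TOY_STABLE = {(26, 30), (26, 31), (26, 32),   # 56Fe, 57Fe, 58Fe
              (27, 32),                        # 59Co
              (28, 32), (28, 33), (28, 34)}    # 60Ni, 61Ni, 62Ni

def s_process_step(Z: int, N: int) -> tuple[int, int]:
    N += 1                       # slow neutron capture: A rises by one
    if (Z, N) not in TOY_STABLE:
        Z, N = Z + 1, N - 1      # beta-minus decay: a neutron becomes a proton
    return Z, N

Z, N = 26, 30                    # start from an iron seed nucleus (56Fe)
for _ in range(5):
    Z, N = s_process_step(Z, N)
    print(f"Z={Z}, N={N}, A={Z + N}")
```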
A range of elements and isotopes can be produced by the s-process, because of the intervention of alpha decay steps along the reaction chain. The relative abundances of elements and isotopes produced depends on the source of the neutrons and how their flux changes over time. Each branch of the s-process reaction chain eventually terminates at a cycle involving lead, bismuth, and polonium.
The s-process contrasts with the r-process, in which successive neutron captures are rapid: they happen more quickly than the beta decay can occur. The r-process dominates in environments with higher fluxes of free neutrons; it produces heavier elements and more neutron-rich isotopes than the s-process. Together the two processes account for most of the relative abundance of chemical elements heavier than iron.
History
The s-process was seen to be needed from the relative abundances of isotopes of heavy elements and from a newly published table of abundances by Hans Suess and Harold Urey in 1956. Among other things, these data showed abundance peaks for strontium, barium, and lead, which, according to quantum mechanics and the nuclear shell model, are particularly stable nuclei, much like the noble gases are chemically inert. This implied that some abundant nuclei must be created by slow neutron capture, and it was only a matter of determining how other nuclei could be accounted for by such a process. A table apportioning the heavy isotopes between s-process and r-process was published in the famous B2FH review paper in 1957. There it was also argued that the s-process occurs in red giant stars. In a particularly illustrative case, the element technetium, whose longest half-life is 4.2 million years, had been discovered in S-, M-, and N-type stars in 1952 by Paul W. Merrill. Since these stars were thought to be billions of years old, the presence of technetium in their outer atmospheres was taken as evidence of its recent creation there, probably unconnected with the nuclear fusion in the deep interior of the star that provides its power.
A calculable model for creating the heavy isotopes from iron seed nuclei in a time-dependent manner was not provided until 1961. That work showed that the large overabundances of barium observed by astronomers in certain red-giant stars could be created from iron seed nuclei if the total neutron flux (number of neutrons per unit area) was appropriate. It also showed that no one single value for neutron flux could account for the observed s-process abundances, but that a wide range is required. The numbers of iron seed nuclei that were exposed to a given flux must decrease as the flux becomes stronger. This work also showed that the curve of the product of neutron-capture cross section times abundance is not a smoothly falling curve, as B2FH had sketched, but rather has a ledge-precipice structure. A series of papers in the 1970s by Donald D. Clayton utilizing an exponentially declining neutron flux as a function of the number of iron seeds exposed became the standard model of the s-process and remained so until the details of AGB-star nucleosynthesis became sufficiently advanced that they became a standard model for s-process element formation based on stellar structure models. Important series of measurements of neutron-capture cross sections were reported from Oak Ridge National Lab in 1965 and by Karlsruhe Nuclear Physics Center in 1982 and subsequently; these placed the s-process on the firm quantitative basis that it enjoys today.
The s-process in stars
The s-process is believed to occur mostly in asymptotic giant branch stars, seeded by iron nuclei left by a supernova during a previous generation of stars. In contrast to the r-process which is believed to occur over time scales of seconds in explosive environments, the s-process is believed to occur over time scales of thousands of years, passing decades between neutron captures. The extent to which the s-process moves up the elements in the chart of isotopes to higher mass numbers is essentially determined by the degree to which the star in question is able to produce neutrons. The quantitative yield is also proportional to the amount of iron in the star's initial abundance distribution. Iron is the "starting material" (or seed) for this neutron capture-beta minus decay sequence of synthesizing new elements.
The main neutron source reactions are:
13C + 4He → 16O + n
22Ne + 4He → 25Mg + n
One distinguishes the main and the weak s-process component. The main component produces heavy elements beyond Sr and Y, and up to Pb in the lowest metallicity stars. The production sites of the main component are low-mass asymptotic giant branch stars. The main component relies on the 13C neutron source above. The weak component of the s-process, on the other hand, synthesizes s-process isotopes of elements from iron group seed nuclei to 58Fe on up to Sr and Y, and takes place at the end of helium- and carbon-burning in massive stars. It employs primarily the 22Ne neutron source. These stars will become supernovae at their demise and spew those s-process isotopes into interstellar gas.
The s-process is sometimes approximated over a small mass region using the so-called "local approximation", by which the ratio of abundances is inversely proportional to the ratio of neutron-capture cross-sections for nearby isotopes on the s-process path. This approximation is – as the name indicates – only valid locally, meaning for isotopes of nearby mass numbers, but it is invalid at magic numbers where the ledge-precipice structure dominates.
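Written out, the local approximation says that the product of the neutron-capture cross section and the s-process abundance is roughly constant between neighbouring isotopes on the path, so the abundance ratio is the inverse of the cross-section ratio. A minimal statement of this, added here for clarity, using σ for the Maxwellian-averaged capture cross section and N_s for the s-process abundance:

```latex
\[
  \sigma(A)\,N_s(A) \;\approx\; \sigma(A+1)\,N_s(A+1)
  \quad\Longrightarrow\quad
  \frac{N_s(A)}{N_s(A+1)} \;\approx\; \frac{\sigma(A+1)}{\sigma(A)}
\]
% valid only away from magic neutron numbers, where the ledge-precipice
% structure of the sigma*N curve breaks the approximation
```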
Because of the relatively low neutron fluxes expected to occur during the s-process (on the order of 10⁵ to 10¹¹ neutrons per cm² per second), this process does not have the ability to produce any of the heavy radioactive isotopes such as thorium or uranium. The cycle that terminates the s-process is:
209Bi captures a neutron, producing 210Bi, which decays to 210Po by β− decay. 210Po in turn decays to 206Pb by α decay:

209Bi + n → 210Bi + γ
210Bi → 210Po + e− + ν̄e
210Po → 206Pb + 4He
206Pb then captures three neutrons, producing 209Pb, which decays to 209Bi by β− decay, restarting the cycle:

206Pb + 3 n → 209Pb
209Pb → 209Bi + e− + ν̄e
The net result of this cycle therefore is that 4 neutrons are converted into one alpha particle, two electrons, two anti-electron neutrinos and gamma radiation:
209Bi + 4 n → 209Bi + 4He + 2 e− + 2 ν̄e + γ
The process thus terminates in bismuth, the heaviest "stable" element, and polonium, the first non-primordial element after bismuth. Bismuth is actually slightly radioactive, but with a half-life so long—a billion times the present age of the universe—that it is effectively stable over the lifetime of any existing star. Polonium-210, however, decays with a half-life of 138 days to stable lead-206.
The s-process measured in stardust
Stardust is one component of cosmic dust. Stardust is individual solid grains that condensed during mass loss from various long-dead stars. Stardust existed throughout interstellar gas before the birth of the Solar System and was trapped in meteorites when they assembled from interstellar matter contained in the planetary accretion disk of the early Solar System. Today they are found in meteorites, where they have been preserved. Meteoriticists habitually refer to them as presolar grains. The s-process enriched grains are mostly silicon carbide (SiC). The origin of these grains is demonstrated by laboratory measurements of extremely unusual isotopic abundance ratios within the grain. The first experimental detection of s-process xenon isotopes was made in 1978, confirming earlier predictions that s-process isotopes would be enriched, nearly pure, in stardust from red giant stars. These discoveries launched new insight into astrophysics and into the origin of meteorites in the Solar System. Silicon carbide (SiC) grains condense in the atmospheres of AGB stars and thus trap isotopic abundance ratios as they existed in that star. Because the AGB stars are the main site of the s-process in the galaxy, the heavy elements in the SiC grains contain almost pure s-process isotopes in elements heavier than iron. This fact has been demonstrated repeatedly by sputtering-ion mass spectrometer studies of these stardust presolar grains. Several surprising results have shown that within them the ratio of s-process and r-process abundances is somewhat different from that which was previously assumed. It has also been shown with trapped isotopes of krypton and xenon that the s-process abundances in the AGB-star atmospheres changed with time or from star to star, presumably with the strength of neutron flux in that star or perhaps the temperature. This is a frontier of s-process studies in the 2000s.
References
Nuclear physics
Neutron
Astrophysics
Nucleosynthesis | S-process | [
"Physics",
"Chemistry",
"Astronomy"
] | 2,259 | [
"Nuclear fission",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Nuclear fusion",
"Astronomical sub-disciplines"
] |
352,912 | https://en.wikipedia.org/wiki/P-process | The term p-process (p for proton) is used in two ways in the scientific literature concerning the astrophysical origin of the elements (nucleosynthesis). Originally it referred to a proton capture process which was proposed to be the source of certain, naturally occurring, neutron-deficient isotopes of the elements from selenium to mercury. These nuclides are called p-nuclei and their origin is still not completely understood. Although it was shown that the originally suggested process cannot produce the p-nuclei, later on the term p-process was sometimes used to generally refer to any nucleosynthesis process supposed to be responsible for the p-nuclei.
Often, the two meanings are confused. Recent scientific literature therefore suggests using the term p-process only for the actual proton capture process, as is customary with other nucleosynthesis processes in astrophysics.
The proton capture p-process
Proton-rich nuclides can be produced by sequentially adding one or more protons to an atomic nucleus. Such a nuclear reaction of type (p,γ) is called a proton capture reaction. By adding a proton to a nucleus, the element is changed because the chemical element is defined by the proton number of a nucleus. At the same time the ratio of protons to neutrons is changed, resulting in a more neutron-deficient isotope of the next element. This led to the original idea for the production of p-nuclei: free protons (the nuclei of hydrogen atoms), which are present in stellar plasmas, should be captured on heavy nuclei (seed nuclei) also already present in the stellar plasma (previously produced in the s-process and/or r-process).
Such proton captures on stable (or nearly stable) nuclides, however, are not very efficient in producing p-nuclei, especially the heavier ones, because the electric charge increases with each added proton, leading to an increased repulsion of the next proton to be added, according to Coulomb's law. In the context of nuclear reactions this is called a Coulomb barrier. The higher the Coulomb barrier, the more kinetic energy a proton requires to get close to a nucleus and be captured by it. The average energy of the available protons is given by the temperature of the stellar plasma. Even if this temperature could be increased arbitrarily (which is not the case in stellar environments), protons would be removed faster from a nucleus by photodisintegration than they could be captured at high temperature. A possible alternative would be to have a very large number of protons available to increase the effective number of proton captures per second without having to raise the temperature too much. Such conditions, however, are not found in core-collapse supernovae, which were supposed to be the site of the p-process.
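For a rough sense of scale, the Coulomb barrier for a proton touching the surface of a heavy nucleus can be compared with the thermal energy of plasma protons. The sketch below is an added, order-of-magnitude illustration: it uses the standard touching-sphere estimate with textbook constants, and the choice of lead-208 as the target and ~3 GK as the temperature are assumptions made only for illustration.

```python
# Estimate the Coulomb barrier for a proton approaching a heavy nucleus and
# compare it with the thermal energy kT of protons in a hot stellar plasma.
E2_MEV_FM = 1.44                 # e^2 / (4*pi*eps0) in MeV*fm
R0_FM = 1.2                      # nuclear radius parameter in fm (R ~ r0 * A^(1/3))
KT_MEV_PER_GK = 0.0862           # Boltzmann constant in MeV per 10^9 K

def coulomb_barrier_mev(Z_target: int, A_target: int) -> float:
    """Barrier height when a proton touches the surface of a (Z, A) nucleus."""
    radius_sum_fm = R0_FM * (A_target ** (1 / 3) + 1.0)   # target radius + proton radius
    return Z_target * E2_MEV_FM / radius_sum_fm

barrier = coulomb_barrier_mev(82, 208)       # proton onto lead-208 (illustrative target)
thermal = 3.0 * KT_MEV_PER_GK                # kT at an assumed ~3 GK
print(f"Coulomb barrier ~ {barrier:.1f} MeV vs thermal energy ~ {thermal:.2f} MeV")
```

The barrier comes out at tens of MeV while the thermal energy is a fraction of an MeV, which is the quantitative content of the statement above that proton captures on heavy nuclei are inefficient at stellar temperatures.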
Proton captures at extremely high proton densities are called rapid proton capture processes. They are distinct from the p-process not only by the required high proton density but also by the fact that very short-lived radionuclides are involved and the reaction path is located close to the proton drip line. Rapid proton capture processes are the rp-process, the νp-process, and the pn-process.
History
The term p-process was originally proposed in the famous B2FH paper in 1957. The authors assumed that this process was solely responsible for the p-nuclei and proposed that it occurs in the hydrogen-shell (see also stellar evolution) of a star exploding as a type II supernova. It was shown later that the required conditions are not found in such supernovae.
At the same time as B2FH, Alastair Cameron independently realized the necessity to add another nucleosynthesis process to neutron capture nucleosynthesis but simply mentioned proton captures without assigning a special name to the process. He also thought about alternatives, for example photodisintegration (called the γ-process today) or a combination of p-process and photodisintegration.
See also
p-nuclei
Nucleosynthesis
rp-process
References
Nuclear physics
Nucleosynthesis
Supernovae
Proton
Concepts in stellar astronomy | P-process | [
"Physics",
"Chemistry",
"Astronomy"
] | 856 | [
"Nuclear fission",
"Supernovae",
"Concepts in astrophysics",
"Astronomical events",
"Astrophysics",
"Nucleosynthesis",
"Explosions",
"Nuclear physics",
"Concepts in stellar astronomy",
"Nuclear fusion"
] |
352,950 | https://en.wikipedia.org/wiki/List%20of%20general%20topology%20topics | This is a list of general topology topics.
Basic concepts
Topological space
Topological property
Open set, closed set
Clopen set
Closure (topology)
Boundary (topology)
Dense (topology)
G-delta set, F-sigma set
Closeness (mathematics)
Neighbourhood (mathematics)
Continuity (topology)
Homeomorphism
Local homeomorphism
Open and closed maps
Germ (mathematics)
Base (topology), subbase
Open cover
Covering space
Atlas (topology)
Limits
Limit point
Net (topology)
Filter (topology)
Ultrafilter
Topological properties
Baire category theorem
Nowhere dense
Baire space
Banach–Mazur game
Meagre set
Comeagre set
Compactness and countability
Compact space
Relatively compact subspace
Heine–Borel theorem
Tychonoff's theorem
Finite intersection property
Compactification
Measure of non-compactness
Paracompact space
Locally compact space
Compactly generated space
Axiom of countability
Sequential space
First-countable space
Second-countable space
Separable space
Lindelöf space
Sigma-compact space
Connectedness
Connected space
Separation axioms
T0 space
T1 space
Hausdorff space
Completely Hausdorff space
Regular space
Tychonoff space
Normal space
Urysohn's lemma
Tietze extension theorem
Paracompact
Separated sets
Topological constructions
Direct sum and the dual construction product
Subspace and the dual construction quotient
Topological tensor product
Examples
Discrete space
Locally constant function
Trivial topology
Cofinite topology
Finer topology
Product topology
Restricted product
Quotient space
Unit interval
Continuum (topology)
Extended real number line
Long line (topology)
Sierpinski space
Cantor set, Cantor space, Cantor cube
Space-filling curve
Topologist's sine curve
Uniform norm
Weak topology
Strong topology
Hilbert cube
Lower limit topology
Sorgenfrey plane
Real tree
Compact-open topology
Zariski topology
Kuratowski closure axioms
Unicoherent
Solenoid (mathematics)
Uniform spaces
Uniform continuity
Lipschitz continuity
Uniform isomorphism
Uniform property
Uniformly connected space
Metric spaces
Metric topology
Manhattan distance
Ultrametric space
P-adic numbers, p-adic analysis
Open ball
Bounded subset
Pointwise convergence
Metrization theorems
Complete space
Cauchy sequence
Banach fixed-point theorem
Polish space
Hausdorff distance
Intrinsic metric
Category of metric spaces
Topology and order theory
Stone duality
Stone's representation theorem for Boolean algebras
Specialization (pre)order
Sober space
Spectral space
Alexandrov topology
Upper topology
Scott topology
Scott continuity
Lawson topology
Descriptive set theory
Polish space
Cantor space
Dimension theory
Inductive dimension
Lebesgue covering dimension
Lebesgue's number lemma
Combinatorial topology
Polytope
Simplex
Simplicial complex
CW complex
Manifold
Triangulation
Barycentric subdivision
Sperner's lemma
Simplicial approximation theorem
Nerve of an open covering
Foundations of algebraic topology
Simply connected
Semi-locally simply connected
Path (topology)
Homotopy
Homotopy lifting property
Pointed space
Wedge sum
Smash product
Cone (topology)
Adjunction space
Topology and algebra
Topological algebra
Topological group
Topological ring
Topological vector space
Topological module
Topological abelian group
Properly discontinuous
Sheaf space
See also
Topology glossary
List of topology topics
List of geometric topology topics
List of algebraic topology topics
Publications in topology
Mathematics-related lists
Outlines of mathematics and logic
Outlines | List of general topology topics | [
"Mathematics"
] | 648 | [
"General topology",
"Topology",
"nan"
] |
352,960 | https://en.wikipedia.org/wiki/Hysterectomy | Hysterectomy is the surgical removal of the uterus and cervix. Supracervical hysterectomy refers to removal of the uterus while the cervix is spared. These procedures may also involve removal of the ovaries (oophorectomy), fallopian tubes (salpingectomy), and other surrounding structures. The terms “partial” and “total” hysterectomy are lay terms that incorrectly describe the addition or omission of oophorectomy at the time of hysterectomy. These procedures are usually performed by a gynecologist. Removal of the uterus renders the patient unable to bear children (as does removal of ovaries and fallopian tubes) and has surgical risks as well as long-term effects, so the surgery is normally recommended only when other treatment options are not available or have failed. It is the second most commonly performed gynecological surgical procedure, after cesarean section, in the United States. Nearly 68 percent were performed for conditions such as endometriosis, irregular bleeding, and uterine fibroids. It is expected that the frequency of hysterectomies for non-malignant indications will continue to fall given the development of alternative treatment options.
Medical uses
Hysterectomy is a major surgical procedure that has risks and benefits. It affects the hormonal balance and overall health of patients. Because of this, hysterectomy is normally recommended as a last resort after pharmaceutical or other surgical options have been exhausted to remedy certain intractable and severe uterine/reproductive system conditions. There may be other reasons for a hysterectomy to be requested. Such conditions and/or indications include, but are not limited to:
Endometriosis: growth of the uterine lining outside the uterine cavity. This inappropriate tissue growth can lead to pain and bleeding.
Adenomyosis: a form of endometriosis, where the uterine lining has grown into and sometimes through the uterine wall musculature. This can thicken the uterine walls and also contribute to pain and bleeding.
Heavy menstrual bleeding: irregular or excessive menstrual bleeding for greater than a week. It can disturb regular quality of life and may be indicative of a more serious condition.
Uterine fibroids: benign growths on the uterus wall. These muscular noncancerous tumors can grow in single form or in clusters and can cause extreme pain and bleeding.
Uterine prolapse: when the uterus sags down due to weakened or stretched pelvic floor muscles potentially causing the uterus to protrude out of the vagina in more severe cases.
Reproductive system cancer prevention: especially if there is a strong family history of reproductive system cancers (especially breast cancer in conjunction with BRCA1 or BRCA2 mutation), or as part of recovery from such cancers.
Gynecologic cancer: depending on the type of hysterectomy, can aid in treatment of cancer or precancer of the endometrium, cervix, or uterus. In order to protect against or treat cancer of the ovaries, would need an oophorectomy.
Transgender (trans) male affirmation: aids in gender dysphoria, prevention of future gynecologic problems, and transition to obtaining new legal gender documentation.
Severe developmental disabilities: this treatment is controversial at best. In the United States, specific cases of sterilization due to developmental disabilities have been found by state-level Supreme Courts to violate the patient's constitutional and common-law rights.
Postpartum: to remove either a severe case of placenta praevia (a placenta that has either formed over or inside the birth canal) or placenta percreta (a placenta that has grown into and through the wall of the uterus to attach itself to other organs), as well as a last resort in case of excessive obstetrical haemorrhage.
Chronic pelvic pain: should try to obtain pain etiology, although may have no known cause.
PMS and menstrual pain and other psychic and physical conditions caused by menstrual period and causing suffering and diminishing life quality.
Risks and adverse effects
In 1995, the short-term mortality (within 40 days of surgery) was reported at 0.38 cases per 1000 when performed for benign causes. Risk factors for surgical complications were the presence of fibroids, younger age (a more vascular pelvis with higher bleeding risk, and a larger uterus), dysfunctional uterine bleeding, and parity.
The mortality rate is several times higher when performed in patients who are pregnant, have cancer or other complications.
Long-term effect on all-cause mortality is relatively small. Women under the age of 45 years have a significantly increased long-term mortality that is believed to be caused by the hormonal side effects of hysterectomy and prophylactic oophorectomy. This effect is not limited to pre-menopausal women; even women who have already entered menopause were shown to have experienced a decrease in long-term survivability post-oophorectomy.
Approximately 35% of women after hysterectomy undergo another related surgery within 2 years.
Ureteral injury is not uncommon and occurs in 0.2 per 1,000 cases of vaginal hysterectomy and 1.3 per 1,000 cases of abdominal hysterectomy. The injury usually occurs in the distal ureter close to the infundibulopelvic ligament or as a ureter crosses below the uterine artery, often from blind clamping and ligature placement to control hemorrhage.
Recovery
Hospital stay is 3 to 5 days or more for the abdominal procedure and between 1 and 2 days (but possibly longer) for vaginal or laparoscopically assisted vaginal procedures. After the procedure, the American College of Obstetricians and Gynecologists recommends not inserting anything into the vagina for the first 6 weeks (including inserting tampons or having sex).
Unintended oophorectomy and premature ovarian failure
Removal of one or both ovaries is performed in a substantial number of hysterectomies that were intended to be ovary sparing.
The average onset age of menopause after hysterectomy with ovarian conservation is 3.7 years earlier than average. This has been suggested to be due to the disruption of blood supply to the ovaries after a hysterectomy or due to missing endocrine feedback of the uterus. The function of the remaining ovaries is significantly affected in about 40% of people; some of them even require hormone replacement therapy. Surprisingly, a similar and only slightly weaker effect has been observed for endometrial ablation, which is often considered as an alternative to hysterectomy.
A substantial number of women develop benign ovarian cysts after a hysterectomy.
Effects on sexual life and pelvic pain
After hysterectomy for benign indications the majority of patients report improvement in sexual life and pelvic pain. A smaller share of patients report worsening of sexual life and other problems. The picture is significantly different for hysterectomy performed for malignant reasons; the procedure is often more radical with substantial side effects. A proportion of patients who undergo a hysterectomy for chronic pelvic pain continue to have pelvic pain after a hysterectomy and develop dyspareunia (painful sexual intercourse).
Premature menopause and its effects
Estrogen levels fall sharply when the ovaries are removed, removing the protective effects of estrogen on the cardiovascular and skeletal systems. This condition is often referred to as "surgical menopause", although it is substantially different from a naturally occurring menopausal state; the former is a sudden hormonal shock to the body that causes rapid onset of menopausal symptoms such as hot flashes, while the latter is a gradually occurring decrease of hormonal levels over a period of years with uterus intact and ovaries able to produce hormones even after the cessation of menstrual periods.
One study showed that risk of subsequent cardiovascular disease is substantially increased for women who had hysterectomy at age 50 or younger. No association was found for women undergoing the procedure after age 50. The risk is higher when ovaries are removed but still noticeable even when ovaries are preserved.
Several other studies have found that osteoporosis (decrease in bone density) and increased risk of bone fractures are associated with hysterectomies. This has been attributed to the modulatory effect of estrogen on calcium metabolism and the drop in serum estrogen levels after menopause can cause excessive loss of calcium leading to bone wasting.
Hysterectomies have also been linked with higher rates of heart disease and weakened bones. Those who have undergone a hysterectomy with both ovaries removed typically have reduced testosterone levels as compared to those with ovaries left intact. Reduced levels of testosterone in women are predictive of height loss, which may occur as a result of reduced bone density, while increased testosterone levels in women are associated with a greater sense of sexual desire.
Oophorectomy before the age of 45 is associated with a fivefold mortality from neurologic and mental disorders.
Urinary incontinence and vaginal prolapse
Urinary incontinence and vaginal prolapse are well known adverse effects that develop with high frequency a very long time after the surgery. Typically, those complications develop 10–20 years after the surgery. For this reason exact numbers are not known, and risk factors are poorly understood. It is also unknown if the choice of surgical technique has any effect. It has been assessed that the risk for urinary incontinence is approximately doubled within 20 years after hysterectomy. One long-term study found a 2.4 fold increased risk for surgery to correct urinary stress incontinence following hysterectomy.
The risk for vaginal prolapse depends on factors such as number of vaginal deliveries, the difficulty of those deliveries, and the type of labor. Overall incidence is approximately doubled after hysterectomy.
Adhesion formation and bowel obstruction
The formation of postoperative adhesions is a particular risk after hysterectomy because of the extent of dissection involved as well as the fact the hysterectomy wound is in the most gravity-dependent part of the pelvis into which a loop of bowel may easily fall. In one review, incidence of small bowel obstruction due to intestinal adhesion was found to be 15.6% in non-laparoscopic total abdominal hysterectomies vs. 0.0% in laparoscopic hysterectomies.
Wound infection
Wound infection occurs in approximately 3% of cases of abdominal hysterectomy. The risk is increased by obesity, diabetes, immunodeficiency disorder, use of systemic corticosteroids, smoking, wound hematoma, and preexisting infection such as chorioamnionitis and pelvic inflammatory disease. Such wound infections mainly take the form of either incisional abscess or wound cellulitis. Typically, both confer erythema, but only an incisional abscess confers purulent drainage. The recommended treatment of an incisional abscess after hysterectomy is by incision and drainage, and then coverage by a thin layer of gauze followed by sterile dressing. The dressing should be changed and the wound irrigated with normal saline at least twice each day. In addition, it is recommended to administer an antibiotic active against staphylococci and streptococci, preferably vancomycin when there is a risk of MRSA. The wound can be allowed to close by secondary intention. Alternatively, if the infection is cleared and healthy granulation tissue is evident at the base of the wound, the edges of the incision may be reapproximated, such as by using butterfly stitches, staples or sutures. Sexual intercourse remains possible after hysterectomy. Reconstructive surgery remains an option for women who have experienced benign and malignant conditions.
Other rare problems
Hysterectomy may cause an increased risk of the relatively rare renal cell carcinoma. The increased risk is particularly pronounced for young women; the risk was lower after vaginally performed hysterectomies. Hormonal effects or injury of the ureter were considered as possible explanations. In some cases the renal cell carcinoma may be a manifestation of an undiagnosed hereditary leiomyomatosis and renal cell cancer syndrome.
Removal of the uterus without removing the ovaries can produce a situation that on rare occasions can result in ectopic pregnancy due to an undetected fertilization that had yet to descend into the uterus before surgery. Two cases have been identified and profiled in an issue of the Blackwell Journal of Obstetrics and Gynecology; over 20 other cases have been discussed in additional medical literature. On very rare occasions, sexual intercourse after hysterectomy may cause a transvaginal evisceration of the small bowel. The vaginal cuff is the uppermost region of the vagina that has been sutured closed. A rare complication, it can dehisce and allow the evisceration of the small bowel into the vagina.
Alternatives
Depending on the indication there are alternatives to hysterectomy:
Heavy bleeding
Levonorgestrel intrauterine devices are highly effective at controlling dysfunctional uterine bleeding (DUB) or menorrhagia and should be considered before any surgery.
Menorrhagia (heavy or abnormal menstrual bleeding) may also be treated with the less invasive endometrial ablation which is an outpatient procedure in which the lining of the uterus is destroyed with heat, mechanically or by radio frequency ablation. Endometrial ablation greatly reduces or eliminates monthly bleeding in ninety percent of patients with DUB. It is not effective for patients with very thick uterine lining or uterine fibroids.
Uterine fibroids
Levonorgestrel intrauterine devices are highly effective in limiting menstrual blood flow and improving other symptoms. Side effects are typically very moderate because the levonorgestrel (a progestin) is released in low concentration locally. There is now substantial evidence that Levongestrel-IUDs provide good symptomatic relief for women with fibroids.
Uterine fibroids may be removed and the uterus reconstructed in a procedure called "myomectomy". A myomectomy may be performed through an open incision, laparoscopically, or through the vagina (hysteroscopy).
Uterine artery embolization (UAE) is a minimally invasive procedure for treatment of uterine fibroids. Under local anesthesia a catheter is introduced into the femoral artery at the groin and advanced under radiographic control into the uterine artery. A mass of microspheres or polyvinyl alcohol (PVA) material (an embolus) is injected into the uterine arteries in order to block the flow of blood through those vessels. The restriction in blood supply usually results in significant reduction of fibroids and improvement of heavy bleeding tendency. The 2012 Cochrane review comparing hysterectomy and UAE did not find any major advantage for either procedure. While UAE is associated with shorter hospital stay and a more rapid return to normal daily activities, it was also associated with a higher risk for minor complications later on. There were no differences between UAE and hysterectomy with regards to major complications.
Uterine fibroids can be removed with a non-invasive procedure called Magnetic Resonance guided Focused Ultrasound (MRgFUS).
Uterine prolapse
Prolapse may also be corrected surgically without removal of the uterus. There are several strategies that can be utilized to help strengthen pelvic floor muscles and prevent the worsening of prolapse. These include, but are not limited to, use of "kegel exercises", vaginal pessary, constipation relief, weight management, and care when lifting heavy objects.
Types
Hysterectomy, in the literal sense of the word, means merely removal of the uterus. However other organs such as ovaries, fallopian tubes, and the cervix are very frequently removed as part of the surgery.
Radical hysterectomy: complete removal of the uterus, cervix, upper vagina, and parametrium. Indicated for cancer. Lymph nodes, ovaries, and fallopian tubes are also usually removed in this situation.
Total hysterectomy: complete removal of the uterus and cervix, with or without oophorectomy.
Subtotal hysterectomy: removal of the uterus, leaving the cervix in situ.
Subtotal (supracervical) hysterectomy was originally proposed with the expectation that it might improve sexual functioning after hysterectomy; it had been postulated that removing the cervix causes excessive neurologic and anatomic disruption, leading to vaginal shortening, vaginal vault prolapse, and vaginal cuff granulations. These theoretical advantages were not confirmed in practice, but other advantages over total hysterectomy emerged. The principal disadvantage is that the risk of cervical cancer is not eliminated and women may continue cyclical bleeding (although substantially less than before the surgery).
These issues were addressed in a systematic review of total versus supracervical hysterectomy for benign gynecological conditions, which reported the following findings:
There was no difference in the rates of incontinence, constipation, measures of sexual function, or alleviation of pre-surgery symptoms.
Length of surgery and amount of blood lost during surgery were significantly reduced during supracervical hysterectomy compared to total hysterectomy, but there was no difference in post-operative transfusion rates.
Febrile morbidity was less likely and ongoing cyclic vaginal bleeding one year after surgery was more likely after supracervical hysterectomy.
There was no difference in the rates of other complications, recovery from surgery, or readmission rates.
In the short-term, randomized trials have shown that cervical preservation or removal does not affect the rate of subsequent pelvic organ prolapse.
Supracervical hysterectomy does not eliminate the possibility of having cervical cancer since the cervix itself is left intact and may be contraindicated in women with increased risk of this cancer; regular pap smears to check for cervical dysplasia or cancer are still needed.
Technique
Hysterectomy can be performed in different ways. The oldest known technique is vaginal hysterectomy. The first planned hysterectomy was performed by Konrad Johann Martin Langenbeck, Surgeon General of the Hanoverian army, although there are records of vaginal hysterectomy for prolapse going back as far as 50 BC.
The first abdominal hysterectomy recorded was by Ephraim McDowell. In 1809, he performed the procedure on a mother of five, for a large ovarian mass, on her kitchen table.
In modern medicine today, laparoscopic vaginal (with additional instruments passing through ports in small abdominal incisions, close or in the navel) and total laparoscopic techniques have been developed.
Abdominal hysterectomy
Most hysterectomies in the United States are done via laparotomy (abdominal incision, not to be confused with laparoscopy). A transverse (Pfannenstiel) incision is made through the abdominal wall, usually above the pubic bone, as close to the upper hair line of the individual's lower pelvis as possible, similar to the incision made for a caesarean section. This technique allows physicians the greatest access to the reproductive structures and is normally done for removal of the entire reproductive complex. The recovery time for an open hysterectomy is 4–6 weeks and sometimes longer due to the need to cut through the abdominal wall. Historically, the biggest problem with this technique was infections, but infection rates are well-controlled and not a major concern in modern medical practice. An open hysterectomy provides the most effective way to explore the abdominal cavity and perform complicated surgeries. Before the refinement of the vaginal and laparoscopic vaginal techniques, it was also the only possibility to achieve subtotal hysterectomy; meanwhile, the vaginal route is the preferable technique in most circumstances.
Vaginal hysterectomy
Vaginal hysterectomy is performed entirely through the vaginal canal and has clear advantages over abdominal surgery such as fewer complications, shorter hospital stays and shorter healing time. Abdominal hysterectomy, the most common method, is used in cases such as after caesarean delivery, when the indication is cancer, when complications are expected, or surgical exploration is required.
Laparoscopic-assisted vaginal hysterectomy
With the development of laparoscopic techniques in the 1970s and 1980s, the "laparoscopic-assisted vaginal hysterectomy" (LAVH) has gained great popularity among gynecologists because compared with the abdominal procedure it is less invasive and the post-operative recovery is much faster. It also allows better exploration and slightly more complicated surgeries than the vaginal procedure. LAVH begins with laparoscopy and is completed such that the final removal of the uterus (with or without removing the ovaries) is via the vaginal canal. Thus, LAVH is also a total hysterectomy; the cervix is removed with the uterus. If the cervix is removed along with the uterus, the upper portion of the vagina is sutured together and called the vaginal cuff.
Laparoscopic-assisted supracervical hysterectomy
The "laparoscopic-assisted supracervical hysterectomy" (LASH) was later developed to remove the uterus without removing the cervix using a morcellator which cuts the uterus into small pieces that can be removed from the abdominal cavity via the laparoscopic ports.
Total laparoscopic hysterectomy
Total laparoscopic hysterectomy (TLH) was developed in the early 90s by Prabhat K. Ahluwalia in Upstate New York. TLH is performed solely through the laparoscopes in the abdomen, starting at the top of the uterus, typically with a uterine manipulator. The entire uterus is disconnected from its attachments using long thin instruments through the "ports". Then all tissue to be removed is passed through the small abdominal incisions.
Other techniques
Supracervical (subtotal) laparoscopic hysterectomy (LSH) is performed similar to the total laparoscopic surgery but the uterus is amputated between the cervix and fundus.
Dual-port laparoscopy is a form of laparoscopic surgery using two 5 mm midline incisions: the uterus is detached through the two ports and removed through the vagina.
"Robotic hysterectomy" is a variant of laparoscopic surgery using special remotely controlled instruments that allow the surgeon finer control as well as three-dimensional magnified vision.
Comparison of techniques
Patient characteristics such as the reason for needing a hysterectomy, uterine size, descent of the uterus, presence of diseased tissues surrounding the uterus, previous surgery in the pelvic region, obesity, history of pregnancy, the possibility of endometriosis, or the need for an oophorectomy, will influence a surgeon's surgical approach when performing a hysterectomy.
Vaginal hysterectomy is recommended over other variants where possible for women with benign diseases. Vaginal hysterectomy has been shown to be superior to LAVH and some types of laparoscopic surgery, causing fewer short- and long-term complications, having a more favorable effect on sexual experience, and offering shorter recovery times and lower costs.
Laparoscopic surgery offers certain advantages when vaginal surgery is not possible but also has the disadvantage of significantly longer time required for the surgery.
In one 2004 study conducted in the UK comparing abdominal (laparotomic) and laparoscopic techniques, laparoscopic surgery was found to cause longer operation time and a higher rate of major complications while offering much quicker healing. In another study conducted in 2014, laparoscopy was found to be "a safe alternative to laparotomy" in patients receiving total hysterectomy for endometrial cancer. Researchers concluded the procedure "offers markedly improved perioperative outcomes with a lower reoperation rate and fewer postoperative complications when the standard of care shifts from open surgery to laparoscopy in a university hospital".
The abdominal technique is very often applied in difficult circumstances or when complications are expected. Given these circumstances, the complication rate and time required for surgery compare very favorably with other techniques; however, the time required for healing is much longer.
Hysterectomy by abdominal laparotomy is correlated with much higher incidence of intestinal adhesions than other techniques.
Time required for completion of surgery in the eVAL trial is reported as follows:
abdominal 55.2 minutes average, range 19–155
vaginal 46.6 minutes average, range 14–168
laparoscopic (all variants) 82.5 minutes average, range 10–325 (combined data from both trial arms)
Morcellation has been widely used especially in laparoscopic techniques and sometimes for the vaginal technique, but now appears to be associated with a considerable risk of spreading benign or malignant tumors. In April 2014, the FDA issued a memo alerting medical practitioners to the risks of power morcellation.
Robotic assisted surgery is presently used in several countries for hysterectomies. Additional research is required to determine the benefits and risks involved, compared to conventional laparoscopic surgery.
A 2014 Cochrane review found that robotic assisted surgery may have a similar complication rate when compared to conventional laparoscopic surgery. In addition, there is evidence to suggest that although the surgery may take longer, robotic assisted surgery may result in shorter hospital stays. More research is necessary to determine if robotic assisted hysterectomies are beneficial for people with cancer.
Previously reported marginal advantages of robotic assisted surgery could not be confirmed; only differences in hospital stay and cost remain statistically significant. In addition, concerns over widespread misleading marketing claims have been raised.
Incidence
Canada
In Canada, the number of hysterectomies between 2008 and 2009 was almost 47,000. The national rate for the same timeline was 338 per 100,000 population, down from 484 per 100,000 in 1997. The reasons for hysterectomies differed depending on whether the woman was living in an urban or rural location. Urban women opted for hysterectomies due to uterine fibroids and rural women had hysterectomies mostly for menstrual disorders.
United States
Hysterectomy is the second most common major surgery among women in the United States (the first is cesarean section). In the 1980s and 1990s, this statistic was the source of concern among some consumer rights groups and puzzlement among the medical community, and brought about informed choice advocacy groups like Hysterectomy Educational Resources and Services (HERS) Foundation, founded by Nora W. Coffey in 1982.
According to the National Center for Health Statistics, of the 617,000 hysterectomies performed in 2004, 73% also involved the surgical removal of the ovaries. There are currently an estimated 22 million women in the United States who have undergone this procedure. Nearly 68 percent were performed for benign conditions such as endometriosis, irregular bleeding and uterine fibroids. The fact that such rates are highest in the industrialized world has led to controversy over hysterectomies being largely performed for unwarranted reasons. More recent data suggest that the number of hysterectomies performed has declined in every state in the United States. From 2010 to 2013, there were 12 percent fewer hysterectomies performed, and the types of hysterectomies were more minimally invasive in nature, reflected by a 17 percent increase in laparoscopic procedures.
United Kingdom
In the UK, 1 in 5 women is likely to have a hysterectomy by the age of 60, and ovaries are removed in about 20% of hysterectomies.
Germany
The number of hysterectomies in Germany has been constant for many years. In 2006, 149,456 hysterectomies were performed; of these, 126,743 (84.8%) successfully benefited the patient without incident. Women between the ages of 40 and 49 accounted for 50 percent of hysterectomies, and those between the ages of 50 and 59 accounted for 20 percent. In 2007, the number of hysterectomies decreased to 138,164. In recent years, laparoscopic or laparoscopically assisted hysterectomy has come to the foreground.
Denmark
In Denmark, the number of hysterectomies from the 1980s to the 1990s decreased by 38 percent. In 1988, there were 173 such surgeries per 100,000 women, and by 1998 this number had been reduced to 107. The proportion of abdominal supracervical hysterectomies in the same time period grew from 7.5 to 41 percent. A total of 67,096 women underwent hysterectomy during these years.
See also
List of surgeries by type
References
External links
Oncolex.org features live footage videos showing radical hysterectomies
Hudson's FTM Resource Guide, "FTM Gender Reassignment Surgery"
Masculinizing surgery
Gynaecology
Gynecological surgery
Reproductive system
Sterilization (medicine)
Surgical oncology
Surgical removal procedures | Hysterectomy | [
"Biology"
] | 6,397 | [
"Behavior",
"Reproductive system",
"Sex",
"Reproduction",
"Organ systems"
] |
352,985 | https://en.wikipedia.org/wiki/Chen%20Jingrun | Chen Jingrun (; 22 May 1933 – 19 March 1996), also known as Jing-Run Chen, was a Chinese mathematician who made significant contributions to number theory, including Chen's theorem and the Chen prime.
Life and career
Chen was the third son in a large family from Fuzhou, Fujian, China. His father was a postal worker. Chen Jingrun graduated from the Mathematics Department of Xiamen University in 1953. His advisor at the Chinese Academy of Sciences was Hua Luogeng.
His work on the twin prime conjecture, Waring's problem, Goldbach's conjecture and Legendre's conjecture led to progress in analytic number theory. In a 1966 paper he proved what is now called Chen's theorem: every sufficiently large even number can be written as the sum of a prime and a number with at most two prime factors (a prime or a semiprime) – e.g., 100 = 23 + 7·11. Despite being persecuted during the Cultural Revolution, he expanded his proof in the 1970s.
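The statement can be checked numerically for small even numbers; the following Python sketch (illustrative only and unrelated to the methods of the proof; the helper names are invented here) searches for such a decomposition by brute force:

def is_prime(n):
    # Trial-division primality test, adequate for small n.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def num_prime_factors(n):
    # Number of prime factors of n, counted with multiplicity.
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

def chen_decomposition(even_n):
    # Return (p, q) with even_n = p + q, p prime and q having
    # one or two prime factors, or None if no such pair is found.
    for p in range(2, even_n - 1):
        q = even_n - p
        if is_prime(p) and 1 <= num_prime_factors(q) <= 2:
            return p, q
    return None

print(chen_decomposition(100))  # finds (3, 97); 23 + 7*11 is another valid decomposition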
After the end of the Cultural Revolution, Xu Chi wrote a biography of Chen entitled Goldbach's Conjecture (). First published in People's Literature in January 1978, it was reprinted on the People's Daily a month later and became a national sensation. Chen became a household name in China and received a sackful of love letters from all over the country within two months.
Chen died of complications of pneumonia on 19 March 1996, at the age of 63 years.
Legacy
The asteroid 7681 Chenjingrun, discovered in 1996, was named after him.
In 1999, China issued an 80-cent postage stamp, titled The Best Result of Goldbach Conjecture, with a silhouette of Chen and the inequality:
Several statues in China have been built in memory of Chen. At Xiamen University, the names of Chen and four other mathematicians — Peter Gustav Lejeune Dirichlet, Matti Jutila, Yuri Linnik, and Pan Chengdong — are inscribed in the marble slab behind Chen's statue (see image).
Works
J.-R. Chen, On the representation of a large even integer as the sum of a prime and a product of at most two primes, Sci. Sinica 16 (1973), 157–176.
Chen, J.R, "On the representation of a large even integer as the sum of a prime and the product of at most two primes". [Chinese] J. Kexue Tongbao 17 (1966), 385–386.
"Fundamental Number Theory"
References
Pan Chengdong and Wang Yuan, Chen Jingrun: a brief outline of his life and works, Acta Math. Sinica (NS) 12 (1996) 225–233.
External links
Chen's home page at the Chinese Institute of Mathematics.
1933 births
1996 deaths
20th-century Chinese mathematicians
Academic staff of Guizhou Nationalities University
Academic staff of Henan University
Academic staff of Huazhong University of Science and Technology
Academic staff of Qingdao University
Academic staff of Xiamen University
Academic staff of Fujian Normal University
Delegates to the 4th National People's Congress
Delegates to the 5th National People's Congress
Delegates to the 6th National People's Congress
Educators from Fujian
Mathematicians from Fujian
Members of the Chinese Academy of Sciences
Number theorists
People from Fuzhou
Xiamen University alumni | Chen Jingrun | [
"Mathematics"
] | 672 | [
"Number theorists",
"Number theory"
] |
352,996 | https://en.wikipedia.org/wiki/Magic%20number%20%28programming%29 | In computer programming, a magic number is any of the following:
A unique value with unexplained meaning or multiple occurrences which could (preferably) be replaced with a named constant
A constant numerical or text value used to identify a file format or protocol
A distinctive unique value that is unlikely to be mistaken for other meanings (e.g., Globally Unique Identifiers)
Unnamed numerical constants
The term magic number or magic constant refers to the anti-pattern of using numbers directly in source code. This has been referred to as breaking one of the oldest rules of programming, dating back to the COBOL, FORTRAN and PL/1 manuals of the 1960s. The use of unnamed magic numbers in code obscures the developers' intent in choosing that number, increases opportunities for subtle errors (e.g. is every digit correct in 3.14159265358979323846, and can it be rounded to 3.14159?) and makes it more difficult for the program to be adapted and extended in the future. Replacing all significant magic numbers with named constants (also called explanatory variables) makes programs easier to read, understand and maintain.
Names chosen to be meaningful in the context of the program can result in code that is more easily understood by a maintainer who is not the original author (or even by the original author after a period of time). An example of an uninformatively named constant is int SIXTEEN = 16, while int NUMBER_OF_BITS = 16 is more descriptive.
The problems associated with magic 'numbers' described above are not limited to numerical types and the term is also applied to other data types where declaring a named constant would be more flexible and communicative. Thus, declaring const string testUserName = "John" is better than several occurrences of the 'magic value' "John" in a test suite.
For example, if it is required to randomly shuffle the values in an array representing a standard pack of playing cards, this pseudocode does the job using the Fisher–Yates shuffle algorithm:
for i from 1 to 52
j := i + randomInt(53 - i) - 1
a.swapEntries(i, j)
where a is an array object, the function randomInt(x) chooses a random integer between 1 and x, inclusive, and swapEntries(i, j) swaps the ith and jth entries in the array. In the preceding example, 52 and 53 are magic numbers, also not clearly related to each other. It is considered better programming style to write the following:
int deckSize := 52
for i from 1 to deckSize
j := i + randomInt(deckSize + 1 - i) - 1
a.swapEntries(i, j)
This is preferable for several reasons:
It is easier to read and understand. A programmer reading the first example might wonder, What does the number 52 mean here? Why 52? The programmer might infer the meaning after reading the code carefully, but it is not obvious. Magic numbers become particularly confusing when the same number is used for different purposes in one section of code.
It is easier to alter the value of the number, as it is not duplicated. Changing the value of a magic number is error-prone, because the same value is often used several times in different places within a program. Also, when two semantically distinct variables or numbers have the same value they may be accidentally both edited together. To modify the first example to shuffle a Tarot deck, which has 78 cards, a programmer might naively replace every instance of 52 in the program with 78. This would cause two problems. First, it would miss the value 53 on the second line of the example, which would cause the algorithm to fail in a subtle way. Second, it would likely replace the characters "52" everywhere, regardless of whether they refer to the deck size or to something else entirely, such as the number of weeks in a Gregorian calendar year, or more insidiously, are part of a number like "1523", all of which would introduce bugs. By contrast, changing the value of the deckSize variable in the second example would be a simple, one-line change.
It encourages and facilitates documentation. The single place where the named variable is declared makes a good place to document what the value means and why it has the value it does. Having the same value in a plethora of places either leads to duplicate comments (and attendant problems when updating some but missing some) or leaves no one place where it's both natural for the author to explain the value and likely the reader shall look for an explanation.
The declarations of "magic number" variables are placed together, usually at the top of a function or file, facilitating their review and change.
It helps detect typos. Using a variable (instead of a literal) takes advantage of a compiler's checking. Accidentally typing "62" instead of "52" would go undetected, whereas typing "dekSize" instead of "deckSize" would result in the compiler's warning that dekSize is undeclared.
It can reduce typing in some IDEs. If an IDE supports code completion, it will fill in most of the variable's name from the first few letters.
It facilitates parameterization. For example, to generalize the above example into a procedure that shuffles a deck of any number of cards, it would be sufficient to turn deckSize into a parameter of that procedure, whereas the first example would require several changes.
function shuffle (int deckSize)
for i from 1 to deckSize
j := i + randomInt(deckSize + 1 - i) - 1
a.swapEntries(i, j)
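A runnable version of the same parameterized idea, sketched here in Python (the names DECK_SIZE and shuffle are illustrative; this is the common downward-iterating form of the Fisher–Yates shuffle rather than a literal transcription of the pseudocode above):

import random

DECK_SIZE = 52  # named constant instead of a magic number

def shuffle(deck):
    # In-place Fisher–Yates shuffle: each permutation is equally likely.
    for i in range(len(deck) - 1, 0, -1):
        j = random.randint(0, i)   # random index in 0..i inclusive
        deck[i], deck[j] = deck[j], deck[i]

cards = list(range(DECK_SIZE))
shuffle(cards)
# Reusing the routine for a 78-card Tarot deck needs no further edits:
shuffle(list(range(78)))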
Disadvantages are:
When the named constant is not defined near its use, it hurts the locality, and thus comprehensibility, of the code. Putting the 52 in a possibly distant place means that, to understand the workings of the "for" loop completely (for example to estimate the run-time of the loop), one must track down the definition and verify that it is the expected number. This is easy to avoid (by relocating the declaration) when the constant is only used in one portion of the code. When the named constant is used in disparate portions, on the other hand, the remote location is a clue to the reader that the same value appears in other places in the code, which may also be worth looking into.
It may make the code more verbose. The declaration of the constant adds a line. When the constant's name is longer than the value's, particularly if several such constants appear in one line, it may make it necessary to split one logical statement of the code across several lines. An increase in verbosity may be justified when there is some likelihood of confusion about the constant, or when there is a likelihood the constant may need to be changed, such as reuse of a shuffling routine for other card games. It may equally be justified as an increase in expressiveness.
It may be slower to process the expression deckSize + 1 at run-time than the value "53", although most modern compilers and interpreters will notice that deckSize has been declared as a constant and pre-calculate the value 53 in the compiled code. Even when that's not an option, loop optimization will move the addition so that it is performed before the loop. There is therefore usually no (or negligible) speed penalty compared to using magic numbers in code. In particular, the cost of debugging and the time needed to understand non-explanatory code must be weighed against the tiny calculation cost.
Accepted uses
In some contexts, the use of unnamed numerical constants is generally accepted (and arguably "not magic"). While such acceptance is subjective, and often depends on individual coding habits, the following are common examples:
the use of 0 and 1 as initial or incremental values in a for loop, such as for (int i = 0; i < max; i += 1)
the use of 2 to check whether a number is even or odd, as in isEven = (x % 2 == 0), where % is the modulo operator
the use of simple arithmetic constants, e.g., in expressions such as circumference = 2 * Math.PI * radius, or for calculating the discriminant of a quadratic equation as d = b^2 − 4*a*c
the use of powers of 10 to convert metric values (e.g. between grams and kilograms) or to calculate percentage and per mille values
exponents in expressions such as (f(x) ** 2 + f(y) ** 2) ** 0.5 for the square root of the sum of squares
The constants 1 and 0 are sometimes used to represent the Boolean values true and false in programming languages without a Boolean type, such as older versions of C. Most modern programming languages provide a boolean or bool primitive type and so the use of 0 and 1 is ill-advised. This can be more confusing since 0 sometimes means programmatic success (when -1 means failure) and failure in other cases (when 1 means success).
In C and C++, 0 represents the null pointer. As with Boolean values, the C standard library includes a macro definition NULL whose use is encouraged. Other languages provide a specific null or nil value and when this is the case no alternative should be used. The typed pointer constant nullptr has been introduced with C++11.
Format indicators
Origin
Format indicators were first used in early Version 7 Unix source code.
Unix was ported to one of the first DEC PDP-11/20s, which did not have memory protection. So early versions of Unix used the relocatable memory reference model. Pre-Sixth Edition Unix versions read an executable file into memory and jumped to the first low memory address of the program, relative address zero. With the development of paged versions of Unix, a header was created to describe the executable image components. Also, a branch instruction was inserted as the first word of the header to skip the header and start the program. In this way a program could be run in the older relocatable memory reference (regular) mode or in paged mode. As more executable formats were developed, new constants were added by incrementing the branch offset.
In the Sixth Edition source code of the Unix program loader, the exec() function read the executable (binary) image from the file system. The first 8 bytes of the file were a header containing the sizes of the program (text) and initialized (global) data areas. Also, the first 16-bit word of the header was compared to two constants to determine if the executable image contained relocatable memory references (normal), the newly implemented paged read-only executable image, or the separated instruction and data paged image. There was no mention of the dual role of the header constant, but the high order byte of the constant was, in fact, the operation code for the PDP-11 branch instruction (octal 000407 or hex 0107). Adding seven to the program counter showed that if this constant was executed, it would branch the Unix exec() service over the executable image eight byte header and start the program.
Since the Sixth and Seventh Editions of Unix employed paging code, the dual role of the header constant was hidden. That is, the exec() service read the executable file header (meta) data into a kernel space buffer, but read the executable image into user space, thereby not using the constant's branching feature. Magic number creation was implemented in the Unix linker and loader and magic number branching was probably still used in the suite of stand-alone diagnostic programs that came with the Sixth and Seventh Editions. Thus, the header constant did provide an illusion and met the criteria for magic.
In Version Seven Unix, the header constant was not tested directly, but assigned to a variable labeled ux_mag and subsequently referred to as the magic number. Probably because of its uniqueness, the term magic number came to mean executable format type, then expanded to mean file system type, and expanded again to mean any type of file.
In files
Magic numbers are common in programs across many operating systems. Magic numbers implement strongly typed data and are a form of in-band signaling to the controlling program that reads the data type(s) at program run-time. Many files have such constants that identify the contained data. Detecting such constants in files is a simple and effective way of distinguishing between many file formats and can yield further run-time information.
Examples
Compiled Java class files (bytecode) and Mach-O binaries start with hex CAFEBABE. When compressed with Pack200 the bytes are changed to CAFED00D.
GIF image files have the ASCII code for "GIF89a" (47 49 46 38 39 61) or "GIF87a" (47 49 46 38 37 61)
JPEG image files begin with FF D8 and end with FF D9. JPEG/JFIF files contain the null terminated string "JFIF" (4A 46 49 46 00). JPEG/Exif files contain the null terminated string "Exif" (45 78 69 66 00), followed by more metadata about the file.
PNG image files begin with an 8-byte signature which identifies the file as a PNG file and allows detection of common file transfer problems: "\211PNG\r\n\032\n" (89 50 4E 47 0D 0A 1A 0A). That signature contains various newline characters to permit detecting unwarranted automated newline conversions, such as transferring the file using FTP with the ASCII transfer mode instead of the binary mode.
Standard MIDI audio files have the ASCII code for "MThd" (MIDI Track header, 4D 54 68 64) followed by more metadata.
Unix or Linux scripts may start with a shebang ("#!", 23 21) followed by the path to an interpreter, if the interpreter is likely to be different from the one from which the script was invoked.
ELF executables start with the byte 7F followed by "ELF" (7F 45 4C 46).
PostScript files and programs start with "%!" (25 21).
PDF files start with "%PDF" (hex 25 50 44 46).
DOS MZ executable files and the EXE stub of the Microsoft Windows PE (Portable Executable) files start with the characters "MZ" (4D 5A), the initials of the designer of the file format, Mark Zbikowski. The definition allows the uncommon "ZM" (5A 4D) as well for dosZMXP, a non-PE EXE.
The Berkeley Fast File System superblock format is identified as either 19 54 01 19 or 01 19 54 depending on version; both represent the birthday of the author, Marshall Kirk McKusick.
The Master Boot Record of bootable storage devices on almost all IA-32 IBM PC compatibles has a code of 55 AA as its last two bytes.
Executables for the Game Boy and Game Boy Advance handheld video game systems have a 48-byte or 156-byte magic number, respectively, at a fixed spot in the header. This magic number encodes a bitmap of the Nintendo logo.
Amiga software executable Hunk files running on Amiga classic 68000 machines all started with the hexadecimal number $000003f3, nicknamed the "Magic Cookie."
In the Amiga, the only absolute address in the system is hex $0000 0004 (memory location 4), which contains the start location called SysBase, a pointer to exec.library, the so-called kernel of Amiga.
PEF files, used by the classic Mac OS and BeOS for PowerPC executables, contain the ASCII code for "Joy!" (4A 6F 79 21) as a prefix.
TIFF files begin with either "II" or "MM" followed by 42 as a two-byte integer in little or big endian byte ordering. "II" is for Intel, which uses little endian byte ordering, so the magic number is 49 49 2A 00. "MM" is for Motorola, which uses big endian byte ordering, so the magic number is 4D 4D 00 2A.
Unicode text files encoded in UTF-16 often start with the Byte Order Mark to detect endianness (FE FF for big endian and FF FE for little endian). And on Microsoft Windows, UTF-8 text files often start with the UTF-8 encoding of the same character, EF BB BF.
LLVM Bitcode files start with "BC" (42 43).
WAD files start with "IWAD" or "PWAD" (for Doom), "WAD2" (for Quake) and "WAD3" (for Half-Life).
Microsoft Compound File Binary Format (mostly known as one of the older formats of Microsoft Office documents) files start with D0 CF 11 E0, which is visually suggestive of the word "DOCFILE0".
Headers in ZIP files often show up in text editors as "PK♥♦" (50 4B 03 04), where "PK" are the initials of Phil Katz, author of DOS compression utility PKZIP.
Headers in 7z files begin with "7z" (full magic number: 37 7A BC AF 27 1C).
Detection
The Unix utility program file can read and interpret magic numbers from files, and the file which is used to parse the information is called magic. The Windows utility TrID has a similar purpose.
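As a rough sketch of how such detection works, a minimal Python version of the idea (the identify function and the tiny signature table are illustrative; real tools such as file know far more formats) might look like this:

# Map of leading byte signatures to a human-readable format name.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF":              "PDF document",
    b"GIF87a":            "GIF image",
    b"GIF89a":            "GIF image",
    b"\x7fELF":           "ELF executable",
    b"PK\x03\x04":        "ZIP archive",
}

def identify(path):
    # Return a format name based on the file's magic number, or None.
    with open(path, "rb") as f:
        header = f.read(8)  # longest signature above is 8 bytes
    for magic, name in SIGNATURES.items():
        if header.startswith(magic):
            return name
    return None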
In protocols
Examples
The OSCAR protocol, used in AIM/ICQ, prefixes requests with 2A.
In the RFB protocol used by VNC, a client starts its conversation with a server by sending "RFB" (52 46 42, for "Remote Frame Buffer") followed by the client's protocol version number.
In the SMB protocol used by Microsoft Windows, each SMB request or server reply begins with 'FF 53 4D 42', or "\xFFSMB" at the start of the SMB request.
In the MSRPC protocol used by Microsoft Windows, each TCP-based request begins with 05 at the start of the request (representing Microsoft DCE/RPC Version 5), followed immediately by a 00 or 01 for the minor version. In UDP-based MSRPC requests the first byte is always 04.
In COM and DCOM, marshalled interfaces, called OBJREFs, always start with the byte sequence "MEOW" (4D 45 4F 57). Debugging extensions (used for DCOM channel hooking) are prefaced with the byte sequence "MARB" (4D 41 52 42).
Unencrypted BitTorrent tracker requests begin with a single byte containing the value 19 representing the header length, followed immediately by the phrase "BitTorrent protocol" at byte position 1.
eDonkey2000/eMule traffic begins with a single byte representing the client version. Currently E3 represents an eDonkey client, C5 represents eMule, and D4 represents compressed eMule.
The first 4 bytes of a block in the Bitcoin Blockchain contains a magic number which serves as the network identifier. The value is a constant 0xD9B4BEF9, which indicates the main network, while the constant 0xDAB5BFFA indicates the testnet.
SSL transactions always begin with a "client hello" message. The record encapsulation scheme used to prefix all SSL packets consists of two- and three- byte header forms. Typically an SSL version 2 client hello message is prefixed with a 80 and an SSLv3 server response to a client hello begins with 16 (though this may vary).
DHCP packets use a "magic cookie" value of '0x63 0x82 0x53 0x63' at the start of the options section of the packet. This value is included in all DHCP packet types.
HTTP/2 connections are opened with the preface '0x505249202a20485454502f322e300d0a0d0a534d0d0a0d0a', or "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n". The preface is designed to avoid the processing of frames by servers and intermediaries which support earlier versions of HTTP but not 2.0.
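As a small illustration of a protocol magic value in use, a Python sketch that checks whether the start of a connection matches the HTTP/2 preface quoted above (the function name is illustrative):

# The HTTP/2 client connection preface, as defined in RFC 7540.
HTTP2_PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def looks_like_http2(initial_bytes):
    # True if the start of a connection matches the HTTP/2 preface.
    return initial_bytes.startswith(HTTP2_PREFACE)

print(looks_like_http2(b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"))  # True
print(looks_like_http2(b"GET / HTTP/1.1\r\n"))                # False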
In interfaces
Magic numbers are common in API functions and interfaces across many operating systems, including DOS, Windows and NetWare:
Examples
IBM PC-compatible BIOSes use magic values 0000 and 1234 to decide if the system should count up memory or not on reboot, thereby performing a cold or a warm boot. These values are also used by EMM386 memory managers intercepting boot requests. BIOSes also use magic values 55 AA to determine if a disk is bootable.
The MS-DOS disk cache SMARTDRV (codenamed "Bambi") uses magic values BABE and EBAB in API functions.
Many DR DOS, Novell DOS and OpenDOS drivers developed in the former European Development Centre in the UK use the value 0EDC as magic token when invoking or providing additional functionality sitting on top of the (emulated) standard DOS functions, NWCACHE being one example.
Other uses
Examples
The default MAC address on Texas Instruments SOCs is DE:AD:BE:EF:00:00.
Data type limits
This is a list of limits of data storage types:
127 (maximum value of a signed 8-bit integer)
255 (maximum value of an unsigned 8-bit integer)
32,767 (maximum value of a signed 16-bit integer)
65,535 (maximum value of an unsigned 16-bit integer)
2,147,483,647 (maximum value of a signed 32-bit integer)
4,294,967,295 (maximum value of an unsigned 32-bit integer)
9,223,372,036,854,775,807 (maximum value of a signed 64-bit integer)
18,446,744,073,709,551,615 (maximum value of an unsigned 64-bit integer)
GUIDs
It is possible to create or alter globally unique identifiers (GUIDs) so that they are memorable, but this is highly discouraged as it compromises their strength as near-unique identifiers. The specifications for generating GUIDs and UUIDs are quite complex, which is what leads to them being virtually unique, if properly implemented.
Microsoft Windows product ID numbers for Microsoft Office products sometimes end with 0000-0000-0000000FF1CE ("OFFICE"), such as {90160000-008C-0000-0000-0000000FF1CE}, the product ID for the "Office 16 Click-to-Run Extensibility Component".
Java uses several GUIDs starting with CAFEEFAC.
In the GUID Partition Table of the GPT partitioning scheme, BIOS Boot partitions use the special GUID {21686148-6449-6E6F-744E-656564454649} which does not follow the GUID definition; instead, it is formed by using the ASCII codes for the string "Hah!IdontNeedEFI" partially in little endian order.
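This can be checked directly: Python's standard uuid module exposes the mixed-endian on-disk byte layout of a GUID through its bytes_le attribute, which recovers the hidden string:

import uuid

# BIOS boot partition GUID used by the GPT partitioning scheme.
guid = uuid.UUID("21686148-6449-6E6F-744E-656564454649")

# bytes_le stores the first three fields little-endian and the rest as-is,
# matching the on-disk GUID layout.
print(guid.bytes_le.decode("ascii"))  # prints: Hah!IdontNeedEFI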
Debug values
Magic debug values are specific values written to memory during allocation or deallocation, so that it will later be possible to tell whether or not they have become corrupted, and to make it obvious when values taken from uninitialized memory are being used. Memory is usually viewed in hexadecimal, so memorable repeating or hexspeak values are common. Numerically odd values may be preferred so that processors without byte addressing will fault when attempting to use them as pointers (which must fall at even addresses). Values should be chosen that are away from likely addresses (the program code, static data, heap data, or the stack). Similarly, they may be chosen so that they are not valid codes in the instruction set for the given architecture.
Since it is very unlikely, although possible, that a 32-bit integer would take such a specific value, the appearance of such a number in a debugger or memory dump most likely indicates an error such as a buffer overflow or an uninitialized variable.
Famous and common examples include hexspeak values such as DEADBEEF and BAADF00D, as well as the fill patterns CCCCCCCC and CDCDCDCD used by Microsoft compilers and debug heaps to mark uninitialized stack and heap memory, respectively.
Most of these are 32 bits long, the word size of most 32-bit architecture computers.
The prevalence of these values in Microsoft technology is no coincidence; they are discussed in detail in Steve Maguire's book Writing Solid Code from Microsoft Press. He gives a variety of criteria for these values, such as:
They should not be useful; that is, most algorithms that operate on them should be expected to do something unusual. Numbers like zero don't fit this criterion.
They should be easily recognized by the programmer as invalid values in the debugger.
On machines that don't have byte alignment, they should be odd numbers, so that dereferencing them as addresses causes an exception.
They should cause an exception, or perhaps even a debugger break, if executed as code.
Since they were often used to mark areas of memory that were essentially empty, some of these terms came to be used in phrases meaning "gone, aborted, flushed from memory"; e.g. "Your program is DEADBEEF".
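A minimal sketch of the technique in Python, assuming a hypothetical allocator that overwrites freed buffers with a sentinel (the names DEBUG_FILL, fill_freed and looks_freed are illustrative):

DEBUG_FILL = 0xDEADBEEF  # sentinel written over freed memory

def fill_freed(buffer):
    # Overwrite a bytearray with the repeating 4-byte sentinel pattern.
    pattern = DEBUG_FILL.to_bytes(4, "big")
    for i in range(len(buffer)):
        buffer[i] = pattern[i % 4]

def looks_freed(value):
    # Heuristic: a 32-bit value equal to the sentinel suggests that
    # stale (freed) memory was read and interpreted as data or a pointer.
    return value == DEBUG_FILL

buf = bytearray(16)
fill_freed(buf)
word = int.from_bytes(buf[0:4], "big")
print(hex(word), looks_freed(word))  # 0xdeadbeef True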
See also
Magic string
List of file signatures
FourCC
Hard coding
Magic (programming)
NaN (Not a Number)
Enumerated type
Hexspeak, for another list of magic values
Nothing up my sleeve number about magic constants in cryptographic algorithms
Time formatting and storage bugs, for problems that can be caused by magics
Sentinel value (aka flag value, trip value, rogue value, signal value, dummy data)
Canary value, special value to detect buffer overflows
XYZZY (magic word)
Fast inverse square root, an algorithm that uses the constant 0x5F3759DF
References
Anti-patterns
Debugging
Computer programming folklore
Software engineering folklore | Magic number (programming) | [
"Technology",
"Engineering"
] | 5,379 | [
"Software engineering",
"Software engineering folklore",
"Anti-patterns"
] |
353,021 | https://en.wikipedia.org/wiki/Homeomorphism%20%28graph%20theory%29 | In graph theory, two graphs and are homeomorphic if there is a graph isomorphism from some subdivision of to some subdivision of . If the edges of a graph are thought of as lines drawn from one vertex to another (as they are usually depicted in diagrams), then two graphs are homeomorphic to each other in the graph-theoretic sense precisely if their diagrams are homeomorphic in the topological sense.
Subdivision and smoothing
In general, a subdivision of a graph G (sometimes known as an expansion) is a graph resulting from the subdivision of edges in G. The subdivision of some edge e with endpoints {u,v } yields a graph containing one new vertex w, and with an edge set replacing e by two new edges, {u,w } and {w,v }. For directed edges, this operation preserves their direction of propagation.
For example, the edge e, with endpoints {u,v }, can be subdivided into two edges, e1 and e2, connecting to a new vertex w of degree 2 (or of indegree 1 and outdegree 1 in the directed case).
Determining whether for graphs G and H, H is homeomorphic to a subgraph of G, is an NP-complete problem.
Reversion
The reverse operation, smoothing out or smoothing a vertex w with regards to the pair of edges (e1, e2) incident on w, removes both edges containing w and replaces (e1, e2) with a new edge that connects the other endpoints of the pair. Here, it is emphasized that only degree-2 (i.e., 2-valent) vertices can be smoothed. The limit of this operation is realized by the graph that has no more degree-2 vertices.
For example, in the simple connected graph with two edges, e1 {u,w } and e2 {w,v }, the vertex w can be smoothed away, resulting in a single edge {u,v }.
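Both operations are straightforward to express on an undirected graph stored as a set of edges; the following Python sketch (function names are illustrative, and loops are not handled) implements the subdivision and degree-2 smoothing just described:

def subdivide(edges, edge, new_vertex):
    # Replace edge {u, v} by {u, w} and {w, v}, where w is a new vertex.
    u, v = edge
    edges = set(edges)
    edges.remove(frozenset((u, v)))
    edges.add(frozenset((u, new_vertex)))
    edges.add(frozenset((new_vertex, v)))
    return edges

def smooth(edges, w):
    # Remove a degree-2 vertex w, joining its two neighbours directly.
    incident = [e for e in edges if w in e]
    assert len(incident) == 2, "only degree-2 vertices can be smoothed"
    (a,) = incident[0] - {w}
    (b,) = incident[1] - {w}
    edges = set(edges) - set(incident)
    edges.add(frozenset((a, b)))
    return edges

g = {frozenset(("u", "v"))}
g = subdivide(g, ("u", "v"), "w")   # edges {u,w} and {w,v}
g = smooth(g, "w")                  # back to the single edge {u,v}
print(g)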
Barycentric subdivisions
The barycentric subdivision subdivides each edge of the graph. This is a special subdivision, as it always results in a bipartite graph. This procedure can be repeated, so that the nth barycentric subdivision is the barycentric subdivision of the n−1st barycentric subdivision of the graph. The second such subdivision is always a simple graph.
Embedding on a surface
It is evident that subdividing a graph preserves planarity. Kuratowski's theorem states that
a finite graph is planar if and only if it contains no subgraph homeomorphic to K5 (complete graph on five vertices) or K3,3 (complete bipartite graph on six vertices, three of which connect to each of the other three).
In fact, a graph homeomorphic to K5 or K3,3 is called a Kuratowski subgraph.
A generalization, following from the Robertson–Seymour theorem, asserts that for each integer g, there is a finite obstruction set of graphs L(g) such that a graph H is embeddable on a surface of genus g if and only if H contains no homeomorphic copy of any of the graphs in L(g). For example, L(0) consists of the Kuratowski subgraphs.
Example
In the following example, graph G and graph H are homeomorphic.
If G′ is the graph created by subdivision of the outer edges of G and H′ is the graph created by subdivision of the inner edge of H, then G′ and H′ have a similar graph drawing:
Therefore, there exists an isomorphism between G' and H', meaning G and H are homeomorphic.
Mixed graphs
The following mixed graphs are homeomorphic. The directed edges are shown to have an intermediate arrow head.
See also
Minor (graph theory)
Edge contraction
References
Further reading
Graph theory
Homeomorphisms
NP-complete problems | Homeomorphism (graph theory) | [
"Mathematics"
] | 817 | [
"Discrete mathematics",
"Homeomorphisms",
"Graph theory",
"Computational problems",
"Combinatorics",
"Topology",
"Mathematical relations",
"Mathematical problems",
"NP-complete problems"
] |
353,022 | https://en.wikipedia.org/wiki/CW%20complex | In mathematics, and specifically in topology, a CW complex (also cellular complex or cell complex) is a topological space that is built by gluing together topological balls (so-called cells) of different dimensions in specific ways. It generalizes both manifolds and simplicial complexes and has particular significance for algebraic topology. It was initially introduced by J. H. C. Whitehead to meet the needs of homotopy theory.
CW complexes have better categorical properties than simplicial complexes, but still retain a combinatorial nature that allows for computation (often with a much smaller complex).
The C in CW stands for "closure-finite", and the W for "weak" topology.
Definition
CW complex
A CW complex is constructed by taking the union of a sequence of topological spaces ∅ = X_{-1} ⊆ X_0 ⊆ X_1 ⊆ ⋯ such that each X_k is obtained from X_{k-1} by gluing copies of k-cells, each homeomorphic to the open k-ball B^k, to X_{k-1} by continuous gluing maps. The maps are also called attaching maps. Thus as a set, X = ⋃_k X_k.
Each X_k is called the k-skeleton of the complex.
The topology of X is the weak topology: a subset U ⊆ X is open iff U ∩ X_k is open in X_k for each k-skeleton X_k.
In the language of category theory, the topology on X is the direct limit of the diagram X_{-1} → X_0 → X_1 → ⋯. The name "CW" stands for "closure-finite weak topology", which is explained by the following theorem:
This partition of X is also called a cellulation.
The construction, in words
The CW complex construction is a straightforward generalization of the following process:
A 0-dimensional CW complex is just a set of zero or more discrete points (with the discrete topology).
A 1-dimensional CW complex is constructed by taking the disjoint union of a 0-dimensional CW complex with one or more copies of the unit interval. For each copy, there is a map that "glues" its boundary (its two endpoints) to elements of the 0-dimensional complex (the points). The topology of the CW complex is the topology of the quotient space defined by these gluing maps.
In general, an n-dimensional CW complex is constructed by taking the disjoint union of a k-dimensional CW complex (for some k < n) with one or more copies of the n-dimensional ball. For each copy, there is a map that "glues" its boundary (the (n−1)-dimensional sphere) to elements of the k-dimensional complex. The topology of the CW complex is the quotient topology defined by these gluing maps.
An infinite-dimensional CW complex can be constructed by repeating the above process countably many times. Since the topology of the union is indeterminate, one takes the direct limit topology, since the diagram is highly suggestive of a direct limit. This turns out to have great technical benefits.
Regular CW complexes
A regular CW complex is a CW complex whose gluing maps are homeomorphisms. Accordingly, the partition of X is also called a regular cellulation.
A loopless graph is represented by a regular 1-dimensional CW-complex. A closed 2-cell graph embedding on a surface is a regular 2-dimensional CW-complex. Finally, the 3-sphere regular cellulation conjecture claims that every 2-connected graph is the 1-skeleton of a regular CW-complex on the 3-dimensional sphere.
Relative CW complexes
Roughly speaking, a relative CW complex differs from a CW complex in that we allow it to have one extra building block that does not necessarily possess a cellular structure. This extra-block can be treated as a (-1)-dimensional cell in the former definition.
Examples
0-dimensional CW complexes
Every discrete topological space is a 0-dimensional CW complex.
1-dimensional CW complexes
Some examples of 1-dimensional CW complexes are:
An interval. It can be constructed from two points (x and y), and the 1-dimensional ball B (an interval), such that one endpoint of B is glued to x and the other is glued to y. The two points x and y are the 0-cells; the interior of B is the 1-cell. Alternatively, it can be constructed just from a single interval, with no 0-cells.
A circle. It can be constructed from a single point x and the 1-dimensional ball B, such that both endpoints of B are glued to x. Alternatively, it can be constructed from two points x and y and two 1-dimensional balls A and B, such that the endpoints of A are glued to x and y, and the endpoints of B are glued to x and y too.
A graph. Given a graph, a 1-dimensional CW complex can be constructed in which the 0-cells are the vertices and the 1-cells are the edges of the graph. The endpoints of each edge are identified with the incident vertices to it. This realization of a combinatorial graph as a topological space is sometimes called a topological graph.
3-regular graphs can be considered as generic 1-dimensional CW complexes. Specifically, if X is a 1-dimensional CW complex, the attaching map for a 1-cell is a map from a two-point space to X, f : {0,1} → X. This map can be perturbed to be disjoint from the 0-skeleton of X if and only if f(0) and f(1) are not 0-valence vertices of X.
The standard CW structure on the real numbers has as 0-skeleton the integers and as 1-cells the intervals [n, n+1]. Similarly, the standard CW structure on R^n has cubical cells that are products of the 0- and 1-cells from R. This is the standard cubic lattice cell structure on R^n.
Finite-dimensional CW complexes
Some examples of finite-dimensional CW complexes are:
An n-dimensional sphere. It admits a CW structure with two cells, one 0-cell and one n-cell. Here the n-cell is attached by the constant mapping from its boundary to the single 0-cell. An alternative cell decomposition has one (n-1)-dimensional sphere (the "equator") and two n-cells that are attached to it (the "upper hemi-sphere" and the "lower hemi-sphere"). Inductively, this gives a CW decomposition with two cells in every dimension k such that 0 ≤ k ≤ n.
The n-dimensional real projective space. It admits a CW structure with one cell in each dimension.
The terminology for a generic 2-dimensional CW complex is a shadow.
A polyhedron is naturally a CW complex.
Grassmannian manifolds admit a CW structure called Schubert cells.
Differentiable manifolds, algebraic and projective varieties have the homotopy type of CW complexes.
The one-point compactification of a cusped hyperbolic manifold has a canonical CW decomposition with only one 0-cell (the compactification point) called the Epstein–Penner Decomposition. Such cell decompositions are frequently called ideal polyhedral decompositions and are used in popular computer software, such as SnapPea.
Infinite-dimensional CW complexes
The infinite dimensional sphere S^∞. It admits a CW-structure with 2 cells in each dimension which are assembled in a way such that the n-skeleton is precisely given by the n-sphere S^n.
The infinite dimensional projective spaces RP^∞, CP^∞ and HP^∞. RP^∞ has one cell in every dimension, CP^∞ has one cell in every even dimension and HP^∞ has one cell in every dimension divisible by 4. The respective skeletons are then given by RP^n, CP^n (2n-skeleton) and HP^n (4n-skeleton).
Non CW-complexes
An infinite-dimensional Hilbert space is not a CW complex: it is a Baire space and therefore cannot be written as a countable union of n-skeletons, each of which being a closed set with empty interior. This argument extends to many other infinite-dimensional spaces.
The hedgehog space is homotopy equivalent to a CW complex (the point) but it does not admit a CW decomposition, since it is not locally contractible.
The Hawaiian earring has no CW decomposition, because it is not locally contractible at origin. It is also not homotopy equivalent to a CW complex, because it has no good open cover.
Properties
CW complexes are locally contractible.
If a space is homotopy equivalent to a CW complex, then it has a good open cover. A good open cover is an open cover, such that every nonempty finite intersection is contractible.
CW complexes are paracompact. Finite CW complexes are compact. A compact subspace of a CW complex is always contained in a finite subcomplex.
CW complexes satisfy the Whitehead theorem: a map between CW complexes is a homotopy equivalence if and only if it induces an isomorphism on all homotopy groups.
A covering space of a CW complex is also a CW complex.
The product of two CW complexes can be made into a CW complex. Specifically, if X and Y are CW complexes, then one can form a CW complex X × Y in which each cell is a product of a cell in X and a cell in Y, endowed with the weak topology. The underlying set of X × Y is then the Cartesian product of X and Y, as expected. In addition, the weak topology on this set often agrees with the more familiar product topology on X × Y, for example if either X or Y is finite. However, the weak topology can be finer than the product topology, for example if neither X nor Y is locally compact. In this unfavorable case, the product X × Y in the product topology is not a CW complex. On the other hand, the product of X and Y in the category of compactly generated spaces agrees with the weak topology and therefore defines a CW complex.
Let X and Y be CW complexes. Then the function spaces Hom(X,Y) (with the compact-open topology) are not CW complexes in general. If X is finite then Hom(X,Y) is homotopy equivalent to a CW complex by a theorem of John Milnor (1959). Note that X and Y are compactly generated Hausdorff spaces, so Hom(X,Y) is often taken with the compactly generated variant of the compact-open topology; the above statements remain true.
Cellular approximation theorem
Homology and cohomology of CW complexes
Singular homology and cohomology of CW complexes is readily computable via cellular homology. Moreover, in the category of CW complexes and cellular maps, cellular homology can be interpreted as a homology theory. To compute an extraordinary (co)homology theory for a CW complex, the Atiyah–Hirzebruch spectral sequence is the analogue of cellular homology.
Some examples:
For the sphere S^n, take the cell decomposition with two cells: a single 0-cell and a single n-cell. The cellular homology chain complex then has a single copy of Z in degrees 0 and n and is zero elsewhere, so the homology equals the chain complex, since all the differentials are zero.
Alternatively, if we use the equatorial decomposition with two cells in every dimension, the chain groups are Z^2 in every dimension from 0 to n, and the differentials are rank-one 2×2 matrices. This gives the same homology computation as above, as the chain complex is exact at all terms except C_0 and C_n.
For complex projective space CP^n we get similarly one copy of Z in every even degree between 0 and 2n, and zero elsewhere.
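Written out explicitly, the resulting groups (a standard computation, stated here in LaTeX for n ≥ 1) are:

H_k(S^n) \cong \begin{cases} \mathbb{Z} & \text{if } k = 0 \text{ or } k = n, \\ 0 & \text{otherwise,} \end{cases}
\qquad
H_k(\mathbb{CP}^n) \cong \begin{cases} \mathbb{Z} & \text{if } k \text{ is even and } 0 \le k \le 2n, \\ 0 & \text{otherwise.} \end{cases}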
Both of the above examples are particularly simple because the homology is determined by the number of cells—i.e.: the cellular attaching maps have no role in these computations. This is a very special phenomenon and is not indicative of the general case.
Modification of CW structures
There is a technique, developed by Whitehead, for replacing a CW complex with a homotopy-equivalent CW complex that has a simpler CW decomposition.
Consider, for example, an arbitrary CW complex X. Its 1-skeleton can be fairly complicated, being an arbitrary graph. Now consider a maximal forest F in this graph. Since it is a collection of trees, and trees are contractible, consider the space X/~ where the equivalence relation ~ is generated by x ~ y if x and y are contained in a common tree in the maximal forest F. The quotient map X → X/~ is a homotopy equivalence. Moreover, X/~ naturally inherits a CW structure, with cells corresponding to the cells of X that are not contained in F. In particular, the 1-skeleton of X/~ is a disjoint union of wedges of circles.
Another way of stating the above is that a connected CW complex can be replaced by a homotopy-equivalent CW complex whose 0-skeleton consists of a single point.
Consider climbing up the connectivity ladder—assume X is a simply-connected CW complex whose 0-skeleton consists of a point. Can we, through suitable modifications, replace X by a homotopy-equivalent CW complex whose 1-skeleton consists of a single point? The answer is yes. The first step is to observe that the 1-skeleton of X, together with the attaching maps used to construct the 2-skeleton from it, forms a group presentation. The Tietze theorem for group presentations states that there is a sequence of moves we can perform to reduce this group presentation to the trivial presentation of the trivial group. There are two Tietze moves:
1) Adding/removing a generator. Adding a generator, from the perspective of the CW decomposition, consists of adding a 1-cell and a 2-cell whose attaching map consists of the new 1-cell, with the remainder of the attaching map lying in the old 1-skeleton. If we let Y denote the corresponding CW complex, then there is a homotopy equivalence Y → X given by sliding the new 2-cell into X.
2) Adding/removing a relation. The act of adding a relation is similar, only one replaces X by a complex Y where the new 3-cell has an attaching map that consists of the new 2-cell, with the remainder mapping into the old 2-skeleton. A similar slide gives a homotopy equivalence Y → X.
If a CW complex X is n-connected one can find a homotopy-equivalent CW complex whose n-skeleton consists of a single point. The argument for n ≥ 2 is similar to the n = 1 case, only one replaces Tietze moves for the fundamental group presentation by elementary matrix operations for the presentation matrices coming from cellular homology; i.e., one can similarly realize elementary matrix operations by a sequence of addition/removal of cells or suitable homotopies of the attaching maps.
'The' homotopy category
The homotopy category of CW complexes is, in the opinion of some experts, the best if not the only candidate for the homotopy category (for technical reasons the version for pointed spaces is actually used). Auxiliary constructions that yield spaces that are not CW complexes must be used on occasion. One basic result is that the representable functors on the homotopy category have a simple characterisation (the Brown representability theorem).
See also
Abstract cell complex
The notion of CW complex has an adaptation to smooth manifolds called a handle decomposition, which is closely related to surgery theory.
References
Notes
General references
Algebraic topology
Homotopy theory
Topological spaces | CW complex | [
"Mathematics"
] | 3,024 | [
"Mathematical structures",
"Algebraic topology",
"Space (mathematics)",
"Topological spaces",
"Fields of abstract algebra",
"Topology"
] |
353,042 | https://en.wikipedia.org/wiki/Graph%20minor | In graph theory, an undirected graph is called a minor of the graph if can be formed from by deleting edges, vertices and by contracting edges.
The theory of graph minors began with Wagner's theorem that a graph is planar if and only if its minors include neither the complete graph K5 nor the complete bipartite graph K3,3. The Robertson–Seymour theorem implies that an analogous forbidden minor characterization exists for every property of graphs that is preserved by deletions and edge contractions.
For every fixed graph H, it is possible to test whether H is a minor of an input graph G in polynomial time; together with the forbidden minor characterization this implies that every graph property preserved by deletions and contractions may be recognized in polynomial time.
Other results and conjectures involving graph minors include the graph structure theorem, according to which the graphs that do not have a fixed graph H as a minor may be formed by gluing together simpler pieces, and Hadwiger's conjecture relating the inability to color a graph to the existence of a large complete graph as a minor of it. Important variants of graph minors include the topological minors and immersion minors.
Definitions
An edge contraction is an operation that removes an edge from a graph while simultaneously merging the two vertices it used to connect. An undirected graph H is a minor of another undirected graph G if a graph isomorphic to H can be obtained from G by contracting some edges, deleting some edges, and deleting some isolated vertices. The order in which a sequence of such contractions and deletions is performed on G does not affect the resulting graph H.
Graph minors are often studied in the more general context of matroid minors. In this context, it is common to assume that all graphs are connected, with self-loops and multiple edges allowed (that is, they are multigraphs rather than simple graphs); the contraction of a loop and the deletion of a cut-edge are forbidden operations. This point of view has the advantage that edge deletions leave the rank of a graph unchanged, and edge contractions always reduce the rank by one.
In other contexts (such as with the study of pseudoforests) it makes more sense to allow the deletion of a cut-edge, and to allow disconnected graphs, but to forbid multigraphs. In this variation of graph minor theory, a graph is always simplified after any edge contraction to eliminate its self-loops and multiple edges.
A graph parameter f is referred to as "minor-monotone" if, whenever H is a minor of G, one has f(H) ≤ f(G).
Example
In the following example, graph H is a minor of graph G:
The following diagram illustrates this. First construct a subgraph of G by deleting the dashed edges (and the resulting isolated vertex), and then contract the gray edge (merging the two vertices it connects).
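The three operations in this definition are easy to sketch in Python for simple graphs stored as a vertex set and a set of edges (function names are illustrative; following the simple-graph convention above, loops and parallel edges arising from a contraction are discarded automatically):

def delete_edge(vertices, edges, e):
    # Delete an edge; vertices are kept (isolated ones can be dropped separately).
    return set(vertices), set(edges) - {frozenset(e)}

def delete_vertex(vertices, edges, v):
    # Delete a vertex together with all edges incident to it.
    return set(vertices) - {v}, {e for e in edges if v not in e}

def contract_edge(vertices, edges, e):
    # Contract edge {u, v}: merge v into u and discard any resulting loop.
    u, v = tuple(e)
    new_edges = set()
    for edge in set(edges) - {frozenset(e)}:
        edge = frozenset(u if x == v else x for x in edge)
        if len(edge) == 2:          # drop self-loops created by the merge
            new_edges.add(edge)
    return set(vertices) - {v}, new_edges

V = {"a", "b", "c"}
E = {frozenset(p) for p in (("a", "b"), ("b", "c"), ("a", "c"))}
V, E = contract_edge(V, E, ("a", "b"))   # a triangle contracts to a single edge
print(V, E)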
Major results and conjectures
It is straightforward to verify that the graph minor relation forms a partial order on the isomorphism classes of finite undirected graphs: it is transitive (a minor of a minor of G is a minor of G itself), and G and H can only be minors of each other if they are isomorphic, because any nontrivial minor operation removes edges or vertices. A deep result by Neil Robertson and Paul Seymour states that this partial order is actually a well-quasi-ordering: if an infinite list G1, G2, ... of finite graphs is given, then there always exist two indices i < j such that Gi is a minor of Gj. Another equivalent way of stating this is that any set of graphs can have only a finite number of minimal elements under the minor ordering. This result proved a conjecture formerly known as Wagner's conjecture, after Klaus Wagner; Wagner had conjectured it long earlier, but only published it in 1970.
In the course of their proof, Seymour and Robertson also prove the graph structure theorem in which they determine, for any fixed graph H, the rough structure of any graph that does not have H as a minor. The statement of the theorem is itself long and involved, but in short it establishes that such a graph must have the structure of a clique-sum of smaller graphs that are modified in small ways from graphs embedded on surfaces of bounded genus.
Thus, their theory establishes fundamental connections between graph minors and topological embeddings of graphs.
For any graph H, the simple H-minor-free graphs must be sparse, which means that the number of edges is less than some constant multiple of the number of vertices. More specifically, if H has h vertices, then a simple n-vertex H-minor-free graph can have at most O(nh√(log h)) edges, and some Kh-minor-free graphs have at least this many edges. Thus, if H has h vertices, then H-minor-free graphs have average degree O(h√(log h)) and furthermore degeneracy O(h√(log h)). Additionally, the H-minor-free graphs have a separator theorem similar to the planar separator theorem for planar graphs: for any fixed H, and any n-vertex H-minor-free graph G, it is possible to find a subset of O(√n) vertices whose removal splits G into two (possibly disconnected) subgraphs with at most 2n/3 vertices per subgraph. Even stronger, for any fixed H, H-minor-free graphs have treewidth O(√n).
The Hadwiger conjecture in graph theory proposes that if a graph G does not contain a minor isomorphic to the complete graph on k vertices, then G has a proper coloring with k − 1 colors. The case k = 5 is a restatement of the four color theorem. The Hadwiger conjecture has been proven for k ≤ 6, but is unknown in the general case. It has been called "one of the deepest unsolved problems in graph theory." Another result relating the four-color theorem to graph minors is the snark theorem, announced by Robertson, Sanders, Seymour, and Thomas, a strengthening of the four-color theorem conjectured by W. T. Tutte and stating that any bridgeless 3-regular graph that requires four colors in an edge coloring must have the Petersen graph as a minor.
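In symbols, a standard way to write the conjecture, using χ(G) for the chromatic number and ⪯ for the minor relation, is the following:

```latex
% Hadwiger's conjecture: a graph that needs k colours contains K_k as a minor.
\forall k \ge 1\ \forall G:\qquad \chi(G) \ge k \;\Longrightarrow\; K_k \preceq G
% Equivalently: every K_k-minor-free graph is (k-1)-colourable.
```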
Minor-closed graph families
Many families of graphs have the property that every minor of a graph in F is also in F; such a class is said to be minor-closed. For instance, in any planar graph, or any embedding of a graph on a fixed topological surface, neither the removal of edges nor the contraction of edges can increase the genus of the embedding; therefore, planar graphs and the graphs embeddable on any fixed surface form minor-closed families.
If F is a minor-closed family, then (because of the well-quasi-ordering property of minors) among the graphs that do not belong to F there is a finite set X of minor-minimal graphs. These graphs are forbidden minors for F: a graph belongs to F if and only if it does not contain as a minor any graph in X. That is, every minor-closed family F can be characterized as the family of X-minor-free graphs for some finite set X of forbidden minors.
The best-known example of a characterization of this type is Wagner's theorem characterizing the planar graphs as the graphs having neither K5 nor K3,3 as minors.
In some cases, the properties of the graphs in a minor-closed family may be closely connected to the properties of their excluded minors. For example, a minor-closed graph family F has bounded pathwidth if and only if its forbidden minors include a forest, F has bounded tree-depth if and only if its forbidden minors include a disjoint union of path graphs, F has bounded treewidth if and only if its forbidden minors include a planar graph, and F has bounded local treewidth (a functional relationship between diameter and treewidth) if and only if its forbidden minors include an apex graph (a graph that can be made planar by the removal of a single vertex). If H can be drawn in the plane with only a single crossing (that is, it has crossing number one), then the H-minor-free graphs have a simplified structure theorem in which they are formed as clique-sums of planar graphs and graphs of bounded treewidth. For instance, both K5 and K3,3 have crossing number one, and as Wagner showed, the K5-minor-free graphs are exactly the 3-clique-sums of planar graphs and the eight-vertex Wagner graph, while the K3,3-minor-free graphs are exactly the 2-clique-sums of planar graphs and K5.
Variations
Topological minors
A graph H is called a topological minor of a graph G if a subdivision of H is isomorphic to a subgraph of G. Every topological minor is also a minor. The converse, however, is not true in general (for instance, the complete graph K5 is a minor of the Petersen graph but not a topological minor of it), but it does hold for graphs with maximum degree not greater than three.
The topological minor relation is not a well-quasi-ordering on the set of finite graphs, and hence the result of Robertson and Seymour does not apply to topological minors. However, it is straightforward to construct finite forbidden topological minor characterizations from finite forbidden minor characterizations by replacing every branch set that has k outgoing edges with every tree on k leaves that has down-degree at least two.
Induced minors
A graph H is called an induced minor of a graph G if it can be obtained from an induced subgraph of G by contracting edges. Otherwise, G is said to be H-induced minor-free.
Immersion minor
A graph operation called lifting is central to the concept of immersions. Lifting is an operation on adjacent edges. Given three vertices v, u, and w, where (v,u) and (u,w) are edges in the graph, the lifting of vuw, or equivalently of (v,u), (u,w), is the operation that deletes the two edges (v,u) and (u,w) and adds the edge (v,w). In the case where (v,w) was already present, v and w will now be connected by more than one edge, and hence this operation is intrinsically a multigraph operation.
In the case where a graph H can be obtained from a graph G by a sequence of lifting operations (on G) and then finding an isomorphic subgraph, we say that H is an immersion minor of G.
There is yet another way of defining immersion minors, which is equivalent to the lifting operation. We say that H is an immersion minor of G if there exists an injective mapping from vertices in H to vertices in G where the images of adjacent elements of H are connected in G by edge-disjoint paths.
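A minimal sketch of the lifting operation, assuming a multigraph stored as a multiset of edges; the representation and function name are invented for this example and are not taken from any graph library.

```python
# Lifting on an undirected multigraph stored as a Counter of frozenset edges,
# so that parallel edges created by a lift can be counted.
from collections import Counter

def lift(edges, v, u, w):
    """Replace the two edges (v,u) and (u,w) by the single edge (v,w)."""
    vu, uw, vw = frozenset((v, u)), frozenset((u, w)), frozenset((v, w))
    if edges[vu] == 0 or edges[uw] == 0:
        raise ValueError("both (v,u) and (u,w) must be present")
    edges[vu] -= 1
    edges[uw] -= 1
    edges[vw] += 1   # may create a parallel edge if (v,w) already existed
    return edges

# Lifting the path v-u-w yields the single edge v-w.
path = Counter({frozenset(('v', 'u')): 1, frozenset(('u', 'w')): 1})
lifted = lift(path, 'v', 'u', 'w')
print(lifted[frozenset(('v', 'w'))])   # 1
```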
The immersion minor relation is a well-quasi-ordering on the set of finite graphs and hence the result of Robertson and Seymour applies to immersion minors. This furthermore means that every immersion minor-closed family is characterized by a finite family of forbidden immersion minors.
In graph drawing, immersion minors arise as the planarizations of non-planar graphs: from a drawing of a graph in the plane, with crossings, one can form an immersion minor by replacing each crossing point by a new vertex, and in the process also subdividing each crossed edge into a path. This allows drawing methods for planar graphs to be extended to non-planar graphs.
Shallow minors
A shallow minor of a graph G is a minor in which the edges of G that were contracted to form the minor form a collection of disjoint subgraphs with low diameter. Shallow minors interpolate between the theories of graph minors and subgraphs, in that shallow minors with high depth coincide with the usual type of graph minor, while the shallow minors with depth zero are exactly the subgraphs. They also allow the theory of graph minors to be extended to classes of graphs such as the 1-planar graphs that are not closed under taking minors.
Parity conditions
An alternative and equivalent definition of a graph minor is that H is a minor of G whenever the vertices of H can be represented by a collection of vertex-disjoint subtrees of G, such that if two vertices are adjacent in H, there exists an edge with its endpoints in the corresponding two trees in G.
An odd minor restricts this definition by adding parity conditions to these subtrees. If H is represented by a collection of subtrees of G as above, then H is an odd minor of G whenever it is possible to assign two colors to the vertices of G in such a way that each edge of G within a subtree is properly colored (its endpoints have different colors) and each edge of G that represents an adjacency between two subtrees is monochromatic (both its endpoints are the same color). Unlike for the usual kind of graph minors, graphs with forbidden odd minors are not necessarily sparse. The Hadwiger conjecture, that k-chromatic graphs necessarily contain k-vertex complete graphs as minors, has also been studied from the point of view of odd minors.
A different parity-based extension of the notion of graph minors is the concept of a bipartite minor, which produces a bipartite graph whenever the starting graph is bipartite. A graph H is a bipartite minor of another graph G whenever H can be obtained from G by deleting vertices, deleting edges, and collapsing pairs of vertices that are at distance two from each other along a peripheral cycle of the graph. A form of Wagner's theorem applies for bipartite minors: A bipartite graph G is a planar graph if and only if it does not have the utility graph K3,3 as a bipartite minor.
Algorithms
The problem of deciding whether a graph G contains H as a minor is NP-complete in general; for instance, if H is a cycle graph with the same number of vertices as G, then H is a minor of G if and only if G contains a Hamiltonian cycle. However, when G is part of the input but H is fixed, it can be solved in polynomial time. More specifically, the running time for testing whether H is a minor of G in this case is O(n3), where n is the number of vertices in G and the big O notation hides a constant that depends superexponentially on H; since the original Graph Minors result, this algorithm has been improved to O(n2) time. Thus, by applying the polynomial time algorithm for testing whether a given graph contains any of the forbidden minors, it is theoretically possible to recognize the members of any minor-closed family in polynomial time. This result is not used in practice since the hidden constant is so huge (needing three layers of Knuth's up-arrow notation to express) as to rule out any application, making it a galactic algorithm. Furthermore, in order to apply this result constructively, it is necessary to know what the forbidden minors of the graph family are. In some cases, the forbidden minors are known, or can be computed.
If H is a fixed planar graph, then we can test in linear time in the size of an input graph G whether H is a minor of G. In cases where H is not fixed, faster algorithms are known in the case where G is planar.
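To make the branch-set definition from the "Parity conditions" section concrete, the sketch below is a brute-force minor test written for this article: it enumerates all assignments of the vertices of G to branch sets and runs in exponential time, so it only illustrates the definition on very small graphs and bears no relation to the polynomial-time Robertson–Seymour algorithm. All names are invented for the example.

```python
# Brute-force minor test based on the branch-set ("model") definition:
# H is a minor of G exactly when one can choose a nonempty, connected,
# pairwise-disjoint branch set of G-vertices for each vertex of H, with an
# edge of G joining the branch sets of every pair of adjacent H-vertices.
# Time (|V(H)|+1)**|V(G)|: purely illustrative.
from itertools import product

def _connected(vertices, adj):
    vertices = set(vertices)
    if not vertices:
        return False
    stack, seen = [next(iter(vertices))], set()
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] & vertices)
    return seen == vertices

def has_minor(g_adj, h_adj):
    g_vertices = list(g_adj)
    h_vertices = list(h_adj)
    labels = h_vertices + [None]          # None = vertex not used by any branch set
    for assignment in product(labels, repeat=len(g_vertices)):
        branch = {h: {v for v, lab in zip(g_vertices, assignment) if lab == h}
                  for h in h_vertices}
        if not all(branch[h] and _connected(branch[h], g_adj) for h in h_vertices):
            continue
        if all(any(b in g_adj[a] for a in branch[x] for b in branch[y])
               for x in h_adj for y in h_adj[x]):
            return True
    return False

# The triangle K3 is a minor of the 4-cycle (contract any one edge).
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
k3 = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
print(has_minor(c4, k3))   # True
```

For any fixed H, the Robertson–Seymour algorithm discussed above replaces this exhaustive search with a polynomial-time procedure.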
Notes
References
External links
Graph theory objects | Graph minor | [
"Mathematics"
] | 3,155 | [
"Mathematical relations",
"Graph minor theory",
"Graph theory",
"Graph theory objects"
] |
353,070 | https://en.wikipedia.org/wiki/Epiglottis | The epiglottis (plural: epiglottises or epiglottides) is a leaf-shaped flap in the throat that prevents food and water from entering the trachea and the lungs. It stays open during breathing, allowing air into the larynx. During swallowing, it closes to prevent aspiration of food into the lungs, forcing the swallowed liquids or food to go along the esophagus toward the stomach instead. It is thus the valve that diverts passage to either the trachea or the esophagus.
The epiglottis is made of elastic cartilage covered with a mucous membrane, attached to the entrance of the larynx. It projects upwards and backwards behind the tongue and the hyoid bone.
The epiglottis may be inflamed in a condition called epiglottitis, which is most commonly due to the vaccine-preventable bacterium Haemophilus influenzae. Dysfunction may cause the inhalation of food, called aspiration, which may lead to pneumonia or airway obstruction. The epiglottis is also an important landmark for intubation.
The epiglottis has been identified as early as Aristotle, and gets its name from being above the glottis (epi- + glottis).
Structure
The epiglottis sits at the entrance of the larynx. It is shaped like a leaf of purslane and has a free upper part that rests behind the tongue, and a lower stalk. The stalk originates from the back surface of the thyroid cartilage, connected by a thyroepiglottic ligament. At the sides, the stalk is connected to the arytenoid cartilages at the walls of the larynx by folds.
The epiglottis originates at the entrance of the larynx, and is attached to the hyoid bone. From there, it projects upwards and backwards behind the tongue. The space between the epiglottis and the tongue is called the vallecula.
Microanatomy
The epiglottis has two surfaces: a forward-facing surface, and a surface facing the larynx. The forward-facing surface is covered with several layers of thin cells (stratified squamous epithelium) and is not covered with keratin; the back of the tongue has the same kind of surface. The back surface is covered in a layer of column-shaped cells with cilia, similar to the rest of the respiratory tract. It also has mucus-secreting goblet cells. There is an intermediate zone between these surfaces that contains cells that transition in shape. The body of the epiglottis consists of elastic cartilage.
Development
The epiglottis arises from the fourth pharyngeal arch. It can be seen as a distinct structure later than the other cartilage of the pharynx, visible around the fifth month of development. The position of the epiglottis also changes with ageing. In infants, it touches the soft palate, whereas in adults, its position is lower.
Variation
A high-rising epiglottis is a normal anatomical variation, visible during an examination of the mouth. It does not cause any serious problem apart from maybe a mild sensation of a foreign body in the throat. It is seen more often in children than adults and does not need any medical or surgical intervention. The front surface of the epiglottis is occasionally notched.
Function
The epiglottis is normally pointed upward during breathing with its underside functioning as part of the pharynx. There are taste buds on the epiglottis.
Swallowing
During swallowing, the epiglottis bends backwards, folding over the entrance to the trachea, and preventing food from going into it. The folding backwards is a complex movement the causes of which are not completely understood. It is likely that during swallowing the hyoid bone and the larynx move upwards and forwards, which increases passive pressure from the back of the tongue; the aryepiglottic muscles contract; the passive weight of the food pushes down; and the laryngeal and thyroarytenoid muscles contract. The consequence of this is that during swallowing the bent epiglottis blocks off the trachea, preventing food from going into it; food instead travels down the esophagus, which is behind it.
Speech sounds
In many languages, the epiglottis is not essential for producing sounds. In some languages, the epiglottis is used to produce epiglottal consonant speech sounds, though this sound-type is rather rare.
Clinical significance
Inflammation
Inflammation of the epiglottis is known as epiglottitis. Epiglottitis is mainly caused by Haemophilus influenzae. A person with epiglottitis may have a fever, sore throat, difficulty swallowing, and difficulty breathing. For this reason, acute epiglottitis is considered a medical emergency, because of the risk of obstruction of the pharynx. Epiglottitis is often managed with antibiotics, inhaled aerosolised epinephrine to act as a bronchodilator, and may require tracheal intubation or a tracheostomy if breathing is difficult.
The incidence of epiglottitis has decreased significantly in countries where vaccination against Haemophilus influenzae is administered.
Aspiration
When food or other objects travel down the respiratory tract rather than down the esophagus to the stomach, this is called aspiration. This can lead to the obstruction of airways, inflammation of lung tissue, and aspiration pneumonia; and in the long term, atelectasis and bronchiectasis. One reason aspiration can occur is because of failure of the epiglottis to close completely.
If food or liquid enters the airway due to the epiglottis failing to close properly, throat-clearing or a cough reflex may occur to protect the respiratory system and expel material from the airway. Where there is impairment in laryngeal vestibule sensation, silent aspiration (entry of material to the airway that does not result in a cough reflex) may occur.
Other
The epiglottis and vallecula are important anatomical landmarks in intubation. Abnormal positioning of the epiglottis is a rare cause of obstructive sleep apnoea.
Other animals
The epiglottis is present in mammals, including land mammals and cetaceans, also as a cartilaginous structure. As in humans, it functions to prevent entry of food into the trachea during swallowing. The position of the larynx is flat in mice and other rodents, as well as rabbits. Because the epiglottis is located behind the soft palate in rabbits, they are obligate nose breathers, as are mice and other rodents. In rodents and mice, there is a unique pouch in front of the epiglottis, and the epiglottis is commonly injured by inhaled substances, particularly at the transition zone between the flattened and cuboidal epithelium. It is also common to see taste buds on the epiglottis in these species.
History
The epiglottis was noted by Aristotle, although the epiglottis' function was first defined by Vesalius in 1543. The word has Greek roots: the epiglottis gets its name from being above (epi-) the glottis.
Additional images
See also
Epiglottal consonant
Epiglotto-pharyngeal consonant
Pharyngeal consonant
References
External links
Where is the Epiglottis? at Study Sciences
Digestive system
Larynx
Human throat | Epiglottis | [
"Biology"
] | 1,605 | [
"Digestive system",
"Organ systems"
] |
353,077 | https://en.wikipedia.org/wiki/EROS%20%28microkernel%29 | Extremely Reliable Operating System (EROS) is an operating system developed starting in 1991 at the University of Pennsylvania, and then Johns Hopkins University, and The EROS Group, LLC. Features include automatic data and process persistence, some preliminary real-time support, and capability-based security. EROS is purely a research operating system, and was never deployed in real world use. Development eventually stopped in favor of a successor system, CapROS.
Key concepts
The overriding goal of the EROS system (and its relatives) is to provide strong support at the operating system level for the efficient restructuring of critical applications into small communicating components. Each component can communicate with the others only through protected interfaces, and is isolated from the rest of the system. A protected interface, in this context, is one that is enforced by the lowest level part of the operating system, the kernel. That is the only part of the system that can move information from one process to another. It also has complete control of the machine and (if properly constructed) cannot be bypassed. In EROS, the kernel-provided mechanism by which one component names and invokes the services of another is a capability, using inter-process communication (IPC). By enforcing capability-protected interfaces, the kernel ensures that all communications to a process arrive via an intentionally exported interface. It also ensures that no invocation is possible unless the invoking component holds a valid capability to the invoked component. Protection in capability systems is achieved by restricting the propagation of capabilities from one component to another, often through a security policy termed confinement.
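The toy Python sketch below illustrates the idea of capability-mediated invocation in a purely conceptual way; it is not EROS's kernel interface, and all class and method names are invented for the example. A holder can invoke only the methods its capability authorises, mirroring how the kernel refuses invocations made without a valid capability.

```python
# Conceptual illustration of capability-mediated invocation (not EROS's API).
class Capability:
    def __init__(self, target, allowed_methods):
        self._target = target
        self._allowed = frozenset(allowed_methods)

    def invoke(self, method, *args):
        if method not in self._allowed:
            raise PermissionError(f"capability does not authorise {method!r}")
        return getattr(self._target, method)(*args)

class Logger:
    def append(self, line):          # exported interface
        print("log:", line)
    def erase(self):                 # not exported to untrusted holders
        print("log erased")

# A component is handed a capability that exposes only 'append'.
log_cap = Capability(Logger(), {"append"})
log_cap.invoke("append", "hello")    # allowed
# log_cap.invoke("erase")            # would raise PermissionError
```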
Capability systems naturally promote component-based software structure. This organizational approach is similar to the programming language concept of object-oriented programming, but occurs at larger granularity and does not include the concept of inheritance. When software is restructured in this way, several benefits emerge:
The individual components are most naturally structured as event loops. Examples of systems that are commonly structured this way include aircraft flight control systems (see also DO-178B Software Considerations in Airborne Systems and Equipment Certification), and telephone switching systems (see 5ESS switch). Event-driven programming is chosen for these systems mainly because of simplicity and robustness, which are essential attributes in life-critical and mission-critical systems.
Components become smaller and individually testable, which helps to more readily isolate and identify flaws and bugs.
The isolation of each component from the others limits the scope of any damage that may occur when something goes wrong or the software misbehaves.
Collectively, these benefits lead to measurably more robust and secure systems. The Plessey System 250 was a system originally designed for use in telephony switches, whose capability-based design was chosen specifically for reasons of robustness.
In contrast to many earlier systems, capabilities are the only mechanism for naming and using resources in EROS, making it what is sometimes referred to as a pure capability system. In contrast, IBM i is an example of a commercially successful capability system, but it is not a pure capability system.
Pure capability architectures are supported by well-tested and mature mathematical security models. These have been used to formally demonstrate that capability-based systems can be made secure if implemented correctly. The so-called "safety property" has been shown to be decidable for pure capability systems (see Lipton). Confinement, which is the fundamental building block of isolation, has been formally verified to be enforceable by pure capability systems, and is reduced to practical implementation by the EROS constructor and the KeyKOS factory. No comparable verification exists for any other primitive protection mechanism. There is a fundamental result in the literature showing that safety is mathematically undecidable in the general case (see HRU, but note that it is of course provable for an unbounded set of restricted cases). Of greater practical importance, safety has been shown to be false for all of the primitive protection mechanisms shipping in current commodity operating systems. Safety is a necessary precondition to successful enforcement of any security policy. In practical terms, this result means that it is not possible in principle to secure current commodity systems, but it is potentially possible to secure capability-based systems provided they are implemented with sufficient care. Neither EROS nor KeyKOS has ever been successfully penetrated, and their isolation mechanisms have never been successfully defeated by any inside attacker, but it is not known whether the two implementations were careful enough. One goal of the Coyotos project was to demonstrate that component isolation and security has been definitively achieved by applying software verification techniques.
The L4.sec system, which is a successor to the L4 microkernel family, is a capability-based system, and has been significantly influenced by the results of the EROS project. The influence is mutual, since the EROS work on high-performance invocation was motivated strongly by Jochen Liedtke's successes with the L4 microkernel family.
History
The primary developer of EROS was Jonathan S. Shapiro. He was also the driving force behind Coyotos, which was an "evolutionary step" beyond the EROS operating system.
The EROS project started in 1991 as a clean-room reconstruction of an earlier operating system, KeyKOS. KeyKOS was developed by Key Logic, Inc., and was a direct continuation of work on the earlier Great New Operating System In the Sky (GNOSIS) system created by Tymshare, Inc. The circumstances surrounding Key Logic's demise in 1991 made licensing KeyKOS impractical. Since KeyKOS did not run on popular commodity processors in any case, the decision was made to reconstruct it from the publicly available documentation.
By late 1992, it had become clear that processor architecture had changed significantly since the introduction of the capability idea, and it was no longer obvious that component-structured systems were practical. Microkernel-based systems, which similarly favor large numbers of processes and IPC, were facing severe performance challenges, and it was uncertain if these could be successfully resolved. The x86 architecture was clearly emerging as the dominant architecture but the expensive user/supervisor transition latency on the 386 and 486 presented serious challenges for process-based isolation. The EROS project was turning into a research effort, and moved to the University of Pennsylvania to become the focus of Shapiro's dissertation research. By 1999, a high performance implementation for the Pentium processor had been demonstrated that was directly performance competitive with the L4 microkernel family, which is known for its exceptional speed in IPC. The EROS confinement mechanism had been formally verified, in the process creating a general formal model for secure capability systems.
In 2000, Shapiro joined the faculty of Computer Science at Johns Hopkins University. At Hopkins, the goal was to show how to use the facilities provided by the EROS kernel to construct secure and defensible servers at application level. Funded by the Defense Advanced Research Projects Agency and the Air Force Research Laboratory, EROS was used as the basis for a trusted window system, a high-performance, defensible network stack, and the beginnings of a secure web browser. It was also used to explore the effectiveness of lightweight static checking. In 2003, some very challenging security issues were discovered that are intrinsic to any system architecture based on synchronous IPC primitives (notably including EROS and L4). Work on EROS halted in favor of Coyotos, which resolved these issues.
EROS and its successors are the only widely available capability systems that run on commodity hardware.
Status
Work on EROS and Coyotos by the original group has halted, but there is a successor system. CapROS (Capability Based Reliable Operating System), a successor of EROS, is an open-source, commercially oriented operating system.
See also
Nanokernel
References
Journals
External links
Microkernels
Real-time operating systems
Capability systems
X86 operating systems | EROS (microkernel) | [
"Technology"
] | 1,603 | [
"Real-time operating systems",
"Capability systems",
"Real-time computing",
"Computer systems"
] |
353,091 | https://en.wikipedia.org/wiki/Malacology | Malacology is the branch of invertebrate zoology that deals with the study of the Mollusca (molluscs or mollusks), the second-largest phylum of animals in terms of described species after the arthropods. Mollusks include snails and slugs, clams, and cephalopods, along with numerous other kinds, many of which have shells. Malacology derives its name from the Greek malakos, 'soft', and '-logy', 'study of'.
Fields within malacological research include taxonomy, ecology and evolution. Several subdivisions of malacology exist, including conchology, devoted to the study of mollusk shells, and teuthology, the study of cephalopods such as octopus, squid, and cuttlefish. Applied malacology studies medical, veterinary, and agricultural applications, for example the study of mollusks as vectors of schistosomiasis and other diseases.
Archaeology employs malacology to understand the evolution of the climate, the biota of the area, and the usage of the site.
Zoological methods are used in malacological research. Malacological field methods and laboratory methods (such as collecting, documenting and archiving, and molecular techniques) were summarized by Sturm et al. (2006).
History
Malacology evolved from the earlier discipline of conchology, which focused solely on the collection and classification of shells. The transformation into a comprehensive field of biological study occurred over several key historical milestones.
Early period pre-1795
Before the late 18th century, the study of mollusks was limited to conchology, emphasizing the aesthetic and taxonomic value of shells. During this time, the term "mollusks" referred only to shell-less species such as cephalopods and slugs. Organisms with shells were classified under "Testacea", reflecting a limited understanding of their broader biological characteristics.
The contributions of Cuvier
In 1795, French naturalist Georges Cuvier introduced a new classification system for invertebrates based on anatomical observations. He proposed that mollusks represented a distinct group of organisms unified by common morphological traits. This approach laid the groundwork for the transition from conchology to malacology, as it highlighted the importance of internal anatomy over external shell features.
Early 19th century
Following Cuvier’s work, the early 19th century saw an expansion of the field’s focus. Scientists began studying not only the external shells of mollusks but also their internal anatomy, physiological functions, and ecological roles. This marked a shift toward viewing mollusks as complete organisms, rather than merely as shell producers. The term "malacology" was officially introduced in 1825 by French zoologist and anatomist Henri-Marie Ducrotay de Blainville. Derived from the Greek word "malakos" (meaning "soft"), it reflected a broader interest in the biological and ecological characteristics of mollusks, including their soft body structures. This moment is considered the formal establishment of malacology as a distinct scientific discipline.
Late 19th century and beyond
By the late 19th century, malacology had expanded further to encompass evolutionary biology, taxonomy, and ecology. Researchers investigated the relationships between mollusks and other invertebrates, as well as their roles in various ecosystems. The discipline continued to integrate new methodologies and technologies, solidifying its place within zoology.
Malacologists
Those who study malacology are known as malacologists. Those who study primarily or exclusively the shells of mollusks are known as conchologists, while those who study mollusks of the class Cephalopoda are teuthologists.
Societies
Argentine Association of Malacology (Asociación Argentina de Malacología)
American Malacological Society
Association of Polish Malacologists
Belgian Malacological Society – French speaking
– Dutch speaking
Brazilian Malacological Society
Conchological Society of Great Britain and Ireland
Conchologists of America
Dutch Malacological Society
Estonian Malacological Society
European Quaternary Malacologists
Freshwater Mollusk Conservation Society
German Malacological Society
Hungarian Malacological Society
Italian Malacological Society
Malacological Society of Australasia
Malacological Society of London
Malacological Society of the Philippines, Inc.
Mexican Malacological Society
Spanish Malacological Society
Western Society of Malacologists
Journals
More than 150 journals in the field of malacology are published in more than 30 countries, producing a large number of scientific articles. They include:
American Journal of Conchology (1865–1872)
American Malacological Bulletin
Basteria
Bulletin of Russian Far East Malacological Society
Fish & Shellfish Immunology
Folia conchyliologica
Folia Malacologica
Heldia
Johnsonia
Journal de Conchyliologie – volumes 1850–1922 at Biodiversity Heritage Library; volumes 1850–1938 at Bibliothèque nationale de France
Journal of Conchology
Journal of Medical and Applied Malacology
Journal of Molluscan Studies
Malacologia
Malacologica Bohemoslovaca
Malacological Review – volume 1 (1968) – today, contents of volume 27 (1996) – volume 40 (2009)
Soosiana
Zeitschrift für Malakozoologie (1844–1853) → Malakozoologische Blätter (1854–1878)
Miscellanea Malacologica
Mollusca
Molluscan Research – impact factor: 0.606 (2007)
Mitteilungen der Deutschen Malakozoologischen Gesellschaft
Occasional Molluscan Papers (since 2008)
Occasional Papers on Mollusks (1945–1989), 5 volumes
Ruthenica
Strombus
Tentacle – The Newsletter of the Mollusc Specialist Group of the Species Survival Commission of the International Union for Conservation of Nature.
The Conchologist (1891–1894) → The Journal of Malacology (1894–1905)
The Festivus – a journal which started as a club newsletter in 1970, published by the San Diego Shell Club
The Nautilus – since 1886 published by Bailey-Matthews Shell Museum. First two volumes were published under name The Conchologists’ Exchange. Impact factor: 0.500 (2009)
The Veliger – impact factor: 0.606 (2003)
貝類学雑誌 Venus (Japanese Journal of Malacology)
Vita Malacologica a Dutch journal published in English – one themed issue a year
Vita Marina (discontinued in May 2001)
Museums
Museums that have either exceptional malacological research collections (behind the scenes) and/or exceptional public exhibits of mollusks:
Academy of Natural Sciences of Philadelphia
American Museum of Natural History
Bailey-Matthews Shell Museum
Cau del Cargol Shell Museum
Maria Mitchell Association
Museum of Comparative Zoology at Harvard
National Museum of Natural History, France
Natural History Museum, London
Rinay
Royal Belgian Institute of Natural Sciences, Brussels: with a collection of more than 9 million shells (mainly from the collection of Philippe Dautzenberg)
Smithsonian Institution
See also
Invertebrate paleontology
History of invertebrate paleozoology
Treatise on Invertebrate Paleontology
Notes
References
Further reading
Cox L. R. & Peake J. F. (eds.). Proceedings of the First European Malacological Congress. September 17–21, 1962. Text in English with black-and-white photographic reproductions, also maps and diagrams. Published by the Conchological Society of Great Britain and Ireland and the Malacological Society of London in 1965 with no ISBN.
Heppel D. (1995). "The long dawn of Malacology: a brief history of malacology from prehistory to the year 1800." Archives of Natural History 22(3): 301–319.
External links
Periodicals about molluscs at WorldCat
Subfields of zoology
Marine biology | Malacology | [
"Biology"
] | 1,568 | [
"Subfields of zoology",
"Marine biology"
] |
353,673 | https://en.wikipedia.org/wiki/FOX%20proteins | FOX (forkhead box) proteins are a family of transcription factors that play important roles in regulating the expression of genes involved in cell growth, proliferation, differentiation, and longevity. Many FOX proteins are important to embryonic development. FOX proteins also have pioneering transcription activity by being able to bind condensed chromatin during cell differentiation processes.
The defining feature of FOX proteins is the forkhead box, a sequence of 80 to 100 amino acids forming a motif that binds to DNA. This forkhead motif is also known as the winged helix, due to the butterfly-like appearance of the loops in the protein structure of the domain. Forkhead proteins are a subgroup of the helix-turn-helix class of proteins.
Biological roles
Many genes encoding FOX proteins have been identified. For example, the FOXF2 gene encodes forkhead box F2, one of many human homologues of the Drosophila melanogaster transcription factor forkhead. FOXF2 is expressed in the lung and placenta.
Some FOX genes are downstream targets of the hedgehog signaling pathway, which plays a role in the development of basal cell carcinomas. Members of class O (FOXO proteins) regulate metabolism, cellular proliferation, stress tolerance and possibly lifespan. The activity of FOXO proteins is controlled by post-translational modifications, including phosphorylation, acetylation and ubiquitination.
Discovery
The founding member and namesake of the FOX family is the fork head transcription factor in Drosophila, discovered by German biologists Detlef Weigel and Herbert Jäckle. Since then a large number of family members have been discovered, especially in vertebrates. Originally, they were given vastly different names (such as HFH, FREAC, and fkh), but in 2000 a unified nomenclature was introduced that grouped the FOX proteins into subclasses (FOXA-FOXS) based on sequence conservation.
Genes
FOXA1, FOXA2, FOXA3 (See also Hepatocyte nuclear factors.)
FOXB1, FOXB2
FOXC1 (associated with glaucoma), FOXC2 (varicose veins)
FOXD1, FOXD2, FOXD3 (vitiligo), FOXD4, FOXD4L1, FOXD4L3, FOXD4L4, FOXD4L5, FOXD4L6
FOXE1 (thyroid), FOXE3 (lens)
FOXF1 (lung), FOXF2
FOXG1 (brain)
FOXH1 (widely expressed)
FOXI1 (ear), FOXI2, FOXI3
FOXJ1 (cilia), FOXJ2 (erythroid), FOXJ3
FOXK1, FOXK2 (HIV, IL-2, adrenal)
FOXL1 (ovary), FOXL2
FOXM1 (cell cycle, erythroid, cancer)
FOXN1 (hair, thymus), FOXN2, FOXN3 (cell cycle checkpoints; widely expressed), FOXN4
FOXO1 (widely expressed: muscle, liver, pancreas), FOXO3 (apoptosis, erythroid, longevity), FOXO4 (widely expressed), FOXO6 (liver, skeletal muscle, brain)
FOXP1 (pluripotency then brain, heart and lung), FOXP2 (widely expressed? brain; language), FOXP3 (T cells), FOXP4 – may be ancestrally responsible for motor learning, based on insect studies (insects have only a single FoxP gene)
FOXQ1
FOXR1, FOXR2
FOXS1
Cancer
A member of the FOX family, FOXD2, has been found to be progressively overexpressed in human-papillomavirus-positive neoplastic keratinocytes derived from uterine cervical preneoplastic lesions at different levels of malignancy. For this reason, this gene is likely to be associated with tumorigenesis and may be a potential prognostic marker for the progression of uterine cervical preneoplastic lesions.
References
External links
Aging-related proteins | FOX proteins | [
"Biology"
] | 859 | [
"Senescence",
"Aging-related proteins"
] |
353,697 | https://en.wikipedia.org/wiki/Sonic%20hedgehog%20protein | Sonic hedgehog protein (SHH) is encoded for by the SHH gene. The protein is named after the video game character Sonic the Hedgehog.
This signaling molecule is key in regulating embryonic morphogenesis in all animals. SHH controls organogenesis and the organization of the central nervous system, limbs, digits and many other parts of the body. Sonic hedgehog is a morphogen that patterns the developing embryo using a concentration gradient characterized by the French flag model. This model has a non-uniform distribution of SHH molecules which governs different cell fates according to concentration. Mutations in this gene can cause holoprosencephaly, a failure of splitting in the cerebral hemispheres, as demonstrated in an experiment using SHH knock-out mice in which the forebrain midline failed to develop and instead only a single fused telencephalic vesicle resulted.
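As a toy illustration of the French flag idea described above, the short Python sketch below assigns one of three fates to cells according to which threshold band an exponentially decaying concentration falls into; the decay constant, thresholds and fate labels are arbitrary values chosen only for illustration, not measured SHH quantities.

```python
# Toy French flag model: a morphogen concentration decays with distance from
# its source, and each cell adopts one of three fates depending on which
# threshold band the local concentration falls into.
import math

def concentration(distance, c0=1.0, decay=0.5):
    return c0 * math.exp(-decay * distance)

def cell_fate(c, high=0.6, low=0.25):
    if c >= high:
        return "blue"     # fate nearest the source (highest concentration)
    if c >= low:
        return "white"    # intermediate fate
    return "red"          # fate farthest from the source

for d in range(8):
    c = concentration(d)
    print(f"distance {d}: concentration {c:.2f} -> {cell_fate(c)}")
```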
Sonic hedgehog still plays a role in differentiation, proliferation, and maintenance of adult tissues. Abnormal activation of SHH signaling in adult tissues has been implicated in various types of cancers including breast, skin, brain, liver, gallbladder and many more.
Discovery and naming
The hedgehog gene (hh) was first identified in the fruit fly Drosophila melanogaster in the classic Heidelberg screens of Christiane Nüsslein-Volhard and Eric Wieschaus, as published in 1980. These screens, which led to the researchers winning a Nobel Prize in 1995 along with developmental geneticist Edward B. Lewis, identified genes that control the segmentation pattern of the Drosophila embryos. The hh loss of function mutant phenotype causes the embryos to be covered with denticles, i.e. small pointy projections resembling the spikes of a hedgehog. Investigations aimed at finding a hedgehog equivalent in vertebrates by Philip Ingham, Andrew P. McMahon and Clifford Tabin revealed three homologous genes.
Two of these genes, desert hedgehog and Indian hedgehog, were named for species of hedgehogs, while sonic hedgehog was named after the video game character Sonic the Hedgehog. The gene was named by Robert Riddle, a postdoctoral fellow at the Tabin Lab, after his wife Betsy Wilder came home with a magazine containing an advert for the first game in the series, Sonic the Hedgehog (1991). In the zebrafish, two of the three vertebrate hh genes are duplicated: SHH a and SHH b (formerly described as tiggywinkle hedgehog, named for Mrs. Tiggy-Winkle, a character from Beatrix Potter's books for children) and ihha and ihhb (formerly described as echidna hedgehog, named for the spiny anteater and not for the character Knuckles the Echidna in the Sonic franchise).
Function
Of the hh homologues, SHH has been found to have the most critical roles in development, acting as a morphogen involved in patterning many systems—including the anterior pituitary, pallium of the brain, spinal cord, lungs, teeth and the thalamus by the zona limitans intrathalamica. In vertebrates, the development of limbs and digits depends on the secretion of sonic hedgehog by the zone of polarizing activity, located on the posterior side of the embryonic limb bud. Mutations in the human sonic hedgehog gene SHH cause holoprosencephaly type 3 (HPE3), as a result of the loss of the ventral midline. The sonic hedgehog transcription pathway has also been linked to the formation of specific kinds of cancerous tumors, including the embryonic cerebellar tumor medulloblastoma, as well as the progression of prostate cancer tumours. For SHH to be expressed in the developing embryo limbs, signaling molecules called fibroblast growth factors (FGFs) must be secreted from the apical ectodermal ridge.
Sonic hedgehog has also been shown to act as an axonal guidance cue. It has been demonstrated that SHH attracts commissural axons at the ventral midline of the developing spinal cord. Specifically, SHH attracts retinal ganglion cell (RGC) axons at low concentrations and repels them at higher concentrations. The absence (non-expression) of SHH has been shown to control the growth of nascent hind limbs in cetaceans (whales and dolphins).
The SHH gene is a member of the hedgehog gene family with five variations of DNA sequence alterations or splice variants. SHH is located on chromosome seven and initiates the production of Sonic Hedgehog protein. This protein sends short- and long-range signals to embryonic tissues to regulate development. If the SHH gene is mutated or absent, the protein Sonic Hedgehog cannot do its job properly. Sonic hedgehog contributes to cell growth, cell specification and formation, and the structuring and organization of the body plan. This protein functions as a vital morphogenic signaling molecule and plays an important role in the formation of many different structures in developing embryos. The SHH gene affects several major organ systems, such as the nervous system, cardiovascular system, respiratory system and musculoskeletal system. Mutations in the SHH gene can cause malformation of components of these systems, which can result in major problems in the developing embryo. The brain and eyes, for example, can be significantly impacted by mutations in this gene, leading to disorders such as microphthalmia and holoprosencephaly. Microphthalmia is a condition that affects the eyes, resulting in small, underdeveloped tissues in one or both eyes. This can lead to issues ranging from a coloboma to a single small eye to the absence of eyes altogether. Holoprosencephaly is a condition most commonly caused by a mutation of the SHH gene; it results in improper separation of the left and right brain hemispheres and in facial dysmorphia. Many systems and structures rely heavily on proper expression of the SHH gene and the resulting sonic hedgehog protein, making it an essential gene for development.
Patterning of the central nervous system
The sonic hedgehog (SHH) signaling molecule assumes various roles in patterning the central nervous system (CNS) during vertebrate development. One of the most characterized functions of SHH is its role in the induction of the floor plate and diverse ventral cell types within the neural tube. The notochord—a structure derived from the axial mesoderm—produces SHH, which travels extracellularly to the ventral region of the neural tube and instructs those cells to form the floor plate. Another view of floor plate induction hypothesizes that some precursor cells located in the notochord are inserted into the neural plate before its formation, later giving rise to the floor plate.
The neural tube itself is the initial groundwork of the vertebrate CNS, and the floor plate is a specialized structure, located at the ventral midpoint of the neural tube. Evidence supporting the notochord as the signaling center comes from studies in which a second notochord is implanted near a neural tube in vivo, leading to the formation of an ectopic floor plate within the neural tube.
Sonic hedgehog is the secreted protein that mediates signaling activities of the notochord and floor plate. Studies involving ectopic expression of SHH in vitro and in vivo result in floor plate induction and differentiation of motor neuron and ventral interneurons. On the other hand, mice mutants for SHH lack ventral spinal cord characteristics. In vitro blocking of SHH signaling using antibodies against it shows similar phenotypes. SHH exerts its effects in a concentration-dependent manner, so that a high concentration of SHH results in a local inhibition of cellular proliferation. This inhibition causes the floor plate to become thin compared to the lateral regions of the neural tube. Lower concentration of SHH results in cellular proliferation and induction of various ventral neural cell types. Once the floor plate is established, cells residing in this region will subsequently express SHH themselves, generating a concentration gradient within the neural tube.
Although there is no direct evidence of a SHH gradient, there is indirect evidence via the visualization of Patched (Ptc) gene expression, which encodes for the ligand binding domain of the SHH receptor throughout the ventral neural tube. In vitro studies show that incremental two- and threefold changes in SHH concentration give rise to motor neuron and different interneuronal subtypes as found in the ventral spinal cord. These incremental changes in vitro correspond to the distance of domains from the signaling tissue (notochord and floor plate) which subsequently differentiates into different neuronal subtypes as it occurs in vitro. Graded SHH signaling is suggested to be mediated through the Gli family of proteins, which are vertebrate homologues of the Drosophila zinc-finger-containing transcription factor Cubitus interruptus (Ci). Ci is a crucial mediator of hedgehog (Hh) signaling in Drosophila. In vertebrates, three different Gli proteins are present, viz. Gli1, Gli2 and Gli3, which are expressed in the neural tube. Mice mutants for Gli1 show normal spinal cord development, suggesting that it is dispensable for mediating SHH activity. However, Gli2 mutant mice show abnormalities in the ventral spinal cord, with severe defects in the floor plate and ventral-most interneurons (V3). Gli3 antagonizes SHH function in a dose-dependent manner, promoting dorsal neuronal subtypes. SHH mutant phenotypes can be rescued in a SHH/Gli3 double mutant. Gli proteins have a C-terminal activation domain and an N-terminal repressive domain.
SHH is suggested to promote the activation function of Gli2 and inhibit repressive activity of Gli3. SHH also seems to promote the activation function of Gli3, but this activity is not strong enough. The graded concentration of SHH gives rise to graded activity of Gli 2 and Gli3, which promote ventral and dorsal neuronal subtypes in the ventral spinal cord. Evidence from Gli3 and SHH/Gli3 mutants show that SHH primarily regulates the spatial restriction of progenitor domains rather than being inductive, as SHH/Gli3 mutants show intermixing of cell types.
SHH also induces other proteins with which it interacts, and these interactions can influence the sensitivity of a cell towards SHH. Hedgehog-interacting protein (HHIP) is induced by SHH, which in turn attenuates its signaling activity. Vitronectin is another protein that is induced by SHH; it acts as an obligate co-factor for SHH signaling in the neural tube.
There are five distinct progenitor domains in the ventral neural tube: V3 interneurons, motor neurons (MN), V2, V1, and V0 interneurons (in ventral to dorsal order). These different progenitor domains are established by "communication" between different classes of homeobox transcription factors. (See Trigeminal Nerve.) These transcription factors respond to SHH gradient concentration. Depending upon the nature of their interaction with SHH, they are classified into two groups—class I and class II—and are composed of members from the Pax, Nkx, Dbx and Irx families. Class I proteins are repressed at different thresholds of SHH delineating ventral boundaries of progenitor domains, while class II proteins are activated at different thresholds of SHH delineating the dorsal limit of domains. Selective cross-repressive interactions between class I and class II proteins give rise to five cardinal ventral neuronal subtypes.
It is important to note that SHH is not the only signaling molecule exerting an effect on the developing neural tube. Many other molecules, pathways and mechanisms are active (e.g., RA, FGF, BMP), and complex interactions between SHH and other molecules are possible. BMPs are suggested to play a critical role in determining the sensitivity of neural cell to SHH signaling. Evidence supporting this comes from studies using BMP inhibitors that ventralize the fate of the neural plate cell for a given SHH concentration. On the other hand, mutation in BMP antagonists (e.g., noggin) produces severe defects in the ventral-most characteristics of the spinal cord, followed by ectopic expression of BMP in the ventral neural tube. Interactions of SHH with Fgf and RA have not yet been studied in molecular detail.
Morphogenetic activity
The concentration- and time-dependent, cell-fate-determining activity of SHH in the ventral neural tube makes it a prime example of a morphogen. In vertebrates, SHH signaling in the ventral portion of the neural tube is most notably responsible for the induction of floor plate cells and motor neurons. SHH emanates from the notochord and ventral floor plate of the developing neural tube to create a concentration gradient that spans the dorso-ventral axis and is antagonized by an inverse Wnt gradient, which specifies the dorsal spinal cord. Higher concentrations of the SHH ligand are found in the most ventral aspects of the neural tube and notochord, while lower concentrations are found in the more dorsal regions of the neural tube. The SHH concentration gradient has been visualized in the neural tube of mice engineered to express a SHH::GFP fusion protein to show this graded distribution of SHH during the time of ventral neural tube patterning.
It is thought that the SHH gradient works to elicit multiple different cell fates by a concentration- and time-dependent mechanism that induces a variety of transcription factors in the ventral progenitor cells. Each of the ventral progenitor domains expresses a highly individualized combination of transcription factors—Nkx2.2, Olig2, Nkx6.1, Nkx6.2, Dbx1, Dbx2, Irx3, Pax6, and Pax7—that is regulated by the SHH gradient. These transcription factors are induced sequentially along the SHH concentration gradient with respect to the amount and time of exposure to SHH ligand. As each population of progenitor cells responds to the different levels of SHH protein, they begin to express a unique combination of transcription factors that leads to neuronal cell fate differentiation. This SHH-induced differential gene expression creates sharp boundaries between the discrete domains of transcription factor expression, which ultimately patterns the ventral neural tube.
The spatial and temporal aspect of the progressive induction of genes and cell fates in the ventral neural tube is illustrated by the expression domains of two of the most well-characterized transcription factors, Olig2 and Nkx2.2. Early in development, the cells at the ventral midline have only been exposed to a low concentration of SHH for a relatively short time and express the transcription factor Olig2. The expression of Olig2 rapidly expands in a dorsal direction concomitantly with the continuous dorsal extension of the SHH gradient over time. However, as the morphogenetic front of SHH ligand moves and begins to grow more concentrated, cells that are exposed to higher levels of the ligand respond by switching off Olig2 and turning on Nkx2.2, creating a sharp boundary between the cells expressing the transcription factor Nkx2.2 ventral to the cells expressing Olig2. It is in this way that each of the domains of the six progenitor cell populations are thought to be successively patterned throughout the neural tube by the SHH concentration gradient. Mutual inhibition between pairs of transcription factors expressed in neighboring domains contributes to the development of sharp boundaries; however, in some cases, inhibitory relationship has been found even between pairs of transcription factors from more distant domains. Particularly, NKX2-2 expressed in the V3 domain is reported to inhibit IRX3 expressed in V2 and more dorsal domains, although V3 and V2 are separated by a further domain termed MN.
SHH expression in the frontonasal ectodermal zone (FEZ), a signaling center responsible for the patterned development of the upper jaw, regulates craniofacial development through the miR-199 family in the FEZ. Specifically, SHH-dependent signals from the brain regulate genes of the miR-199 family: downregulation of the miR-199 genes increases SHH expression and results in wider faces, while upregulation of the miR-199 genes decreases SHH expression, resulting in narrower faces.
Tooth development
SHH plays an important role in organogenesis and, most importantly, craniofacial development. Because SHH is a signaling molecule, it works primarily by diffusion along a concentration gradient, affecting cells in different manners. In early tooth development, SHH is released from the primary enamel knot—a signaling center—to provide positional information in both a lateral and planar signaling pattern in tooth development and regulation of tooth cusp growth. SHH in particular is needed for growth of epithelial cervical loops, where the outer and inner epitheliums join and form a reservoir for dental stem cells. After the primary enamel knots undergo apoptosis, the secondary enamel knots are formed. The secondary enamel knots secrete SHH in combination with other signaling molecules to thicken the oral ectoderm and begin patterning the complex shapes of the crown of a tooth during differentiation and mineralization. In gene knockout models, absence of SHH results in holoprosencephaly. SHH also activates the downstream molecules Gli2 and Gli3. Mutant Gli2 and Gli3 embryos have abnormal development of incisors that are arrested in early tooth development as well as small molars.
Lung development
Although SHH is most commonly associated with brain and limb digit development, it is also important in lung development. Studies using qPCR and knockouts have demonstrated that SHH contributes to embryonic lung development. Mammalian lung branching occurs in the epithelium of the developing bronchi and lungs. SHH is expressed throughout the foregut endoderm (innermost of three germ layers) in the distal epithelium, where the embryonic lungs are developing. This suggests that SHH is partially responsible for the branching of the lungs. Further evidence of SHH's role in lung branching has been seen with qPCR. SHH expression occurs in the developing lungs around embryonic day 11 and is strongly expressed in the buds of the fetal lungs but low in the developing bronchi. Mice that are deficient in SHH can develop tracheoesophageal fistula (abnormal connection of the esophagus and trachea). Additionally, an SHH−/− knockout mouse model exhibited poor lung development. The lungs of the SHH-null mice failed to undergo lobation and branching (i.e., the abnormal lungs only developed one branch, compared to an extensively branched phenotype of the wildtype).
Potential regenerative function
Sonic hedgehog may play a role in mammalian hair cell regeneration. By modulating retinoblastoma protein activity in rat cochlea, sonic hedgehog allows mature hair cells that normally cannot return to a proliferative state to divide and differentiate. Retinoblastoma proteins suppress cell growth by preventing cells from returning to the cell cycle, thereby preventing proliferation. Inhibiting the activity of Rb seems to allow cells to divide. Therefore, sonic hedgehog—identified as an important regulator of Rb—may also prove to be an important feature in regrowing hair cells after damage.
SHH is important for regulating dermal adipogenesis by hair follicle transit-amplifying cells (HF-TACs). Specifically, SHH induces dermal adipogenesis by acting directly on adipocyte precursors and promoting their proliferation through their expression of the peroxisome proliferator-activated receptor γ (Pparg) gene.
Processing
SHH undergoes a series of processing steps before it is secreted from the cell. Newly synthesised SHH weighs 45 kDa and is referred to as the preproprotein. As a secreted protein, it contains a short signal sequence at its N-terminus, which is recognised by the signal recognition particle during the translocation into the endoplasmic reticulum (ER), the first step in protein secretion. Once translocation is complete, the signal sequence is removed by signal peptidase in the ER. There, SHH undergoes autoprocessing to generate a 20 kDa N-terminal signaling domain (SHH-N) and a 25 kDa C-terminal domain with no known signaling role. The cleavage is catalysed by a protease within the C-terminal domain. During the reaction, a cholesterol molecule is added to the C-terminus of SHH-N. Thus, the C-terminal domain acts as an intein and a cholesterol transferase. Another hydrophobic moiety, a palmitate, is added to the alpha-amine of the N-terminal cysteine of SHH-N. This modification, carried out by Protein-cysteine N-palmitoyltransferase HHAT, a member of the membrane-bound O-acyltransferase family, is required for efficient signaling and results in a 30-fold increase in potency over the non-palmitylated form.
Robotnikinin
A potential inhibitor of the Hedgehog signaling pathway has been found and dubbed "Robotnikinin"—after Sonic the Hedgehog's nemesis and the main antagonist of the Sonic the Hedgehog game series, Dr. Ivo "Eggman" Robotnik.
Former controversy surrounding name
The gene has been linked to a condition known as holoprosencephaly, which can result in severe brain, skull and facial defects, causing a few clinicians and scientists to criticize the name on the grounds that it sounds too frivolous. It has been noted that mention of a mutation in a sonic hedgehog gene might not be well received in a discussion of a serious disorder with a patient or their family. This controversy has largely died down, and the name is now generally seen as a humorous relic of the time before the rise of fast, cheap complete genome sequencing and standardized nomenclature. The problem of the "inappropriateness" of the names of genes such as "Mothers against decapentaplegic," "Lunatic fringe," and "Sonic hedgehog" is largely avoided by using standardized abbreviations when speaking with patients and their families.
Gallery
See also
Pikachurin, a retinal protein named after Pikachu
Zbtb7, an oncogene which was originally named "Pokémon"
References
Further reading
External links
An introductory article on SHH at Davidson College
Rediscovering Biology: Unit 7, Genetics of Development. Expert interview transcript with John Incardona, PhD, explaining the discovery and naming of the sonic hedgehog gene
‘Sonic Hedgehog’ sounded funny at first. New York Times, November 12, 2006
GeneReviews/NCBI/NIH/UW entry on Anophthalmia / Microphthalmia Overview
SHH – sonic hedgehog US National Library of Medicine
Proteins
Morphogens
HINT domain
Cell signaling
Ligands (biochemistry)
Genes on human chromosome 7
Sonic the Hedgehog | Sonic hedgehog protein | [
"Chemistry",
"Biology"
] | 4,899 | [
"Biomolecules by chemical classification",
"Signal transduction",
"Ligands (biochemistry)",
"Molecular biology",
"Proteins",
"Morphogens",
"Induced stem cells"
] |
353,748 | https://en.wikipedia.org/wiki/List%20of%20computability%20and%20complexity%20topics | This is a list of computability and complexity topics, by Wikipedia page.
Computability theory is the part of the theory of computation that deals with what can be computed, in principle. Computational complexity theory deals with how hard computations are, in quantitative terms, both from above (upper bounds: algorithms whose worst-case use of computing resources can be estimated) and from below (lower bounds: proofs that no procedure for carrying out a given task can be very fast).
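As a concrete illustration of an upper bound established by exhibiting an algorithm, the sketch below (Python is used purely for illustration; it is not drawn from any reference on this list) implements exponentiation by squaring, one of the calculation topics listed further down. It computes x^n with O(log n) multiplications, whereas naive repeated multiplication needs n - 1.

```python
def power(x: int, n: int) -> int:
    """Exponentiation by squaring: computes x**n (n >= 0) using O(log n) multiplications."""
    result = 1
    while n > 0:
        if n & 1:        # if the lowest bit of the exponent is set,
            result *= x  # fold the current power of x into the result
        x *= x           # square the base
        n >>= 1          # move to the next bit of the exponent
    return result

assert power(3, 13) == 3 ** 13  # quick sanity check
```

A lower bound, by contrast, would be a proof that no algorithm within a given model of computation can do substantially better.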
For more abstract foundational matters, see the list of mathematical logic topics. See also list of algorithms, list of algorithm general topics.
Calculation
Lookup table
Mathematical table
Multiplication table
Generating trigonometric tables
History of computers
Multiplication algorithm
Peasant multiplication
Division by two
Exponentiating by squaring
Addition chain
Scholz conjecture
Presburger arithmetic
Computability theory: models of computation
Arithmetic circuits
Algorithm
Procedure, recursion
Finite-state automaton
Mealy machine
Minsky register machine
Moore machine
State diagram
State transition system
Deterministic finite automaton
Nondeterministic finite automaton
Generalized nondeterministic finite automaton
Regular language
Pumping lemma
Myhill-Nerode theorem
Regular expression
Regular grammar
Prefix grammar
Tree automaton
Pushdown automaton
Context-free grammar
Büchi automaton
Chomsky hierarchy
Context-sensitive language, context-sensitive grammar
Recursively enumerable language
Register machine
Stack machine
Petri net
Post machine
Rewriting
Markov algorithm
Term rewriting
String rewriting system
L-system
Knuth–Bendix completion algorithm
Star height
Star height problem
Generalized star height problem
Cellular automaton
Rule 110 cellular automaton
Conway's Game of Life
Langton's ant
Edge of chaos
Turing machine
Deterministic Turing machine
Non-deterministic Turing machine
Alternating automaton
Alternating Turing machine
Turing-complete
Turing tarpit
Oracle machine
Lambda calculus
Combinatory logic
Combinator
B, C, K, W System
Parallel computing
Flynn's taxonomy
Quantum computer
Universal quantum computer
Church–Turing thesis
Recursive function
Decision problems
Entscheidungsproblem
Halting problem
Correctness
Post correspondence problem
Decidable language
Undecidable language
Word problem for groups
Wang tile
Penrose tiling
Definability questions
Computable number
Definable number
Halting probability
Algorithmic information theory
Algorithmic probability
Data compression
Complexity theory
Advice (complexity)
Amortized analysis
Arthur–Merlin protocol
Best and worst cases
Busy beaver
Circuit complexity
Constructible function
Cook-Levin theorem
Exponential time
Function problem
Linear time
Linear speedup theorem
Natural proof
Polynomial time
Polynomial-time many-one reduction
Polynomial-time Turing reduction
Savitch's theorem
Space hierarchy theorem
Speed Prior
Speedup theorem
Subquadratic time
Time hierarchy theorem
Complexity classes
See the list of complexity classes
Exponential hierarchy
Polynomial hierarchy
Named problems
Clique problem
Hamiltonian cycle problem
Hamiltonian path problem
Integer factorization
Knapsack problem
Satisfiability problem
2-satisfiability
Boolean satisfiability problem
Subset sum problem
3SUM
Traveling salesman problem
Vertex cover problem
One-way function
Set cover problem
Independent set problem
Extensions
Probabilistic algorithm, randomized algorithm
Las Vegas algorithm
Non-determinism
Non-deterministic Turing machine
Interactive computation
Interactive proof system
Probabilistic Turing Machine
Approximation algorithm
Simulated annealing
Ant colony optimization algorithms
Game semantics
Generalized game
Multiple-agent system
Parameterized complexity
Process calculi
Pi-calculus
Hypercomputation
Real computation
Computable analysis
Weihrauch reducibility
Mathematics-related lists
Theory of computation
Outlines of mathematics and logic
Outlines | List of computability and complexity topics | [
"Mathematics"
] | 711 | [
"nan"
] |
353,761 | https://en.wikipedia.org/wiki/Chicha | Chicha is a fermented (alcoholic) or non-fermented beverage of Latin America, emerging from the Andes and Amazonia regions. In both the pre- and post-Spanish conquest periods, corn beer (chicha de jora) made from a variety of maize landraces has been the most common form of chicha. However, chicha is also made from a variety of other cultigens and wild plants, including, among others, quinoa (Chenopodium quinoa), kañiwa (Chenopodium pallidicaule), peanut, manioc (also called yuca or cassava), palm fruit, rice, potato, oca (Oxalis tuberosa), and chañar (Geoffroea decorticans). There are many regional variations of chicha. In the Inca Empire, chicha had ceremonial and ritual uses.
Etymology and related phrases
The exact origin of the word chicha is debated. One belief is that the word is of Taino origin and became a generic term used by the Spanish for any and all fermented beverages brewed by indigenous peoples in the Americas. It is possible that one of the first uses of the term came from the Kuna, a people living in Colombia and Panama. According to the Real Academia Española and other authors, the word comes from the Kuna word “chiab”, which means maize. According to Don Luis G. Iza, it comes from a Nahuatl word meaning “fermented water”, formed from the verb chicha, “to sour a drink”, and the noun atl, “water”. These etymologies are not mutually exclusive.
The Spanish idiom ni chicha ni limonada (neither chicha nor lemonade) means “neither one thing nor another” (roughly equivalent to the English “neither fish nor fowl”).
Maize chicha
Preparation
Chicha de jora is a corn beer prepared by germinating maize, extracting the malt sugars, boiling the wort, and fermenting it in large vessels, traditionally huge earthenware vats, for several days. The original Quechua name is aqa ~ aqha (Ayacucho vs. Cuzco-Bolivia varieties), and it is traditionally made and sold in chicherías, called also aqa wasi or aqha wasi (lit. “chicha house”).
Usually, the brewer makes chicha in large amounts and uses many of these clay vats to do so. These vats break down easily and can only be used a few times. The brewers can arrange their vessels in rows, with fires in the middle, to reduce heat loss.
The process for making chicha is essentially the same as the process for the production of malted barley beer. It is traditionally made with Jora corn, a type of malted corn from the Andes. The specific type or combination of corn used in making chicha de jora indicates where it was made. Some brewers add quinoa or other adjuncts to give it consistency; the mixture is then boiled. During boiling, the chicha is stirred and aerated to prevent it from boiling over. Chancaca, a hard form of unrefined cane sugar, helps with the fermentation process.
After the milling of the corn and the brewing of the drink, the chicha is then sieved. Traditionally, it is sieved through a large cloth. This is to separate the corn from the desired chicha.
In some cultures, instead of germinating the maize to release the starches therein, the maize is ground, moistened by saliva in the chicha maker's mouth, and formed into small balls, which are then flattened and laid out to dry. Naturally occurring ptyalin enzymes in the maker's saliva catalyse the breakdown of starch in the maize into maltose. This process of chewing grains or other starches was used in the production of alcoholic beverages in pre-modern cultures around the world, including, for example, sake in Japan. Chicha prepared in this manner is known as chicha de muko.
Chicha morada is a non-fermented chicha usually made from ears of purple maize (maíz morado), which are boiled with pineapple rind, cinnamon, and cloves. This gives a strong, purple-colored liquid, which is then mixed with sugar and lemon. This beverage is usually taken as a refreshment. Chicha morada is common in Bolivian and Peruvian cultures and is generally drunk as an accompaniment to food.
Women are most associated with the production of chicha. Men and children are still involved with the process of making chicha, but women control the production and distribution. For many women in Andean society, making and selling chicha is a key part of their identity because it provides a substantial amount of political power and leverage.
Use
Chicha de jora has been prepared and consumed in communities throughout the Andes for millennia. The Inca used chicha for ritual purposes and consumed it in vast quantities during religious festivals. Mills in which it was probably made were found at Machu Picchu.
During the Inca Empire, women were taught the techniques of brewing chicha in the Aqlla Wasi (schools for the chosen women).
Chicherias (chicha taverns) were places to consume chicha. Many have historically been unlicensed, home-based businesses that produce chicha on site.
Normally sold in large caporal (1/2 liter) glasses to be drunk on location, or by liter, if taken home, chicha is generally sold straight from the earthenware chomba where it was brewed. On the Northern coast of Peru, it is often served in a dried gourd known as a Poto while in the Peruvian Andes it is often served in a qero. Qeros are traditionally made from wood with intricate designs carved on the outside. In colonial times qeros transitioned to be painted with figurative depictions on the exterior instead of carving. Some qeros were also made of metals and many are now made of glass. Inca leaders used identical pairs of qeros to extend invitations to drink. These invitations represented an indebtedness upon the invitee. In this way, the drinking of chicha via qeros cemented relationships of power and alliances between people and groups.
Chicha can be mixed with Coca Sek, a Colombian beverage made from coca leaf.
Regional variations
There are a number of regional varieties of chicha, which can be roughly divided into lowland (Amazonia) and highland varieties, of which there are many.
Amazonia
Throughout the Amazon Basin (including the interiors of Ecuador, Peru, Colombia, and Brazil), chicha is usually made from cassava, but also cooking plantain is known to be used. Traditionally, the women chew the washed and peeled cassava and spit the juice into a bowl. Cassava root is very starchy, and therefore the enzymes in the preparer's saliva rapidly convert the starch to simple sugar, which is further converted by wild yeast or bacteria into alcohol. After the juice has fermented in the bowl for a few hours, the result will be mildly sweet and sour chicha, similar in appearance to defatted milk. In Colombian Amazonia, the drink is called masato.
It is traditional for families to offer chicha to arriving guests. Children are offered new chicha that has not fermented, whereas adults are offered fermented chicha; the most highly fermented chicha, with its significant alcohol content, is reserved for men.
Bolivia
In Bolivia chicha is most often made from maize, especially in the highlands, but amaranth chicha is also traditional and popular. Chicha made from sweet manioc, plantain, or banana is also common in the lowlands. Bolivian chicha is often alcoholic. A good description of a Bolivian method of preparing chicha can be found in Cutler, Hugh, and Martin Cardenas, "Chicha, a Native South American Beer".
Chile
In Chile, there are two main types of chicha: apple chicha produced in southern Chile and grape chicha produced in central Chile. Both are alcoholic beverages with no distillation, only fermentation. Chicha is mostly consumed in the countryside and during festivities, such as Fiestas Patrias on September 18. Chicha is usually not found in formal supermarkets unless close to September 18.
Sites like Máfil in southern Chile were traditional centres of apple chicha production, which was sold in the nearby city of Valdivia. With the introduction of beer by the German settlers who arrived in the second half of the 19th century, chicha production in Máfil declined; it is now carried out by only a few producers, mostly for consumption within the family.
Colombia
In Bogotá, the capital of present-day Colombia, the recipe is simple; cooked maize is ground with black panela in a large pot. The mix is left to ferment for seven to eight days, depending on the alcoholic strength desired.
Chicha was outlawed in Colombia in 1949 and remains formally illegal, but brewing continued underground and the drink is openly available in some areas.
Ecuador
A major chicha beer festival, Yamor, is held in early September in Otavalo. It has its roots in the 1970s, when the locals decided to revive an ancient tradition of marking the maize harvest before the September equinox. These locals spoke Quechua, and "Yamor" was the name for chicha. The festival includes bands, parades, fireworks, and chicha sampling.
El Salvador
In El Salvador, chicha usually refers to an alcoholic drink made with maize, panela, and pineapple. It is used as a drink and also as an ingredient on many traditional dishes, such as gallo en chicha, a local version of coq au vin. A non-alcoholic version usually named fresco de chicha (chicha soft drink) is made with the same ingredients, but without allowing it to ferment.
Honduras
In Honduras, the Pech people practiced a ritual called Kesh where a shaman contacted the spiritual world. A Kesh could be held for various reasons, a few including to help appease the angry spirits or to help a deceased member of the community on his or her journey after death. During this ritual, they drank Chicha made of yuca, minia, and yuca tamales. The ritual is no longer practiced, but the drink is still reserved for special occasions with family only.
Nicaragua
In Managua and Granada, "chicha de maiz" is a typical drink, unfermented and served very cold. It is often flavored with banana or vanilla flavors, and its saleswomen can be heard calling "¡Chicha, cafe y jugo frio!" in the squares.
Nicaraguan "chicha de maiz" is made by soaking the corn in water overnight. On the following day it is ground and placed in water, red food coloring is added, and the whole mixture is cooked. Once cooled, sugar and more water is added. On the following day, one adds further water, sugar and flavoring. Although fermented chicha is available, the unfermented type is the most common.
Panama
In Panama, chicha can simply mean "fruit drink". Unfermented chicha is often called batido, another name for any drink containing a fruit puree. Locally, among the Kuna or Gundetule of the San Blas chain of islands, "chicha fuerte" refers to a fermented mixture of maize and saliva, which is enjoyed on special or holy days. While chicha fuerte most traditionally refers to chicha made of germinated corn (germination helps to convert starch to sugar), any number of fruits can be fermented into unique, homemade versions of the beverage. In rural areas, chicha fuerte is the refreshment of choice during and after community work parties (juntas), as well as during community dances (tamboritos).
Peru
Chicha's importance in the social and religious world of Latin America can best be seen by focusing on the drink's central role in ancient Peru. Corn was considered a sacred crop, and chicha in particular carried very high status. Chicha was consumed in great quantities during and after the work of harvesting, making for a festive mood of singing, dancing, and joking. Chicha was offered to gods and ancestors, much like other fermented beverages around the world were. For example, at the Incan capital of Cuzco, the king poured chicha into a gold bowl at the navel of the universe, an ornamental stone dais with throne and pillar, in the central plaza. The chicha cascaded down this “gullet of the Sun God” to the Temple of the Sun, as awestruck spectators watched the high god quaff the precious brew. At most festivals, ordinary people participated in days of prodigious drinking after the main feast, as the Spanish looked on aghast at the drunkenness.
Human sacrifices first had to be rubbed in the dregs of chicha, and then tube-fed with more chicha for days while lying buried alive in tombs. Special sacred places, scattered throughout the empire, and mummies of previous kings and ancestors were ritually bathed in maize flour and presented with chicha offerings, to the accompaniment of dancing and panpipe music. Even today, Peruvians sprinkle some chicha to “mother earth” from the communal cup when they sit down together to drink; the cup then proceeds in the order of each drinker's social status, as an unending succession of toasts are offered.
Venezuela
In Venezuela chicha or chicha de arroz is made of boiled rice, milk, sugar; it is generally of white color and has the consistency of eggnog. It is usually served as a sweet, refreshing beverage with ground cinnamon or condensed milk toppings. This chicha de arroz contains no alcohol as it is not fermented. Sometimes it is made with pasta or semolina instead of rice and is commonly called chicha de pasta.
In most large cities, chicha is offered by street vendors, commonly referred to as chicheros; these vendors usually use a flour-like mix and just add water, and generally serve them with chopped ice and a straw and may ask to add cinnamon, chocolate chips or sugared condensed milk on top. It can also be found in commercial presentations just like milk and juices. The Venezuelan Andean regions (such as Mérida) prepare an alternative version, with added fermented pineapple, which has a more liquory taste. This variety is commonly referred to as Chicha Andina and is a typical Christmastime beverage.
Significance in Inca society
Identity
Chicha use can reveal how people perceive their own cultural identity and express ideas about gender, race, nationality, and community. Chicha use contributes to how people build community and a collective identity for maintaining social networks. It is often consumed in the context of feasts and festivals, which are valuable contexts for strengthening social and cultural connections. The production and consumption of chicha contributes to social organization and can affect social status.
Rites of passage
Chicha was used in rites of passage by indigenous peoples such as the Incas. It was important in coming-of-age ceremonies for adolescent boys, especially the sons of Inca nobility, who received their adult names in ceremonies involving chicha. About a month before a ceremony honoring their maturation, the boys went on a pilgrimage to mountains of special significance, such as Huanacauri. After the pilgrimage, they chewed maize to make the chicha they would drink at the end of the month-long ceremony. One activity was running down the side of a mountain to receive a kero of chicha offered by young women as encouragement.
Women's role
The role of chicha can also be seen in the lives of women during Inca rule, before the arrival of the Spanish. Women were important to Inca communities. A select group of women received formal instruction: the aclla, also known as the "Chosen Women". These women were taken from their family homes to the acllahuasi, or "House of the Chosen Women", where they were dedicated to Inca religion, weaving, cooking and chicha-brewing. Much of the chicha they brewed went to ceremonies or to gatherings at which the community worshipped its gods. They began the chicha-making process by chewing maize to create a mushy mass that would then be fermented. The product of the acllas was considered sacred because of the women who produced it; brewing it was a special privilege reserved for the "most attractive women" and denied to most others.
Perceptions by the Inca royalty
The Inca rulers themselves illustrate the importance of chicha. Lords and royalty probably drank chicha from silver and gold cups known as keros. After defeating an enemy, Inca rulers would sometimes have the head of the defeated enemy converted into a cup from which to drink chicha; Atawallpa, for example, drank chicha from an opposing foe's skull. This displayed the superiority of the victorious Inca, with chicha at the forefront of the celebration, and after major military victories the Incas would celebrate by drinking it. When the Incas and the Spanish conquistadors met, the conquistadors did not understand the significance of chicha. Titu Cusi describes how his uncle Atahualpa reacted when the intruders failed to respect it: "The Spaniard, upon receiving the drink in his hand, spilled it which greatly angered my uncle. And after that, the two Spaniards showed my uncle a letter, or book, or something, saying that this was the inscription of God and the King and my uncle, as he felt offended by the spilling of the chicha, took the letter and knocked to the ground saying: I don't know what you have given me. Go on, leave." Another such exchange between Atawallpa and the Spanish ended with Atawallpa saying, "Since you don't respect me I won't respect you either." These accounts recorded by Titu Cusi show the significance of chicha to the Incas: an insult to the beverage was taken personally because it offended their beliefs and community.
Economy
The Inca economy did not use currency. Rather, it depended on trading products, exchanging services, and the Inca distributing goods to the people who worked for him. Along the coastline, chicha was produced by men to trade or present to the Inca, whereas inland it was produced by women for community gatherings and other important ceremonies. Relationships were important in Inca society, and good relations with the Inca could bring a family supplementary goods that not everyone had access to. The Inca would give chicha to families and to the men who contributed to the mit'a.
A steady flow of chicha, among other goods important to everyday life, was essential to the Inca economy. In the fields of the Andes, special emphasis was placed on where maize would be planted, and the siting of maize fields was taken seriously. "Agricultural rituals linked the production of maize to the liquid transfer of power in society with chicha." The ability to plant maize signalled an important social role within the community, and because of the significance of maize cultivation, the state was probably in charge of these farms. Drinking chicha together as a community was another important aspect of everyday Inca life, and it was incorporated into the meals that the Incas ate.
Religious purposes
The production of chicha was a necessity for all because it was sacred to the people. "Among the Incas, corn was a divine gift to humanity, and its consumption as a fermented beverage in political meetings formed communion between those who were drinking and the ancestors, and the entirety of the Inca cosmology." The beverage allowed the people to return to the story of creation and be reminded of the creator god Wiraqocha. The Incas also saw the beverage in a sexual light because of the way the earth produced for them: they likened chicha to semen, and when it was poured onto the earth they believed they were feeding the earth.
See also
Cauim
Chicha de jora
Chicha morada
List of fermented foods
List of maize dishes
List of saliva-fermented beverages
Pox (drink)
Pulque
Punucapa
Tejuino
Tesguino
References
Further reading
Morris, C. "Maize Beer in the Economics, Politics, and Religion of the Inca Empire" in Fermented Food Beverages in Nutrition, eds. Clifford F. Gastineau, William J. Darby, and Thomas B. Turner (1979), pp. 21–35.
Super, John C. Food, Conquest, and Colonization in Sixteenth-Century Spanish America. 1988.
Vázquez, Mario C. "La chicha en los paises andinos," América Indígena 27 (1967): 265–82.
External links
Chicha - an Ancestral Beverage to Feed Body and Soul
The Chicha Page Recipes & Information
Chicha - the University of Pennsylvania's Dept. Of Biomolecular Archaeology Information on the religious importance of Chicha to the Incas.
Chicha de Muko Recipe & Information on the preparation of the traditional Chicha de Muko
Cuban alcoholic drinks
Mexican alcoholic drinks
Mexican drinks
Maize-based drinks
Types of beer
Inca
Muisca
Bolivian cuisine
Chilean cuisine
Ecuadorian cuisine
Native American cuisine
Mapuche cuisine
Panamanian cuisine
Peruvian cuisine
Amylase induced fermentation
Colombian cuisine
Salvadoran cuisine
Honduran cuisine
Nicaraguan cuisine
Venezuelan cuisine
Entheogens
Historical drinks | Chicha | [
"Chemistry"
] | 4,597 | [
"Amylase induced fermentation",
"Fermentation"
] |
353,767 | https://en.wikipedia.org/wiki/Grand%20Coulee%20Dam | Grand Coulee Dam is a concrete gravity dam on the Columbia River in the U.S. state of Washington, built to produce hydroelectric power and provide irrigation water. Constructed between 1933 and 1942, Grand Coulee originally had two powerhouses. The third powerhouse ("Nat"), completed in 1974 to increase energy production, makes Grand Coulee the largest power station in the United States by nameplate capacity at 6,809 MW.
The proposal to build the dam was the focus of a bitter debate during the 1920s between two groups. One group wanted to irrigate the ancient Grand Coulee with a gravity canal while the other pursued a high dam and pumping scheme. The dam supporters won in 1933, but, although they fully intended otherwise, the initial proposal by the Bureau of Reclamation was for a "low dam" tall which would generate electricity without supporting irrigation. That year, the U.S. Bureau of Reclamation and a consortium of three companies called MWAK (Mason-Walsh-Atkinson Kier Company) began construction on a high dam, although they had received approval for a low dam. After visiting the construction site in August 1934, President Franklin Delano Roosevelt endorsed the "high dam" design, which at high would provide enough electricity to pump water into the Columbia basin for irrigation. Congress approved the high dam in 1935, and it was completed in 1942. The first waters overtopped Grand Coulee's spillway on of that year.
Power from the dam fueled the growing industries of the Northwest United States during World War II. Between 1967 and 1974, the third powerplant was constructed. The decision to construct the additional facility was influenced by growing energy demand, regulated river flows stipulated in the Columbia River Treaty with Canada, and competition with the Soviet Union. Through a series of upgrades and the installation of pump-generators, the dam now supplies four power stations with an installed capacity of 6,809 MW. As the centerpiece of the Columbia Basin Project, the dam's reservoir supplies water for the irrigation of .
The reservoir is called Franklin Delano Roosevelt Lake, named after the president who endorsed the dam's construction. Creation of the reservoir forced the relocation of over 3,000 people, including Native Americans whose lands were partially flooded. The dam was constructed without fish passage. The next one downstream, Chief Joseph Dam, which was built decades later, also does not have fish passage. This means no salmon reach the Grand Coulee Dam or the Colville Indian Reservation. The third large dam downstream, Wells Dam, has an intricate system of fish ladders to accommodate yearly salmon spawning and migration.
Background
The Grand Coulee is an ancient river bed on the Columbia Plateau created during the Pleistocene Epoch (Calabrian Age) by retreating glaciers and floods. Originally, geologists believed a glacier that diverted the Columbia River formed the Grand Coulee, but it was revealed in the mid-late 20th century that massive floods from Lake Missoula carved most of the gorge. The earliest known proposal to irrigate the Grand Coulee with the Columbia River dates to 1892, when the Coulee City News and The Spokesman Review reported on a scheme by a man named Laughlin McLean to construct a dam across the Columbia River, high enough that water would back up into the Grand Coulee. A dam that size would have its reservoir encroach into Canada, which would violate treaties. Soon after the Bureau of Reclamation was founded, it investigated a scheme for pumping water from the Columbia River to irrigate parts of central Washington. An attempt to raise funds for irrigation failed in 1914, as Washington voters rejected a bond measure.
In 1917, William M. Clapp, a lawyer from Ephrata, Washington, proposed the Columbia be dammed immediately below the Grand Coulee. He suggested a concrete dam could flood the plateau, just as nature blocked it with ice centuries ago. Clapp was joined by James O'Sullivan, another lawyer, and by Rufus Woods, publisher of The Wenatchee World newspaper in the nearby agricultural centre of Wenatchee. Together, they became known as the "Dam College". Woods began promoting the Grand Coulee Dam in his newspaper, often with articles written by O'Sullivan.
The dam idea gained popularity with the public in 1918. Backers of reclamation in Central Washington split into two camps. The "pumpers" favored a dam with pumps to elevate water from the river into the Grand Coulee from which canals and pipes could irrigate farmland. The "ditchers" favored diverting water from northeast Washington's Pend Oreille River via a gravity canal to irrigate farmland in Central and Eastern Washington. Many locals such as Woods, O'Sullivan and Clapp were pumpers, while many influential businessmen in Spokane associated with the Washington Water and Power Company (WWPC) were staunch ditchers. The pumpers argued that hydroelectricity from the dam could cover costs and claimed the ditchers sought to maintain a monopoly on electric power.
The ditchers took several steps to ensure support for their proposals. In 1921, WWPC secured a preliminary permit to build a dam at Kettle Falls, about upstream from the Grand Coulee. If built, the Kettle Falls Dam would have lain in the path of the Grand Coulee Dam's reservoir, essentially blocking its construction. WWPC planted rumors in the newspapers, stating exploratory drilling at the Grand Coulee site found no granite on which a dam's foundations could rest, only clay and fragmented rock. This was later disproved with Reclamation-ordered drilling. Ditchers hired General George W. Goethals, engineer of the Panama Canal, to prepare a report. Goethals visited the state and produced a report backing the ditchers. The Bureau of Reclamation was unimpressed by Goethals' report, believing it filled with errors.
In , President Warren G. Harding visited Washington state and expressed support for irrigation work there, but died a month later. His successor, Calvin Coolidge, had little interest in irrigation projects. The Bureau of Reclamation, desirous of a major project that would bolster its reputation, was focusing on the Boulder Canyon Project that resulted in the Hoover Dam. Reclamation was authorized to conduct a study in 1923, but the project's cost made federal officials reluctant. The Washington state proposals received little support from those further east, who feared the irrigation would result in more crops, depressing prices. With President Coolidge opposed to the project, bills to appropriate money for surveys of the Grand Coulee site failed.
In 1925, Congress authorized a U.S. Army Corps of Engineers study of the Columbia River. This study was included in the Rivers and Harbors Act of , which provided for studies on the navigation, power, flood control and irrigation potential of rivers. In , the Army Corps responded with the first of the "308 Reports" named after the 1925 House Document No. 308 (69th Congress, 1st Session). With the help of Washington's Senators, Wesley Jones and Clarence Dill, Congress ordered $600,000 in further studies to be carried out by the Army Corps and Federal Power Commission on the Columbia River Basin and Snake Rivers. U.S. Army Major John Butler was responsible for the upper Columbia River and Snake River and in 1932, his 1,000-page report was submitted to Congress. It recommended the Grand Coulee Dam and nine others on the river, including some in Canada. The report stated electricity sales from the Grand Coulee Dam could pay for construction costs. Reclamation—whose interest in the dam was revitalized by the report—endorsed it.
Although there was support for the Grand Coulee Dam, others argued there was little need for more electricity in the Northwest and crops were in surplus. The Army Corps did not believe construction should be a federal project and saw low demand for electricity. Reclamation argued energy demand would rise by the time the dam was complete. The head of Reclamation, Elwood Mead, stated he wanted the dam built no matter the cost. President Franklin D. Roosevelt, who took office in March 1933, supported the dam because of its irrigation potential and the power it would provide, but he was uneasy with its price tag. For this reason, he supported a "low dam" instead of the "high dam". He provided in federal funding, while Washington State provided $377,000. In 1933, Washington governor Clarence Martin set up the Columbia Basin Commission to oversee the dam project, and Reclamation was selected to oversee construction.
Construction
Low dam
On July 16, 1933, a crowd of 3,000 watched the driving of the first stake at the low dam site, and excavation soon began. Core drilling commenced that September while the Bureau of Reclamation accelerated its studies and designs for the dam. It would still help control floods and provide for irrigation and hydroelectricity, though at a reduced capacity. Most importantly, it would not raise its reservoir high enough to irrigate the plateau around the Grand Coulee. The dam's design provided for future raising and upgrading.
Before and during construction, workers and engineers experienced problems. Contracts for companies to construct the various parts of the dam were difficult to award as few companies were sizable enough to fill them. This forced companies to consolidate. Native American graves had to be relocated and temporary fish ladders had to be constructed. During construction additional problems included landslides and the need to protect newly poured concrete from freezing. Construction on the downstream Grand Coulee Bridge began in and more considerable earth-moving began in August. Excavation for the dam's foundation required the removal of 22 million cubic yards (17 million m³) of dirt and stone.
To reduce the amount of trucking required in the excavation, a conveyor belt nearly long was built. To further secure the foundation, workers drilled holes into the granite and filled any fissures with grout, creating a grout curtain. At times, excavated areas collapsed from overburden. In order to secure these areas from further movement and continue excavation, diameter pipes were inserted into the mass and chilled with cold liquid from a refrigeration plant. This froze the earth and secured it so construction could continue.
Final contract bidding for the dam began , 1934, in Spokane, and four bids were submitted. One bid was from a lawyer with no financial backing; another was from actress Mae West which consisted of nothing more than a poem and promise to divert the river. Of the two serious bids, the lowest bid was from a consortium of three companies: Silas Mason Co. from Louisville, Kentucky; Walsh Construction Co. of Davenport, Iowa and New York; and Atkinson-Kier Company of San Francisco and San Diego. The consortium was known as MWAK, and their bid was $29,339,301, almost 15% lower than the option submitted by the next bidder, Six Companies, Inc., which was building Hoover Dam at the time.
Cofferdams
Two large cofferdams were constructed for the dam, but they were parallel to the river rather than straddling its width, so drilling into the canyon walls was not required. By the end of 1935 about 1,200 workers completed the west and east cofferdams. The west cofferdam was long, thick and was constructed above the bedrock. The cofferdams allowed workers to dry portions of the riverbed and begin constructing the dam, while water continued to flow down the center of the riverbed.
In , once the west foundation was complete, portions of the west cofferdam were dismantled, allowing water to flow through part of the dam's new foundation. In , MWAK had begun constructing cofferdams above and below the channel between the east and west cofferdams. By December, the entire Columbia River was diverted over the foundations constructed within the east and west cofferdams. On , 1936, the Wenatchee Daily World announced the river was diverted and by early the next year, people were arriving in large numbers to see the riverbed.
Design change
On August 4, 1934, President Franklin D. Roosevelt visited the construction site and was impressed by the project and its purpose. He spoke to workers and spectators, closing with this statement: "I leave here today with the feeling that this work is well undertaken; that we are going ahead with a useful project, and we are going to see it through for the benefit of our country." Soon afterward, Reclamation was allowed to proceed with the high dam plan but faced the problems of transitioning the design and negotiating an altered contract with MWAK. In , for an additional , MWAK and Six Companies, Inc. agreed to join together as Consolidated Builders Inc. and construct the high dam. Six Companies had just finished the Hoover Dam and was nearing completion of Parker Dam. The new design, chosen and approved by the Reclamation office in Denver, included several improvements, one of which was the irrigation pumping plant.
Roosevelt envisioned the dam would fit into his New Deal under the Public Works Administration; it would create jobs and farming opportunities and would pay for itself. In addition, as part of a larger public effort, Roosevelt wanted to keep electricity prices low by limiting private ownership of utility companies, which could charge high prices for energy. Many opposed a federal takeover of the project, including its most prominent supporters, but Washington State lacked the resources to fully realize the project. In , with the help of Roosevelt and a Supreme Court decision allowing the acquisition of public land and Indian Reservations, Congress authorized funding for the upgraded high dam under the 1935 River and Harbors Act. The most significant legislative hurdle for the dam was over.
First concrete pour and completion
On December 6, 1935, Governor Clarence Martin presided over the ceremonial first concrete pour. During construction, bulk concrete was delivered on site by rail-cars where it was further processed by eight large mixers before being placed in form. Concrete was poured into columns by crane-lifted buckets, each supporting eight tons of concrete. To cool the concrete and facilitate curing, about of piping was placed throughout the hardening mass. Cold water from the river was pumped into the pipes, reducing the temperature within the forms from to . This caused the dam to contract about in length; the resulting gaps were filled with grout.
Until the project began, the stretch of the Columbia River where the dam was to rise was as yet unbridged, making it difficult to move men and materials. In , the Grand Coulee Bridge, a permanent highway bridge, was opened after major delays caused by high water. Three additional and temporary bridges downstream had moved vehicles and workers along with sand and gravel for cement mixing. In , MWAK completed the lower dam and Consolidated Builders Inc. began constructing the high dam. In , the west power house was completed. About 5,500 workers were on site that year. Between 1940 and 1941, the dam's eleven floodgates were installed on the spillway. In , the dam's first generator went into operation. On , 1942, the reservoir was full and the first water flowed over the dam's spillway. On , 1943, work was officially complete. The last of the original 18 generators did not operate until 1949.
Reservoir clearing
In 1933, Reclamation began efforts to purchase land behind the dam as far as upstream for the future reservoir zone. The reservoir, known later as Lake Roosevelt, flooded and Reclamation acquired an additional around the future shoreline. Within the zone were eleven towns, two railroads, three state highways, about one hundred and fifty miles of country roads, four sawmills, fourteen bridges, four telegraph and telephone systems, and many power lines and cemeteries. All facilities had to be purchased or relocated, and 3,000 residents were relocated. The Anti-Speculation Act was passed in 1937, limiting the amount of land farmers could own to prevent inflated prices.
The government appraised the land and offered to purchase it from the affected residents. Many refused to accept the offers, and Reclamation filed condemnation suits. Members of the Colville Confederated and Spokane tribes who had settlements within the reservoir zone were also resettled. The Acquisition of Indian Lands for Grand Coulee Dam Act of , 1940, allowed the Secretary of the Interior to acquire land on the Colville and Spokane Reservations, eventually accounting for . By 1942, all land had been purchased at market value: a cost of that included the relocation of farms, bridges, highways and railroads. Relocation reimbursement was not offered to property owners, which was common until U.S. laws changed in 1958.
In late 1938, the Works Progress Administration began clearing what would be of trees and other plants. The cut timber was floated downstream and sold to the highest bidder, Lincoln Lumber Company, which paid $2.25 per thousand board feet, . The pace of clearing was accelerated in when it was declared a national defense project, and the last tree was felled on , 1941. The felling was done by Reclamation Supervising Engineer Frank A. Banks and State WPA Administrator Carl W. Smith during a ceremony. 2,626 people living in five main camps along the Columbia worked on the project. When it was finished, had been spent in labor.
Labor and supporting infrastructure
Workers building the dam received an average of 80¢ an hour; the payroll for the dam was among the largest in the nation. The workers were mainly pulled from Grant, Lincoln, Douglas, and Okanogan counties and women were allowed to work only in the dorms and the cookhouse. Around 8,000 people worked on the project, and Frank A. Banks served as the chief construction engineer. Bert A. Hall was the chief inspector who would accept the dam from the contractors. Orin G. Patch served as the chief of concrete. Construction conditions were dangerous and 77 workers died.
To prepare for construction, housing for workers was needed along with four bridges downstream of the dam site, one of which, the Grand Coulee Bridge, exists today. The Bureau of Reclamation provided housing and located their administrative building at Engineer's Town, which was directly downstream of the construction site on the west side of the river. Opposite Engineer's Town, MWAK constructed Mason City in 1934. Mason City contained a hospital, post office, electricity and other amenities along with a population of 3,000. Three-bedroom houses in the city were rented for $32 a month.
Of the two living areas, Engineer's City was considered to have the better housing. Several other living areas formed around the construction site in an area known as Shack Town, which did not have reliable access to electricity and the same amenities as the other towns. Incorporated in 1935, the city of Grand Coulee supported workers as well and is just west of the dam on the plateau. MWAK eventually sold Mason City to Reclamation in 1937 before its contract was completed. In 1956, Reclamation combined both Mason City and Engineer's Town to form the city of Coulee Dam. It was incorporated as a city in .
Irrigation pumps
With the onset of World War II, power generation was given priority over irrigation. In 1943, Congress authorized the Columbia Basin Project and the Bureau of Reclamation began construction of irrigation facilities in 1948. Directly to the west and above the Grand Coulee Dam, the North Dam was constructed. This dam, along with the Dry Falls Dam to the south, enclosed and created Banks Lake, which covered the northern of the Grand Coulee. Additional dams, such as the Pinto and O'Sullivan Dams, were constructed alongside siphons and canals, creating a vast irrigation supply network called the Columbia Basin Project. Irrigation began between 1951 and 1953 as six of the 12 pumps were installed and Banks Lake was filled.
Expansion
Third powerplant
After World War II, the growing demand for electricity sparked interest in constructing another power plant supported by the Grand Coulee Dam. One obstacle to an additional power plant was the great seasonality of the Columbia River's streamflow. Today the flow is closely managed—there is almost no seasonality. Historically, about 75% of the river's annual flow occurred between April and September. During low flow periods, the river's discharge was between and while maximum spring runoff flows were around . Only nine out of the dam's eighteen generators could run year-round. The remaining nine operated for less than six months a year. In 1952, Congress authorized $125,000 for Reclamation to conduct a feasibility study on the Third Powerplant which was completed in 1953 and recommended two locations. Nine identical 108 MW generators were recommended, but as matters stood, they would be able to operate only in periods of high water.
Further regulation of the Columbia's flows was necessary to make the new power plant feasible. It would require water storage and regulation projects in Canada and a treaty to resolve the many economic and political issues involved. The Bureau of Reclamation and Army Corps of Engineers explored alternatives that would not depend on a treaty with Canada, such as raising the level of Flathead Lake or Pend Oreille Lake, but both proposals faced strong local opposition. The Columbia River Treaty, which had been discussed between the U.S. and Canada since 1944, was seen as the answer. Efforts to build the Third Powerplant were also influenced by competition with the Soviet Union, which had constructed power plants on the Volga River larger than Grand Coulee.
On , 1964, the Columbia River Treaty was ratified; it included an agreement that Canada would construct the Duncan, Keenleyside, and Mica Dams upstream while the U.S. would build the Libby Dam in Montana. Shortly afterward, Washington Senator Henry M. Jackson, who was influential in securing the new power plant, announced that Reclamation would present the project to Congress for appropriation and funding. To keep up with Soviet competition and increase generating capacity, it was determined that the generators could be upgraded to much larger designs. With the possibility of international companies bidding on the project, the Soviets, who had just installed a 500 MW hydroelectric generator on the Yenisei River, indicated their interest. To avoid the potential embarrassment of an international rival building a domestic power plant, the Department of the Interior declined international bidding. The Third Powerplant was approved, and President Lyndon Johnson signed its appropriation bill on , 1966.
Between 1967 and 1974, the dam was expanded to add the Third Powerplant, with architectural design by Marcel Breuer. Beginning in , this involved demolishing the northeast side of the dam and building a new fore-bay section. The excavation of of dirt and rock had been completed before the new long section of dam was built. The addition made the original dam almost a mile long. Original designs for the powerhouse had twelve smaller units but were altered to incorporate six of the largest generators available. To supply them with water, six diameter penstocks were installed. Of the new turbines and generators, three 600 MW units were built by Westinghouse and three 700 MW units by General Electric. The first new generator was commissioned in 1975 and the final one in 1980. The three 700 MW units were later upgraded to 805 MW by Siemens.
Pump-generating plant
After power shortages in the Northwest during the 1960s, it was determined that the six remaining planned pumps would be pump-generators. When energy demand is high, the pump-generators can generate electricity with water from the Banks Lake feeder canal, which lies adjacent to the dam at a higher elevation. By 1973, the Pump-Generating Plant was completed and the first two generators (P/G-7 and P/G-8) were operational. In 1983, two more generators went online, and by the final two were operational. The six pump-generators added 314 MW to the dam's capacity. In , the Pump-Generating Plant was officially renamed the John W. Keys III Pump-Generating Power Plant after John W. Keys III, the U.S. Bureau of Reclamation's commissioner from 2001 to 2006.
Overhauls
A major overhaul of the Third Powerplant, which contains generators numbered G19 through G24, began in and will continue for many years. Projects to be completed before the generators themselves can be overhauled include replacing the underground 500 kV oil-filled cables for the G19, G20 and G21 generators with overhead transmission lines (started in ), installing new 236 MW transformers for G19 and G20 (started in ), and several other tasks.
Planning, design, procurement and site preparation for the 805 MW G22, G23 and G24 generator overhauls are scheduled to begin in 2011. The overhauls will start in 2013 with the G22 generator, then G23 starting in 2014, and finally G24 starting in 2016, with planned completions in 2014, 2016 and 2017, respectively. The generator overhauls for G19, G20 and G21 have not been scheduled as of 2010.
Operation and benefits
The dam's primary goal, irrigation, was postponed as the wartime need for electricity increased. The dam's powerhouse began production around the time World War II began, and its electricity was vital to the war effort. The dam powered aluminum smelters in Longview and Vancouver, Washington, Boeing factories in Seattle and Vancouver, and Portland's shipyards. In 1943, its electricity was also used for plutonium production in Richland, Washington, at the Hanford Site, which was part of the top-secret Manhattan Project. The demand for power at that project was so great that in 1943, two generators originally intended for the Shasta Dam in California were installed at Grand Coulee to hurry the generator installation schedule.
Irrigation
Water is pumped via the Pump-Generating Plant's diameter pipes from Lake Roosevelt to a feeder canal. From the feeder canal, the water is transferred to Banks Lake which has an active storage of . The plant's twelve pumps can transfer up to to the lake. Currently, the Columbia Basin Project irrigates with a potential for . Over 60 different crops are grown within the project and distributed throughout the United States.
Power
Grand Coulee Dam supports four different power houses containing 33 hydroelectric generators. The original Left and Right Powerhouses contain 18 main generators, and the Left has an additional three service generators, for a total installed capacity of 2,280 MW. The Third Power Plant contains a total of six main generators with a 4,215 MW installed capacity. Generators G-19, G-20 and G-21 in the Third Power Plant have a 600 MW installed capacity but can operate at a maximum capacity of 690 MW, which brings the overall maximum capacity of the dam's power facilities to 7,079 MW. The Pump-Generating Plant contains six pump-generators with an installed capacity of 314 MW; when pumping water into Banks Lake they consume 600 MW of electricity. Each generator is supplied with water by an individual penstock. The largest of these feed the Third Power Plant and are in diameter and can supply up to . The dam's power facilities originally had an installed capacity of 1,974 MW, but expansions and upgrades have increased this to 6,809 MW installed and 7,079 MW maximum. Grand Coulee Dam generates about 21 TWh of electricity annually, which corresponds to roughly 2,397 MW of average output and an overall plant (capacity) factor of about 35%. In 2014, 20.24 TWh of electricity was generated.
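The quoted average-output and plant-factor figures follow from straightforward arithmetic on the numbers stated above; the short sketch below (Python is used only for illustration, and the 21 TWh and 6,809 MW values are simply those given in this section) reproduces the calculation:

```python
# Check of the capacity-factor arithmetic for Grand Coulee Dam,
# using only figures stated in the text above.
annual_energy_twh = 21.0          # reported annual generation, TWh
installed_capacity_mw = 6809.0    # installed capacity, MW

hours_per_year = 8766             # average year of 365.25 days
average_output_mw = annual_energy_twh * 1_000_000 / hours_per_year  # TWh -> MWh, then divide by hours
capacity_factor = average_output_mw / installed_capacity_mw

print(f"Average output:  {average_output_mw:,.0f} MW")   # roughly 2,400 MW
print(f"Capacity factor: {capacity_factor:.0%}")          # roughly 35%
```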
Spillway
Grand Coulee Dam's spillway is long and is an overflow, drum-gate controlled type with a maximum capacity. A record flood in May and flooded the lowlands below the dam and highlighted its limited flood control capability at the time, as its spillway and turbines hit a record flow of . The flood damaged downstream riverbanks and deteriorated the face of the dam and its flip bucket at the base (toe) of the spillway. The flood spurred the Columbia River Treaty and its provisions for dams constructed upstream in Canada, which would regulate the Columbia's flow.
Cost benefits
The Bureau of Reclamation in 1932 estimated the cost of constructing Grand Coulee Dam (not including the Third Powerplant) to be $168 million; its actual cost was $163 million in 1943 ($ in dollars). Expenses to finish the power stations and repair design flaws with the dam throughout the 1940s and '50s added another $107 million, bringing the total cost to $270 million ($ in dollars), about 33% over estimates. The Third Powerplant was estimated to cost in 1967, but higher construction costs and labor disputes drove the project's final cost in 1973 to ($ in dollars), about 87% over estimates. Despite estimates being exceeded, the dam became an economic success, particularly with the Third Powerplant exhibiting a benefit-cost ratio of 2:1. Although Reclamation has only irrigated about half of the land predicted, the gross value of crop output (in constant dollars) had doubled from 1962 to 1992, largely due to different farming practices and crop choices. The Bureau expects the money earned from supplying power and irrigation water will pay off the cost of construction by 2044.
Environmental and social consequences
The dam had severe negative consequences for the local Native American tribes whose traditional way of life revolved around salmon and the original shrub steppe habitat of the area. Because it lacks a fish ladder, Grand Coulee Dam permanently blocks fish migration, removing over of natural spawning habitat. By largely eliminating anadromous fish above the Okanogan River, the Grand Coulee Dam also set the stage for the subsequent decision not to provide for fish passage at Chief Joseph Dam (built in 1953). Chinook, Steelhead, Sockeye and Coho salmon (as well as other important species, including Lamprey) are now unable to spawn in the reaches of the Upper Columbia Basin. The lack of fish passage to the upper reaches of the Columbia River wiped out the June hogs, so-called "supersalmon" known to regularly weigh over 80 pounds (36kg). Today, the largest Chinook caught on the Columbia River are not even half that size. The extinction of the spawning grounds upstream from the dam has prevented the Spokane and other tribes from holding sacred salmon ceremonies since 1940.
Grand Coulee Dam flooded over 21,000 acres (85 km2) of prime bottom land where Native Americans had been living and hunting for thousands of years, forcing the relocation of settlements and graveyards. The Office of Indian Affairs negotiated with the United States Bureau of Reclamation on behalf of tribes who were concerned about the flooding of their grave sites. The Acquisition of Indian Lands for Grand Coulee Dam, 54 Stat. 703, Act of June 20, 1940, allowed the Secretary of the Interior to remove human remains to new Native American grave sites. The burial relocation project started in September 1939. Human remains were put into small containers and many artifacts were discovered, but the methods of collection destroyed archaeological evidence. Various estimates for the number of relocated graves in 1939 include 915 graves reported by the Bureau of Reclamation, or 1,388 reported by Howard T. Ball, who supervised the field work. Tribal leaders reported another 2,000 graves in 1940, but the Bureau of Reclamation would not continue grave relocation, and the sites were soon covered by water.
The town of Inchelium, Washington, home to around 250 Colville Indians, was submerged and later relocated. Kettle Falls, once a primary Native American fishing ground, was also inundated. The average catch of over 600,000 salmon per year was eliminated. In one study, the Army Corps of Engineers estimated the annual loss was over fish. In 1940, the Confederated Tribes of the Colville Reservation hosted a three-day event called the "Ceremony of Tears", marking the end of fishing at Kettle Falls. Within a year after the Ceremony, the falls were inundated. The town of Kettle Falls, Washington, was relocated.
The Columbia Basin Project has affected habitat ranges for species such as mule deer, pygmy rabbits and burrowing owls, resulting in decreased populations. However, it has created new habitats such as wetlands, and riparian corridors. The environmental impact of the dam effectively ended the traditional way of life of the native inhabitants. The government eventually compensated the Colville Indians in the 1990s with a lump settlement of approximately , plus annual payments of approximately . In 2019, a bill was passed to provide additional compensation to the Spokane Tribe. It provides roughly annually for the first decade, followed by roughly a year after that.
To compensate for the lack of a fish ladder, three fisheries have been created above the dam, releasing into the upper Columbia River. One half of the fish are reserved for the displaced tribes, and one quarter of the reservoir is reserved for tribal hunting and boating.
Tourism
Built in the late 1970s, the Visitor Center contains many historical photos, geological samples, turbine and dam models, and a theater. The building was designed by Marcel Breuer and resembles a generator rotor. Since , on summer evenings, the laser light show at Grand Coulee Dam is projected onto the dam's wall. The show includes full-size images of battleships and the Statue of Liberty, as well as some environmental comments. Tours of the Third Power Plant are available to the public and last about an hour. Visitors take a shuttle to view the generators and also travel across the main dam span (otherwise closed to the public) as the formerly used glass elevator is indefinitely out of service.
The headquarters of the Lake Roosevelt National Recreation Area is near the dam, and the lake provides opportunities for fishing, swimming, canoeing, and boating.
Woody Guthrie connection
Folk singer Woody Guthrie wrote some of his most famous songs while working in the area in the 1940s. In 1941, after a brief stay in Los Angeles, Guthrie and his family moved north to Oregon on the promise of a job. Gunther von Fritsch was directing a documentary for the Bonneville Power Administration about the construction of the Grand Coulee Dam on the Columbia River and needed a narrator. Alan Lomax had recommended Guthrie to narrate the film and sing songs onscreen. The original project was expected to take 12 months, but as filmmakers became worried about casting a political figure like Guthrie, they minimized his role. The Department of the Interior hired him for one month to write songs about the Columbia River and the construction of the federal dams for the documentary's soundtrack. Guthrie toured the Columbia River and the Pacific Northwest. Guthrie said he "couldn't believe it, it's a paradise", which appeared to inspire him creatively. In one month, Guthrie wrote 26 songs, including three of his most famous: "Roll On, Columbia, Roll On", "Pastures of Plenty", and "Grand Coulee Dam". The surviving songs were released as Columbia River Songs. Guthrie was paid $266.66 for the month's work in 1941 (ca. $5,750 in 2024 dollars) for the project.
The film Columbia River was completed in 1949 and featured Guthrie's music. Guthrie had been commissioned in 1941 to provide songs for the project, but it had been postponed by WWII.
See also
John L. Savage – Bureau of Reclamation's chief design engineer during construction.
List of largest power stations in the world
List of dams in the Columbia River watershed
List of largest power stations in the United States
List of largest hydroelectric power stations in the United States
Citations
General bibliography
Further reading
Bretz, J. Harlen (1932), The Grand Coulee, American Geographical Society
Gresko, Marcia S. (1999), Building America - The Grand Coulee Dam, Blackbirch Press,
McClung, Christian (2009), Grand Coulee Dam: Leaving a Legacy, Great Depression in Washington State Project
Sundborg, George (1954), Hail Columbia: The Thirty-year Struggle for Grand Coulee Dam, New York: Macmillan.
White, Richard (1996), The Organic Machine: The Remaking of the Columbia River, New York: Hill and Wang,
External links
GrandCouleeDam.org —Informational web site
"The Grand Coulee Dam", con't., by Walter E. Mair, Popular Science Monthly, , pp. 11–13, 100. First article to explain full scope of the Grand Coulee Dam project
"More Power for America", Popular Mechanics, May 1942, pp. 17–24. Detailed article and drawing on start of operations of Grand Coulee Dam
University of Idaho Libraries Digital Collections – Dam Construction in the Pacific Northwest—Photographs of the construction of the Columbia Basin Project, with a special emphasis on the construction of Grand Coulee Dam.
University of Washington Libraries Digital Collections – Grand Coulee Dam—Photographs and pamphlets of the construction of the dam. Includes information about the recommendations for and against building the dam as well as images of land clearing activities by the Public Works Administration.
University of Washington Libraries Digital Collections Excerpt from the book Grand Coulee: Harnessing a Dream, by Paul C. Pitzer, Pullman, Wash.: Washington State University Press, 1994
Grand Coulee Dam – a 2012 documentary film for the PBS series American Experience (directed by Stephen Ives)
Grand Coulee Dam Columbia Basin Project Historical site—Personal interest site maintained by Charles Hubbard
Historic American Engineering Record (HAER) documentation, filed under Grand Coulee, Grant County, WA:
Dams on the Columbia River
Dams in Washington (state)
Buildings and structures in Grant County, Washington
Tourist attractions in Grant County, Washington
Hydroelectric power plants in Washington (state)
Landmarks in Washington (state)
Buildings and structures in Okanogan County, Washington
Tourist attractions in Okanogan County, Washington
Historic American Engineering Record in Washington (state)
Historic Civil Engineering Landmarks
Pumped-storage hydroelectric power stations in the United States
Gravity dams
United States Bureau of Reclamation dams
Dams completed in 1942
Energy infrastructure completed in 1942
Energy infrastructure completed in 1974
1942 establishments in Washington (state)
Articles containing video clips
Public Works Administration in Washington (state)
Dams with fish ladders | Grand Coulee Dam | [
"Engineering"
] | 7,731 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
353,805 | https://en.wikipedia.org/wiki/Glycogenolysis | Glycogenolysis is the breakdown of glycogen (n) to glucose-1-phosphate and glycogen (n-1). Glycogen branches are catabolized by the sequential removal of glucose monomers via phosphorolysis, by the enzyme glycogen phosphorylase.
Mechanism
In the muscles, glycogenolysis begins due to the binding of cAMP to phosphorylase kinase, converting the latter to its active form so it can convert phosphorylase b to phosphorylase a, which is responsible for catalyzing the breakdown of glycogen.
The overall reaction for the breakdown of glycogen to glucose-1-phosphate is:
glycogen(n residues) + Pi ⇌ glycogen(n-1 residues) + glucose-1-phosphate
Here, glycogen phosphorylase cleaves the bond linking a terminal glucose residue to a glycogen branch by substitution of a phosphoryl group for the α[1→4] linkage.
Glucose-1-phosphate is converted to glucose-6-phosphate (which often ends up in glycolysis) by the enzyme phosphoglucomutase.
Glucose residues are phosphorolysed from branches of glycogen until four residues before a glucose that is branched with an α[1→6] linkage. Glycogen debranching enzyme then transfers three of the remaining four glucose units to the end of another glycogen branch. This exposes the α[1→6] branching point, which is hydrolysed by α[1→6] glucosidase, removing the final glucose residue of the branch as a molecule of glucose and eliminating the branch. This is the only case in which a glycogen metabolite is not glucose-1-phosphate. The glucose is subsequently phosphorylated to glucose-6-phosphate by hexokinase.
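For reference, the two phosphate conversions mentioned above can be written as the following standard textbook reactions (shown here for clarity):
glucose-1-phosphate ⇌ glucose-6-phosphate (catalyzed by phosphoglucomutase)
glucose + ATP → glucose-6-phosphate + ADP (catalyzed by hexokinase)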
Enzymes
Glycogen phosphorylase with Pyridoxal phosphate as prosthetic group
Alpha-1,4 → alpha-1,4 glucan transferase
Alpha-1,6-glucosidase
Phosphoglucomutase
Glucose-6-phosphatase (absent in muscles)
Function
Glycogenolysis takes place in the cells of the muscle and liver tissues in response to hormonal and neural signals. In particular, glycogenolysis plays an important role in the fight-or-flight response and the regulation of glucose levels in the blood.
In myocytes (muscle cells), glycogen degradation serves to provide an immediate source of glucose-6-phosphate for glycolysis, to provide energy for muscle contraction. Glucose-6-phosphate can not pass through the cell membrane, and is therefore used solely by the myocytes that produce it.
In hepatocytes (liver cells), the main purpose of the breakdown of glycogen is for the release of glucose into the bloodstream for uptake by other cells. The phosphate group of glucose-6-phosphate is removed by the enzyme glucose-6-phosphatase, which is not present in myocytes, and the free glucose exits the cell via GLUT2 facilitated diffusion channels in the hepatocyte cell membrane.
Regulation
Glycogenolysis is regulated hormonally in response to blood sugar levels by glucagon and insulin, and stimulated by epinephrine during the fight-or-flight response. Insulin potently inhibits glycogenolysis.
In myocytes, glycogen degradation may also be stimulated by neural signals; glycogenolysis is regulated by epinephrine and calcium released by the sarcoplasmic reticulum.
Glucagon has no effect on muscle glycogenolysis.
Calcium binds with calmodulin and the complex activates phosphorylase kinase.
Clinical significance
Parenteral (intravenous) administration of glucagon is a common human medical intervention in diabetic emergencies when sugar cannot be given orally. It can also be administered intramuscularly.
Pathology
See also
Glycogenesis
References
External links
The chemical logic of glycogen degradation at ufp.pt
Biochemical reactions
Carbohydrate metabolism
Diabetes
Hepatology | Glycogenolysis | [
"Chemistry",
"Biology"
] | 928 | [
"Carbohydrate metabolism",
"Biochemical reactions",
"Carbohydrate chemistry",
"Biochemistry",
"Metabolism"
] |
353,849 | https://en.wikipedia.org/wiki/Hydraulic%20mining | Hydraulic mining is a form of mining that uses high-pressure jets of water to dislodge rock material or move sediment. In the placer mining of gold or tin, the resulting water-sediment slurry is directed through sluice boxes to remove the gold. It is also used in mining kaolin and coal.
Hydraulic mining developed from ancient Roman techniques that used water to excavate soft underground deposits. Its modern form, using pressurized water jets produced by a nozzle called a "monitor", came about in the 1850s during the California Gold Rush in the United States. Though successful in extracting gold-rich minerals, the widespread use of the process resulted in extensive environmental damage, such as increased flooding and erosion, and sediment blocking waterways and covering farm fields. These problems led to its legal regulation. Hydraulic mining has been used in various forms around the world.
History
Ground Sluicing
Hydraulic mining had its precursor in the practice of ground sluicing, a development of which is also known as "hushing", in which surface streams of water were diverted so as to erode gold-bearing gravels. This technique was developed in the first centuries BC and AD by Roman miners to erode away alluvium. The Romans used ground sluicing to remove overburden and the gold-bearing debris in Las Médulas of Spain, and Dolaucothi in Great Britain. The method was also used in Elizabethan England and Wales (and rarely, Scotland) for developing lead, tin and copper mines.
Water was used on a large scale by Roman engineers in the first centuries BC and AD when the Roman empire was expanding rapidly in Europe. Using a process later known as hushing, the Romans stored a large volume of water in a reservoir immediately above the area to be mined; the water was then quickly released. The resulting wave of water removed overburden and exposed bedrock. Gold veins in the bedrock were then worked using a number of techniques, and water power was used again to remove debris. The remains at Las Médulas and in surrounding areas show badland scenery on a gigantic scale owing to hydraulicking of the rich alluvial gold deposits.
Las Médulas is now a UNESCO World Heritage Site. The site shows the remains of at least seven large aqueducts of up to in length feeding large supplies of water into the site. The gold-mining operations were described in vivid terms by Pliny the Elder in his Natural History published in the first century AD. Pliny was a procurator in Hispania Terraconensis in the 70s AD and witnessed the operations himself. The use of hushing has been confirmed by field survey and archaeology at Dolaucothi in South Wales, the only known Roman gold mine in Great Britain.
California Gold Rush
The modern form of hydraulic mining, using jets of water directed under very high pressure through hoses and nozzles at gold-bearing upland paleogravels, was first used by Edward Matteson near Nevada City, California in 1853 during the California Gold Rush. Matteson used canvas hose which was later replaced with crinoline hose by the 1860s. In California, hydraulic mining often brought water from higher locations for long distances to holding ponds several hundred feet above the area to be mined. California hydraulic mining exploited gravel deposits, making it a form of placer mining.
Early placer miners in California discovered that the more gravel they could process, the more gold they were likely to find. Instead of working with pans, sluice boxes, long toms, and rockers, miners collaborated to find ways to process larger quantities of gravel more rapidly. Hydraulic mining became the largest-scale, and most devastating, form of placer mining. Water was redirected into an ever-narrowing channel, through a large canvas hose, and out through a giant iron nozzle, called a "monitor". The extremely high pressure stream was used to wash entire hillsides through enormous sluices.
By the early 1860s, while hydraulic mining was at its height, small-scale placer mining had largely exhausted the rich surface placers, and the mining industry turned to hard rock (called quartz mining in California) or hydraulic mining, which required larger organizations and much more capital. By the mid-1880s, it is estimated that 11 million ounces of gold (worth approximately US$7.5 billion at mid-2006 prices) had been recovered by hydraulic mining.
Environmental impacts
While generating millions of dollars in tax revenues for the state and supporting a large population of miners in the mountains, hydraulic mining had a devastating effect on riparian natural environment and agricultural systems in California. Millions of tons of earth and water were delivered to mountain streams that fed rivers flowing into the Sacramento Valley. Once the rivers reached the relatively flat valley, the water slowed, the rivers widened, and the sediment was deposited in the floodplains and river beds causing them to rise, shift to new channels, and overflow their banks, causing major flooding, especially during the spring melt.
Cities and towns in the Sacramento Valley experienced an increasing number of devastating floods, while the rising riverbeds made navigation on the rivers increasingly difficult. Perhaps no other city experienced the boon and the bane of gold mining as much as Marysville. Situated at the confluence of the Yuba and Feather rivers, Marysville was the final "jumping off" point for miners heading to the northern foothills to seek their fortune. Steamboats from San Francisco, carrying miners and supplies, navigated up the Sacramento River, then the Feather River to Marysville where they would unload their passengers and cargo.
Marysville eventually constructed a complex levee system to protect the city from floods and sediment. Hydraulic mining greatly exacerbated the problem of flooding in Marysville and shoaled the waters of the Feather River so severely that few steamboats could navigate from Sacramento to the Marysville docks. The sediment left by such efforts were reprocessed by mining dredges at the Yuba Goldfields, located near Marysville.
The spectacular eroded landscape left at the site of hydraulic mining can be viewed at Malakoff Diggins State Historic Park in Nevada County, California.
The San Francisco Bay became an outlet for polluting byproducts during the Gold Rush. Hydraulic mining left a trail of toxic waste, called "slickens," that flowed from mine sites in the Sierras through the Sacramento River and into the San Francisco Bay. The slickens would contain harmful metals such as mercury. During this period, the industrial mining industry released 1.5 billion yards of toxic slickens into the Sacramento River. As the slickens traveled through California's water arteries, it deposited its toxins into local ecosystems and waterways.
Nearby farmland became contaminated, which led to political pushback against the use of hydraulic mining. The slickens flowed through the Sacramento River before depositing itself into the San Francisco Bay. Currently, the San Francisco Bay remains dangerously contaminated with mercury. Estimates suggest that it will be another century before the Bay naturally removes the mercury from its system.
Legal action landmark case
Vast areas of farmland in the Sacramento Valley were deeply buried by the mining sediment. Frequently devastated by flood waters, farmers demanded an end to hydraulic mining. In the most renowned legal fight of farmers against miners, the farmers sued the hydraulic mining operations and the landmark case of Woodruff v. North Bloomfield Mining and Gravel Company made its way to the United States District Court in San Francisco where Judge Lorenzo Sawyer decided in favor of the farmers and limited hydraulic mining on January 7, 1884, declaring that hydraulic mining was "a public and private nuisance" and enjoining its operation in areas tributary to navigable streams and rivers.
Hydraulic mining on a much smaller scale was recommenced after 1893 when the United States Congress passed the Camminetti Act which allowed licensed mining operations if sediment retention structures were constructed. This led to a number of operations above sediment catching brush dams and log crib dams. Most of the water-delivery hydraulic mining infrastructure had been destroyed by an 1891 flood, so this later stage of mining was carried on at a much smaller scale in California.
Beyond California
Although often associated with California due to its adoption and widespread use there, the technology was exported widely, to Oregon (Jacksonville in 1856), Colorado (Clear Creek, Central City and Breckenridge in 1860), Montana (Bannack in 1865), Arizona (Lynx Creek in 1868), Idaho (Idaho City in 1863), South Dakota (Deadwood in 1876), Alaska (Fairbanks in 1920), British Columbia (Canada), and overseas. It was used extensively in Dahlonega, Georgia and continues to be used in developing nations, often with devastating environmental consequences. The devastation caused by this method of mining caused Edwin Carter, the "Log Cabin Naturalist", to switch from mining to collecting wildlife specimens from 1875–1900 in Breckenridge, Colorado, US.
Hydraulic mining was used during the Australian gold rushes where it was called hydraulic sluicing. One notable location was at the Oriental Claims near Omeo in Victoria where it was used between the 1850s and early 1900s, with abundant evidence of the damage still being visible today.
Hydraulic mining was used extensively in the Central Otago gold rush that took place in the 1860s in the South Island of New Zealand, where it was also known as sluicing.
Starting in the 1870s, hydraulic mining became a mainstay of alluvial tin mining on the Malay Peninsula.
Hydraulicking was formerly used in Polk County, Florida to mine phosphate rock.
Contemporary usage
In addition to its use in true mining, hydraulic mining can be used as an excavation technique, principally to demolish hills. For example, the Denny Regrade in Seattle was largely accomplished by hydraulic mining.
Hydraulic mining is the principal way that kaolinite clay is mined in Cornwall and Devon, in South-West England.
Egypt used hydraulic mining methods to breach the Bar Lev Line sand wall at the Suez Canal, in Operation Badr (1973) which opened the Yom Kippur War.
Rand gold fields
On the South African Rand gold fields, a gold surface tailings re-treatment facility called East Rand Gold and Uranium Company (ERGO) has been in operation since 1977. The facility uses hydraulic monitors to create slurry from older (and consequently richer) tailings sites and pumps it long distances to a concentration plant.
The facility processes nearly two million tons of tailings each month at a processing cost of below US$3.00/t (2013). Gold is recovered at a rate of only 0.20 g/t, but the low yield is compensated for by the extremely low cost of processing, with no risky or expensive mining or milling required for recovery.
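As a rough illustration of why such a low grade can still be economic, the sketch below combines the figures above with an assumed gold price of about US$40 per gram (roughly US$1,250 per troy ounce); the price is an assumption for illustration only.

```python
# Back-of-envelope economics of the tailings re-treatment described above.
# The gold price is an assumed figure for illustration only.
tonnes_per_month = 2_000_000          # tailings processed per month
recovery_g_per_tonne = 0.20           # stated recovery grade
processing_cost_per_tonne = 3.00      # stated upper bound, US$ (2013)
assumed_gold_price_per_gram = 40.0    # assumption: roughly US$1,250 per troy ounce

gold_grams = tonnes_per_month * recovery_g_per_tonne          # 400,000 g (about 400 kg)
processing_cost = tonnes_per_month * processing_cost_per_tonne
gross_value = gold_grams * assumed_gold_price_per_gram

print(f"gold recovered per month ≈ {gold_grams / 1000:.0f} kg")
print(f"processing cost ≤ ${processing_cost / 1e6:.0f} million; gross value ≈ ${gross_value / 1e6:.0f} million")
```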
The resulting slimes are pumped further away from the built-up areas permitting the economic development of land close to commercially valuable areas and previously covered by the tailings. The historic yellow-coloured mine dumps around Johannesburg are now almost a rarity, seen only in older photographs.
Uranium and pyrite (for sulfuric acid production) are also available for recovery from the process stream as co-products under suitable economic conditions.
Underground hydraulic mining
High-pressure water jets have also been used in the underground mining of coal, to break up the coal seam and wash the resulting coal slurry toward a collection point. The high-pressure water nozzle is referred to as the 'hydro monitor'.
See also
Hydrology
Hydropower
Hydraulic fracturing, use of high-pressure water in oil and gas drilling
Pressure washer, similar use of high-pressure jets of water
Water jet cutter, similar use of high-pressure jets of water
Cigar Lake Mine uses a similar method of high-pressure water to mine uranium
Borehole mining, remotely operated with similar use of high-pressure water jets.
References
Hydraulic Mining in California: A Tarnished Legacy, by Powell Greenland, 2001
Battling the Inland Sea: American Political Culture, Public Policy, and the Sacramento Valley: 1850–1986, University of California Press, 395 pp.
Gold vs. Grain: The Hydraulic Mining Controversy in California's Sacramento Valley, by Robert L. Kelley, 1959
Lewis, P. R. and G. D. B. Jones, Roman gold-mining in north-west Spain, Journal of Roman Studies 60 (1970): 169–85
California Gold Rush
History of mining
Hydraulic engineering
Surface mining | Hydraulic mining | [
"Physics",
"Engineering",
"Environmental_science"
] | 2,512 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
353,855 | https://en.wikipedia.org/wiki/Outline%20of%20linear%20algebra | <noinclude>This is an outline of topics related to linear algebra, the branch of mathematics concerning linear equations and linear maps and their representations in vector spaces and through matrices.
Linear equations
Linear equation
System of linear equations
Determinant
Minor
Cauchy–Binet formula
Cramer's rule
Gaussian elimination
Gauss–Jordan elimination
Overcompleteness
Strassen algorithm
Matrices
Matrix
Matrix addition
Matrix multiplication
Basis transformation matrix
Characteristic polynomial
Trace
Eigenvalue, eigenvector and eigenspace
Cayley–Hamilton theorem
Spread of a matrix
Jordan normal form
Weyr canonical form
Rank
Matrix inversion, invertible matrix
Pseudoinverse
Adjugate
Transpose
Dot product
Symmetric matrix
Orthogonal matrix
Skew-symmetric matrix
Conjugate transpose
Unitary matrix
Hermitian matrix, Antihermitian matrix
Positive-definite, positive-semidefinite matrix
Pfaffian
Projection
Spectral theorem
Perron–Frobenius theorem
List of matrices
Diagonal matrix, main diagonal
Diagonalizable matrix
Triangular matrix
Tridiagonal matrix
Block matrix
Sparse matrix
Hessenberg matrix
Hessian matrix
Vandermonde matrix
Stochastic matrix
Toeplitz matrix
Circulant matrix
Hankel matrix
(0,1)-matrix
Matrix decompositions
Matrix decomposition
Cholesky decomposition
LU decomposition
QR decomposition
Polar decomposition
Reducing subspace
Spectral theorem
Singular value decomposition
Higher-order singular value decomposition
Schur decomposition
Schur complement
Haynsworth inertia additivity formula
Relations
Matrix equivalence
Matrix congruence
Matrix similarity
Matrix consimilarity
Row equivalence
Computations
Elementary row operations
Householder transformation
Least squares, linear least squares
Gram–Schmidt process
Woodbury matrix identity
Vector spaces
Vector space
Linear combination
Linear span
Linear independence
Scalar multiplication
Basis
Change of basis
Hamel basis
Cyclic decomposition theorem
Dimension theorem for vector spaces
Hamel dimension
Examples of vector spaces
Linear map
Shear mapping or Galilean transformation
Squeeze mapping or Lorentz transformation
Linear subspace
Row and column spaces
Column space
Row space
Cyclic subspace
Null space, nullity
Rank–nullity theorem
Nullity theorem
Dual space
Linear function
Linear functional
Category of vector spaces
Structures
Topological vector space
Normed vector space
Inner product space
Euclidean space
Orthogonality
Orthogonal complement
Orthogonal projection
Orthogonal group
Pseudo-Euclidean space
Null vector
Indefinite orthogonal group
Orientation (geometry)
Improper rotation
Symplectic structure
Multilinear algebra
Multilinear algebra
Tensor
Classical treatment of tensors
Component-free treatment of tensors
Gamas's Theorem
Outer product
Tensor algebra
Exterior algebra
Symmetric algebra
Clifford algebra
Geometric algebra
Topics related to affine spaces
Affine space
Affine transformation
Affine group
Affine geometry
Affine coordinate system
Flat (geometry)
Cartesian coordinate system
Euclidean group
Poincaré group
Galilean group
Projective space
Projective space
Projective transformation
Projective geometry
Projective linear group
Quadric and conic section
See also
Glossary of linear algebra
Glossary of tensor theory
Linear algebra
Linear algebra | Outline of linear algebra | [
"Mathematics"
] | 568 | [
"Linear algebra",
"nan",
"Algebra"
] |
353,880 | https://en.wikipedia.org/wiki/Computer%20art | Computer art is art in which computers play a role in the production or display of the artwork. Such art can be an image, sound, animation, video, CD-ROM, DVD-ROM, video game, website, algorithm, performance or gallery installation. Many traditional disciplines are now integrating digital technologies and, as a result, the lines between traditional works of art and new media works created using computers has been blurred. For instance, an artist may combine traditional painting with algorithm art and other digital techniques. As a result, defining computer art by its end product can thus be difficult. Computer art is bound to change over time since changes in technology and software directly affect what is possible.
Origin of the term
On the title page of the magazine Computers and Automation, January 1963, Edmund Berkeley published a picture by Efraim Arazi from 1962, coining for it the term "computer art." This picture inspired him to initiate the first Computer Art Contest in 1963. The annual contest was a key point in the development of computer art up to the year 1973.
History
The precursor of computer art dates back to 1956–1958, with the generation of what is probably the first image of a human being on a computer screen, a (George Petty-inspired) pin-up girl at a SAGE air defense installation. Desmond Paul Henry created his first electromechanical Henry Drawing Machine in 1961, using an adapted analogue Bombsight Computer. His drawing machine-generated artwork was shown at the Reid Gallery in London in 1962 after his traditional, non-machine artwork won him the privilege of a one-man exhibition there. It was artist L. S. Lowry who encouraged Henry to include examples of his machine-generated art in the Reid Gallery exhibition.
By the mid-1960s, most individuals involved in the creation of computer art were in fact engineers and scientists because they had access to the only computing resources available at university scientific research labs. Many artists tentatively began to explore the emerging computing technology for use as a creative tool. In the summer of 1962, A. Michael Noll programmed a digital computer at Bell Telephone Laboratories in Murray Hill, New Jersey to generate visual patterns solely for artistic purposes. His later computer-generated patterns simulated paintings by Piet Mondrian and Bridget Riley and became classics. Noll also used the patterns to investigate aesthetic preferences in the mid-1960s.
The two early exhibitions of computer art were held in 1965: Generative Computergrafik, February 1965, at the Technische Hochschule in Stuttgart, Germany, and Computer-Generated Pictures, April 1965, at the Howard Wise Gallery in New York. The Stuttgart exhibit featured work by Georg Nees; the New York exhibit featured works by Bela Julesz and A. Michael Noll and was reviewed as art by The New York Times. A third exhibition was put up in November 1965 at Galerie Wendelin Niedlich in Stuttgart, Germany, showing works by Frieder Nake and Georg Nees. Analogue computer art by Maughan Mason along with digital computer art by Noll were exhibited at the AFIPS Fall Joint Computer Conference in Las Vegas toward the end of 1965.
In 1968, the Institute of Contemporary Arts (ICA) in London hosted one of the most influential early exhibitions of computer art called Cybernetic Serendipity. The exhibition, curated by Jasia Reichardt, included many of those often regarded as the first digital artists, Nam June Paik, Frieder Nake, Leslie Mezei, Georg Nees, A. Michael Noll, John Whitney, and Charles Csuri. One year later, the Computer Arts Society was founded, also in London.
At the time of the opening of Cybernetic Serendipity, in August 1968, a symposium was held in Zagreb, Yugoslavia, under the title "Computers and visual research". It took up the European artists movement of New Tendencies that had led to three exhibitions (in 1961, 63, and 65) in Zagreb of concrete, kinetic, and constructive art as well as op art and conceptual art. New Tendencies changed its name to "Tendencies" and continued with more symposia, exhibitions, a competition, and an international journal (bit international) until 1973.
Katherine Nash and Richard Williams published Computer Program for Artists: ART 1 in 1970.
Xerox Corporation's Palo Alto Research Center (PARC) designed the first Graphical User Interface (GUI) in the 1970s. The first Macintosh computer was released in 1984; since then the GUI has become popular. Many graphic designers quickly accepted its capacity as a creative tool.
Andy Warhol created digital art using an Amiga when the computer was publicly introduced at the Lincoln Center, New York in July 1985. An image of Debbie Harry was captured in monochrome from a video camera and digitized into a graphics program called ProPaint. Warhol manipulated the image adding colour by using flood fills.
Output devices
Formerly, technology restricted output and print results. Early machines used pen-and-ink plotters to produce basic hard copy.
In the early 1960s, the Stromberg Carlson SC-4020 microfilm printer was used at Bell Telephone Laboratories as a plotter to produce digital computer art and animation on 35-mm microfilm. Still images were drawn on the face plate of the cathode ray tube and automatically photographed. A series of still images were drawn to create a computer-animated movie, early on a roll of 35-mm film and then on 16-mm film as a 16-mm camera was later added to the SC-4020 printer.
In the 1970s, the dot matrix printer (which uses a print head hitting an ink ribbon somewhat like a typewriter) was used to reproduce varied fonts and arbitrary graphics. The first animations were created by plotting all still frames sequentially on a stack of paper, with motion transfer to 16-mm film for projection. During the 1970s and 1980s, dot matrix printers were used to produce most visual output while microfilm plotters were used for most early animation.
In 1976, the inkjet printer was invented with the increase in the use of personal computers. The inkjet printer is now the cheapest and most versatile option for everyday digital color output. Raster Image Processing (RIP) is typically built into the printer or supplied as a software package for the computer; it is required to achieve the highest quality output. Basic inkjet devices do not feature RIP. Instead, they rely on graphic software to rasterize images. The laser printer, though more expensive than the inkjet, is another affordable output device available today.
Graphic software
Adobe Systems, founded in 1982, developed the PostScript language and digital fonts, making drawing, painting, and image manipulation software popular. Adobe Illustrator, a vector drawing program based on the Bézier curve introduced in 1987, and Adobe Photoshop, written by brothers Thomas and John Knoll in 1990, were developed for use on Macintosh computers and compiled for DOS/Windows platforms by 1993.
Robot painting
A robot painting is an artwork painted by a robot. Raymond Auger's Painting Machine, made in 1962, was one of the first robotic painters as was AARON, an artificial intelligence/artist developed by Harold Cohen beginning in the late 1960s. Joseph Nechvatal began making large computer-robotic paintings in 1986. Artist Ken Goldberg created an 11' x 11' painting machine in 1992 and German artist Matthias Groebel also built his own robotic painting machine in the early 1990s.
Neural style transfer
Non-photorealistic rendering (using computers to automatically transform images into stylized art) has been a subject of research since the 1990s. Around 2015, neural style transfer using convolutional neural networks to transfer the style of an artwork onto a photograph or other target image became feasible. One method of style transfer involves using a framework such as VGG or ResNet to break the artwork style down into statistics about visual features. The target photograph is subsequently modified to match those statistics. Notable applications include Prisma, Facebook Caffe2Go style transfer, MIT's Nightmare Machine, and DeepArt.
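A minimal sketch of the "statistics about visual features" idea is the Gram (channel-correlation) matrix of a layer's feature maps, a statistic used in many style-transfer implementations. The Python example below uses random arrays as stand-ins for the feature maps that a pretrained network such as VGG would produce; in a full system the photograph is iteratively adjusted to reduce this style loss together with a content loss.

```python
import numpy as np

def gram_matrix(features):
    # features: CNN feature maps of shape (channels, height, width)
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)      # channel-by-channel correlations

def style_loss(style_features, target_features):
    # Squared Frobenius distance between the two Gram matrices
    diff = gram_matrix(style_features) - gram_matrix(target_features)
    return float(np.sum(diff ** 2))

# Random arrays stand in for feature maps from a pretrained network (e.g. VGG).
rng = np.random.default_rng(0)
artwork = rng.standard_normal((64, 32, 32))
photograph = rng.standard_normal((64, 32, 32))
print(style_loss(artwork, photograph))
```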
AI generated art
With the rise of AI image generators such as DALL-E 2, Flux, Midjourney, and others, a new area of AI-generated art has emerged. There is much controversy and debate over whether AI-generated art is actual art.
See also
3D printing art
Algorithm art
Artificial intelligence art
ASCII art
Digital painting
Digital art
Fractal art
Generative art
Glitch art
Internet art
New media art
Software art
Systems art
Video game art / Modding
References
Further reading
Honor Beddard and Douglas Dodds. (2009). Digital Pioneers. London: V&A Publishing.
Timothy Binkley. (1988/89). "The Computer is Not A Medium", Philosophic Exchange. Reprinted in EDB & kunstfag, Rapport Nr. 48, NAVFs EDB-Senter for Humanistisk Forskning. Translated as "L'ordinateur n'est pas un médium", Esthétique des arts médiatiques, Sainte-Foy, Québec: Presses de l'Université du Québec, 1995.
Timothy Binkley. (1997). "The Vitality of Digital Creation" The Journal of Aesthetics and Art Criticism, 55(2), Perspectives on the Arts and Technology, pp. 107–116.
Thomas Dreher: History of Computer Art
Virtual Art: From Illusion to Immersion (MIT Press/Leonardo Books) by Oliver Grau
Mark Hansen. (2004). New Philosophy for New Media. Cambridge, MA: MIT Press.
Dick Higgins. (1966). Intermedia. Reprinted in Donna De Salvo (ed.), Open Systems Rethinking Art , London: Tate Publishing, 2005.
Lieser, Wolf. Digital Art. Langenscheidt: h.f. ullmann. 2009
Lopes, Dominic McIver. (2009). A Philosophy of Computer Art. London: Routledge
Lev Manovich. (2002, October). Ten Key Texts on Digital Art: 1970–2000. Leonardo - Volume 35, Number 5, pp. 567–569.
Frieder Nake. (2009, Spring). The Semiotic Engine: Notes on the History of Algorithmic Images in Europe. Art Journal, pp. 76–89.
Perry M., Margoni T., (2010) From music tracks to Google maps: Who owns computer-generated works? in Computer Law and Security Review, Vol. 26, pp. 621–629, 2010
Edward A. Shanken. (2009). Art and Electronic Media. London: Phaidon.
Grant D. Taylor (2014). When The Machine Made Art: The Troubled History of Computer Art. New York: Bloomsbury.
External links
Postmodern art
Contemporary art movements
Creativity techniques
The arts
Multimedia | Computer art | [
"Technology"
] | 2,200 | [
"Multimedia"
] |
353,890 | https://en.wikipedia.org/wiki/Universality%20class | In statistical mechanics, a universality class is a collection of mathematical models which share a single scale-invariant limit under the process of renormalization group flow. While the models within a class may differ dramatically at finite scales, their behavior will become increasingly similar as the limit scale is approached. In particular, asymptotic phenomena such as critical exponents will be the same for all models in the class.
Some well-studied universality classes are the ones containing the Ising model or the percolation theory at their respective phase transition points; these are both families of classes, one for each lattice dimension. Typically, a family of universality classes will have a lower and upper critical dimension: below the lower critical dimension, the universality class becomes degenerate (this dimension is 2d for the Ising model, or for directed percolation, but 1d for undirected percolation), and above the upper critical dimension the critical exponents stabilize and can be calculated by an analog of mean-field theory (this dimension is 4d for Ising or for directed percolation, and 6d for undirected percolation).
List of critical exponents
Critical exponents are defined in terms of the variation of certain physical properties of the system near its phase transition point. These physical properties will include its reduced temperature , its order parameter measuring how much of the system is in the "ordered" phase, the specific heat, and so on.
The exponent is the exponent relating the specific heat C to the reduced temperature: we have . The specific heat will usually be singular at the critical point, but the minus sign in the definition of allows it to remain positive.
The exponent relates the order parameter to the temperature. Unlike most critical exponents it is assumed positive, since the order parameter will usually be zero at the critical point. So we have .
The exponent relates the temperature with the system's response to an external driving force, or source field. We have , with J the driving force.
The exponent relates the order parameter to the source field at the critical temperature, where this relationship becomes nonlinear. We have (hence ), with the same meanings as before.
The exponent relates the size of correlations (i.e. patches of the ordered phase) to the temperature; away from the critical point these are characterized by a correlation length . We have .
The exponent measures the size of correlations at the critical temperature. It is defined so that the correlation function scales as .
The exponent , used in percolation theory, measures the size of the largest clusters (roughly, the largest ordered blocks) at 'temperatures' (connection probabilities) below the critical point. So .
The exponent , also from percolation theory, measures the number of size s clusters far from (or the number of clusters at criticality): , with the factor removed at critical probability.
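For the thermal exponents above, the conventional defining relations can be written compactly as follows; the symbol choices (α, β, γ, δ, ν, η, with τ the reduced temperature, ψ the order parameter, J the source field and ξ the correlation length) follow the standard textbook convention and are assumed here for illustration.

```latex
% Standard conventions assumed: reduced temperature \tau, order parameter \psi,
% source field J, correlation length \xi, spatial dimension d.
C \sim |\tau|^{-\alpha}, \qquad
\psi \sim (-\tau)^{\beta}, \qquad
\chi = \frac{\partial \psi}{\partial J}\bigg|_{J \to 0} \sim |\tau|^{-\gamma},
\qquad \psi \sim J^{1/\delta} \text{ at } \tau = 0,
\qquad \xi \sim |\tau|^{-\nu}, \qquad
G(r) \sim r^{-(d - 2 + \eta)} \text{ at } \tau = 0.
```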
For symmetries, the group listed gives the symmetry of the order parameter. The group is the dihedral group, the symmetry group of the n-gon, is the n-element symmetric group, is the octahedral group, and is the orthogonal group in n dimensions. 1 is the trivial group.
References
External links
Universality classes from Sklogwiki
Zinn-Justin, Jean (2002). Quantum field theory and critical phenomena, Oxford, Clarendon Press (2002),
Critical phenomena
Renormalization group
Scale-invariant systems | Universality class | [
"Physics",
"Materials_science",
"Mathematics"
] | 714 | [
"Scaling symmetries",
"Physical phenomena",
"Critical phenomena",
"Renormalization group",
"Scale-invariant systems",
"Condensed matter physics",
"Statistical mechanics",
"Symmetry",
"Dynamical systems"
] |
353,891 | https://en.wikipedia.org/wiki/Full-time%20equivalent | Full-time equivalent (FTE), or whole time equivalent (WTE), is a unit of measurement that indicates the workload of an employed person (or student) in a way that makes workloads or class loads comparable across various contexts. FTE is often used to measure a worker's or student's involvement in a project, or to track cost reductions in an organization. An FTE of 1.0 is equivalent to a full-time worker or student, while an FTE of 0.5 signals half of a full work or school load.
In government
United States
According to the federal government of the United States, FTE is defined by the Government Accountability Office (GAO) as the number of total hours worked divided by the maximum number of compensable hours in a full-time schedule as defined by law. For example, if the normal schedule for a quarter is defined as 411.25 hours ([35 hours per week × (52 weeks per year – 5 weeks' regulatory vacation)] / 4), then someone working 100 hours during that quarter represents 100/411.25 = 0.24 FTE. Two employees working in total 400 hours during that same quarterly period represent 0.97 FTE.
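As a minimal illustration of this definition (not an official formula), the example figures above can be reproduced as follows:

```python
# Sketch of the GAO-style FTE calculation described above.
def fte(hours_worked, compensable_hours):
    """Total hours worked divided by the full-time compensable hours in the period."""
    return hours_worked / compensable_hours

# Quarterly full-time schedule from the example:
# 35 hours/week over (52 - 5) weeks, spread evenly over 4 quarters.
quarter_hours = 35 * (52 - 5) / 4          # 411.25 hours

print(round(fte(100, quarter_hours), 2))   # one worker, 100 hours -> 0.24 FTE
print(round(fte(400, quarter_hours), 2))   # two workers, 400 hours in total -> 0.97 FTE
```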
The U.S. Office of Management and Budget, or OMB, the president's budget office, will often place upper limits on the total number of FTE that a given agency may utilize each year. In the past, if agencies were given a ceiling on the actual number of employed workers, which was reported on a given day of the year, the agency could employ more than this number for much of the year. Then, as the reporting deadline approached, employees could be let go to reduce the total number to the authorized ceiling on the reporting date. Providing agencies with an FTE ceiling, which is calculated based on the total number of hours worked by all employees throughout the year, irrespective of the total numbers employed at any point in time, prevents agencies from using such a strategy.
Although the generally accepted human-resources meaning for the "E" in FTE is "equivalent", the term is often overloaded in colloquial usage to indicate a "direct, as opposed to contract, full-time employee".
The term WYE (work year equivalent) is often used instead of FTE when describing the contractor work.
United Kingdom
In the United Kingdom, full time equivalent equates to the standard 40-hour work week: eight hours per day, five days per week and is the total amount of hours that a single full-time employee has worked over any period. This allows employers to adopt a single metric for comparison with the full-time average. For example, a full week of 40 hours has an FTE value of 1.0, so a person working 20 hours would have an FTE value of 0.5. Certain industries may adopt 35 hours, depending on the company, its location and the nature of work. Whole-time equivalent (WTE) is the same as FTE and applies also to students in education.
In education
Full-time equivalent students is one of the key metrics for measuring enrollment in colleges and universities. The measure is often annualized to cover the average annual full-time equivalent students and is designated by the acronym AAFTE.
Academics can increase their contribution by adopting a number of strategies:
(a) increase class size;
(b) teach new classes;
(c) supervise more projects;
(d) supervise more researchers.
The latter strategy has the advantage of contributing to another key metric in universities—creating new knowledge and in particular publishing papers in highly ranked academic journals. It is also linked to another key metric—research funding which is often required to attract researchers.
Australia
In Australia, the equivalent to FTE for students is EFTSL (Equivalent Full-Time Student Load).
Example
A professor teaches two undergraduate courses, supervises two undergraduate projects and supervises four researchers by thesis only (i.e. researchers do not take any courses). Each undergraduate course is worth one-tenth of all credits for the undergraduate program (i.e. 0.1 FTE). An undergraduate project is worth two-tenths of all credits for the undergraduate program (i.e. 0.2 FTE). A research thesis is worth all of the credits for the graduate program (i.e. 1 FTE). The professor's contribution is 29.4 FTEs:
To encourage more research some universities offer 2 FTEs or even 3 FTEs for each full-time researcher.
References
External links
National Park Service Budget Glossary
Business terms
Education economics
Equivalent units
Metrics
Workplace | Full-time equivalent | [
"Mathematics"
] | 958 | [
"Equivalent quantities",
"Metrics",
"Quantity",
"Equivalent units",
"Units of measurement"
] |
354,042 | https://en.wikipedia.org/wiki/Malachite | Malachite is a copper carbonate hydroxide mineral, with the formula Cu2CO3(OH)2. This opaque, green-banded mineral crystallizes in the monoclinic crystal system, and most often forms botryoidal, fibrous, or stalagmitic masses, in fractures and deep, underground spaces, where the water table and hydrothermal fluids provide the means for chemical precipitation. Individual crystals are rare, but occur as slender to acicular prisms. Pseudomorphs after more tabular or blocky azurite crystals also occur.
Etymology and history
The stone's name derives (via , , and Middle English melochites) from Greek Μολοχίτης λίθος molochites lithos, "mallow-green stone", from μολόχη molochē, variant of μαλάχη malāchē, "mallow". The mineral was given this name due to its resemblance to the leaves of the mallow plant. Copper (Cu2+) gives malachite its green color.
Malachite was mined from deposits near the Isthmus of Suez and the Sinai as early as 4000 BCE.
It was extensively mined at the Great Orme Mines in Britain 3,800 years ago, using stone and bone tools. Archaeological evidence indicates that mining activity ended , with up to 1,760 tonnes of copper being produced from the mined malachite.
Archaeological evidence indicates that the mineral has been mined and smelted to obtain copper at Timna Valley in Israel for more than 3,000 years. Since then, malachite has been used as both an ornamental stone and as a gemstone.
The use of azurite and malachite as copper ore indicators led indirectly to the name of the element nickel in the English language. Nickeline, a principal ore of nickel that is also known as niccolite, weathers at the surface into a green mineral (annabergite) that resembles malachite. This resemblance resulted in occasional attempts to smelt nickeline in the belief that it was copper ore, but such attempts always ended in failure due to high smelting temperatures needed to reduce nickel. In Germany this deceptive mineral came to be known as kupfernickel, literally "copper demon." The Swedish alchemist Baron Axel Fredrik Cronstedt (who had been trained by Georg Brandt, the discoverer of the nickel-like metal cobalt) realized that there was probably a new metal hiding within the kupfernickel ore, and in 1751 he succeeded in smelting kupfernickel to produce a previously unknown (except in certain meteorites) silvery white, iron-like metal. Logically, Cronstedt named his new metal after the nickel part of kupfernickel.
Occurrence
Malachite often results from the supergene weathering and oxidation of primary sulfidic copper ores, and is often found with azurite (Cu3(CO3)2(OH)2), goethite, and calcite. Except for its vibrant green color, the properties of malachite are similar to those of azurite and aggregates of the two minerals occur frequently. Malachite is more common than azurite and is typically associated with copper deposits around limestones, the source of the carbonate.
Large quantities of malachite have been mined in the Urals, Russia. Ural malachite is not being mined, but G. N. Vertushkova reports the possible discovery of new deposits of malachite in the Urals. It is found worldwide, including in the Democratic Republic of the Congo; Gabon; Zambia; Tsumeb, Namibia; Mexico; Broken Hill, New South Wales; Burra, South Australia; Lyon, France; Timna Valley, Israel; and the Southwestern United States, most notably in Arizona.
Anthropogenic malachite was historically believed to be the primary component of the patina which forms on copper and copper alloy structures exposed to open-air weathering; however, atmospheric sources of sulfate and chloride (such as air pollution or sea winds) typically favour the formation of brochantite or atacamite. Malachite can also be produced synthetically, in which case it is referred to as basic copper carbonate or green verditer.
Structure
Malachite crystallizes in the monoclinic system. The structure consists of chains of alternating Cu2+ ions and OH− ions, with a net positive charge, woven between isolated triangular CO32− ions. Thus each copper ion is conjugated to two hydroxyl ions and two carbonate ions; each hydroxyl ion is conjugated with two copper ions; and each carbonate ion is conjugated with six copper ions.
Use
Malachite was used as a mineral pigment in green paints from antiquity until 1800. The pigment is moderately lightfast, sensitive to acids, and varying in color. This natural form of green pigment has been replaced by its synthetic form, verditer, among other synthetic greens.
Malachite is also used for decorative purposes, such as in wands and the Malachite Room in the Hermitage Museum, which features a huge malachite vase, and the Malachite Room in Castillo de Chapultepec in Mexico City. Another example is the Demidov Vase, part of the former Demidov family collection, and now in the Metropolitan Museum of Art. "The Tazza", a large malachite vase, one of the largest pieces of malachite in North America and a gift from Tsar Nicholas II, stands as the focal point in the centre of the room of Linda Hall Library. In the time of Tsar Nicholas I, decorative pieces with malachite were among the most popular diplomatic gifts. It was used in China as far back as the Eastern Zhou period. The base of the FIFA World Cup Trophy has two layers of malachite.
Symbolism and superstitions
A 17th-century Spanish superstition held that having a child wear a lozenge of malachite would help them sleep, and keep evil spirits at bay. Marbodus recommended malachite as a talisman for young people because of its protective qualities and its ability to help with sleep. It has also historically been worn for protection from lightning and contagious diseases and for health, success, and constancy in the affections. During the Middle Ages it was customary to wear it engraved with a figure or symbol of the Sun to maintain health and to avert depression to which Capricorns were considered vulnerable.
In ancient Egypt the colour green (wadj) was associated with death and the power of resurrection as well as new life and fertility. Ancient Egyptians believed that the afterlife contained an eternal paradise, referred to as the "Field of Malachite", which resembled their lives but with no pain or suffering.
Ore uses
Simple methods of copper ore extraction from malachite involved thermodynamic processes such as smelting. This reaction involves the addition of heat and a carbon source: heating causes the carbonate to decompose, leaving copper oxide, and an additional carbon source such as coal then converts the copper oxide into copper metal.
The basic word equation for this reaction is:
Copper carbonate + heat → carbon dioxide + copper oxide (color changes from green to black).
Copper oxide + carbon → carbon dioxide + copper (color change from black to copper colored).
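In balanced form, these two steps correspond to the following standard equations (shown here for illustration; industrial practice involves additional impurities and side reactions):
Cu2CO3(OH)2 + heat → 2 CuO + CO2 + H2O
2 CuO + C → 2 Cu + CO2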
Malachite is a low-grade copper ore; however, due to increased demand for metals, more economic processing methods such as hydrometallurgy (using aqueous solutions such as sulfuric acid) are being used, as malachite is readily soluble in dilute acids. Sulfuric acid is the most common leaching agent for copper oxide ores like malachite and eliminates the need for smelting processes.
The chemical equation for sulfuric acid leaching of copper ore from malachite is as follows:
Cu2CO3(OH)2 + 2 H2SO4 → 2 CuSO4 + 3 H2O + CO2
Health and environmental concerns
Mining for malachite for ornamental or copper ore purposes involves open-pit mining or underground mining depending on the grade of the ore deposits. Open-pit and underground mining practices can cause environmental degradation through habitat and biodiversity loss. Acid mine drainage can contaminate water and food sources and negatively impact human health if improperly managed or if leaks from tailing ponds occur. The health and environmental risks of both traditional metallurgy and newer hydrometallurgical methods are significant; however, water conservation and waste management practices for hydrometallurgical ore extraction, such as that of malachite, are stricter and relatively more sustainable. New research is also being conducted on better alternatives to methods such as sulfuric acid leaching, which has high environmental impacts even under current hydrometallurgical regulations and innovations.
Gallery
See also
Aventurine
Brochantite
Chrysocolla
Dioptase
Green pigments
List of inorganic pigments
Plancheite
Pseudomalachite
Turquoise
Verdigris
References
External links
Virtual tour of the Malachite Room
Malachite, Colourlex
Malachite in art and malachite diplomacy
Carbonate minerals
Copper ores
Gemstones
Inorganic pigments
Minerals in space group 14
Monoclinic minerals | Malachite | [
"Physics",
"Chemistry"
] | 1,893 | [
"Inorganic compounds",
"Inorganic pigments",
"Materials",
"Gemstones",
"Matter"
] |
354,076 | https://en.wikipedia.org/wiki/Maternal%20death | Maternal death or maternal mortality is defined in slightly different ways by several different health organizations. The World Health Organization (WHO) defines maternal death as the death of a pregnant mother due to complications related to pregnancy, underlying conditions worsened by the pregnancy or management of these conditions. This can occur either while she is pregnant or within six weeks of resolution of the pregnancy. The CDC definition of pregnancy-related deaths extends the period of consideration to include one year from the resolution of the pregnancy. Pregnancy associated death, as defined by the American College of Obstetricians and Gynecologists (ACOG), are all deaths occurring within one year of a pregnancy resolution. Identification of pregnancy associated deaths is important for deciding whether or not the pregnancy was a direct or indirect contributing cause of the death.
There are two main measures used when discussing rates of maternal mortality in a community or country: the maternal mortality ratio and the maternal mortality rate, both abbreviated as "MMR". By 2017, the world maternal mortality rate had declined 44% since 1990; however, every day 808 women die from pregnancy- or childbirth-related causes. According to the United Nations Population Fund (UNFPA) 2017 report, about every 2 minutes a woman dies because of complications of childbirth or pregnancy. For every woman who dies, about 20 to 30 women experience injury, infection, or another birth- or pregnancy-related complication.
UNFPA estimated that 303,000 women died of pregnancy- or childbirth-related causes in 2015. The WHO divides causes of maternal death into two categories: direct obstetric deaths and indirect obstetric deaths. Direct obstetric deaths are deaths due to complications of pregnancy, birth or termination; these range from severe bleeding to obstructed labor, for which there are highly effective interventions. Indirect obstetric deaths are caused by pregnancy interfering with or worsening an existing condition, such as a heart problem.
As women have gained access to family planning and skilled birth attendants with backup emergency obstetric care, the global maternal mortality ratio has fallen from 385 maternal deaths per 100,000 live births in 1990 to 216 deaths per 100,000 live births in 2015. Many countries halved their maternal death rates in the last 10 years. Although attempts have been made to reduce maternal mortality, there is much room for improvement, particularly in low-resource regions. Over 85% of maternal deaths occur in low-resource communities in Africa and Asia. In higher-resource regions, there are still significant areas with room for improvement, particularly regarding racial and ethnic disparities and inequities in maternal mortality and morbidity rates.
Overall, maternal mortality is an important marker of a country's health and reflects on its health infrastructure. Lowering the number of maternal deaths is an important goal of many health organizations worldwide.
Causes
Direct obstetric deaths
Overview
Direct obstetric deaths are due to complications of pregnancy, birth, termination or complications arising from their management.
The causes of maternal death vary by region and level of access. According to a study published in the Lancet which covered the period from 1990 to 2013, the most common causes of maternal death world-wide are postpartum bleeding (15%), complications from unsafe abortion (15%), hypertensive disorders of pregnancy (10%), postpartum infections (8%), and obstructed labor (6%). Other causes include blood clots (3%) and pre-existing conditions (28%).
Descriptions by condition
Postpartum bleeding happens when there is uncontrollable bleeding from the uterus, cervix or vaginal wall after birth. This can happen when the uterus does not contract correctly after birth, there is left over placenta in the uterus, or there are cuts in the cervix or vagina from birth.
Hypertensive disorders of pregnancy happen when the body does not regulate blood pressure correctly. In pregnancy, this is due to changes at the level of the blood vessels, likely because of the placenta. This includes medical conditions like gestational hypertension and pre-eclampsia.
Postpartum infections are infections of the uterus or other parts of the reproductive tract after the resolution of a pregnancy. They are usually bacterial and cause fever, increased pain, and foul-smelling discharge.
Obstructed labor happens when the baby does not properly move into the pelvis and out of the body during labor. The most common cause of obstructed labor is that the baby's head is too big or angled in a way that does not allow it to pass through the pelvis and birth canal.
Blood clots can occur in different vessels in the body, including vessels in the arms, legs, and lungs. They can cause problems in the lung, as well as travel to the heart or brain, leading to complications.
Unsafe abortion
When abortion is legal and accessible, it is widely regarded as safer than carrying a pregnancy to term and delivering. In fact, a study published in the journal Obstetrics & Gynecology reported that in the United States, carrying a pregnancy to term and delivering a baby carries a roughly 14-times-higher risk of death than a legal abortion. However, in many regions of the world, abortion is not legal and can be unsafe. Maternal deaths caused by improperly performed procedures are preventable and contribute 13% to the maternal mortality rate worldwide. This figure rises to 25% in countries where other causes of maternal mortality are low, such as in Eastern European and South American countries. This makes unsafe abortion practices the leading cause of maternal death worldwide.
Unsafe abortion is another major cause of maternal death worldwide. In regions where abortion is legal and accessible, abortion is safe and does not contribute greatly to overall rates of maternal death. However, in regions where abortions are not legal, available, or regulated, unsafe abortion practices can cause significant rates of maternal death. According to the World Health Organization in 2009, every eight minutes a woman died from complications arising from unsafe abortions.
Unsafe abortion practices are defined by the WHO as procedures that are performed by someone without the appropriate training and/or ones that are performed in an environment that is not considered safe or clean. Using this definition, the WHO estimates that out of the 45 million abortions that are performed each year globally, 19 million of these are considered unsafe, and 97% of these unsafe abortions occur in developing countries. Complications include hemorrhage, infection, sepsis and genital trauma.
Rates
There are four primary types of data sources that are used to collect abortion-related maternal mortality rates: confidential enquiries, registration data, verbal autopsy, and facility-based data sources. A verbal autopsy is a systematic tool that is used to collect information on the cause of death from laypeople and not medical professionals.
Confidential enquiries into maternal deaths do not occur very often on a national level in most countries. Registration systems are usually considered the "gold-standard" method for mortality measurements; however, they have been shown to miss anywhere between 30 and 50% of all maternal deaths. Another concern for registration systems is that 75% of all global births occur in countries where vital registration systems do not exist, meaning that many maternal deaths occurring during these pregnancies and deliveries may not be properly recorded through these methods. There are also issues with using verbal autopsies and other forms of survey to record maternal death rates: the family's willingness to participate after the loss of a loved one, misclassification of the cause of death, and under-reporting all present obstacles to the proper reporting of maternal mortality causes. Finally, a potential issue with facility-based data collection on maternal mortality is that women who experience abortion-related complications may not seek care in medical facilities, owing to fear of social repercussions or legal action in countries where unsafe abortion is common, since abortion in such countries is more likely to be legally restricted and/or highly stigmatized. A further reporting concern is that global estimates of maternal deaths present those related to abortion as a proportion of the total mortality rate; any change, whether positive or negative, in the abortion-related mortality rate is therefore only measured relative to other causes, which does not allow proper conclusions about whether abortions are becoming more or less safe with respect to the overall mortality of women.
Prevention
The prevention and reduction of maternal death is one of the United Nations' Sustainable Development Goals, specifically Goal 3, "Good health and well-being". Promoting effective contraceptive use, distributing information to a wider population, and providing access to high-quality care can all help reduce the number of unsafe abortions. For nations that allow contraceptives, programs should be instituted to make these medications more easily accessible. However, this alone will not eliminate the demand for safe services; awareness of safe abortion services, health education on prenatal check-ups and proper diet during pregnancy and lactation also contribute to prevention.
Indirect obstetric deaths
Indirect obstetric deaths are caused by a preexisting health problem worsened by pregnancy or by a newly developed health problem unrelated to pregnancy. Fatalities during but unrelated to a pregnancy are termed accidental, incidental, or non-obstetrical maternal deaths.
Indirect causes include malaria, anemia, HIV/AIDS, and cardiovascular disease, all of which may complicate pregnancy or be aggravated by it. Risk factors associated with increased maternal death include the age of the mother, obesity before becoming pregnant, other pre-existing chronic medical conditions, and cesarean delivery.
Risk factors
According to a 2004 WHO publication, sociodemographic factors such as age, access to resources and income level are significant indicators of maternal outcomes. Young mothers face higher risks of complications and death during pregnancy than older mothers, especially adolescents aged 15 years or younger. Adolescents have higher risks for postpartum hemorrhage, endometritis, operative vaginal delivery, episiotomy, low birth weight, preterm delivery, and small-for-gestational-age infants, all of which can lead to maternal death. The leading cause of death for girls at the age of 15 in developing countries is complications of pregnancy and childbirth. They have more pregnancies, on average, than women in developed countries, and it has been shown that 1 in 180 15-year-old girls in developing countries who become pregnant will die due to complications during pregnancy or childbirth. This is compared to women in developed countries, where the likelihood is 1 in 4900 live births. However, in the United States, as many women of older age continue to have children, the maternal mortality rate has risen in some states, especially among women over 40 years old.
Structural support and family support influence maternal outcomes. Furthermore, social disadvantage and social isolation adversely affect maternal health, which can lead to increases in maternal death. Additionally, lack of access to skilled medical care during childbirth, the travel distance to the nearest clinic, number of prior births, barriers to accessing prenatal medical care and poor infrastructure all increase maternal deaths.
Causes of maternal death in the US
Pregnancy-related deaths between 2011 and 2014 in the United States have been shown to have major contributions from non-communicable diseases and conditions, and the following are some of the more common causes related to maternal death: cardiovascular diseases (15.2%), non-cardiovascular diseases (14.7%), infection or sepsis (12.8%), hemorrhage (11.5%), cardiomyopathy (10.3%), pulmonary embolism (9.1%), cerebrovascular accidents (7.4%), hypertensive disorders of pregnancy (6.8%), amniotic fluid embolism (5.5%), and anesthesia complications (0.3%).
Three delays model
The three delays model describes three critical factors that inhibit women from receiving appropriate maternal health care. These factors include:
Delay in seeking care
Delay in reaching care
Delay in receiving adequate and appropriate care
Delays in seeking care are due to the decisions made by the women who are pregnant and/or other decision-making individuals. Decision-making individuals can include a spouse and family members. Examples of reasons for delays in seeking care include lack of knowledge about when to seek care, inability to afford health care, and women needing permission from family members.
Delays in reaching care include factors such as limitations in transportation to a medical facility, lack of adequate medical facilities in the area, and lack of confidence in medicine.
Delays in receiving adequate and appropriate care may result from an inadequate number of trained providers, lack of appropriate supplies, and the lack of urgency or understanding of an emergency.
The three delays model illustrates that there are a multitude of complex factors, both socioeconomic and cultural, that can result in maternal death.
Measurement
The four measures of maternal death are the maternal mortality ratio (MMR), maternal mortality rate, lifetime risk of maternal death and proportion of maternal deaths among deaths of women of reproductive age (PM).
Maternal mortality ratio (MMR) is the ratio of the number of maternal deaths during a given time period per 100,000 live births during the same time-period. The MMR is used as a measure of the quality of a health care system.
Maternal mortality rate (MMRate) is the number of maternal deaths in a population divided by the number of women of reproductive age, usually expressed per 1,000 women.
Lifetime risk of maternal death is a calculated prediction of a woman's risk of death after each consecutive pregnancy. The calculation pertains to women during their reproductive years. The adult lifetime risk of maternal mortality can be derived using either the maternal mortality ratio (MMR), or the maternal mortality rate (MMRate).
Proportion of maternal deaths among deaths of women of reproductive age (PM) is the number of maternal deaths in a given time period divided by the total deaths among women aged 15–49 years.
Approaches to measuring maternal mortality include civil registration system, household surveys, census, reproductive age mortality studies (RAMOS) and verbal autopsies. The most common household survey method, recommended by the WHO as time- and cost-effective, is the sisterhood method.
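To make the arithmetic behind these measures concrete, the short Python sketch below computes the MMR, MMRate and PM from purely hypothetical counts; the figures are illustrative only and do not describe any real population.

# Illustrative only: hypothetical one-year counts for a fictional population.
maternal_deaths = 120          # maternal deaths in the period
live_births = 95_000           # live births in the same period
women_15_49 = 1_200_000        # women of reproductive age (15-49)
deaths_women_15_49 = 1_500     # all deaths among women aged 15-49 in the period

# Maternal mortality ratio (MMR): maternal deaths per 100,000 live births.
mmr = maternal_deaths / live_births * 100_000

# Maternal mortality rate (MMRate): maternal deaths per 1,000 women of reproductive age.
mmrate = maternal_deaths / women_15_49 * 1_000

# Proportion of maternal deaths among deaths of women of reproductive age (PM).
pm = maternal_deaths / deaths_women_15_49

print(f"MMR    = {mmr:.1f} per 100,000 live births")        # about 126.3
print(f"MMRate = {mmrate:.2f} per 1,000 women aged 15-49")   # 0.10
print(f"PM     = {pm:.1%} of female deaths at ages 15-49")   # 8.0%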
Trends
The United Nations Population Fund (UNFPA; formerly known as the United Nations Fund for Population Activities) has established programs that support efforts to reduce maternal death. These efforts include education and training for midwives, supporting access to emergency services in obstetric and newborn care networks, and providing essential drugs and family planning services to women who are pregnant or planning to become pregnant. UNFPA also supports review and response systems regarding maternal deaths.
According to the 2010 United Nations Population Fund report, low-resource nations account for ninety-nine percent of maternal deaths, with the majority of those deaths occurring in Sub-Saharan Africa and Southern Asia. Globally, high- and middle-income countries experience fewer maternal deaths than low-income countries. The Human Development Index (HDI) accounts for between 82 and 85 percent of the variation in maternal mortality rates among countries. In most cases, high rates of maternal death occur in the same countries that have high rates of infant mortality. These trends reflect that higher-income countries have stronger healthcare infrastructure, more doctors, use more advanced medical technologies and have fewer barriers to accessing care than low-income countries. In low-income countries, the most common cause of maternal death is obstetrical hemorrhage, followed by hypertensive disorders of pregnancy. This is in contrast to high-income countries, for which the most common cause is thromboembolism.
Between 1990 and 2015, the maternal mortality ratio decreased from 385 deaths per 100,000 live births to 216 maternal deaths per 100,000 live births. Factors credited with this decrease include the access that women have gained to family planning services and skilled birth attendance (meaning a midwife, doctor, or trained nurse), with back-up obstetric care for emergency situations that may occur during labor. This can be examined further by looking at statistics in areas of the world where inequities in access to health care services are reflected in an increased number of maternal deaths. High maternal death rates also reflect disparate access to health services between low-resource communities and those that are high-resource or affluent.
The disparities in maternal health outcomes are also present among racial groups. In the United States, black women are 3-4 times more likely to die of pregnancy-related causes than white women. Unequal access to quality medical care, socioeconomic disparities, and systemic racism by health care providers are factors that have contributed to the high maternal mortality rates among black women. Accounting for factors such as pre-existing conditions does not eliminate this disparity. In 2019, Black maternal health advocate and Parents writer Christine Michel Carter interviewed Vice President Kamala Harris. As a senator, in 2019 Harris reintroduced the Maternal Care Access and Reducing Emergencies (CARE) Act, which aimed to address the maternal mortality disparity faced by women of color by training providers to recognize implicit racial bias and its impact on care. Harris stated: "We need to speak the uncomfortable truth that women—and especially Black women—are too often not listened to or taken seriously by the health care system, and therefore they are denied the dignity that they deserve. And we need to speak this truth because today, the United States is 1 of only 13 countries in the world where the rate of maternal mortality is worse than it was 25 years ago. That risk is even higher for Black women, who are three to four times more likely than white women to die from pregnancy-related causes. These numbers are simply outrageous." The COVID-19 pandemic heightened maternal mortality rates, disproportionately impacting communities of color. Multiple factors contribute to this widening disparity, notably social factors such as implicit bias, repeated racial discrimination, and limited access to healthcare. All of these issues are further exacerbated for people of color, who face systemic barriers to adequate medical care. Overall, the maternal mortality rate increased from 23.8 deaths per 100,000 live births in 2020 to 32.9 deaths per 100,000 live births in 2021, an apparent spike. For non-Hispanic black women, the rate of maternal deaths per 100,000 live births increased from 44.0 in 2019 to 69.9 in 2021.
Prevention
According to UNFPA, there are four essential elements for the prevention of maternal death: prenatal care, assistance with birth, access to emergency obstetric care and adequate postnatal care. First, it is recommended that expectant mothers receive at least four antenatal visits to check and monitor the health of mother and fetus. Second, skilled birth attendance with emergency backup, such as doctors, nurses and midwives who have the skills to manage normal deliveries and recognize the onset of complications. Third, emergency obstetric care to address the major causes of maternal death, which are hemorrhage, sepsis, unsafe abortion, hypertensive disorders and obstructed labor. Lastly, postnatal care, covering the six weeks following delivery. During this time, bleeding, sepsis and hypertensive disorders can occur, and newborns are extremely vulnerable in the immediate aftermath of birth. Therefore, follow-up visits by a health worker to assess the health of both mother and child in the postnatal period are strongly recommended.
Additionally, reliable access to information, compassionate counseling and quality services for the management of any issues that arise from abortions (whether safe or unsafe) can be beneficial in reducing the number of maternal deaths. In regions where abortion is legal, abortion practices need to be safe in order to effectively reduce the number of maternal deaths related to abortion.
Maternal Death Surveillance and Response is another strategy that has been used to prevent maternal death. This is one of the interventions proposed to reduce maternal mortality where maternal deaths are continuously reviewed to learn the causes and factors that led to the death. The information from the reviews is used to make recommendations for action to prevent future similar deaths. Maternal and perinatal death reviews have been in practice for a long time worldwide, and the World Health Organization (WHO) introduced the Maternal and Perinatal Death Surveillance and Response (MPDSR) with a guideline in 2013. Studies have shown that acting on recommendations from MPDSR can reduce maternal and perinatal mortality by improving quality of care in the community and health facilities.
Prenatal care
It was estimated that in 2015, a total of 303,000 women died due to causes related to pregnancy or childbirth. The majority of these were due to severe bleeding, sepsis or infections, eclampsia, obstructed labor, and consequences from unsafe abortions. Most of these causes are either preventable or have highly effective interventions. An important factor that contributes to the maternal mortality rate is access and opportunity to receive prenatal care. Women who do not receive prenatal care are between three and four times more likely to die from complications resulting from pregnancy or delivery than those who receive prenatal care. Even in high-resource countries, many women do not receive the appropriate preventative or prenatal care. For example, 25% of women in the United States do not receive the recommended number of prenatal visits. This number increases for women among traditionally marginalized populations—32% of African American women and 41% of American Indian and Alaska Native women do not receive the recommended preventative health services prior to delivery.
In 2023, a study reported that the maternal death rate among Native American women was three-and-a-half times that of white women. The report attributed the high rate in part to the fact that Native American women are cared for under a poorly funded federal health care system that is so stretched that the average monthly visit lasts only three to seven minutes. Such a short visit allows neither time for performing an adequate health assessment nor time for the patient to discuss any problems she may be experiencing.
Medical technologies
The decline in maternal deaths has been due largely to improved aseptic techniques, better fluid management and quicker access to blood transfusions, and better prenatal care.
Technologies have been designed for resource poor settings that have been effective in reducing maternal deaths as well. The non-pneumatic anti-shock garment is a low-technology pressure device that decreases blood loss, restores vital signs and helps buy time in delay of women receiving adequate emergency care during obstetric hemorrhage. It has proven to be a valuable resource. Condoms used as uterine tamponades have also been effective in stopping post-partum hemorrhage.
Medications and surgical management
Some maternal deaths can be prevented through medication use. Injectable oxytocin can be used to prevent death due to postpartum bleeding. Additionally, postpartum infections can be treated using antibiotics. In fact, the use of broad-spectrum antibiotics both for the prevention and treatment of maternal infection is common in low-income countries. Maternal death due to eclampsia can also be prevented through the use of medications such as magnesium sulfate.
Many complications can be managed with procedures and/or surgery if there is access to a qualified surgeon and appropriate facilities and supplies. For example, the contents of the uterus can be cleaned if there is concern for remaining pregnancy tissue or infection. If there is concern for excess bleeding, special ties, stitches or tools (such as a Bakri balloon) can be placed.
Public health
A public health approach to addressing maternal mortality includes gathering information on the scope of the problem, identifying key causes, and implementing interventions, both prior to pregnancy and during pregnancy, to combat those causes and prevent maternal mortality.
Public health has a role to play in the analysis of maternal death. One important aspect in the review of maternal death and its causes is Maternal Mortality Review Committees or Boards. The goal of these review committees is to analyze each maternal death and determine its cause. After this analysis, the information can be combined in order to determine specific interventions that could lead to preventing future maternal deaths. These review boards are generally comprehensive in their analysis of maternal deaths, examining details that include mental health factors, public transportation, chronic illnesses, and substance use disorders. All of this information can be combined to give a detailed picture of what is causing maternal mortality and help to determine recommendations to reduce its impact.
Many states within the US are taking Maternal Mortality Review Committees a step further and are collaborating with various professional organizations to improve quality of perinatal care. These teams of organizations form a "perinatal quality collaborative" (PQC) and include state health departments, the state hospital association and clinical professionals such as doctors and nurses. These PQCs can also involve community health organizations, Medicaid representatives, Maternal Mortality Review Committees and patient advocacy groups. By involving all of these major players within maternal health, the goal is to collaborate and determine opportunities to improve quality of care. Through this collaborative effort, PQCs can aim to make impacts on quality both at the direct patient care level and through larger system-level changes such as policy. It is thought that the institution of PQCs in California was the main contributor to the maternal mortality rate decreasing by 50% in the years following. The PQC developed review guides and quality improvement initiatives aimed at the most preventable and prevalent maternal deaths: those due to bleeding and high blood pressure. Success has also been observed with PQCs in Illinois and Florida.
Several interventions prior to pregnancy have been recommended in efforts to reduce maternal mortality. Increasing access to reproductive healthcare services, such as family planning services and safe abortion practices, is recommended in order to prevent unintended pregnancies. Several countries, including India, Brazil, and Mexico, have seen some success in efforts to promote the use of reproductive healthcare services. Other interventions include high quality sex education, which includes pregnancy prevention and sexually transmitted infection (STI) prevention and treatment. By addressing STIs, this not only reduces perinatal infections, but can also help reduce ectopic pregnancy caused by STIs. Adolescent mothers are between two and five times more likely to die than a female twenty years or older. Access to reproductive services and sex education could make a large impact, specifically on adolescents, who are generally uneducated in regards to carrying a healthy pregnancy. Education level is a strong predictor of maternal health as it gives women the knowledge to seek care when it is needed. Public health efforts can also intervene during pregnancy to improve maternal outcomes. Areas for intervention have been identified in access to care, public knowledge, awareness about signs and symptoms of pregnancy complications, and improving relationships between healthcare professionals and expecting mothers.
Access to care during pregnancy is a significant issue in the face of maternal mortality. "Access" encompasses a wide range of potential difficulties including costs, location of healthcare services, availability of appointments, availability of trained health care workers, transportation services, and cultural or language barriers that could inhibit a woman from receiving proper care. For women carrying a pregnancy to term, access to necessary antenatal (prior to delivery) healthcare visits is crucial to ensuring healthy outcomes. These antenatal visits allow for early recognition and treatment of complications, treatment of infections and the opportunity to educate the expecting mother on how to manage her current pregnancy and the health advantages of spacing pregnancies apart.
Access to birth at a facility with a skilled healthcare provider present has been associated with safer deliveries and better outcomes. The two areas bearing the largest burden of maternal mortality, Sub-Saharan Africa and South Asia, also had the lowest percentage of births attended by a skilled provider, at just 45% and 41% respectively. Emergency obstetric care is also crucial in preventing maternal mortality by offering services like emergency cesarean sections, blood transfusions, antibiotics for infections and assisted vaginal delivery with forceps or vacuum. In addition to physical barriers that restrict access to healthcare, financial barriers also exist. Close to one out of seven women of child-bearing age have no health insurance. This lack of insurance impacts access to pregnancy prevention, treatment of complications, as well as perinatal care visits contributing to maternal mortality.
Increasing public knowledge and awareness through health education programs about pregnancy, including the signs of complications that need to be addressed by a healthcare provider, increases the likelihood that an expecting mother will seek help when it is necessary. Higher levels of education have been associated with increased use of contraception and family planning services as well as antenatal care. Addressing complications at the earliest sign of a problem can improve outcomes for expecting mothers, which makes it extremely important for a pregnant woman to be knowledgeable enough to seek healthcare for potential complications. Improving the relationships between patients and the healthcare system as a whole will make it easier for a pregnant woman to feel comfortable seeking help. Good communication between patients and providers, as well as cultural competence of the providers, could also assist in increasing compliance with recommended treatments.
Another important preventive measure being implemented is specialized education for mothers. Doctors and medical professionals providing simple information to women, especially women in lower socioeconomic areas, will decrease the miscommunication that often occurs between doctors and patients. Training health care professionals is another important aspect of decreasing the rate of maternal death: "The study found that white medical students and residents often believed incorrect and sometimes 'fantastical' biological fallacies about racial differences in patients. For these assumptions, researchers blamed not individual prejudice but deeply ingrained unconscious stereotypes about people of color, as well as physicians' difficulty in empathizing with patients whose experiences differ from their own."
Policy
The biggest global policy initiative for maternal health came from the United Nations' Millennium Declaration which created the Millennium Development Goals. In 2012, this evolved at the United Nations Conference on Sustainable Development to become the Sustainable Development Goals (SDGs) with a target year of 2030. The SDGs are 17 goals that call for global collaboration to tackle a wide variety of recognized problems. Goal 3 is focused on ensuring health and well-being for women of all ages. A specific target is to achieve a global maternal mortality ratio of less than 70 per 100,000 live births. So far, specific progress has been made in births attended by a skilled provider, now at 80% of births worldwide compared with 62% in 2005.
Countries and local governments have taken political steps in reducing maternal deaths. Researchers at the Overseas Development Institute studied maternal health systems in four apparently similar countries: Rwanda, Malawi, Niger, and Uganda. In comparison to the other three countries, Rwanda has an excellent record of improving maternal death rates. Based on their investigation of these varying country case studies, the researchers conclude that improving maternal health depends on three key factors:
reviewing all maternal health-related policies frequently to ensure that they are internally coherent;
enforcing standards on providers of maternal health services;
promoting, rather than discouraging, any local solutions to problems that are discovered.
In terms of aid policy, aid given to improve maternal mortality rates has shrunk proportionally as other public health issues, such as HIV/AIDS and malaria, have become major international concerns. Maternal health aid contributions tend to be lumped together with newborn and child health, so it is difficult to assess how much aid is given directly to maternal health to help lower the rates of maternal mortality. Regardless, there has been progress in reducing maternal mortality rates internationally.
In countries where abortion practices are not considered legal, it is necessary to look at the access that women have to high-quality family planning services, since some of the restrictive policies around abortion could impede access to these services. These policies may also affect the proper collection of information for monitoring maternal health around the world.
Epidemiology
Maternal mortality and morbidity are leading contributors to the burden of disease among women. It is estimated that 303,000 women die each year from childbirth and pregnancy worldwide. The global rate in 2017 was 211 maternal deaths per 100,000 live births, and 45% of postpartum deaths occur within 24 hours; in 2020, the global rate was 223 deaths per 100,000 live births. Ninety-nine percent of maternal deaths occur in low-resource countries.
Prevalence by country
India (19% or 56,000) and Nigeria (14% or 40,000) accounted for roughly one third of the maternal deaths in 2010. Democratic Republic of the Congo, Pakistan, Sudan, Indonesia, Ethiopia, United Republic of Tanzania, Bangladesh and Afghanistan accounted for between 3 and 5 percent of maternal deaths each. These ten countries combined accounted for 60% of all the maternal deaths in 2010 according to the United Nations Population Fund report. Countries with the lowest maternal deaths were Greece, Iceland, Poland, and Finland.
In 2017, countries in Southeast Asia and Sub-Saharan Africa account for approximately 86% of all maternal deaths worldwide. As of 2020, Sub-Saharan African countries such as South Sudan, Chad, and Nigeria had the highest maternal deaths per 100,000 live births. Since 2000, Southeast Asian countries have seen a significant decrease in maternal mortality of almost 60%. Sub-Saharan Africa also saw an almost 40% decrease in maternal mortality between 2000 and 2017.
Ethnicity
In the United States, women who are black and non-Hispanic experience pregnancy-related death at a significantly higher rate. They are three to four times as likely to die of maternal causes as non-Hispanic white women. In the United States between 2007 and 2014, women who identify as non-Hispanic and black had a significant increase in pregnancy-related death. Similar patterns exist in other countries. In Brazil, women who are not white were 3.5 times as likely to die of obstetric causes as white women. In France, the maternal mortality ratio is higher among women from Sub-Saharan Africa.
In the United States, according to the Centers for Disease Control and Prevention (CDC), the maternal mortality rate in 2021 was 32.9 deaths per 100,000 live births. This is significantly higher than the rates of 23.8 deaths per 100,000 live births in 2020 and 20.1 in 2019. In 2021, the maternal mortality rate for non-Hispanic Black women was 69.9 deaths per 100,000 live births, which is 2.6 times higher than the rate for non-Hispanic White women. The mortality rate for women over the age of 40 was 6.8 times higher than the rate for women under the age of 25.
COVID-19 effects
Global maternal mortality and fetal outcomes have worsened during the COVID-19 pandemic. Increases in maternal deaths, stillbirths, ruptured ectopic pregnancies, and maternal depression occurred globally during this time. A review published in The Lancet Global Health, covering over 40 studies, identified significant increases in stillbirth and maternal death during the pandemic compared with before it. According to the United Nations Population Fund (UNFPA), a proportion of total COVID-19 deaths were indirect obstetric deaths in which a woman's death was due to the aggravation of her pregnancy by the disease. Some outcomes show considerable disparity between low- and high-resource settings. This drives the urgent global need to prioritize safe, equitable, and accessible maternal care in future healthcare crises.
Progression of policy
Significant progress has been made since the United Nations made the reduction of maternal mortality part of the Millennium Development Goals (MDGs) in 2000. Bangladesh, for example, cut the number of deaths per live births by almost two-thirds from 1990 to 2015. A further reduction of maternal mortality is now part of the Agenda 2030 for sustainable development. The United Nations recently developed a list of goals termed the Sustainable Development Goals. Some of the specific aims of the Sustainable Development Goals are to prevent unintended pregnancies by ensuring more women have access to contraceptives, as well as providing women who become pregnant with a safe environment for delivery with respectful and skilled care. This initiative also included access to emergency services for women who developed complications during delivery.
Prevention strategies
The World Health Organization (WHO) has developed a global goal to end preventable maternal death. A major goal of this strategy is to identify and address the causes of maternal and reproductive morbidity and mortality. This strategy aims to address inequalities in access to reproductive, maternal, and newborn services, as well as the quality of care, with universal health coverage. Maternal mortality is difficult to measure: health information systems, such as CRVS (civil registration and vital statistics), are weak in most low-income countries and therefore cannot provide accurate assessments of maternal mortality. Even estimates derived from complete systems such as CRVS suffer from misclassification and underreporting of maternal deaths. The WHO strategy also aims to ensure quality data collection in order to better respond to the needs of women and girls while improving the equity and quality of care provided to women.
Variation within countries
There are significant maternal mortality intra-country variations, especially in nations with large equality gaps in income and education and high healthcare disparities. Women living in rural areas experience higher maternal mortality than women living in urban and sub-urban centers because those living in wealthier households, having higher education, or living in urban areas, have higher use of healthcare services than their poorer, less-educated, or rural counterparts. There are also racial and ethnic disparities in maternal health outcomes which increases maternal mortality in marginalized groups.
Maternal mortality ratio by country
The maternal mortality ratio (MMR) is the annual number of female deaths per 100,000 live births from any cause related to or aggravated by pregnancy or its management (excluding accidental or incidental causes).
In the year 2017, 810 women died from preventable causes related to pregnancy and birth per day which totaled to approximately 295,000 maternal deaths that year alone. It was also estimated that 94% of maternal deaths occurred in low-resource countries in the same year.
In a retrospective study done across several countries in 2007, the cause of death and causal relationship to the mode of delivery in pregnant women was examined for the years 2000 to 2006. It was discovered that the excess maternal death rate of women who experienced a pulmonary embolism was causally related to undergoing a cesarean delivery. There was also an association between neuraxial anesthesia, more commonly known as an epidural, and an increased risk of epidural hematoma. Both of these risks could be reduced by the institution of graduated compression, whether by compression stockings or a compression device. There is also speculation that eliminating elective cesarean sections in the United States would significantly lower the maternal death rate.
Related terms
Severe maternal morbidity
Severe maternal morbidity (SMM) is an unanticipated acute or chronic outcome of labor and delivery that detrimentally affects a woman's health, causing short- and long-term consequences for her overall health. There are nineteen total indicators used by the CDC to help identify SMM, with the most prevalent indicator being a blood transfusion. Other indicators include an acute myocardial infarction ("heart attack"), aneurysm, and kidney failure. All of this identification is done by using ICD-10 codes, which are disease identification codes found in hospital discharge data. Definitions that rely on these codes should be used with careful consideration, since they may miss some cases, have low predictive value, or be difficult for different facilities to operationalize. There are certain screening criteria that may be helpful and are recommended by the American College of Obstetricians and Gynecologists as well as the Society for Maternal-Fetal Medicine (SMFM). These screening criteria for SMM are transfusion of four or more units of blood and admission of a pregnant or postpartum woman to an ICU facility or unit.
The greatest proportion of women with SMM are those who require a blood transfusion during delivery, mostly due to excessive bleeding. Blood transfusions given during delivery because of excessive bleeding have driven much of the increase in the rate of SMM. The rate of SMM increased almost 200% between 1993 (49.5 per 100,000 live births) and 2014 (144.0 per 100,000 live births). This can be seen in the rate of blood transfusions given during delivery, which increased from 24.5 per 100,000 live births in 1993 to 122.3 per 100,000 live births in 2014.
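For reference, the quoted percentage follows directly from those figures: (144.0 − 49.5) / 49.5 ≈ 1.9, that is, an increase of roughly 190%, consistent with the "almost 200%" statement.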
In the United States, severe maternal morbidity has increased over the last several years, impacting greater than 50,000 women in 2014 alone. There is no conclusive reason for this dramatic increase. It is thought that the overall state of health for pregnant women is impacting these rates. For example, complications can derive from underlying chronic medical conditions like diabetes, obesity, HIV/AIDS, and high blood pressure. These underlying conditions are also thought to lead to increased risk of maternal mortality.
The increased rate for SMM can also be indicative of potentially increased rates for maternal mortality, since without identification and treatment of SMM, these conditions would lead to increased maternal death rates. Therefore, diagnosis of SMM can be considered a "near miss" for maternal mortality. With this consideration, several different expert groups have urged obstetric hospitals to review SMM cases for opportunities that can lead to improved care, which in turn would lead to improvements with maternal health and a decrease in the number of maternal deaths.
See also
Child health
Confidential Enquiry into Maternal Deaths in the UK
Infant mortality
List of women who died in childbirth
Maternal mortality in fiction
Maternal near miss
Obstetric transition
Perinatal mortality
Black maternal mortality in the United States
References
Bibliography
External links
The World Health Report 2005 – Make Every Mother and Child Count
Medical aspects of death
Pathology of pregnancy, childbirth and the puerperium
Medical terminology
Demography
Midwifery | Maternal death | [
"Environmental_science"
] | 8,626 | [
"Demography",
"Environmental social science"
] |
354,077 | https://en.wikipedia.org/wiki/Suspended%20animation | Suspended animation is the temporary (short- or long-term) slowing or stopping of biological function so that physiological capabilities are preserved. States of suspended animation are common in micro-organisms and some plant tissue, such as seeds. Many animals, including large ones, may undergo hibernation, and most plants have periods of dormancy. This article focuses primarily on the potential of large animals, especially humans, to undergo suspended animation.
In animals, suspended animation may be either hypometabolic or ametabolic in nature. It may be induced by either endogenous, natural or artificial biological, chemical or physical means. In its natural form, it may be spontaneously reversible as in the case of species demonstrating hypometabolic states of hibernation. When applied with therapeutic intent, as in deep hypothermic circulatory arrest (DHCA), usually technologically mediated revival is required.
Basic principles
Suspended animation is understood as the pausing of life processes by external or internal means without terminating life itself. Breathing, heartbeat and other involuntary functions may still occur, but they can only be detected by artificial means. For this reason, this procedure has been associated with a lethargic state in nature when animals or plants appear, over a period, to be dead but then can wake up or prevail without suffering any harm. This has been termed in different contexts hibernation, dormancy or anabiosis (the latter in some aquatic invertebrates and plants in scarcity conditions).
In July 2020, marine biologists reported that aerobic microorganisms (mainly), in "quasi-suspended animation", were found in organically-poor sediments, up to 101.5 million years old, below the sea floor in the South Pacific Gyre (SPG) ("the deadest spot in the ocean"), and could be the longest-living life forms ever found.
Delayed resuscitation in humans
This condition of apparent death or interruption of vital signs in humans may be similar to a medical interpretation of suspended animation. It is only possible to recover signs of life if the brain and other vital organs suffer no cell deterioration, necrosis or molecular death principally caused by oxygen deprivation or excess temperature (especially high temperature).
Reported cases of individuals returning from this apparent interruption of life after half an hour, two hours, eight hours, or more (while the specific conditions for oxygen and temperature were maintained) have been analysed in depth, but such cases are considered rare and unusual phenomena. The brain begins to die after five minutes without oxygen; nervous tissues die at an intermediate stage when "somatic death" occurs, while muscles die over the one to two hours following it.
It has been possible to obtain successful resuscitation and recovery after apparent suspended animation in such instances as anaesthesia, heat stroke, electrocution, narcotic poisoning, heart attack or cardiac arrest, shock, cerebral concussion and cholera, as well as in newborn infants.
Supposedly, in suspended animation, a person technically would not die, as long as he or she were able to preserve the minimum conditions in an environment extremely close to death and return to a normal living state. An example of such a case is Anna Bågenholm, a Swedish radiologist who allegedly survived 80 minutes under ice in a frozen lake in a state of cardiac arrest with no brain damage in 1999.
Other cases of hypothermia where people survived without damage are:
John Smith, a 14-year-old boy who survived 15 minutes under ice in a frozen lake before paramedics arrived to pull him onto dry land and save him.
Mitsutaka Uchikoshi, a Japanese man, was reported by media to have survived the cold for 24 days in 2006 without food or water when he purportedly fell into a state similar to hibernation. This was doubted by some medical experts, who claimed that surviving such a prolonged period without fluids was physiologically impossible.
Paulie Hynek, who, at age two, survived several hours of hypothermia-induced cardiac arrest and whose body temperature reached .
Erika Nordby, a toddler who in 2001 was revived after two hours without apparent heartbeat with a body temperature of about .
Human hibernation
It has been suggested that bone lesions provide evidence of hibernation among the early human population whose remains have been retrieved at the Archaeological site of Atapuerca. In a paper published in the journal L'Anthropologie, researchers Juan-Luis Arsuaga and Antonis Bartsiokas point out that "primitive mammals and primates" like bush babies and lorises hibernate, which suggests that "the genetic basis and physiology for such a hypometabolism could be preserved in many mammalian species, including humans".
Since the 1970s, induced hypothermia has been performed for some open-heart surgeries as an alternative to heart-lung machines. Hypothermia, however, provides only a limited amount of time in which to operate and there is a risk of tissue and brain damage for prolonged periods.
There are many research projects currently investigating how to achieve "induced hibernation" in humans. This ability to hibernate humans would be useful for a number of reasons, such as saving the lives of seriously ill or injured people by temporarily putting them in a state of hibernation until treatment can be given.
The primary focus of research for human hibernation is to reach a state of torpor, defined as a gradual physiological inhibition to reduce oxygen demand and obtain energy conservation by hypometabolic behaviors altering biochemical processes. In previous studies, it was demonstrated that physiological and biochemical events could inhibit endogenous thermoregulation before the onset of hypothermia in a challenging process known as "estivation". This is indispensable to survive harsh environmental conditions, as seen in some amphibians and reptiles.
Scientific possibilities
Temperature-induced
Lowering the temperature of a substance reduces its chemical activity by the Arrhenius equation. This includes life processes such as metabolism. Cryonics could eventually provide long-term suspended animation.
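As a rough illustration of the principle, the Arrhenius equation gives a rate constant k = A·exp(-Ea/(R·T)), where A is a pre-exponential factor, Ea the activation energy, R the gas constant and T the absolute temperature; because T sits in the exponent, even modest cooling suppresses reaction rates sharply. The short Python sketch below compares normal body temperature (310 K) with deep hypothermia (288 K) using an assumed, purely illustrative activation energy of 50 kJ/mol:

import math

R = 8.314        # gas constant, J/(mol*K)
Ea = 50_000.0    # assumed activation energy, J/mol (illustrative value only)

def arrhenius_rate(temperature_kelvin, prefactor=1.0):
    # Relative Arrhenius rate constant k = A * exp(-Ea / (R * T)).
    return prefactor * math.exp(-Ea / (R * temperature_kelvin))

ratio = arrhenius_rate(310.0) / arrhenius_rate(288.0)  # 37 C versus 15 C
print(f"Reactions run roughly {ratio:.1f}x faster at 37 C than at 15 C")
# With this assumed Ea the ratio is about 4.4, i.e. cooling by ~22 C
# slows such a reaction to roughly a quarter of its normal speed.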
Emergency Preservation and Resuscitation
Emergency Preservation and Resuscitation (EPR) is a way to slow the bodily processes that would lead to death in cases of severe injury. This involves lowering the body's temperature below , which is the current standard for therapeutic hypothermia.
Hypothermic experiments on animals
In June 2005, scientists at the University of Pittsburgh's Safar Center for Resuscitation Research announced they had managed to place dogs in suspended animation and bring them back to life, most of them without brain damage, by draining the blood out of the dogs' bodies and injecting a low temperature solution into their circulatory systems, which in turn keeps the bodies alive in stasis. After three hours of being clinically dead, the dogs' blood was returned to their circulatory systems, and the animals were revived by delivering an electric shock to their hearts. The heart started pumping the blood around the body, and the dogs were brought back to life.
On 20 January 2006, doctors from the Massachusetts General Hospital in Boston announced they had placed pigs in suspended animation with a similar technique. The pigs were anaesthetized and major blood loss was induced, along with severe injuries simulated via scalpel (e.g. a punctured aorta, as might happen in a car accident or shooting). After the pigs lost about half their blood, the remaining blood was replaced with a chilled saline solution. As the body temperature reached the target level, the damaged blood vessels were repaired and the blood was returned. The method was tested 200 times with a 90% success rate.
Chemically induced
The laboratory of Mark Roth at the Fred Hutchinson Cancer Research Center and institutes such as Suspended Animation, Inc are trying to implement suspended animation as a medical procedure: the therapeutic induction of a complete and temporary systemic ischemia, directed at obtaining a state of tolerance that protects and preserves the entire organism during circulatory collapse, for a limited period of one hour. The purpose is to avoid serious injury, the risk of brain damage, or death until the patient reaches specialized attention.
See also
Brain death
Coma
Cryptobiosis
Immortality
Life extension
Stasis
Suspended animation in fiction
Technological utopianism
References
Cryonics
Senescence | Suspended animation | [
"Chemistry",
"Biology"
] | 1,725 | [
"Senescence",
"Metabolism",
"Cellular processes"
] |
354,102 | https://en.wikipedia.org/wiki/TI-92%20series | The TI-92 series are a line of graphing calculators produced by Texas Instruments. They include: the TI-92 (1995), the TI-92 II (1996), the TI-92 Plus (1998, 1999) and the Voyage 200 (2002). The design of these relatively large calculators includes a QWERTY keyboard. Because of this keyboard, it was given the status of a "computer" rather than "calculator" by American testing facilities and cannot be used on tests such as the SAT or AP Exams while the similar TI-89 can be.
TI-92
The TI-92 was originally released in 1995, and was the first symbolic calculator made by Texas Instruments. It came with a computer algebra system (CAS) based on Derive, geometry based on Cabri II, and was one of the first calculators to offer 3D graphing. The TI-92 was not allowed on most standardized tests due mostly to its QWERTY keyboard. Its larger size was also rather cumbersome compared to other graphing calculators. In response to these concerns, Texas Instruments introduced the TI-89 which is functionally similar to the original TI-92, but featured Flash ROM and 188 KB RAM, and a smaller design without the QWERTY keyboard. The TI-92 was then replaced by the TI-92 Plus, which was essentially a TI-89 with the larger QWERTY keyboard design of the TI-92. Eventually, TI released the Voyage 200, which is a smaller, lighter version of the TI-92 Plus with more Flash ROM.
The TI-92 is no longer sold through TI or its dealers, and is very hard to come by in stores.
TI-92 II
The TI-92 II was released in 1996, and was the first successor to the TI-92.
The TI-92 II was available both as a stand-alone product, and as a user-installable II module which could be added to original TI-92 units to gain most of the feature improvements. The TI-92 II module was introduced early in 1996 and added the choice of 5 user languages (English, French, German, Italian and Spanish) and an additional 128k User memory. Along with the TI-92, the TI-92 II was replaced by the TI-92 Plus in 1999, which offered even more Flash ROM and RAM.
TI-92 Plus
The TI-92 Plus (or TI-92+) was released in 1998, slightly after the creation of the almost-identical (in terms of software) TI-89, while physically looking exactly like its predecessor, the TI-92 (which lacked flash memory). Besides increased memory over its predecessor, the TI-92 Plus also featured a sharper "black" screen, which had first appeared on the TI-89 and which eases viewing.
The TI-92 Plus was available both as a stand-alone product, and as a user-installable Plus module which could be added to original TI-92 and TI-92 II units to gain most of the feature improvements, most notably Flash Memory. A stand-alone TI-92 Plus calculator was functionally similar to the HW2 TI-89, while a module-upgraded TI-92 was functionally similar to the HW1 TI-89. Both versions could run the same releases of operating system software.
As of 2002, the TI-92 Plus was succeeded by the Voyage 200 and is no longer sold through TI or its dealers.
The TI-92 Plus is now available in an online emulator, featuring a list of frequently used commands.
Voyage 200
Voyage 200 (also V200 and Voyage 200 PLT) was released in 2002, being the replacement for the TI-92 Plus, with its only hardware upgrade over that calculator being an increase in the amount of flash memory available (2.7 megabytes for the Voyage 200 vs. 702 kilobytes for the TI-92 Plus). It also features a somewhat smaller and more rounded case design.
Like its predecessor, the Voyage 200 is an advanced calculator that supports plotting multiple functions on the same graph, parametric, polar, 3D, and differential equation graphing, as well as sequence representations. Its symbolic calculation system is based on a trimmed version of the calculation software Derive. In addition to its algebra and calculus capabilities, the Voyage 200 is packaged with list, spreadsheet, and data processing applications and can perform curve fitting to a number of standard functions and other statistical analysis operations. The calculator can also run most programs written for the TI-89 and TI-92 as well as programs specifically written for it. A large number of applications, ranging from games to interactive periodic tables, can be found online.
The V200 is easily mistaken for a PDA or a small computer because of its large enclosure and its full QWERTY keyboard — a feature which disqualifies the calculator for use in many tests and examinations, including the American ACT and SAT. The TI-89 Titanium offers exactly the same functionality in a smaller format that is also legal on the SAT test, but not the ACT test.
Features
Technical specifications
See also
Comparison of Texas Instruments graphing calculators
References
External links
Official documentation: features of the Voyage 200.
Graphing calculators
Texas Instruments programmable calculators
Computer algebra systems
68k-based mobile devices
Products introduced in 1995
Products introduced in 1998
Products introduced in 2002 | TI-92 series | [
"Mathematics"
] | 1,116 | [
"Computer algebra systems",
"Mathematical software"
] |
354,193 | https://en.wikipedia.org/wiki/Graphical%20Kernel%20System | The Graphical Kernel System (GKS) was the first ISO standard for low-level computer graphics, introduced in 1977. A draft international standard was circulated for review in September 1983.
Final ratification of the standard was achieved in 1985.
Overview
GKS provides a set of drawing features for two-dimensional vector graphics suitable for charting and similar duties. The calls are designed to be portable across different programming languages, graphics devices and hardware, so that applications written to use GKS will be readily portable to many platforms and devices.
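As a rough illustration of that programming model, the sketch below mirrors the canonical GKS session sequence: open GKS, open and activate a workstation, emit device-independent output primitives (such as a polyline in world coordinates), then deactivate and close everything. The call names are hypothetical stand-ins for a real language binding, not identifiers from the ISO documents; the stub simply logs what a binding would do, so the sketch runs as-is.

# A schematic, runnable sketch of the classic GKS call sequence.
# The call names are hypothetical placeholders for a real binding (for
# example the Fortran or C bindings of ISO 7942 / ISO 8651, or a package
# such as the GR framework mentioned below).

def _gks_call(name, *args):
    # Stand-in for a real GKS binding: record the call that would be made.
    print(name, *args)

ws = 1                                    # workstation identifier (screen, plotter, metafile, ...)
_gks_call("OPEN GKS")
_gks_call("OPEN WORKSTATION", ws)
_gks_call("ACTIVATE WORKSTATION", ws)

xs = [0.1, 0.9, 0.5, 0.1]                 # world coordinates, independent of the output device
ys = [0.1, 0.1, 0.9, 0.1]
_gks_call("POLYLINE", list(zip(xs, ys)))  # draw a closed triangle as a polyline

_gks_call("DEACTIVATE WORKSTATION", ws)
_gks_call("CLOSE WORKSTATION", ws)
_gks_call("CLOSE GKS")

Because the primitives are expressed in device-independent world coordinates and routed through workstations, the same call sequence can target a screen, a plotter or a metafile, which is the portability the standard aims for.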
GKS was fairly common on computer workstations in the 1980s and early 1990s.
GKS formed the basis of Digital Research's GSX which evolved into VDI, one of the core components of GEM. GEM was the native GUI on the Atari ST and was occasionally seen on PCs, particularly in conjunction with Ventura Publisher. GKS was little used commercially outside these markets, but remains in use in some scientific visualization packages. It is also the underlying API defining the Computer Graphics Metafile. A descendant of GKS was PHIGS. One popular application based on an implementation of GKS is the GR Framework, a C library for high-performance scientific visualization that has become a common plotting backend among Julia users.
A main developer and promoter of the GKS was José Luis Encarnação, formerly director of the Fraunhofer Institute for Computer Graphics (IGD) in Darmstadt, Germany.
GKS has been standardized in the following documents:
ANSI standard ANSI X3.124 of 1985.
ISO 7942:1985 standard, revised as ISO 7942:1985/Amd 1:1991 and ISO/IEC 7942-1:1994, as well as ISO/IEC 7942-2:1997, ISO/IEC 7942-3:1999 and ISO/IEC 7942-4:1998
The language bindings are ISO standard ISO 8651.
GKS-3D (Graphical Kernel System for Three Dimensions) functional definition is ISO standard ISO 8805, and the corresponding C bindings are ISO/IEC 8806.
The functionality of GKS is wrapped up as a data model standard in the STEP standard, section ISO 10303-46.
See also
General Graphics Interface
GSS-KERNEL
IGES (Initial Graphics Exchange Specification)
NAPLPS
References
Further reading
External links
Unofficial source of current implementation information
GKS at FOLDOC
Computer graphics
Application programming interfaces
Graphics standards
Graphical Kernel System | Graphical Kernel System | [
"Technology"
] | 493 | [
"Computer standards",
"Graphics standards"
] |
354,285 | https://en.wikipedia.org/wiki/List%20of%20rules%20of%20inference | This is a list of rules of inference, logical laws that relate to mathematical formulae.
Introduction
Rules of inference are syntactical transform rules which one can use to infer a conclusion from a premise to create an argument. A set of rules can be used to infer any valid conclusion if it is complete, and it never infers an invalid conclusion if it is sound. A sound and complete set of rules need not include every rule in the following list, as many of the rules are redundant and can be proven from the other rules.
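In symbols, writing Γ ⊢ φ for "φ is derivable from Γ using the rules" and Γ ⊨ φ for semantic entailment (a conventional notation; the letters themselves are an arbitrary choice), the two properties read:

% Soundness and completeness of a set of inference rules.
\begin{align*}
\text{Soundness:}\quad    & \text{if } \Gamma \vdash \varphi \text{ then } \Gamma \models \varphi\\
\text{Completeness:}\quad & \text{if } \Gamma \models \varphi \text{ then } \Gamma \vdash \varphi
\end{align*}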
Discharge rules permit inference from a subderivation based on a temporary assumption. Below, the notation
indicates such a subderivation from the temporary assumption to .
Rules for propositional calculus
Rules for negations
Reductio ad absurdum (or Negation Introduction)
Reductio ad absurdum (related to the law of excluded middle)
Ex contradictione quodlibet
Rules for conditionals
Deduction theorem (or Conditional Introduction)
Modus ponens (or Conditional Elimination)
Modus tollens
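In one standard notation, with φ and ψ ranging over arbitrary formulas and Γ over a set of assumptions (the symbols are a conventional choice, not this article's own), the three conditional rules above can be stated as:

% Conventional statements of the conditional rules.
\begin{align*}
\text{Deduction theorem (Conditional Introduction):}\quad &
  \text{if } \Gamma, \varphi \vdash \psi \text{ then } \Gamma \vdash \varphi \to \psi\\
\text{Modus ponens (Conditional Elimination):}\quad &
  \varphi \to \psi,\ \varphi \ \vdash\ \psi\\
\text{Modus tollens:}\quad &
  \varphi \to \psi,\ \neg\psi \ \vdash\ \neg\varphi
\end{align*}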
Rules for conjunctions
Adjunction (or Conjunction Introduction)
Simplification (or Conjunction Elimination)
Rules for disjunctions
Addition (or Disjunction Introduction)
Case analysis (or Proof by Cases or Argument by Cases or Disjunction elimination)
Disjunctive syllogism
Constructive dilemma
Rules for biconditionals
Biconditional introduction
Biconditional elimination
Rules of classical predicate calculus
In the following rules, is exactly like except for having the term wherever has the free variable .
Universal Generalization (or Universal Introduction)
Restriction 1: is a variable which does not occur in .
Restriction 2: is not mentioned in any hypothesis or undischarged assumptions.
Universal Instantiation (or Universal Elimination)
Restriction: No free occurrence of in falls within the scope of a quantifier quantifying a variable occurring in .
Existential Generalization (or Existential Introduction)
Restriction: No free occurrence of in falls within the scope of a quantifier quantifying a variable occurring in .
Existential Instantiation (or Existential Elimination)
Restriction 1: is a variable which does not occur in .
Restriction 2: There is no occurrence, free or bound, of in .
Restriction 3: is not mentioned in any hypothesis or undischarged assumptions.
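One common way of writing these four quantifier rules, with φ[t/x] denoting the result of substituting the term t for every free occurrence of the variable x in φ (the exact notation and the precise phrasing of the side conditions vary between textbooks), is:

% One conventional formulation of the classical quantifier rules.
\begin{align*}
\text{Universal Generalization:}\quad & \varphi[y/x] \vdash \forall x\, \varphi
  && \text{($y$ occurs neither in $\varphi$ nor in any undischarged assumption)}\\
\text{Universal Instantiation:}\quad & \forall x\, \varphi \vdash \varphi[t/x]
  && \text{($t$ is free for $x$ in $\varphi$)}\\
\text{Existential Generalization:}\quad & \varphi[t/x] \vdash \exists x\, \varphi
  && \text{($t$ is free for $x$ in $\varphi$)}\\
\text{Existential Instantiation:}\quad & \exists x\, \varphi \vdash \varphi[y/x]
  && \text{($y$ is fresh: it occurs neither in $\varphi$, nor in the conclusion, nor in any undischarged assumption)}
\end{align*}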
Rules of substructural logic
The following are special cases of universal generalization and existential elimination; these occur in substructural logics, such as linear logic.
Rule of weakening (or monotonicity of entailment) (aka no-cloning theorem)
Rule of contraction (or idempotency of entailment) (aka no-deleting theorem)
Table: Rules of Inference
The rules above can be summed up in the following table. The "Tautology" column shows how to interpret the notation of a given rule.
All rules use the basic logic operators. A complete table of "logic operators" is shown by a truth table, giving definitions of all the possible (16) truth functions of 2 boolean variables (p, q):
where T = true and F = false, and the columns are the logical operators:
0, false, Contradiction;
1, NOR, Logical NOR (Peirce's arrow);
2, Converse nonimplication;
3, ¬p, Negation;
4, Material nonimplication;
5, ¬q, Negation;
6, XOR, Exclusive disjunction;
7, NAND, Logical NAND (Sheffer stroke);
8, AND, Logical conjunction;
9, XNOR, If and only if, Logical biconditional;
10, q, Projection function;
11, if/then, Material conditional;
12, p, Projection function;
13, then/if, Converse implication;
14, OR, Logical disjunction;
15, true, Tautology.
Each logic operator can be used in an assertion about variables and operations, showing a basic rule of inference. Examples:
The column-14 operator (OR) shows the Addition rule: when p=T (the hypothesis selects the first two lines of the table), we see (at column 14) that p∨q=T.
We can also see that, with the same premise, other conclusions are valid: columns 12, 14 and 15 are T.
The column-8 operator (AND) shows the Simplification rule: when p∧q=T (first line of the table), we see that p=T.
With this premise, we also conclude that q=T, p∨q=T, etc., as shown by columns 9–15.
The column-11 operator (IF/THEN) shows the Modus ponens rule: when p→q=T and p=T, only one line of the truth table (the first) satisfies these two conditions. On this line, q is also true. Therefore, whenever p → q is true and p is true, q must also be true.
Machines and well-trained people use this look-at-the-table approach to do basic inferences, and to check whether other inferences (for the same premises) can be obtained.
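The look-at-the-table checks above are easy to mechanise. The short Python sketch below enumerates the 16 truth functions by column number and verifies Addition, Simplification and Modus ponens; the bit ordering used in the encoding is an assumption chosen to match the numbering listed above (column 12 is p, column 10 is q, column 3 is ¬p).

from itertools import product

def column(k):
    # Truth function number k of two Boolean variables, encoded so that
    # column 12 is p, column 10 is q, column 14 is OR, column 8 is AND
    # and column 11 is IF/THEN, matching the list above.
    return lambda p, q: bool((k >> (2 * int(p) + int(q))) & 1)

OR, AND, IMPLIES = column(14), column(8), column(11)
rows = list(product([False, True], repeat=2))   # the four lines of the truth table

# Addition: whenever p is true, p OR q is true.
assert all(OR(p, q) for p, q in rows if p)
# Simplification: whenever p AND q is true, p is true.
assert all(p for p, q in rows if AND(p, q))
# Modus ponens: on every line where p -> q and p are both true, q is true.
assert all(q for p, q in rows if IMPLIES(p, q) and p)

print("Addition, Simplification and Modus ponens verified by truth table.")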
Example 1
Consider the following assumptions: "If it rains today, then we will not go on a canoe trip today. If we do not go on a canoe trip today, then we will go on a canoe trip tomorrow. Therefore (the mathematical symbol for "therefore" is ∴), if it rains today, we will go on a canoe trip tomorrow".
To make use of the rules of inference in the above table, we let p be the proposition "It rains today", q be "We will not go on a canoe trip today" and r be "We will go on a canoe trip tomorrow". Then this argument is of the form: from p → q and q → r, infer p → r.
Example 2
Consider a more complex set of assumptions: "It is not sunny today and it is colder than yesterday". "We will go swimming only if it is sunny", "If we do not go swimming, then we will have a barbecue", and "If we will have a barbecue, then we will be home by sunset" lead to the conclusion "We will be home by sunset."
Proof by rules of inference: Let p be the proposition "It is sunny today", q the proposition "It is colder than yesterday", r the proposition "We will go swimming", s the proposition "We will have a barbecue", and t the proposition "We will be home by sunset". Then the hypotheses become ¬p ∧ q, r → p, ¬r → s, and s → t. Using our intuition we conjecture that the conclusion might be t. Using the Rules of Inference table we can prove the conjecture easily:
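The same brute-force, truth-table style check confirms the conjecture: every truth assignment that satisfies all four hypotheses also makes t true. A minimal Python sketch, using the letters defined above:

from itertools import product

def implies(a, b):
    return (not a) or b

# Example 2: check that every assignment satisfying the four hypotheses makes t true.
valid = all(
    t
    for p, q, r, s, t in product([False, True], repeat=5)
    if ((not p) and q)         # it is not sunny today and it is colder than yesterday
    and implies(r, p)          # we will go swimming only if it is sunny
    and implies(not r, s)      # if we do not go swimming, then we will have a barbecue
    and implies(s, t)          # if we will have a barbecue, then we will be home by sunset
)
print(valid)  # True, so the argument is valid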
See also
List of logic systems
Modus ponendo tollens
References
Mathematics-related lists
Logic-related lists
de:Schlussregel
he:חוקי היקש | List of rules of inference | [
"Mathematics"
] | 1,407 | [
"Rules of inference",
"Proof theory"
] |
354,286 | https://en.wikipedia.org/wiki/Yamuna | The Yamuna (; ) is the second-largest tributary river of the Ganges by discharge and the longest tributary in India. Originating from the Yamunotri Glacier at a height of about on the southwestern slopes of Bandarpunch peaks of the Lower Himalaya in Uttarakhand, it travels and has a drainage system of , 40.2% of the entire Ganges Basin. It merges with the Ganges at Triveni Sangam, Prayagraj, which is a site of the Kumbh Mela, a Hindu festival held every 12 years.
Like the Ganges, the Yamuna is highly venerated in Hinduism and worshipped as the goddess Yamuna. In Hinduism, she is believed to be the daughter of the sun god, Surya, and the sister of Yama, the god of death, and so she is also known as Yami. According to popular Hindu legends, bathing in Yamuna's sacred waters frees one from the torments of death.
The river crosses several states such as Haryana, Uttar Pradesh, Uttarakhand and Delhi. It also meets several tributaries along the way, including Tons, Chambal, its longest tributary which has its own large basin, followed by Sindh, the Betwa, and Ken. From Uttarakhand, the river flows into the state of Himachal Pradesh. After passing Paonta Sahib, Yamuna flows along the boundary of Haryana and Uttar Pradesh and after exiting Haryana it continues to flow till it merges with the river Ganges at Sangam or Prayag in Prayagraj (Uttar Pradesh). It helps create the highly fertile alluvial Ganges-Yamuna Doab region between itself and the Ganges in the Indo-Gangetic plain.
Nearly 57 million people depend on the Yamuna's waters, and the river accounts for more than 70 percent of Delhi's water supply. It has an annual flow of 97 billion cubic metres, and nearly 4 billion cubic metres are consumed every year (of which irrigation constitutes 96%). At the Hathni Kund Barrage, its waters are diverted into two large canals: the Western Yamuna Canal flowing towards Haryana, and the Eastern Yamuna Canal flowing towards Uttar Pradesh. Beyond that point the Yamuna is joined by the Somb, a seasonal rivulet from Haryana, and by the highly polluted Hindon River near Noida, by Najafgarh drain near Wazirabad and by various other drains, so that it continues only as a trickling sewage-bearing drain before joining the Chambal at Pachnada in the Etawah District of Uttar Pradesh.
The water quality in Upper Yamuna, as the long stretch of Yamuna is called from its origin at Yamunotri to Okhla barrage, is of "reasonably good quality" until the Wazirabad barrage in Delhi. Below this, the discharge of wastewater in Delhi through 15 drains between Wazirabad barrage and Okhla barrage renders the river severely polluted. Wazirabad barrage to Okhla Barrage, stretch of Yamuna in Delhi, is less than 2% of Yamuna's total length but accounts for nearly 80% of the total pollution in the river. Untreated wastewater and poor quality of water discharged from the wastewater treatment plants are the major reasons of Yamuna's pollution in Delhi. To address river pollution, measures have been taken by the Ministry of Environment and Forests (MoEF) under the Yamuna Action Plan (YAP) which has been implemented since 1993 by the MoEF's National River Conservation Directorate (NRCD).
Basin
Palaeochannels: Sarasvati's tributary
The present Sarsuti river, which originates in the Shivalik hills on the border of Himachal and Haryana and merges with the Ghaggar River near Pehowa, is the palaeochannel of the Yamuna. The Yamuna changed its course to the east due to a shift in the slope of the Earth's crust caused by plate tectonics.
Sources: Banderpoonch peak and Yamunotri glacier
The source of Yamuna lies in the Yamunotri Glacier at an elevation of , on the southwestern slopes of Banderpooch peaks, which lie in the Mussoorie range of the Lower Himalayas, north of Haridwar in Uttarkashi district, Uttarakhand. Yamunotri temple, a shrine dedicated to the goddess Yamuna, is one of the holiest shrines in Hinduism, and part of the Chota Char Dham Yatra circuit. Also standing close to the temple, on its trek route that follows the right bank of the river, lies Markendeya Tirtha, where the sage Markandeya wrote the Markandeya Purana.
Current channel
The river flows southwards for about , through the Lower Himalayas and the Shivalik Hills Range. Morainic deposits are found along the steep Upper Yamuna, highlighted with geomorphic features such as interlocking spurs, steep rock benches, gorges and stream terraces. Large terraces formed over a long period of time can be seen in the lower course of the river, such as those near Naugoan. An important part of its early catchment area, totalling , lies in Himachal Pradesh. The Tons, Yamuna's largest tributary, drains a large portion of the upper catchment area and holds more water than the main stream. It rises from the Hari-ki-dun valley and merges after Kalsi near Dehradun. The drainage system of the river stretches between Giri-Sutlej catchment in Himachal and Yamuna-Bhilangna catchment in Garhwal, also draining the ridge of Shimla. Kalanag () is the highest point of the Yamuna basin. Other tributaries in the region are the Giri, Rishi Ganga Kunta, Hanuman Ganga and Bata, which drain the upper catchment area of the Yamuna basin.
From the upper catchment area, the river descends onto the plains of Doon Valley, at Dak Pathar near Dehradun. Flowing through the Dakpathar Barrage, the water is diverted into a canal for power generation. Further downstream, the Assan River joins the Yamuna at the Asan Barrage, which hosts a bird sanctuary. After passing the Sikh pilgrimage town of Paonta Sahib, the Yamuna reaches Tajewala in Yamuna Nagar district (named after the river) of Haryana. A dam built here in 1873 is the origin of two important canals, the Western and Eastern Yamuna Canals, which irrigate the states of Haryana and Uttar Pradesh. The Western Yamuna Canal (WYC) crosses Yamuna Nagar, Karnal, Panipat and Sonipat before reaching the Haiderpur treatment plant, which contributes to Delhi's municipal water supply. The Yamuna receives wastewater from Yamuna Nagar and Panipat cities; beyond this it is replenished by seasonal streams and groundwater accrual. During the dry season, the Yamuna remains dry in many stretches between the Tajewala dam and Delhi, where it enters near the Palla barrage after traversing .
The Yamuna defines the state borders between Himachal Pradesh and Uttarakhand, and between Haryana, Delhi and Uttar Pradesh. When the Yamuna reaches the Indo-Gangetic plain, it runs almost parallel to the Ganges, the two rivers creating the Ganges-Yamuna Doab region. Spread across , one-third of the alluvial plain, the region is known for its agricultural output, particularly for the cultivation of basmati rice. The plain's agriculture supports one-third of India's population.
Subsequently, the Yamuna flows through the states of Delhi, Haryana and Uttar Pradesh before merging with the Ganges at a sacred spot known as Triveni Sangam in Prayagraj. Pilgrims travel by boats to platforms erected in midstream to offer prayers. During the Kumbh Mela, held every 12 years, large congregations of people immerse themselves in the sacred waters of the confluence. The cities of Baghpat, Delhi, Noida, Mathura, Agra, Firozabad, Etawah, Kalpi, Hamirpur, and Prayagraj lie on its banks. At Etawah, it meets another important tributary, Chambal, followed by a host of tributaries further down, including Sindh, the Betwa, and Ken.
Important tributaries
Yamuna's tributaries make up 70.9% of the catchment area and the river has six important tributaries:
Tons River is Yamuna's largest tributary and rises in the Bandarpoonch mountain. It meets Yamuna below Kalsi, near Dehradun, Uttarakhand.
Hindon River originates from Upper Shivalik, in the Lower Himalayan Range. It is a rain fed river and has a catchment area of and traverses .
Chambal River, also known as Charmanvati in ancient texts, flows through Rajasthan and Madhya Pradesh and traverses a total distance of from its source in the Vindhya Range, near Mhow. It has a drainage basin of and it supports hydro-power generation at Gandhi Sagar dam, Rana Pratap Sagar dam and Jawahar Sagar dam. The Chambal river merges with the Yamuna at Sahon village.
Kali River, rises in the Doon Valley and merges with the Hindon River.
Ken River, flows through Madhya Pradesh and Uttar Pradesh. It originates near Ahirgawan village in Jabalpur district and travels a distance of before merging with the Yamuna at Chilla village, near Fatehpur in Uttar Pradesh. It has an overall drainage basin of .
Betwa River originates in Bhopal district, in Madhya Pradesh. Its confluence with the Yamuna is in Hamirpur district, Uttar Pradesh. It has a catchment area of .
Background
Etymology
The name Yamuna seems to be derived from the Sanskrit word "yama", meaning 'twin', and it may have been applied to the river because it runs parallel to the Ganges.
History
The earliest mention of Yamuna is found at many places in the Rig Veda (c. 1500–1000 BCE), which was composed during the Vedic period BCE, and also in the later Atharvaveda, and the Brahmanas including Aitareya Brahmana and Shatapatha Brahmana. In the Rigveda, the story of the Yamuna describes her "excessive love" for her twin, Yama, who in turn asks her to find a suitable match for herself, which she does in Krishna.
Yamuna is mentioned as Iomanes (Ioames) in the surveys of Seleucus I Nicator, an officer of Alexander the Great and one of the Diadochi, who visited India in 305 BCE. Greek traveller and geographer Megasthenes visited India sometime before 288 BCE (the date of Chandragupta's death) and mentioned the river in his Indica, where he described the region around it as the land of Surasena. In Mahabharata, the Pandava capital of Indraprastha was situated on the banks of Yamuna, considered to be the site of modern Delhi.
Geological evidence indicates that in the distant past the Yamuna was a tributary of the Ghaggar River (identified by some as the Vedic Sarasvati River). It later changed its course eastward, becoming a tributary of the Ganges. While some have argued that this was due to a tectonic event, and may have led to the Sarasvati River drying up, the end of many Harappan civilisation settlements, and creation of the Thar desert, recent geological research suggests that the diversion of the Yamuna to the Ganges may have occurred during the Pleistocene, and thus could not be connected to the decline of the Harappan civilisation in the region.
Most of the great empires which ruled over a majority of India were based in the highly fertile Ganges–Yamuna basin, including the Magadha (), Maurya Empire (321–185 BCE), Shunga Empire (185–73 BCE), Kushan Empire (1st–3rd centuries CE), and Gupta Empire (280–550 CE), and many had their capitals here, in cities like Pataliputra or Mathura. These rivers were revered throughout these kingdoms that flourished on their banks; since the period of Chandragupta II (375–415 CE), statues of both the Ganges and Yamuna became common throughout the Gupta Empire. Further to the South, images of the Ganges and Yamuna are found amidst shrines of the Chalukyas, Rashtrakutas (753–982), and on their royal seals; prior to them, the Chola Empire also added the river into their architectural motifs. The Three River Goddess shrine, next to the Kailash rock-cut Temple at Ellora, shows the Ganges flanked by the Yamuna and Saraswati.
Use of water
1994 water sharing agreement
The stretch of the river from its origin at Yamunotri to Okhla barrage in Delhi is called "Upper Yamuna". A Memorandum of Understanding (MoU) was signed amongst the basin states (Himachal Pradesh, Uttar Pradesh, Uttarakhand, Haryana, Rajasthan, and Delhi) on 12 May 1994 for sharing of its waters. This led to the formation of the Upper Yamuna River Board under India's Ministry of Water Resources, whose primary functions are: regulation of the available flows amongst the beneficiary states and monitoring the return flows; monitoring conservation and upgrading the quality of surface and groundwater; maintaining hydro-meteorological data for the basin; overviewing plans for watershed management; and monitoring and reviewing the progress of all projects up to and including Okhla barrage.
Flood forecasting systems are established at Paonta Sahib, where the Tons, Pawar and Giri tributaries meet. The river takes 60 hours to travel from Tajewala to Delhi, thus allowing a two-day advance flood warning period. The Central Water Commission started flood-forecasting services in 1958 with its first forecasting station on the Yamuna at Delhi Railway Bridge.
Barrages
Yamuna has the following six functional barrages (eight including old replaced barrages, nine including a new proposed barrage), from north-west to southeast:
Dakpathar Barrage in Uttarakhand, managed by the Uttarakhand government.
Hathni Kund Barrage in Haryana, from the source of Yamuna, built in 1999 and managed by Haryana government.
Tajewala Barrage was built in 1873 and replaced by the Hathni Kund.
Wazirabad barrage in north Delhi, from Hathni Kund barrage, managed by the Delhi government.
"New Wazirabad barrage", proposed in 2013, to be built north of the Wazirabad barrage.
ITO barrage (Indraparstha barrage) in central Delhi, managed by the Haryana govt.
Okhla barrage is from Wazirabad to south Delhi, managed by the Uttar Pradesh (UP) government.
New Okhla Barrage, a new barrage, managed by the UP government.
Palla barrage downstream on "Delhi-Faridabad canal" in Haryana, managed by the Haryana government.
Gokul barrage (a.k.a. Mathura barrage) is at Gokul in Uttar Pradesh, managed by the UP government.
Irrigation
Use of the Yamuna's waters for irrigation in the Indo-Gangetic Plains is enhanced by its many canals, some dating to the 14th century Tughlaq dynasty, which built the Nahr-i-Bahisht (Paradise) parallel to the river. The Nahr-i-Bahisht was restored and extended by the Mughals in the first half of the 17th century, by engineer Ali Mardan Khan, starting from Benawas where the river enters the plains and terminating near the Mughal capital of Shahjahanabad, the present city of Delhi.
Eastern Yamuna Canal
As the Yamuna enters the Northern Plains near Dakpathar at an elevation of , the Eastern Yamuna Canal commences at the Dakpathar Barrage and pauses at the Asan and Hathnikund Barrages before continuing south.
Western Yamuna Canal
The Western Yamuna Canal (WYC) was built in 1335 CE by Firuz Shah Tughlaq. Excessive silting caused it to stop flowing , when the British Raj undertook a three-year renovation in 1817 by Bengal Engineer Group. The Tajewala Barrage dam was built in 1832–33 to regulate the flow of water, and was replaced by the modern Hathni Kund Barrage in 1999.
The main canal is long. When including its branches and many major and minor irrigation channels, it has a total length of . The WYC begins at the Hathni Kund Barrage, about from Dakpathar and south of Doon Valley. The canals irrigate vast tracts of land in the region, in the Ambala, Karnal, Sonipat, Rohtak, Jind, Hisar and Bhiwani districts.
The major branch canals are:
Agra Canal, built in 1874, which starts from the Okhla barrage beyond the Nizamuddin bridge, joining the Banganga river about below Agra. During the dry summer season, the stretch above Agra resembles a minor stream.
Munak canal, built in 1819 and renovated in 2008, originates at Munak in Karnal district and extends 22 km to Delhi, carrying of water.
Delhi Branch
Bhalaut Branch, originating at Khubru village, flows through Jhajjar district.
Jhajjar Branch, flows through Jhajjar district.
Sirsa Branch, the largest branch of the WYC, constructed in 1889–1895. It originates at Indri and meanders through Jind district, Fatehabad district and Sirsa district.
Jind Branch
Bhiwani Branch, which meanders through Bhiwani district and passes Bidhwan.
Barwala Branch
Hansi Branch, built in 1825 and remodelled in 1959. It originates at Munak and meanders through Hansi tehsil of Hisar district.
Butana Branch
Sunder Branch, which passes Kanwari in Hisar district.
Rohtak Branch
Sutlej–Yamuna Link Canal
A proposed heavy freight canal, the Sutlej–Yamuna Link (SYL), is being built westwards from near Yamuna's headwaters through the Punjab region near an ancient caravan route and highlands pass to the navigable parts of the Sutlej–Indus watershed. This will connect the Ganges, which flows to the east coast of the subcontinent, with points west (via Pakistan). When completed, the SYL will allow shipping from India's east coast to the west coast and the Arabian Sea, shortening important commercial links for north-central India's large population. The canal starts near Delhi, and is designed to transfer Haryana's share of from the Indus Basin.
National Waterway
Yamuna is one of the National Waterways of India, designated as NW110 in Haryana, Delhi and Uttar Pradesh. Some of its sections are being developed for navigation:
Delhi–Faridabad (Wazirabad barrage to Palla barrage, via ITO barrage), is being developed for passenger and cargo ferry service.
Delhi–Agra (Okhla barrage to Agra Canal), is planned for steamer service by the end of June 2017 with the help of the Netherlands.
Religious significance
Purifying waters
Like the Ganges, the Yamuna River is highly venerated in Hinduism in the form of a river and as the goddess Yamuna. The Yamuna is considered a river of heaven. The Rig Veda includes the Yamuna River as one of the seven sacred rivers, along with the Ganges. According to Hindu mythology, the River was brought to Earth by the ascetic practice of the Seven Sages where she first descended on Mount Kalinda. Therefore, Yamuna is also known as Kalindi.
The Padma Purana describes Yamuna's purifying properties and states that her waters cleanse the mind from sin. It also mentions that bathing in her sacred waters frees one from the torments of death. Art from the Gupta period depict Yamuna and Ganga on the entrances and doorjambs of temples and sacred places. Upon passing through these doors, visitors were symbolically purified by these rivers.
Some religious figures (notably pilgrim priests of Mathura and Vrindavan) do not regard the physical pollution of the Yamuna to have any effect on the river's spiritual purity. The Braj region is where the worship of the Yamuna and its pollution is most pronounced. However, more and more Hindus no longer ritually bathe in the Yamuna, drink its water, or use its water for worship. In Vrindavan's holy shrines, bottled water is used instead.
Goddess personified
In her human form, Yamuna is the daughter of Surya, the sun god, and his wife Saranyu. She is the twin sister of Yama, the god of death, and is also known as Yami. The Agni Purana describes Yamuna as having a dark complexion, mounted on a turtle, and holding a pot in her hand.
Devotion
Yamuna, as a river and goddess, has a close association with Krishna. The Puranas narrate many stories about Krishna in relation to the river and its surroundings. One such story is of Kaliya Daman, the subduing of Kaliya, a Nāga which had inhabited the river and terrorised the people of Braja. Due to Krishna's connection with the River and the Braja region, the Yamuna River is a center of pilgrimage for his devotees. In the Pushti Marga, founded by Vallabhacharya and in which Krishna is the main deity, Yamuna is worshipped as a goddess.
The Yamunashtakam is a 16th-century Sanskrit hymn composed by Vallabhacharya which describes the story of Yamuna's descent to meet her beloved Krishna and to purify the world. The hymn also praises her for being the source of all spiritual abilities. And while the Ganges is considered an epitome of asceticism and higher knowledge and can grant Moksha or liberation, it is Yamuna, who, being a holder of infinite love and compassion, can grant freedom, even from death, the realm of her elder brother. Vallabhacharya writes that she rushes down the Kalinda Mountain, and describes her as the daughter of Kalinda, giving her the name Kalindi, the backdrop of Krishna Leela. The text also talks about her water being the colour of Lord Krishna, which is dark (Shyam). The river is referred to as Asita in some historical texts.
Shlokas on Yamuna
Numerous Hindu texts have shlokas (hymns) on Yamuna as follows:
"One should not give up the process of austerity. If possible, one should bathe in the water of the Yamuna. This is an item of austerity. Therefore, our Krishna consciousness movement has established a center in Vrindavana so that one may bathe in the Yamuna, chant the Hare Krishna mantra and then become perfect and return back to Godhead." (Srimad Bhagavatam 6.5.28 purport)
Ecology
Fauna
The Yamuna from the source to its culmination in the Ganges is a habitat for fish for approximately stretch and supports a rich diversity of species. Fish from the family Cyprinidae dominate the variety of fish species found in the river. This includes Indian carp and also invasive species from the same family. In a study, 93 species of fish were found in the river, including catfish. Species of non-native Tilapia have become established in the river. They have been implicated in the decline of the Ghariyal (Indian crocodile) population in the river. Large turtles used to be a common sight on the river a few decades ago, but they have mostly disappeared.
Pollution
In 1909, the waters of the Yamuna were distinguishable as clear blue, compared to the silt-laden yellow of the Ganges. However, due to high-density population growth and fast industrialisation, the Yamuna has become one of the most polluted rivers in the world. The Yamuna is particularly polluted downstream of New Delhi, the capital of India, which dumps about 58% of its waste into the river. A 2016 study shows that there is 100% urban metabolism of the River Yamuna as it passes through the National Capital Territory (NCT) of Delhi. Most of the pollution comes from Wazirabad, from where the Yamuna enters Delhi.
In November 2024, a video went viral in which women were shown bathing in foam that had formed on the river. Although it appeared similar to foam produced by cosmetic products such as soap or shampoo, experts determined that the foam was caused by heavy pollution and was therefore hazardous. Local authorities instructed residents not to bathe in the river because of the health risk.
Causes
The Wazirabad barrage to New Okhla Barrage segment, a 22 km stretch of the Yamuna in Delhi, is less than 2% of the Yamuna's total length but accounts for nearly 80% of the total pollution in the river. In addition, 22 out of 35 sewage treatment plants in Delhi do not meet the wastewater standards prescribed by the Delhi Pollution Control Committee (DPCC); untreated wastewater and the poor quality of water discharged from the wastewater treatment plants are thus the major reasons. As of 2019, the river receives 800 million litres of largely untreated sewage and an additional 44 million litres of industrial effluents each day; only about 35 percent of the sewage released into the river is believed to be treated. In 1994, the states of Uttarakhand, Himachal Pradesh, Uttar Pradesh, Haryana, Rajasthan and Delhi made a water sharing agreement that is due for revision in 2025. Achieving a water quality suitable for bathing (BOD < 3 mg/L and DO > 5 mg/L) would require a greater rate of water flow in the river. A study has recommended that per second of water should be released from the Hathni Kund Barrage during the lean season to provide a minimum environmental flow in the Yamuna.
The last barrage across the Yamuna river is the Mathura barrage at Gokul, which supplies its drinking water. Downstream of this barrage, many pumping stations have been constructed to draw river water for irrigation needs. These pumping stations are near Pateora Danda, Samgara, Ainjhi, Bilas Khadar, and Samari. Depletion by these pump houses of the base flows available in the river during the non-monsoon months is exacerbating river pollution from Mathura to Prayagraj, in the absence of adequate fresh water to dilute the polluted drainage from habitations and industries.
Cleanup efforts
To address river pollution, measures have been taken by the Ministry of Environment and Forests (MoEF) in 12 towns of Haryana, 8 towns of Uttar Pradesh, and Delhi, under the Yamuna Action Plan (YAP) which has been implemented since 1993 by the MoEF's National River Conservation Directorate (NRCD). The Japan Bank for International Cooperation is participating in the YAP in 15 of the towns (excluding 6 towns of Haryana included later on the direction of the Supreme Court of India) with soft loan assistance of 17.773 billion Japanese yen (equivalent to about 700 crore [7 billion rupees]), while the government of India is providing the funds for the remaining 6 towns. In 2007, the Indian government's plans to repair sewage lines were predicted to improve the water quality of the river by 90% by 2010.
Under the YAP- III scheme, a new sewage treatment plant is being built at the largest such facility in India by the Delhi Jal Board (DJB). The plant is predicted to be able to treat 124 million gallons of wastewater per day, amounting to a daily removal of of organic pollutants as well as of solids.
In August 2009, the Delhi Jal Board (DJB) initiated its plan for resuscitating the Yamuna's stretch in Delhi by constructing interceptor sewers, at the cost of about 1,800 crore (18 billion rupees).
On 25 April 2014, the National Green Tribunal (NGT) recommended that the government declare a stretch of the Yamuna in Delhi and Uttar Pradesh a conservation zone. A report prepared by the Ministry of Environment and Forests (MoEF) panel was submitted to the tribunal on the same day.
The High Court in the northern Indian state of Uttarakhand ordered in March 2017 that the Ganges and its main tributary, the Yamuna, be assigned the status of legal entities, making the rivers "legal and living entities having the status of a legal person with all corresponding rights, duties and liabilities". This decision meant that polluting or damaging the rivers is equivalent to harming a person. The court cited the example of the New Zealand Whanganui River, which was also declared to possess full rights of a legal person.
Gallery
See also
Environmental personhood
List of rivers of India
List of most-polluted rivers
Western Jamuna Canal Link
Yamuna in Hinduism
Yamuna Pushkaram
Yamuna Pushta
References
Further reading
External links
The Geography of the Rigveda
Yamuna Action Plan
. The Guardian (7 July 2017)
Yamuna Mission
Yamuna and other Rivers SSC Questions
The painting A Ruin on the Banks of the Jumna, Above the City of Delhi by William Purser, engraved by William Joseph Taylor, as an illustration to , a poem by Letitia Elizabeth Landon.
Rigvedic rivers
Rivers of Delhi
Rivers of Haryana
Rivers of Uttar Pradesh
Rivers of Uttarakhand
Sacred rivers
Sea and river goddesses
Tributaries of the Ganges
Sarasvati River
Rivers in Buddhism
Environmental personhood | Yamuna | [
"Environmental_science"
] | 6,275 | [
"Environmental personhood",
"Environmental ethics"
] |
354,319 | https://en.wikipedia.org/wiki/Outline%20of%20category%20theory | The following outline is provided as an overview of and guide to category theory, the area of study in mathematics that examines in an abstract way the properties of particular mathematical concepts, by formalising them as collections of objects and arrows (also called morphisms, although this term also has a specific, non category-theoretical sense), where these collections satisfy certain basic conditions. Many significant areas of mathematics can be formalised as categories, and the use of category theory allows many intricate and subtle mathematical results in these fields to be stated, and proved, in a much simpler way than without the use of categories.
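For reference, the "basic conditions" mentioned above are the axioms in the usual definition of a category; one standard way to spell them out (in LaTeX, with conventional symbols) is:

% A category C consists of the following data, subject to the stated axioms.
\begin{itemize}
  \item a collection of objects $\mathrm{Ob}(\mathcal{C})$;
  \item for each pair of objects $A, B$, a collection of morphisms (arrows) $\mathrm{Hom}_{\mathcal{C}}(A, B)$;
  \item a composition operation $\circ \colon \mathrm{Hom}(B, C) \times \mathrm{Hom}(A, B) \to \mathrm{Hom}(A, C)$
        that is associative: $h \circ (g \circ f) = (h \circ g) \circ f$;
  \item for each object $A$, an identity morphism $\mathrm{id}_A$ with
        $f \circ \mathrm{id}_A = f$ and $\mathrm{id}_B \circ f = f$ for every $f \colon A \to B$.
\end{itemize}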
Essence of category theory
Category
Functor
Natural transformation
Branches of category theory
Homological algebra
Diagram chasing
Topos theory
Enriched category theory
Higher category theory
Categorical logic
Applied category theory
Specific categories
Category of sets
Concrete category
Category of vector spaces
Category of graded vector spaces
Category of chain complexes
Category of finite dimensional Hilbert spaces
Category of sets and relations
Category of topological spaces
Category of metric spaces
Category of preordered sets
Category of groups
Category of abelian groups
Category of rings
Category of magmas
Objects
Initial object
Terminal object
Zero object
Subobject
Group object
Magma object
Natural number object
Exponential object
Morphisms
Epimorphism
Monomorphism
Zero morphism
Normal morphism
Dual (category theory)
Groupoid
Image (category theory)
Coimage
Commutative diagram
Cartesian morphism
Slice category
Functors
Isomorphism of categories
Natural transformation
Equivalence of categories
Subcategory
Faithful functor
Full functor
Forgetful functor
Yoneda lemma
Representable functor
Functor category
Adjoint functors
Galois connection
Pontryagin duality
Affine scheme
Monad (category theory)
Comonad
Combinatorial species
Exact functor
Derived functor
Dominant functor
Enriched functor
Kan extension of a functor
Hom functor
Limits
Product (category theory)
Equaliser (mathematics)
Kernel (category theory)
Pullback (category theory)/fiber product
Inverse limit
Pro-finite group
Colimit
Coproduct
Coequalizer
Cokernel
Pushout (category theory)
Direct limit
Biproduct
Direct sum
Additive structure
Preadditive category
Additive category
Pre-Abelian category
Abelian category
Exact sequence
Exact functor
Snake lemma
Nine lemma
Five lemma
Short five lemma
Mitchell's embedding theorem
Injective cogenerator
Derived category
Triangulated category
Model category
2-category
Dagger categories
Dagger symmetric monoidal category
Dagger compact category
Strongly ribbon category
Monoidal categories
Closed monoidal category
Braided monoidal category
Cartesian closed category
Topos
Category of small categories
Structure
Semigroupoid
Comma category
Localization of a category
Enriched category
Bicategory
Topoi, toposes
Sheaf
Gluing axiom
Descent (category theory)
Grothendieck topology
Introduction to topos theory
Subobject classifier
Pointless topology
Heyting algebra
History of category theory
History of category theory
Persons influential in the field of category theory
Category theory scholars
Saunders Mac Lane
Samuel Eilenberg
Max Kelly
William Lawvere
André Joyal
See also
Abstract nonsense
Glossary of category theory
Category theory
"Mathematics"
] | 622 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations",
"nan"
] |
354,320 | https://en.wikipedia.org/wiki/Institution%20of%20Civil%20Engineers | The Institution of Civil Engineers (ICE) is an independent professional association for civil engineers and a charitable body in the United Kingdom. Based in London, ICE has over 92,000 members, of whom three-quarters are located in the UK, while the rest are located in more than 150 other countries. The ICE aims to support the civil engineering profession by offering professional qualification, promoting education, maintaining professional ethics, and liaising with industry, academia and government. Under its commercial arm, it delivers training, recruitment, publishing and contract services. As a professional body, ICE aims to support and promote professional learning (for both students and existing practitioners), to manage professional ethics and safeguard the status of engineers, and to represent the interests of the profession in dealings with government. It sets standards for membership of the body, works with industry and academia to progress engineering standards, and advises on education and training curricula.
History
The late 18th century and early 19th century saw the founding of many learned societies and professional bodies (for example, the Royal Society and the Law Society). Groups calling themselves civil engineers had been meeting for some years from the late 18th century, notably the Society of Civil Engineers formed in 1771 by John Smeaton (renamed the Smeatonian Society after his death). At that time, formal engineering in Britain was limited to the military engineers of the Corps of Royal Engineers, and in the spirit of self-help prevalent at the time and to provide a focus for the fledgling 'civilian engineers', the Institution of Civil Engineers was founded as the world's first professional engineering body.
The initiative to found the Institution was taken in 1818 by eight young engineers, Henry Robinson Palmer (23), William Maudslay (23), Thomas Maudslay (26), James Jones (28), Charles Collinge (26), John Lethbridge, James Ashwell (19) and Joshua Field (32), who held an inaugural meeting on 2 January 1818, at the Kendal Coffee House in Fleet Street. The institution made little headway until a key step was taken – the appointment of Thomas Telford as the first President of the body. Greatly respected within the profession and blessed with numerous contacts across the industry and in government circles, he was instrumental in drumming up membership and getting a Royal Charter for ICE in 1828. This official recognition helped establish ICE as the pre-eminent organisation for engineers of all disciplines.
Early definitions of a Civil Engineer can be found in the discussions held on 2 January 1818 and in the application for Royal Chartership. In 1818 Palmer said that:
The objects of such institution, as recited in the charter, and reported in The Times, were
After Telford's death in 1834, the organisation moved into premises in Great George Street in the heart of Westminster in 1839, and began to publish learned papers on engineering topics. Its members, notably William Cubitt, were also prominent in the organisation of the Great Exhibition of 1851.
For 29 years ICE provided the forum for engineers practising in all the disciplines recognised today. Mechanical engineer and tool-maker Henry Maudslay was an early member and Joseph Whitworth presented one of the earliest papers – it was not until 1847 that the Institution of Mechanical Engineers was established (with George Stephenson as its first President).
By the end of the 19th century, ICE had introduced examinations for professional engineering qualifications to help ensure and maintain high standards among its members – a role it continues today.
The ICE's Great George Street headquarters, designed by James Miller, was built by John Mowlem & Co and completed in 1911.
Membership and professional qualification
The institution is a membership organisation comprising 95,460 members worldwide (as of 31 December 2022); around three-quarters are located in the United Kingdom. Membership grades include:
Student
Graduate (GMICE)
Associate (AMICE)
Technician (MICE)
Member (MICE)
Fellow (FICE)
ICE is a licensed body of the Engineering Council and can award the Chartered Engineer (CEng), Incorporated Engineer (IEng) and Engineering Technician (EngTech) professional qualifications. Members who are Chartered Engineers can use the protected title Chartered Civil Engineer.
ICE is also licensed by the Society for the Environment to award the Chartered Environmentalist (CEnv) professional qualification.
Publishing
The Institution of Civil Engineers also publishes technical studies covering research and best practice in civil engineering. Under its commercial arm, Thomas Telford Ltd, it delivers training, recruitment, publishing and contract services, such as the NEC Engineering and Construction Contract. All the profits of Thomas Telford Ltd go back to the Institution to further its stated aim of putting civil engineers at the heart of society. The publishing division has existed since 1836 and is today called ICE Publishing. ICE Publishing produces roughly 30 books a year, including the ICE Manuals series, and 30 civil engineering journals, including the ICE Proceedings in nineteen parts, Géotechnique, and the Magazine of Concrete Research. The ICE Science series is now also published by ICE Publishing. ICE Science currently consists of five journals: Nanomaterials and Energy, Emerging Materials Research, Bioinspired, Biomimetic and Nanobiomaterials, Green Materials and Surface Innovations.
Nineteen individual parts now make up the Proceedings, as follows:
Proceedings of the Institution of Civil Engineers: Bridge Engineering
Proceedings of the Institution of Civil Engineers: Civil Engineering
Proceedings of the Institution of Civil Engineers: Construction Materials
Proceedings of the Institution of Civil Engineers: Energy
Proceedings of the Institution of Civil Engineers: Engineering and Computational Mechanics
Proceedings of the Institution of Civil Engineers: Engineering History and Heritage
Proceedings of the Institution of Civil Engineers: Engineering Sustainability
Proceedings of the Institution of Civil Engineers: Forensic Engineering
Proceedings of the Institution of Civil Engineers: Geotechnical Engineering
Proceedings of the Institution of Civil Engineers: Ground Improvement
Proceedings of the Institution of Civil Engineers: Management, Procurement and Law
Proceedings of the Institution of Civil Engineers: Maritime Engineering
Proceedings of the Institution of Civil Engineers: Municipal Engineer
Proceedings of the Institution of Civil Engineers: Smart Infrastructure and Construction
Proceedings of the Institution of Civil Engineers: Structures and Buildings
Proceedings of the Institution of Civil Engineers: Transport
Proceedings of the Institution of Civil Engineers: Urban Design and Planning
Proceedings of the Institution of Civil Engineers: Waste and Resource Management
Proceedings of the Institution of Civil Engineers: Water Management
ICE members, except for students, also receive the New Civil Engineer magazine (published weekly from 1995 to 2017 by Emap, now published monthly by Metropolis International).
Specialist Knowledge Societies
The ICE also administers 15 Specialist Knowledge Societies created at different times to support special interest groups within the civil engineering industry, some of which are British sections of international and/or European bodies. The societies provide continuing professional development and assist in the transfer of knowledge concerning specialist areas of engineering.
The Specialist Knowledge Societies are:
Governance
The institution is governed by the ICE Trustee Board, comprising the President, three Vice Presidents, four members elected from the membership, three ICE Council members, and one nominated member. The President is the public face of the institution and day-to-day management is the responsibility of the Director General.
President
The ICE President is elected annually and the holder for 2024–2025 is Jim Hall.
Each year a number of young engineers have been chosen as President's apprentices. The scheme was started in 2005 during the presidency of Gordon Masterton, who also initiated a President's blog, now the ICE Infrastructure blog. Each incoming President sets out the main theme of his or her year of office in a Presidential Address.
Many of the profession's greatest engineers have served as President of the ICE, including:
One of Britain's greatest engineers, Isambard Kingdom Brunel, died before he could take up the post (he was vice-president from 1850).
Female civil engineers
The first woman member of ICE was Dorothy Donaldson Buchanan in 1927. The first female Fellows elected were Molly Fergusson (1957), Marie Lindley (1972), Helen Stone (1991) and Joanna Kennedy (1992). In January 2025, 30-year-old Costain engineer Georgia Thompson became the youngest woman to be elected to a Fellowship of the ICE.
The three female Presidents (to date) are Jean Venables, who became the 144th holder of the office in 2008, Rachel Skinner, who became President in 2020, and Anusha Shah, the President in 2023.
In January 1969 the Council of the Institution set up a working party to consider the role of women in engineering. Among its conclusions were that 'while women have certainly established their competence throughout the professional engineering field, there is clearly a built-in or unconscious prejudice against them'. The WISE Campaign (Women into Science and Engineering) was launched in 1984; by 1992 3% of the total ICE membership of 79,000 was female, and only 0.8% of chartered civil engineers were women. By 2016 women comprised nearly 12% of total membership, almost 7% of chartered civil engineers and just over 2% of Fellows. In June 2015 a Presidential Commission on diversity was announced. By the start of 2023 women made up 16% of overall membership, with female fellows comprising 6% of the fellowship.
Awards
The Institution makes various awards to recognise the work of its members. In addition to awards for technical papers, reports and competition entries it awards medals for different achievements.
Gold Medal – The Gold Medal is awarded to an individual who has made valuable contributions to civil engineering over many years. This may cover contributions in one or more areas, such as, design, research, development, investigation, construction, management (including project management), education and training.
Garth Watson Medal – The Garth Watson Medal is awarded for dedicated and valuable service to ICE by an ICE Member or member of staff.
Brunel Medal – The Brunel Medal is awarded to teams, individuals or organisations operating within the built environment and recognises excellence in civil engineering.
Edmund Hambly Medal – The Edmund Hambly Medal is awarded for creative design in an engineering project that makes a substantial contribution to sustainable development. It is awarded to projects, of any scale, which take into account such factors as full life-cycle effects, including de-commissioning, and show an understanding of the implications of infrastructure impact upon the environment. The medal is awarded in honour of past president Edmund Hambly, who was a proponent of sustainable engineering.
International Medal – The International Medal is awarded annually to a civil engineer who has made an outstanding contribution to civil engineering outside the United Kingdom or an engineer who resides outside the United Kingdom.
Warren Medal – The Warren Medal is awarded annually to an ICE member in recognition of valuable services to his or her region.
Telford Medal – Telford Medal is the highest prize that can be awarded by the ICE for a paper.
James Alfred Ewing Medal – The James Alfred Ewing Medal is awarded by the council on the joint nomination of the president and the President of the Royal Society.
James Forrest Medal – The James Forrest Medal was established in honour of James Forrest upon his retirement as secretary in 1896.
Baker Medal – The Baker Medal was established in 1934 to recognise papers that promote or cover developments in engineering practice, or investigation into problems with which Sir Benjamin Baker was specially identified.
Jean Venables Medal – Since 2011, the Institution has awarded a Jean Venables Medal to its best Technician Professional Review candidate.
President's Medal
Emerging Engineer Award
James Rennie Medal – For the best Chartered Professional Review candidate of the year. Named after James Rennie, a civil engineer noted for his devotion to the training of new engineers.
Renée Redfern Hunt Memorial Prize – For the best chartered or member professional review written exercise of the year. Named for an ICE staff member who served as examinations officer from 1945 to 1981.
Tony Chapman Medal – For the best member professional review candidate of the year. Named after an ICE council member who played a key role in the integration of the Board of Incorporated Engineers and Technicians into the institution and in promoting incorporated engineer status.
Chris Binnie Award for Sustainable Water Management
The Bev Waugh Award – Since 2021, for productivity and culture, recognises a leader or individual who has had a positive impact on joint team working
Adrian Long Medal
Student chapters
The ICE has student chapters in several countries including Hong Kong, India, Indonesia, Malaysia, Malta, Pakistan, Poland, Sudan, Trinidad, and United Arab Emirates.
Arms
See also
Chartered Institution of Civil Engineering Surveyors
Construction Industry Council
References
Charles Matthew Norrie (1956). Bridging the Years – a short history of British Civil Engineering. Edward Arnold (Publishers) Ltd.
Garth Watson (1988). The Civils – The story of the Institution of Civil Engineers. Thomas Telford Ltd
Hugh Ferguson and Mike Chrimes (2011). The Civil Engineers – The story of the Institution of Civil Engineers and the People Who Made It. Thomas Telford Ltd
External links
Royal Charter and other documentation for governance of ICE
ICE Royal Charter, By-laws and Regulations,
ICE Publishing website
ICE Science website (archived 11 April 2013)
Civil engineering professional associations
ECUK Licensed Members
Organisations based in the City of Westminster
Organizations established in 1818
1818 establishments in the United Kingdom | Institution of Civil Engineers | [
"Engineering"
] | 2,643 | [
"Civil engineering professional associations",
"Civil engineering organizations"
] |
354,418 | https://en.wikipedia.org/wiki/Peter%20Norton | Peter Norton (born November 14, 1943) is an American programmer, software publisher, author, and philanthropist. He is best known for the computer programs and books that bear his name and portrait. Norton sold his software business to Symantec Corporation (now Gen Digital) in 1990.
Norton was born in Aberdeen, Washington, and raised in Seattle. He attended Reed College and later worked on mainframes and minicomputers for companies like Boeing and the Jet Propulsion Laboratory. Norton founded Peter Norton Computing in 1982, pioneering IBM PC compatible utilities software. His first computer book, "Inside the IBM PC: Access to Advanced Features & Programming," was published in 1983. By 1988, Norton Computing had grown to $15 million in revenue with 38 employees. In 1990, Norton Computing released the Norton Backup program, and later that year Norton sold the company to Symantec for $70 million.
Norton later chaired Acorn Technologies and eChinaCash. He has a significant personal art collection and has been involved in various philanthropic endeavors, including the Peter Norton Family Foundation. He has also donated art to numerous museums and universities.
Early life
Norton was born in Aberdeen, Washington, and raised in Seattle. He attended Reed College in Portland, Oregon, and majored in math and philosophy. He graduated in 1965. Before he became involved with microcomputers, he spent a dozen years working on mainframes and minicomputers for companies including Boeing and Jet Propulsion Laboratory. His earliest low-level system utilities were designed to allow mainframe programmers access to a block of RAM that IBM normally reserved for diagnostics.
Career
Utility software
When the IBM PC made its debut in 1981, Norton was among the first to buy one. After he was laid off during an aerospace industry cutback, he took up microcomputer programming to make ends meet. One day he accidentally deleted a file. Rather than re-enter the data, as most would have, he decided to write a program to recover the information from the disk. His friends were delighted with the program and he developed a group of utility programs that he sold – one at a time – to user groups. In 1982, he founded Peter Norton Computing with $30,000 and an IBM computer. The company was a pioneer in IBM PC compatible utilities software. Its 1982 introduction of the Norton Utilities included Norton's UNERASE tool to retrieve erased data from MS-DOS and IBM PC DOS formatted disks.
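The principle behind an undelete tool of this kind can be illustrated with a short sketch. On FAT-family file systems, deleting a file overwrites only the first byte of its 32-byte directory entry with the marker 0xE5 and frees its clusters; until those clusters are reused, restoring the first byte recovers the entry. The Python below is a minimal, hypothetical illustration of that idea, not Norton's actual code:

```python
# Minimal sketch of the idea behind an UNERASE-style tool on FAT file systems.
# Deleting a file only replaces the first byte of its 32-byte directory entry
# with 0xE5 and frees its clusters; the file's data remains on disk until the
# clusters are overwritten.
DELETED_MARK = 0xE5
ENTRY_SIZE = 32

def find_deleted_entries(directory_table: bytes):
    """Yield (offset, raw_entry) for every directory entry flagged as deleted."""
    for offset in range(0, len(directory_table), ENTRY_SIZE):
        entry = directory_table[offset:offset + ENTRY_SIZE]
        if len(entry) == ENTRY_SIZE and entry[0] == DELETED_MARK:
            yield offset, entry

def undelete_entry(entry: bytes, first_char: bytes = b"_") -> bytes:
    """Rebuild a directory entry by replacing the deletion marker with a chosen character."""
    return first_char + entry[1:]
```

A real tool would also have to re-link the file's cluster chain in the file allocation table and verify that none of the clusters had been reallocated in the meantime.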
In 1984, Norton Computing reached $1 million in revenue, and version 3.0 of the Norton Utilities was released. Norton had three clerical people working for him. He was doing all of the software development, all of the book writing, all of the manual writing and running the business. He hired his fourth employee and first programmer, Brad Kingsbury, in July 1985. In late 1985, Norton hired a business manager to take care of the day-to-day operations.
In 1985, Norton Computing produced the Norton Editor, a programmer's text editor created by Stanley Reifel, and Norton Guides, a terminate-and-stay-resident program which showed reference information for assembly language and other IBM PC internals, but could also display other reference information compiled into the appropriate file format. Norton Commander, a file managing tool for DOS, was introduced in 1986.
Norton Computing revenue rose to $5 million in 1986, $11 million in 1987, and $15 million in 1988. Its products won several utility awards, and it was ranked 136th on the 1988 Inc. magazine list of the 500 fastest-growing private companies in America, with 38 employees. Norton himself was named "Entrepreneur of the Year" by Arthur Young & Co. (1988 High Technology Award Winner Greater Los Angeles Region) and Venture magazine.
On April 12, 1989, Norton appointed Ron Posner chief executive of Norton Computing. Norton continued as chairman. Posner's goal was to rapidly grow the company into a major software vendor. Soon after his arrival, Posner hired a new president, a new chief financial officer, and added a vice president of sales.
In March 1990, Norton Computing released the Norton Backup program dedicated to backing up and restoring hard disks. Norton Utilities for the Macintosh was launched in July.
In 1989, Norton Computing had $25 million in sales. In August 1990, Norton sold it to Symantec for $70 million. Posner orchestrated the merger. Norton was given one-third of Symantec's stock, worth about $60 million, and a seat on Symantec's board of directors. The acquired company became a division of Symantec and was renamed Peter Norton Computing Group. About one-third of Norton Computing's 115 employees were laid off after the merger. The Norton brand name lives on in such Symantec products as Norton AntiVirus, Norton 360, Norton Internet Security, Norton Personal Firewall, Norton SystemWorks (which now contains a current version of the Norton Utilities), Norton AntiBot, Norton AntiSpam, Norton GoBack (formerly Roxio GoBack), Norton PartitionMagic (formerly PowerQuest PartitionMagic), and Norton Ghost. Norton's image was used on the packaging of all Norton-branded products until 2001.
Author
Jerry Pournelle wrote in 1985 that Norton had "remarkable talents for explaining the complex with clarity". Norton marketed his early software in person, leaving behind little pamphlets with technical notes at users group meetings and computer stores. A publisher saw his pamphlets, recognized that he could write about a technical subject, and called to ask whether he wanted to write a book. Norton's first computer book, Inside the IBM PC: Access to Advanced Features & Programming (Techniques), was published in 1983. Eight editions of this bestseller were published, the last in 1999. Norton wrote several other technical manuals and introductory computing books. He began writing monthly columns for PC Magazine in 1983, and later for PC Week as well, continuing until 1987. He soon became recognized as a principal authority on IBM personal computer technology.
In September 1983, Norton started work on The Peter Norton Programmer's Guide to the IBM PC. The book was a popular and comprehensive guide to programming the original IBM PC platform (covering BIOS and MS-DOS system calls in great detail). The first (1985) edition was nicknamed "the pink shirt book", after the pink shirt that Norton wore for the cover photo, and Norton's crossed-arm pose on that cover is a U.S. registered trademark.
The second (1988) edition, renamed The New Peter Norton Programmer's Guide to the IBM PC & PS/2, again featured the crossed arms, pink shirt cover image. Richard Wilton co-authored the second edition. This was followed by the third (1993) edition of "the Norton book", renamed The Peter Norton PC Programmer's Bible, co-authored with Wilton and Peter Aitken. Later editions of Peter Norton's Inside the PC, a broad-brush introduction to personal computer technology, featured Norton in his crossed-arm pose on the cover, wearing a white shirt.
Later career
In 2002, Acorn Technologies lured Norton out of a 10-year business hibernation. Norton has a "significant investment" in the company and serves as Chairman of Acorn's board of directors.
Norton is chairman of eChinaCash, a company he founded in 2003. Posner is CEO.
Personal life
Norton spent around five years in a Buddhist monastery in the San Francisco Bay area during the 1970s. In 1983, Norton married Eileen Harris, a black woman who grew up in Watts, California. They lived in the Los Angeles area where they had two children.
In the summer of 1990, they vacationed on Martha's Vineyard, Massachusetts. They enjoyed it enough to return the following year and look for a house there. They purchased Corbin House, an 1891 eight-bedroom Queen Anne house in Oak Bluffs, originally built for lock and hardware industrialist Philip Corbin. They also purchased a nearby house to be close at hand during the redesign and renovation of the main house. The renovation was completed in 1994. "My children are half-black, and we thought Oak Bluffs would give them an opportunity to summer around other kids like them," Norton said in a 2007 interview with Laura D. Roosevelt for Martha's Vineyard Magazine, alluding to Oak Bluffs' century-old reputation as a popular summer spot among upper-class black people.
In 2000, the couple divorced. Norton afterward lived much of the time in New York. In February 2001, a fire caused by faulty wiring destroyed the Martha's Vineyard home; Norton had it rebuilt almost exactly as it was before the fire. Meanwhile, he began a relationship with Gwen Adams, a New York financier originally from the Virgin Islands, who lived in the area. The couple spends ten weeks of each summer in the Corbin-Norton House. In May 2007, they married in a church in nearby Edgartown; the ceremony was performed by their neighbor, scholar and author Henry Louis Gates Jr.
Philanthropy
In 1989 Peter and Eileen Norton founded the Peter Norton Family Foundation, which gave financial support to visual and contemporary non-profit arts organizations, as well as human social services organizations. The foundation was dissolved as part of the divorce, and two successor foundations were created.
Norton serves on the boards of the California Institute of Technology, California Institute of the Arts, Crossroads School (Santa Monica, California), and the Museum of Modern Art in New York (since 1999). He is a trustee emeritus of Reed College.
In 2003, Norton became the chairman of the board of MoMA PS1 in Long Island City, New York. In 2004, he re-joined the Whitney Museum of American Art's board after leaving it in 1998. He also serves on the executive committee of the Guggenheim Museum’s International Directors’ Council, the museum's primary acquisition committee, and on the board of the Los Angeles County Museum of Art.
With his first wife, Norton accumulated one of the largest modern and contemporary art collections in the United States. Many of the pieces are on loan all over the world at any given time; many were on view at Symantec Corporation. The foundation and the Norton Family Office are located in Santa Monica. ARTnews magazine regularly lists Norton among the world's top 200 collectors.
In 1999, Norton purchased letters written to Joyce Maynard by reclusive author J. D. Salinger for $156,500. (Salinger had a year-long affair with Maynard in 1972 when she was 18.) Maynard said she was forced to auction the letters for financial reasons. Norton announced that his intention was to return the letters to Salinger.
In 1999 Norton donated $600,000 to the Signature Theatre Company (New York City) which renamed its home Off Broadway theatre at 555 West 42nd Street to "Signature Theatre Company at the Peter Norton Space." It maintained that name until the theatre moved to a new venue in 2012.
In March 2015, Norton organized a second major art donation project: he donated numerous pieces from his personal art collection to museums internationally. The Rose Art Museum received 41 artworks, including prints, sculptures, photography, and other mixed media.
In April 2016, Norton donated an additional 100+ pieces from his personal art collection to selected university art museums, namely, 75 pieces to the University of California, Riverside ARTSblock organization and 68 pieces to Northwestern University's Block Museum.
Books
Inside the IBM PC: Access to Advanced Features & Programming Techniques (1983)
The Peter Norton Programmer's Guide to the IBM PC (1985)
Visual Basic for Windows Version 3.0 (translation of the 3rd American edition), author: Steven Olzner/The Peter Norton Computing Group, Editora Campus
Peter Norton's Assembly Language Book for the IBM PC by Peter Norton, John Socha
Peter Norton's Intro to Computers 6/e by Peter Norton
Inside the IBM PC by Peter Norton
The Peter Norton Programmer's Guide to the IBM PC by Peter Norton
Peter Norton's Guide to UNIX by Peter Norton, Harley Hahn
Peter Norton's Introduction to Computers Fifth Edition, Computing Fundamentals, Student Edition by Peter Norton
Peter Norton's Guide to Visual Basic 6 by Peter Norton, Michael R. Groh
Peter Norton's DOS Guide by Peter Norton
Advanced Assembly Language, with Disk by Peter Norton
Peter Norton's New Inside the PC by Peter Norton, Scott Clark
Complete Guide to Networking by Peter Norton, David Kearns
Peter Norton's Complete Guide to DOS 6.22 by Peter Norton
Peter Norton's Guide to Windows Programming with MFC: With CDROM by Peter Norton
PC Problem Solver by Peter Norton, Robert Jourdain
Peter Norton's Windows 3.1 Pow by Peter Norton
Peter Norton's Guide to Access 2000 Programming (Peter Norton (Sams)) by Peter Norton, Virginia Andersen
Peter Norton's Complete Guide to Windows XP by Peter Norton, John Paul Mueller
Peter Norton's Upgrading And Repairing PCs by Peter Norton, Michael Desmond
Peter Norton's Introduction to Computers: Essential Concepts by Peter Norton
Peter Norton's Maximizing Windows NT Server 4 by Peter Norton
Peter Norton's Advanced DOS 6 by Peter Norton, Ruth Ashley, Judi N. Fernandez
Peter Norton's Network Security Fundamentals by Peter Norton, Mike Stockman
Peter Norton's Guide to Qanda 4 by Peter Norton, Dave Meyers
The Peter Norton's Introduction to Computers Windows NT 4.0 Tutorial with 3.5 IBM Disk by Peter Norton
Essential Concepts by Peter Norton
Peter Norton's Macintosh by Peter Norton
Word 2002: A Tutorial to Accompany Peter Norton's Introduction to Computers Student Edition with CD-ROM by Peter Norton
Peter Norton's Introduction to Computers MS-Works 4.0 for Windows 95 Tutorial with 3.5 IBM Disk by Peter Norton, Kim Bobzien
Peter Norton's Guide to Visual C++ [With CD (Audio)] by Peter Norton
Complete Guide to TCP/IP by Peter Norton, Doug Eckhart (Joint Author)
Peter Norton's Maximizing Windows 98 Administration (Sams)
References
1943 births
Living people
20th-century American businesspeople
20th-century American male writers
20th-century American journalists
20th-century American philanthropists
21st-century American businesspeople
21st-century American male writers
21st-century American journalists
21st-century American philanthropists
American art collectors
American chairpersons of corporations
American computer businesspeople
American columnists
American company founders
American computer programmers
American magazine writers
American technology writers
American textbook writers
Businesspeople from California
Businesspeople from Massachusetts
Businesspeople from New York City
Businesspeople from Seattle
California Institute of Technology people
California Institute of the Arts people
Founders of charities
Gen Digital people
Journalists from California
Journalists from Massachusetts
Journalists from New York City
Journalists from Washington (state)
People associated with the Los Angeles County Museum of Art
People associated with the Museum of Modern Art (New York City)
People associated with the Whitney Museum of American Art
People from Oak Bluffs, Massachusetts
Reed College alumni
Writers from Aberdeen, Washington
Writers from Greater Los Angeles
Writers from Seattle | Peter Norton | [
"Technology"
] | 3,073 | [
"Lists of people in STEM fields",
"Proprietary technology salespersons"
] |
5,656,649 | https://en.wikipedia.org/wiki/Spillage | In industrial production, spillage is the loss of production output due to production of a series of defective or unacceptable products which must be rejected. Spillage is an often costly event which occurs in manufacturing when a process degradation or failure occurs that is not immediately detected and corrected, and in which defective or reject product therefore continues to be produced for some extended period of time.
Spillage results in costs due to lost production volume, excessive scrap, delayed delivery of product, and wastage of human and capital equipment resources. Minimizing the occurrence and duration of manufacturing spillage requires that closed-loop control and the associated process monitoring and metrology functions be integrated into critical steps of the overall manufacturing process. The completeness of the process control, and the resolution and coverage of the metrology, determine the extent to which spillages can be prevented.
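As a rough illustration of the closed-loop idea, the hypothetical Python sketch below monitors a defect rate over fixed inspection windows and halts the line as soon as the rate exceeds a control limit, bounding how long a spillage can continue undetected. The limit, window size, and inspection routine are invented for the example:

```python
import random

DEFECT_LIMIT = 0.05   # hypothetical acceptable defect fraction per window
WINDOW = 50           # units inspected per monitoring window

def inspect(unit) -> bool:
    # Placeholder metrology step: returns True when the unit is defective.
    return random.random() < 0.02

def run_line(units) -> int:
    """Run the line, halting when a window's defect rate exceeds the limit."""
    defects = 0
    count = 0
    for unit in units:
        count += 1
        if inspect(unit):
            defects += 1
        if count % WINDOW == 0:
            rate = defects / WINDOW
            if rate > DEFECT_LIMIT:
                # Closed-loop response: stop before further spillage accrues.
                print(f"Halting after {count} units: defect rate {rate:.1%} exceeds limit")
                return count
            defects = 0
    return count

run_line(range(1000))
```

Without the per-window check, the same degradation would go unnoticed until final inspection, and every unit produced in the meantime would contribute to the spillage.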
Waste | Spillage | [
"Physics"
] | 172 | [
"Materials",
"Waste",
"Matter"
] |
5,657,037 | https://en.wikipedia.org/wiki/Semi-basement | In architecture, a semi-basement, lower ground, lower level, etc. is a floor of a building that is half below ground, rather than entirely such as a true basement or cellar.
Traditionally, semi-basements were designed into larger houses to accommodate staff. A semi-basement usually contained kitchens and domestic offices. The advantage over a basement is that a semi-basement can admit outside light, as it can have windows, albeit ones that are often too high to enjoy a view. Historically this was seen as an advantage, since the servants who traditionally inhabited such a floor would not have the opportunity to waste time looking out of the window.
The feature also has the aesthetic value of raising the ground floor, which contains the building's reception rooms, higher from the ground so that they can enjoy better views and be more free from the damp problems which always arose before the days of modern technology.
References
Domestic work
Rooms | Semi-basement | [
"Engineering"
] | 188 | [
"Rooms",
"Architecture"
] |
5,657,152 | https://en.wikipedia.org/wiki/Expasy | Expasy is an online bioinformatics resource operated by the SIB Swiss Institute of Bioinformatics. It is an extensible and integrative portal which provides access to over 160 databases and software tools and supports a range of life science and clinical research areas, from genomics, proteomics and structural biology, to evolution and phylogeny, systems biology and medical chemistry. The individual resources (databases, web-based and downloadable software tools) are hosted in a decentralized way by different groups of the SIB Swiss Institute of Bioinformatics and partner institutions.
Search engine
Queries of Expasy allow:
parallel searches of SIB databases through a single query
aggregated search results from the complete set of >160 resources accessible from the portal.
Expasy provides up-to-date information from the most recent release of each resource.
The terms used in Expasy are based on the EDAM comprehensive ontology.
History
Expasy was created in August 1993. Originally, it was called ExPASy (Expert Protein Analysis System) and acted as a proteomics server to analyze protein sequences, structures, and two-dimensional polyacrylamide gel electrophoresis (2-D PAGE). Among others, ExPASy hosted the protein sequence knowledge base, UniProtKB/Swiss-Prot, and its computer-annotated supplement, UniProtKB/TrEMBL.
ExPASy was the first life sciences website and among the first 150 websites in the world. It had been consulted 1 billion times since its installation on 1 August 1993.
In June 2011, it became the SIB ExPASy Bioinformatics Resources Portal: a diverse catalogue of bioinformatics resources developed by SIB Groups. The current version of Expasy was released in October 2020.
Notes and references
External links
Official website
Bioinformatics
Science and technology in Switzerland | Expasy | [
"Engineering",
"Biology"
] | 398 | [
"Bioinformatics",
"Biological engineering"
] |
5,657,235 | https://en.wikipedia.org/wiki/Nairi%20%28computer%29 | The first Nairi (, ) computer was developed and launched into production in 1964, at the Yerevan Research Institute of Mathematical Machines (Yerevan, Armenia), and were chiefly designed by Hrachya Ye. Hovsepyan. In 1965, a modified version called Nairi-M, and in 1967 versions called Nairi-S and Nairi-2, were developed. Nairi-3 and Nairi-3-1, which used integrated hybrid chips, were developed in 1970. These computers were used for a wide class of tasks in a variety of areas, including Mechanical Engineering and the Economics.
In 1971, the developers of the Nairi computer were awarded the State Prize of the USSR.
Nairi-1
The development of the machine began in 1962 and was completed in 1964. The chief designer was Hrachya Yesaevich Hovsepyan; the leading design engineer was Mikhail Artavazdovich Khachatryan.
The architectural solution used in this machine was patented in England, Japan, France and Italy.
Specification
The processor is 36-bit.
The clock frequency is 50 kHz.
ROM (called DZU, long-term memory, in the original documentation) was of a cassette type, each cassette holding 2048 words of 36 bits; it was used to store firmware (2048 72-bit cells and 12288 36-bit cells). Part of the ROM was delivered "empty", allowing users to program their most frequently used routines into it and so avoid entering programs from the console or from punched tape.
The amount of RAM is 1024 words (8 cassettes of 128 cells), plus 5 registers.
Operation speed: 2–3 thousand fixed-point additions per second, 100 fixed-point multiplications per second, and 100 floating-point operations per second.
From 1964 the machine was produced at two factories in Armenia, as well as at the Kazan computer plant; from 1964 to 1970 about 500 machines were produced in total. In the spring of 1965 the computer was presented at a trade fair in Leipzig, Germany.
There were a number of modifications of the machine:
"Nairi-M" (1965) - the photoreader FS-1501 and the tape puncher PL-80 were introduced into the periphery.
"Nairi-K" with increased RAM up to 4096 words.
"Nairi-S" (1967), an electrified typewriter Consul-254 was used as a terminal.
Nairi-2
Created in 1966, it was essentially a modification of the Nairi-1 machine. The amount of RAM, built on ferrite cores, was increased to 2048 36-bit words, and more efficient input-output devices, those included in the Nairi-K package, were used. A modification, "Nairi-2E", specialized for the automation of experiments, was also produced. Its kit included an NML-67 magnetic tape drive and an interface unit for measuring equipment.
Nairi-3
Nairi-3 was the first Soviet third-generation computer.
Of all the models in the Nairi series, the microprogram control principles of the Nairi-1 were improved and extended the most in the Nairi-3. Advances in computer technology since the initial production year of 1964 made it possible to store upwards of 128 thousand micro-instructions at one time, while improvements in component manufacturing reduced access times. This enabled a multilingual computing structure and time-sharing modes with simultaneous access from up to 64 terminals and 64 virtual Nairi-2 machines, which together could perform the functions of one computer.
The processing power of the Nairi-3 was considerably higher than that of competing systems because of several storage-system improvements. One of these was the use of a long-term form of read-only memory, with a sampling cycle of 8 μs, to store the computer's firmware, whereas on other, similar systems the firmware, as well as external programs, was stored on external storage devices such as magnetic drums, a predecessor of the modern hard drive.
A feature of the proposed architecture of the Nairi-3 was the use of a permanent, non-volatile cassette-tape memory; the main functional block of the ROM device was the storage, which consisted of YAN-9 accumulator cells.
For the first time, microprogram emulation of a computer of a different type, with a different command system, was implemented: on the Nairi-3 it was possible to execute programs written for the Minsk-22 and Razdan-3.
Nairi-4
A series of computers for special applications. Nairi 4 ARM / Nairi 4 and Nairi 41 were developed in 1974–1981. Their development was started by Hrachya Ye. Hovsepyan, and the final chief designer was German Artashesovich Oganyan. The system was software-compatible with the PDP-11 and the SM series of computers.
In 1980–1981, Nairi 4V and Nairi 4V/C were also developed; the chief designers were V. Karapetyan and A. Sargsyan.
External links
YerSRIMM The pioneer of Armenian computer science
History of computing in the Soviet Union
References
Computer-related introductions in 1964
Communications in Armenia
Minicomputers
36-bit computers
Soviet computer systems
Yerevan Computer Research and Development Institute | Nairi (computer) | [
"Technology"
] | 1,131 | [
"Computer systems",
"Soviet computer systems"
] |
5,657,385 | https://en.wikipedia.org/wiki/Amylolytic%20process | Amylolytic process or amylolysis is the conversion of starch into sugar by the action of acids or enzymes such as amylase.
Starch accumulates inside the leaves of plants during periods of light, when it can be produced by photosynthesis; this production stops in the dark, since without illumination there is not enough energy to drive the reaction forward. The conversion of starch into sugar is carried out by the enzyme amylase.
Different pathways of amylase & location of amylase activity
The process by which amylase breaks down starch for sugar consumption is not the same in all organisms that use amylase to break down stored starch; different amylase pathways are involved in starch degradation. Degradation of starch into sugar by amylase was most commonly thought to take place in the chloroplast, but this has been shown not to be the whole picture. One example is the spinach plant, in which the chloroplast contains both alpha- and beta-amylase (different forms of amylase that break down starch but differ in their substrate specificity). In spinach leaves, the extrachloroplastic region shows the highest level of amylolytic starch degradation. The difference between chloroplastic and extrachloroplastic starch degradation lies in the amylase pathway preferred, either beta- or alpha-amylase: in spinach leaves alpha-amylase is preferred, but in plants such as wheat, barley and peas, beta-amylase is preferred.
Usage
The amylolytic process is used in the brewing of alcohol from grains. Since grains contain starches but little to no simple sugars, the sugar needed to produce alcohol is derived from starch via the amylolytic process. In beer brewing, this is done through malting. In sake brewing, the mold Aspergillus oryzae provides amylolysis, and in Tapai, Saccharomyces cerevisiae. The amylolytic process can also be used to maximise yields in production; for instance, when amylolytic enzymes are added to a given substrate, they work to give maximum glucose formation. The amylolytic process is also useful in breaking down molecules and is closely associated with hydrolysis.
See also
Brewing methods
References
Carbohydrate chemistry
Biochemistry
Cooking techniques
Rice wine | Amylolytic process | [
"Chemistry",
"Biology"
] | 518 | [
"Biotechnology stubs",
"Biochemistry stubs",
"Carbohydrate chemistry",
"nan",
"Chemical synthesis",
"Biochemistry",
"Glycobiology"
] |
5,657,545 | https://en.wikipedia.org/wiki/Advanced%20Message%20Queuing%20Protocol | The Advanced Message Queuing Protocol (AMQP) is an open standard application layer protocol for message-oriented middleware. The defining features of AMQP are message orientation, queuing, routing (including point-to-point and publish-and-subscribe), reliability and security.
AMQP mandates the behavior of the messaging provider and client to the extent that implementations from different vendors are interoperable, in the same way as SMTP, HTTP, FTP, etc. have created interoperable systems. Previous standardizations of middleware have happened at the API level (e.g. JMS) and were focused on standardizing programmer interaction with different middleware implementations, rather than on providing interoperability between multiple implementations. Unlike JMS, which defines an API and a set of behaviors that a messaging implementation must provide, AMQP is a wire-level protocol. A wire-level protocol is a description of the format of the data that is sent across the network as a stream of bytes. Consequently, any tool that can create and interpret messages that conform to this data format can interoperate with any other compliant tool irrespective of implementation language.
Overview
AMQP is a binary application layer protocol, designed to efficiently support a wide variety of messaging applications and communication patterns. It provides flow controlled, message-oriented communication with message-delivery guarantees such as at-most-once (where each message is delivered once or never), at-least-once (where each message is certain to be delivered, but may do so multiple times) and exactly-once (where the message will always certainly arrive and do so only once), and authentication and/or encryption based on SASL and/or TLS. It assumes an underlying reliable transport layer protocol such as Transmission Control Protocol (TCP).
The AMQP specification is defined in several layers: (i) a type system, (ii) a symmetric, asynchronous protocol for the transfer of messages from one process to another, (iii) a standard, extensible message format and (iv) a set of standardised but extensible 'messaging capabilities.'
History
AMQP was originated in 2003 by John O'Hara at JPMorgan Chase in London. AMQP was conceived as a co-operative open effort. The initial design was by JPMorgan Chase from mid-2004 to mid-2006 and it contracted iMatix Corporation to develop a C broker and protocol documentation. In 2005 JPMorgan Chase approached other firms to form a working group that included Cisco Systems, IONA Technologies, iMatix, Red Hat, and Transaction Workflow Innovation Standards Team (TWIST). In the same year JPMorgan Chase partnered with Red Hat to create Apache Qpid, initially in Java and soon after C++. Independently, RabbitMQ was developed in Erlang by Rabbit Technologies, followed later by the Microsoft and StormMQ implementations.
The working group grew to 23 companies including Bank of America, Barclays, Cisco Systems, Credit Suisse, Deutsche Börse, Goldman Sachs, HCL Technologies Ltd, Progress Software, IIT Software, INETCO Systems Limited, Informatica (including 29 West), JPMorgan Chase, Microsoft Corporation, my-Channels, Novell, Red Hat, Software AG, Solace Systems, StormMQ, Tervela Inc., TWIST Process Innovations ltd, VMware (which acquired Rabbit Technologies) and WSO2.
In 2008, Pieter Hintjens, CEO and chief software designer of iMatix, wrote an article called "What is wrong with AMQP (and how to fix it)" and distributed it to the working group to alert of imminent failure, identify problems seen by iMatix and propose ways to fix the AMQP specification. By then, iMatix had already started work on ZeroMQ. In 2010, Hintjens announced that iMatix would leave the AMQP workgroup and did not plan to support AMQP/1.0 in favor of the significantly simpler and faster ZeroMQ.
In August 2011, the AMQP working group announced its reorganization into an OASIS member section.
AMQP 1.0 was released by the AMQP working group on 30 October 2011, at a conference in New York. At the event Microsoft, Red Hat, VMware, Apache, INETCO and IIT Software demonstrated software running the protocol in an interoperability demonstration. The next day, on 1 November 2011, the formation of an OASIS Technical Committee was announced to advance this contributed AMQP version 1.0 through the international open standards process. The first draft from OASIS was released in February 2012, the changes as compared to that published by the Working Group being restricted to edits for improved clarity (no functional changes). The second draft was released for public review on 20 June (again with no functional changes), and AMQP was approved as an OASIS standard on 31 October 2012.
OASIS AMQP was approved for release as an ISO and IEC International Standard in April 2014. AMQP 1.0 was balloted through the Joint Technical Committee on Information Technology (JTC1) of the International Standards Organization (ISO) and the International Electrotechnical Commission (IEC). The approved OASIS AMQP submission has been given the designation, ISO/IEC 19464.
Previous versions of AMQP were 0-8, published in June 2006, 0-9, published in December 2006, 0-10 published in February 2008 and 0-9-1, published in November 2008. These earlier releases are significantly different from the 1.0 specification.
Whilst AMQP originated in the financial services industry, it has general applicability to a broad range of middleware problems.
Description of AMQP 1.0
Type system
AMQP defines a self-describing encoding scheme allowing interoperable representation of a wide range of commonly used types. It also allows typed data to be annotated with additional meaning, for example a particular string value might be annotated so that it could be understood as a URL. Likewise a map value containing key-value pairs for 'name', 'address' etc., might be annotated as being a representation of a 'customer' type.
The type-system is used to define a message format allowing standard and extended meta-data to be expressed and understood by processing entities. It is also used to define the communication primitives through which messages are exchanged between such entities, i.e. the AMQP frame bodies.
Performatives and the link protocol
The basic unit of data in AMQP is a frame. There are nine AMQP frame bodies defined that are used to initiate, control and tear down the transfer of messages between two peers. These are:
open (the connection)
begin (the session)
attach (the link)
transfer
flow
disposition
detach (the link)
end (the session)
close (the connection)
The link protocol is at the heart of AMQP.
An attach frame body is sent to initiate a new link; a detach to tear down a link. Links may be established in order to receive or send messages.
Messages are sent over an established link using the transfer frame. Messages on a link flow in only one direction.
Transfers are subject to a credit-based flow control scheme, managed using flow frames. This allows a process to protect itself from being overwhelmed by too large a volume of messages or more simply to allow a subscribing link to pull messages as and when desired.
Each transferred message must eventually be settled. Settlement ensures that the sender and receiver agree on the state of the transfer, providing reliability guarantees. Changes in state and settlement for a transfer (or set of transfers) are communicated between the peers using the disposition frame. Various reliability guarantees can be enforced this way: at-most-once, at-least-once and exactly-once.
Multiple links, in both directions, can be grouped together in a session. A session is a bidirectional, sequential conversation between two peers that is initiated with a begin frame and terminated with an end frame. A connection between two peers can have multiple sessions multiplexed over it, each logically independent. Connections are initiated with an open frame in which the sending peer's capabilities are expressed, and terminated with a close frame.
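As a concrete illustration, the sketch below uses the Apache Qpid Proton Python client (one of the AMQP 1.0 implementations listed under Implementations below) to send a single message; the broker URL and node address are placeholders. The comments indicate, approximately, which frame bodies each step corresponds to:

```python
from proton import Message
from proton.handlers import MessagingHandler
from proton.reactor import Container

class HelloSender(MessagingHandler):
    def __init__(self, url, address):
        super().__init__()
        self.url = url          # placeholder broker URL
        self.address = address  # placeholder node address

    def on_start(self, event):
        # Opening the connection and session corresponds to the open and begin
        # frames; creating a sender corresponds to an attach frame.
        conn = event.container.connect(self.url)
        event.container.create_sender(conn, self.address)

    def on_sendable(self, event):
        # Called once the peer has granted credit (flow frames).
        event.sender.send(Message(body="hello"))  # transfer frame
        event.sender.close()                      # detach frame

    def on_accepted(self, event):
        # The peer's disposition frame reports the transfer as accepted and settled.
        event.connection.close()                  # end and close frames

Container(HelloSender("amqp://localhost:5672", "examples")).run()
```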
Message format
AMQP defines the bare message as that part of the message that is created by the sending application. This is considered immutable as the message is transferred between one or more processes.
Ensuring the message as sent by the application is immutable allows for end-to-end message signing and/or encryption and ensures that any integrity checks (e.g. hashes or digests) remain valid. The message can be annotated by intermediaries during transit, but any such annotations are kept distinct from the immutable bare message. Annotations may be added before or after the bare message.
The header is a standard set of delivery-related annotations that can be requested or indicated for a message and includes time to live, durability, priority.
The bare message itself is structured as an optional list of standard properties (message id, user id, creation time, reply to, subject, correlation id, group id etc.), an optional list of
application-specific properties (i.e., extended properties) and a body, which AMQP refers to as application data.
Properties are specified in the AMQP type system, as are annotations. The application data can be of any form, and in any encoding the application chooses. One option is to use the AMQP type system to send structured, self-describing data.
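For illustration, the Qpid Proton Message class used in the earlier sketch exposes the header fields, standard properties, application-specific properties, and body described above; the values here are invented:

```python
from proton import Message

msg = Message()
msg.durable = True                 # header: durability
msg.ttl = 60.0                     # header: time to live, in seconds
msg.id = "order-12345"             # standard property: message id
msg.subject = "purchase-order"     # standard property: subject
msg.reply_to = "responses"         # standard property: reply to
msg.correlation_id = "req-77"      # standard property: correlation id
# Application-specific (extended) properties:
msg.properties = {"region": "EU", "tier": 2}
# Application data (the body), encoded here with the AMQP type system:
msg.body = {"sku": "A-100", "qty": 3}
```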
Messaging capabilities
The link protocol transfers messages between two nodes but assumes very little as to what those nodes are or how they are implemented.
A key category is those nodes used as a rendezvous point between senders and receivers of messages (e.g. queues or topics). The AMQP specification calls such nodes distribution nodes and codifies some common behaviors.
This includes:
some standard outcomes for transfers, through which receivers of messages can for example accept or reject messages
a mechanism for indicating or requesting one of the two basic distribution patterns, competing- and non-competing- consumers, through the distribution modes move and copy respectively
the ability to create nodes on-demand, e.g. for temporary response queues
the ability to refine the set of message of interest to a receiver through filters
Though AMQP can be used in simple peer-to-peer systems, defining this framework for messaging capabilities additionally enables interoperability with messaging intermediaries (brokers, bridges etc.)
in larger, richer messaging networks. The framework specified covers basic behaviors but allows for extensions to evolve that can be further codified and standardised.
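A receiving peer can, for example, apply the standard outcomes itself. The sketch below, again using the Qpid Proton Python client with a placeholder URL and address, disables automatic acceptance and accepts or rejects each transfer explicitly:

```python
from proton.handlers import MessagingHandler
from proton.reactor import Container

class SelectiveReceiver(MessagingHandler):
    def __init__(self, url, address):
        # auto_accept=False so the handler decides the outcome of each transfer.
        super().__init__(auto_accept=False)
        self.url = url          # placeholder broker URL
        self.address = address  # placeholder node address

    def on_start(self, event):
        conn = event.container.connect(self.url)
        event.container.create_receiver(conn, self.address)

    def on_message(self, event):
        # Apply a standard outcome to the transfer: accept or reject.
        if isinstance(event.message.body, str):
            self.accept(event.delivery)
        else:
            self.reject(event.delivery)

Container(SelectiveReceiver("amqp://localhost:5672", "examples")).run()
```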
Implementations
AMQP 1.0 broker implementations
Apache Qpid, an open-source project at the Apache Foundation
Apache ActiveMQ, an open-source project at the Apache Foundation
Azure Event Hubs
Azure Service Bus
IBM MQ
Solace PubSub+, a multi-protocol broker in hardware, software, and cloud
RabbitMQ, an open-source project sponsored by VMware, supports AMQP 1.0 with release 4.0
Pre-1.0 AMQP broker implementations
JORAM, a Java open-source implementation from the OW2 Consortium.
Apache Qpid maintains support for multiple AMQP versions
Specification
AMQP protocol version 1.0 is the current specification version. It focuses on core features which are necessary for interoperability at Internet scale. It contains less explicit routing than previous versions because core functionality is the first to be rigorously standardized. AMQP 1.0 interoperability has been more extensively tested with more implementors than prior versions.
The AMQP website contains the OASIS specification for version 1.0.
Earlier versions of AMQP, published prior to the release of 1.0 (see History above) and significantly different from it, include:
AMQP 0-9-1, which has clients available "for many popular programming languages and platforms"
AMQP 0-10
Comparable specifications
These open protocol specifications cover the same or a similar space as AMQP:
Streaming Text Oriented Messaging Protocol (STOMP), a text-based protocol developed at Codehaus; uses the JMS-like semantics of 'destination'.
Extensible Messaging and Presence Protocol (XMPP).
MQTT, a lightweight publish-subscribe protocol.
OpenWire as used by ActiveMQ.
Java Message Service (JMS), is often compared to AMQP. However, JMS is an API specification (part of the Java EE specification) that defines how message producers and consumers are implemented. JMS does not guarantee interoperability between implementations, and the JMS-compliant messaging system in use may need to be deployed on both client and server. On the other hand, AMQP is a wire-level protocol specification. In theory AMQP provides interoperability as different AMQP-compliant software can be deployed on the client and server sides.
See also
Peer-to-peer
Message queue
Message queuing service
Data Distribution Service
IBM MQ
References
External links
OASIS AMQP technical committee
High-level Overview of AMQP and the AMQP Model (version 0-9-1)
OMG Analysis of AMQP and comparison with DDS-RTPS
Google Tech Talk, with video and slides, about RabbitMQ
Presentation of AMQP and RestMS messaging at FOSDEM 2009
List of AMQP clients
Application layer protocols
Inter-process communication
Message-oriented middleware
Middleware
Open standards | Advanced Message Queuing Protocol | [
"Technology",
"Engineering"
] | 2,831 | [
"Software engineering",
"Middleware",
"IT infrastructure"
] |
5,657,659 | https://en.wikipedia.org/wiki/Egyptian%20faience | Egyptian faience is a sintered-quartz ceramic material from Ancient Egypt. The sintering process "covered [the material] with a true vitreous coating" as the quartz underwent vitrification, creating a bright lustre of various colours "usually in a transparent blue or green isotropic glass". Its name in the Ancient Egyptian language was , and modern archeological terms for it include sintered quartz, glazed frit, and glazed composition. is distinct from the crystalline pigment Egyptian blue, for which it has sometimes incorrectly been used as a synonym.
It is not faience in the usual sense of tin-glazed pottery, and is different from the enormous range of clay-based Ancient Egyptian pottery, from which utilitarian vessels were made. It is similar to later Islamic stonepaste (or "fritware") from the Middle East, although that generally includes more clay.
Egyptian faience is considerably more porous than glass proper. It can be cast in molds to create small vessels, jewelry and decorative objects. Although it contains the major constituents of glass (silica, lime) and no clay until late periods, Egyptian faience is frequently discussed in surveys of ancient pottery, as in stylistic and art-historical terms, objects made of it are closer to pottery styles than ancient Egyptian glass.
Egyptian faience was very widely used for small objects, from beads to small statues, and is found in both elite and popular contexts. It was the most common material for scarabs and other forms of amulet and ushabti figures, and it was used in most forms of ancient Egyptian jewellery, as the glaze made it smooth against the skin. Larger applications included dishware, such as cups and bowls, and wall tiles, which were mostly used for temples. The well-known blue hippopotamus figurines, placed in the tombs of officials, can be up to long, approaching the maximum practical size for Egyptian faience, though the Victoria and Albert Museum in London has a sceptre, dated 1427–1400 BC.
Scope of the term
It is called "Egyptian faience" to distinguish it from faience, the tin-glazed pottery whose name came from Faenza in northern Italy, a center of maiolica (one type of faience) production in the late Middle Ages. Egyptian faience was both exported widely in the ancient world and made locally in many places, and is found in Mesopotamia, around the Mediterranean and in northern Europe as far away as Scotland. The term is used for the material wherever it was made and modern scientific analyses are often the only way of establishing the provenance of simple objects such as the very common beads.
The term is therefore unsatisfactory in several respects, although clear in an Ancient Egyptian context, and is increasingly rejected in museum and archaeological usage. The British Museum now calls this material "glazed composition", with the following note in their online collection database: The term is used for objects with a body made of finely powdered quartz grains fused together with small amounts of alkali and/or lime through partial heating. The bodies are usually colourless but natural impurities give them a brown or greyish tint. Colourants can also be added to give it an artificial colour. It can be modelled by hand, thrown or moulded, and hardens with firing. This material is used in the context of Islamic ceramics where it is described as stonepaste (or fritware). Glazed composition is related to glass, but glass is formed by completely fusing the ingredients in a liquid melted at high temperature. This material is also popularly called faience in the contexts of Ancient Egypt and Ancient Near East. However, this is a misnomer as these objects have no relationship to the glazed pottery vessels made in Faenza, from which the faience term derives. Other authors use the terms sintered quartz, glazed frit, frit, composition, Egyptian Blue, paste or (in the 19th century) even porcelain, although the last two terms are very inappropriate as they also describe imitation gems and a type of ceramic. Frit is technically a flux.
Glazes
From the inception of faience in the archaeological record of Ancient Egypt, the elected colors of the glazes varied within an array of blue-green hues. Glazed in these colours, faience was perceived as substitute for blue-green materials such as turquoise, found in the Sinai Peninsula, and lapis lazuli from Afghanistan. According to the archaeologist David Frederick Grose, the quest to imitate precious stones "explains why most all early glasses are opaque and brilliantly colored" and that the deepest blue color imitating lapis lazuli was likely the most sought-after. As early as the Predynastic graves at Naqada, Badar, el-Amrah, Matmar, Harageh, Avadiyedh and El-Gerzeh, glazed steatite and faience beads are found associated with these semi-precious stones. The association of faience with turquoise and lapis lazuli becomes even more conspicuous in Quennou's funerary papyrus, giving his title as the director of overseer of faience-making, using the word which strictly means lapis lazuli, which by the New Kingdom had also come to refer to the 'substitute', faience. The symbolism embedded in blue glazing could recall both the Nile, the waters of heaven and the home of the gods, whereas green could possibly evoke images of regeneration, rebirth and vegetation.
Relationship with Egyptian copper industry
The discovery of faience glazing has tentatively been associated with the copper industry: bronze scale and corrosion products of leaded copper objects are found in the manufacture of faience pigments. However, although the likelihood of glazed quartz pebbles developing accidentally in traces in copper smelting furnaces from the copper and wood ash is high, the regions in which these processes originate do not coincide.
Relationship with Egyptian glass industry
Although it appears that no glass was intentionally produced in Egypt before the Eighteenth Dynasty (as the establishment of glass manufacture is generally attributed to the reign of Thutmose III), it is likely that faience, frit and glass were all made in close proximity or in the same workshop complex, since developments in one industry are reflected in others. Such close relationship is reflected in the prominent similarity of the formulations of faience glaze and contemporary glass compositions. Despite the differences in the pyrotechnology of glass and faience, faience being worked cold, archaeological evidence suggests that New Kingdom glass and faience production was undertaken in the same workshops.
Production
Typical composition and access to raw materials
Faience has been defined as the first high technology ceramic, to emphasize its status as an artificial medium, rendering it effectively a precious stone. Egyptian faience is a non-clay based ceramic composed of crushed quartz or sand, with small amounts of calcite lime and a mixture of alkalis, displaying surface vitrification due to the soda lime silica glaze often containing copper pigments to create a bright blue-green luster. While in most instances domestic ores seem to have provided the bulk of the mineral pigments, evidence suggests that during periods of prosperity, raw materials not available locally, such as lead and copper, were imported. Plant ash, from "halophyte" (salt-tolerant) plants typical of dry and sea areas, was the major source of alkali until the Ptolemaic Period, when natron-based alkalis almost completely replaced the previous source. Although the chemical composition of faience materials varies over time and according to the status of the workshop, also as a cause of change of accessibility of raw materials, the material constitution of the glaze is at all times consistent with the generally accepted version of faience glazing.
Faience working technology
Typical faience mixture is thixotropic, that is thick at first and then soft and flowing as it begins to be formed.
This property, together with the angularity of silica particles, accounts for the gritty slumps formed when the material is wetted, rendering faience a difficult material to hold a shape. If pressed too vigorously, this material will resist flow until it yields and cracks, due to its limited plastic deformation and low yield strength.
Body binding technology
A number of possible binding agents, among them gum arabic, clay, lime, egg white and resin, have been suggested to help in the binding process. Although traces of clay have been found in most Pharaonic faience, reconstruction experiments showed that clay, organic gums or lime, while successfully improving the wet working performance, failed to improve the fired strength of the faience, or that the gum was too sticky to allow removal of objects from their molds. The use of alkalis as binders, in the form of natron or plant ash, produced suitable results in experiments. Pulverized glass or sintered material of similar composition could also enhance the fired strength of faience bodies: the compositions of such glasses are in fact comparable to the published compositions of New Kingdom glass.
Body working technology
Three methods have been hypothesized to shape the body of faience objects: modeling, moulding and abrasion, the last being used in conjunction with the first two. Modeling, scraping and grinding are the techniques most widely used in earlier times, as represented in the material qualities of Predynastic and Protodynastic faience objects. Predynastic bead manufacture is essentially a cold technology, more akin to stone working than glass: a general form of faience is modeled, possibly free formed by hand, then holes are drilled to create beads.
In the Middle Kingdom, the techniques employed are molding and forming on a core, sometimes in conjunction with intermediate layers between the glaze and the body. Marbleized faience, resulting from the working of different colored faience bodies together, so as to produce a uniform adherent body, also appears in this period. Towards the end of the Middle Kingdom, incising, inlaying and resisting techniques appear: these were bound to become progressively popular towards the New Kingdom. In the New Kingdom, beads, amulets and finger rings are produced by a combination of modeling and molding techniques. In this period, sculptural detail is created using inlays of different colored faience or by scraping the body to reveal the white intermediate layer in relief. Moulding was first applied to faience manufacture in the Middle Kingdom by forming a model of an object, or employing a finished faience piece, impressing it in wet clay, and later by firing the clay to create a durable mold. The faience paste could then be pressed into the mold, and following drying, be re-worked through surface abrasion before firing. Moulds could facilitate mass production of faience objects such as amulets, rings and inlays, as evidenced by the several thousand small open-face earthenware clay molds excavated at Tell el Amarna. The level of standardisation that use of moulds produced varied, with a compositional and morphological study of faience ushabtis suggesting that mass production is an oversimplification of a complex process that may more accurately be described as batch processing.
Wheel throwing, possibly occurring from the New Kingdom onwards, is certainly established by the Greco-Roman period, when large amounts of clay seem to have been added to the faience body. Because of the limited plasticity of faience, rendering throwing extremely difficult, a progressive increase of clay in the faience bodies culminating in the quartz, clay and glass frit bodies of Islamic times, is observed in the archaeological record.
Ptolemaic and Roman faience tends to be typologically and technologically distinct from the earlier material: it is characterized by the widespread use of moulding and high relief on vessels. A very unusual and finely made group of figures of deities and falcons in the Metropolitan Museum of Art in New York, apparently representing hieroglyphs that are elements from a royal inscription, perhaps from a wooden shrine, is decorated in a form of champlevé (typically a technique for enamel on metal). Depressions in the faience body were filled with coloured "vitreous pastes" and refired, followed by polishing.
Polychrome pieces were usually made by inlaying different colours of paste.
Glazing technology
The technology of glazing a siliceous body with a soda lime silica glaze employs various methods discovered over time: namely application, efflorescence and cementation glazing.
Application glazing
In the application method, formerly assumed to be the only one used for faience glazing; silica, lime and alkalis are ground in the raw state to a small particle size, thus mixed in water to form a slurry which is then applied to the quartz core. Partial fritting of the slurry favors the first stages of vitrification, which in turn lowers the final firing temperature. The slurry can be then applied to the body, through brushing or dipping, to create a fine, powdery coating. Upon firing, the water from the melting glaze partially diffuses in the sand body, sintering the quartz particles and thus creating some solid bridges in the body.
Efflorescence glazing
In the self-glazing process of efflorescence, the glazing materials, in the form of water-soluble alkali salts, are mixed with the raw crushed quartz of the core of the object. As the water in the body evaporates, the salts migrate to the surface of the object to recrystallize, creating a thin surface, which glazes upon firing.
Cementation glazing
Cementation glazing, a technique discovered in the Middle Kingdom, is also a self-glazing technique. The possibility of the existence of cementation glazing, also known as 'Qom technique', followed the observation of this method being used in the city of Qom in Iran in the 1960s. In this method the artifact, while buried in a glazing powder with a high flux content, is heated inside a vessel, causing the fusion of the object with the cement. During firing, the flux migrates to the quartz and combines with it to form a glassy coating.
Alternative techniques
A vapour glaze reaction similar to salt glazing, as an alternative glazing process, has been suggested. In this process, the vaporization or dissociation of salts leads to vapour transport through the enveloping powder to the quartz body where a glaze is formed.
Recognition of glazing techniques
Although glaze compositions vary regionally and chronologically, depending on the formation of the body and the glazing process employed, objects produced with different glazing techniques do not exhibit immediate diagnostic chemical variations in their compositions. The recognition of the various glazing techniques, through microscopic observations of the degree of sintering and the vitreous phase of quartz body, is also ambiguous. For instance, objects with applied glazes and those which may have been glazed by efflorescence have overlapping characteristic features. The following proposed criteria are subject to variation caused by increases in flux concentration, firing temperatures and time at peak temperatures.
Recognition of application glazing- Macroscopically, applied glazes vary in thickness through the body, displaying thicker glazes on bases. The traces of kiln supports, as well as the characteristic tendency to run and drip leading to pooling, may indicate the orientation of the object during firing. In high magnification observations, the interface boundary of body and glaze appears well defined. The absence of interstitial glass in the core is characteristic of application glazing: however, the possibility of adding glazing mixture to the quartz sand body, as well as the use of pre-melted glazes in the later periods, can predictably increase the degree of sintering of the core.
Recognition of cementation- Objects glazed through cementation display a thin even glaze all over the body, with no drying or firing marks, and portray a fairly friable and soft body. Microscopically, the concentration of copper characteristically decreases from the surface: the interaction layer is thin and well defined and the interstitial glass is absent with exception to the vicinity of the boundary layer.
Recognition of efflorescence glazing- Pieces glazed by efflorescence may show traces of stand marks: the glaze appears thick and prone to cracking, thinning toward the edge of the piece and in concave areas. In high magnification the interstitial glass is extensive; the unreacted salts which have not reached the surface fuse of the body accumulate in the core, creating bridges between the quartz particles.
Typologies
An extensive literature has accumulated in an attempt to explain the processing of Egyptian faience and to develop an adequate typology that encompasses both the technological choices and the chemical variations of faience bodies. Body color, density and luster provided the basis of the first typology developed for faience: seven variants were proposed by Lucas and Harris and still permit the archaeologist to distinguish faience objects during field sorting.
Classification of body variants
Most of the seven variants introduced by Lucas fail to recognize the glazing technology utilized or to suggest the stylistic and technological choices embedded in the manufacture of a faience object. However, variant A describes a technologically unique product and as such is still applicable: it has a finely ground underglaze consisting of quartz particles in a glass matrix, often revealed by incisions or depressions cut into the overlying glaze. Glassy faience, variant E, displays no outer layer distinct from the interior; thus it has been suggested that the term 'faience' is a misnomer, and the alternative name 'imperfect glass' has been advised. Regarding variant F specimens, Lucas suggests the use of lead glazes; however, it appears that lead glazing was never practiced in Ancient Egypt.
Workshop evidence
The excavations led by Petrie at Tell-Amarna and Naucratis have reported finding workshop evidence.
Nicholson explains, however, that while a square furnace-like structure at Amarna may be related to faience production, Petrie did not encounter any actual faience kilns at the site. Lucas documented a large number of molds at the palace area of Amenhotep III at Qantir, from 19th to 20th Dynasties, and at the palace area of Naucratis, also described in different sources as a scarab maker's and faience factory. However, seeing there is a lack of carefully documented archaeological evidence as to the nature of faience factory sites, direct information about the glazing process does not exist.
Although recent excavations at the archaeological sites of Abydos and Amarna have supplemented our knowledge of the ancient production of faience gained from the earlier excavated sites of Lisht, Memphis and Naukratis, the differentiation of glass furnaces from faience kilns still remains problematic. Replication experiments, using modern kilns and replica faience pastes, indicate that faience was fired in the range of 800–1000 °C.
Current use
A number of ceramists are experimenting with Egyptian faience, though some of the compositions bear only a passing resemblance to the original Egyptian formulae. There has also been recent interest in the use of Egyptian faience in 3D printing technology. It may be possible to fire faience-like materials in a microwave.
Gallery
Notes
Further reading
Binns. 1932. An experiment in Egyptian blue glaze. Journal of the American Ceramic Society.
"BM": "glazed composition", British Museum term note
Boyce, A. 1989. Notes on the manufacture and use of faience rings at Amarna. In: Kemp, B.J. Amarna Reports V. London: Egypt Exploration Society. 160–168.
Brill, R.H. 1999. Chemical Analyses of Early Glasses: Volume 1 (tables) and Volume 2 (catalogue). Corning, NY: Corning Museum of Glass.
Clark, Robin J.H., and Peter J. Gibbs. 1997. "Non-Destructive In Situ Study of Ancient Egyptian Faience by Raman Microscopy." Journal of Raman Spectroscopy 28 (2–3): 99–103.
Dayton, J.E. Minerals, Metals, Glazing and Man. Edinburgh: Harrap Publishers. 1978.
Friedman, F.D. (ed.). 1998. Gifts of the Nile-ancient Egyptian faience. London: Thames and Hudson.
Henderson, Julian, Robert Morkot, E. J. Peltenburg, Stephen Quirke, Margaret Serpico, John Tait, and Raymond White. 2000. Ancient Egyptian Materials and Technology. Cambridge University Press.
Lucas, A. and Harris, J. R., 1962, Ancient Egyptian materials and industries. London: Edward Arnold.
Kaczmarczyk, A. and Hedges. R.E.M. 1983. Ancient Egyptian Faience. Warminster: Aris and Phillips.
Kiefer, C. and Allibert, A. 2007. Pharaonic Blue Ceramics: the Process of Self-glazing. Archaeology 24, 107–117.
Kiefer, C. 1968. Les céramiques bleues, pharaoniques et leur procédé révolutionnaire d'émaillage. Industrie Céramique, May, 395–402.
Kühne, K. 1974 "Frühgeschichtliche Werkstoffe auf Silikatischer Basis", Das Altertum 20, 67-80
Nicholson, P.T. 1993. Egyptian faience and glass. Aylesbury: Shire Egyptology.
Nicholson, P.T. and Peltenburg, E. 2000. Egyptian faience. In: Nicholson, P.T. and Shaw, I. Ancient Egyptian Materials and Technology. Cambridge: Cambridge University Press, 177–194.
Noble, J. V. 1969. The technique of Egyptian faience. American Journal of Archaeology 73, 435–439.
Rehren, Th. 2008. "A review of factors affecting the composition of early Egyptian glasses and faience: alkali and alkali earth oxides." Journal of Archaeological Science 35 (5): 1345–54.
Shortland, A.J. and Tite, M.S. 2005. A technological study of Ptolemaic – early Roman faience from Memphis, Egypt. Archaeometry 47/1, 31–46.
Stone, J. F. S. and Thomas, L. C. 1956. The Use and Distribution of Faience in the Ancient East and Prehistoric Europe, Proceedings of the Prehistoric Society, London 22, 37–84.
Stocks, D.A. 1997. Derivation of ancient Egyptian faience core and glaze materials. Antiquity 71/271, 179–182.
Petrie, W. M. F. 1909. Memphis I. London: British School of Archaeology in Egypt.
Tite, M.S. and Bimson, M. 1986. Faience: an investigation of the microstructures associated with the different methods of glazing, Archaeometry 28, 69–78.
Tite, M.S., Freestone I.C. and Bimson. M. 1983. Egyptian faience: an investigation of the methods of production, Archaeometry 25, 17–27.
Vandiver P.B. 1983. Egyptian faience technology, Appendix A. In: A. Kaczmarczyk and R.E.M. Hedges, Editors, Ancient Egyptian Faience, Warminster: Aris and Phillips, A1–A144.
Vandiver, P. and Kingery, W.D. 1987. Egyptian Faience: the first high-tech ceramic. In Kingery, W.D. ed., Ceramics and Civilisation 3, Columbus OH: American Ceramic Society, 19–34.
Verges, F.B. 1992. Bleus Egyptiennes. Paris: Louvain
Wulff, H. E., Wulff, H. S. and Koch, L., 1968. Egyptian faience - a possible survival in Iran. Archeology 21, 98–107. www.qomtechnique.com
Williamson, R.S. 1942. The Saqqara Graph. Nature 150, 607.
Whitford, Michelle F.; Wyatt-Spratt, Simon; Gore, Damian B.; Johnsson, Mattias T.; Power, Ronika K.; Rampe, Michael; Richards, Candace; Withford, Michael J. (2020-10-01). "Assessing the standardisation of Egyptian shabti manufacture via morphology and elemental analyses". Journal of Archaeological Science: Reports. 33: 102541. doi:10.1016/j.jasrep.2020.102541. ISSN 2352-409X. S2CID 224873688.
See also
William the Faience Hippopotamus
Pottery
African pottery
Glass compositions
History of glass
Ceramic glazes
Types of pottery decoration
Ancient Egyptian pottery
Egyptian inventions | Egyptian faience | [
"Chemistry"
] | 5,219 | [
"Glass compositions",
"Ceramic glazes",
"Coatings",
"Glass chemistry"
] |
5,657,994 | https://en.wikipedia.org/wiki/Efflux%20pump | An efflux pump is an active transporter in cells that moves out unwanted material. Efflux pumps are an important component in bacteria in their ability to remove antibiotics. The efflux could also be the movement of heavy metals, organic pollutants, plant-produced compounds, quorum sensing signals, bacterial metabolites and neurotransmitters. All microorganisms, with a few exceptions, have highly conserved DNA sequences in their genome that encode efflux pumps. Efflux pumps actively move substances out of a microorganism, in a process known as active efflux, which is a vital part of xenobiotic metabolism. This active efflux mechanism is responsible for various types of resistance to bacterial pathogens within bacterial species - the most concerning being antibiotic resistance because microorganisms can have adapted efflux pumps to divert toxins out of the cytoplasm and into extracellular media.
Efflux systems function via an energy-dependent mechanism (active transport) to pump out unwanted toxic substances through specific efflux pumps. Some efflux systems are drug-specific, whereas others may accommodate multiple drugs with small multidrug resistance (SMR) transporters.
Efflux pumps are proteinaceous transporters localized in the cytoplasmic membrane of all kinds of cells. They are active transporters, meaning that they require a source of chemical energy to perform their function. Some are primary active transporters utilizing adenosine triphosphate hydrolysis as a source of energy, whereas others are secondary active transporters (uniporters, symporters, or antiporters) in which transport is coupled to an electrochemical potential difference created by pumping hydrogen or sodium ions into the cell.
Bacterial
Bacterial efflux pumps are classified into five major superfamilies, based on their amino acid sequence and the energy source used to export their substrates:
The major facilitator superfamily (MFS)
The ABC transporters
The small multidrug resistance family (SMR)
The resistance-nodulation-cell division superfamily (RND)
The multi antimicrobial extrusion protein family (MATE).
Of these, only the ABC superfamily are primary transporters, the rest being secondary transporters utilizing proton or sodium gradient as a source of energy. Whereas MFS dominates in Gram positive bacteria, the RND family was once thought to be unique to Gram negative bacteria. They have since been found in all major kingdoms.
Structure
Efflux pumps generally consist of an outer membrane efflux protein, a middle periplasmic protein, an inner membrane protein, and a transmembrane duct. The transmembrane duct is located in the outer membrane of the cell. The duct is also bound to two other proteins: a periplasmic membrane protein and an integral membrane transporter. The periplasmic membrane protein and the inner membrane protein of the system are coupled to control the opening and closing of the duct (channel). When a toxin binds to the inner membrane protein, it triggers a biochemical cascade that transmits signals to the periplasmic membrane protein and outer membrane protein to open the channel and move the toxin out of the cell. This mechanism relies on an energy-dependent, protein-protein interaction driven by the exchange of the toxin for an H+ ion by the inner membrane transporter.
The fully assembled in vitro and in vivo structures of AcrAB-TolC pump have been solved by cryoEM and cryoET.
Function
Although antibiotics are the most clinically important substrates of efflux systems, it is probable that most efflux pumps have other natural physiological functions. Examples include:
The E. coli AcrAB efflux system, which has a physiologic role of pumping out bile acids and fatty acids to lower their toxicity.
The MFS family Ptr pump in Streptomyces pristinaespiralis appears to be an autoimmunity pump for this organism when it turns on production of pristinamycins I and II.
The AcrAB–TolC system in E. coli is suspected to have a role in the transport of the calcium-channel components in the E. coli membrane.
The MtrCDE system plays a protective role by providing resistance to faecal lipids in rectal isolates of Neisseria gonorrhoeae.
The AcrAB efflux system of Erwinia amylovora is important for this organism's virulence, plant (host) colonization, and resistance to plant toxins.
The MexXY component of the MexXY-OprM multidrug efflux system of P. aeruginosa is inducible by antibiotics that target ribosomes via the PA5471 gene product.
Efflux pumps have also been shown to play a role in biofilm formation. However, the substrates for such pumps, and whether changes in their efflux activity affect biofilm formation directly or indirectly, remain to be determined.
The ability of efflux systems to recognize a large number of compounds other than their natural substrates is probably because substrate recognition is based on physicochemical properties, such as hydrophobicity, aromaticity and ionizable character rather than on defined chemical properties, as in classical enzyme-substrate or ligand-receptor recognition. Because most antibiotics are amphiphilic molecules - possessing both hydrophilic and hydrophobic characters - they are easily recognized by many efflux pumps.
Impact on antimicrobial resistance
The impact of efflux mechanisms on antimicrobial resistance is large; this is usually attributed to the following:
The genetic elements encoding efflux pumps may be encoded on chromosomes and/or plasmids, thus contributing to both intrinsic (natural) and acquired resistance respectively. As an intrinsic mechanism of resistance, efflux pump genes can survive a hostile environment (for example in the presence of antibiotics) which allows for the selection of mutants that over-express these genes. Being located on transportable genetic elements as plasmids or transposons is also advantageous for the microorganisms as it allows for the easy spread of efflux genes between distant species.
Antibiotics can act as inducers and regulators of the expression of some efflux pumps.
Expression of several efflux pumps in a given bacterial species may lead to a broad spectrum of resistance when considering the shared substrates of some multi-drug efflux pumps, where one efflux pump may confer resistance to a wide range of antimicrobials.
Eukaryotic
In eukaryotic cells, the existence of efflux pumps has been known since the discovery of P-glycoprotein in 1976 by Juliano and Ling. Efflux pumps are one of the major causes of anticancer drug resistance in eukaryotic cells. They include monocarboxylate transporters (MCTs), multiple drug resistance proteins (MDRs, also referred to as P-glycoproteins), multidrug resistance-associated proteins (MRPs), peptide transporters (PEPTs), and Na+ phosphate transporters (NPTs). These transporters are distributed along particular portions of the renal proximal tubule, intestine, liver, blood–brain barrier, and other portions of the brain.
Inhibitors
Several trials are currently being conducted to develop drugs that can be co-administered with antibiotics to act as inhibitors for the efflux-mediated extrusion of antibiotics. As yet, no efflux inhibitor has been approved for therapeutic use, but some are being used to determine the prevalence of efflux pumps in clinical isolates and in cell biology research. Verapamil, for example, is used to block P-glycoprotein-mediated efflux of DNA-binding fluorophores, thereby facilitating fluorescent cell sorting for DNA content. Various natural products have been shown to inhibit bacterial efflux pumps including the carotenoids capsanthin and capsorubin, the flavonoids rotenone and chrysin, and the alkaloid lysergol. Some nanoparticles, for example zinc oxide, also inhibit bacterial efflux pumps.
See also
Antibiotic resistance
References
Membrane biology
Antibiotics | Efflux pump | [
"Chemistry",
"Biology"
] | 1,718 | [
"Biotechnology products",
"Membrane biology",
"Antibiotics",
"Molecular biology",
"Biocides"
] |
5,658,096 | https://en.wikipedia.org/wiki/Exotic%20material | Exotic materials are materials, most often metals and alloys, distinguished by high strength and hardness. The term does not refer to rarity, but to exceptional performance characteristics. Exotic materials are used for high-performance applications.
Exotic Materials can include plastics, superalloys, semiconductors, superconductors, and ceramics.
Exotic metals and alloys
Examples of metals and alloys that can be exotic:
Chromium
Cobalt
Hastelloy
Inconel
Mercury (element) (aka quicksilver, hydrargyrum)
Molybdenum
Monel
Platinum
Tantalum
Stainless Steel
Titanium
Tungsten (also called wolfram)
Waspaloy
Materials with high alloy content, known as super alloys or exotic alloys, offer enhanced performance properties including excellent strength and durability, and resistance to oxidation, corrosion and deforming at high temperatures or under extreme pressure. Because of these properties, super alloys make the best spring materials for demanding working conditions, which can be encountered across various industry sectors, including the automotive, marine and aerospace sectors as well as oil and gas extraction, thermal processing, petrochemical processing and power generation.
Notes
See also
Exotic matter
Materials | Exotic material | [
"Physics"
] | 224 | [
"Materials",
"Matter"
] |
5,658,261 | https://en.wikipedia.org/wiki/Apeirogon | In geometry, an apeirogon () or infinite polygon is a polygon with an infinite number of sides. Apeirogons are the rank 2 case of infinite polytopes. In some literature, the term "apeirogon" may refer only to the regular apeirogon, with an infinite dihedral group of symmetries.
Definitions
Geometric apeirogon
Given a point A0 in a Euclidean space and a translation S, define the point Ai to be the point obtained from i applications of the translation S to A0, so Ai = Si(A0). The set of vertices Ai with i any integer, together with edges connecting adjacent vertices, is a sequence of equal-length segments of a line, and is called the regular apeirogon as defined by H. S. M. Coxeter.
A regular apeirogon can be defined as a partition of the Euclidean line E1 into infinitely many equal-length segments. It generalizes the regular n-gon, which may be defined as a partition of the circle S1 into finitely many equal-length segments.
Hyperbolic pseudogon
The regular pseudogon is a partition of the hyperbolic line H1 (instead of the Euclidean line) into segments of length 2λ, as an analogue of the regular apeirogon.
Abstract apeirogon
An abstract polytope is a partially ordered set P (whose elements are called faces) with properties modeling those of the inclusions of faces of convex polytopes. The rank (or dimension) of an abstract polytope is determined by the length of the maximal ordered chains of its faces, and an abstract polytope of rank n is called an abstract n-polytope.
For abstract polytopes of rank 2, this means that: A) the elements of the partially ordered set are sets of vertices with either zero vertex (the empty set), one vertex, two vertices (an edge), or the entire vertex set (a two-dimensional face), ordered by inclusion of sets; B) each vertex belongs to exactly two edges; C) the undirected graph formed by the vertices and edges is connected.
An abstract polytope is called an abstract apeirotope if it has infinitely many elements; an abstract 2-apeirotope is called an abstract apeirogon.
A realization of an abstract polytope is a mapping of its vertices to points in a geometric space (typically a Euclidean space). A faithful realization is a realization such that the vertex mapping is injective. Every geometric apeirogon is a realization of the abstract apeirogon.
Symmetries
The infinite dihedral group G of symmetries of a regular geometric apeirogon is generated by two reflections, the product of which translates each vertex of P to the next. The product of the two reflections can be decomposed as a product of a non-zero translation, finitely many rotations, and a possibly trivial reflection.
In an abstract polytope, a flag is a collection of one face of each dimension, all incident to each other (that is, comparable in the partial order); an abstract polytope is called regular if it has symmetries (structure-preserving permutations of its elements) that take any flag to any other flag. In the case of a two-dimensional abstract polytope, this is automatically true; the symmetries of the apeirogon form the infinite dihedral group.
A symmetric realization of an abstract apeirogon is defined as a mapping from its vertices to a finite-dimensional geometric space (typically a Euclidean space) such that every symmetry of the abstract apeirogon corresponds to an isometry of the images of the mapping.
Moduli space
Generally, the moduli space of a faithful realization of an abstract polytope is a convex cone of infinite dimension. The realization cone of the abstract apeirogon has uncountably infinite algebraic dimension and cannot be closed in the Euclidean topology.
Classification of Euclidean apeirogons
The symmetric realization of any regular polygon in Euclidean space of dimension greater than 2 is reducible, meaning it can be made as a blend of two lower-dimensional polygons. This characterization of the regular polygons naturally characterizes the regular apeirogons as well. The discrete apeirogons are the results of blending the 1-dimensional apeirogon with other polygons. Since every polygon is a quotient of the apeirogon, the blend of any polygon with an apeirogon produces another apeirogon.
In two dimensions the discrete regular apeirogons are the infinite zigzag polygons, resulting from the blend of the 1-dimensional apeirogon with the digon.
In three dimensions the discrete regular apeirogons are the infinite helical polygons, with vertices spaced evenly along a helix. These are the result of blending the 1-dimensional apeirogon with a 2-dimensional polygon.
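As a small numerical illustration of such a helical polygon (an illustrative sketch only; the radius, pitch, and turning angle below are arbitrary choices, not values from this article), the following Python code generates a finite window of vertices, each carried to the next by the same screw motion, so that all edges have equal length.

import math

def helical_apeirogon_vertices(n, radius=1.0, pitch=0.25, angle=math.pi / 5):
    # Vertex i lies at (radius*cos(i*angle), radius*sin(i*angle), i*pitch):
    # every vertex is taken to the next by the same rotation-plus-translation,
    # so consecutive vertices are evenly spaced along a helix. The true
    # apeirogon has infinitely many vertices; only a finite window is returned.
    return [
        (radius * math.cos(i * angle), radius * math.sin(i * angle), i * pitch)
        for i in range(-n, n + 1)
    ]

for vertex in helical_apeirogon_vertices(3):
    print(tuple(round(c, 3) for c in vertex))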
Generalizations
Higher rank
Apeirohedra are the rank 3 analogues of apeirogons, and are the infinite analogues of polyhedra. More generally, n-apeirotopes or infinite n-polytopes are the n-dimensional analogues of apeirogons, and are the infinite analogues of n-polytopes.
See also
Apeirogonal tiling
Apeirogonal prism
Apeirogonal antiprism
Teragon, a fractal generalized polygon that also has infinitely many sides
Notes
References
External links
Polygons by the number of sides
Infinity | Apeirogon | [
"Mathematics"
] | 1,156 | [
"Mathematical objects",
"Infinity"
] |
5,658,345 | https://en.wikipedia.org/wiki/Demand%20characteristics | In social research, particularly in psychology, the term demand characteristic refers to an experimental artifact where participants form an interpretation of the experiment's purpose and subconsciously change their behavior to fit that interpretation. Typically, demand characteristics are considered an extraneous variable, exerting an effect on behavior other than that intended by the experimenter. Pioneering research was conducted on demand characteristics by Martin Orne.
A possible cause of demand characteristics is participants' expectation that they will somehow be evaluated, leading them to try to 'beat' the experiment and attain good scores in the perceived evaluation. Rather than giving honest answers, participants may change some or all of their answers to match what they believe the experimenter wants; in this way demand characteristics can change participants' behaviour so that they appear more socially or morally responsible. Demand characteristics cannot be eliminated from experiments, but they can be studied to assess their effect on experimental results.
Examples of common demand characteristics
Common demand characteristics include:
Rumors of the study – any information, true or false, circulated about the experiment outside of the experiment itself.
Setting of the laboratory – the location where the experiment is being performed, if it is significant.
Explicit or implicit communication – any communication between the participant and experimenter, whether it be verbal or non-verbal, that may influence their perception of the experiment.
Weber and Cook have described some demand characteristics as involving the participant taking on a role in the experiment. These roles include:
The good-participant role (also known as the please-you effect) in which the participant attempts to discern the experimenter's hypotheses and to confirm them. The participant does not want to "ruin" the experiment.
The negative-participant role (also known as the screw-you effect) in which the participant attempts to discern the experimenter's hypotheses, but only in order to destroy the credibility of the study.
The faithful-participant role in which the participant follows the instructions given by the experimenter to the letter.
The apprehensive-participant role in which the participant is so concerned about how the experimenter might evaluate the responses that the participant behaves in a socially desirable way.
Dealing with demand characteristics
Researchers use a number of different approaches for reducing the effect of demand characteristics in research situations. Some of the more common approaches include the following:
Deception: Deceive participants about one or more aspects of the research to conceal the research hypothesis.
Post-experimental questionnaires: For example, Rubin (2016) discusses the Perceived Awareness of the Research Hypothesis (PARH) scale. This 4-item scale is usually presented at the end of a research session. In responding to the scale, participants indicate the extent to which they believe they were aware of the researchers' hypotheses during the research. Researchers then compute a mean PARH score and correlate it with their key effects (a worked correlation sketch is given after this list). Significant correlations indicate that demand characteristics may be related to the research results; nonsignificant correlations provide tentative evidence against the demand-characteristics explanation. Pre-experimental questionnaires can introduce demand characteristics just as post-experimental ones can. Ideally, an experimenter other than the one who ran the session should distribute the questionnaires.
Unobtrusive manipulations and measures: Conceal independent and dependent measures, so they do not provide clues about the research hypothesis.
Have self-discipline: The experimenter must display self-discipline to obtain a valid inquiry.
Avoid temptation: If the experiment is performed again, avoid asking the participants what they have experienced.
The more the merrier: To avoid experimenter bias, have more than one experimenter.
Be specific and clear: If the purpose of the experiment is not clear or ambiguous, then the participants may guess many different hypotheses and cause the data to be skewed even more.
Double blind: Do not inform the person who has contact with the participants about the research hypotheses. This reduces the experimenter-expectancy effect.
Minimize interpersonal contact between the researcher and the participant: Reduces experimenter expectancy effect.
Use a between-subjects design rather than a within-subjects design: The central tendency of a social group can affect ratings of its intragroup variability in the absence of social identity concerns.
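The PARH approach described above can be made concrete with a short analysis sketch. The data, variable names, and outcome measure below are hypothetical and exist only to show the computation (the scale itself is due to Rubin, 2016): each participant's four PARH item responses are averaged and then correlated with a key dependent variable.

import numpy as np
from scipy import stats

# Hypothetical example data: one row of PARH item responses per participant
# (e.g. 1-7 agreement ratings) and one outcome score per participant.
parh_items = np.array([
    [2, 3, 2, 4],
    [5, 6, 5, 5],
    [1, 2, 2, 1],
    [4, 4, 3, 5],
    [6, 6, 7, 6],
    [3, 2, 3, 3],
])
outcome = np.array([10.2, 14.1, 9.8, 12.0, 15.3, 10.9])

parh_mean = parh_items.mean(axis=1)        # mean PARH score per participant
r, p = stats.pearsonr(parh_mean, outcome)  # correlate with the key effect

# A significant correlation suggests the results may be related to demand
# characteristics; a nonsignificant one is tentative evidence against that.
print(f"r = {r:.2f}, p = {p:.3f}")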
See also
Scientific method
List of cognitive biases
Allegiance bias
Cultural bias
Epistemic feedback
Funding bias
Hawthorne effect
N rays – imaginary radiation
Naturalistic observation
Observer bias
Observer-expectancy effect
Participant observer
Placebo and Nocebo
Publication bias
Pygmalion effect – teachers who expect higher achievement from some children actually get it
Reality tunnel
Reflexivity (social theory)
Subject-expectancy effect
References
Experimental psychology
Experimental bias
Social research | Demand characteristics | [
"Mathematics"
] | 953 | [
"Experimental bias",
"Statistical concepts"
] |
5,658,523 | https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%208A | Bone morphogenetic protein 8A (BMP8A) is a protein that in humans is encoded by the BMP8A gene.
BMP8A is a polypeptide member of the TGFβ superfamily of proteins. It, like other bone morphogenetic proteins (BMPs), is involved in the development of bone and cartilage. BMP8A may be involved in epithelial osteogenesis. It also plays a role in bone homeostasis. It is a disulfide-linked homodimer.
References
External links
Further reading
Bone morphogenetic protein
Developmental genes and proteins
TGFβ domain | Bone morphogenetic protein 8A | [
"Biology"
] | 137 | [
"Induced stem cells",
"Developmental genes and proteins"
] |
5,658,554 | https://en.wikipedia.org/wiki/Paleobiology | Paleobiology (or palaeobiology) is an interdisciplinary field that combines the methods and findings found in both the earth sciences and the life sciences. Paleobiology is not to be confused with geobiology, which focuses more on the interactions between the biosphere and the physical Earth.
Paleobiological research uses biological field research of current biota and of fossils millions of years old to answer questions about the molecular evolution and the evolutionary history of life. In this scientific quest, macrofossils, microfossils and trace fossils are typically analyzed. However, the 21st-century biochemical analysis of DNA and RNA samples offers much promise, as does the biometric construction of phylogenetic trees.
An investigator in this field is known as a paleobiologist.
Important research areas
Paleobotany applies the principles and methods of paleobiology to flora, especially green land plants, but also including the fungi and seaweeds (algae). See also mycology, phycology and dendrochronology.
Paleozoology uses the methods and principles of paleobiology to understand fauna, both vertebrates and invertebrates. See also vertebrate and invertebrate paleontology, as well as paleoanthropology.
Micropaleontology applies paleobiologic principles and methods to archaea, bacteria, protists and microscopic pollen/spores. See also microfossils and palynology.
Paleovirology examines the evolutionary history of viruses on paleobiological timescales.
Paleobiochemistry uses the methods and principles of organic chemistry to detect and analyze molecular-level evidence of ancient life, both microscopic and macroscopic.
Paleoecology examines past ecosystems, climates, and geographies so as to better comprehend prehistoric life.
Taphonomy analyzes the post-mortem history (for example, decay and decomposition) of an individual organism in order to gain insight on the behavior, death and environment of the fossilized organism.
Paleoichnology analyzes the tracks, borings, trails, burrows, impressions, and other trace fossils left by ancient organisms in order to gain insight into their behavior and ecology.
Stratigraphic paleobiology studies long-term secular changes, as well as the (short-term) bed-by-bed sequence of changes, in organismal characteristics and behaviors. See also stratification, sedimentary rocks and the geologic time scale.
Evolutionary developmental paleobiology examines the evolutionary aspects of the modes and trajectories of growth and development in the evolution of life – clades both extinct and extant. See also adaptive radiation, cladistics, evolutionary biology, developmental biology and phylogenetic tree.
Paleobiologists
The founder or "father" of modern paleobiology was Baron Franz Nopcsa (1877 to 1933), a Hungarian scientist trained at the University of Vienna. He initially termed the discipline "paleophysiology".
However, credit for coining the word paleobiology itself should go to Professor Charles Schuchert. He proposed the term in 1904 so as to initiate "a broad new science" joining "traditional paleontology with the evidence and insights of geology and isotopic chemistry."
On the other hand, Charles Doolittle Walcott, a Smithsonian adventurer, has been cited as the "founder of Precambrian paleobiology". Although best known as the discoverer of the mid-Cambrian Burgess shale animal fossils, in 1883 this American curator found the "first Precambrian fossil cells known to science" – a stromatolite reef then known as Cryptozoon algae. In 1899 he discovered the first acritarch fossil cells, a Precambrian algal phytoplankton he named Chuaria. Lastly, in 1914, Walcott reported "minute cells and chains of cell-like bodies" belonging to Precambrian purple bacteria.
Later 20th-century paleobiologists have also figured prominently in finding Archaean and Proterozoic eon microfossils: In 1954, Stanley A. Tyler and Elso S. Barghoorn described 2.1 billion-year-old cyanobacteria and fungi-like microflora at their Gunflint Chert fossil site. Eleven years later, Barghoorn and J. William Schopf reported finely-preserved Precambrian microflora at their Bitter Springs site of the Amadeus Basin, Central Australia.
In 1993, Schopf discovered O2-producing blue-green bacteria at his 3.5 billion-year-old Apex Chert site in Pilbara Craton, Marble Bar, in the northwestern part of Western Australia. So paleobiologists were at last homing in on the origins of the Precambrian "Oxygen catastrophe".
During the early part of the 21st century, two paleobiologists, Anjali Goswami and Thomas Halliday, studied the evolution of mammaliaforms during the Mesozoic and Cenozoic eras (between 299 million and 12,000 years ago). Additionally, they uncovered and studied the morphological disparity and rapid evolutionary rates of organisms living near the end of the Cretaceous period (145 million to 66 million years ago) and in the aftermath of the mass extinction that closed it.
Paleobiologic journals
Acta Palaeontologica Polonica
Biology and Geology
Historical Biology
PALAIOS
Palaeogeography, Palaeoclimatology, Palaeoecology
Paleobiology (journal)
Paleoceanography
Paleobiology in the general press
Books written for the general public on this topic include the following:
The Rise and Reign of the Mammals: A New History, from the Shadow of the Dinosaurs to Us written by Steve Brusatte
Otherlands: A Journey Through Earth's Extinct Worlds written by Thomas Halliday
Introduction to Paleobiology and the Fossil Record (22 April 2020) by Michael J. Benton and David A. T. Harper
See also
History of biology
History of paleontology
History of invertebrate paleozoology
Molecular paleontology
Taxonomy of commonly fossilised invertebrates
Treatise on Invertebrate Paleontology
Footnotes
Derek E.G. Briggs and Peter R. Crowther, eds. (2003). Palaeobiology II. Malden, Massachusetts: Blackwell Publishing. The second edition of an acclaimed British textbook.
Robert L. Carroll (1998). Patterns and Processes of Vertebrate Evolution. Cambridge Paleobiology Series. Cambridge, England: Cambridge University Press. Applies paleobiology to the adaptive radiation of fishes and quadrupeds.
Matthew T. Carrano, Timothy Gaudin, Richard Blob, and John Wible, eds. (2006). Amniote Paleobiology: Perspectives on the Evolution of Mammals, Birds and Reptiles. Chicago: University of Chicago Press. This new book describes paleobiological research into land vertebrates of the Mesozoic and Cenozoic eras.
Robert B. Eckhardt (2000). Human Paleobiology. Cambridge Studies in Biology and Evolutionary Anthropology. Cambridge, England: Cambridge University Press. This book connects paleoanthropology and archeology to the field of paleobiology.
Douglas H. Erwin (2006). Extinction: How Life on Earth Nearly Ended 250 Million Years Ago. Princeton: Princeton University Press. An investigation by a paleobiologist into the many theories as to what happened during the catastrophic Permian-Triassic transition.
Brian Keith Hall and Wendy M. Olson, eds. (2003). Keywords and Concepts in Evolutionary Biology. Cambridge, Massachusetts: Harvard University Press.
David Jablonski, Douglas H. Erwin, and Jere H. Lipps (1996). Evolutionary Paleobiology. Chicago: University of Chicago Press, 492 pages. A fine American textbook.
Masatoshi Nei and Sudhir Kumar (2000). Molecular Evolution and Phylogenetics. Oxford, England: Oxford University Press. This text links DNA/RNA analysis to the evolutionary "tree of life" in paleobiology.
Donald R. Prothero (2004). Bringing Fossils to Life: An Introduction to Paleobiology. New York: McGraw Hill. An acclaimed book for the novice fossil-hunter and young adults.
Mark Ridley, ed. (2004). Evolution. Oxford, England: Oxford University Press. An anthology of analytical studies in paleobiology.
Raymond Rogers, David Eberth, and Tony Fiorillo (2007). Bonebeds: Genesis, Analysis and Paleobiological Significance. Chicago: University of Chicago Press. A new book regarding the fossils of vertebrates, especially tetrapods on land during the Mesozoic and Cenozoic eras.
Thomas J. M. Schopf, ed. (1972). Models in Paleobiology. San Francisco: Freeman, Cooper. A much-cited, seminal classic in the field discussing methodology and quantitative analysis.
Thomas J.M. Schopf (1980). Paleoceanography. Cambridge, Massachusetts: Harvard University Press. A later book by the noted paleobiologist. This text discusses ancient marine ecology.
J. William Schopf (2001). Cradle of Life: The Discovery of Earth's Earliest Fossils. Princeton: Princeton University Press. The use of biochemical and ultramicroscopic analysis to analyze microfossils of bacteria and archaea.
Paul Selden and John Nudds (2005). Evolution of Fossil Ecosystems. Chicago: University of Chicago Press. A recent analysis and discussion of paleoecology.
David Sepkoski. Rereading the Fossil Record: The Growth of Paleobiology as an Evolutionary Discipline (University of Chicago Press; 2012) 432 pages; A history since the mid-19th century, with a focus on the "revolutionary" era of the 1970s and early 1980s and the work of Stephen Jay Gould and David Raup.
Paul Tasch (1980). Paleobiology of the Invertebrates. New York: John Wiley & Sons. Applies statistics to the evolution of sponges, cnidarians, worms, brachiopods, bryozoa, mollusks, and arthropods.
Shuhai Xiao and Alan J. Kaufman, eds. (2006). Neoproterozoic Geobiology and Paleobiology. New York: Springer Science+Business Media. This new book describes research into the fossils of the earliest multicellular animals and plants, especially the Ediacaran period invertebrates and algae.
Bernard Ziegler and R. O. Muir (1983). Introduction to Palaeobiology. Chichester, England: E. Horwood. A classic, British introductory textbook.
External links
Paleobiology website of the National Museum of Natural History (Smithsonian) in Washington, D.C. (archived 11 March 2007)
The Paleobiology Database
Developmental biology
Evolutionary biology
Subfields of paleontology | Paleobiology | [
"Biology"
] | 2,278 | [
"Evolutionary biology",
"Behavior",
"Developmental biology",
"Reproduction",
"Paleobiology"
] |
5,658,880 | https://en.wikipedia.org/wiki/Stevens%20rearrangement | The Stevens rearrangement in organic chemistry is an organic reaction converting quaternary ammonium salts and sulfonium salts to the corresponding amines or sulfides in presence of a strong base in a 1,2-rearrangement.
The reactants can be obtained by alkylation of the corresponding amines and sulfides. The substituent R next to the amine methylene bridge is an electron-withdrawing group.
The original 1928 publication by Thomas S. Stevens concerned the reaction of 1-phenyl-2-(N,N-dimethylamino)ethanone with benzyl bromide to the ammonium salt followed by the rearrangement reaction with sodium hydroxide in water to the rearranged amine.
A 1932 publication described the corresponding sulfur reaction.
Reaction mechanism
The reaction mechanism of the Stevens rearrangement is one of the most controversial reaction mechanisms in organic chemistry. Key in the reaction mechanism for the Stevens rearrangement (explained for the nitrogen reaction) is the formation of an ylide after deprotonation of the ammonium salt by a strong base. Deprotonation is aided by electron-withdrawing properties of substituent R. Several reaction modes exist for the actual rearrangement reaction.
A concerted reaction requires an antarafacial reaction mode but since the migrating group displays retention of configuration this mechanism is unlikely.
In an alternative reaction mechanism the N–C bond of the leaving group is homolytically cleaved to form a di-radical pair (3a). In order to explain the observed retention of configuration, the presence of a solvent cage is invoked. Another possibility is the formation of a cation-anion pair (3b), also in a solvent cage.
Scope
Competing reactions are the Sommelet-Hauser rearrangement and the Hofmann elimination.
In one application a double-Stevens rearrangement expands a cyclophane ring. The ylide is prepared in situ by reaction of the diazo compound ethyl diazomalonate with a sulfide catalyzed by dirhodium tetraacetate in refluxing xylene.
Enzymatic reaction
Recently, γ-butyrobetaine hydroxylase, an enzyme that is involved in the human carnitine biosynthesis pathway, was found to catalyze a C-C bond formation reaction in a fashion analogous to a Stevens type rearrangement. The substrate for the reaction is meldonium.
See also
Sommelet–Hauser rearrangement
γ-Butyrobetaine hydroxylase
References
Rearrangement reactions
Name reactions | Stevens rearrangement | [
"Chemistry"
] | 535 | [
"Name reactions",
"Rearrangement reactions",
"Organic reactions"
] |
5,659,240 | https://en.wikipedia.org/wiki/Inconstant%20Star | Inconstant Star is a science fiction fix-up novel by American writer Poul Anderson. It is formed by the novellas Iron and Inconstant Star, first published in The Man-Kzin Wars (1988) and Man-Kzin Wars III (1990), respectively. The title is from the tumbling alien artifact that sends out radiation. Due to the tumbling effect, the output can only be seen briefly from a given point in space, looking like a star, but then disappearing as the artifact moves.
The title also references another Niven story, "Inconstant Moon", which is not part of the Known Space series. The novel is the story of Robert Saxtorph and his ship Rover, hired for peaceful missions, but which run into Kzinti at every turn.
Plot summary
There are two parts to the novel, Iron, and Inconstant Star.
In “Iron”, Saxtorph and the Rover, hired by the wealthy Crashlander Laurinda Brozik, set out to explore a newly discovered red dwarf star. When they arrive, they are challenged by a Kzinti warship. After the crew separates onto the shuttles, the Rover is captured and landed on one of the moons. The first shuttle sets down on Prima, the first planet, and is held fast by a planet-sized organism that begins dissolving the shuttle. The crew broadcast for rescue, but the Kzin refuse to help.
Meanwhile, helpless to rescue their friends, Robert, Dorcas, and Laurinda make a plan to steal a tug and escape back to friendly space with the news of the Kzin base. Dorcas pilots the tug, and takes out the ship guarding the Rover. Robert and Laurinda land, fight off a Kzinti shuttle, and recover the Rover. They are able to rescue Juan and Carita, and destroy the base with a guided asteroid.
In “Inconstant Star”, Saxtorph and crew are hired by Tyra Nordbo to redeem her father's honor, as he was accused of collaboration with the Kzin during their occupation of Wunderland. To do so, they must use notes he had left behind and follow a ship that had left 30 years prior to investigate a concentration of gamma rays. They travel to the coordinates, and find a massive artifact made of an unknown metal. A hole in the spherical artifact is pouring out lethal radiation. As they study it, they learn it is a weapon of the Tnuctip. It is a shell around a “captured” black hole, one that had been holed by a meteorite and is thus releasing the Hawking radiation. They then deduce the route of the original Kzin ship, and head off to the Father Sun, the star of the Kzin homeworld. En route, they locate the Sherrek, where Tyra's father Peter had worked free of his Kzin captors. They rescue him and head back to the artifact. Another Kzin ship, Swordbeak, also finds the old ship. They, too, head to the artifact, and catch the Rover by surprise. Just when all looks lost, Robert and Dorcas conceive a plan to use the artifact's radiation against the Kzin warship. In a last act of defiance, a dying Weoch-Captain activates the artifact's hyperdrive and heads out into unknown space.
Characters
Robert Saxtorph – Terran. Captain of the Rover. Husband of Dorcas.
Dorcas Saxtorph – Terran. First Mate of the Rover. Wife of Robert.
Kamehameha Ryan – Terran (Hawaiian). Crewman on the Rover and longtime friend of the Saxtorphs’.
Carita Fenger – Crewman on the Rover. Jinxian.
Juan Yoshii – Crewman on the Rover and aspiring poet. Belter.
Laurinda Broznik – Astronomer who discovered the star in "Iron". Crashlander.
Arthur Treginnis – Scientist. Mountaineer of Crew descent (who was a Colonist sympathizer during the revolution).
Ulf Markham – Commissioner of the Interworld Space Commission and spy for the Kzinti. Wunderlander.
Tyra Nordbo – Hires the Rover in "Inconstant Star". Daughter of Peter Nordbo.
Peter Nordbo – Former landholder on Wunderland, amateur astronomer, and slave of the Kzin.
Weoch-Captain – Kzin captain of the Swordbeak.
See also
Man-Kzin Wars
External links
1991 American novels
1991 science fiction novels
Fiction set around 61 Ursae Majoris
Known Space stories
Novels by Poul Anderson
Fiction about stars
Fiction about black holes | Inconstant Star | [
"Physics"
] | 972 | [
"Black holes",
"Unsolved problems in physics",
"Fiction about black holes"
] |
5,659,894 | https://en.wikipedia.org/wiki/Anabasine | Anabasine is a pyridine and piperidine alkaloid found in the tree tobacco (Nicotiana glauca) plant, as well as in tree tobacco's close relative the common tobacco plant (Nicotiana tabacum). It is a structural isomer of, and chemically similar to, nicotine. Its principal (historical) industrial use is as an insecticide.
Anabasine is present in trace amounts in tobacco smoke, and can be used as an indicator of a person's exposure to tobacco smoke.
Pharmacology
Anabasine is a nicotinic acetylcholine receptor agonist. In high doses, it produces a depolarizing block of nerve transmission, which can cause symptoms similar to those of nicotine poisoning and, ultimately, death by asystole. In larger amounts it is thought to be teratogenic in swine.
The intravenous LD50 of anabasine ranges from 11 mg/kg to 16 mg/kg in mice, depending on the enantiomer.
Analogs
B. Bhatti, et al. made some higher potency sterically strained bicyclic analogs of anabasine:
2-(Pyridin-3-yl)-1-azabicyclo[3.2.2]nonane (TC-1698)
2-(Pyridin-3-yl)-1-azabicyclo[2.2.2]octane,
and 2-(Pyridin-3-yl)-1-azabicyclo[3.2.1]octane.
See also
Anatabine
References
Pyridine alkaloids
Nicotinic agonists
Alkaloids found in Nicotiana
Plant toxin insecticides
Piperidine alkaloids
Plant toxins
3-Pyridyl compounds
2-Piperidinyl compounds | Anabasine | [
"Chemistry"
] | 404 | [
"Plant toxin insecticides",
"Chemical ecology",
"Pyridine alkaloids",
"Plant toxins",
"Piperidine alkaloids",
"Alkaloids by chemical classification"
] |
5,660,340 | https://en.wikipedia.org/wiki/Bathythermograph | The bathythermograph, or BT, also known as the Mechanical Bathythermograph, or MBT; is a device that holds a temperature sensor and a transducer to detect changes in water temperature versus depth down to a depth of approximately 285 meters (935 feet). Lowered by a small winch on the ship into the water, the BT records pressure and temperature changes on a coated glass slide as it is dropped nearly freely through the water. While the instrument is being dropped, the wire is payed out until it reaches a predetermined depth, then a brake is applied and the BT is drawn back to the surface. Because the pressure is a function of depth (see Pascal's law), temperature measurements can be correlated with the depth at which they are recorded.
History
The true origins of the BT date to 1935, when Carl-Gustaf Rossby started experimenting. Rossby then handed development of the BT to his graduate student Athelstan Spilhaus, who fully developed it in 1938 as a collaboration between MIT, the Woods Hole Oceanographic Institution (WHOI), and the U.S. Navy. The device was modified during World War II to gather information on the varying temperature of the ocean for the U.S. Navy. Originally the slides were prepared "by rubbing a bit of skunk oil on with a finger and then wiping off with the soft side of one's hand," followed by smoking the slide over the flame of a Bunsen burner. Later the skunk oil was replaced with an evaporated metal film.
Since water temperature may vary by layer and may affect sonar by producing inaccurate location results, bathothermographs (U.S. World War II spelling) were installed on the outer hulls of U.S. submarines during World War II.
By monitoring variances, or lack of variances, in underwater temperature or pressure layers, while submerged, the submarine commander could adjust and compensate for temperature layers that could affect sonar accuracy. This was especially important when firing torpedoes at a target based strictly on a sonar fix.
More importantly, when the submarine was under attack by a surface vessel using sonar, the information from the bathothermograph allowed the submarine commander to seek thermoclines, which are colder layers of water, that would distort the pinging from the surface vessel's sonar, allowing the submarine under attack to "disguise" its actual position and to escape depth charge damage and eventually to escape from the surface vessel.
Throughout the use of the bathythermograph various technicians, watchstanders, and oceanographers noted how dangerous the deployment and retrieval of the BT was. According to watchstander Edward S. Barr: "… In any kind of rough weather, this BT position was frequently subject to waves making a clean sweep of the deck. In spite of breaking waves over the side, the operator had to hold his station, because the equipment was already over the side. One couldn't run for shelter as the brake and hoisting power were combined in a single hand lever. To let go of this lever would cause all the wire on the winch to unwind, sending the recording device and all its cable to the ocean bottom forever. It was not at all uncommon, from the protective position of the laboratory door, to look back and see your watchmate at the BT winch completely disappear from sight as a wave would come crashing over the side. … We also took turns taking BT readings. It wasn't fair for only one person to get wet consistently."
Expendable bathythermograph
After witnessing firsthand the dangers of deploying and retrieving BTs, James M. Snodgrass began developing the expendable bathythermograph (XBT). Snodgrass' description of the XBT:Briefly, the unit would break down in two components, as follows: the ship to surface unit, and surface to expendable unit. I have in mind a package which could be jettisoned, either by the "Armstrong" method, or some simple mechanical device, which would at all times be connected to the surface vessel. The wire would be paid out from the surface ship and not from the surface float unit. The surface float would require a minimum of flotation and a small, very simple sea anchor. From this simple platform the expendable BT unit would sink as outlined for the acoustic unit. However, it would unwind as it goes a very fine thread of probably neutrally buoyant conductor terminating at the float unit, thence connected to the wire leading to the ship. In the early 1960s the U.S. Navy contracted Sippican Corporation of Marion, Massachusetts to develop the XBT, who became the sole supplier.
The unit is composed of a probe, a wire link, and a shipboard canister. Inside the probe is a thermistor connected electronically to a chart recorder. The probe falls freely at about 20 feet per second; its known fall rate determines its depth and provides a temperature-depth trace on the recorder. A pair of fine copper wires, which pay out from both a spool retained on the ship and one dropped with the instrument, provides a data transfer line to the ship for shipboard recording. Eventually the wire runs out and breaks, and the XBT sinks to the ocean floor. Since the deployment of an XBT does not require the ship to slow down or otherwise interfere with normal operations, XBTs are often deployed from vessels of opportunity, such as cargo ships or ferries, and also by dedicated research ships conducting underway operations when a CTD cast would require stopping the ship for several hours. Airborne versions (AXBT) are also used; these use radio frequencies to transmit the data to the aircraft during deployment. Today Lockheed Martin Sippican has manufactured over 5 million XBTs.
Types of XBTs
Participation by Month of Country and Institutions deploying XBTs
Below is the list of XBT deployments for 2013:
XBT Fall Rate Bias
Since XBTs do not measure depth (e.g. via pressure), fall-rate equations are used to derive depth profiles from what is essentially a time series. The fall rate equation takes the form
z(t) = bt − at²
where z(t) is the depth of the XBT in meters, t is the elapsed time since the probe entered the water, and a and b are coefficients determined using theoretical and empirical methods. The coefficient b can be thought of as the initial speed as the probe hits the water. The coefficient a can be thought of as accounting for the reduction in mass with time as the wire spools off.
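As a concrete illustration, the following Python sketch (illustrative only; the coefficient values below are placeholders, not the published values for any particular probe model) converts an XBT's elapsed-time record into a temperature-depth profile using the quadratic fall-rate form given above.

def xbt_depth(t, a, b):
    # Depth in meters of an XBT probe t seconds after entering the water,
    # using the quadratic fall-rate form z(t) = b*t - a*t**2, where b is the
    # initial fall speed and a accounts for the slow change in fall rate as
    # wire spools off.
    return b * t - a * t * t

# Placeholder coefficients chosen only for demonstration.
a_demo, b_demo = 0.002, 6.5

# Pair a recorded temperature time series with derived depths.
times = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]          # seconds since launch (illustrative)
temps = [20.1, 20.0, 19.8, 19.5, 19.1, 18.8]    # degrees Celsius (illustrative)
for t, temp in zip(times, temps):
    print(f"{xbt_depth(t, a_demo, b_demo):7.2f} m   {temp:4.1f} °C")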
For a considerable time, these equations were relatively well-established, however in 2007 Gouretski and Koltermann showed a bias between XBT temperature measurements and CTD temperature measurements. They also showed that this varies over time and could be due to both errors in the calculation of depth and in measurement of the temperature. From that the 2008 NOAA XBT Fall Rate Workshop began to address the problem, with no viable conclusion as to how to proceed with adjusting the measurements. In 2010 the second XBT Fall Rate Workshop was held in Hamburg, Germany to continue discussing the problem and forge a way forward.
A major implication of this is that a depth-temperature profile can be integrated to estimate upper ocean heat content; the bias in these equations leads to a warm bias in the heat content estimations. The introduction of Argo floats has provided a much more reliable source of temperature profiles than XBTs; however, the XBT record remains important for estimating decadal trends and variability, and hence much effort has been put into resolving these systematic biases.
XBT correction needs to include both a drop-rate correction and a temperature correction.
Uses
Oceanography and hydrography: to obtain information on the temperature structure of the ocean.
A study in 2019 (published 2023) at the outfall of the Totten Glacier in East Antarctica showed that water at depth above freezing temperature was melting the under-side of the glacier.
Submarine and Anti-submarine warfare: to determine the layer depth (thermocline) used by submarines to avoid active sonar search.
See also
References
External links
Expendable Bathythermograph Expendable Sound Velocimeter (XBT/XSV) Expendable Profiling Systems from Lockheed Martin Sippican
(page 2 shows Jerome Namias with a bathythermograph)
Scripps Institution of Oceanography: Probing the Oceans 1936 to 1976
Oceanographic instrumentation
Anti-submarine warfare
Sonar | Bathythermograph | [
"Technology",
"Engineering"
] | 1,749 | [
"Oceanographic instrumentation",
"Measuring instruments"
] |
5,660,713 | https://en.wikipedia.org/wiki/Signed%20distance%20function | In mathematics and its applications, the signed distance function or signed distance field (SDF) is the orthogonal distance of a given point x to the boundary of a set Ω in a metric space (such as the surface of a geometric shape), with the sign determined by whether or not x is in the interior of Ω. The function has positive values at points x inside Ω, it decreases in value as x approaches the boundary of Ω where the signed distance function is zero, and it takes negative values outside of Ω. However, the alternative convention is also sometimes taken instead (i.e., negative inside Ω and positive outside). The concept also sometimes goes by the name oriented distance function/field.
Definition
Let Ω be a subset of a metric space X with metric d, and let ∂Ω be its boundary. The distance between a point x of X and the subset ∂Ω of X is defined as usual as
d(x, ∂Ω) = inf { d(x, y) : y ∈ ∂Ω },
where inf denotes the infimum.
The signed distance function f from a point x of X to Ω is defined by
f(x) = d(x, ∂Ω) if x ∈ Ω, and f(x) = −d(x, ∂Ω) if x ∉ Ω.
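For a concrete example of this definition and of the sign convention used here (positive inside Ω, negative outside), the following minimal Python sketch implements the signed distance function of a disk in the plane; the disk and the sample points are arbitrary choices for illustration.

import math

def signed_distance_disk(x, y, cx=0.0, cy=0.0, r=1.0):
    # Signed distance from (x, y) to the boundary of the disk of radius r
    # centred at (cx, cy): positive inside, zero on the circle, negative outside.
    return r - math.hypot(x - cx, y - cy)

for point in [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (2.0, 0.0)]:
    print(point, round(signed_distance_disk(*point), 3))
# (0.0, 0.0) -> 1.0  (centre, deepest inside)
# (1.0, 0.0) -> 0.0  (on the boundary)
# (2.0, 0.0) -> -1.0 (outside)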
Properties in Euclidean space
If Ω is a subset of the Euclidean space Rn with piecewise smooth boundary, then the signed distance function is differentiable almost everywhere, and its gradient satisfies the eikonal equation
|∇f| = 1.
If the boundary of Ω is Ck for k ≥ 2 (see Differentiability classes) then d is Ck on points sufficiently close to the boundary of Ω. In particular, on the boundary f satisfies
∇f(x) = N(x),
where N is the inward normal vector field. The signed distance function is thus a differentiable extension of the normal vector field. In particular, the Hessian of the signed distance function on the boundary of Ω gives the Weingarten map.
If, further, Γ is a region sufficiently close to the boundary of Ω that f is twice continuously differentiable on it, then there is an explicit formula involving the Weingarten map Wx for the Jacobian of changing variables in terms of the signed distance function and nearest boundary point. Specifically, if T(∂Ω, μ) is the set of points within distance μ of the boundary of Ω (i.e. the tubular neighbourhood of radius μ), and g is an absolutely integrable function on Γ, then
∫_T(∂Ω, μ) g(x) dx = ∫_∂Ω ∫_−μ^μ g(u + λN(u)) det(1 − λW_u) dλ dS_u,
where det denotes the determinant and dS_u indicates that we are taking the surface integral.
Algorithms
Algorithms for calculating the signed distance function include the efficient fast marching method, fast sweeping method and the more general level-set method.
For voxel rendering, a fast algorithm for calculating the SDF in taxicab geometry uses summed-area tables.
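On a discrete grid, a simple alternative to the fast marching and fast sweeping methods mentioned above is to combine two Euclidean distance transforms, one of the shape's interior and one of its exterior. The Python sketch below uses SciPy's distance transform for this; it is a brute-force illustration under the same sign convention as above, not an implementation of the level-set or fast-marching algorithms.

import numpy as np
from scipy import ndimage

def signed_distance_field(inside_mask, spacing=1.0):
    # Approximate signed distance field from a boolean mask of the interior:
    # positive inside, negative outside. Each distance transform gives, per
    # cell, the Euclidean distance to the nearest cell of the other region.
    inside = ndimage.distance_transform_edt(inside_mask, sampling=spacing)
    outside = ndimage.distance_transform_edt(~inside_mask, sampling=spacing)
    return inside - outside

# Example: a filled 3x3 square inside a 7x7 grid.
mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True
print(np.round(signed_distance_field(mask), 1))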
Applications
Signed distance functions are applied, for example, in real-time rendering, for instance the method of SDF ray marching, and computer vision.
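The core loop of SDF ray marching (often called sphere tracing) can be sketched in a few lines: at each step the ray advances by the signed distance at its current position, the largest step guaranteed not to pass through the surface. The scene below (a single unit sphere), the step limit, and the tolerance are arbitrary choices for illustration, and the graphics convention of positive-outside distances is used.

import math

def scene_sdf(x, y, z):
    # Example scene: a sphere of radius 1 centred at the origin
    # (positive outside, negative inside, as is usual in ray marching).
    return math.sqrt(x * x + y * y + z * z) - 1.0

def ray_march(origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    # March along a unit-length direction until the SDF falls below eps
    # (a hit) or the ray travels too far (a miss). Returns the hit distance or None.
    t = 0.0
    for _ in range(max_steps):
        px = origin[0] + t * direction[0]
        py = origin[1] + t * direction[1]
        pz = origin[2] + t * direction[2]
        d = scene_sdf(px, py, pz)
        if d < eps:
            return t          # close enough to the surface to count as a hit
        t += d                # safe step: cannot overshoot the nearest surface
        if t > max_dist:
            break
    return None

print(ray_march((0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))  # approximately 2.0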
SDF has been used to describe object geometry in real-time rendering, usually in a raymarching context, starting in the mid 2000s. By 2007, Valve was using SDFs to render large pixel-size (or high DPI) smooth fonts with GPU acceleration in its games. Valve's method is not perfect as it runs in raster space in order to avoid the computational complexity of solving the problem in the (continuous) vector space. The rendered text often loses sharp corners. In 2014, an improved method was presented by Behdad Esfahbod. Behdad's GLyphy approximates the font's Bézier curves with arc splines, accelerated by grid-based discretization techniques (which cull too-far-away points) to run in real time.
A modified version of SDF was introduced as a loss function to minimise the error in interpenetration of pixels while rendering multiple objects. In particular, for any pixel that does not belong to an object, no penalty is imposed if it also lies outside the object in the rendition; if it lies inside the object, a positive value proportional to its distance inside the object is imposed.
In 2020, the FOSS game engine Godot 4.0 received SDF-based real-time global illumination (SDFGI), which became a compromise between more realistic voxel-based GI and baked GI. Its core advantage is that it can be applied to infinite space, which allows developers to use it for open-world games.
In 2023, the authors of the Zed text editor announced a GPUI framework that draws all UI elements using the GPU at 120 fps. The work makes use of Inigo Quilez's list of geometric primitives in SDF, Figma co-founder Evan Wallace's Gaussian blur in SDF, and a new rounded rectangle SDF.
See also
Distance function
Level-set method
Eikonal equation
Parallel curve (also known as offset curve)
Signed arc length
Signed area
Signed measure
Signed volume
Notes
References
(or the Appendix of the 1977 1st ed.)
Applied mathematics
Distance
Sign (mathematics)
Implicit surface modeling | Signed distance function | [
"Physics",
"Mathematics"
] | 977 | [
"Physical quantities",
"Distance",
"Applied mathematics",
"Quantity",
"Sign (mathematics)",
"Mathematical objects",
"Size",
"Space",
"Spacetime",
"Wikipedia categories named after physical quantities",
"Numbers"
] |
5,661,318 | https://en.wikipedia.org/wiki/Sergey%20Mergelyan | Sergey Mergelyan (; 19 May 1928 – 20 August 2008) was a Soviet and Armenian mathematician, who made major contributions to the Approximation theory. The modern Complex Approximation Theory is based on Mergelyan's classical work. Corresponding Member of the Academy of Sciences of the Soviet Union (since 1953), member of NAS ASSR (since 1956).
The surname "Mergelov" given at birth was changed for patriotic reasons to the more Armenian-sounding "Mergelyan" by the mathematician himself before his trip to Moscow.
He was a laureate of the Stalin Prize (1952) and the Order of St. Mesrop Mashtots (2008). He was the youngest Doctor of Sciences in the history of the USSR (at the age of 20), and the youngest corresponding member of the Academy of Sciences of the Soviet Union (the title was conferred at the age of 24). During his postgraduate studies, the 20-year-old Mergelyan solved one of the fundamental problems of the mathematical theory of functions, which had not been solved for more than 70 years. His theorem on the possibility of uniform polynomial approximation of functions of a complex variable is recognized by the classical Mergelyan theorem, and is included in the course of the theory of functions.
Although he himself was not a computer designer, Mergelyan was a pioneer in Soviet computational mathematics.
Biography
Early years
Sergey Mergelyan was born on 19 May 1928 in Simferopol into an Armenian family. His father, Nikita (Mkrtich) Ivanovich Mergelov, was a former private entrepreneur (Nepman); his mother, Lyudmila Ivanovna Vyrodova, was the daughter of the manager of the Azov-Black Sea bank, who was shot in 1918. In 1936 Sergey's father was building a paper mill in Yelets, but soon he and his family were deported to the Siberian settlement of Narym, Tomsk Oblast. In the Siberian frost, Sergey suffered a serious illness and barely survived. In 1937 the mother and son were acquitted by a court decision and returned to Kerch, and in 1938 Lyudmila Ivanovna obtained (from the USSR Prosecutor General Andrey Vyshinsky) the rehabilitation of her husband. In 1941, as Hitler's armies advanced southward, the Mergelov family left Kerch and settled in Yerevan.
Education
Before the war, Mergelyan lived in Russia and studied at a secondary school in Kerch. When his family moved from Kerch to Yerevan at the end of 1941, he found himself in a completely unfamiliar environment and did not know Armenian at all. He studied at the Yerevan school named after Mravyan, and soon his abilities stood out. In 1943, Mergelyan won first place at the republican physics and mathematics Olympiad. In 1944, at age 16, he passed the examinations for grades 9-10 as an external student, graduated from high school and immediately entered the Physics and Mathematics Faculty of Yerevan State University (YSU).
He drew attention to himself at the university, where he completed the first and second years of study within a single year, and soon began attending the lectures of academician Artashes Shahinian, the founder of the Armenian mathematical school. In addition to studying and working in the seminar, Mergelyan taught in the mathematical circle at the Yerevan Palace of Pioneers. There he gave his imagination free rein, writing puzzles for children, holding competitions to solve particularly difficult problems and organizing mathematical games.
He completed the five-year university course in three years: in the first year he attended for only a few days, then passed the examinations as an external student and immediately moved on to the second year, and in 1946 he received his diploma.
At the same time, he restored the original Armenian form of his family's surname and received his diploma as Sergey Nikitovich Mergelyan.
After graduating from YSU in 1946, Mergelyan entered postgraduate study at the Steklov Institute of Mathematics under Mstislav Vsevolodovich Keldysh. Despite his colossal workload, Keldysh paid special attention to his new graduate student. They met mainly at Keldysh's house, at 8 or 9 o'clock in the evening, and held long conversations about mathematical problems.
Mergelyan wrote his thesis in the physical and mathematical sciences in a year and a half. The defense took place in 1949 and was brilliant. After a session of an hour and a half, the academic council announced that Mergelyan would be awarded a doctorate in physics and mathematics. Although Mergelyan had submitted the dissertation for the candidate (Ph.D.) degree, all three official opponents - Academician Lavrentyev, Sergey Nikolsky and Corresponding Member Alexander Gelfond - petitioned the Academic Council to award him the Doctor of Science degree.
The opponents' petition was granted (this required convening the members of the scientific council, which took time), and Mergelyan became the youngest doctor of physical and mathematical sciences in the USSR, at the age of 20.
To this day, this remains the record for the youngest award of the highest scientific degree (Doctor of Science) in the former USSR and present-day Russia.
Career
Mergelyan graduated from Yerevan University in 1947. From 1945 to 1957 he worked at the Yerevan University, and from 1954 to 1958 and from 1964 to 1968 at the Moscow State University named after MV Lomonosov.
When he was 24 he became a corresponding member of the Academy of Sciences of the Soviet Union, yet another absolute age record among Soviet scientists. He became a symbol of the young scientist in the USSR. Indira Gandhi, among other famous people in the USSR and abroad, was a friend of Mergelyan from the early 1950s; in 1978, after her official visit to Moscow, Gandhi also paid a private visit to Yerevan as Mergelyan's guest. In 1952 he was awarded the Stalin Prize.
Mergelyan was also a talented organizer of science. He played a leading role in establishing the Yerevan Scientific Research Institute of Mathematical Machines (YerSRIMM, known by its Russian abbreviation as YerNIIMM), founded on 14 July 1956. He became the institute's first director and headed it from 1956 to 1960. The institute soon became popularly known as the "Mergelyan Institute", an unofficial name that is still in use today.
In 1961 he returned to the field of pure mathematics and resumed work at the Steklov Mathematical Institute of the Academy of Sciences of the Soviet Union in Moscow. In 1963 he was elected deputy to the Academician-Secretary of the Department of Mathematics of the Academy of Sciences of the Soviet Union (Nikolai Nikolaevich Bogolyubov). In 1964 he was appointed head of the department of complex analysis at the Mathematical Institute, a position he retained until 2002; in the same year he was also reinstated as a professor of the Mechanics and Mathematics Faculty of Moscow State University.
In 1968, he again left the post of professor of the faculty and only engaged in scientific work. Mergelyan had "traveling permission", and often was on foreign business trips. In 1970 he gave a presentation as a guest speaker at the International Congress of Mathematicians in Nice.
Scientific works
Mergelyan's main works include theory of functions of complex variables, theory of approximation, and theory of potential and harmonic functions. In 1951 he formulated and proved the famous result from complex analysis called Mergelyan's theorem. This solved an old classical problem. The theorem completed a long series of studies, begun in 1885, and composed of the classical results of Karl Weierstrass, Carl Runge, J. Walsh, Mikhail Lavrentyev, Mstislav Keldysh and others. The new terms "Mergelyan's theorem" and "Mergelyan's sets" found their place in textbooks and monographs on approximation theory.
Several years later he solved another famous problem, the Sergei Bernstein Approximation Problem. Mergelyan also has many important results in other areas of complex analysis including the theory of pointwise approximations by polynomials.
His later research included the study of the approximation of continuous functions satisfying smoothness properties on an arbitrary set (1962) and the solution of Bernstein's approximation problem (1963). Mergelyan conducted in-depth studies and obtained valuable results in such areas as best approximation by polynomials on an arbitrary continuum, weighted approximation by polynomials on the real axis, pointwise approximation by polynomials on closed sets of the complex plane, uniform approximation by harmonic functions on compact sets and by entire functions on an unbounded continuum, and the uniqueness of harmonic functions. In the theory of differential equations, his results concerned the Cauchy problem and some other questions. Mergelyan's scientific achievements contributed significantly to the formation, development and international recognition of the Armenian mathematical school, as evidenced by the major international conference on the theory of functions organized in Yerevan in 1965 at the initiative and with the active participation of Sergey Mergelyan. Many prominent mathematicians from around the world took part in the conference, which promoted international cooperation and the further advancement of the Armenian mathematical school.
Death
Sergey Mergelyan died on 20 August 2008. The farewell ceremony took place on 23 August 2008 at the Glendale Cemetery in California. At the request of the deceased, his ashes were transported to Moscow and buried at Novodevichy Cemetery next to his mother and his wife.
Awards and prizes
Stalin Prize, 2nd class (1952) for works on the constructive theory of functions, completed by the article "Some Problems in the Constructive Theory of Functions", published in the Proceedings of the Steklov Mathematical Institute of the Academy of Sciences of the Soviet Union (1951)
Order of St. Mesrop Mashtots (26.05.2008) – On the occasion of the 80th anniversary of mathematician, the Consul General of Armenia in the USA handed over the Order of St. Mesrop Mashtots to the scientist and read the message of the President of Armenia, Serzh Sargsyan.
Order of the Red Banner of Labour (17.09.1975)
Works
«Некоторые вопросы конструктивной теории функций» (Труды Математического института АН СССР, т. 3, 1951)
«Равномерные приближения функций комплексного переменного» (Успехи математических наук, т. 8, вып. 2, 1952),
«О полноте систем аналитических функций» (Успехи математических наук, т. 7, вып. 4, 1953)
References
External links
National Academy of Sciences of Armenia
Russian Academy of Sciences
A Guide to the Russian Academy of Sciences, Part I, by Jack L. Cross
1928 births
2008 deaths
20th-century Armenian mathematicians
Scientists from Simferopol
Academic staff of Yerevan State University
Cornell University faculty
Corresponding Members of the Russian Academy of Sciences
Corresponding Members of the USSR Academy of Sciences
Yerevan Computer Research and Development Institute
Yerevan State University alumni
Recipients of the Order of the Red Banner of Labour
Recipients of the Stalin Prize
Mathematical analysts
Armenian academics
Armenian emigrants to the United States
Armenian mathematicians
Soviet Armenians
Soviet mathematicians
Burials at Novodevichy Cemetery | Sergey Mergelyan | [
"Mathematics"
] | 2,441 | [
"Mathematical analysis",
"Mathematical analysts"
] |
5,661,915 | https://en.wikipedia.org/wiki/Film%20title%20design | Film title design is a term describing the craft and design of motion picture title sequences. Since the beginning of the film form, it has been an essential part of any motion picture. Originally a motionless piece of artwork called title art, it slowly evolved into an artform of its own.
History
In the beginning, main title design consisted of the movie studio's name and/or logo and the presentation of the main characters along with the actors' names, generally using that same artwork presented on title cards. Most independent and major studios had their own title art logo used as the background for their screen credits, and they used it almost exclusively on every movie that they produced.
Then, early in the 1930s, the more progressive motion picture studios started to change their approach in presenting their screen credits. The major studios took on the challenge of improving the way they introduced their movies. They made the decision to present a more complete list of credits to go with a higher quality of artwork to be used in their screen credits.
Animated title design of this kind first appeared in 1955 in Otto Preminger's The Man with the Golden Arm. The sequence was built from many moving white lines and a white hand reaching into frame, providing small clues to the story's plot.
The 1960s was where the interest in title design really began to grow. Big studios were losing out to TV shows and needed ways to bring people back to the theater. With studios ready and wanting to invest more money into every part of films, title design became a great point of interest. Soon enough, a new generation of designers began to catch the attention of directors such as Alfred Hitchcock, Otto Preminger, and Stanley Donen.
In the 1970s, the impact of computer-aided title design began to rise. The application of new technology and software made experimentation easier and faster, further pushing the boundaries of what designers were capable of, including the combination of animation, cinematography, graphics, special effects, and typography.
A main title designer is the designer of the movie title. The manner in which the title of a movie is displayed on screen is widely considered an art form, and has often been classified as motion graphics, title design, title sequences and animated credits. The title sequence is often presented through animated visuals and kinetic type while the credits are introduced on screen. The Morrison Studio, led by title designers Richard Morrison and Dean Wares, is a title sequence company in both film and TV, with examples of title design from films such as Tim Burton's Batman (1989) and Sweeney Todd (2007) through to Creation Stories (2021).
From the mid-1930s through the late-1940s the major film studios led the way in Film Title Art by employing artists like Al Hirschfeld, George Petty, Ted Ireland (Vencentini), William Galraith Crawford, Symeon Shimin, and Jacques Kapralik.
Quality artists met this challenge by designing their artwork to "set a mood" and "capture the audience" before the movie started. An overall 10% jump in box-office receipts was proof that this was a profitable improvement to the introduction of their motion pictures.
Pacific Title & Art Studio was an American company founded in Hollywood in 1919 by Leon Schlesinger. Originally it produced title cards for silent films, but it later moved into film title design. One of its artists, Wayne Fitzgerald, was encouraged by Warren Beatty to design titles on his own. Phill Norman was another American film title designer working in the same period.
One famous example of the form is the work of Saul Bass in the 1950s and 1960s. His modish title sequences for the films of Alfred Hitchcock were key in setting the style and mood of the movie even before the action began, and contributed to Hitchcock's "house style" that was a key element in his approach to marketing. Another well known designer is Maurice Binder, who designed the often erotic titles for most of the James Bond films from the 1960s to the 1980s; Robert Brownjohn designed the titles for two of the films. After Binder's death, Daniel Kleinman has designed several of the titles.
However, the leader in the industry from the 1990s to 2000 was Cinema Research Corporation, with over 400 movie titles to its credit in that time period alone, and almost 700 titles in total from the 1950s to 2000.
Modern technology has enabled much more elaborate ways of presenting titles through the use of programs such as Adobe After Effects and Maxon Cinema4D. Although a form of editing, title design is considered a role and art form distinct from that of the traditional film editor.
Further reading
Art of the Title
References
External links
The Morrison Studio – Title sequence company, led by Richard Morrison and Dean Wares
Film and television opening sequences
Design
Film and video terminology | Film title design | [
"Engineering"
] | 959 | [
"Design"
] |
5,662,011 | https://en.wikipedia.org/wiki/Thompson%20subgroup | In finite group theory, a branch of mathematics, the Thompson subgroup of a finite p-group P refers to one of several characteristic subgroups of P. It was originally defined to be the subgroup generated by the abelian subgroups of P of maximal rank. More often the Thompson subgroup is defined to be the subgroup generated by the abelian subgroups of P of maximal order, or the subgroup generated by the elementary abelian subgroups of P of maximal rank. In general these three subgroups can be different, though they are all called the Thompson subgroup and denoted by J(P).
See also
Glauberman normal p-complement theorem
ZJ theorem
Puig subgroup, a subgroup analogous to the Thompson subgroup
References
Finite groups | Thompson subgroup | [
"Mathematics"
] | 141 | [
"Mathematical structures",
"Algebraic structures",
"Finite groups"
] |
5,662,689 | https://en.wikipedia.org/wiki/Reification%20%28information%20retrieval%29 | In information retrieval and natural language processing reification is the process by which an abstract idea about a person, place or thing, is turned into an explicit data model or other object created in a programming language, such as a feature set of demographic or psychographic attributes or both. By means of reification, something that was previously implicit, unexpressed, and possibly inexpressible is explicitly formulated and made available to conceptual (logical or computational) manipulation.
The process by which a natural language statement is transformed so actions and events in it become quantifiable variables is semantic parsing. For example "John chased the duck furiously" can be transformed into something like
(Exists e)(chasing(e) & past_tense(e) & actor(e,John) & furiously(e) & patient(e,duck)).
Another example would be "Sally said John is mean", which could be expressed as something like
(Exists u,v)(saying(u) & past_tense(u) & actor(u,Sally) & that(u,v) & is(v) & actor(v,John) & mean(v)).
Such formal meaning representations allow one to use the tools of classical first-order predicate calculus even for statements which, due to their use of tense, modality, adverbial constructions, propositional arguments (e.g. "Sally said that X"), etc., would have seemed intractable. This is an advantage because predicate calculus is better understood and simpler than the more complex alternatives (higher-order logics, modal logics, temporal logics, etc.), and there exist better automated tools (e.g. automated theorem provers and model checkers) for manipulating it.
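As an illustrative sketch (not part of the article), the reified event variable e in the representations above can be stored as an explicit record, so that tense, manner and thematic roles become ordinary attributes that a program can query:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    predicate: str                        # e.g. "chasing"
    tense: str                            # e.g. "past"
    roles: dict = field(default_factory=dict)
    manner: list = field(default_factory=list)

# "John chased the duck furiously" as a reified event object.
chase = Event("chasing", "past",
              roles={"actor": "John", "patient": "duck"},
              manner=["furiously"])
print(chase.roles["actor"], chase.manner)  # John ['furiously']
```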
Meaning representations can be used for other purposes besides the application of first-order logic; one example is the automatic discovery of synonymous phrases.
The meaning representations are sometimes called quasi-logical forms, and the existential variables are sometimes treated as Skolem constants.
Not all natural language constructs admit a uniform translation to first order logic. See donkey sentence for examples and a discussion.
See also
Drinker paradox
Nonfirstorderizability
Reification (computer science)
Reification (fallacy)
Reification (knowledge representation)
References
Computational linguistics | Reification (information retrieval) | [
"Technology"
] | 478 | [
"Natural language and computing",
"Computational linguistics"
] |
5,663,113 | https://en.wikipedia.org/wiki/Flue-gas%20stack | A flue-gas stack, also known as a smoke stack, chimney stack or simply as a stack, is a type of chimney, a vertical pipe, channel or similar structure through which flue gases are exhausted to the outside air. Flue gases are produced when coal, oil, natural gas, wood or any other fuel is combusted in an industrial furnace, a power plant's steam-generating boiler, or other large combustion device. Flue gases can also be produced from chemical or physical processes that do not involve combustion, such as natural gas processing.
Flue gas from combustion is usually composed of carbon dioxide (CO2) and water vapor, as well as nitrogen and excess oxygen remaining from the intake combustion air. It also contains a small percentage of pollutants such as particulate matter, carbon monoxide, nitrogen oxides and sulfur oxides. The flue gas stacks are often quite tall, sometimes several hundred metres, to increase the stack effect and dispersion of pollutants.
When the flue gases are exhausted from stoves, ovens, fireplaces, heating furnaces and boilers, or other small sources within residential abodes, restaurants, hotels, or other public buildings and small commercial enterprises, their flue gas stacks are referred to as chimneys.
History
The first industrial chimneys were built in the mid-17th century, when it was first understood how they could improve the combustion of a furnace by increasing the draft of air into the combustion zone. As such, they played an important part in the development of reverberatory furnaces and a coal-based metallurgical industry, one of the key sectors of the early Industrial Revolution. Most 18th-century industrial chimneys (now commonly referred to as flue gas stacks) were built into the walls of the furnace, much like a domestic chimney. The first freestanding industrial chimneys were probably those erected at the end of the long condensing flues associated with smelting lead.
The powerful association between industrial chimneys and the characteristic smoke-filled landscapes of the industrial revolution was due to the universal application of the steam engine for most manufacturing processes. The chimney is part of a steam-generating boiler, and its evolution is closely linked to increases in the power of the steam engine. The chimneys of Thomas Newcomen’s steam engine were incorporated into the walls of the engine house. The taller, freestanding industrial chimneys that appeared in the early 19th century were related to the changes in boiler design associated with James Watt’s "double-powered" engines, and they continued to grow in stature throughout the Victorian period. Decorative embellishments are a feature of many industrial chimneys from the 1860s, with over-sailing caps and patterned brickwork.
The invention of fan-assisted forced draft in the early 20th century removed the industrial chimney's original function, that of drawing air into the steam-generating boilers or other furnaces. With the replacement of the steam engine as a prime mover, first by diesel engines and then by electric motors, the early industrial chimneys began to disappear from the industrial landscape. Building materials changed from stone and brick to steel and later reinforced concrete, and the height of the industrial chimney was determined by the need to disperse combustion flue gases to comply with governmental air pollution control regulations.
Flue-gas stack draft
The combustion flue gases inside the flue gas stacks are much hotter than the ambient outside air and therefore less dense than the ambient air. That causes the bottom of the vertical column of hot flue gas to have a lower pressure than the pressure at the bottom of a corresponding column of outside air. That higher pressure outside the chimney is the driving force that moves the required combustion air into the combustion zone and also moves the flue gas up and out of the chimney. That movement or flow of combustion air and flue gas is called "natural draft", "natural ventilation", "chimney effect", or "stack effect". The taller the stack, the more draft is created.
The equation below provides an approximation of the pressure difference, ΔP, (between the bottom and the top of the flue gas stack) that is created by the draft:
ΔP = C a h (1/To − 1/Ti)
where:
ΔP: available pressure difference, in Pa
C = 0.0342
a: atmospheric pressure, in Pa
h: height of the flue gas stack, in m
To: absolute outside air temperature, in K
Ti: absolute average temperature of the flue gas inside the stack, in K.
The above equation is an approximation because it assumes that the molar mass of the flue gas and the outside air are equal and that the pressure drop through the flue gas stack is quite small. Both assumptions are fairly good but not exactly accurate.
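A small worked example (illustrative values only, added here) evaluates this approximation numerically:

```python
def stack_draft_pressure(a_pa, height_m, t_out_k, t_in_k, c=0.0342):
    """Approximate available draft pressure difference, Pa: C a h (1/To - 1/Ti)."""
    return c * a_pa * height_m * (1.0 / t_out_k - 1.0 / t_in_k)

# 100 m stack at sea-level pressure, 20 degC outside air, 200 degC average flue gas.
print(round(stack_draft_pressure(101325.0, 100.0, 293.15, 473.15), 1))  # about 450 Pa
```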
Flue-gas flow-rate induced by the draft
As a "first guess" approximation, the following equation can be used to estimate the flue-gas flow-rate induced by the draft of a flue-gas stack. The equation assumes that the molar mass of the flue gas and the outside air are equal and that the frictional resistance and heat losses are negligible:.
where:
Q: flue-gas flow-rate, m³/s
A: cross-sectional area of chimney, m2 (assuming it has a constant cross-section)
C : discharge coefficient (usually taken to be 0.65–0.70)
g: gravitational acceleration at sea level = 9.807 m/s²
H : height of chimney, m
Ti : absolute average temperature of the flue gas in the stack, K
To : absolute outside air temperature, K
Also, this equation is only valid when the resistance to the draft flow is caused by a single orifice characterized by the discharge coefficient C. In many, if not most situations, the resistance is primarily imposed by the flue stack itself. In these cases, the resistance is proportional to the stack height H. This causes a cancellation of the H in the above equation predicting Q to be invariant with respect to the flue height.
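An illustrative numerical evaluation of the first-guess formula above (example values chosen here, not taken from the article):

```python
import math

def stack_flow_rate(area_m2, height_m, t_in_k, t_out_k, c=0.65, g=9.807):
    """First-guess volumetric flue-gas flow rate, m^3/s: C A sqrt(2 g H (Ti - To) / Ti)."""
    return c * area_m2 * math.sqrt(2.0 * g * height_m * (t_in_k - t_out_k) / t_in_k)

# 2 m^2 cross-section, 100 m tall stack, 473 K flue gas, 293 K outside air.
print(round(stack_flow_rate(2.0, 100.0, 473.15, 293.15), 1))  # about 35.5 m^3/s
```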
Designing chimneys and stacks to provide the correct amount of natural draft involves a great many factors such as:
The height and diameter of the stack.
The desired amount of excess combustion air needed to assure complete combustion.
The temperature of the flue gases leaving the combustion zone.
The composition of the combustion flue gas, which determines the flue-gas density.
The frictional resistance to the flow of the flue gases through the chimney or stack, which will vary with the materials used to construct the chimney or stack.
The heat loss from the flue gases as they flow through the chimney or stack.
The local atmospheric pressure of the ambient air, which is determined by the local elevation above sea level.
The calculation of many of the above design factors requires trial-and-error reiterative methods.
Government agencies in most countries have specific codes which govern how such design calculations must be performed. Many non-governmental organizations also have codes governing the design of chimneys and stacks (notably, the ASME codes).
Stack design
The design of large stacks poses considerable engineering challenges. Vortex shedding in high winds can cause dangerous oscillations in the stack, and may lead to its collapse. The use of helical strake is common to prevent this process occurring at or close to the resonant frequency of the stack.
Other items of interest
Some fuel-burning industrial equipment does not rely upon natural draft. Many such equipment items use large fans or blowers to accomplish the same objectives, namely: the flow of combustion air into the combustion chamber and the flow of the hot flue gas out of the chimney or stack.
A great many power plants are equipped with facilities for the removal of sulfur dioxide (i.e., flue-gas desulfurization), nitrogen oxides (i.e., selective catalytic reduction, exhaust gas recirculation, thermal deNOx, or low NOx burners) and particulate matter (i.e., electrostatic precipitators). At such power plants, it is possible to use a cooling tower as a flue gas stack. Examples can be seen in Germany at the Power Station Staudinger Grosskrotzenburg and at the Rostock Power Station. Power plants without flue gas purification would experience serious corrosion in such stacks.
In the United States and a number of other countries, atmospheric dispersion modeling studies are required to determine the flue gas stack height needed to comply with the local air pollution regulations. The United States also limits the maximum height of a flue gas stack to what is known as the "Good Engineering Practice" (GEP) stack height. In the case of existing flue gas stacks that exceed the GEP stack height, any air pollution dispersion modelling studies for such stacks must use the GEP stack height rather than the actual stack height.
See also
Chimney
Flue gas
Flue-gas desulfurization
Flue-gas emissions from fossil-fuel combustion
Incineration
Stack effect
List of tallest chimneys
References
External links
ASHRAE's Fundamentals Handbook is available here from ASHRAE
ASME Codes and Standards available from ASME
Air pollution
Atmospheric dispersion modeling
Combustion
Incineration
Industrial furnaces
Industrial processes
ru:Дымовая труба | Flue-gas stack | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,889 | [
"Metallurgical processes",
"Combustion engineering",
"Incineration",
"Combustion",
"Atmospheric dispersion modeling",
"Industrial furnaces",
"Environmental engineering",
"Environmental modelling"
] |
5,663,769 | https://en.wikipedia.org/wiki/Enormous%20Toroidal%20Plasma%20Device | The Enormous Toroidal Plasma Device (ETPD) is an experimental physics device housed at the Basic Plasma Science Facility at University of California, Los Angeles (UCLA). It previously operated as the Electric Tokamak (ET) between 1999 and 2006 and was noted for being the world's largest tokamak before being decommissioned due to the lack of support and funding. The machine was renamed to ETPD in 2009. At present, the machine is undergoing upgrades to be re-purposed into a general laboratory for experimental plasma physics research.
As the Electric Tokamak
The Electric Tokamak (ET) was the last of a series of small tokamak machines built in 1998 under the direction of principal investigator and designer, Robert Taylor, a UCLA professor. The machine was designed to be a low field (0.25 T) magnetic confinement fusion device with a large aspect ratio. It is composed of 16 vacuum chambers made of 1-inch thick steel, with a major radius of 5 meters and a minor radius of 1 meter. The ET was the largest tokamak ever built at its time, with a vacuum vessel slightly bigger than that of the Joint European Torus.
The first plasma was achieved in January 1999. The ET is capable of producing a plasma current of 45 kiloamperes and can produce a core electron plasma temperature of 300 eV.
Four sets of independent coils are necessary for OH (ohmic heating) current drive, vertical equilibrium field, plasma elongation and plasma shaping (D or reverse-D). The OH system provides 10 V·s using a 10 kA power supply. Up to 0.1 T of vertical field can be applied for horizontal control and this is more than sufficient for all plasma configurations, including high beta. An additional set of coils provide a small horizontal field to correct for error field and to stabilize the plasma vertically. All the coils are located outside the vessel and are constructed out of aluminium.
A Rogowski probe outside the vessel and sets of Hall probes inside the vessel are used to monitor plasma current, position and shaping and are used in the control feedback loop. The poloidal system was designed using an in-house equilibrium code as well as a variety of other codes in order to cross-check computations and to assess the stability of the resulting plasma.
Like most tokamaks, the machine uses a combination of RF heating and neutral beam injection to drive and shape the plasma.
Decommission in 2006
In 2006, the ET had run out of funding and was decommissioned following the retirement of Taylor. Factors leading to loss of funding are attributed to the lack of extensive plasma diagnostics, its large size, and its place in the politics of fusion. When it was operating, the ET was funded mostly by the Department of Energy (DOE).
As the Enormous Toroidal Plasma Device
In 2009, the Electric Tokamak (ET) was renamed to the Enormous Toroidal Plasma Device (ETPD) and was re-purposed for basic plasma research. A lanthanum hexaboride (LaB6) plasma source was developed for the ETPD (similar to the one used in the Large Plasma Device), and is capable of producing a long column of magnetized plasma (~100 m) that winds itself multiple times along the toroidal axis of the machine. The plasma column was shown to be current-free and terminates on the neutral gas within the chamber without touching the machine walls.
The typical operational parameters of the ETPD are:
Density: n ≤ 3 × 1013 cm−3
Electron Temperature: 5 eV < Te < 30 eV
Ion Temperature 1 eV < Ti < 16 eV
Background field: B = 250 gauss (25 mT)
Plasma beta: β ~ 1
The ETPD is currently in the process of being upgraded (e.g. with larger sources and better diagnostic capabilities) to support a wide range of plasma physics experiments.
See also
Large Plasma Device, a linear plasma device housed in the same facility as the ETPD
References
External links
"Installation and Initial Testing of the Electric Tokamak Folded Waveguide" with photos.
UCLA Electric Tokamak Homepage
UCLA Tokamak Research
Plasma physics facilities
Tokamaks
University of California, Los Angeles buildings and structures | Enormous Toroidal Plasma Device | [
"Physics"
] | 859 | [
"Plasma physics facilities",
"Plasma physics"
] |
5,663,939 | https://en.wikipedia.org/wiki/Luxembourg%20Internet%20Exchange | The Luxembourg Internet eXchange (LU-CIX) is a facility for Internet Service Providers (ISPs) based in Luxembourg, allowing them to interconnect within Luxembourg and hence improve connectivity and service for their customers. The LU-CIX is an association with a neutral and open philosophy.
References
External links
Official LU-CIX information
Internet exchange points in Luxembourg
Telecommunications in Luxembourg | Luxembourg Internet Exchange | [
"Technology"
] | 84 | [
"Computing stubs",
"Computer network stubs"
] |
5,664,123 | https://en.wikipedia.org/wiki/Immunoglobulin%20superfamily | The immunoglobulin superfamily (IgSF) is a large protein superfamily of cell surface and soluble proteins that are involved in the recognition, binding, or adhesion processes of cells. Molecules are categorized as members of this superfamily based on shared structural features with immunoglobulins (also known as antibodies); they all possess a domain known as an immunoglobulin domain or fold. Members of the IgSF include cell surface antigen receptors, co-receptors and co-stimulatory molecules of the immune system, molecules involved in antigen presentation to lymphocytes, cell adhesion molecules, certain cytokine receptors and intracellular muscle proteins. They are commonly associated with roles in the immune system. In addition, the sperm-specific protein IZUMO1, a member of the immunoglobulin superfamily, has also been identified as the only sperm membrane protein essential for sperm-egg fusion.
Immunoglobulin domains
Proteins of the IgSF possess a structural domain known as an immunoglobulin (Ig) domain. Ig domains are named after the immunoglobulin molecules. They contain about 70-110 amino acids and are categorized according to their size and function. Ig-domains possess a characteristic Ig-fold, which has a sandwich-like structure formed by two sheets of antiparallel beta strands. Interactions between hydrophobic amino acids on the inner side of the sandwich and highly conserved disulfide bonds formed between cysteine residues in the B and F strands, stabilize the Ig-fold.
Classification
The Ig like domains can be classified as IgV, IgC1, IgC2, or IgI.
Most Ig domains are either variable (IgV) or constant (IgC).
IgV: IgV domains with 9 beta strands are generally longer than IgC domains with 7 beta strands.
IgC1 and IgC2: Ig domains of some members of the IgSF resemble IgV domains in the amino acid sequence, yet are similar in size to IgC domains. These are called IgC2 domains, while standard IgC domains are called IgC1 domains.
IgI: Other Ig domains exist that are called intermediate (I) domains.
Members
The Ig domain was reported to be the most populous family of proteins in the human genome with 765 members identified. Members of the family can be found even in the bodies of animals with a simple physiological structure such as poriferan sponges. They have also been found in bacteria, where their presence is likely to be due to divergence from a shared ancestor of eukaryotic immunoglobulin superfamily domains.
References
External links
Transmembrane human proteins from immunoglobulin superfamily classified as receptors, ligands and adhesion proteins
Immunoglobulin domain in SUPERFAMILY
Receptors
Immunology
Protein superfamilies | Immunoglobulin superfamily | [
"Chemistry",
"Biology"
] | 609 | [
"Protein classification",
"Signal transduction",
"Immunology",
"Receptors",
"Protein superfamilies"
] |
5,664,126 | https://en.wikipedia.org/wiki/Henyey%20track | The Henyey track is a path taken by pre-main-sequence stars with masses greater than 0.5 solar masses in the Hertzsprung–Russell diagram after the end of the Hayashi track. The astronomer Louis G. Henyey and his colleagues in the 1950s showed that the pre-main-sequence star can remain in radiative equilibrium throughout some period of its contraction to the main sequence.
The Henyey track is characterized by a slow collapse in near hydrostatic equilibrium, approaching the main sequence almost horizontally in the Hertzsprung–Russell diagram (i.e. the luminosity remains almost constant).
Deviation from Hayashi Track
The equation for radiative heat transfer relates the opacity (κ) to the temperature gradient. Stars with high opacity transport heat by convection, while stars with low opacity transport it by radiation.
Protostars on the Hayashi track are fully convective and, owing to the large abundance of H- ions, are optically thick. These stars continue to contract until the central core reaches a temperature threshold at which the H- ions break apart, causing a decrease in opacity.
When and for how long a star moves from the Hayashi track onto the Henyey track depends heavily on its initial mass. Stars that are massive enough (above about 0.6 solar mass) deviate onto the Henyey track, depicted as a near-horizontal line on an HR diagram. A core that becomes sufficiently hot becomes less opaque, making convection inefficient; the core instead becomes fully radiative to transfer its thermal energy. During this phase the luminosity stays constant or gradually increases, with the temperature increasing as the core undergoes radiative contraction. At the end of the track the star begins nuclear burning, but it experiences a dip in luminosity before it reaches the main sequence.
More massive stars leave the Hayashi track quickly, while lower-mass stars leave it later. Stars that are not sufficiently massive, on the other hand, never develop a radiative core, as the core does not become hot enough; instead, they remain on the Hayashi track until they reach the main sequence.
See also
Stellar evolution
Stellar birthline
Stellar isochrone
References
Further reading
Stellar evolution
Hertzsprung–Russell classifications | Henyey track | [
"Physics",
"Astronomy"
] | 496 | [
"Astronomy stubs",
"Astrophysics",
"Stellar evolution",
"Stellar astronomy stubs",
"Astrophysics stubs"
] |
5,664,494 | https://en.wikipedia.org/wiki/Neutral-beam%20injection | Neutral-beam injection (NBI) is one method used to heat plasma inside a fusion device, consisting of a beam of high-energy neutral particles that can enter the magnetic confinement field. When these neutral particles are ionized by collision with the plasma particles, they are kept in the plasma by the confining magnetic field and can transfer most of their energy by further collisions with the plasma. By tangential injection in the torus, neutral beams also provide momentum to the plasma and current drive, an essential feature for long pulses of burning plasmas. Neutral-beam injection is a flexible and reliable technique, which has been the main heating system on a large variety of fusion devices. Until the 1990s, all NBI systems were based on positive precursor ion beams. In the 1990s there was impressive progress in negative ion sources and accelerators with the construction of multi-megawatt negative-ion-based NBI systems at LHD (H0, 180 keV) and JT-60U (D0, 500 keV). The NBI designed for ITER is a substantial challenge (D0, 1 MeV, 40 A) and a prototype is being constructed to optimize its performance in view of ITER's future operations. Other ways to heat plasma for nuclear fusion include RF heating, electron cyclotron resonance heating (ECRH), ion cyclotron resonance heating (ICRH), and lower hybrid resonance heating (LH).
Mechanism
This is typically done by:
Making a plasma. This can be done by microwaving a low-pressure gas.
Electrostatic ion acceleration. This is done dropping the positively charged ions towards negative plates. As the ions fall, the electric field does work on them, heating them to fusion temperatures.
Reneutralizing the hot plasma by adding in the opposite charge. This gives the fast-moving beam with no charge.
Injecting the fast-moving hot neutral beam in the machine.
It is critical to inject neutral material into plasma, because if it is charged, it can start harmful plasma instabilities. Most fusion devices inject isotopes of hydrogen, such as pure deuterium or a mix of deuterium and tritium. This material becomes part of the fusion plasma. It also transfers its energy into the existing plasma within the machine. This hot stream of material should raise the overall temperature. Although the beam has no electrostatic charge when it enters, as it passes through the plasma the atoms are ionized by collisions with the ions already in the plasma.
Neutral-beam injectors installed in fusion experiments
At present, all main fusion experiments use NBIs. Traditional positive-ion-based injectors (P-NBI) are installed for instance in JET and in AUG. To allow power deposition in the center of the burning plasma in larger devices, a higher neutral-beam energy is required. High-energy (>100 keV) systems require the use of negative ion technology (N-NBI).
Legend
Coupling with fusion plasma
Because the magnetic field inside the torus is circular, these fast ions are confined to the background plasma. The confined fast ions mentioned above are slowed down by the background plasma, in a similar way to how air resistance slows down a baseball. The energy transfer from the fast ions to the plasma increases the overall plasma temperature.
It is very important that the fast ions are confined within the plasma long enough for them to deposit their energy. Magnetic fluctuations are a big problem for plasma confinement in this type of device (see plasma stability) by scrambling what were initially well-ordered magnetic fields. If the fast ions are susceptible to this type of behavior, they can escape very quickly. However, some evidence suggests that they are not susceptible.
The interaction of fast neutrals with the plasma consist of
ionisation by collision with plasma electrons and ions,
drift of newly created fast ions in the magnetic field,
collisions of fast ions with plasma ions and electrons by Coulomb collisions (slow-down and scattering, thermalisation) or charge exchange collisions with background neutrals.
Design of neutral beam systems
Beam energy
The absorption length for neutral-beam ionization in a plasma scales roughly in proportion to E/(nM),
with the absorption length in m, particle density n in 1019 m−3, atomic mass M in amu, and particle energy E in keV. Depending on the plasma minor diameter and density, a minimum particle energy can be defined for the neutral beam, in order to deposit sufficient power on the plasma core rather than at the plasma edge.
For a fusion-relevant plasma, the required fast neutral energy is in the range of 1 MeV. With increasing energy, it is increasingly difficult to obtain fast hydrogen atoms starting from precursor beams composed of positive ions. For that reason, recent and future heating neutral beams will be based on negative-ion beams. In the interaction with the background gas, it is much easier to detach the extra electron from a negative ion (H− has a binding energy of 0.75 eV and a very large cross-section for electron detachment in this energy range) than to attach an electron to a positive ion.
Charge state of the precursor ion beam
A neutral beam is obtained by neutralisation of a precursor ion beam, commonly accelerated in large electrostatic accelerators. The precursor beam could either be a positive-ion beam or a negative-ion beam: in order to obtain a sufficiently high current, it is produced extracting charges from a plasma discharge.
However, few negative hydrogen ions are created in a hydrogen plasma discharge. In order to generate a sufficiently high negative-ion density and obtain a decent negative-ion beam current, caesium vapors are added to the plasma discharge (surface-plasma negative-ion sources). Caesium, deposited at the source walls, is an efficient electron donor; atoms and positive ions scattered at caesiated surface have a relatively high probability of being scattered as negatively charged ions. Operation of caesiated sources is complex and not so reliable. The development of alternative concepts for negative-ion beam sources is mandatory for the use of neutral beam systems in future fusion reactors.
Existing and future negative-ion-based neutral beam systems (N-NBI) are listed in the following table:
Ion beam neutralisation
Neutralisation of the precursor ion beam is commonly performed by passing the beam through a gas cell. For a precursor negative-ion beam at fusion-relevant energies, the key collisional processes are:
D− + D2 → D0 + e + D2 (single-electron detachment, with cross-section σ−10 = 1.13×10−20 m2 at 1 MeV)
D− + D2 → D+ + 2e + D2 (double-electron detachment, with σ−11 = 7.22×10−22 m2 at 1 MeV)
D0 + D2 → D+ + e + D2 (reionization, with σ01 = 3.79×10−21 m2 at 1 MeV)
D+ + D2 → D0 + D2+ (charge exchange, σ10 negligible at 1 MeV)
Underline indicates the fast particle, while the subscripts i, j of the cross-section σij indicate the charge state of the fast particle before and after the collision.
Cross-sections at 1 MeV are such that, once created, a fast positive ion cannot be converted into a fast neutral, and this is the cause of the limited achievable efficiency of gas neutralisers.
The fractions of negatively charged, positively charged, and neutral particles exiting the neutraliser gas cell depend on the integrated gas density, or target thickness, ∫n dl, where n is the gas density along the beam path. In the case of D− beams at 1 MeV, with the cross-sections quoted above, the maximum neutralisation yield occurs at a target thickness of roughly 1.4×1020 m−2.
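A rough numerical sketch (added for illustration; it assumes the standard three-species rate equations for a negative precursor beam and uses the 1 MeV cross-sections listed above) reproduces this optimum target thickness:

```python
# Charge-state fractions of an initially pure D- beam versus neutraliser target thickness.
S_M10, S_M11, S_01 = 1.13e-20, 7.22e-22, 3.79e-21   # m^2, cross-sections at 1 MeV (see above)

def charge_fractions(target_thickness, steps=20000):
    """Return (F-, F0, F+) after a target thickness given in m^-2 (simple Euler integration)."""
    f_neg, f_0, f_pos = 1.0, 0.0, 0.0
    dpi = target_thickness / steps
    for _ in range(steps):
        detach = S_M10 * f_neg * dpi      # D-  -> D0 (single-electron detachment)
        strip = S_M11 * f_neg * dpi       # D-  -> D+ (double-electron detachment)
        reion = S_01 * f_0 * dpi          # D0  -> D+ (reionization)
        f_neg -= detach + strip
        f_0 += detach - reion
        f_pos += strip + reion
    return f_neg, f_0, f_pos

# Scan the thickness to find the value giving the best neutral yield F0.
best_f0, best_pi = max((charge_fractions(t * 1e19)[1], t * 1e19) for t in range(1, 40))
print(round(best_f0, 2), best_pi)   # roughly 0.55 at about 1.4e20 m^-2
```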
Typically, the background gas density must be minimised all along the beam path (i.e. within the accelerating electrodes and along the duct connecting to the fusion plasma) to minimise losses everywhere except in the neutraliser cell. Therefore, the required target thickness for neutralisation is obtained by injecting gas into a cell with two open ends. A peaked density profile is realised along the cell when injection occurs at mid-length.
For a given gas throughput [Pa·m3/s], the maximum gas pressure at the centre of the cell depends on the gas conductance [m3/s], which in the molecular-flow regime can be calculated from the geometric parameters of the cell, the gas molecule mass, and the gas temperature.
Very high gas throughput is commonly adopted, and neutral-beam systems have custom vacuum pumps among the largest ever built, with pumping speeds in the range of million liters per second. If there are no space constraints, a large gas cell length is adopted, but this solution is unlikely in future devices due to the limited volume inside the bioshield protecting from energetic neutron flux (for instance, in the case of JT-60U the N-NBI neutraliser cell is about 15 m long, while in the ITER HNB its length is limited to 3 m).
See also
ITER Neutral Beam Test Facility
References
External links
Thermonuclear Fusion Test Reactor with neutral beam injector at PPPL
Auxiliary heating in ITER
IPP website about NBI technology
Fusion power | Neutral-beam injection | [
"Physics",
"Chemistry"
] | 1,896 | [
"Nuclear fusion",
"Fusion power",
"Plasma physics"
] |
5,664,562 | https://en.wikipedia.org/wiki/Dry%20run%20%28testing%29 | A dry run (or practice run) is a software testing process used to make sure that a system works correctly and will not result in severe failure. For example, rsync, a utility for transferring and synchronizing data between networked computers or storage drives, has a "dry-run" option users can use to check that their command-line arguments are valid and to simulate what would happen when actually copying the data.
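As an illustrative sketch of the general pattern (not rsync's actual implementation; the function and file names here are invented for the example), a program can offer a dry-run switch that reports what it would do without doing it:

```python
import shutil
from pathlib import Path

def sync_files(sources, dest_dir, dry_run=True):
    """Copy each source file into dest_dir, or merely report the plan when dry_run is True."""
    dest = Path(dest_dir)
    for src in map(Path, sources):
        target = dest / src.name
        if dry_run:
            print(f"would copy {src} -> {target}")   # report only; nothing is modified
        else:
            dest.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, target)                # perform the real copy

sync_files(["notes.txt"], "backup", dry_run=True)    # safe rehearsal before the real run
```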
In acceptance procedures (such as factory acceptance testing, for example), a "dry run" is when the factory, a subcontractor, performs a complete test of the system it has to deliver before it is actually accepted by the customer.
Etymology
The term dry run appears to have originated from fire departments in the US. In order to practice, they would carry out dispatches of the fire brigade where water was not pumped. A run with real fire and water was referred to as a wet run. The more general usage of the term seems to have arisen from widespread use by the United States Armed Forces during World War II.
See also
Code review
Pilot experiment
Preview (computing)
References
External links
World Wide Words: Dry Run
Wiktionary - dry run
Tests
Software testing | Dry run (testing) | [
"Engineering"
] | 247 | [
"Software engineering",
"Software testing"
] |
5,664,841 | https://en.wikipedia.org/wiki/Lorcon | lorcon (acronym for Loss Of Radio CONnectivity) is an open source network tool. It is a library for injecting 802.11 (WLAN) frames, capable of injecting via multiple driver frameworks, without the need to change the application code. Lorcon is built by patching the third-party MadWifi-driver for cards based on the Qualcomm Atheros wireless chipset.
The project is maintained by Joshua Wright and Michael Kershaw ("dragorn").
References
External links
Official Home Page
Network analyzers
Unix security-related software
Unix network-related software
Computer security exploits
IEEE 802.11 | Lorcon | [
"Technology"
] | 135 | [
"Computer security exploits"
] |
5,664,906 | https://en.wikipedia.org/wiki/Slip%20melting%20point | The Slip melting point (SMP) or "slip point" is one conventional definition of the melting point of a waxy solid. It is determined by casting a 10 mm column of the solid in a glass tube with an internal diameter of about 1 mm and a length of about 80 mm, and then immersing it in a temperature-controlled water bath. The slip point is the temperature at which the column of the solid begins to rise in the tube due to buoyancy, and because the outside surface of the solid is molten.
This is a popular method for fats and waxes, because they tend to be mixtures of compounds with a range of molecular masses, without well-defined melting points.
References
Phase transitions
Threshold temperatures | Slip melting point | [
"Physics",
"Chemistry"
] | 151 | [
"Statistical mechanics stubs",
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Threshold temperatures",
"Statistical mechanics",
"Matter"
] |
5,665,192 | https://en.wikipedia.org/wiki/Energy%20crop | Energy crops are low-cost and low-maintenance crops grown solely for renewable bioenergy production (not for food). The crops are processed into solid, liquid or gaseous fuels, such as pellets, bioethanol or biogas. The fuels are burned to generate electrical power or heat.
The plants are generally categorized as woody or herbaceous. Woody plants include willow and poplar, herbaceous plants include Miscanthus x giganteus and Pennisetum purpureum (both known as elephant grass). Herbaceous crops, while physically smaller than trees, store roughly twice the amount of CO2 (in the form of carbon) below ground compared to woody crops.
Through biotechnological procedures such as genetic modification, plants can be manipulated to create higher yields. Relatively high yields can also be realized with existing cultivars. However, some additional advantages such as reduced associated costs (i.e. costs during the manufacturing process) and less water use can only be accomplished by using genetically modified crops.
Types
Solid biomass
Solid biomass, often pelletized, is used for combustion in thermal power stations, either alone or co-fired with other fuels. Alternatively it may be used for heat or combined heat and power (CHP) production.
In short rotation coppice (SRC) agriculture, fast growing tree species like willow and poplar are grown and harvested in short cycles of three to five years. These trees grow best in wet soil conditions. An influence on local water conditions can not be excluded. Establishment close to vulnerable wetland should be avoided.
Gas biomass (methane)
Whole crops such as maize, Sudan grass, millet, white sweet clover, and many others can be made into silage and then converted into biogas.
Anaerobic digesters or biogas plants can be directly supplemented with energy crops once they have been ensiled into silage. The fastest-growing sector of German biofarming has been in the area of "renewable energy crops" (2006). Energy crops can also be grown to boost gas yields where feedstocks have a low energy content, such as manures and spoiled grain. Bioenergy crops converted via silage to methane currently give a substantial annual energy yield per unit of land. Small mixed cropping enterprises with animals can use a portion of their acreage to grow and convert energy crops and sustain the entire farm's energy requirements with about one-fifth of the acreage. In Europe and especially Germany, however, this rapid growth has occurred only with substantial government support, as in the German bonus system for renewable energy. Similar developments integrating crop farming and bioenergy production via silage-methane have been almost entirely overlooked in North America, where political and structural issues and a continued push to centralize energy production have overshadowed such positive developments.
Liquid biomass
Biodiesel
European production of biodiesel from energy crops has grown steadily in the last decade, principally focused on rapeseed used for oil and energy. Production of oil/biodiesel from rape covers more than 12,000 km2 in Germany alone, and has doubled in the past 15 years. Typical per-hectare yields of oil as pure biodiesel make biodiesel crops economically attractive, provided sustainable crop rotations are used that are nutrient-balanced and prevent the spread of diseases such as clubroot. Biodiesel yield of soybeans is significantly lower than that of rape.
Bioethanol
Two leading non-food crops for the production of cellulosic bioethanol are switchgrass and giant miscanthus.
There has been a preoccupation with cellulosic bioethanol in America as the agricultural structure supporting biomethane is absent in many regions, with no credits or bonus system in place. Consequently, a lot of private money and investor hopes are being pinned on marketable and patentable innovations in enzyme hydrolysis and similar processes. Grasses are also energy crops for biobutanol.
Bioethanol also refers to the technology of using principally corn (maize seed) to make ethanol directly through fermentation. However, under certain field and process conditions this process can consume as much energy as the energy value of the ethanol it produces, making it unsustainable. New developments in converting grain stillage (referred to as distillers grain stillage or DGS) into biogas look promising as a means to improve the poor energy ratio of this type of bioethanol process.
Energy crop use in various countries
In Sweden, willow and hemp are often used.
In Finland, reed canary grass is a popular energy crop.
Switchgrass (Panicum virgatum) is another energy crop. It requires from 0.97 to 1.34 GJ of fossil energy to produce 1 tonne of switchgrass, compared with 1.99 to 2.66 GJ to produce 1 tonne of corn. Given that switchgrass contains approximately 18.8 GJ/ODT of biomass, the energy output-to-input ratio for the crop can be up to 20:1.
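The quoted ratio follows directly from the figures above:

$$\frac{18.8\ \mathrm{GJ}}{0.97\ \mathrm{GJ}}\approx 19.4\approx 20{:}1,\qquad \frac{18.8\ \mathrm{GJ}}{1.34\ \mathrm{GJ}}\approx 14{:}1,$$

so the energy return ranges from roughly 14:1 to about 20:1, depending on the fossil-energy input assumed.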
Energy crop use in thermal power stations
Several methods exist to reduce pollution and reduce or eliminate carbon emissions of fossil fuel power plants. A frequently used and cost-efficient method is to convert a plant to run on a different fuel (such as energy crops/biomass). In some instances, torrefaction of biomass may benefit the power plant if energy crops/biomass is the material the converted fossil fuel power plant will be using. Also, when using energy crops as the fuel, and if implementing biochar production, the thermal power plant can even become carbon negative rather than just carbon neutral. Improving the energy efficiency of a coal-fired power plant can also reduce emissions.
Sustainability aspects
In recent years, biofuels have become more attractive to many countries as possible replacements for fossil fuels, so understanding the sustainability of this renewable resource is important. There are many benefits associated with the use of biofuels, such as reduced greenhouse gas emissions, lower cost than fossil fuels, and renewability. These energy crops can be used to generate electricity. Wood cellulose and biofuel in conjunction with stationary electricity generation has been shown to be very efficient. From 2008 to 2013 there was a 109% increase in global biofuel production, and production is expected to increase by an additional 60% to meet demand, according to the Organization for Economic Co-operation and Development (OECD) and the Food and Agriculture Organization (FAO).
The projected increase in the use of energy crops prompts the question of whether this resource is sustainable. Increased biofuel production raises issues relating to land-use change and impacts on ecosystems (soil and water resources), and adds to competition for land among energy, food, and feed crops. Plants best suited as future bioenergy feedstocks should be fast growing, high yielding, and require little energy input for growth and harvest. The use of energy crops for energy production can be beneficial because of its carbon neutrality. It represents a cheaper alternative to fossil fuels while being extremely diverse in the species of plants that can be used for energy production. But issues regarding cost (more expensive than other renewable energy sources), efficiency, and the space required to maintain production need to be considered and improved upon for biofuels to be commonly adopted.
Carbon neutrality
During plant growth, CO2 is absorbed by the plants. While regular forest stands have carbon rotation times spanning many decades, short rotation forestry (SRF) stands have a rotation time of 8–20 years, and short rotation coppicing (SRC) stands 2–4 years. Perennial grasses like miscanthus or napier grass have a rotation time of 4–12 months. In addition to absorbing CO2 in its above-ground tissue, biomass crops also sequester carbon below ground, in roots and soil. Typically, perennial crops sequester more carbon than annual crops because the root buildup is allowed to continue undisturbed over many years. Also, perennial crops avoid the yearly tillage procedures (plowing, digging) associated with growing annual crops. Tilling helps the soil microbe populations to decompose the available carbon, producing CO2.
Soil organic carbon has been observed to be greater below switchgrass crops than under cultivated cropland, especially at depths below .
The amount of carbon sequestrated and the amount of greenhouse gases (GHGs) emitted will determine if the total GHG life cycle cost of a bioenergy project is positive, neutral, or negative. Specifically, a GHG/carbon-negative life cycle is possible if the total below-ground carbon accumulation more than compensates for the above-ground total life-cycle GHG emissions.
For example, for Miscanthus × giganteus, carbon neutrality and even negativity are within reach. This means that the yield and the related carbon sequestration are so great that they more than compensate for the total of farm operation emissions, fuel conversion emissions, and transport emissions. Successful sequestration is dependent on planting sites, as the best soils for sequestration are those that are currently deficient in carbon.
For the UK, successful sequestration is expected for arable land over most of England and Wales, with unsuccessful sequestration expected in parts of Scotland, due to already carbon-rich soils (existing woodland). Also, for Scotland, the relatively lower yields in this colder climate make CO2 negativity harder to achieve. Soils already rich in carbon includes peatland and mature forest. Grassland can also be carbon-rich, and it has been found that the most successful carbon sequestration in the UK takes place below improved grasslands.
See also
Algal fuel
Anaerobic digestion
Cellulosic ethanol
Coal pollution mitigation
Eichhornia crassipes#Bioenergy
European Biomass Association
Myriophyllum
Short rotation coppice
Short rotation forestry
Sustainable energy
Table of biofuel crop yields
Vegoil
References
External links
GA Mansoori, N Enayati, LB Agyarko (2016), Energy: Sources, Utilization, Legislation, Sustainability, Illinois as Model State, World Sci. Pub. Co.
Energy Crops for Fuel
Energy crops at Biomass Energy Centre
Center for Sustainable Energy Farming
01
.
.
Anaerobic digestion | Energy crop | [
"Chemistry",
"Engineering"
] | 2,115 | [
"Water technology",
"Anaerobic digestion",
"Environmental engineering"
] |
5,665,228 | https://en.wikipedia.org/wiki/Quasi-open%20map | In topology a branch of mathematics, a quasi-open map or quasi-interior map is a function which has similar properties to continuous maps.
However, the two notions are independent: a continuous map need not be quasi-open, and a quasi-open map need not be continuous.
Definition
A function f : X → Y between topological spaces X and Y is quasi-open if, for any non-empty open set U ⊆ X, the interior of f(U) in Y is non-empty.
Properties
Let f : X → Y be a map between topological spaces.
If f is continuous, it need not be quasi-open; conversely, if f is quasi-open, it need not be continuous (two simple examples are given below).
If f is open, then f is quasi-open.
If f is a local homeomorphism, then f is quasi-open.
The composition of two quasi-open maps is again quasi-open.
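Two standard examples (stated here for concreteness; they are not taken from this article) show the independence of the two notions. Let id denote the identity map on the set ℝ.
The map id : (ℝ, discrete topology) → (ℝ, standard topology) is continuous, but the image of the open singleton {x} is {x}, which has empty interior in the standard topology, so the map is not quasi-open.
The map id : (ℝ, standard topology) → (ℝ, discrete topology) is quasi-open, since the image of any non-empty open set is open (hence has non-empty interior) in the discrete topology, but it is not continuous, since singletons are open in the codomain while their preimages are not open in the domain.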
See also
Notes
References
Topology | Quasi-open map | [
"Physics",
"Mathematics"
] | 151 | [
"Topology stubs",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
5,665,456 | https://en.wikipedia.org/wiki/Purinergic%20receptor | Purinergic receptors, also known as purinoceptors, are a family of plasma membrane molecules that are found in almost all mammalian tissues. Within the field of purinergic signalling, these receptors have been implicated in learning and memory, locomotor and feeding behavior, and sleep. More specifically, they are involved in several cellular functions, including proliferation and migration of neural stem cells, vascular reactivity, apoptosis and cytokine secretion. These functions have not been well characterized and the effect of the extracellular microenvironment on their function is also poorly understood.
Geoff Burnstock originally separated purinoceptors into P1 adenosine receptors and P2 nucleotide (ATP, ADP) receptors. P2 receptors were later subdivided into P2X, P2Y, P2T, and P2Z receptors. Subclasses X and Y mediated vasoconstriction and vasodilation, respectively, in the smooth muscle of some arteries. They had been observed in blood vessels, smooth muscle, heart, hepatocytes, and parotid acinar cells. Subclass T was only observed in thrombocytes, platelets and megakaryocytes. Subclass Z required ~100 μM-ATP for activation, where the previous classes required <1 μM. They had been observed in mast cells and lymphocytes.
In the early 1990s, purinoceptors were cloned and characterized, and the P2 subclasses were redefined. Now, P2 receptors are classified based on structure: P2X are ionotropic and P2Y are metabotropic. Appropriately, P2Z was reclassified as P2X7 and P2T was reclassified as P2Y1.
3 classes of purinergic receptors
There are three known distinct classes of purinergic receptors, known as P1, P2X, and P2Y receptors.
P2X receptors
P2X receptors are ligand-gated ion channels, whereas the P1 and P2Y receptors are G protein-coupled receptors. These ligand-gated ion channels are nonselective cation channels responsible for mediating excitatory postsynaptic responses, similar to nicotinic and ionotropic glutamate receptors. P2X receptors are distinct from the rest of the widely known ligand-gated ion channels, as the genetic encoding of these particular channels indicates the presence of only two transmembrane domains within the channels. These receptors are widely distributed in neurons and glial cells throughout the central and peripheral nervous systems. P2X receptors mediate a large variety of responses including fast transmission at central synapses, contraction of smooth muscle cells, platelet aggregation, macrophage activation, and apoptosis. Moreover, these receptors have been implicated in integrating functional activity between neurons, glial, and vascular cells in the central nervous system, thereby mediating the effects of neural activity during development, neurodegeneration, inflammation, and cancer. The physiological modulator Zn2+ allosterically enhances ATP-induced inward cation currents in the P2X4 receptor by binding to cysteine 132 and cysteine 149 residues on the extracellular domain of the P2X4 protein.
P2Y and P1 receptors
Both of these metabotropic receptors are distinguished by their reactivity to specific activators. P1 receptors are preferentially activated by adenosine, while P2Y receptors are preferentially activated by ATP. P1 and P2Y receptors are known to be widely distributed in the brain, heart, kidneys, and adipose tissue. Xanthines (e.g. caffeine) specifically block adenosine receptors and are known to have a stimulating effect on behavior.
Inhibitors
Inhibitors of purinergic receptors include clopidogrel, prasugrel and ticlopidine, as well as ticagrelor. All of these are antiplatelet agents that block P2Y12 receptors.
Effects on chronic pain
Data obtained from using P2 receptor-selective antagonists have produced evidence supporting ATP's ability to initiate and maintain chronic pain states after exposure to noxious stimuli. It is believed that ATP functions as a pronociceptive neurotransmitter, acting at specific P2X and P2Y receptors in a systemized manner, which ultimately (as a response to noxious stimuli) serves to initiate and sustain heightened states of neuronal excitability. This recent knowledge of purinergic receptors' effects on chronic pain provides promise for discovering drugs that specifically target individual P2 receptor subtypes. While some P2 receptor-selective compounds have proven useful in preclinical trials, more research is required to understand the potential viability of P2 receptor antagonists for pain.
Recent research has identified a role for microglial P2X receptors in neuropathic pain and inflammatory pain, especially the P2X4 and P2X7 receptors.
Effects on cytotoxic edema
Purinergic receptors have been suggested to play a role in the treatment of cytotoxic edema and brain infarctions. It was found that treatment with the purinergic ligand 2-methylthioadenosine 5′-diphosphate (2-MeSADP), an agonist with a high preference for the purinergic receptor type 1 isoform (P2Y1R), significantly contributes to the reduction of ischemic lesions caused by cytotoxic edema. Further pharmacological evidence has suggested that 2-MeSADP protection is controlled by enhanced astrocyte mitochondrial metabolism through increased inositol trisphosphate-dependent calcium release. There is evidence suggesting a relationship between the levels of ATP and cytotoxic edema, with low ATP levels being associated with an increased prevalence of cytotoxic edema. It is believed that mitochondria play an essential role in the metabolism of astrocyte energy within the penumbra of ischemic lesions. By enhancing the source of ATP provided by mitochondria, there could be a similar 'protective' effect for brain injuries in general.
Effects on diabetes
Purinergic receptors have been implicated in the vascular complications associated with diabetes due to the effect of high-glucose concentration on ATP-mediated responses in human fibroblasts.
See also
Purinergic signaling
References
External links
IUPHAR GPCR Database – Adenosine receptors
IUPHAR GPCR Database – P2Y receptors
G protein-coupled receptors | Purinergic receptor | [
"Chemistry"
] | 1,390 | [
"G protein-coupled receptors",
"Signal transduction"
] |
5,665,600 | https://en.wikipedia.org/wiki/UK%20Centre%20for%20Materials%20Education | The UK Centre for Materials Education (UKCME) was one of 24 subject centres within the Higher Education Academy (HEA). It supported teaching and learning in Materials Science and related disciplines. The Centre was established in 2000 as part of the Learning and Teaching Support Network (LTSN), later subsumed within the HEA. It was directed from its inception by Professor Peter Goodhew and ceased operating in 2010.
The Centre was based at the University of Liverpool and worked with individual academics, departments, professional bodies, employers and students to develop and share excellent practice that would enhance the learning experience.
The Centre funded and supported programmes to develop and evaluate innovative approaches to teaching Materials Science. The Centre also maintained an extensive database of resources relevant to materials education. Lecturers could find material to use in their teaching, whilst students would find items to help support their learning. The database also included resources on the processes of learning and teaching for those wishing to further develop their approach.
External links
UKCME Website at: http://www.materials.ac.uk/
The Engineering Subject Centre
The Centre for Bioscience
The Physical Sciences Centre
Materials science organizations
Science and technology in Merseyside
Science education in the United Kingdom
University of Liverpool | UK Centre for Materials Education | [
"Materials_science",
"Engineering"
] | 252 | [
"Materials science organizations",
"Materials science"
] |
5,665,800 | https://en.wikipedia.org/wiki/Sergei%20Godunov | Sergei Konstantinovich Godunov (; 17 July 1929 – 15 July 2023) was a Soviet and Russian professor at the Sobolev Institute of Mathematics of the Russian Academy of Sciences in Novosibirsk, Russia.
Biography
Godunov's most influential work is in the area of applied and numerical mathematics, particularly in the development of methodologies used in Computational Fluid Dynamics (CFD) and other computational fields. Godunov's theorem (Godunov 1959), also known as Godunov's order barrier theorem, states that linear numerical schemes for solving partial differential equations which have the property of not generating new extrema (monotone schemes) can be at most first-order accurate. Godunov's scheme is a conservative numerical scheme for solving partial differential equations. In this method, the conservative variables are considered as piecewise constant over the mesh cells at each time step, and the time evolution is determined by the exact solution of the Riemann (shock tube) problem at the inter-cell boundaries (Hirsch, 1990).
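To make the idea concrete, the following is a minimal Python sketch (written for this summary, not taken from Godunov's papers; the function name and parameters are illustrative) of a first-order Godunov-type update for the linear advection equation, where the exact Riemann solution at each cell interface reduces to taking the upwind state. Being a monotone linear scheme, it is limited to first-order accuracy, exactly as Godunov's order barrier theorem predicts.

```python
import numpy as np

def godunov_advection(u0, a=1.0, dx=0.01, dt=0.005, steps=100):
    """First-order Godunov update for u_t + a u_x = 0 (a > 0) with periodic boundaries.

    With piecewise-constant cell averages, the exact Riemann problem at each
    interface is solved by the upwind (left) state, so the interface flux is a*u_left.
    """
    u = u0.astype(float).copy()
    nu = a * dt / dx                      # Courant number; stability requires nu <= 1
    if not 0.0 < nu <= 1.0:
        raise ValueError("CFL condition violated")
    for _ in range(steps):
        u_left = np.roll(u, 1)            # u_{i-1}, with periodic wrap-around
        u = u - nu * (u - u_left)         # conservative update with flux F = a*u
    return u

# Example: advect a square pulse once around a periodic domain of unit length
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)
u_final = godunov_advection(u0, a=1.0, dx=1.0 / 100, dt=0.005, steps=200)
```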
On 1–2 May 1997, a symposium entitled Godunov-type numerical methods was held at the University of Michigan to honour Godunov. These methods are widely used to compute continuum processes dominated by wave propagation. On the following day, 3 May, Godunov received an honorary degree from the University of Michigan. Godunov died on 15 July 2023, two days shy of his 94th birthday.
Education
1946–1951 – Department of Mechanics and Mathematics, Moscow State University.
1951 – Diploma (M. S.), Moscow State University.
1954 – Candidate of Physical and Mathematical Sciences (Ph. D.).
1965 – Doctor of Physical and Mathematical Sciences (D. Sc.).
1976 – Corresponding member of the Academy of Sciences of the Soviet Union.
1994 – Member of the Russian Academy of Sciences (Academician).
1997 – Honorary professor of the University of Michigan (Ann-Arbor, USA).
Awards
1954 – Order of the Badge of Honour
1956 – Order of the Red Banner of Labour
1959 – Lenin Prize
1972 – A.N. Krylov Prize of the Academy of Sciences of the Soviet Union
1975 – Order of the Red Banner of Labour
1981 – Order of the Badge of Honour
1993 – M.A. Lavrentyev Prize of the Russian Academy of Sciences
2010 – Order of Honour
2020 - SAE/Ramesh Agarwal Computational Fluid Dynamics Award
2023 – Order of Alexander Nevsky
See also
Riemann solver
Total variation diminishing
Upwind scheme
Notes
References
Godunov, Sergei K. (1954), Ph. D. Dissertation: Difference Methods for Shock Waves, Moscow State University.
Godunov, S. K. (1959), A Difference Scheme for Numerical Solution of Discontinuous Solution of Hydrodynamic Equations, Mat. Sbornik, 47, 271-306, translated US Joint Publ. Res. Service, JPRS 7225 November 29, 1960.
Godunov, Sergei K. and Romenskii, Evgenii I. (2003), Elements of Continuum Mechanics and Conservation Laws, Springer.
Hirsch, C. (1990), Numerical Computation of Internal and External Flows, vol 2, Wiley.
External links
Godunov's Personal Web Page
Sobolev Institute of Mathematics
1929 births
2023 deaths
20th-century Russian mathematicians
21st-century Russian mathematicians
Mathematicians from Moscow
Academic staff of Novosibirsk State University
Corresponding Members of the USSR Academy of Sciences
Full Members of the Russian Academy of Sciences
Moscow State University alumni
Recipients of the Lenin Prize
Recipients of the Order of Alexander Nevsky
Recipients of the Order of the Badge of Honour
Recipients of the Order of Honour (Russia)
Recipients of the Order of the Red Banner of Labour
Fluid dynamicists
Numerical analysts
Soviet mathematicians
Russian scientists | Sergei Godunov | [
"Chemistry"
] | 778 | [
"Fluid dynamicists",
"Fluid dynamics"
] |
16,307,876 | https://en.wikipedia.org/wiki/Fosaprepitant | Fosaprepitant, sold under the brand names Emend (US) and Ivemend (EU) among others, is an antiemetic medication, administered intravenously. It is a prodrug of aprepitant.
Fosaprepitant was developed by Merck & Co. and was approved for medical use in the United States and in the European Union in January 2008.
References
Antiemetics
CYP3A4 inhibitors
NK1 receptor antagonists
Prodrugs
Drugs developed by Merck & Co.
Morpholines
Phosphoramidates
Triazoles
Trifluoromethyl compounds
4-Fluorophenyl compounds
Ethers
Ureas | Fosaprepitant | [
"Chemistry"
] | 145 | [
"Functional groups",
"Prodrugs",
"Organic compounds",
"Ethers",
"Chemicals in medicine",
"Ureas"
] |
16,308,082 | https://en.wikipedia.org/wiki/Simulated%20presence%20therapy | Simulated presence therapy (SPT) is an emotion-oriented non-pharmacological intervention for people with dementia developed by P. Woods and J. Ashley in 1995. SPT was created as part of a study conducted in a nursing home where 17 individuals with the disease listened to a recording of a caregiver over a stereo. The study was originally conducted in order to combat one of the side effects of dementia such as disturbances of behavior which are called behavioral and psychological symptoms (BPSD) associated with dementia. This therapy is based on psychological attachment theories and is normally carried out by playing a recording with voices of the closest relatives of the patient in an attempt to treat BPSD in addition to reducing anxiety, decreasing challenging behavior, social isolation, or verbal aggression.
It is not clear whether simulated presence therapy is effective, as some research has shown that its effectiveness might depend on the attachment style of the person with dementia. One study indicated simulated presence therapy was mainly effective for individuals with a secure attachment style to their loved ones or main caregivers. The researchers who conducted the study reported that it was less effective for those with insecure attachment styles, such as anxious, avoidant, or ambivalent, as these individuals tended to be more wary of the tape. It was theorized that the lack of effectiveness came from a potential lack of emotional meaning due to the attachment style, or because the recording acted as a reminder of the circumstances of the relationship and the absence of the person in the recording. In the original study conducted by Woods and Ashley, simulated presence therapy was most effective in the treatment of social isolation. Woods and Ashley also claim behavioral problems (i.e. blank facial expression, failure to engage in conversation or activities, restlessness, pacing, or wandering) decreased by 91%.
Simulated presence, also known as SimPres, is used to create a store of loved memories from an individual's lifetime. The creation of SimPres utilizes personalized and interactive tapes which contain a pre-recorded conversation and message discussing favorite memories full of positive emotions. The aim of simulated presence is to simulate a phone call with a loved one where the individual can have a new conversation each time the recording is played.
See also
Psychological therapies for dementia
Sundowning
References
Alzheimer's disease
Aging-associated diseases
Neurological disorders
Treatment of dementia | Simulated presence therapy | [
"Biology"
] | 463 | [
"Senescence",
"Aging-associated diseases"
] |
16,311,497 | https://en.wikipedia.org/wiki/HD%2037017 | HD 37017 is a binary star system in the equatorial constellation of Orion. It has the variable star designation V1046 Orionis; HD 37017 is the identifier from the Henry Draper Catalogue. The system is a challenge to view with the naked eye, being close to the lower limit of visibility with a combined apparent visual magnitude of 6.55. It is located at a distance of approximately 1,230 light years based on parallax, and is drifting further away with a radial velocity of +32 km/s. The system is part of star cluster NGC 1981.
The binary nature of this system was suggested by A. Blaauw and T. S. van Albada in 1963. It is a double-lined spectroscopic binary with an orbital period of 18.6556 days and an eccentricity of 0.31. The eccentricity is considered unusually large for such a close system. It forms a suspected eclipsing binary that ranges in brightness from 6.54 down to 6.58.
The primary is a helium-strong, magnetic chemically peculiar star with a stellar classification of B1.5 Vp. It has a magnetic field strength of , and the helium concentrations are located at the magnetic poles. V1046 Orionis was found to be a variable star by L. A. Balona in 1997, and is now classified as an SX Arietis variable. The star undergoes periodic changes in visual brightness, magnetic field strength, and spectral characteristics with a cycle time of 0.901175 days – the star's presumed rotation period. Radio emission has been detected that varies with the rotation period.
The secondary component has an estimated 4.5 times the mass of the Sun. The class has been estimated as type B6III-IV.
References
External links
Simbad
B-type main-sequence stars
Ap stars
SX Arietis variables
Eclipsing binaries
Spectroscopic binaries
Orion (constellation)
Durchmusterung objects
037017
1890
026233
Orionis, V1046 | HD 37017 | [
"Astronomy"
] | 417 | [
"Constellations",
"Orion (constellation)"
] |
16,311,730 | https://en.wikipedia.org/wiki/Mary%20Carskadon | Mary A. Carskadon is an American sleep researcher. She is a professor in the Department of Psychiatry and Human Behavior at the Warren Alpert Medical School of Brown University. She is also the director of the Sleep and Chronobiology Research Lab at E.P. Bradley Hospital.
She is considered to be a prominent expert on sleep and circadian rhythms during childhood, adolescence, and young adulthood. She researches issues related to daytime sleepiness. She has also contributed important research on school start times as it relates to sleep patterns and sleepiness in adolescence.
Every summer, Dr. Carskadon offers a prestigious summer internship for highly motivated students interested in sleep research at the Bradley Sleep Lab. These students, known as Dement Fellows, after William C. Dement, work in the sleep lab for the entirety of the summer and learn under Dr. Carskadon.
Career
Carskadon studied psychology at Gettysburg College and graduated in 1969. She received a Ph.D. in neuro- and biobehavioral sciences in 1979 at Stanford University. At Stanford, she studied under William C. Dement. Along with Dement, she developed the Multiple Sleep Latency Test (MSLT) used to clinically determine sleepiness in sleep disordered patients, particularly by measuring daytime sleep onset latency. Carskadon started her own research group at Brown University in 1985. Her research in adolescent sleep/wake behavior has resulted in proposed changes in public policy. This research suggests that circadian rhythms shift during adolescence and that secondary schools should have later start times.
Each summer, Carskadon's lab hosts adolescents who live in the sleep lab for 14 days. The adolescents participate in summer camp-like activities while their sleep is monitored each night.
Carskadon has received many awards for her research including the Nathaniel Kleitman Distinguished Service Award of the American Sleep Disorders Association (1991), the Lifetime Achievement Award of the National Sleep Foundation (2003), Mark O. Hatfield Public Policy Award of the American Academy of Sleep Medicine (2003), and the Outstanding Educator Award of the Sleep Research Society (2005). The Sleep Research Society has since renamed the award the Mary A. Carskadon Outstanding Educator Award. The Association of Polysomnographic Technologists also presents the Carskadon Award for Research Excellence to a member each year. In 2007 she was presented with the Distinguished Scientist Award by the Sleep Research Society. She is a past president of the Sleep Research Society (1999–2000) and founder of the Northeast Sleep Society (1986). In 2020 Carskadon was recognized and awarded by Harvard Medical School Division of Sleep Medicine Prize, for her outstanding lifetime contribution to the field of sleep.
Carskadon has published many research articles and book chapters. In addition she has edited or co-edited several books such as The Encyclopedia of Sleep and Dreaming, Sleep Medicine, and Adolescent Sleep Patterns: Biological, Social, and Psychological Influences.
References
External links
Carskadon Biography, Brown Medical School
Carskadon Biography, Bradley Hasbro Children's Research Center
Sleep researchers
Gettysburg College alumni
Stanford University alumni
Brown University faculty
Living people
Year of birth missing (living people)
Chronobiologists
American women scientists
American women academics
American women neuroscientists
American neuroscientists | Mary Carskadon | [
"Biology"
] | 665 | [
"Sleep researchers",
"Behavior",
"Sleep"
] |
16,311,904 | https://en.wikipedia.org/wiki/17%20Leporis | 17 Leporis is a binary star system in the southern constellation of Lepus. It has an overall apparent visual magnitude which varies between 4.82 and 5.06, making it luminous enough to be visible to the naked eye as a faint star. The variable star designation for this system is SS Leporis, while 17 Leporis is the Flamsteed designation. Parallax measurements yield a distance estimate of around 910 light years from the Sun. The system is moving further away from the Earth with a heliocentric radial velocity of +18.7 km/s.
This is a double-lined spectroscopic binary system with an orbital period of 260 days and an eccentricity of 0.005. The spectrum reveals the pair to consist of an A-type main-sequence star with a stellar classification of A1 V, and a red giant with a class of M6III. The close pair form a symbiotic binary with ongoing mass transfer from the giant to the hotter component. The giant does not appear to be filling its Roche lobe, so the mass transfer is coming from stellar wind off the giant. The pair are surrounded by a shell and a dusty circumbinary disk, with the former obliterating the lines from the A-type star.
Gallery
References
A-type main-sequence stars
M-type giants
Emission-line stars
Circumstellar disks
Lepus (constellation)
BD-16 1349
Leporis, 17
041511
028816
2148
Leporis, SS
Articles containing video clips
Spectroscopic binaries | 17 Leporis | [
"Astronomy"
] | 326 | [
"Lepus (constellation)",
"Constellations"
] |
16,312,085 | https://en.wikipedia.org/wiki/Hu%E2%80%93Washizu%20principle | In continuum mechanics, and in particular in finite element analysis, the Hu–Washizu principle is a variational principle which says that the action
is stationary, where is the elastic stiffness tensor. The Hu–Washizu principle is used to develop mixed finite element methods. The principle is named after Hu Haichang and Kyūichirō Washizu.
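One common small-strain form of the functional (a sketch under standard assumptions; displacement boundary terms are omitted and sign conventions vary between references) is

$$\Pi_{HW}(u,\varepsilon,\sigma)=\int_{\Omega}\left[\tfrac{1}{2}\,\varepsilon:\mathsf{C}:\varepsilon-\sigma:\left(\varepsilon-\nabla^{s}u\right)-b\cdot u\right]\mathrm{d}\Omega-\int_{\Gamma_{t}}\bar{t}\cdot u\,\mathrm{d}\Gamma,$$

where u is the displacement, ε the independent strain field, σ the independent stress field, C the elastic stiffness tensor, ∇ˢu the symmetric displacement gradient, b the body force, and t̄ the prescribed traction on the boundary portion Γt. Requiring the first variation with respect to σ, ε, and u to vanish independently recovers the strain–displacement relation ε = ∇ˢu, the constitutive law σ = C : ε, and the equilibrium equations with the traction boundary condition.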
References
Further reading
K. Washizu: Variational Methods in Elasticity & Plasticity, Pergamon Press, New York, 3rd edition (1982)
O. C. Zienkiewicz, R. L. Taylor, J. Z. Zhu : The Finite Element Method: Its Basis and Fundamentals, Butterworth–Heinemann, (2005).
Calculus of variations
Finite element method
Structural analysis
Principles
Continuum mechanics | Hu–Washizu principle | [
"Physics",
"Mathematics",
"Engineering"
] | 160 | [
"Structural engineering",
"Continuum mechanics",
"Applied mathematics",
"Structural analysis",
"Classical mechanics",
"Applied mathematics stubs",
"Mechanical engineering",
"Aerospace engineering"
] |
16,312,380 | https://en.wikipedia.org/wiki/HR%202554 | HR 2554, also known as V415 Carinae and A Carinae, is an eclipsing spectroscopic binary of the Algol type in the constellation of Carina whose apparent visual magnitude varies by 0.06 magnitude and is approximately 4.39 at maximum brightness. It is easily visible to the naked eye of a person far from brightly-lit urban ares. Its primary is a G-type bright giant star and its secondary is an A-type main-sequence star. It is approximately 553 light-years from Earth.
HR 2554 A
The primary component, HR 2554 A, is a yellow G-type bright giant with a mean apparent magnitude of +4.4.
HR 2554 B
The secondary component, HR 2554 B, is a white A-type main-sequence dwarf, about three magnitudes fainter than the primary.
HR 2554 binary system
HR 2554 has two components in orbit around each other, making it a binary star. The semi-major axis of the secondary's orbit is 2.17 arcseconds. Thomas B. Ake and Sidney B. Parsons discovered in 1986 that HR 2554 is a variable star. It was given its variable star designation, V415 Carinae, in 1989. The two components regularly eclipse each other. The system's brightness varies by 0.06 magnitude with a period equal to its orbital period of 195 days.
References
Carina (constellation)
G-type bright giants
A-type main-sequence stars
Algol variables
Spectroscopic binaries
Carinae, A
050337
032761
2554
Carinae, V415
CD-53 01613 | HR 2554 | [
"Astronomy"
] | 343 | [
"Carina (constellation)",
"Constellations"
] |
16,312,706 | https://en.wikipedia.org/wiki/HD%2049976 | HD 49976 is a variable star in the constellation of Monoceros (the Unicorn). It has the variable star designation V592 Monocerotis, while HD 49976 is the identifier from the Henry Draper Catalogue. It has a white hue and is near the lower limit of visibility to the naked eye, having an apparent visual magnitude that fluctuates from 6.16 down to 6.32 with a 2.976 day period. Based upon parallax measurements, it is located at a distance of approximately 337 light years from the Sun. The star is drifting further away with a radial velocity of +19 km/s.
This is a magnetic chemically peculiar star with a stellar classification of , showing excesses in strontium and the rare earth elements in the photosphere, among others. Houk and Swift (1999) assigned it a class of B9V, matching a B-type main sequence star. It is an Alpha2 Canum Venaticorum variable; the magnetic field is complex; not corresponding to a simple dipole.
HD 49976 is an estimated 209 million years old and is spinning with a period of 2.976 days. The star has 2.2 times the mass of the Sun and 2.3 times the Sun's radius. It is radiating 32 times the luminosity of the Sun from its photosphere at an effective temperature of 9,016 K.
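As a rough consistency check (not part of the article; it assumes a solar effective temperature of about 5,772 K), the quoted luminosity follows from the Stefan–Boltzmann law applied to the stated radius and temperature:

$$\frac{L}{L_{\odot}}=\left(\frac{R}{R_{\odot}}\right)^{2}\left(\frac{T_{\mathrm{eff}}}{T_{\odot}}\right)^{4}\approx 2.3^{2}\times\left(\frac{9016}{5772}\right)^{4}\approx 5.3\times 6.0\approx 31,$$

in agreement with the quoted value of about 32 solar luminosities.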
References
Ap stars
Alpha2 Canum Venaticorum variables
B-type main-sequence stars
Monoceros
Durchmusterung objects
049976
032838
2534
Monocerotis, V592 | HD 49976 | [
"Astronomy"
] | 348 | [
"Monoceros",
"Constellations"
] |
16,313,021 | https://en.wikipedia.org/wiki/Iodine%20pentoxide | Iodine pentoxide is the chemical compound with the formula I2O5. This iodine oxide is the anhydride of iodic acid, and one of the few iodine oxides that is stable. It is produced by dehydrating iodic acid at 200 °C in a stream of dry air:
2HIO3 → I2O5 + H2O
Structure
I2O5 is bent with an I–O–I angle of 139.2°, but the molecule has no mirror plane so its symmetry is C2 rather than C2v. The terminal I–O distances are around 1.80 Å and the bridging I–O distances are around 1.95 Å.
Reactions
Iodine pentoxide easily oxidises carbon monoxide to carbon dioxide at room temperature:
5 CO + I2O5 → I2 + 5 CO2
This reaction can be used to analyze the concentration of CO in a gaseous sample.
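As an illustrative calculation (the numbers here are hypothetical, not from the article): if passing a 1.00 L gas sample over I2O5 liberates 0.10 mmol of I2 (determined, for example, by titration), the 5:1 stoichiometry gives

$$n(\mathrm{CO}) = 5\,n(\mathrm{I_2}) = 0.50\ \mathrm{mmol},$$

which at standard conditions (22.4 L/mol) corresponds to about 0.011 L of CO, i.e. roughly 1.1% of the sample by volume.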
I2O5 forms iodyl salts, [IO2+], with SO3 and S2O6F2, but iodosyl salts, [IO+], with concentrated sulfuric acid.
Iodine pentoxide decomposes to iodine (vapor) and oxygen when heated to about 350 °C.
References
Iodine compounds
Oxides
Acidic oxides
Oxidizing agents | Iodine pentoxide | [
"Chemistry"
] | 278 | [
"Oxides",
"Redox",
"Oxidizing agents",
"Salts"
] |
16,313,813 | https://en.wikipedia.org/wiki/Eyesore | An eyesore is something that is largely considered to look unpleasant or ugly. Its technical usage is as an alternative perspective to the notion of landmark. Common examples include dilapidated buildings, graffiti, litter, polluted areas, and excessive commercial signage such as billboards. Some eyesores may be a matter of opinion such as controversial modern architecture (see also spite house), transmission towers or wind turbines. Natural eyesores include feces, mud and weeds.
Effect on property values
In the US, the National Association of Realtors says an eyesore can shave about 10 percent off the value of a nearby listing.
Remediation
Clean-up programmes to improve or remove eyesores are often started by local bodies or even national governments. These are frequently called Operation Eyesore. High-profile international events such as the Olympic Games usually trigger such activity.
Others contend that it is best to address these problems while they are small, since signs of neglect encourage anti-social behaviour such as vandalism and fly-tipping. This strategy is known as fixing broken windows.
Controversy
Whether some constructions are eyesores is a matter of opinion which may change over time. Landmarks are often called eyesores.
Examples of divided opinion
Eiffel Tower – Upon its construction, Parisians wanted it torn down as an eyesore. In modern times it is one of the world's top landmarks.
Golden Gate Bridge – Controversial ahead of its construction, it being said in The Wasp that it "would prove an eye-sore to those now living ... certainly mar if not utterly destroy the natural charm of the harbor famed throughout the world." It is now considered a notable landmark.
Millennium Dome – The ugliest building in the world in a poll by the business magazine Forbes of "15 architects, all of whom were American apart from one who was British and one who was Canadian".
Federation Square – Despite being hailed a landmark by many, it has equally been rejected by many notable Australians as an eyesore.
Wind farms – Thought to be the worst eyesore by readers of Country Life but liked by others.
Boston City Hall – Has been called "The World's Ugliest Building".
One Rincon Hill – Situated just south of San Francisco's Financial District, this high-rise condominium surrounded by shorter buildings has generated some mixed reviews.
Lloyd's Building – Situated in the City of London, this building was described as an oil refinery when it was opened in 1986 for having most of its facilities, stairways and AC on the outside. Some people still say this, although the building has become more popular and liked in the recent years.
Tour Montparnasse – A lone skyscraper in the Montparnasse area of Paris, France. Its appearance mars the Paris urban landscape, and construction of skyscrapers was banned in the city centre two years after its completion. A 2008 poll of editors on Virtualtourist voted the building the second ugliest building in the world. It is sometimes said that the view from the top is the most beautiful in Paris, because it is the only place from which the tower itself cannot be seen.
Brisbane Transit Centre and Riverside Expressway – Both have been called eyesores and planning debacles by University of Queensland Associate Professor of Architecture Peter Skinner.
Tricorn Centre in Portsmouth – Built in 1964, it was initially highly respected. It was described as a "mildewed lump of elephant droppings" by Prince Charles, and was subsequently demolished.
Structures that have been described as eyesores
Spencer Street Power Station – An asbestos ridden landmark regarded by many as Melbourne's biggest eyesore. It was demolished in 2008.
Cahill Expressway in Sydney – Regarded by many as a major planning mistake.
Sydney Harbour Control Tower – Constructed in 1974 and demolished in 2016.
Riverside Plaza in Minneapolis, Minnesota
Embarcadero Freeway – Along The Embarcadero in San Francisco, this double-decker elevated freeway blocked The Embarcadero's view and shadowed the boulevard under it. When it was demolished in 1991, the long-abandoned Ferry Building and the boulevard under the freeway were restored.
Petrobras Headquarters in Rio de Janeiro, Brazil – An example of concrete brutalism applied to an office building.
The Hole In The Road in Sheffield, England – Filled-in during 1994.
City-Center in Helsinki – Colloquially known as Makkaratalo (Sausage House) because of the concrete sausage-like railing circling the third floor parking lot.
Northampton Power Station, England – Left derelict since 1975, it was demolished circa 2015 to make way for the University of Northampton.
House of Soviets, Kaliningrad, Russia – "The ugliest building on Russian soil".
School of Architecture, Royal Institute of Technology, Stockholm, Sweden – Won an opinion poll for Stockholm's ugliest building, by broad majority. Damaged by a fire in 2011.
Spire of Dublin in Dublin, Republic of Ireland
American Dream Meadowlands – Most politicians and the public have equally criticized the building's appearance calling it "The ugliest building in New Jersey".
Waldschlösschen Bridge in Dresden, Germany – The Dresden Elbe Valley lost the UNESCO World Heritage Site status because of this bridge.
Barclays Center in Brooklyn, New York – Widely regarded as a jarring and aesthetically unappealing addition to the local landscape.
Cebu City Hall – Considered an eyesore by many during the early to mid 2000s, until it was renovated in 2007, and is now considered as one of the best city halls in the Philippines.
Majesty Building in Altamonte Springs, Florida – Locally known as the I-4 Eyesore, a building that has been under construction since 2001.
Torre de Manila – A high-rise development by DMCI Homes that dwarfs the Rizal Monument.
Viking Wind Farm – Under construction in the Tingwall Valley in Central Shetland.
See also
Aesthetics
Brownfield land
Local ordinances
NIMBY
Redevelopment
Spite fence
Town planning
Ugliness
Urban blight
Visual pollution
References
External links
Aesthetics
Urban planning
Pollution | Eyesore | [
"Engineering"
] | 1,218 | [
"Urban planning",
"Architecture"
] |
16,314,447 | https://en.wikipedia.org/wiki/Robert%20McCarley | Robert W. McCarley, MD, (1937–2017) was Chair and Professor of Psychiatry at Harvard Medical School and the VA Boston Healthcare System. He is also Director of the Laboratory of Neuroscience located at the Brockton VA Medical Center and the McLean Hospital. McClarley was a prominent researcher in the field of sleep and dreaming as well as schizophrenia.
McCarley graduated from Harvard College in 1959 and Harvard Medical School in 1964. During his residency at Massachusetts Mental Health Center, he studied with J. Allan Hobson. In 1977, Hobson and McCarley developed the activation synthesis theory of dreaming that said that dreams do not have meanings and are the result of the brain attempting to make sense of random neuronal firing in the cortex. McCarley has extensively studied the brainstem mechanisms that control REM sleep. Additionally, he has studied the buildup of adenosine in the basal forebrain following sleep deprivation.
In the area of schizophrenia, McCarley has studied brain abnormalities in patients with schizophrenia. McCarley and Martha Shenton published a classic paper in 1992 that described a relationship in a reduction in the volume of the left superior temporal gyrus and thought disorder in patients with schizophrenia.
McCarley has been presented with many awards for his research. In 1998, he received William S. Middleton Award which is the highest honor awarded to a VA biomedical research scientist. He has also been presented awards from the Sleep Research Society, American Psychiatric Association, and American Academy of Sleep Medicine.
In 2007, McCarley was ranked as the ninth most cited author in the field of schizophrenia research over the past decade. McCarley has published around 300 research articles and several books and book chapters such as Brain Control of Wakefulness and Sleep.
References
External links
An ESSAY with Dr. Robert McCarley
Faculty Profile, Harvard Medical School
1937 births
2017 deaths
American psychiatrists
Sleep researchers
Harvard Medical School faculty
Harvard Medical School alumni
People from Mayfield, Kentucky
Harvard College alumni
McLean Hospital physicians | Robert McCarley | [
"Biology"
] | 403 | [
"Sleep researchers",
"Behavior",
"Sleep"
] |
16,315,269 | https://en.wikipedia.org/wiki/Muraglitazar | Muraglitazar (proposed tradename Pargluva) is a dual peroxisome proliferator-activated receptor agonist with affinity to PPARα and PPARγ.
The drug had completed phase III clinical trials, however in May 2006 Bristol-Myers Squibb announced that it had discontinued further development.
Data on muraglitazar is relatively sparse due to the brief introduction and subsequent abandonment of this agent. One double-blind randomized clinical trial comparing muraglitazar and pioglitazone found that the effects of the former were favourable in terms of HDL-C increase, decrease in total cholesterol, apolipoprotein B, triglycerides and a greater reduction in HbA1c (p <0.0001 for all comparisons). However, the muraglitazar group had a higher all-cause mortality, greater incidence of edema and heart failure and more weight gain compared to the pioglitazone group. A meta-analysis of the phase II and III clinical trials of muraglitazar revealed that it was associated with a greater incidence of myocardial infarction, stroke, transient ischemic attacks and congestive heart failure (CHF) when compared to placebo or pioglitazone.
By calling attention to adverse events made public through the FDA advisory committee process, cardiologist Steven Nissen, who led the meta-analysis of the muraglitazar trials, came upon a mechanism to steer the FDA from the outside. This mechanism came to fruition with rosiglitazone (Avandia) and led to the FDA requiring demonstration of cardiac safety for new drugs to treat type 2 diabetes. The process is described by Robert Misbin in INSULIN-History from an FDA Insider, published June 1, 2020 on Amazon.
References
Abandoned drugs
Oxazoles
Carbamates
4-Methoxyphenyl compounds
PPAR agonists | Muraglitazar | [
"Chemistry"
] | 376 | [
"Drug safety",
"Abandoned drugs"
] |
16,315,657 | https://en.wikipedia.org/wiki/High-definition%20television | High-definition television (HDTV) describes a television or video system which provides a substantially higher image resolution than the previous generation of technologies. The term has been used since at least 1933; in more recent times, it refers to the generation following standard-definition television (SDTV). It is the standard video format used in most broadcasts: terrestrial broadcast television, cable television, satellite television.
Formats
HDTV may be transmitted in various formats:
720p (1280 × 720): 921,600 pixels
1080i (1920 × 1080) interlaced scan: 1,036,800 pixels (≈1.04 Mpx) per field.
1080p (1920 × 1080) progressive scan: 2,073,600 pixels (≈2.07 Mpx).
Some countries also use a non-standard CTA resolution, such as 1440 × 1080 interlaced: 777,600 pixels (≈0.78 Mpx) per field or 1,555,200 pixels (≈1.56 Mpx) per frame
When transmitted at two megapixels per frame, HDTV provides about five times as many pixels as SD (standard-definition television). The increased resolution provides for a clearer, more detailed picture. In addition, progressive scan and higher frame rates result in a picture with less flicker and better rendering of fast motion. Modern HDTV began broadcasting in 1989 in Japan, under the MUSE/Hi-Vision analog system. HDTV was widely adopted worldwide in the late 2000s.
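As a quick check of the "about five times" figure (the standard-definition frame sizes below are the usual digital SD rasters, an assumption rather than figures from this article):

$$1920\times1080 = 2{,}073{,}600,\qquad 720\times576 = 414{,}720,\qquad 720\times480 = 345{,}600,$$

so a 1080-line HD frame carries about 5.0 times the pixels of a 576-line SD frame and about 6.0 times those of a 480-line SD frame.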
Standards
All modern high-definition broadcasts utilize digital television standards.
The major digital television broadcast standards used for terrestrial, cable, satellite, and mobile devices are:
DVB, originating in Europe and also used in much of Asia, Africa, and Australia
ATSC, used in much of North America
DTMB, used in China and some neighboring countries
ISDB, used in two incompatible variations in Japan and South America
DMB, used by mobile devices in South Korea
These standards use a variety of video codecs, some of which are also used for internet video.
History
The term high definition once described a series of television systems first announced in 1933 and launched starting in August 1936; however, these systems were only high definition when compared to earlier systems that were based on mechanical systems with as few as 30 lines of resolution. The ongoing competition between companies and nations to create true HDTV spanned the entire 20th century, as each new system became higher definition than the last. In the early 21st century, this race has continued with 4K, 5K and 8K systems.
The British high-definition TV service started trials in August 1936 and a regular service on 2 November 1936 using both the (mechanical) Baird 240 line sequential scan (later referred to as progressive) and the (electronic) Marconi-EMI 405 line interlaced systems. The Baird system was discontinued in February 1937. In 1938 France followed with its own 441-line system, variants of which were also used by a number of other countries. The US NTSC 525-line system joined in 1941. In 1949 France introduced an even higher-resolution standard at 819 lines, a system that would have been high definition even by modern standards, but was monochrome only and had technical limitations that prevented it from achieving the intended definition. All of these systems used interlacing and a 4:3 aspect ratio except the 240-line system which was progressive (actually described at the time by the technically correct term sequential) and the 405-line system which started as 5:4 and later changed to 4:3. The 405-line system adopted the (at that time) revolutionary idea of interlaced scanning to overcome the flicker problem of the 240-line with its 25 Hz frame rate. The 240-line system could have doubled its frame rate but this would have meant that the transmitted signal would have doubled in bandwidth, an unacceptable option as the video baseband bandwidth was required to be not more than 3 MHz.
Color broadcasts started at similar line counts, first with the US NTSC color system in 1953, which was compatible with the earlier monochrome systems and therefore had the same 525 lines per frame. European standards did not follow until the 1960s, when the PAL and SECAM color systems were added to the monochrome 625-line broadcasts.
The NHK (Japan Broadcasting Corporation) began researching to "unlock the fundamental mechanism of video and sound interactions with the five human senses" in 1964, after the Tokyo Olympics. NHK set out to create an HDTV system that scored much higher in subjective tests than NTSC's previously dubbed HDTV. This new system, NHK Color, created in 1972, included 1125 lines, a 5:3 (1.67:1) aspect ratio and 60 Hz refresh rate. The Society of Motion Picture and Television Engineers (SMPTE), headed by Charles Ginsburg, became the testing and study authority for HDTV technology in the international theater. SMPTE would test HDTV systems from different companies from every conceivable perspective, but the problem of combining the different formats plagued the technology for many years.
There were four major HDTV systems tested by SMPTE in the late 1970s, and in 1979 an SMPTE study group released A Study of High Definition Television Systems:
EIA monochrome: 4:3 aspect ratio, 1023 lines, 60 Hz
NHK color: 5:3 aspect ratio, 1125 lines, 60 Hz
NHK monochrome: 4:3 aspect ratio, 2125 lines, 50 Hz
BBC colour: 8:3 aspect ratio, 1501 lines, 60 Hz
Since the formal adoption of Digital Video Broadcasting's (DVB) widescreen HDTV transmission modes in the mid to late 2000s; the 525-line NTSC (and PAL-M) systems, as well as the European 625-line PAL and SECAM systems, have been regarded as standard definition television systems.
Analog systems
Early HDTV broadcasting used analog technology that was later converted to digital television with video compression.
In 1949, France started its transmissions with an 819 lines system (with 737 active lines). The system was monochrome only and was used only on VHF for the first French TV channel. It was discontinued in 1983.
In 1958, the Soviet Union developed Transformator (Russian for "Transformer"), the first high-resolution (high-definition) television system, capable of producing an image composed of 1,125 lines of resolution and aimed at providing teleconferencing for military command. It was a research project and the system was never deployed by either the military or consumer broadcasting.
In 1986, the European Community proposed HD-MAC, an analog HDTV system with 1,152 lines. A public demonstration took place for the 1992 Summer Olympics in Barcelona. However HD-MAC was scrapped in 1993 and the DVB project was formed, which would foresee development of a digital HDTV standard.
Japan
In 1979, the Japanese public broadcaster NHK first developed consumer high-definition television with a 5:3 display aspect ratio. The system, known as Hi-Vision or MUSE after its multiple sub-Nyquist sampling encoding (MUSE) for encoding the signal, required about twice the bandwidth of the existing NTSC system but provided about four times the resolution (1035i/1125 lines). In 1981, the MUSE system was demonstrated for the first time in the United States, using the same 5:3 aspect ratio as the Japanese system. Upon visiting a demonstration of MUSE in Washington, US President Ronald Reagan was impressed and officially declared it "a matter of national interest" to introduce HDTV to the US. NHK taped the 1984 Summer Olympics with a Hi-Vision camera, weighing 40 kg.
Satellite test broadcasts, the first daily high-definition programs in the world, started on June 4, 1989, with regular testing starting on November 25, 1991, a date designated "Hi-Vision Day" because it echoes the system's 1,125-line resolution. Regular broadcasting of BS-9ch commenced on November 25, 1994, featuring commercial and NHK programming.
Several systems were proposed as the new standard for the US, including the Japanese MUSE system, but all were rejected by the Federal Communications Commission (FCC) because of their higher bandwidth requirements. At this time, the number of television channels was growing rapidly and bandwidth was already a problem. A new standard had to be more efficient, needing less bandwidth for HDTV than the existing NTSC.
Decrease of analog HD systems
The limited standardization of analog HDTV in the 1990s did not lead to global HDTV adoption as technical and economic constraints at the time did not permit HDTV to use bandwidths greater than normal television. Early HDTV commercial experiments, such as NHK's MUSE, required over four times the bandwidth of a standard-definition broadcast. Despite efforts made to reduce analog HDTV to about twice the bandwidth of SDTV, these television formats were still distributable only by satellite. In Europe too, the HD-MAC standard was considered not technically viable.
In addition, recording and reproducing an HDTV signal was a significant technical challenge in the early years of HDTV (Sony HDVS). Japan remained the only country with successful public broadcasting of analog HDTV, with seven broadcasters sharing a single channel.
However, the Hi-Vision/MUSE system also faced commercial issues when it launched on November 25, 1991. Only 2,000 HDTV sets were sold by that day, far short of the optimistic estimate of 1.32 million. Hi-Vision sets were very expensive, up to US$30,000 each, which contributed to the format's low consumer adoption. A Hi-Vision VCR from NEC released at Christmas time retailed for US$115,000. In addition, the United States saw Hi-Vision/MUSE as an outdated system and had already made it clear that it would develop an all-digital system. Experts thought the commercial Hi-Vision system in 1992 was already eclipsed by digital technology developed in the U.S. since 1990. This was an American victory against the Japanese in terms of technological dominance. By mid-1993 prices of receivers were still as high as 1.5 million yen (US$15,000).
On February 23, 1994, a top broadcasting administrator in Japan admitted the failure of the country's analog-based HDTV system, saying the U.S. digital format would more likely become the worldwide standard. However, this announcement drew angry protests from broadcasters and electronics companies who had invested heavily in the analog system. As a result, he retracted his statement the next day, saying that the government would continue to promote Hi-Vision/MUSE. That year NHK started development of digital television in an attempt to catch back up to America and Europe. This resulted in the ISDB format. Japan started digital satellite and HDTV broadcasting in December 2000.
Rise of digital compression
High-definition digital television was not possible with uncompressed video, which requires a bandwidth exceeding 1 Gbit/s for studio-quality HD digital video. Digital HDTV was made possible by the development of discrete cosine transform (DCT) video compression. DCT coding is a lossy image compression technique that was first proposed by Nasir Ahmed in 1972, and was later adapted into a motion-compensated DCT algorithm for video coding standards such as the H.26x formats from 1988 onwards and the MPEG formats from 1993 onwards. Motion-compensated DCT compression significantly reduces the amount of bandwidth required for a digital TV signal. By 1991, it had achieved data compression ratios from 8:1 to 14:1 for near-studio-quality HDTV transmission, down to 70–140 Mbit/s. Between 1988 and 1991, DCT video compression was widely adopted as the video coding standard for HDTV implementations, enabling the development of practical digital HDTV. Dynamic random-access memory (DRAM) was also adopted as framebuffer semiconductor memory, with growing production volumes and falling prices in the DRAM industry proving important to the commercialization of HDTV.
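For illustration, the following is a minimal NumPy sketch of the 8×8 DCT-II block transform that underlies MPEG-2/H.26x intra coding (written for this summary; the thresholding step is a toy stand-in for the standards' actual quantization matrices and entropy coding):

```python
import numpy as np

def dct2_8x8(block):
    """Orthonormal 2-D DCT-II of an 8x8 block via the separable basis matrix."""
    N = 8
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)            # DC row gets the 1/sqrt(N) normalization
    return C @ block @ C.T

# Level-shifted 8x8 block of "pixel" values, as in JPEG/MPEG intra coding
block = np.random.randint(0, 256, (8, 8)).astype(float) - 128.0
coeffs = dct2_8x8(block)

# Toy "compression": keep only the ten largest-magnitude coefficients
threshold = np.sort(np.abs(coeffs).ravel())[-10]
kept = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
print(f"coefficients kept: {np.count_nonzero(kept)} of 64")
```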
Since 1972, the International Telecommunication Union's radio telecommunications sector (ITU-R) had been working on creating a global recommendation for analog HDTV. These recommendations, however, did not fit in the broadcasting bands which could reach home users. The standardization of MPEG-1 in 1993 led to the acceptance of recommendation ITU-R BT.709. In anticipation of these standards, the DVB organization was formed, an alliance of broadcasters, consumer electronics manufacturers and regulatory bodies. The DVB develops and agrees upon specifications which are formally standardised by ETSI.
The DVB first created standards for DVB-S digital satellite TV, DVB-C digital cable TV and DVB-T digital terrestrial TV. These broadcasting systems can be used for both SDTV and HDTV. In the US the Grand Alliance proposed ATSC as the new standard for SDTV and HDTV. Both ATSC and DVB were based on the MPEG-2 standard, although DVB systems may also be used to transmit video using the newer and more efficient H.264/MPEG-4 AVC compression standards. Common to all DVB standards is the use of highly efficient modulation techniques that further reduce bandwidth and, above all, reduce receiver-hardware and antenna requirements.
In 1983, the International Telecommunication Union's radio telecommunications sector (ITU-R) set up a working party (IWP11/6) with the aim of setting a single international HDTV standard. One of the thornier issues concerned a suitable frame/field refresh rate, the world already having split into two camps, 25/50 Hz and 30/60 Hz, largely due to the differences in mains frequency. The IWP11/6 working party considered many views and throughout the 1980s served to encourage development in a number of video digital processing areas, not least conversion between the two main frame/field rates using motion vectors, which led to further developments in other areas. While a comprehensive HDTV standard was not in the end established, agreement on the aspect ratio was achieved.
Initially the existing 5:3 aspect ratio had been the main candidate but, due to the influence of widescreen cinema, the aspect ratio 16:9 (1.78) eventually emerged as being a reasonable compromise between 5:3 (1.67) and the common 1.85 widescreen cinema format. An aspect ratio of 16:9 was duly agreed upon at the first meeting of the IWP11/6 working party at the BBC's Research and Development establishment in Kingswood Warren. The resulting ITU-R Recommendation ITU-R BT.709-2 ("Rec. 709") includes the 16:9 aspect ratio, a specified colorimetry, and the scan modes 1080i (1,080 actively interlaced lines of resolution) and 1080p (1,080 progressively scanned lines). The British Freeview HD trials used MBAFF, which contains both progressive and interlaced content in the same encoding.
It also includes the alternative 1440×1152 HD-MAC scan format. (According to some reports, a mooted 750-line (720p) format (720 progressively scanned lines) was viewed by some at the ITU as an enhanced television format rather than a true HDTV format, and so was not included, although 1920×1080i and 1280×720p systems for a range of frame and field rates were defined by several US SMPTE standards.)
Inaugural HDTV broadcast in the United States
HDTV technology was introduced in the United States in the early 1990s and made official in 1993 by the Digital HDTV Grand Alliance, a group of television, electronic equipment, and communications companies consisting of AT&T Bell Labs, General Instrument, Philips, Sarnoff, Thomson, Zenith and the Massachusetts Institute of Technology. Field testing of HDTV at 199 sites in the United States was completed August 14, 1994. The first public HDTV broadcast in the United States occurred on July 23, 1996, when the Raleigh, North Carolina television station WRAL-HD began broadcasting from the existing tower of WRAL-TV southeast of Raleigh, winning a race to be first with the HD Model Station in Washington, D.C., which began broadcasting July 31, 1996 with the callsign WHD-TV, based out of the facilities of NBC owned-and-operated station WRC-TV. The American Advanced Television Systems Committee (ATSC) HDTV system had its public launch on October 29, 1998, during the live coverage of astronaut John Glenn's return mission to space on board the Space Shuttle Discovery. The signal was transmitted coast-to-coast, and was seen by the public in science centers and other public theaters specially equipped to receive and display the broadcast.
European HDTV broadcasts
Between 1988 and 1991, several European organizations were working on discrete cosine transform (DCT) based digital video coding standards for both SDTV and HDTV. The EU 256 project by the CMTT and ETSI, along with research by Italian broadcaster RAI, developed a DCT video codec that enabled near-studio-quality HDTV transmission at about 70–140 Mbit/s. The first HDTV transmissions in Europe, albeit not direct-to-home, began in 1990, when RAI broadcast the 1990 FIFA World Cup using several experimental HDTV technologies, including the digital DCT-based EU 256 codec, the mixed analog-digital HD-MAC technology, and the analog MUSE technology. The matches were shown in 8 cinemas in Italy, where the tournament was played, and 2 in Spain. The connection with Spain was made via the Olympus satellite link from Rome to Barcelona and then with a fiber optic connection from Barcelona to Madrid. After some HDTV transmissions in Europe, the standard was abandoned in 1993, to be replaced by a digital format from DVB.
The first regular broadcasts began on January 1, 2004, when the Belgian company Euro1080 launched the HD1 channel with the traditional Vienna New Year's Concert. Test transmissions had been active since the IBC exhibition in September 2003, but the New Year's Day broadcast marked the official launch of the HD1 channel, and the official start of direct-to-home HDTV in Europe.
Euro1080, a division of the later defunct Belgian TV services company Alfacam, broadcast HDTV channels to break the pan-European stalemate of "no HD broadcasts mean no HD TVs bought means no HD broadcasts ..." and kick-start HDTV interest in Europe. The HD1 channel was initially free-to-air and mainly comprised sporting, dramatic, musical and other cultural events broadcast with a multi-lingual soundtrack on a rolling schedule of four or five hours per day.
These first European HDTV broadcasts used the 1080i format with MPEG-2 compression on a DVB-S signal from SES's Astra 1H satellite. Euro1080 transmissions later changed to MPEG-4/AVC compression on a DVB-S2 signal in line with subsequent broadcast channels in Europe.
Despite delays in some countries, the number of European HD channels and viewers has risen steadily since the first HDTV broadcasts, with SES's annual Satellite Monitor market survey for 2010 reporting more than 200 commercial channels broadcasting in HD from Astra satellites, 185 million HD capable TVs sold in Europe (60 million in 2010 alone), and 20 million households (27% of all European digital satellite TV homes) watching HD satellite broadcasts (16 million via Astra satellites).
In December 2009, the United Kingdom became the first European country to deploy high-definition content using the new DVB-T2 transmission standard, as specified in the Digital TV Group (DTG) D-book, on digital terrestrial television.
The Freeview HD service contains 13 HD channels and was rolled out region by region across the UK in accordance with the digital switchover process, finally being completed in October 2012. However, Freeview HD was not the first HDTV service over digital terrestrial television in Europe; Italy's RAI started broadcasting in 1080i on April 24, 2008, using the DVB-T transmission standard.
In October 2008, France deployed five high-definition channels using the DVB-T transmission standard on digital terrestrial distribution.
Notation
HDTV broadcast systems are identified with three major parameters:
Frame size in pixels is defined as number of horizontal pixels × number of vertical pixels, for example 1280 × 720 or 1920 × 1080. Often the number of horizontal pixels is implied from context and is omitted, as in the case of 720p and 1080p.
Scanning system is identified with the letter p for progressive scanning or i for interlaced scanning.
Frame rate is identified as number of video frames per second. For interlaced systems, the number of frames per second should be specified, but it is not uncommon to see the field rate incorrectly used instead.
If all three parameters are used, they are specified in the following form: [frame size][scanning system][frame or field rate] or [frame size]/[frame or field rate][scanning system]. Often, frame size or frame rate can be dropped if its value is implied from context. In this case, the remaining numeric parameter is specified first, followed by the scanning system.
For example, 1920×1080p25 identifies progressive scanning format with 25 frames per second, each frame being 1,920 pixels wide and 1,080 pixels high. The 1080i25 or 1080i50 notation identifies interlaced scanning format with 25 frames (50 fields) per second, each frame being 1,920 pixels wide and 1,080 pixels high. The 1080i30 or 1080i60 notation identifies interlaced scanning format with 30 frames (60 fields) per second, each frame being 1,920 pixels wide and 1,080 pixels high. The 720p60 notation identifies progressive scanning format with 60 frames per second, each frame being 720 pixels high; 1,280 pixels horizontally are implied.
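The notation rules above lend themselves to mechanical parsing. The following sketch is purely illustrative (the function name and the table of implied widths are assumptions for the example, not part of any broadcast standard):

```python
import re

# Horizontal resolutions conventionally implied when only the line count is given
# (assumed lookup table for this example).
IMPLIED_WIDTH = {720: 1280, 1080: 1920}

def parse_hdtv_notation(label: str):
    """Parse labels such as '720p60', '1080i25' or '1920x1080p25'."""
    m = re.fullmatch(r"(?:(\d+)[x×])?(\d+)([pi])(\d+(?:\.\d+)?)?", label.strip())
    if m is None:
        raise ValueError(f"unrecognised notation: {label!r}")
    width, height, scan, rate = m.groups()
    height = int(height)
    return (
        int(width) if width else IMPLIED_WIDTH.get(height),
        height,
        "progressive" if scan == "p" else "interlaced",
        float(rate) if rate else None,  # may denote a frame rate or a field rate, as noted below
    )

print(parse_hdtv_notation("720p60"))        # (1280, 720, 'progressive', 60.0)
print(parse_hdtv_notation("1080i25"))       # (1920, 1080, 'interlaced', 25.0)
print(parse_hdtv_notation("1920×1080p25"))  # (1920, 1080, 'progressive', 25.0)
```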
Systems using 50 Hz support three scanning rates: 50i, 25p and 50p, while 60 Hz systems support a much wider set of frame rates: 59.94i, 60i, 23.976p, 24p, 29.97p, 30p, 59.94p and 60p. In the days of standard-definition television, the fractional rates were often rounded up to whole numbers, e.g. 23.976p was often called 24p, or 59.94i was often called 60i. Sixty Hertz high definition television supports both fractional and slightly different integer rates, therefore strict usage of notation is required to avoid ambiguity. Nevertheless, 29.97p/59.94i is almost universally called 60i, likewise 23.976p is called 24p.
For the commercial naming of a product, the frame rate is often dropped and is implied from context (e.g., a 1080i television set). A frame rate can also be specified without a resolution. For example, 24p means 24 progressive scan frames per second, and 50i means 25 interlaced frames per second.
There is no single standard for HDTV color support. Colors are typically broadcast using a (10-bits per channel) YUV color space but, depending on the underlying image-generating technologies of the receiver, are then converted to an RGB color space using standardized algorithms. When transmitted directly through the Internet, the colors are typically pre-converted to 8-bit RGB channels for additional storage savings, with the assumption that the material will be viewed only on an (sRGB) computer screen. As an added benefit to the original broadcasters, the losses of the pre-conversion essentially make these files unsuitable for professional TV re-broadcasting.
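As an illustration of the conversion step described above, the sketch below applies the Rec. 709 matrix coefficients (Kr = 0.2126, Kb = 0.0722); the use of full-range, normalised values is an assumption made for simplicity, since broadcast signals actually carry limited-range integer code values:

```python
# Minimal Rec. 709 Y'CbCr -> R'G'B' conversion (full-range, normalised values assumed).
KR, KB = 0.2126, 0.0722          # Rec. 709 luma coefficients
KG = 1.0 - KR - KB

def ycbcr709_to_rgb(y: float, cb: float, cr: float):
    """y in [0, 1]; cb and cr in [-0.5, 0.5]; returns gamma-encoded R', G', B'."""
    r = y + 2.0 * (1.0 - KR) * cr
    b = y + 2.0 * (1.0 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return r, g, b

print(ycbcr709_to_rgb(0.5, 0.0, 0.0))   # a neutral mid grey: approximately (0.5, 0.5, 0.5)
```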
Most HDTV systems support resolutions and frame rates defined either in the ATSC table 3, or in EBU specification. The most common are noted below.
Display resolutions
At a minimum, HDTV has twice the linear resolution of standard-definition television (SDTV), thus showing greater detail than either analog television or regular DVD. The technical standards for broadcasting HDTV also handle the 16:9 aspect ratio images without using letterboxing or anamorphic stretching, thus increasing the effective image resolution.
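A quick, illustrative calculation (with a 720×576 SDTV frame as the assumed reference) makes the "twice the linear resolution" point concrete: doubling both dimensions alone quadruples the pixel count, and full HD in fact carries about five times as many pixels as a 576-line SD frame because its horizontal resolution is more than doubled.

```python
# Pixel counts for an assumed 576-line SD frame and a full-HD frame.
sd = 720 * 576        # 414,720 pixels
hd = 1920 * 1080      # 2,073,600 pixels
print(hd / sd)        # 5.0; doubling each dimension alone would give a factor of 4
```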
A very high-resolution source may require more bandwidth than available in order to be transmitted without loss of fidelity. The lossy compression that is used in all digital HDTV storage and transmission systems will distort the received picture when compared to the uncompressed source.
Standard frame or field rates
ATSC and DVB define the following frame rates for use with the various broadcast standards:
23.976 Hz (film-looking frame rate compatible with NTSC clock speed standards)
24 Hz (international film and ATSC high-definition material)
25 Hz (PAL film, DVB standard-definition and high-definition material)
29.97 Hz (NTSC film and standard-definition material)
30 Hz (NTSC film, ATSC high-definition material)
50 Hz (DVB high-definition material)
59.94 Hz (ATSC high-definition material)
60 Hz (ATSC high-definition material)
The optimum format for a broadcast depends upon the type of videographic recording medium used and the image's characteristics. For best fidelity to the source, the transmitted field ratio, lines, and frame rate should match those of the source.
PAL, SECAM and NTSC frame rates technically apply only to analog standard-definition television, not to digital or high definition broadcasts. However, with the rollout of digital broadcasting, and later HDTV broadcasting, countries retained their heritage systems. HDTV in former PAL and SECAM countries operates at a frame rate of 25/50 Hz, while HDTV in former NTSC countries operates at 30/60 Hz.
Types of media
High-definition image sources include terrestrial broadcast, direct broadcast satellite, digital cable, IPTV, Blu-ray video disc (BD), and internet downloads.
In the US, residents in the line of sight of television station broadcast antennas can receive free, over-the-air programming with a television set with an ATSC tuner via a TV aerial. Laws prohibit homeowners' associations and city government from banning the installation of antennas.
Standard 35mm photographic film used for cinema projection has a much higher image resolution than HDTV systems, and is exposed and projected at a rate of 24 frames per second (frame/s). To be shown on standard television, in PAL-system countries, cinema film is scanned at the TV rate of 25 frame/s, causing a speedup of about 4 percent, which is generally considered acceptable. In NTSC-system countries, the TV scan rate of 30 frame/s would cause a perceptible speedup if the same were attempted, and the necessary correction is performed by a technique called 3:2 pulldown: over each successive pair of film frames, one is held for three video fields (1/20 of a second) and the next is held for two video fields (1/30 of a second), giving a total time for the two frames of 1/12 of a second and thus achieving the correct average film frame rate.
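The timings quoted above can be checked with a short calculation; the sketch below uses the nominal 60 fields per second (the exact NTSC rate is 60000/1001 ≈ 59.94) and is illustrative only:

```python
from fractions import Fraction

# PAL speedup: 24 frame/s film replayed at 25 frame/s.
print(float(Fraction(25, 24) - 1) * 100)   # ≈ 4.17 percent faster

# NTSC 3:2 pulldown: alternate film frames held for 3 fields, then 2 fields.
field = Fraction(1, 60)                    # nominal field duration in seconds
two_frames = 3 * field + 2 * field         # 1/20 s + 1/30 s
print(two_frames)                          # 1/12 s for two film frames ...
print(2 / two_frames)                      # ... i.e. an average of 24 frame/s, as required
```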
Non-cinematic HDTV video recordings intended for broadcast are typically recorded either in 720p or 1080i format as determined by the broadcaster. 720p is commonly used for Internet distribution of high-definition video, because most computer monitors operate in progressive-scan mode. 720p also imposes less strenuous storage and decoding requirements compared to both 1080i and 1080p. The 1080p/24, 1080i/30, 1080i/25, and 720p/30 formats are most often used on Blu-ray Disc.
Recording and compression
HDTV can be recorded to D-VHS (Digital-VHS or Data-VHS), to W-VHS (analog only), to an HDTV-capable digital video recorder (for example DirecTV's high-definition digital video recorder, Sky HD's set-top box, Dish Network's VIP 622 or VIP 722 high-definition receivers, which allow HD on the primary TV and SD on a secondary TV (TV2) without a secondary box, or TiVo's Series 3 or HD recorders), or to an HDTV-ready HTPC. Some cable boxes are capable of receiving or recording two or more broadcasts at a time in HDTV format, and HDTV programming, some included in the monthly cable service subscription price and some for an additional fee, can be played back with the cable company's on-demand feature.
The massive amount of data storage required to archive uncompressed streams meant that inexpensive uncompressed storage options were not available to the consumer. In 2008, the Hauppauge 1212 Personal Video Recorder was introduced. This device accepts HD content through component video inputs and stores the content in MPEG-2 format as a .ts file, or in a Blu-ray-compatible .m2ts file, on the hard drive or DVD burner of a computer connected to the PVR through a USB 2.0 interface. More recent systems are able to record a broadcast high definition program in its 'as broadcast' format or transcode to a format more compatible with Blu-ray.
Analog tape recorders with bandwidth capable of recording analog HD signals, such as W-VHS recorders, are no longer produced for the consumer market and are both expensive and scarce in the secondary market.
In the United States, as part of the FCC's plug and play agreement, cable companies are required to provide customers who rent HD set-top boxes with a set-top box with "functional" FireWire (IEEE 1394) on request. None of the direct broadcast satellite providers have offered this feature on any of their supported boxes, but some cable TV companies have. Satellite boxes are not included in the FCC mandate. This content is protected by encryption known as 5C. This encryption can prevent duplication of content or simply limit the number of copies permitted, thus effectively denying most if not all fair use of the content.
See also
Display motion blur
Glossary of video terms
High Efficiency Video Coding
List of digital television deployments by country
Optimum HDTV viewing distance
Ultra-high-definition television (UHD or UHDTV)
References
Further reading
Joel Brinkley (1997), Defining Vision: The Battle for the Future of Television, New York: Harcourt Brace.
High Definition Television: The Creation, Development and Implementation of HDTV Technology by Philip J. Cianci (McFarland & Company, 2012)
Technology, Television, and Competition (New York: Cambridge University Press, 2004)
External links
History
L'Alta Definizione a Torino 1986–2006, the Italian HDTV experience from the 1980s to 2006 (in Italian), C.R.I.T./RAI
The HDTV Archive Project
European adoption
Image formats for HDTV, article from the EBU Technical Review
High Definition for Europe: a progressive approach, article from the EBU Technical Review
High Definition (HD) Image Formats for Television Production, technical report from the EBU
High-definition television
Telecommunications-related introductions in 1936
Telecommunications-related introductions in 1990
ATSC
Consumer electronics
Digital television
Television technology
Television terminology | High-definition television | [
"Technology"
] | 6,395 | [
"Information and communications technology",
"Television technology"
] |
16,315,705 | https://en.wikipedia.org/wiki/Egg%20tossing%20%28behavior%29 | Egg tossing or egg destruction is a behavior observed in some species of birds in which one individual removes an egg from the communal nest. It is related to infanticide, in which parents kill their own or others' offspring. Egg tossing is observed most often in female birds involved in cooperative breeding or brood parasitism. Among colonial non-co-nesting birds, egg tossing is performed by an individual of the same species; in the case of brood parasites, it may be done by either the same or a different species. Egg tossing offers advantages and disadvantages to both the actor and the recipient.
Behavior
Tossing of eggs is non-accidental; the individual rolls the egg to the edge of the nest by repeatedly flicking it with its beak. In brood-parasitic birds, such as the common cuckoo, the chick will push host eggs out using its back. During co-nesting, before a bird starts laying its own eggs, it will toss out eggs laid previously by other females. As a result, the last egg-layers may contribute more eggs to the common nest, increasing the chance that newly laid eggs bearing that female's genetic material survive. In some species, egg-tossing is a strategy of clutch coordination; eggs are tossed until all birds in the common nest are ready to proceed with brooding. This helps to prevent early egg-layers from dominating reproduction.
Species
Some examples of communal breeders that demonstrate the egg tossing behavior are: ostriches, groove-billed anis, acorn woodpeckers, gray-breasted jays, guira cuckoos, smooth-billed anis, and common cuckoos.
Advantages and disadvantages
Advantages
Performing the egg tossing behavior increases the number of offspring per individual compared to those in single pairs. Many species have learned to adapt to this behavior to increase the chances of offspring survival.
The smooth-billed ani is one species that participates in communal breeding, with multiple females in a group. The number of eggs produced per individual in such groups is greater than in single-female groups, a difference attributed to stronger competition between females to have their own eggs hatch successfully and to the large amount of egg loss. When there are more females in a group, the majority of egg loss is due to egg tossing.
The acorn woodpecker showed that when in a group of 7–8 individuals, the success rate of reproduction increased, but would decrease if more members joined the group. When there were two females in the clutch the success rate would decrease compared to a single-female clutch due to conflicts such as egg tossing.
In the guira cuckoo, up to 7 females share a nest and perform egg tossing behavior. Eggs that are laid in the early period of production are more likely to be tossed out of the nest by another female. When the group size increases, the behaviors that attempt to disrupt egg hatching or laying by others increase.
Disadvantage
Laying eggs late prevents the chicks from being tossed out of the nest, but it can have a negative impact on the offspring's survival. Late egg-laying causes later hatching, which increases the probability of death, since these late chicks will be smaller than their nestmates, putting them at risk.
Adaptation
In the acorn woodpecker, it has been observed that the egg destruction behavior causes egg-laying to be synchronized between females. This synchronization of egg-laying allows for all females to have the same opportunity to have a similar number of eggs in the nest. The larger the communal breeding group, the longer it takes for this synchronization to occur.
Ostriches are usually found in groups of two to seven, with only one major hen, which incubates the nest with the single male. The female ostriches lay their eggs at the same time, leading to too many eggs in the nest. The major hen is able to detect which eggs belong to her, and will push the other eggs to the perimeter of the nest, where they are not looked after. This adaptation of abandoning surplus eggs protects the well-tended eggs from predators.
The groove-billed ani and the guira cuckoo stop tossing eggs once they have started laying their own eggs in the nest. This behavior prevents them from unknowingly tossing one of their own eggs out of the nest.
By brood parasites
Several species increase their offspring's chance of survival through a strategy related to egg tossing: brood parasitism. These species lay their eggs in the nests of different species, allowing the offspring to survive without their direct contribution. Some bird species that exhibit this behavior are the black-headed duck, the common cuckoo, and the cowbirds. There are two types of brood parasitism: one in which females lay their eggs in the nest of the same species, and one in which the eggs are laid in the nest of a different species.
The common cuckoo is a species of cuckoo that exhibits brood parasitism in the nests of different species. It accomplishes this by watching the nest of a potential host; once the host leaves the nest, the female cuckoo removes one of the host's eggs and replaces it with one of her own. The female cuckoo takes no part in caring for her offspring; instead, she leaves the host's nest and looks for another nest in which she can lay more eggs. The young cuckoo will often stay in the nest and take advantage of feeding by the host mother, even after the cuckoo is much larger and evidently not the host's offspring.
A common host in whose nest the cuckoo chooses to place its eggs is the reed warbler. The common cuckoo identifies warblers' nests and chooses a specific nest in which to lay depending on the foliage and the distance to the nest.
Common cuckoos demonstrate egg tossing behavior as hatchlings. Once the cuckoo eggs placed into the host nest hatch, the chicks push the other species' eggs out of the nest with their backs. This behavior greatly benefits the cuckoo's survival, as the chick is able to grow and feed without any competition from other members of the nest.
The cowbird is another parasitic species; it lays its eggs in the nests of other species, such as the eastern phoebe. Although the cowbird's eggs differ in size and colour, the eastern phoebe will still provide parental care unless there is a partial clutch reduction (PCR).
Brood parasites use different methods to trick the host into raising their young; however, some hosts have developed counter-adaptations. These reciprocal adaptations between hosts and brood parasites are an example of co-evolution.
Species
Brood parasitism is a rare behavior, exhibited by about 1% of the roughly 10,000 bird species in the world. The birds that display this behavior include 57 species of cuckoos, 5 species of cowbirds, 17 species of honeyguides, 20 species of African finches, and one duck, the black-headed duck.
References
Oology
Bird behavior
Brood parasites | Egg tossing (behavior) | [
"Biology"
] | 1,464 | [
"Behavior by type of animal",
"Behavior",
"Bird behavior"
] |
16,316,103 | https://en.wikipedia.org/wiki/Neuroepidemiology | Neuroepidemiology is the science of the incidence, prevalence, risk factors, natural history and prognosis of neurological disorders. It also encompasses experimental neuroepidemiology, which is research based on clinical trials of the effectiveness or efficacy of various interventions in neurological disorders.
Publications
In 1982, Karger set up a new journal titled Neuroepidemiology. This periodical was the first major international journal devoted to the study of neurological disease distribution and determinants of frequency in human populations. It publishes manuscripts on all aspects of epidemiology of neurological disorders, including clinical trials, public health, health care delivery, as well as research methodology.
The founding editor-in-chief was Bruce Schoenberg. George Jelinek, Head of the Neuroepidemiology Unit at The University of Melbourne, was later appointed a Specialty Chief Editor of the journal Frontiers in Neurology.
Congresses
To present advances in non-experimental and experimental (clinical trials) epidemiology of neurological disorders, the First International Congress on Clinical Neurology and Epidemiology was held in 2009.
Programs and training
Several institutions in the United States offer formal training and research experience in neuroepidemiology, including:
Training scholarships are offered for pre- and post-doctoral scholars by the Epidemiology Department at the University of Pittsburgh Graduate School of Public Health
The Neuroepidemiology Training Program offered by the Columbia University Mailman School of Public Health, and
The University of Maryland Neuroepidemiology Training Program.
In addition, the Center for Stroke Research in the Department of Neurology and Rehabilitation at the University of Illinois College of Medicine offers a fellowship in neuroepidemiology. Michigan State University also offers a neuroepidemiology fellowship as part of the International Neurologic & Psychiatric Epidemiology Program.
As the field of neuroepidemiology continues to expand, research groups have developed at some of the leading medical research institutes across the United States. Currently active research groups can be found at:
The University of Pittsburgh Graduate School of Public Health's e-Brain research group has multiple ongoing research projects. Their conceptual model highlights the use of neuroimaging in their research.
Harvard University's School of Public Health. At HSPH the Neuroepidemiology Research Group is actively investigating neurological diseases including multiple sclerosis, Parkinson's disease, and amyotrophic lateral sclerosis, among others.
The University of California, San Francisco has developed a Neuroepidemiology Research Group through the UCSF Department of Neurological Surgery.
Other prominent organizations such as the National Institute of Environmental Health Sciences and Kaiser Permanente have established research programs in neuroepidemiology.
The American Academy of Neurology provides additional information on career paths in neuroepidemiology.
References
Clinical neuroscience
Epidemiology | Neuroepidemiology | [
"Environmental_science"
] | 590 | [
"Epidemiology",
"Environmental social science"
] |
16,316,638 | https://en.wikipedia.org/wiki/DECHEMA | DECHEMA is an abbreviation for "Deutsche Gesellschaft für chemisches Apparatewesen" (German Society for Chemical Apparatus), though it has since been expanded to "Deutsche Gesellschaft für Chemische Technik und Biotechnologie" (German Society for Chemical Engineering and Biotechnology).
Founded in 1926, this is a non-profit organisation based in Frankfurt. It has over 5000 chemists, biotechnologists, and engineers as personal members, as well as other organisations and company members. DECHEMA awards prizes, such as the DECHEMA medal "for outstanding achievements in the field of chemical apparatus technology."
The main purpose of DECHEMA is to support developments in chemical technology, biotechnology, and environmental protection. It is seen as an interface between science, the economy, industry, and the public. To this end, the trade fair ACHEMA is organised every three years in Frankfurt am Main, Germany, as the biggest event for chemical technology and biotechnology.
DECHEMA is a member of the European Federation of Chemical Engineering, and acts as joint Secretariat.
See also
Elektrolytdatenbank Regensburg
Dortmund Data Bank
References
External links
Main website
Connecat: website concerned with catalysis
Dechema Chemistry Data Series
Detherm
Roadmap for catalysis research
Chemical industry in Germany
Chemistry societies
Scientific organisations based in Germany | DECHEMA | [
"Chemistry"
] | 273 | [
"Chemistry societies",
"nan"
] |
16,317,992 | https://en.wikipedia.org/wiki/Marine%20plastic%20pollution | Marine plastic pollution is a type of marine pollution by plastics, ranging in size from large original material such as bottles and bags, down to microplastics formed from the fragmentation of plastic material. Marine debris is mainly discarded human rubbish which floats on, or is suspended in, the ocean. Eighty percent of marine debris is plastic. Microplastics and nanoplastics result from the breakdown or photodegradation of plastic waste in surface waters, rivers or oceans. Scientists have also detected nanoplastics in snowfall; an estimated 3,000 tons are deposited over Switzerland each year.
It is approximated that there is a stock of 86 million tons of plastic marine debris in the worldwide ocean as of the end of 2013, assuming that 1.4% of global plastics produced from 1950 to 2013 has entered the ocean and has accumulated there. Global consumption of plastics is estimated to be 300 million tonnes per year as of 2022, with around 8 million tonnes ending up in the oceans as macroplastics. Approximately 1.5 million tonnes of primary microplastics end up in the seas. Around 98% of this volume is created by land-based activities, with the remaining 2% being generated by sea-based activities. It is estimated that 19–23 million tonnes of plastic leaks into aquatic ecosystems annually. The 2017 United Nations Ocean Conference estimated that the oceans might contain more weight in plastics than fish by the year 2050.
Oceans are polluted by plastic particles ranging in size from large original material such as bottles and bags, down to microplastics formed from the fragmentation of plastic material. This material is only very slowly degraded or removed from the ocean so plastic particles are now widespread throughout the surface ocean and are known to be having deleterious effects on marine life. Discarded plastic bags, six-pack rings, cigarette butts and other forms of plastic waste which finish up in the ocean present dangers to wildlife and fisheries. Aquatic life can be threatened through entanglement, suffocation, and ingestion. Fishing nets, usually made of plastic, can be left or lost in the ocean by fishermen. Known as ghost nets, these entangle fish, dolphins, sea turtles, sharks, dugongs, crocodiles, seabirds, crabs, and other creatures, restricting movement, causing starvation, laceration, infection, and, in those that need to return to the surface to breathe, suffocation. There are various types of ocean plastics causing problems to marine life. Bottle caps have been found in the stomachs of turtles and seabirds, which have died because of the obstruction of their respiratory and digestive tracts. Ghost nets are also a problematic type of ocean plastic as they can continuously trap marine life in a process known as "ghost fishing".
The 10 largest emitters of oceanic plastic pollution worldwide are, from the most to the least, China, Indonesia, Philippines, Vietnam, Sri Lanka, Thailand, Egypt, Malaysia, Nigeria, and Bangladesh, largely through the Yangtze, Indus, Yellow River, Hai, Nile, Ganges, Pearl River, Amur, Niger, and Mekong, and accounting for "90 percent of all the plastic that reaches the world's oceans". Asia was the leading source of mismanaged plastic waste, with China alone accounting for 2.4 million metric tons. The Ocean Conservancy has reported that China, Indonesia, Philippines, Thailand, and Vietnam dump more plastic in the sea than all other countries combined.
Plastics accumulate because they do not biodegrade in the way many other substances do. They will photodegrade on exposure to the sun, but they do so properly only under dry conditions, and water inhibits this process. In marine environments, photo-degraded plastic disintegrates into ever-smaller pieces while remaining polymers, even down to the molecular level. When floating plastic particles photodegrade down to zooplankton sizes, jellyfish attempt to consume them, and in this way the plastic enters the ocean food chain.
Solutions to marine plastic pollution, and to plastic pollution in the environment as a whole, will be intertwined with changes in manufacturing and packaging practices and with reduced use of single-use and short-lived plastic products. Many ideas exist for cleaning up plastic in the oceans, including trapping plastic particles at river mouths before they enter the ocean and cleaning up the ocean gyres.
Scope of the problem
Marine pollution caused by plastic substances is recognized as an issue of the highest magnitude, from a pollution perspective. A majority of plastics used in people's day to day lives are never recycled. Single use plastics of this kind contribute significantly to the 8 million tons of plastic waste found in the ocean each year. If this trend continues, by the year 2050 there will be more plastic than fish in the ocean by weight. In just the first decade of this century, more plastic was created than in all of history up to the year 2000, and a majority of that plastic is not recycled. One estimate of the historic production of plastic gives a figure of 8,300 million metric tonnes (Mt) for global plastic production up to 2015, of which 79% has accumulated in landfills or the natural environment. According to the IUCN, this annual figure has grown to 14 million tons of plastic. There are an estimated 15 to 51 trillion pieces of plastic in the world's oceans, stretching from the surface to the seafloor. Oceans are Earth's deepest and most extensive basins, with average depths of the abyssal plains being about 4 km beneath sea level. Gravity naturally moves and transfers materials from land to the ocean, with the ocean becoming the end-repository. Oceanic plastic pollution is remarkable for the sheer ubiquity of its presence, from ocean trenches, within deep sea sediment, on the ocean floor and ocean ridges to the ocean surface and coastal margins of oceans. Even remote island atolls can have beaches loaded with plastic from a faraway source. At the ocean surface, plastic debris is concentrated within circular structures of large areal extent, called ocean gyres. Ocean gyres form within all oceans, due to alternating patterns of zonal winds that drive equatorward interior transport in the subtropics, and poleward interior transport in subpolar oceans. Ocean currents concentrate plastic waste within the gyres.
Plastics are manufactured in ever-increasing quantities because they are flexible, moldable and durable, qualities that give plastic a myriad of useful applications. Plastics are remarkably resistant to natural weathering processes that break down many other materials at the Earth's surface. Ocean processes, including storms, wave action, ocean currents, hydration, and surface exposure to atmospheric weathering processes (e.g. oxidation) and ultraviolet radiation, tend to break plastic particles into ever-decreasing sizes (resulting in microplastics), rather than organically digest or chemically alter plastic substances. Estimates of the total number and weight of plastic across five ocean gyre plastic concentration zones are of the order of 5.25 trillion particles weighing almost 300,000 tons. The reduction in size of plastic particles to the millimeter and micro-scales allows plastic to settle within deep sea sediments, with perhaps four times as much plastic ending up within sediments compared to surface ocean waters. Plastics are now a part of complex biogeochemical cycles, with living organisms such as cetaceans, seabirds, mammals, and bacteria ingesting plastic.
Over 300 million tons of plastic are produced every year, half of which is used in single-use products like cups, bags, and packaging. It is estimated that 19–23 million tonnes of plastic leaks into aquatic ecosystems annually. It is impossible to know for sure, but it is estimated that about 150 million metric tons of plastic exists in our oceans. Plastic pollution makes up 80% of all marine debris from surface waters to deep-sea sediments. Because plastics are light, much of this pollution is seen in and around the ocean surface, but plastic trash and particles are now found in most marine and terrestrial habitats, including the deep sea, Great Lakes, coral reefs, beaches, rivers, and estuaries. Submarine canyons are important accumulation sites as well, contributing to the transfer of such debris to the deep sea. The most eye-catching evidence of the ocean plastic problem are the garbage patches that accumulate in gyre regions. A gyre is a circular ocean current formed by the Earth's wind patterns and the forces created by the rotation of the planet. There are five main ocean gyres: the North and South Pacific Subtropical Gyres, the North and South Atlantic Subtropical Gyres, and the Indian Ocean Subtropical Gyre. There are significant garbage patches in each of these.
Larger plastic waste (macroplastics) can be ingested by marine species, filling their stomachs and leading them to believe they are full when in fact they have taken in nothing of nutritional value. This can bring seabirds, whales, fish, and turtles to die of starvation with plastic-filled stomachs. Marine species can also be suffocated or entangled in plastic garbage.
Macroplastic waste can break down and weather into smaller fragments of plastic debris, known as microplastics when they are smaller than 5 mm in size. Sunlight exposure, temperature, humidity, waves, and wind begin to break the plastic down into pieces smaller than five millimeters long. Plastics can also be broken down by smaller organisms that eat plastic debris, breaking it down into small pieces, and either excrete these microplastics or spit them out. In lab tests, it was found that amphipods of the species Orchestia gammarellus could quickly devour pieces of plastic bags, shredding a single bag into 1.75 million microscopic fragments. Although the plastic is broken down, it is still a man-made material that does not biodegrade. It is estimated that approximately 90% of the plastics in the pelagic marine environment are microplastics. There are also primary sources of microplastics, such as microbeads and nurdles. These microplastics are frequently consumed by marine organisms at the base of the food chain, like plankton and fish larvae, which leads to a concentration of ingested plastic up the food chain. Plastics are produced with toxic chemicals, so these toxic substances enter the marine food chain, including the fish that some humans eat.
Types of sources and amounts
Plastic waste entering the seas is increasing each year, with much of it in particles smaller than 5 millimetres. It was estimated that there were approximately 150 million tonnes of plastic pollution in the world's oceans, projected to grow to 250 million tonnes by 2025. Another study estimated that in 2012 the total was approximately 165 million tonnes. In 2020 a study found that the Atlantic Ocean contains approximately ten times more plastic than was previously thought. The largest single type of plastic pollution (~10%), and the majority of large plastic in the oceans, is discarded and lost nets from the fishing industry.
The Ocean Conservancy reported that China, Indonesia, Philippines, Thailand, and Vietnam dump more plastic in the sea than all other countries combined.
One study estimated that there are more than 5 trillion plastic pieces (classified into the four classes of small microplastics, large microplastics, meso- and macroplastics) afloat at sea. In 2020, new measurements found more than 10 times as much plastic in the Atlantic Ocean as previously estimated to be there.
In October 2019, when research indicated a substantial proportion of ocean plastic pollution comes from Chinese cargo ships, an Ocean Cleanup spokesperson said: "Everyone talks about saving the oceans by stopping using plastic bags, straws and single use packaging. That's important, but when we head out on the ocean, that's not necessarily what we find."
Almost 20% of plastic debris that pollutes ocean water, which translates to 5.6 million tonnes, comes from ocean-based sources. MARPOL, an international treaty, "imposes a complete ban on the at-sea disposal of plastics". Merchant ships expel cargo, sewage, used medical equipment, and other types of waste that contain plastic into the ocean. In the United States, the Marine Plastic Pollution Research and Control Act of 1987 prohibits discharge of plastics in the sea, including from naval vessels. Naval and research vessels eject waste and military equipment that are deemed unnecessary. Pleasure craft release fishing gear and other types of waste, either accidentally or through negligent handling. The largest ocean-based source of plastic pollution is discarded fishing gear (including traps and nets), estimated to be up to 90% of plastic debris in some areas.
Continental plastic litter enters the ocean largely through storm-water runoff, flowing into watercourses or directly discharged into coastal waters. Plastic in the ocean has been shown to follow ocean currents which eventually form into what is known as Great Garbage Patches.
Microplastics and macroplastics do not enter the ocean only through direct dumping into marine ecosystems; much of the material arrives through polluted rivers that act as passageways to oceans across the globe. Rivers can act as either a source or a sink depending on the context. Rivers are thought to be a major source of plastic pollution for the ocean, although possibly not as large a one as direct input from coastal populations.
The amount of plastic recorded in the ocean is considerably less than the amount of plastic entering the ocean at any given time. According to a study done in the UK, the top ten dominant macroplastic typologies are solely consumer related. Within this study, 192,213 litter items were counted; on average 71% were plastic and 59% were consumer-related macroplastic items. Even though freshwater pollution is a major contributor to marine plastic pollution, few studies have been done and little data collected on the amount of pollution passing from freshwater to marine environments. The majority of papers conclude that data collection on plastic debris in freshwater and natural terrestrial environments is minimal, even though these are major contributors. Policy changes in production, usage, disposal, and waste management are needed to decrease the amount of plastic entering freshwater environments.
A 1994 study of the seabed using trawl nets in the north-western Mediterranean around the coasts of Spain, France, and Italy reported mean concentrations of debris of 1,935 items per square kilometre. Plastic debris accounted for 77%, of which 93% was plastic bags.
Buoyancy
Approximately half of the plastic material introduced to the marine environment is buoyant, but fouling by organisms can cause plastic debris to sink to the sea floor, where it may interfere with sediment-dwelling species and sedimental gas exchange processes. Several factors contribute to microplastic's buoyancy, including the density of the plastic it is composed of as well as the size and shape of the microplastic fragments themselves. Microplastics can also form a buoyant biofilm layer on the ocean's surface. Buoyancy changes in relation to ingestion of microplastics have been clearly observed in autotrophs because the absorption can interfere with photosynthesis and subsequent gas levels. However, this issue is of more importance for larger plastic debris.
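As a simple first-order illustration of the density factor mentioned above, the sketch below compares approximate textbook polymer densities against typical surface seawater; the figures are assumptions for the example, and real buoyancy also depends on particle shape, trapped air and biofouling, as noted.

```python
# First-order buoyancy check: a solid particle floats if its density is below seawater's.
SEAWATER_DENSITY = 1.025  # g/cm^3, typical surface seawater

polymer_density_g_cm3 = {   # approximate densities of common polymers
    "polypropylene (PP)": 0.91,
    "polyethylene (PE)": 0.95,
    "polystyrene (PS, solid)": 1.05,
    "nylon (PA)": 1.14,
    "PET": 1.38,
    "PVC": 1.40,
}

for polymer, density in polymer_density_g_cm3.items():
    state = "floats" if density < SEAWATER_DENSITY else "sinks"
    print(f"{polymer:<25} {density:.2f} g/cm^3 -> initially {state}")
# PP and PE tend to float; denser polymers such as PET and PVC tend to sink,
# while fouling by organisms can eventually sink even the buoyant ones.
```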
Land-based sources of ocean plastic pollution
Estimates for the contribution of land-based plastic vary widely. One study estimated that a little over 80% of plastic debris in ocean water comes from land-based sources. In 2015, it was calculated that, of the plastic waste generated in 192 coastal countries in 2010, only up to about 5% entered the ocean.
Most land-based plastic pollution enters the ocean from South, Southeast, and East Asia, with the largest emitters including China, Indonesia, Philippines, and India.
A source that has caused concern is landfills. Most waste in the form of plastic in landfills are single-use items such as packaging. Discarding plastics this way leads to accumulation. Although disposing of plastic waste in landfills has less of a gas emission risk than disposal through incineration, the former has space limitations. Another concern is that the liners acting as protective layers between the landfill and environment can break, thus leaking toxins and contaminating the nearby soil and water. Landfills located near oceans often contribute to ocean debris because content is easily swept up and transported to the sea by wind or small waterways like rivers and streams. Marine debris can also result from sewage water that has not been efficiently treated, which is eventually transported to the ocean through rivers. Plastic items that have been improperly discarded can also be carried to oceans through storm waters.
Nurdles
Microplastics
A growing concern regarding plastic pollution in the marine ecosystem is the use of microplastics. Microplastics are beads of plastic less than 5 millimeters wide, and they are commonly found in hand soaps, face cleansers, and other exfoliators. When these products are used, the microplastics pass through the water filtration system and into the ocean, because their small size allows them to escape capture by the preliminary treatment screens at wastewater plants. These beads are harmful to organisms in the ocean, especially filter feeders, because they can easily ingest the plastic and become sick. Microplastics are of particular concern because their small size makes them difficult to clean up, so people can try to avoid these harmful plastics by purchasing products that use environmentally safe exfoliants.
Because plastic is so widely used across the planet, microplastics have become widespread in the marine environment. For example, microplastics can be found on sandy beaches and surface waters as well as in the water column and deep sea sediment. Microplastics are also found within the many other types of marine particles such as dead biological material (tissue and shells) and some soil particles (blown in by wind and carried to the ocean by rivers). Population density and proximity to urban centers have been considered the main factors that influence the abundance of microplastics in the environment.
A greater concentration of microplastics have been associated with rainfall events. The runoff after rainfall on land, where plastic production and degradation rate of plastic debris is higher, could deliver these microplastics into the aquatic environment. The greater the rainfall, the stronger the erosion effect of surface runoff on land will be, and the more plastic debris will be transported.
Microplastics enter waterways through many avenues including deterioration of road paint, tire wear and city dust entering the waterways, plastic pellets spilled from shipping containers, ghost nets and other synthetic textiles dumped into the ocean, cosmetics discharged and laundry products entering sewage water and marine coatings on ships degrading.
Upon reaching marine environments, due to their small size and low density, microplastics are transported over long distances via wind and surface ocean currents. The transportation is affected by their inherent characteristics (texture and shape) but also environmental factors such as flow velocity, matrix type and seasonal variability. Numerical models are able to trace small plastic debris (micro- and meso-plastics) drifting in the ocean, thus predicting their fate.
Some microplastics leave the sea and enter the air, as researchers from the University of Strathclyde discovered in 2020. Some remain on the ocean's surface; microplastics account for 92% of plastic debris on the ocean's surface, according to a 2018 study. And some sink to the ocean floor. Australia's national science agency CSIRO estimated that 14 million metric tons of microplastics are already on the ocean floor in 2020. This represents an increase from a 2015 estimate that the world's oceans contain 93–236 thousand metric tons of microplastics and a 2018 estimate of 270 thousand tons.
A study of the distribution of eastern Pacific Ocean surface plastic debris (not specifically microplastic, although, as previously mentioned, most is likely microplastic) helps to illustrate the rising concentration of plastics in the ocean. By using data on surface plastic concentration (pieces of plastic per km2) from 1972 to 1985 (n=60) and 2002–2012 (n=457) within the same plastic accumulation zone, the study found that the mean plastic concentration increased between the two periods, with a roughly 10-fold rise from 18,160 to 189,800 pieces of plastic per km2.
Arctic Ocean microplastics come mainly from Atlantic sources, especially Europe and North America. Recent studies have revealed that the concentration of microplastics on glaciers or snow is surprisingly higher than even urban water bodies, even though microplastics are not directly used or produced near glaciers. As of 2021, Europe and Central Asia account for around 16% of global microplastics discharge into the seas.
A higher concentration of microplastics in glaciers indicates that transport via wind is a significant pathway to distribute microplastics in the environment.
Microplastics can accumulate in the whitecaps of ocean waves or sea foam and increase the stability of breaking waves, potentially affecting sea albedo or atmosphere-ocean gas exchange. A study found that microplastics from oceans have been found in sea breeze and may re-enter the atmosphere.
Microplastics can concentrate in the gills and intestines of marine life and can interfere with their feedings habits, typically resulting in death. Microplastics have been shown to induce a lethargic swimming and feeding behavior in fish, mussels and nematodes, under severe overload situations. Microplastic size is an important feature for the production of toxic effects on the different organisms, however, the tissue structure and anatomy of each organism play an important role in the severity of the damage that these particles can produce.
Bioaccumulation of microplastics can have a huge effect on the food web, thus altering ecosystems and contributing to loss of biodiversity. Once ingested, microplastics will either be egested or retained by an organism. If a predator consumes an organism that has retained microplastic, the predator will be indirectly consuming this plastic as part of its diet, in a process referred to as "trophic transfer'. Retention of plastics can be influenced by food availability and shape but will be governed by the size of the plastic. Ingested microplastics will typically be passed along the intestinal tract, then will either be adsorbed across the gut lining, become entrapped in the gut (i.e., intestinal blockage causing retention of plastic), or become incorporated into the animal's feces and egested.
The ingestion of plastic by marine organisms has now been established at full ocean depth. Microplastic was found in the stomachs of hadal amphipods sampled from the Japan, Izu-Bonin, Mariana, Kermadec, New Hebrides and the Peru-Chile trenches. The amphipods from the Mariana Trench were sampled at 10,890 m and all contained microfibres.
According to one recent research estimate, a person who consumes seafood ingests about 11,000 particles of microplastic per year. Even very minute microplastics have been discovered in human blood.
Research studies
The extent of microplastic pollution in the deep sea has yet to be fully determined, and as a result scientists are currently examining organisms and studying sediments to better understand this issue. A 2013 study surveyed four separate locations to represent a wider range of marine habitats at depths ranging from 1100 to 5000 m. Three of the four locations had identifiable amounts of microplastics present in the top 1 cm layer of sediment. Core samples were taken from each spot and had their microplastics filtered out of the normal sediment. The plastic components were identified using micro-Raman spectroscopy; the results showed man-made pigments commonly used in the plastic industry. In 2016, researchers used an ROV to collect nine deep-sea organisms and core-top sediments. The nine deep-sea organisms were dissected and various organs were examined by the researchers on shore to identify microplastics with a microscope. The scientists found that six out of the nine organisms examined contained microplastics, all of which were microfibers, specifically located in the GI tract. Research performed by MBARI in 2013 off the west coast of North America and around Hawaii found that out of all the debris observed from 22 years of VARS database video footage, one-third of the items were plastic bags. This debris was most common below 2000 m depth. A recent study that collected organisms and sediments in the Abyssopelagic Zone of the Western Pacific Ocean extracted materials from samples and discovered that poly(propylene-ethylene) copolymer (40.0%) and polyethylene terephthalate (27.5%) were the most commonly detected polymers.
Another study was conducted by collecting deep-sea sediment and coral specimens between 2011 and 2012 in the Mediterranean Sea, Southwest Indian Ocean, and Northeast Atlantic Ocean. Of the 12 coral and sediment samples taken, all were found with an abundance of microplastics. Rayon is not a plastic but was included in the study due to being a common synthetic material. It was found in all samples and comprised 56.9% of materials found, followed by polyester (53.4%), plastics (34.1%) and acrylic (12.4%). This study found that the amount of microplastics, in the form of microfibres, was comparable to that found in intertidal or subtidal sediments. A 2017 study had a similar finding – by surveying the Rockall Trough in the Northeast Atlantic Ocean at a depth of more than 2200 meters, microplastic fibers were identified at a concentration of 70.8 particles per cubic meter. This is comparable to amounts reported in surface waters. This study also looked at micropollution ingested by benthic invertebrates Ophiomusium lymani, Hymenaster pellucidus and Colus jeffreysianus and found that of the 66 organisms studied, 48% had ingested microplastics in quantities also comparable to coastal species. A recent review of 112 studies found the highest plastic ingestion in organisms collected in the Mediterranean and Northeast Indian Ocean with significant differences among plastic types ingested by different groups of animals, including differences in colour and the type of prevalent polymers. Overall, clear fibre microplastics are likely the most predominant types ingested by marine megafauna around the globe.
In 2020 scientists created what may be the first scientific estimate of how much microplastic currently resides in Earth's seafloor, after investigating six areas of ~3 km depth ~300 km off the Australian coast. They found the highly variable microplastic counts to be proportionate to plastic on the surface and to the angle of the seafloor slope. By averaging the microplastic mass per cm3, they estimated that Earth's seafloor contains about 14 million tons of microplastic – about double the amount they estimated based on data from earlier studies – while calling both estimates "conservative", as coastal areas are known to contain much more microplastic. These estimates are about one to two times the amount of plastic thought to enter the oceans annually.
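The arithmetic behind such a global figure is simple scaling from an average sediment concentration up to the area of the ocean floor. The following Python sketch shows the kind of calculation involved; the concentration, sediment depth and seafloor area below are illustrative assumptions chosen only to reproduce the order of magnitude quoted above, and are not the values used in the 2020 study.

# Hedged back-of-the-envelope sketch; all inputs are assumed, not measured.
avg_mass_per_cm3_g = 3.9e-7      # assumed mean microplastic mass per cm^3 of sediment
sampled_depth_cm = 10            # assumed thickness of the sediment layer considered
seafloor_area_km2 = 361e6        # approximate area of the global ocean floor

seafloor_area_cm2 = seafloor_area_km2 * 1e10             # 1 km^2 = 1e10 cm^2
sediment_volume_cm3 = seafloor_area_cm2 * sampled_depth_cm
total_mass_tonnes = avg_mass_per_cm3_g * sediment_volume_cm3 / 1e6  # grams to tonnes

print(f"Estimated seafloor microplastic: {total_mass_tonnes:.2e} tonnes")
# With these assumed inputs the result is roughly 1.4e7 tonnes (~14 million).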
Two billion people worldwide lack adequate garbage collection facilities to capture harmful plastics. Improved wastewater treatment and stormwater management in many poor nations would prevent part of the 1.5 million tonnes of microplastics from entering the marine ecosystems each year.
Toxic chemicals
Toxic additives used in the manufacture of plastic materials can leach out into their surroundings when exposed to water. Approximately 8000–19000 tonnes of additives are transported with buoyant plastic matrices globally with a significant portion also transported to the Arctic. Waterborne hydrophobic pollutants collect and magnify on the surface of plastic debris, thus making plastic far more deadly in the ocean than it would be on land. Hydrophobic contaminants are also known to bioaccumulate in fatty tissues, biomagnifying up the food chain and putting pressure on apex predators and humans. Some plastic additives are known to disrupt the endocrine system when consumed, others can suppress the immune system or decrease reproductive rates.
Floating debris can also absorb persistent organic pollutants from seawater, including PCBs, DDT, and PAHs. Plastic debris can absorb toxic chemicals from ocean pollution, potentially poisoning any creature that eats it. Aside from toxic effects when ingested, some of these chemicals affect animal brain cells similarly to estradiol, causing hormone disruption in the affected wildlife. A study found that when plastics eventually decompose, they release potentially toxic bisphenol A (BPA) and polystyrene oligomers into the water; these toxins are believed to harm the marine life living in the area. Bisphenol A (BPA) is a well-known example of a plasticizer produced in high volumes for food packaging, from which it can leach into food, leading to human exposure. As an estrogen and glucocorticoid receptor agonist, BPA interferes with the endocrine system and is associated with increased fat in rodents.
Researchers collected seawater samples worldwide and found that all samples contained polystyrene derivatives. Polystyrene is a plastic found in styrofoam and many household and consumer goods. The scientists then simulated the decomposition of polystyrene in the open ocean. The results of this simulation showed that polystyrene, which begins breaking down at temperatures of 86 °F (30 °C) and higher, breaks down into harmful chemicals, such as bisphenol A (BPA, which can cause reproductive harm in animals), styrene monomer (a suspected carcinogen), and styrene trimer (a by-product of polystyrene).
Plasticizers in microplastics have been linked to abnormal growth and reproductive problems in multiple animal models due to endocrine disruption. Microplastics have also been postulated to cause GI irritation, alteration of the microbiome, disturbance of energy and lipid metabolism, and oxidative stress.
Organic pollutants, such as pesticides, can leach into organisms that ingest microplastics, along with dangerous metals such as lead and cadmium.
Accumulation sites
Plastic debris tends to accumulate at the center of ocean gyres. The North Pacific Gyre, for example, has collected the Great Pacific Garbage Patch, which is now estimated to be one to twenty times the size of Texas (approximately 700,000 to 15,000,000 square kilometers). By some projections, there could eventually be as much plastic as fish in the sea. The patch has a very high level of plastic particulate suspended in the upper water column. In samples taken from the North Pacific Gyre in 1999, the mass of plastic exceeded that of zooplankton (the dominant animal life in the area) by a factor of six.
Midway Atoll, in common with all the Hawaiian Islands, receives substantial amounts of debris from the garbage patch. Ninety percent plastic, this debris accumulates on the beaches of Midway where it becomes a hazard to the bird population of the island.
Garbage patches
Environmental impacts
The litter that is being delivered into the oceans is toxic to marine life and humans. The toxins that are components of plastic include diethylhexyl phthalate, a toxic carcinogen, as well as lead, cadmium, and mercury.
Plankton, fish, and ultimately the human race, through the food chain, ingest these highly toxic carcinogens and chemicals. Consuming fish that contain these toxins can cause an increase in cancer, immune disorders, and birth defects. However, these toxins are not only found in fish but also in staple foods, drinking water, table salt, toothpaste, and other kinds of seafood. These issues have been documented in Indonesia, the second-largest contributor of plastic waste, where stool samples collected from fishermen showed that 50% contained microplastics, at concentrations between 3.33 and 13.99 μg of microplastic per gram of feces.
The majority of the litter near and in the ocean is made up of plastics and is a persistent, pervasive source of marine pollution. In many countries, improper management of solid waste means there is little control of plastic entering the water system. As of 2016, an estimated 5.25 trillion particles of plastic pollution, weighing as much as 270,000 tonnes, were in the ocean. Later studies estimated that the number of plastic particles had increased to between 15 and 51 trillion by 2021. This plastic is carried by ocean currents and accumulates in large vortexes known as ocean gyres. The majority of the gyres become pollution dumps filled with plastic.
Research on floating plastic debris in the ocean was the fastest-growing topic among 56 sustainability topics examined in a study of scientific publishing by 193 countries over 2011 to 2019. Over nine years, global research documenting this phenomenon ballooned from 46 (2011) to 853 (2019) publications.
Marine ecosystems
Concern among experts has grown since the 2000s that some organisms have adapted to live on floating plastic debris, allowing them to disperse with ocean currents and thus potentially become invasive species in distant ecosystems. Marine animals can experience internal injuries, lacerations, infections, starvation, and diminished swimming ability from ingesting plastic or getting entangled in plastic garbage. Additionally, floating plastics aid in the spread of invasive marine organisms, endangering marine biodiversity and the food chain. Research in 2014 in the waters around Australia confirmed a wealth of such colonists, even on tiny flakes, and also found thriving ocean bacteria eating into the plastic to form pits and grooves. These researchers showed that "plastic biodegradation is occurring at the sea surface" through the action of bacteria, and noted that this is congruent with a new body of research on such bacteria. Their finding is also congruent with the other major research undertaken in 2014, which sought to answer the riddle of the overall lack of build-up of floating plastic in the oceans, despite ongoing high levels of dumping. Plastics were found as microfibres in core samples drilled from sediments at the bottom of the deep ocean. The cause of such widespread deep-sea deposition has yet to be determined.
The hydrophobic nature of plastic surfaces stimulates rapid formation of biofilms, which support a wide range of metabolic activities, and drive succession of other micro- and macro-organisms.
Photodegradation of plastics
The garbage patches are one of several oceanic regions where researchers have studied the effects and impact of plastic photodegradation in the neustonic layer of water. Unlike organic debris, which biodegrades, plastic disintegrates into ever smaller pieces while remaining a polymer (without changing chemically). This process continues down to the molecular level. Some plastics decompose within a year of entering the water, releasing potentially toxic chemicals such as bisphenol A, PCBs and derivatives of polystyrene.
As the plastic flotsam photodegrades into smaller and smaller pieces, it concentrates in the upper water column. As it disintegrates, the pieces become small enough to be ingested by aquatic organisms that reside near the ocean's surface. Plastic may become concentrated in neuston, thereby entering the food chain. Disintegration means that much of the plastic is too small to be seen. Moreover, plastic exposed to sunlight and seawater produces greenhouse gases, leading to further environmental impact.
As the plastic particles are primarily found in the pelagic layer of the ocean they experience high levels of photodegradation, which causes the plastics to break down into ever smaller pieces. These pieces eventually become so small that even microorganisms can ingest and metabolize them, converting the plastics into carbon dioxide. In some instances, these microplastics are absorbed directly into a microorganism's biomolecules. However, before reaching this state, any number of organisms could potentially interact with these plastics.
Climate change and air pollution aspects
Plastic pollution and climate change are linked, and their effects compound one another. Gases released as plastic pollutants break down contribute to warming and accelerate the pace of climate change. Plastic also contributes to climate change through the way it is made: fossil fuels are burned to run the machinery that produces plastic, resulting in greenhouse gas emissions. The ocean contains millions of pounds of plastic residue and larger pieces, and it also absorbs a large share of the greenhouse gases produced. The plastics in the oceans emit greenhouse gases while breaking down in the water.
The greenhouse gases produced by the making of plastics make it more difficult for the ocean to trap carbon and help slow climate change. Another way that plastic consumption and pollution increase the rate of climate change is through the incineration of plastic waste, which releases additional toxins into the air that are ultimately taken up by ocean water. The oceans absorb these chemicals, along with small pieces of plastic that were never fully broken down, degrading marine water quality and affecting the ecosystems living in the oceans. The incineration of plastic products also pushes black carbon into the air; black carbon comes from such emissions and is a leading contributor to climate change.
Effects on animals
Plastic waste has reached all the world's oceans. This plastic pollution harms an estimated 100,000 sea turtles and marine mammals and 1,000,000 sea creatures each year. Larger plastics (called "macroplastics") such as plastic shopping bags can clog the digestive tracts of larger animals when consumed and can cause starvation by restricting the movement of food, or by filling the stomach and tricking the animal into thinking it is full. Microplastics, on the other hand, harm smaller marine life. For example, pelagic plastic pieces in the center of the ocean's gyres outnumber live marine plankton, and are passed up the food chain to reach all marine life.
Fishing gear such as nets, ropes, lines, and cages often gets lost in the ocean and can travel large distances, which has negatively impacted many marine organisms such as coral. The fishing gear is made of non-biodegradable plastic, and many different species of coral become tangled in it, which causes them to lose tissue and possibly die.
Plastic pollution has the potential to poison animals, which can then adversely affect human food supplies. Plastic pollution has been described as being highly detrimental to large marine mammals, described in the book Introduction to Marine Biology as posing the "single greatest threat" to them. Some marine species, such as sea turtles, have been found to contain large proportions of plastics in their stomach. When this occurs, the animal typically starves, because the plastic blocks the animal's digestive tract. Sometimes marine mammals are entangled in plastic products such as nets, which can harm or kill them.
Entanglement
Entanglement in plastic debris has been responsible for the deaths of many marine organisms, such as fish, seals, turtles, and birds. These animals get caught in the debris and end up suffocating or drowning. Because they are unable to untangle themselves, they also die from starvation or from their inability to escape predators. Being entangled also often results in severe lacerations and ulcers. It was estimated that at least 267 different animal species have suffered from entanglement and ingestion of plastic debris. It has been estimated that over 400,000 marine mammals perish annually due to plastic pollution in oceans. Marine organisms get caught in discarded fishing equipment, such as ghost nets. Ropes and nets used to fish are often made of synthetic materials such as nylon, making fishing equipment more durable and buoyant. These organisms can also get caught in circular plastic packaging materials, and if the animal continues to grow in size, the plastic can cut into their flesh. Equipment such as nets can also drag along the seabed, causing damage to coral reefs.
Some marine animals find themselves tangled in larger pieces of garbage that cause just as much harm as the barely visible microplastics. Trash that wraps itself around a living organism may cause strangulation or drowning. If the trash gets stuck around a body part that is not vital for airflow, that part may grow with a malformation. Plastic's existence in the ocean becomes cyclical because marine life that is killed by it ultimately decomposes in the ocean, re-releasing the plastics into the ecosystem.
Animals can also become trapped in plastic nets and rings, which can cause death. Plastic pollution affects at least 700 marine species, including sea turtles, seals, seabirds, fish, whales, and dolphins. Cetaceans have been sighted within the patch, which poses entanglement and ingestion risks to animals using the Great Pacific Garbage Patch as a migration corridor or core habitat.
Ingestion
Many animals that live on or in the sea consume flotsam by mistake, as it often looks similar to their natural prey. Plastic debris, when bulky or tangled, is difficult to pass and may become permanently lodged in the digestive tracts of these animals. This is especially true for turtles, whose evolutionary adaptations make it nearly impossible to reject plastic bags, which resemble jellyfish when immersed in water, because a system in their throat stops slippery food from escaping. The lodged plastic blocks the passage of food, causing death through starvation or infection.
Many of these long-lasting pieces end up in the stomachs of marine birds and animals, including sea turtles, and black-footed albatross. This results in obstruction of digestive pathways, which leads to reduced appetite or even starvation. In a 2008 Pacific Gyre voyage, Algalita Marine Research Foundation researchers began finding that fish are ingesting plastic fragments and debris. Of the 672 fish caught during that voyage, 35% had ingested plastic pieces.
With the increased amount of plastic in the ocean, living organisms are now at a greater risk of harm from plastic consumption and entanglement. Approximately 23% of aquatic mammals and 36% of seabirds have experienced the detriments of plastic presence in the ocean. Since as much as 70% of the trash is estimated to be on the ocean floor, and microplastics are only millimeters wide, sea life at nearly every level of the food chain is affected. Animals that feed off the bottom of the ocean risk sweeping microplastics into their systems while gathering food. Smaller marine life such as mussels and worms sometimes mistake plastic for their prey.
Larger animals are also affected by plastic consumption because they feed on fish and indirectly consume the microplastics already trapped inside their prey. Likewise, humans are susceptible to microplastic consumption: people who eat seafood also eat some of the microplastics that were ingested by marine life, with oysters and clams being common vehicles for human microplastic consumption. Animals in the general vicinity of the water are also affected by the plastic in the ocean. Studies have shown that 36% of seabird species consume plastic because they mistake larger pieces of plastic for food. Plastic can cause blockage of the intestines as well as tearing of the stomach and intestinal lining of marine life, ultimately leading to starvation and death.
Some long-lasting plastics end up in the stomachs of marine animals. Plastic attracts seabirds and fish. When marine life consumes plastic, it enters the food chain, and the problem compounds when species that have consumed plastic are themselves eaten by other predators.
Multiple studies have found plastics and microplastics in the stomach contents of marine animals.
The ingestion of large amounts of plastic debris, such as fish nets and ropes, can lead to marine animals' deaths via gastric impaction.
Mammals and fish
A 2021 literature review published in Science identified 1,288 marine species that are known to ingest plastic. Most of these species are fish.
Sea turtles are affected by plastic pollution. Some species are consumers of jellyfish, but often mistake plastic bags for their natural prey. This plastic debris can kill the sea turtle by obstructing the oesophagus. Baby sea turtles are particularly vulnerable, according to a 2018 study by Australian scientists.
Plastics are ingested by various species of whales, such as beaked whales, baleen whales, and sperm whales. They can mistake plastics for food and consume them accidentally when feeding on prey organisms that have gathered near plastics. Plastics can also enter their system if their prey already had synthetic plastic particles in their digestive tract, via bioaccumulation. Large amounts of plastics have been found in the stomachs of beached whales. Plastic debris has been found in the stomachs of sperm whales since the 1970s and has been identified as the cause of death of several whales. In June 2018, more than 80 plastic bags were found inside a dying pilot whale that washed up on the shores of Thailand. In March 2019, a dead Cuvier's beaked whale washed up in the Philippines with 88 lbs of plastic in its stomach. In April 2019, following the discovery of a dead sperm whale off Sardinia with 48 pounds of plastic in its stomach, the World Wildlife Fund warned that plastic pollution is one of the most dangerous threats to sea life, noting that five whales had been killed by plastic over a two-year period.
Some of the tiniest bits of plastic are being consumed by small fish in a part of the pelagic zone called the mesopelagic zone, which is 200 to 1000 metres below the ocean surface and completely dark. Not much is known about these fish, other than that there are many of them. They hide in the darkness of the ocean, avoiding predators, and then swim to the ocean's surface at night to feed. Plastics found in the stomachs of these fish were collected during the Malaspina circumnavigation, a research project that studies the impact of global change on the oceans.
A study conducted by the Scripps Institution of Oceanography showed that the average plastic content in the stomachs of 141 mesopelagic fish across 27 different species was 9.2%. Their estimate for the ingestion rate of plastic debris by these fish in the North Pacific was between 12,000 and 24,000 tonnes per year. The most abundant mesopelagic fish is the lanternfish, which resides in the central ocean gyres, a large system of rotating ocean currents. Since lanternfish serve as a primary food source for the fish that consumers purchase, including tuna and swordfish, the plastics they ingest become part of the food chain. The lanternfish is one of the main bait fish in the ocean, and it eats large amounts of plastic fragments, which in turn makes it less nutritious for other fish to consume.
Another study found bits of plastic outnumber baby fish by seven to one in nursery waters off Hawaii. After dissecting hundreds of larval fish, the researchers discovered that many fish species ingested plastic particles. Plastics were also found in flying fish, which are eaten by top predators such as tunas and most Hawaiian seabirds.
Deep sea animals have been found with plastics in their stomachs. In 2020, deep sea species Eurythenes plasticus was discovered, with one of the samples already having plastics in its gut; it was named to highlight the impacts of plastic pollution.
It was found in 2016–2017 that more than 35% of South Pacific lanternfish had consumed plastic particles. When ingested by the fish, the chemical compounds found in these plastics cannot be digested. This can affect humans, as the lanternfish is a food source for both salmon and tuna. Fish and whales may also mistake the plastic for a food source.
Birds
Plastic pollution does not only affect animals that live solely in oceans; seabirds are also greatly affected. In 2004, it was estimated that gulls in the North Sea had an average of thirty pieces of plastic in their stomachs. Seabirds often mistake trash floating on the ocean's surface for prey. Their food sources have often already ingested plastic debris, thus transferring the plastic from prey to predator. Ingested trash can obstruct and physically damage a bird's digestive system, reducing its digestive ability, and can lead to malnutrition, starvation, and death. Toxic chemicals called polychlorinated biphenyls (PCBs) also become concentrated on the surface of plastics at sea and are released after seabirds eat them. These chemicals can accumulate in body tissues and have serious, even lethal, effects on a bird's reproductive ability, immune system, and hormone balance. Floating plastic debris can produce ulcers and infections and lead to death. Marine plastic pollution can even reach birds that have never been at sea: parents may accidentally feed their nestlings plastic, mistaking it for food. Seabird chicks are the most vulnerable to plastic ingestion since they cannot vomit up their food as adult seabirds can.
Plasticosis is a type of fibrotic disease initially found in one species of bird in 2023.
After the initial observation that many of the beaches in New Zealand had high concentrations of plastic pellets, further studies found that different species of prion ingest the plastic debris. Hungry prions mistook these pellets for food, and these particles were found intact within the birds' gizzards and proventriculi. Pecking marks similar to those made by northern fulmars in cuttlebones have been found in plastic debris, such as styrofoam, on the beaches on the Dutch coast, showing that this species of bird also mistake plastic debris for food.
Of the 1.5 million Laysan albatrosses that inhabit Midway Atoll, nearly all are likely to have plastic in their gastrointestinal tract. Approximately one-third of their chicks die, and many of those deaths are from plastic unwittingly fed to them by their parents. Twenty tons of plastic debris washes up on Midway every year, with five tons ending up in the bellies of albatross chicks. These seabirds choose red, pink, brown, and blue plastic pieces because of similarities to their natural food sources. As a result of plastic ingestion, the digestive tract can be blocked, resulting in starvation. The windpipe can also be blocked, which results in suffocation. The debris can also accumulate in the animal's gut and give it a false sense of fullness, which would also result in starvation. On the shore, thousands of bird corpses can be seen with plastic remaining where the stomach once was. The durability of the plastics is visible among the remains; in some instances, the plastic piles are still present while the bird's corpse has decayed.
Similar to humans, animals exposed to plasticizers can experience developmental defects. Specifically, sheep have been found to have lower birth weights when prenatally exposed to bisphenol A. Exposure to BPA can shorten the distance between the eyes of a tadpole. It can also stall development in frogs and can result in a decrease in body length. In different species of fish, exposure can stall egg hatching and result in a decrease in body weight, tail length, and body length.
A study found that in 1960 less than 5% of seabirds were found to have consumed waste material, while as of August 2015 that figure climbed to about 90%. It is predicted that by 2050, 99% of seabirds will have consumed such materials. Scientists studying the stomach contents of Laysan albatross chicks report a 40% mortality rate before fledging. When the stomach contents were analyzed following necropsies, they were found to contain plastic waste. Not only do plastic pellets used in manufacturing worldwide absorb toxic chemicals such as DDT and PCBs from the water, but they can even leach chemicals such as biphenyl. It is estimated that up to 267 marine species are affected by plastic pollution.
Coral
Lost fish nets, or ghost nets, make up around 46% of what is known as the Great Pacific Garbage Patch and have had a negative impact on many different species of coral, which often become accidentally entangled in these nets. These fishing nets have caused tissue loss, algae growth, and fragmentation of coral. In addition, coral trapped in fishing gear is placed under stress by the unfavorable conditions, which can cause it to break and die off. According to multiple research studies, Tubastraea micranthus appears to be the coral species most impacted by fishing gear in the ocean because of its branches and its ability to grow on top of fishing gear such as nets, ropes, and lines.
Phytoplankton
In 2019 and 2020, week-long studies were conducted in Australia along the Georges River to measure the amount of microplastics. The purpose of these studies was to determine whether phytoplankton living in the river were being affected by the microplastics in the water. The studies included microcosm experiments in which water samples were collected in bottles from the river and then filtered; microplastic solutions were prepared, and phytoplankton were collected from the same river. After the studies were complete, the scientists found very high concentrations of microplastics in the river, which have negatively impacted phytoplankton such as cyanobacteria.
As many different species of phytoplankton are being exposed to microplastics in the Georges River, not only does this impact the lives of the phytoplankton themselves, but also affects other animals in their food chain. Phytoplankton are primary producers; therefore, when microplastics are ingested, other living organisms in the environment that feed on phytoplankton also ingest microplastics.
Fin whales
In the Mediterranean Sea, studies have been performed to determine how the amount of microplastics at the surface of the ocean has affected fin whale populations. In one study, researchers collected samples of microplastics during the day when there was little to no wave action. The plastic pieces collected from the samples were then observed under a microscope to determine their size and whether they were microplastics or mesoplastics. The fin whale population's habitat was then observed, with the zooplankton population and sea surface chlorophyll levels measured within that habitat. The Tyrreno-ROMS model was used to characterize the ocean currents and gyres along with the sea surface temperatures in the fin whales' habitat within the Mediterranean Sea.
The results of the studies indicated that there were high levels of microplastics at the surface of the Mediterranean Sea, which is the fin whales' habitat and the location of their main food source, particularly during the summer months. The results indicate that when fin whales feed at the surface of the ocean, they often accidentally consume microplastics. These microplastics carry toxins and chemicals that can harm fin whales when consumed, as the toxins are then stored in the whales' tissues for long periods of time.
Other
A study from 2019 indicates that the large amounts of plastic in the Great Pacific Garbage Patch could affect the behavior and distribution of some marine animals, as they can act as fish aggregating devices (FAD). FADs can attract feeding cetaceans, thus increasing the risk of being entangled or ingesting additional plastic.
Effects on humans
Nanoplastics can penetrate intestinal tissue in aquatic creatures and can end up in the human food chain via inhalation (breathing) or ingestion (eating), particularly through shellfish and crustaceans. Ingestion of plastics has been associated with a variety of reproductive, carcinogenic, and mutagenic effects. The most well-known organic synthetic compound used in many plastics is bisphenol A (BPA), which has been linked with autoimmune disease and acts as an endocrine-disrupting agent, contributing to reduced male fertility and breast cancer. Phthalate esters, found in food packaging, are also linked to reproductive effects; the toxins from phthalate esters affect the developing male reproductive system. Diethylhexyl phthalate is also suspected of disrupting thyroid function; however, studies are currently inconclusive.
Plastics in the human body can stop or slow down detoxification mechanisms, causing acute toxicity and lethality. They have the potential to affect the central nervous system and reproductive system, although this would be unlikely unless exposure levels were very high and absorption levels were increased. In vitro studies from human cells showed evidence that polystyrene nanoparticles are taken up and can induce oxidative stress and pro-inflammatory responses.
Reduction efforts
Solutions to marine plastic pollution, and to plastic pollution in the environment as a whole, will be intertwined with changes in manufacturing and packaging practices and with a reduction in the use of single-use and short-lived plastic products in particular. Many ideas exist for cleaning up plastic in the oceans, including trapping plastic particles at river mouths before they enter the ocean and cleaning up the ocean gyres.
Collection in the ocean
Plastics pollution in the oceans might be irreversible. Once microplastics enter the marine environment, they are extremely difficult and expensive to remove.
The organization "The Ocean Cleanup" is trying to collect plastic waste from the oceans by nets. There are concerns from harm to some forms of sea organisms, especially neuston.
At TEDxDelft2012, Boyan Slat unveiled a concept for removing large amounts of marine debris from oceanic gyres. Calling his project The Ocean Cleanup, he proposed to use surface currents to let debris drift to collection platforms. Operating costs would be relatively modest and the operation would be so efficient that it might even be profitable. The concept makes use of floating booms that divert rather than catch the debris. This avoids bycatch, while collecting even the smallest particles. According to Slat's calculations, a gyre could be cleaned up in five years' time, amounting to at least 7.25 million tons of plastic across all gyres. He also advocated "radical plastic pollution prevention methods" to prevent gyres from reforming. In 2015, The Ocean Cleanup project was a category winner in the Design Museum's 2015 Designs of the Year awards. A fleet of 30 vessels, including a 32-metre (105-foot) mothership, took part in a month-long voyage to determine how much plastic is present using trawls and aerial surveys.
The organization "everwave" uses special rubbish collection boats in rivers and estuaries to prevent rubbish from entering the world's oceans.
There is also the Ocean Plastic Utilisation Ships System R&D project (OPUSS). The main objective of this project is to make the ocean cleaning process commercially realistic, environmentally efficient, and viable in general. The central idea of the OPUSS project lies in developing a new circular logistics scheme for ocean cleanup, as existing reverse-logistics supply chains are not able to capture the specifics of plastic waste collection out on the ocean. The main target of the project is cleaning the ocean with optimal results in terms of logistics and construction costs, as well as with minimal operating costs.
Plastic-to-fuel conversion strategy
The Clean Oceans Project (TCOP) promotes conversion of the plastic waste into valuable liquid fuels, including gasoline, diesel and kerosene, using plastic-to-fuel conversion technology developed by Blest Co. Ltd., a Japanese environmental engineering company. TCOP plans to educate local communities and create a financial incentive for them to recycle plastic, keep their shorelines clean, and minimize plastic waste.
In 2019, a research group led by scientists at Washington State University found a way to turn plastic waste products into jet fuel.
The company "Recycling Technologies" has also developed a simple process that can convert plastic waste into an oil called Plaxx. The company is led by a team of engineers from the University of Warwick.
Other companies working on a system for converting plastic waste to fuel include GRT Group and OMV.
Policies and legislation
Shortcomings in the existing international policy framework include: "the focus on sea-based sources of marine plastic pollution; the prevalence of soft law instruments; and the fragmentation of the existing international regulatory framework". Four aspects are important for an integrated approach to solve the problem of marine plastic pollution: harmonization of international laws (action example: develop a new global plastics treaty); coherence across national policies; coordination of international organizations (action example: identify a leading coordinating organization (e.g., UN Environment Programme (UNEP)); and science-policy interaction. These shortcomings are often listed as drivers for the advancement of a global plastics treaty. The development of such a treaty is underway as of March 2022 and is expected to conclude by the end of 2024.
In the EU it is estimated that banning the intentional addition of microplastics to cosmetics, detergents, paints, polish and coatings, among others, would reduce emissions of microplastics by about 400,000 tonnes over 20 years.
The trade in plastic waste from industrialized countries to developing countries has been identified as a main cause of marine litter, because countries importing the waste plastics often lack the capacity to process all the material. Therefore, the United Nations has imposed a ban on waste plastic trade unless it meets certain criteria; the new rules governing the global plastic waste trade came into effect in January 2021.
History
Background
Plastic pollution was first documented in central gyres, or rotating ocean currents; observations from the Sargasso Sea were reported in the journal Science in 1972. In 1986, a group of undergraduate students began recording how much plastic they came across from their ship while traveling across the Atlantic Ocean. Their research produced useful long-term data about plastic in the Atlantic Ocean and helped pave the way for Charles Moore's discovery of the Great Pacific Garbage Patch. In addition, the undergraduate students' research helped lead to the coining of the term "microplastics".
Terminology
Microplastics
The term "microplastics" was first used by Richard Thompson in 2004 as he described microplastics to be small pieces of plastic, specifically less than 5 mm, that are found in the ocean and other bodies of water. After Thompson's invention of the term "microplastics", many scientists have conducted research to try to determine the effects that microplastics have in the ocean.
Plastic soup
The term "plastic soup" was coined by Charles J. Moore in 1997, after he found patches of plastic pollution in the North Pacific Gyre between Hawaii and California. This Great Pacific Garbage Patch had previously been described in 1988 by scientists who used the term neuston plastic to describe "The size fraction of plastic debris caught in nets designed to catch surface plankton (hereafter referred to as neuston plastic)", and acknowledged that earlier studies in the 1970s had shown that "neuston plastic is widespread, is most abundant in the central and western North Pacific, and is distributed by currents and winds".
In 2006, Ken Weiss published an article in the Los Angeles Times which was the first to make the public aware of the effects of the Garbage Patch in the Pacific Ocean. In 2009, a group of researchers set out into the Pacific Ocean to determine whether the Great Pacific Garbage Patch was real or a myth. After days at sea, the research group came across hundreds of plastic pieces, forming a soup of microplastics rather than the large pieces of plastic they had expected.
The term is sometimes used to refer only to pollution by microplastics, pieces of plastic less than 5mm in size such as fibres shed from synthetic textiles in laundry: the British National Federation of Women's Institutes passed a resolution in 2017 headlined "End Plastic Soup" but concentrating on this aspect of pollution.
The Amsterdam-based Plastic Soup Foundation is an advocacy group which aims to raise awareness of the problem, educate people, and support the development of solutions.
The Oxford English Dictionary did not include the terms plastic soup, neuston plastic or neustonic plastic, but it defined the term microplastic (or micro-plastic) as "Extremely small pieces of plastic, manufactured as such (in the form of nurdles or microbeads) or resulting from the disposal and breakdown of plastic products and waste", and its illustrative quotations all relate to marine pollution, the earliest being a 1990 reference in the South African Journal of Science: "The mean frequency of micro-plastic particles increased from 491 m−1 of beach in 1984 to 678 m−1 in 1989".
See also
Plastic pollution
Sources
References
Further reading
Water pollution
Plastics and the environment
Pacific Ocean
Articles containing video clips | Marine plastic pollution | [
"Chemistry",
"Environmental_science"
] | 13,393 | [
"Water pollution"
] |
16,321,447 | https://en.wikipedia.org/wiki/Vortex%20lattice%20method | The vortex lattice method (VLM) is a numerical method used in computational fluid dynamics, mainly in the early stages of aircraft design and in aerodynamic education at university level. The VLM models the lifting surfaces of an aircraft, such as a wing, as an infinitely thin sheet of discrete vortices to compute lift and induced drag. The influence of thickness and viscosity is neglected.
VLMs can compute the flow around a wing with rudimentary geometrical definition. For a rectangular wing it is enough to know the span and chord. On the other side of the spectrum, they can describe the flow around a fairly complex aircraft geometry (with multiple lifting surfaces with taper, kinks, twist, camber, trailing edge control surfaces and many other geometric features).
By simulating the flow field, one can extract the pressure distribution or as in the case of the VLM, the force distribution, around the simulated body. This knowledge is then used to compute the aerodynamic coefficients and their derivatives that are important for assessing the aircraft's handling qualities in the conceptual design phase. With an initial estimate of the pressure distribution on the wing, the structural designers can start designing the load-bearing parts of the wings, fin and tailplane and other lifting surfaces. Additionally, while the VLM cannot compute the viscous drag, the induced drag stemming from the production of lift can be estimated. Hence as the drag must be balanced with the thrust in the cruise configuration, the propulsion group can also get important data from the VLM simulation.
Historical background
John DeYoung provides a background history of the VLM in the NASA Langley workshop documentation SP-405.
The VLM is the extension of Prandtl's lifting-line theory, where the wing of an aircraft is modeled as an infinite number of Horseshoe vortices. The name was coined by V.M. Falkner in his Aeronautical Research Council paper of 1946. The method has since then been developed and refined further by W.P. Jones, H. Schlichting, G.N. Ward and others.
Although the computations needed can be carried out by hand, the VLM benefited from the advent of computers for the large amounts of computations that are required.
Instead of only one horseshoe vortex per wing, as in the Lifting-line theory, the VLM utilizes a lattice of horseshoe vortices, as described by Falkner in his first paper on this subject in 1943. The number of vortices used vary with the required pressure distribution resolution, and with required accuracy in the computed aerodynamic coefficients. A typical number of vortices would be around 100 for an entire aircraft wing; an Aeronautical Research Council report by Falkner published in 1949 mentions the use of an "84-vortex lattice before the standardisation of the 126-lattice" (p. 4).
The method is comprehensively described in all major aerodynamics textbooks, such as Katz & Plotkin, Anderson, Bertin & Smith, Houghton & Carpenter, and Drela.
Theory
The vortex lattice method is built on the theory of ideal flow, also known as Potential flow. Ideal flow is a simplification of the real flow experienced in nature, however for many engineering applications this simplified representation has all of the properties that are important from the engineering point of view. This method neglects all viscous effects. Turbulence, dissipation and boundary layers are not resolved at all. However, lift induced drag can be assessed and, taking special care, some stall phenomena can be modelled.
Assumptions
The following assumptions are made regarding the problem in the vortex lattice method:
The flow field is incompressible, inviscid and irrotational. However, small-disturbance subsonic compressible flow can be modeled if the general 3D Prandtl-Glauert transformation is incorporated into the method.
The lifting surfaces are thin. The influence of thickness on aerodynamic forces is neglected.
The angle of attack and the angle of sideslip are both small, small angle approximation.
Method
By the above assumptions the flow field is a conservative vector field, which means that there exists a perturbation velocity potential φ such that the total velocity vector is given by V = V∞ + ∇φ, and that φ satisfies Laplace's equation, ∇²φ = 0.
Laplace's equation is a second-order linear equation, and being so it is subject to the principle of superposition: if φ₁ and φ₂ are two solutions of the linear differential equation, then the linear combination c₁φ₁ + c₂φ₂ is also a solution for any values of the constants c₁ and c₂. As Anderson put it, "A complicated flow pattern for an irrotational, incompressible flow can be synthesized by adding together a number of elementary flows, which are also irrotational and incompressible." Such elementary flows are the point source or sink, the doublet and the vortex line, each being a solution of Laplace's equation. These may be superposed in many ways to create the formation of line sources, vortex sheets and so on. In the vortex lattice method, each such elementary flow is the velocity field of a horseshoe vortex with some strength Γ.
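To make the horseshoe-vortex building block concrete, the following Python sketch evaluates the velocity induced by a single straight vortex filament using the Biot–Savart law, and assembles three such filaments (two trailing legs and one bound segment) into a horseshoe vortex. This is a minimal illustrative sketch with invented function names, not code from any of the references cited above; the semi-infinite trailing legs are approximated here by long finite segments.

import numpy as np

def segment_velocity(p, a, b, gamma):
    # Velocity induced at point p by a straight vortex filament from a to b
    # carrying circulation gamma (Biot-Savart law for a finite segment).
    r1, r2 = p - a, p - b
    cross = np.cross(r1, r2)
    denom = np.dot(cross, cross)
    if denom < 1e-12:                      # p lies (almost) on the filament: no contribution
        return np.zeros(3)
    r0 = b - a
    k = gamma / (4.0 * np.pi * denom)
    return k * cross * (np.dot(r0, r1) / np.linalg.norm(r1)
                        - np.dot(r0, r2) / np.linalg.norm(r2))

def horseshoe_velocity(p, left, right, gamma, wake_length=1.0e3):
    # Horseshoe vortex: bound segment from 'left' to 'right', trailing legs
    # running downstream (+x) and truncated at a large but finite distance.
    far = np.array([wake_length, 0.0, 0.0])
    v = segment_velocity(p, left + far, left, gamma)     # inbound trailing leg
    v += segment_velocity(p, left, right, gamma)         # bound segment
    v += segment_velocity(p, right, right + far, gamma)  # outbound trailing leg
    return v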
Aircraft model
All the lifting surfaces of an aircraft are divided into some number of quadrilateral panels, and a horseshoe vortex and a collocation point (or control point) are placed on each panel. The transverse segment of the vortex is at the 1/4 chord position of the panel, while the collocation point is at the 3/4 chord position. The vortex strength is to be determined. A normal vector is also placed at each collocation point, set normal to the camber surface of the actual lifting surface.
For a problem with N panels, the perturbation velocity at collocation point i is obtained by summing the contributions of all N horseshoe vortices, expressed in terms of an Aerodynamic Influence Coefficient (AIC) matrix multiplying the vector of unknown vortex strengths.
The freestream velocity vector V∞ is given in terms of the freestream speed and the angles of attack and sideslip, α and β.
A Neumann boundary condition is applied at each collocation point, which prescribes that the normal velocity across the camber surface is zero. Alternate implementations may also use the Dirichlet boundary condition directly on the velocity potential.
This is also known as the flow tangency condition. By evaluating the dot products above, the following system of equations results: the normalwash AIC matrix (the AIC matrix projected onto the panel normals) multiplies the vector of vortex strengths, and the right-hand side is formed from the freestream speed and the two aerodynamic angles.
This system of equations is solved for all the vortex strengths Γ. The total force vector and total moment vector about the origin are then computed by summing the contributions of all the forces on all the individual horseshoe vortices, with ρ being the fluid density.
Here, the force on each vortex is evaluated on its transverse (bound) segment, using the perturbation velocity at that segment's center location (not at the collocation point).
The lift and induced drag are obtained from the components of the total force vector F. For the case of zero sideslip these are given by L = Fz cos α − Fx sin α and Di = Fx cos α + Fz sin α, with x pointing downstream along the body axis and z upward.
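Putting the steps together, a bare-bones solver would build the normalwash AIC matrix by evaluating each unit-strength horseshoe vortex at every collocation point, enforce flow tangency against the freestream, solve the resulting linear system for the strengths, and sum the Kutta–Joukowski forces on the bound segments. The Python sketch below assumes the horseshoe_velocity helper from the earlier sketch and flat panels described by collocation points, bound-segment endpoints and unit normals; it is an outline of the procedure under those assumptions, not a validated implementation, and the sign of the resulting force depends on how the panels are oriented.

import numpy as np

def solve_vlm(colloc, bound_left, bound_right, normals, v_inf, rho=1.225):
    # colloc, bound_left, bound_right, normals: (n, 3) arrays describing the panels.
    n = len(colloc)
    aic = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Normalwash at collocation point i due to unit-strength horseshoe vortex j.
            v_ij = horseshoe_velocity(colloc[i], bound_left[j], bound_right[j], 1.0)
            aic[i, j] = np.dot(v_ij, normals[i])

    # Flow tangency: induced normal velocity must cancel the freestream normal velocity.
    rhs = -normals @ v_inf
    gamma = np.linalg.solve(aic, rhs)

    # Kutta-Joukowski force on each bound segment, F = rho * Gamma * (V x dl),
    # with V evaluated at the segment midpoint (freestream plus induced velocity).
    force = np.zeros(3)
    for j in range(n):
        mid = 0.5 * (bound_left[j] + bound_right[j])
        dl = bound_right[j] - bound_left[j]
        v_mid = v_inf + sum(horseshoe_velocity(mid, bound_left[k], bound_right[k], gamma[k])
                            for k in range(n))
        force += rho * gamma[j] * np.cross(v_mid, dl)
    return gamma, force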
Extension to the dynamic case
The preliminary design of airplanes requires unsteady aerodynamic models, usually written in the frequency domain for aeroelastic analyses. Commonly used is the Doublet Lattice Method, where the wing system is subdivided into panels. Each panel has a line of doublets of acceleration potential along its first-quarter-chord line, similar to what is usually done in the Vortex Lattice Method. Each panel has a load point where the lifting force is assumed to be applied and a control point where the aeroelastic boundary condition is enforced. The Doublet Lattice Method evaluated at frequency zero is usually obtained with a Vortex Lattice formulation.
References
External links
http://web.mit.edu/drela/Public/web/avl/
https://github.com/OpenVOGEL
Sources
NASA, Vortex-lattice utilization. NASA SP-405, NASA-Langley, Washington, 1976.
Prandtl. L, Applications of modern hydrodynamics to aeronautics, NACA-TR-116, NASA, 1923.
Falkner. V.M., The Accuracy of Calculations Based on Vortex Lattice Theory, Rep. No. 9621, British A.R.C., 1946.
J. Katz, A. Plotkin, Low-Speed Aerodynamics, 2nd ed., Cambridge University Press, Cambridge, 2001.
J.D. Anderson Jr, Fundamentals of aerodynamics, 2nd ed., McGraw-Hill Inc, 1991.
J.J. Bertin, M.L. Smith, Aerodynamics for Engineers, 3rd ed., Prentice Hall, New Jersey, 1998.
E.L. Houghton, P.W. Carpenter, Aerodynamics for Engineering Students, 4th ed., Edward Arnold, London, 1993.
Lamar, J. E., Herbert, H. E., Production version of the extended NASA-Langley vortex lattice FORTRAN computer program. Volume 1: User's guide, NASA-TM-83303, NASA, 1982
Lamar, J. E., Herbert, H. E., Production version of the extended NASA-Langley vortex lattice FORTRAN computer program. Volume 2: Source code, NASA-TM-83304, NASA, 1982
Melin, Thomas, A Vortex Lattice MATLAB Implementation for Linear Aerodynamic Wing Applications, Royal Institute of Technology (KTH), Sweden, December, 2000
M. Drela, Flight Vehicle Aerodynamics, MIT Press, Cambridge, MA, 2014.
Fluid dynamics
Aerodynamics | Vortex lattice method | [
"Chemistry",
"Engineering"
] | 1,929 | [
"Chemical engineering",
"Aerodynamics",
"Aerospace engineering",
"Piping",
"Fluid dynamics"
] |
16,323,949 | https://en.wikipedia.org/wiki/History%20of%20online%20games | Online games are video games played over a computer network. The evolution of these games parallels the evolution of computers and computer networking, with new technologies improving the essential functionality needed for playing video games on a remote server. Many video games have an online component, allowing players to play against or cooperatively with players across a network around the world.
Background of technologies
The first video and computer games, such as NIMROD (1951), OXO (1952), and Spacewar! (1962), were for one or two players sitting at a single computer, which was being used only to play the game. Later in the 1960s, computers began to support time-sharing, which allowed multiple users to share the use of a computer simultaneously. Systems of computer terminals were created, allowing users to operate the computer from a different room from where the computer was housed. Soon after, modem links further expanded this range so that users did not have to be in the same building as the computer; terminals could connect to their host computers via dial-up or leased telephone lines. With the increased remote access, host-based games were created, in which users on remote systems connected to a central computer to play single-player, and soon after, multiplayer games.
Later, in the 1970s, packet-based computer networking technology began to mature. Between 1973 and 1975, Xerox PARC developed local area networks based on Ethernet. Additionally, the wide area network ARPANET, which further developed from its 1969 roots, led to the creation of the Internet on January 1, 1983. These LANs and WANs allowed for network games, where the game created and received network packets; systems located across LANs or the Internet could run games with each other in peer-to-peer or client–server models.
PLATO
In the 1960s, Rick Bloome implemented Spacewar! as a two-player game on PLATO.
In the early 1970s, the PLATO time-sharing system, created by the University of Illinois and Control Data Corporation, allowed students at several locations to use online lessons in one of the earliest systems for computer-aided instruction. In 1972, PLATO IV terminals with new graphics capabilities were introduced, and students started using this system to create multiplayer games. By 1978, PLATO had multiplayer interactive graphical dungeon crawls, air combat (Airfight), tank combat, space battles (Empire and Spasim), with features such as interplayer messaging, persistent game characters, and team play for at least 32 simultaneous players.
Networked host-based systems
A key goal of early network systems such as ARPANET and JANET was to allow users of "dumb" text-based terminals attached to one host computer (or, later, to terminal servers) to interactively use programs on other host computers. This meant that games on those systems were accessible to users in many different locations by the use of programs such as telnet.
Most of the early host-based games were single-player, and frequently originated and were primarily played at universities. A sizable proportion was written on DEC-20 mainframes, as those had a strong presence in the university market. Games such as The Oregon Trail (1971), Colossal Cave Adventure (1976), and Star Trek (1972) were very popular, with several or many students each playing their own copy of the game at once, time-sharing the system with each other and users running other programs.
Eventually, though, multiplayer host-based games on networked computers began to be developed. One of the most important of these was MUD (1978), a program that spawned a genre and had significant input into the development of concepts of shared world design, having a formative impact on the evolution of MMORPGs. In 1984, MAD debuted on BITNET; this was the first MUD fully accessible from a worldwide computer network. During its two-year existence, 10% of the sites on BITNET connected to it. In 1988, another BITNET MUD named MUDA appeared. It lasted for five years, before going offline due to the retirement of the computers it ran on.
In the summer of 1973, Maze War was first written at NASA's Ames Research Center in California by high school summer interns using Imlac PDS-1 computers. The authors added two-player capability by connecting two IMLAC computers with serial cables. Since two computers were involved, as opposed to "dumb terminals", they could use formatted protocol packets to send information to each other, so this could be considered the first peer-to-peer computer video game. It could also be called the first first-person shooter.
In 1983, Gary Tarolli wrote a flight simulator demonstration program for Silicon Graphics workstation computers. In 1984, networking capabilities were added by connecting two machines using serial cables just as had been done with the IMLACs for Mazewar at NASA eleven years earlier. Next, XNS support was added, allowing multiple stations to play over an Ethernet, just as with the Xerox version of Mazewar. In 1986, UDP support was added (port 5130), making SGI Dogfight the first game to ever use the Internet protocol suite. The packets used, though, were broadcast packets, which meant that the game was limited to a single network segment; it could not cross a router, and thus could not be played across the Internet. Around 1989, IP multicast capability was added, and the game became playable between any compatible hosts on the Internet, assuming that they had multicast access (which was quite uncommon). The multicast address is 224.0.1.2, making this only the third multicast application (and the first game) to receive an address assignment, with only the VMTP protocol (224.0.1.0) and the Network Time Protocol (224.0.1.1) having arrived earlier.
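The networking distinction described above, broadcast packets that never leave a network segment versus multicast packets that routers can forward to subscribed hosts, can be illustrated with a short, generic receiver. The Python sketch below joins the multicast group quoted above; it is an illustrative sketch only, not code from SGI Dogfight, and the group address and port are simply the values mentioned in the text.

import socket
import struct

GROUP = "224.0.1.2"   # multicast address mentioned above
PORT = 5130           # UDP port mentioned above

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group asks the kernel (and, via IGMP, the local routers) to deliver
# traffic addressed to 224.0.1.2 to this host. A plain broadcast, by contrast,
# is never forwarded by a router and so stays on one network segment.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(4096)
print(f"received {len(data)} bytes from {addr}")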
In 1989, Sierra On-Line launched The Sierra Network, fully rolling it out in the US by 1991. As an MS-DOS-based platform, it was groundbreaking as the first subscription service fully dedicated to online gaming. It featured customizable avatars and offered a variety of games under a monthly fee, setting a precedent for modern online gaming communities.
In May 1993, Sega of Japan demonstrated an online version of arcade OutRunners, allowing up to eight players to play the game across two different cities in Japan. It was the first online arcade game to be demonstrated, with two separate OutRunner four-player cabinets connected in Tokyo and Osaka via an Integrated Services Digital Network (ISDN) operated by Nippon Telegraph and Telephone (NTT). Sega announced plans for a Japanese release in July 1993. A month later in June 1993, AT&T announced plans with Sega of America to introduce a similar online console gaming system for the Sega Genesis.
X Window System games
In 1986, MIT and DEC released the X Window System, which provided two important capabilities for game development. Firstly, it provided a widely deployed graphics system for workstation computers on the Internet. A number of workstation graphics systems existed, including Bell Labs' BLIT, SGI's IRIS GL, Carnegie Mellon's Andrew Project, and Sun's NeWS, but X managed over time to secure cross-platform dominance, becoming available for systems from nearly all workstation manufacturers, and, coming from MIT, it had particular strength in the academic arena. Since Internet games were being written mostly by college students, this was critical.
Secondly, X had the capability of using computers as thin clients, allowing a personal workstation to use a program that was actually being run on a much more powerful server computer exactly as if the user were sitting at the server computer. While remote control programs such as VNC allow similar capabilities, X incorporates this at the operating system level, allowing for much more tightly integrated functionality than these later solutions provide; multiple applications running on different servers can display individual windows. For example, a word processor running on one server, a mail reader running on the workstation itself, and a game running on yet another server could each display their own windows, and all applications would be using native graphics calls.
This meant that starting in the summer of 1986, a class of games began to be developed which relied on a fast host computer running the game and "throwing" X display windows, using personal workstation computers to remotely display the game and receive user input. Since X can use multiple networking systems, games based on remote X displays are not Internet-only games; they can be played over DECnet and other non-TCP/IP network stacks.
Xtrek
The first of these remote display games was Xtrek. Based on a PLATO system game, Empire, Xtrek is a 2D multiplayer space battle game loosely set in the Star Trek universe. This game could be played across the Internet, probably the first graphical game that could do so, a few months ahead of the X version of Maze War. Importantly, however, the game itself was not aware that it was using a network. In a sense, it was a host-based game: the program ran only on a single computer and knew only about the X Window System, while the window system took care of the networking; essentially one computer displaying on several screens.
Fully network-aware games
The X version of Maze War, on the other hand, was peer-to-peer and used the network directly, with a copy of the program running on each computer in the game, instead of only a single copy running on a server. Netrek (originally called Xtrek II) was a fully network-aware client–server rewrite of Xtrek. Other remote X display-based games include xtank, xconq, xbattle and XPilot (1991). By 1989 Simson Garfinkel reported that on MIT's Project Athena, "Games like 'X-tank' and 'X-trek' let students at different workstations command tanks and starships, fire missiles at each other as fast as they can hit the buttons on their mice, and watch the results on their graphics displays". Observers estimated that up to one third of Athena usage was for games.
Commercial timesharing services
As time-sharing technology matured, it became practical for companies with excess capacity on their expensive computer systems to sell that capacity. Service bureaus such as Tymshare (founded 1966) dedicated to selling time on a single computer to multiple customers sprang up. The customers were typically businesses that did not have the need or money to purchase and manage their own computer systems.
In 1979, two time-sharing companies, The Source and CompuServe, began selling access to their systems to individual consumers and small businesses; this was the beginning of the era of online service providers. While an initial focus of service offerings was the ability for users to run their own programs, over time applications such as online chat, electronic mail, BBSs, and games became the dominant uses of the systems. For many people, these, rather than the academic and commercial systems available only at universities and technical corporations, were their first exposure to online gaming.
In 1984, CompuServe debuted Islands of Kesmai, the first commercial multiplayer online role-playing game. Islands of Kesmai used scrolling text (ASCII graphics) on the screen to draw maps of player location, depict movement, and so on; the interface is considered Roguelike. At some point, graphical overlay interfaces could be downloaded, putting a slightly more glitzy face on the game. Playing cost was the standard CompuServe connection fee of the time, $6 per hour with a 300 baud modem or $12 per hour with a 1200 baud modem; the game processed one command every 10 seconds, which at the lower rate works out to roughly 1.7 cents per command.
The LINKS was an online network launched for the MSX in Japan in 1986. It featured several graphical multiplayer online games, including T&E Soft's Daiva Dr. Amandora and Super Laydock, Telenet Japan's Girly Block, and Bothtec's Dires. It also featured several downloadable games, including Konami's A1 Grand Prix and Network Rally.
Habitat was the first attempt at a large-scale commercial virtual community that was graphically based. Habitat was not a 3D environment and did not incorporate immersion techniques. It is considered a forerunner of the modern MMORPGs and was quite unlike other online communities of the time (i.e. MUDs and MOOs with text-based interfaces). Habitat had a GUI and a large base of consumer-oriented users, and those elements in particular have made it a much-cited project. When Habitat was shut down in 1988, it was succeeded by a scaled-down but more sophisticated game called Club Caribe.
In 1987, Nintendo president Hiroshi Yamauchi partnered with Nomura Securities on the development of the Family Computer Network System for the Family Computer in Japan. Led by Masayuki Uemura, Nintendo Research & Development 2 developed the modem hardware, and Nomura Securities developed the client and server software and the information database. Five network-enabled games were developed for the system, including a graphical, competitive online multiplayer version of Yamauchi's favorite classic, Go.
In 1987, Kesmai (the company which developed Islands of Kesmai) released Air Warrior on GEnie. It was a graphical flight simulator/air combat game, initially using wire frame graphics, and could run on Apple Macintosh, Atari ST, or Commodore Amiga computers. Over time, Air Warrior was added to other online services, including Delphi, CRIS, CompuServe, America Online, Earthlink, GameStorm and CompuLink. Over time, Kesmai produced many improved versions of the game. In 1997, a backport from Windows to the Macintosh was made available as an open beta on the Internet. In 1999, Kesmai was purchased by Electronic Arts, which started running the game servers itself. The last Air Warrior servers were shut down on December 7, 2001.
In 1988, Federation debuted on Compunet. It was a text-based online game, focused around the interstellar economy of the galaxy in the distant future. Players work their way up a series of ranks, each of which has a slightly more rewarding and interesting but difficult job attached, which culminates in the ownership of one's own "duchy", a small solar system. After some time on GEnie, in 1995 Federation moved to AOL. AOL made online games free in 1996, dropping surcharges to play, and the resulting load caused it to drop online game offerings entirely. IBGames, creators of Federation, started offering access to the game through its own website, making it perhaps the first game to transition off of an online service provider. IBGames kept the game operational until 2005 after most of the player base transitioned to the sequel, 2003's Federation II.
In 1990, Sega launched the online multiplayer gaming service Sega Meganet for the Mega Drive (Genesis) video game console. Sega continued to provide online gaming services for its later consoles, including the Sega NetLink service for the Sega Saturn and the SegaNet service for the Dreamcast. In 1995, Nintendo, in partnership with St.GIGA, released the Satellaview, a satellite modem for the Super Famicom available only in Japan, which gave the console online multiplayer gaming. In 1999, Nintendo released a Japan-only add-on for the Nintendo 64 called the 64DD, which offered Internet access through a now-defunct dedicated online service for e-commerce, online gaming, and media sharing. The late 1990s saw an explosion of MMORPGs, including Nexus: The Kingdom of the Winds (1996), Ultima Online (1997), Lineage (1998), and EverQuest (1999).
In 2000, Sony introduced online multiplayer to the PlayStation 2, the company's first console to offer it; as with many major consoles to come, online play would become a norm in the industry. In 2001, Nintendo introduced online multiplayer to the GameCube using the Broadband Adapter and Modem Adapter add-ons, but the console trailed both the upcoming Xbox and the PlayStation 2 in sales and online impact. Later in 2001, Microsoft released the Xbox, which through Xbox Live offered online multiplayer and other Internet capabilities, and it continued to do so for its later consoles, the Xbox 360 and the Xbox One. In 2006, Nintendo released the Wii, which offered online multiplayer gaming and other Internet capabilities over Nintendo Wi-Fi Connection and WiiConnect24, respectively. Both services were shut down on May 20, 2014, along with the online capabilities of any games that used them, such as Mario Kart Wii (2008). The same year the Wii hit store shelves, Sony introduced the PlayStation 3, which used the new PlayStation Network (PSN) for online multiplayer gaming and other Internet capabilities, and the service continued on later consoles such as the PlayStation 4. In 2012, Nintendo created the Nintendo Network as a successor to the Nintendo Wi-Fi Connection for its next-generation console, the Wii U, and its handheld counterpart, the Nintendo 3DS, continuing its online multiplayer and Internet capabilities in competition with Microsoft's Xbox Live and Sony's PlayStation Network. Nintendo's latest console, the Nintendo Switch, also offers online play.
See also
History of arcade video games
History of massively multiplayer online games
History of mobile games
History of video games
Online game
Multiplayer video game
References
Mainframe games
Online games
Multi-user dungeon | History of online games | [
"Technology"
] | 3,642 | [
"History of video games",
"History of computing"
] |
16,326,506 | https://en.wikipedia.org/wiki/Raising%20of%20Chicago | During the 1850s and 1860s, engineers carried out a piecemeal raising of the grade of central Chicago to lift the city out of its low-lying swampy ground. Buildings and sidewalks were physically raised on jackscrews. The work was funded by private property owners and public funds.
Overview
During the 19th century, the elevation of the Chicago area was little higher than the shoreline of Lake Michigan. For two decades following the city's incorporation, drainage from the city surface was inadequate, resulting in large bodies of standing and pathogenic water. These conditions caused numerous epidemics, including typhoid fever and dysentery, which blighted Chicago six years in a row culminating in the 1854 outbreak of cholera that killed six percent of the city’s population.
The crisis forced the city's engineers and aldermen to take the drainage problem seriously and after many heated discussions—and following at least one false start—a solution eventually materialized. In 1856, engineer Ellis S. Chesbrough drafted a plan for the installation of a citywide sewerage system and submitted it to the Common Council, which adopted the plan. Workers then laid drains, covered and refinished roads and sidewalks with several feet of soil, and raised most buildings on screwjacks to the new grade.
Many of the city's old wooden buildings were considered not worth raising, so instead their owners had them either demolished or placed on rollers and moved to the outskirts of Chicago. Business activities in such buildings continued even as they were being moved.
Raisings of buildings
Earliest raising of a brick building
In January 1858, the first masonry building in Chicago to be thus raised—a four-story, , 750-ton (680 metric tons) brick structure situated at the north-east corner of Randolph Street and Dearborn Street—was lifted on two hundred jackscrews to its new grade, which was higher than the old one, “without the slightest injury to the building.” It was the first of more than fifty comparably large masonry buildings to be raised that year. The contractor was an engineer from Boston, James Brown, who went on to partner with Chicago engineer James Hollingsworth; Brown and Hollingsworth became the first and, it seems, the busiest building raising partnership in the city. By the year-end, they were lifting brick buildings more than long, and the following spring they took the contract to raise a brick block of more than twice that length.
The Row on Lake Street
In 1860, a consortium of no fewer than six engineers—including Brown, Hollingsworth and George Pullman—co-managed a project to raise half a city block on Lake Street, between Clark Street and LaSalle Street completely and in one go. This was a solid masonry row of shops, offices, printeries, etc., long, comprising brick and stone buildings, some four stories high, some five. It had a footprint taking up almost of space, and an estimated total weight—including hanging sidewalks—of 35,000 tons. Businesses operating in these premises were not closed down during the operation; as the buildings were being raised, people came, went, shopped and worked in them as they would ordinarily do. In five days the entire assembly was elevated , by a team consisting of six hundred men using six thousand jackscrews, which made it ready for new foundation walls to be built underneath. The spectacle drew crowds of thousands, who were, on the final day, permitted to walk at the old ground level, among the jacks.
The Tremont House
The following year the consortium of engineers Ely, Smith and Pullman led a team that raised the Tremont House hotel on the south-east corner of Lake Street and Dearborn Street. This six-story brick building was luxuriously appointed, and had an area of over . Once again business as usual was maintained as this large hotel ascended. Some of the guests staying there at the time—among whose number were several VIPs and a US Senator— were oblivious to the process as five hundred men worked under covered trenches operating their five thousand jackscrews. One patron was puzzled to note that the front steps leading from the street into the hotel were becoming steeper every day, and that when he checked out, the windows were several feet above his head, whereas before they had been at eye level. This hotel building, which until just the previous year had been the tallest building in Chicago, was raised without incident.
The Robbins Building
On the corner of South Water Street and Wells Street stood the Robbins Building, an iron building long, wide and five stories high. This was a very heavy building; its ornate iron frame, its twelve-inch (305 mm) thick masonry wall filling, and its “floors filled with heavy goods” made for a weight estimated at 27,000 tons (24,000 metric tons), a large load to raise over a relatively small area. Hollingsworth and Coughlin took the contract, and in November 1865 lifted not only the building but also the of stone sidewalk outside it. The complete mass of iron and masonry was raised , “without the slightest crack or damage.”
Hydraulic raising of the Franklin House
In 1860 the Franklin House, a four story brick building on Franklin Street, was raised with hydraulic apparatus by the engineer John C. Lane, of the Lane and Stratton partnership of San Francisco. Californian engineers had been using hydraulic jacks to raise brick buildings in and around San Francisco as early as 1853.
Relocated buildings
Many of central Chicago’s hurriedly-erected wooden frame buildings were now considered inappropriate to the burgeoning and increasingly wealthy city. Rather than raise them several feet, proprietors often preferred to relocate these old frame buildings, replacing them with new masonry blocks built to the latest grade. Consequently, the practice of putting the old multi-story, intact and furnished wooden buildings—sometimes entire rows of them en bloc—on rollers and moving them to the outskirts of town or to the suburbs was so common as to be considered nothing more than routine traffic.
Traveller David Macrae wrote, “Never a day passed during my stay in the city that I did not meet one or more houses shifting their quarters. One day I met nine. Going out Great Madison Street in the horse cars we had to stop twice to let houses get across.” The function for which such a building had been constructed would often be maintained during the move, with people dining, shopping and working in these buildings as they were rollered down the street. Brick buildings also were moved from one location to another, and in 1866, the first of these—a brick building of two and a half stories—made the short move from Madison Street out to Monroe Street. Later, many other much larger brick buildings were rolled much greater distances across Chicago.
See also
Regrading in Seattle
Seattle Underground
Underground Atlanta
References
External links
The Lifting of Chicago: Source Documents Primary Document Sources.
Raising Chicago: An Illustrated History
History of Chicago
Sewerage
Civil engineering
Building engineering
1850s in Illinois
1860s in Illinois | Raising of Chicago | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,433 | [
"Building engineering",
"Water pollution",
"Sewerage",
"Construction",
"Civil engineering",
"Environmental engineering",
"Architecture"
] |
4,243,069 | https://en.wikipedia.org/wiki/10199%20Chariklo | 10199 Chariklo is the largest confirmed centaur, a class of minor planet in the outer Solar System. It orbits the Sun between Saturn and Uranus, grazing the orbit of Uranus. On 26 March 2014, astronomers announced the discovery of two rings (nicknamed Oiapoque and Chuí after the rivers that define Brazil's borders) around Chariklo by observing a stellar occultation, making it the first minor planet known to have rings.
A photometric study in 2001 was unable to find a definite period of rotation. Infrared observations of Chariklo indicate the presence of water ice, which may in fact be located in its rings.
Discovery and naming
Chariklo was discovered by James V. Scotti of the Spacewatch program on February 15, 1997. Chariklo is named after the nymph Chariclo (), the wife of Chiron and the daughter of Apollo.
A symbol derived from that for 2060 Chiron, , was devised in the late 1990s by German astrologer Robert von Heeren. It replaces Chiron's K with a C for Chariklo.
Size and shape
Chariklo is currently the largest known centaur, with a volume-equivalent diameter of about 250 km. Its shape is probably elongated, with dimensions of 287.6 × 270.4 × 198.2 km. is likely to be the second largest with and 2060 Chiron is likely to be the third largest with .
Orbit
Centaurs originated in the Kuiper belt and are in dynamically unstable orbits that will lead to ejection from the Solar System, an impact with a planet or the Sun, or transition into a short-period comet.
The orbit of Chariklo is more stable than those of Nessus, Chiron, and Pholus. Chariklo lies within 0.09 AU of the 4:3 resonance of Uranus and is estimated to have a relatively long orbital half-life of about 10.3 Myr. Orbital simulations of twenty clones of Chariklo suggest that Chariklo will not start to regularly come within 3 AU (450 Gm) of Uranus for about thirty thousand years.
During the perihelic oppositions of 2003–04, Chariklo had an apparent magnitude of +17.7. , Chariklo was 14.8 AU from the Sun.
Rings
A stellar occultation in 2013 revealed that Chariklo has two rings with radii 386 and 400 km and widths of about 6.9 km and 0.12 km respectively. The rings are approximately 14 km apart. This makes Chariklo the smallest known object to have rings. These rings are consistent with an edge-on orientation in 2008, which can explain Chariklo's dimming before 2008 and brightening since. Nonetheless, the elongated shape of Chariklo explains most of the brightness variability resulting in darker rings than previously determined. Furthermore, the rings can explain the gradual disappearance of the water-ice features in Chariklo's spectrum before 2008 and their reappearance thereafter if the water ice is in Chariklo's rings.
The existence of a ring system around a minor planet was unexpected because it had been thought that rings could only be stable around much more massive bodies. Ring systems around minor bodies had not previously been discovered despite the search for them through direct imaging and stellar occultation techniques. Chariklo's rings should disperse over a period of at most a few million years, so either they are very young, or they are actively contained by shepherd moons with a mass comparable to that of the rings. However, other research suggests that Chariklo's elongated shape combined with its fast rotation can clear material in an equatorial disk through Lindblad resonances and explain the survival and location of the rings, a mechanism valid also for the ring of Haumea.
The team nicknamed the rings Oiapoque (the inner, more substantial ring) and Chuí (the outer ring), after the two rivers that form the northern and southern coastal borders of Brazil. A request for formal names will be submitted to the IAU at a later date.
It has since been suggested that 2060 Chiron may have a similar pair of rings.
Exploration
Camilla is a mission concept published in June 2018 that would launch a robotic probe to perform a single flyby of Chariklo and drop off an impactor made of tungsten to excavate a crater approximately deep for remote compositional analysis during the flyby. The mission would be designed to fit under the cost cap of NASA's New Frontiers program, although it has not been formally proposed to compete for funding. The spacecraft would be launched in September 2026, using a gravity assist from Venus in February 2027 and Earth gravity assists in December 2027 and 2029 to accelerate it out toward Jupiter.
See also
2060 Chiron
References
External links
37th DPS: Albedos, Diameters (and a Density) of Kuiper Belt and Centaur Objects
Chariklo Photo (February 1999)
Chariklo's orbit between Saturn and Uranus.
Demonstration of how centaur 10199 Chariklo is currently controlled by Uranus (Solex 10)
Centaurs (small Solar System bodies)
Discoveries by James V. Scotti
010199
Named minor planets
010199
19970215
Solar System | 10199 Chariklo | [
"Astronomy"
] | 1,093 | [
"Outer space",
"Solar System"
] |
4,243,165 | https://en.wikipedia.org/wiki/Netherlands%20Environmental%20Assessment%20Agency | The Netherlands Environmental Assessment Agency ( - abbr. PBL) is a Dutch research institute that advises the Dutch government on environmental policy and regional planning issues. Operating as an autonomous entity within the Dutch Government organization, specifically under the Ministry of Infrastructure and Water Management. While primarily associated with the Ministry of Infrastructure and Water Management, PBL's expertise is also sought by other government departments, including the Ministry of Economic Affairs, the Ministry of the Interior and Kingdom Relations, the Ministry of Agriculture, Fisheries, Food Security and Nature, and the Ministry of Foreign Affairs. The research fields include sustainable development, energy and climate change, biodiversity, transport, land use, and air quality. It is one of three applied policy research institutes of the Dutch government, the other two being Centraal Planbureau (CPB), and The Netherlands Institute for Social Research (SCP). Since January 2023 Marko Hekkert is director of the Netherlands Environmental Assessment Agency.
History
The PBL was created on May 15, 2008, by merging the Netherlands Environmental Agency () (MNP) with the Netherlands Institute for Spatial Research () (RPB). The English name for the new organization was borrowed from the MNP, which was part of the Netherlands National Institute for Public Health and the Environment (RIVM) until May 1, 2005. It is currently an agency of the Dutch Ministry of Infrastructure and Water Management (IenW, Ministerie van Infrastructuur en Waterstaat).
The Netherlands Environmental Assessment Agency is located in The Hague and employs approximately 250 people.
Core Tasks
The primary functions of the PBL Netherlands Environmental Assessment Agency encompass:
Conducting assessments and reporting on the current state of environmental, ecological, and spatial attributes, as well as appraising the effectiveness of related policy measures.
Investigating potential social developments that may impact environmental, ecological, and spatial conditions, and appraising future policy directions.
Bringing to the forefront societal challenges that could affect environmental, ecological, and spatial quality for public discourse.
Proposing strategic approaches to fulfill governmental goals in environmental protection, nature conservation, and spatial planning sectors.
See also
Environment of the Netherlands
National Institute for Public Health and the Environment
Climate change in the Netherlands
The agency publishes the GLOBIO Model, designed to quantify human impacts on biodiversity at large (regional to global) scales.
References
External links
Netherlands Environmental Assessment Agency
Research institutes in the Netherlands
Environment of the Netherlands
Government agencies of the Netherlands
Environmental research institutes
Environmental agencies | Netherlands Environmental Assessment Agency | [
"Environmental_science"
] | 503 | [
"Environmental research institutes",
"Environmental research"
] |
4,243,182 | https://en.wikipedia.org/wiki/Silver%20telluride | Silver telluride (Ag2Te) is a chemical compound, a telluride of silver, also known as disilver telluride or silver(I) telluride. It forms a monoclinic crystal. In a wider sense, silver telluride can be used to denote AgTe (silver(II) telluride, a metastable compound) or Ag5Te3.
Silver(I) telluride occurs naturally as the mineral hessite, whereas silver(II) telluride is known as empressite.
Silver telluride is a semiconductor which can be doped both n-type and p-type. Stoichiometric Ag2Te has n-type conductivity. On heating silver is lost from the material.
Non-stoichiometric silver telluride has shown extraordinary magnetoresistance.
Synthesis
Porous silver telluride (AgTe) can be synthesized by electrochemical deposition. The experiment can be performed at room temperature using a potentiostat and a three-electrode cell with 200 mL of 0.5 M sulfuric acid electrolyte containing Ag nanoparticles. The silver paste used to attach the tungsten ditelluride (WTe2) leaches into the electrolyte, causing small amounts of Ag to dissolve. The electrolyte is stirred with a magnetic bar to remove hydrogen bubbles. A silver–silver chloride electrode and a platinum wire serve as the reference and counter electrodes. All potentials are measured against the reference electrode, which is calibrated using the equation E_RHE = E_Ag/AgCl + 0.059 × pH + 0.197. To grow the porous AgTe, the WTe2 is treated with multiple cyclic voltammetry sweeps between −1.2 and 0 V at a scan rate of 100 mV/s.
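For readers unfamiliar with reference-electrode conversion, the calibration above can be applied as follows. This is a minimal illustrative sketch; the pH of about 0.3 assumed for 0.5 M sulfuric acid is an assumption for the example, not a value from the source:

def to_rhe(e_vs_ag_agcl, ph):
    # Convert a potential measured against Ag/AgCl to the RHE scale,
    # using the calibration given above: E_RHE = E_Ag/AgCl + 0.059*pH + 0.197
    return e_vs_ag_agcl + 0.059 * ph + 0.197

# Example: the -1.2 V lower sweep limit, assuming pH ~0.3 for 0.5 M H2SO4
print(round(to_rhe(-1.2, 0.3), 3))  # about -0.985 V vs. RHE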
Glutathione-coated Ag2Te nanoparticles can be synthesized by preparing a 9 mL solution containing 10 mM AgNO3, 5 mM Na2TeO3, and 30 mM glutathione, and placing it in an ice bath. N2H4 is added to the solution and the reaction is allowed to proceed for 5 min under constant stirring. The nanoparticles are then washed three times by centrifugation; after the three washes they are suspended in PBS and washed once more by the same method.
References
Hagyeong Kwon, Dongyeon Bae, Dongyeun Won, Heeju Kim, Gunn Kim, Jiung Cho, Hee Jung Park, Hionsuck Baik, Ah Reum Jeong, Chia-Hsien Lin, Ching-Yu Chiang, Ching-Shun Ku, Heejun Yang, and Suyeon Cho "Nanoporous Silver Telluride for active hydrogen evolution." (n.d.) https://pubs.acs.org/doi/10.1021/acsnano.0c09517
See also
Hessite
Empressite
Sylvanite
Related materials
Silver selenide
Silver sulfide
Silver compounds
Tellurides
Semiconductor materials
Non-stoichiometric compounds | Silver telluride | [
"Chemistry"
] | 664 | [
"Non-stoichiometric compounds",
"Semiconductor materials",
"Inorganic compounds",
"Inorganic compound stubs"
] |
4,243,203 | https://en.wikipedia.org/wiki/Russula%20emetica | Russula emetica, commonly known as the sickener, emetic russula, or vomiting russula, is a basidiomycete mushroom, and the type species of the genus Russula. It has a red, convex to flat cap up to in diameter, with a cuticle that can be peeled off almost to the centre. The gills are white to pale cream, and closely spaced. A smooth white stem measures up to long and thick. First described in 1774, the mushroom has a wide distribution in the Northern Hemisphere, where it grows on the ground in damp woodlands in a mycorrhizal association with conifers, especially pine.
The mushroom's common names refer to the gastrointestinal distress which it causes when consumed raw. The flesh is extremely peppery, but this offensive taste, along with its toxicity, can be removed by parboiling or pickling. Although it used to be widely eaten in Russia and eastern European countries, it is generally not recommended for consumption. There are many similar Russula species that have a red cap with white stem and gills, some of which can be reliably distinguished from R. emetica only by microscopic characteristics.
Taxonomy
Russula emetica was first officially described as Agaricus emeticus by Jacob Christian Schaeffer in 1774, in his series on fungi of Bavaria and the Palatinate, Fungorum qui in Bavaria et Palatinatu circa Ratisbonam nascuntur icones. Christian Hendrik Persoon placed it in its current genus Russula in 1796, where it remains. According to the nomenclatural database MycoBank, Agaricus russula is a synonym of R. emetica that was published by Giovanni Antonio Scopoli in 1772, two years earlier than Schaeffer's description. However, this name is unavailable as Persoon's name is sanctioned. Additional synonyms include Jean-Baptiste Lamarck's Amanita rubra (1783), and Augustin Pyramus de Candolle's subsequent new combination Agaricus ruber (1805). The specific epithet is derived from the Ancient Greek emetikos/εμετικος 'emetic' or 'vomit-inducing'. Similarly, its common names of sickener, emetic russula, and vomiting russula also refer to this attribute.
Russula emetica is the type species of the genus Russula. According to Rolf Singer's infrageneric classification of Russula, it is also the type of the section Russula. In an alternative classification proposed by Henri Romagnesi, it is the type species of subsection Emeticinae. A molecular analysis of European Russula species determined that R. emetica groups in a clade with R. raoultii, R. betularum, and R. nana; a later analysis confirmed the close phylogenetic relationship between R. emetica and the latter two Russulas.
Description
The sticky cap of R. emetica is wide, with a shape ranging from convex (in young specimens) to flattened, sometimes with a central depression, and sometimes with a shallow umbo. It is a bright scarlet or cherry red, and in maturity, the margins have fine radial grooves extending towards the center of the cap. The cuticle can be readily peeled from the cap almost to the centre. The brittle flesh is white (or tinged with red directly under the cap cuticle), measures thick, and has a very sharp and peppery taste. Gills are closely spaced, white to creamy-white, and have an attachment to the stem ranging from adnate to adnexed or completely free. They are intervenose (containing cross-veins in the spaces between the gills) and occasionally forked near the cap margin. Fruit bodies have a slightly fruity or spicy smell.
The white stem measures long by thick, and is roughly the same width throughout its length, although it can be a bit thicker near the base. Its surface is dry and smooth, sometimes marked by faint longitudinal grooves. It is either stuffed (filled with a cottony pith) or partially hollow, and lacks a ring or partial veil.
Russula emetica produces a white to yellowish-white spore print. Spores are roughly elliptical to egg-shaped, with a strongly warted and partially reticulate (web-like) surface. They have dimensions of 8.8–11.0 by 6.6–8 μm, and are amyloid, meaning that they will stain blue, bluish-grey, to blackish in Melzer's reagent. Basidia (spore-bearing cells) are club-shaped, four-spored, hyaline (translucent), and measure 32.9–50 by 9.0–11.6 μm. Cystidia located on the gill face (pleurocystidia) are somewhat cylindrical to club-shaped or somewhat spindle-shaped, and measure 35–88 by 7.3–12.4 μm. They are yellowish, and contain granular contents. Cheilocystidia (found on the edges of the gills), which are similar in shape to the pleurocystidia, are thin-walled, hyaline, and measure 14–24 by 4.4–7.3 μm. Clamp connections are absent from the hyphae.
The red pigments of this and other russulas are water-soluble to some degree, and fruit bodies will often bleach or fade with rain or sunlight; the cap colour of older specimens may fade to pink or orange, or develop white blotches. The main pigment responsible for the red colour of the fruit bodies is called russularhodin, but little is known of its chemical composition.
Similar species
Russula emetica is one of over 100 red-capped Russula species known worldwide. The related beechwood sickener (R. nobilis) is found under beech in Europe. Many, such as the bloody brittlegill (R. sanguinaria), are inedible; this species can be distinguished from R. emetica by the reddish flush in its stem. Among the edible lookalikes, there is R. padulosa, commonly found in Europe and North America. R. aurea has a yellow stem, gills and flesh under its red cap. The edible R. rugulosa—common in mixed woods in the eastern and northern United States—has a wrinkled and pimpled cap cuticle, cream spores, and mild taste. Another inedible species, R. fragilis, has notched gills, and its stem stains blue with naphthol. The uncommon European subspecies R. emetica longipes is distinguished by its longer stem and ochre gills. The paler European mushroom R. betularum, found in coniferous forests and moorland, is sometimes considered a subspecies of R. emetica. R. nana is restricted in distribution to arctic and subarctic highland meadows where dwarf willow (Salix herbacea) or alpine bearberry (Arctostaphylos alpina) are abundant.
Distribution and habitat
Like all species of Russula, R. emetica is mycorrhizal, and forms mutually beneficial partnerships with roots of trees and certain herbaceous plants. Preferred host plants are conifers, especially pines. Fruit bodies grow singly, scattered, or in groups in sphagnum moss near bogs, and in coniferous and mixed forests. The fungus occasionally fruits on humus or on very rotten wood. The mushroom is known from North Africa, Asia and Europe and can be locally very common. There is some doubt over the extent of its range in North America, as some sightings refer to the related R. silvicola; initially the name "Russula emetica" was often applied to any red-capped white Russula. Sightings in Australia are now referred to the similarly coloured R. persanguinea.
A multi-year field study of the growth of R. emetica production in a scots pine plantation in Scotland found that total productivity was 0.24–0.49 million mushrooms per hectare per year (roughly 0.1–0.2 million mushrooms/acre/year), corresponding to a fresh weight of 265–460 kg per hectare per year (49–85 lb/acre/year). Productivity was highest from August to October. The longevity of the mushrooms was estimated to be 4–7 days. In a study of the fungal diversity of ectomycorrhizal species in a Sitka spruce forest, R. emetica was one of the top five dominant fungi. Comparing the frequency of fruit body production between 10-, 20-, 30-, or 40-year-old forest stands, R. emetica was most prolific in the latter.
Toxicity
As its name implies, the sickener is inedible, though not as dangerous as sometimes described in older mushroom guides. The symptoms are mainly gastrointestinal in nature: nausea, diarrhoea, vomiting, and colicky abdominal cramps. These symptoms typically begin half an hour to three hours after ingestion of the mushroom, and usually subside spontaneously, or shortly after the ingested material has been expelled from the intestinal tract. The active agents have not been identified but are thought to be sesquiterpenes, which have been isolated from the related genus Lactarius and from Russula sardonia. Sesquiterpenoids that have been identified from R. emetica include the previously known compounds lactarorufin A, furandiol, methoxyfuranalcohol, and an unnamed compound unique to this species.
The bitter taste does disappear on cooking and it is said to then be edible, though consumption is not recommended. The mushroom used to be widely eaten in eastern European countries and Russia after parboiling (which removes the toxins), and then salting or pickling. In some regions of Hungary and Slovakia, the cap cuticle is removed and used as a spice for goulash. Both the red squirrel (Sciurus vulgaris) and the American red squirrel (Tamiasciurus hudsonicus) are known to forage for, store and eat R. emetica. Other creatures that have been documented consuming the mushroom include the snail Mesodon thyroidus, several species of slugs (including Arion ater, A. subfuscus, A. intermedius, Limax maximus, L. cinereoniger, and Deroceras reticulatum), the fruit flies Drosophila falleni and D. quinaria, and the fungus gnat Allodia bipexa.
See also
List of Russula species
References
External links
emetica
Fungi of Africa
Fungi of Asia
Fungi of Europe
Fungi of North America
Poisonous fungi
Fungi described in 1774
Fungus species | Russula emetica | [
"Biology",
"Environmental_science"
] | 2,255 | [
"Poisonous fungi",
"Fungi",
"Toxicology",
"Fungus species"
] |
4,243,224 | https://en.wikipedia.org/wiki/Russula%20xerampelina | Russula xerampelina, also commonly known as the shrimp russula, crab brittlegill, or shrimp mushroom, is a basidiomycete mushroom of the brittlegill genus Russula. Two subspecies are recognised. The fruiting bodies appear in coniferous woodlands in autumn in northern Europe and North America. Their caps are coloured various shades of wine-red, purple to green. Mild tasting and edible, it is one of the most highly regarded brittlegills for the table. It is also notable for smelling of shellfish or crab when fresh.
Taxonomy
Russula xerampelina was originally described in 1770 as Agaricus xerampelina from a collection in Bavaria by the German mycologist Jacob Christian Schaeffer, who noted the colour as fusco-purpureus or "purple-brown". It was later given its present binomial name by Swedish mycologist Elias Magnus Fries. Its specific epithet is taken from the Ancient Greek meaning "colour of dried vine leaves", xeros meaning "dry", and ampělinos or "of the vine".
Two subspecies have been recognised, var. xerampelina and var. tenuicarnosa, with thinner flesh in the cap and the stipe. The name R. erythropoda is now considered a synonym, and former subspecies R. (xerampelina subsp.) amoenipes (originally named by Henri Romagnesi) now a separate species. A former variety with a greenish cap, R. xerampelina var. elaeodes, is now classified as R. clavipes.
As the first defined species, it gives its name to the section Xerampelinae, a group of related species within the genus Russula, occasionally all termed R. xerampelina in the past.
Common names include shrimp mushroom, shrimp Russula, crab brittlegill, and shellfish-scented Russula.
Description
Russula xerampelina has a characteristic odour of boiled crab or shrimp. Trimethylamine and its precursor, trimethylamine N-oxide, are the source of this mushroom's distinct odour. The cap is wide, domed, flat, or with a slightly depressed centre, and sticky. The colour is variable, most commonly purple to wine-red, or greenish, and darker towards the centre of the cap. There are fine grooves up to a centimetre long running perpendicular to the margin. The gills are narrowly spaced, have a mild to rather bitter taste, and turn creamy-yellow on aging specimens. The spore print is creamy-yellow to ochre. The oval spores measure 8.8–9.9 by 6.7–7.8 μm and are covered with 1 μm spines. The stipe is long, wide, cylindrical, white or sometimes with a reddish blush, bruising brown.
This Russula has been divided into several similar species by some mycologists. However, they all have the singular dark green colour reaction to iron salts (iron(II) sulfate) when applied to the flesh, and all smell of shellfish. This aroma is quite distinct, and becomes stronger with age.
Similar species
More reddish-capped forms could be confused with the sickener (Russula emetica), although the latter always has a white stipe and gills; greener-capped species may resemble the also edible R. aeruginea.
Many other species in the genus are similar, e.g. Russula graveolens, but most lack the seafood smell. Russula olivacea has a more velvety cap.
Distribution and habitat
Russula xerampelina is widely distributed; quite common in northern temperate zones, and often ranging into the Arctic Circle, it also ranges south to Costa Rica. In North America, it appears from July to October to the east and October to January in the west. It grows solitary, or in groups with conifers, and seems to have a preference for Douglas fir, or more rarely pine trees or larch. It is sometimes found in deciduous woods, such as beech and oak.
Variety tenuicarnosa has been found on sandy soils under pine in Slovakia and northern Italy in Trentino.
Uses
The taste of Russula xerampelina is mild. This Russula is considered one of the best edible species of its genus, although the crab, or shrimp taste and smell will persist even when cooking. This is more pronounced and less pleasant in older specimens. The young caps are said to be superb stuffed with any suitable ingredients, and are rarely maggoty.
See also
List of Russula species
References
"Danske storsvampe. Basidiesvampe" [a key to Danish basidiomycetes] J.H. Petersen and J. Vesterholt eds. Gyldendal. Viborg, Denmark, 1990.
External links
Rogers Mushrooms - Russula xerampelina
xerampelina
Edible fungi
Fungi of Europe
Fungi of North America
Fungi described in 1774
Taxa named by Jacob Christian Schäffer
Fungus species | Russula xerampelina | [
"Biology"
] | 1,056 | [
"Fungi",
"Fungus species"
] |
4,243,241 | https://en.wikipedia.org/wiki/Cellular%20architecture | Cellular architecture is a type of computer architecture prominent in parallel computing. Cellular architectures are relatively new, with IBM's Cell microprocessor being the first one to reach the market. Cellular architecture takes multi-core architecture design to its logical conclusion, by giving the programmer the ability to run large numbers of concurrent threads within a single processor. Each 'cell' is a compute node containing thread units, memory, and communication. Speed-up is achieved by exploiting thread-level parallelism inherent in many applications.
Cell, a cellular architecture containing 9 cores, is the processor used in the PlayStation 3. Another prominent cellular architecture is Cyclops64, a massively parallel architecture currently under development by IBM.
Cellular architectures follow the low-level programming paradigm, which exposes the programmer to much of the underlying hardware. This allows the programmer to greatly optimize their code for the platform, but at the same time makes it more difficult to develop software.
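The programming model can be pictured as many independent workers, each operating on its own partition of the data. A rough illustration in Python follows; this is not Cell or Cyclops64 code (those platforms are programmed in C against vendor SDKs), OS processes merely stand in for hardware thread units, and the cell count of 8 is an arbitrary assumption:

from concurrent.futures import ProcessPoolExecutor

def cell_task(chunk):
    # each "cell" works independently on its own slice of the data
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_cells = 8  # assumed number of compute cells
    chunks = [data[i::n_cells] for i in range(n_cells)]
    with ProcessPoolExecutor(max_workers=n_cells) as pool:
        partial_sums = list(pool.map(cell_task, chunks))
    print(sum(partial_sums))

Speed-up comes only when the per-cell work dominates the cost of distributing the data, which is why such architectures favour applications with abundant thread-level parallelism.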
See also
Cellular automaton
External links
Cellular architecture builds next generation supercomputers
ORNL, IBM, and the Blue Gene Project
Energy, IBM are partners in biological supercomputing project
Cell-based Architecture
Parallel computing
Computer architecture
Classes of computers | Cellular architecture | [
"Technology",
"Engineering"
] | 245 | [
"Computer engineering",
"Computer architecture",
"Computer hardware stubs",
"Computer systems",
"Computing stubs",
"Computers",
"Classes of computers"
] |
4,243,290 | https://en.wikipedia.org/wiki/Ghorfa | A ghorfa () is a type of communal granary found mainly in southern Tunisia. Similar structures are also found in northeastern Libya. They are associated in particular with Berber settlements in these regions. They consist of a collection of vaulted rooms built in rows and stacked in multiple stories organized around an internal courtyard.
Terminology
The Arabic word ghorfa () refers in a more narrow sense to the individual rooms of the granary. The granary as a whole can also be known as a ksar (plur. ksour), the term used for fortified villages in the region. Some similar fortified granaries in Tunisia are referred to by the term kasbah.
Historical background
The formation of the collective granaries in southern Tunisia and the Nafusa Mountains of Libya can be attributed generally to the 14th century. In more recent centuries, the number of ksour in southern Tunisia increased as local lifestyles became more uniform. At one time, some 6000 ghorfas existed in Tunisia. A large proportion of these have disappeared since Tunisian independence in the 20th century, as the rural economy in the region declined. In the Nafusa Mountains, most of the ksour were destroyed in the 19th century when the Ottomans suppressed a rebellion in the area.
Architecture
Ghorfa-type granaries consist of a series of barrel-vaulted rooms, each with a single door, built in rows and stacked on top of each other to form multiple stories. These are organized around an internal courtyard, usually quadrilateral in shape, from which the rooms are accessed. The tallest granaries can be up to four or five stories high. The rooms were used to store grain, dates, and other food or animal products. The rooms at the ground level could also be used as living quarters for guards and animals. The rooms above ground level are accessed by external staircases. Many of these structures were built using loose stones and clay.
Notable examples
In Tunisia:
Medenine
Metameur
(near Ksar Aouadid, also known as Ksar Zenata)
Ksar Ouled Soltane
In Libya:
Gasr Al-Hajj
In popular culture
Ghorfas were featured prominently in the film Star Wars: Episode I – The Phantom Menace as the slave quarters of Mos Espa, home to Anakin Skywalker. These scenes in the film show ghorfas from several locations in southern Tunisia, including Ksar Ouled Soltane and Ksar Hadada.
See also
Fortified Granaries of Aures
Chenini
Douiret
Menzel (Djerba)
References
Sources
Rooms
Berber architecture
Granaries | Ghorfa | [
"Engineering"
] | 531 | [
"Rooms",
"Architecture"
] |
4,243,339 | https://en.wikipedia.org/wiki/Russula%20vesca | Russula vesca, known by the common names of bare-toothed Russula or the flirt, is a basidiomycete mushroom of the genus Russula.
Taxonomy
Russula vesca was described and named by the eminent Swedish mycologist Elias Magnus Fries (1794–1878). The specific epithet is the feminine form of the Latin adjective vescus, meaning "edible".
Description
The skin of the cap typically does not reach the margins (resulting in the common names). The cap is 5–10 cm wide, flat, convex, or with a slightly depressed centre, weakly sticky, and brownish to dark brick-red in colour. The taste is mild. The gills are closely spaced and white. The stipe narrows toward the base, is 2–7 cm long and 1.5–2.5 cm wide, and white; it turns deep salmon when rubbed with iron salts (ferrous sulfate). The spore print is white.
Distribution and habitat
Russula vesca appears in summer or autumn, and grows primarily in deciduous forests in Europe, and North America.
Edibility
Russula vesca is considered edible and good, with a mild nutty flavour. In some countries, including Russia, Ukraine and Finland it is considered entirely edible even in the raw state.
See also
List of Russula species
References
"Danske storsvampe. Basidiesvampe" [a key to Danish basidiomycetes] J.H. Petersen and J. Vesterholt eds. Gyldendal. Viborg, Denmark, 1990.
External links
vesca
Fungi described in 1836
Fungi of Europe
Fungi of North America
Edible fungi
Fungus species | Russula vesca | [
"Biology"
] | 336 | [
"Fungi",
"Fungus species"
] |
4,243,352 | https://en.wikipedia.org/wiki/Rubidium%20bromide | Rubidium bromide is an inorganic compound with the chemical formula . It is a salt of hydrogen bromide. It consists of bromide anions and rubidium cations . It has a NaCl crystal structure, with a lattice constant of 685 picometres.
There are several methods for synthesising rubidium bromide. One involves reacting rubidium hydroxide with hydrobromic acid:
RbOH + HBr → RbBr + H2O
Another method is to neutralize rubidium carbonate with hydrobromic acid:
Rb2CO3 + 2 HBr → 2 RbBr + H2O + CO2
Rubidium metal would react directly with bromine to form RbBr, but this is not a sensible production method, since rubidium metal is substantially more expensive than the carbonate or hydroxide; moreover, the reaction would be explosive.
References
WebElements. URL accessed March 1, 2006.
Rubidium compounds
Bromides
Metal halides
Alkali metal bromides
Rock salt crystal structure | Rubidium bromide | [
"Chemistry"
] | 205 | [
"Inorganic compounds",
"Salts",
"Inorganic compound stubs",
"Bromides",
"Metal halides"
] |
4,243,381 | https://en.wikipedia.org/wiki/Russula%20nobilis | Formerly Russula mairei (Singer), and commonly known as the beechwood sickener, the now re-classified fungus Russula nobilis (Velen.) is a basidiomycete mushroom of the genus Russula. This group of mushrooms are noted for their brittle gills and bright colours.
Taxonomy
It was previously named in honour of French mycologist René Maire by Rolf Singer in 1929, but found to be the same taxon as the earlier 1920 Russula nobilis, which has naming priority.
Description
The cap is a red or rosy colour, 3–6 cm wide, convex to flat, or slightly depressed, and weakly sticky. It peels only to a third of its radius, which reveals pink flesh. The flesh is firm and white or sometimes yellowish, smells of coconut, and tastes peppery. It is often damaged by slugs. The stem is 2–5 cm long, 1–1.5 cm wide, cylindrical (firmer than that of its conifer-dwelling namesake, Russula emetica), and white. The gills are narrowly spaced, adnexed, rounded, and white, often with a faint blue-green sheen. The spore print is white.
Distribution and habitat
The species is mycorrhizal with beech (Fagus) in woodland areas. It is widespread and common in Europe, Asia, and North America, where these trees grow.
Edibility
Russula nobilis is inedible, and probably poisonous in quantity, but not deadly. Many bitter-tasting red-capped species can cause problems if eaten raw; the symptoms are mainly gastrointestinal in nature: diarrhoea, vomiting and colicky abdominal cramps. The active agent has not been identified, but the symptoms are thought to be caused by chemical compounds known as sesquiterpenes, which have been isolated from the related genus Lactarius and from Russula sardonia.
See also
List of Russula species
References
"Danske storsvampe. Basidiesvampe" [a key to Danish basidiomycetes] J.H. Petersen and J. Vesterholt eds. Gyldendal. Viborg, Denmark, 1990.
External links
nobilis
Inedible fungi
Fungi of North America
Fungi of Europe
Fungi of Asia
Fungi described in 1920
Taxa named by Josef Velenovský
Fungus species | Russula nobilis | [
"Biology"
] | 482 | [
"Fungi",
"Fungus species"
] |