id int64 580 79M | url stringlengths 31 175 | text stringlengths 9 245k | source stringlengths 1 109 | categories stringclasses 160 values | token_count int64 3 51.8k |
|---|---|---|---|---|---|
9,700,743 | https://en.wikipedia.org/wiki/11-Deoxycortisol | 11-Deoxycortisol, also known as cortodoxone (INN), cortexolone as well as 17α,21-dihydroxyprogesterone or 17α,21-dihydroxypregn-4-ene-3,20-dione, is an endogenous glucocorticoid steroid hormone, and a metabolic intermediate toward cortisol. It was first described by Tadeusz Reichstein in 1938 as Substance S, thus has also been referred to as Reichstein's Substance S or Compound S.
Function
11-Deoxycortisol acts as a glucocorticoid, though is less potent than cortisol.
11-Deoxycortisol is synthesized from 17α-hydroxyprogesterone by 21-hydroxylase and is converted to cortisol by 11β-hydroxylase.
11-Deoxycortisol in mammals has limited biological activity and mainly acts as a metabolic intermediate within the glucocorticoid pathway leading to cortisol. However, in sea lampreys, early jawless fish that originated over 500 million years ago, 11-deoxycortisol plays a crucial role as the primary and ultimate glucocorticoid hormone, with mineralocorticoid properties as well; by binding to specific corticosteroid receptors, it also takes part in intestinal osmoregulation in sea lamprey at metamorphosis, during which they develop seawater tolerance before downstream migration. Sea lampreys do not possess the 11β-hydroxylase enzyme (CYP11B1) that, in mammals, converts 11-deoxycortisol to cortisol and 11-deoxycorticosterone to corticosterone. The absence of this enzyme indicates the existence of a complex and highly specific corticosteroid signaling pathway that emerged at least 500 million years ago with the advent of early vertebrates, and the resulting lack of cortisol and corticosterone in sea lampreys suggests that 11β-hydroxylase may not have been present early in vertebrate evolution.
Clinical significance
11-Deoxycortisol in mammals has limited glucocorticoid activity, but it is the direct precursor of the major mammalian glucocorticoid, cortisol. As a result, the level of 11-deoxycortisol is measured to diagnose impaired cortisol synthesis, to identify the enzyme deficiency responsible for the impairment along the pathway to cortisol, and to differentiate adrenal disorders.
In 11β-hydroxylase deficiency, 11-deoxycortisol and 11-deoxycorticosterone levels increase, and the excess of 11-deoxycorticosterone leads to mineralocorticoid-based hypertension (as opposed to 21-hydroxylase deficiency, in which patients have low blood pressure from a lack of mineralocorticoids).
Cortisol levels affect blood pressure in both directions, because cortisol plays a role in regulating the balance of water and electrolytes in the body. Low cortisol reduces sodium reabsorption by the kidneys, increasing urinary sodium excretion and thereby lowering blood volume and blood pressure. Low cortisol also reduces vascular tone, since cortisol helps maintain normal tone by promoting vasoconstriction, and it disrupts the renin-angiotensin system, leading to altered angiotensin production, reduced aldosterone secretion, and a further fall in blood pressure. Conversely, high cortisol causes significant sodium retention and expansion of the extracellular fluid volume and exchangeable sodium, raising blood volume and blood pressure, together with increased vascular tone and sympathetic activity; stress-induced release of high levels of glucocorticoids such as cortisol activates the sympathetic nervous system (SNS), which controls heart rate, cardiac output, and vasomotor tone, causing vasoconstriction and increasing peripheral arterial resistance, and so raising blood pressure. Proposed mechanisms for cortisol-induced hypertension include suppression of the nitric oxide system, altered vascular responsiveness to pressor agonists such as adrenaline, increased cardiac output or stroke volume due to plasma volume expansion, and dysregulation of glucocorticoid receptors or of 11β-hydroxysteroid dehydrogenase activity. These mechanisms may be relevant not only to cortisol-induced hypertension but also to Cushing's syndrome (excess cortisol production), apparent mineralocorticoid excess (defects in 11β-hydroxysteroid dehydrogenase), licorice abuse (glycyrrhetinic acid inhibiting 11β-hydroxysteroid dehydrogenase), chronic renal failure (a prolonged half-life of cortisol due to reduced 11β-hydroxysteroid dehydrogenase activity), and possibly essential hypertension, where abnormalities in 11β-hydroxysteroid dehydrogenase activity or glucocorticoid receptor variants may play a role. In 11β-hydroxylase deficiency, 11-deoxycortisol can also be converted to androstenedione in a pathway that could explain the increase in androstenedione levels in this condition.
In 21-hydroxylase deficiency, 11-deoxycortisol levels are low.
History
In 1934, biochemist Tadeusz Reichstein, working in Switzerland, began research on extracts of animal adrenal glands in order to isolate physiologically active compounds, publishing his findings along the way. By 1944, he had already isolated and elucidated the chemical structures of 29 pure substances, naming each newly found substance with the word "Substance" followed by a letter of the Latin alphabet. In 1938, he published an article about "Substance R" and "Substance S" describing their chemical structures and properties. Since about 1955, Substance S has been known as 11-deoxycortisol.
In 1949, American research chemist Percy Lavon Julian, looking for ways to produce cortisone, announced the synthesis of Compound S from the cheap and readily available pregnenolone (itself synthesized from the soybean oil sterol stigmasterol).
On 5 April 1952, biochemist Durey Peterson and microbiologist Herbert Murray at Upjohn published the first report of a breakthrough fermentation process for the microbial 11α-oxygenation of steroids (e.g. progesterone) in a single step by common molds of the order Mucorales.
11α-oxygenation of Compound S produces 11α-hydrocortisone, which can be chemically oxidized to cortisone, or converted by further chemical steps to 11β-hydrocortisone (cortisol).
See also
11-Deoxycorticosterone
Cortexolone 17α-propionate
References
Corticosteroids
Glucocorticoids
Metabolic intermediates
Mineralocorticoids
Pregnanes
Steroid hormones
Steroids | 11-Deoxycortisol | Chemistry | 1,791 |
77,253,313 | https://en.wikipedia.org/wiki/Game%20form | In game theory and related fields, a game form, game frame, ruleset, or outcome function is the set of rules that govern a game and determine its outcome based on each player's choices. A game form differs from a game in that it does not stipulate the utilities or payoffs for each agent.
Mathematically, a game form can be defined as a mapping going from an action space—which describes all the possible moves a player can make—to an outcome space. The action space is also often called a message space when the actions consist of providing information about beliefs or preferences, in which case the game form is called a direct mechanism. For example, an electoral system is a game form mapping a message space consisting of ballots to a winning candidate (the outcome). Similarly, an auction is a game form that takes each bidder's price and maps these bids to both a winner and a set of payments by the bidders.
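A minimal sketch of this mapping, using a first-price sealed-bid auction as the outcome function (the names `GameForm` and `first_price_auction` are illustrative, not from any particular library):

```python
from typing import Callable, Dict, Tuple

# A game form maps a profile of actions (here: bids) to an outcome.
# It says nothing about bidders' utilities; pairing it with payoff
# functions is what turns a game form into a full game.
Outcome = Tuple[str, float]                      # (winner, price paid)
GameForm = Callable[[Dict[str, float]], Outcome]

def first_price_auction(bids: Dict[str, float]) -> Outcome:
    """Outcome function: the highest bidder wins and pays their own bid."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

if __name__ == "__main__":
    print(first_price_auction({"alice": 10.0, "bob": 7.5}))  # ('alice', 10.0)
```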
Often, a game form is a set of rules or institutions designed to implement some normative goal (called a social choice function), by motivating agents to act in a particular way through an appropriate choice of incentives. Then, the game form is called an implementation or mechanism. This approach is widely used in the study of auctions and electoral systems.
The social choice function represents the desired outcome or goal of the game, such as maximizing social welfare or achieving a fair allocation of resources. The mechanism designer's task is to design the game form in such a way that when each player plays their best response (i.e. behaves strategically), the resulting equilibrium implements the desired social choice function.
References
Game theory
Mechanism design
Social choice theory | Game form | Mathematics | 344 |
53,581,095 | https://en.wikipedia.org/wiki/Photopolymerization-based%20signal%20amplification | Photopolymerization-based signal amplification (PBA) is a method of amplifying detection signals from molecular recognition events in an immunoassay by utilizing a radical polymerization initiated through illumination by light. To contrast between a negative and a positive result, PBA is linked to a colorimetric method, thereby resulting in a change in color when a targeted analyte is detected, i.e., a positive signal. PBA is also used to quantify the concentration of the analyte by measuring the intensity of the color.
Method
PBA is achieved by sequentially adding three kinds of solutions to a test strip and illuminating it with green light. First, a droplet of a patient’s sample is loaded on a test strip whose surface is covered with immobilized antibodies. If the sample contains the target antigens, they bind to the immobilized antibodies. (Figure 1a)
Second, eosin-conjugated antibodies are added to the patient’s sample. This second antibody specifically binds with the bound antigens, thereby causing each bound antigen to be sandwiched between the first and eosin-conjugated antibodies. (Figure 1b) After ten minutes, the droplet on the surface is rinsed away in order to make sure that only the sandwiched binding complexes are left on the surface before adding the third solution.
Lastly, a droplet of a mixture of monomers (e.g., PEGDA and N-vinyl pyrrolidone) and phenolphthalein is added to the test strip, and the droplet is illuminated with green visible light, by which the eosin molecules become excited and produce radicals. (Figure 1c) As a result, propagation occurs and polymers are formed.
Since phenolphthalein molecules are surrounded by the polymers and thus left on the surface even after another rinse, the test strip turns red when a base is added. (Figure 1d) On the other hand, if the patient’s sample does not include any targeted antigens, the sandwiched binding complexes on the surface will not be formed, which leads to no red color.
Principle
Regeneration of Eosin
Many radical polymerizations, including ATRP and RAFT, cannot occur in an ambient environment because dissolved oxygen molecules can rapidly react with active radical species and form less active peroxyl radicals, thus inhibiting the radical polymerizations. Eosin-sensitized photopolymerization, on the other hand, can overcome this inhibition by dissolved oxygen with only sub-micromolar concentrations of free eosin in a PBA system, which allows radical polymerization even in an ambient environment. A great number of mechanisms have been proposed to explain this special ability of eosin, but the most recent focuses on the regeneration of eosin with the production of superoxide.
As can be seen in Figure 2, Liang et al. proposed that eosin radicals (Eosin Y•−) react with oxygen, regenerating Eosin Y. In this mechanism, when the ground state of eosin (Eosin Y) absorbs visible light, the eosin becomes excited (Eosin Y*). Then, it is reduced—given one electron—through a reaction with a tertiary amine, generating the amine radical and the eosin radical. This eosin radical is oxidized by the reaction with oxygen, so Eosin Y can be regenerated. The regeneration of eosin makes the PBA efficient because oxygen is consumed through this photocatalytic cycle of Eosin Y, so polymerization can take place in an ambient environment even when only a few micromolar of Eosin Y is present.
Eosin Y is not the only molecule with significant resilience to inhibition by oxygen; methylene blue can also initiate photopolymerization in the presence of oxygen, as Padon reported. However, Eosin Y has been regarded as the best photoinitiator due to its long excitation lifetime and low fluorescence quantum yield (Φ), which allow it to react with tertiary amines and generate radicals much faster than other alternatives.
Quantification
Quantification with PBA can be achieved by measuring the intensity of the red color from phenolphthalein, because a brighter red emerges when the sample contains a higher concentration of target antigens. For instance, if more antigens are bound to the surface antibodies, more eosin-conjugated antibodies will also bind to the bound analytes. Thus, photopolymerization on the surface becomes much faster and forms a thicker hydrogel film in which phenolphthalein molecules are trapped. Since more phenolphthalein molecules can remain in the thicker film after further rinsing, the indicators can give a higher intensity of red.
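A minimal sketch of how such a measurement could be turned into a concentration estimate, assuming a monotonic calibration curve built from standards of known concentration (all numbers below are illustrative, not from the source):

```python
import numpy as np

# Hypothetical calibration data: normalized red-channel intensity measured
# for standards of known antigen concentration (values are made up).
calib_conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0])          # nM
calib_intensity = np.array([0.02, 0.10, 0.38, 0.61, 0.95])  # dimensionless

def concentration_from_intensity(intensity: float) -> float:
    """Map a measured intensity back to concentration by interpolating
    the calibration curve (valid only inside the calibrated range)."""
    return float(np.interp(intensity, calib_intensity, calib_conc))

print(concentration_from_intensity(0.50))  # ~7.6 nM for these made-up values
```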
Other approaches
Many other polymerization methods have been implemented as signal amplification tools: ATRP (atom-transfer radical polymerization), RAFT (reversible addition-fragmentation chain-transfer polymerization), and enzyme-mediated redox polymerization. However, many of them are not usable in ambient systems because they are susceptible to inhibition by oxygen. To overcome this, an air-tolerant ATRP-based signal amplification has been developed. This method provides better sensitivity (~0.2 pmol) than eosin-sensitized PBA (1–10 nmol), but the air-tolerant ATRP takes much more time (~1 hour) to reach that sensitivity than the PBA (~100 seconds).
References
Immunologic tests | Photopolymerization-based signal amplification | Biology | 1,154 |
38,954,778 | https://en.wikipedia.org/wiki/Israel%20Loves%20Iran | Israel Loves Iran is a public campaign and social media movement that seeks to bring together Israelis and Iranians and promote peace between their two countries.
History
The organization was founded by Israeli graphic designer and the owner of Pushpin Mehina, Ronny Edry, when he first posted a picture of himself and his young daughter to Facebook with a graphic stating "Iranians, we love you, we will never bomb your country." The photo went viral, sparking a campaign which has expanded to include hundreds of thousands of people in many different countries; Israel Loves Iran has inspired many similar campaigns, including Iran Loves Israel.
The movement has seen many Israeli and Iranian citizens meet in third-party countries, often simply to have coffee together and take a picture to promote peace. Israel Loves Iran has included extensive public ad campaigns, including a campaign depicting images of Israelis and Iranians on the sides of buses in Tel Aviv.
In 2012, Ronny Edry spoke about the campaign at TEDxJaffa, and the video of his talk has been viewed more than 1.2 million times on TED.com. In 2013, Ronny Edry and Iran Loves Israel founder Majid Nowrouzi travelled to St. Louis, USA with their families in order to meet, where Edry spoke at Principia College and received the Euphrates Institute’s Visionary of the Year award, and the two were interviewed for public radio.
An online petition by Iranian Roya Mobasheri is pushing for a Nobel Peace Prize nomination for the Israel Loves Iran campaign.
See also
Tehran - Haifa - Tel Aviv
References
External links
Peace movements
Social media
Politics of Israel
Politics of Iran
Iran–Israel relations | Israel Loves Iran | Technology | 339 |
47,641 | https://en.wikipedia.org/wiki/Standard%20Model | The Standard Model of particle physics is the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy.
Although the Standard Model is believed to be theoretically self-consistent and has demonstrated some success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain why there is more matter than anti-matter, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses.
The development of the Standard Model was driven by theoretical and experimental particle physicists alike. The Standard Model is a paradigm of a quantum field theory for theorists, exhibiting a wide range of phenomena, including spontaneous symmetry breaking, anomalies, and non-perturbative behavior. It is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations.
Historical background
In 1928, Paul Dirac introduced the Dirac equation, which implied the existence of antimatter.
In 1954, Yang Chen-Ning and Robert Mills extended the concept of gauge theory for abelian groups, e.g. quantum electrodynamics, to nonabelian groups to provide an explanation for strong interactions. In 1957, Chien-Shiung Wu demonstrated parity was not conserved in the weak interaction.
In 1961, Sheldon Glashow combined the electromagnetic and weak interactions. In 1964, Murray Gell-Mann and George Zweig introduced quarks and that same year Oscar W. Greenberg implicitly introduced color charge of quarks. In 1967 Steven Weinberg and Abdus Salam incorporated the Higgs mechanism into Glashow's electroweak interaction, giving it its modern form.
In 1970, Sheldon Glashow, John Iliopoulos, and Luciano Maiani introduced the GIM mechanism, predicting the charm quark. In 1973, David Gross and Frank Wilczek, and independently H. David Politzer, discovered that non-Abelian gauge theories, like the color theory of the strong force, have asymptotic freedom. In 1976, Martin Perl discovered the tau lepton at SLAC. In 1977, a team led by Leon Lederman at Fermilab discovered the bottom quark.
The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model. This includes the masses of the W and Z bosons, and the masses of the fermions, i.e. the quarks and leptons.
After the neutral weak currents caused by Z boson exchange were discovered at CERN in 1973, the electroweak theory became widely accepted and Glashow, Salam, and Weinberg shared the 1979 Nobel Prize in Physics for discovering it. The W± and Z0 bosons were discovered experimentally in 1983; and the ratio of their masses was found to be as the Standard Model predicted.
The theory of the strong interaction (i.e. quantum chromodynamics, QCD), to which many contributed, acquired its modern form in 1973–74 when asymptotic freedom was proposed (a development that made QCD the main focus of theoretical research) and experiments confirmed that the hadrons were composed of fractionally charged quarks.
The term "Standard Model" was introduced by Abraham Pais and Sam Treiman in 1975, with reference to the electroweak theory with four quarks. Steven Weinberg, has since claimed priority, explaining that he chose the term Standard Model out of a sense of modesty and used it in 1973 during a talk in Aix-en-Provence in France.
Particle content
The Standard Model includes members of several classes of elementary particles, which in turn can be distinguished by other characteristics, such as color charge.
All particles can be summarized as follows:
Fermions
The Standard Model includes 12 elementary particles of spin 1/2, known as fermions. Fermions respect the Pauli exclusion principle, meaning that two identical fermions cannot simultaneously occupy the same quantum state in the same atom. Each fermion has a corresponding antiparticle, a particle with the same properties except for charges of opposite sign. Fermions are classified based on how they interact, which is determined by the charges they carry, into two groups: quarks and leptons. Within each group, pairs of particles that exhibit similar physical behaviors are then grouped into generations (see the table). Each member of a generation has a greater mass than the corresponding particle of generations prior. Thus, there are three generations of quarks and leptons. As first-generation particles do not decay, they comprise all of ordinary (baryonic) matter. Specifically, all atoms consist of electrons orbiting around the atomic nucleus, ultimately constituted of up and down quarks. On the other hand, second- and third-generation charged particles decay with very short half-lives and can only be observed in high-energy environments. Neutrinos of all generations also do not decay, and pervade the universe, but rarely interact with baryonic matter.
There are six quarks: up, down, charm, strange, top, and bottom. Quarks carry color charge, and hence interact via the strong interaction. The color confinement phenomenon results in quarks being strongly bound together such that they form color-neutral composite particles called hadrons; quarks cannot individually exist and must always bind with other quarks. Hadrons can contain either a quark-antiquark pair (mesons) or three quarks (baryons). The lightest baryons are the nucleons: the proton and neutron. Quarks also carry electric charge and weak isospin, and thus interact with other fermions through electromagnetism and weak interaction. The six leptons consist of the electron, electron neutrino, muon, muon neutrino, tau, and tau neutrino. The leptons do not carry color charge, and do not respond to strong interaction. The charged leptons carry an electric charge of −1 e, while the three neutrinos carry zero electric charge. Thus, the neutrinos' motions are influenced by only the weak interaction and gravity, making them difficult to observe.
Gauge bosons
The Standard Model includes 4 kinds of gauge bosons of spin 1, bosons being quantum particles with integer spin. The gauge bosons are defined as force carriers, as they are responsible for mediating the fundamental interactions. The Standard Model explains the four fundamental forces as arising from the interactions, with fermions exchanging virtual force carrier particles, thus mediating the forces. At a macroscopic scale, this manifests as a force. Because they have integer spin, gauge bosons do not follow the Pauli exclusion principle that constrains fermions; bosons do not have a theoretical limit on their spatial density. The types of gauge bosons are described below.
Electromagnetism: Photons mediate the electromagnetic force, responsible for interactions between electrically charged particles. The photon is massless and is described by the theory of quantum electrodynamics (QED).
Strong Interactions: Gluons mediate the strong interactions, which binds quarks to each other by influencing the color charge, with the interactions being described in the theory of quantum chromodynamics (QCD). They have no mass, and there are eight distinct gluons, with each being denoted through a color-anticolor charge combination (e.g. red–antigreen). As gluons have an effective color charge, they can also interact amongst themselves.
Weak Interactions: The W+, W−, and Z gauge bosons mediate the weak interactions between all fermions and are responsible for radioactivity. They are massive, with the Z boson having more mass than the W±. The weak interactions involving the W± act only on left-handed particles and right-handed antiparticles. The W± bosons carry an electric charge of +1 and −1 and couple to the electromagnetic interaction. The electrically neutral Z boson interacts with both left-handed particles and right-handed antiparticles. These three gauge bosons, along with the photon, are grouped together as collectively mediating the electroweak interaction.
Gravity: Gravity is currently unexplained in the Standard Model; a hypothetical mediating particle, the graviton, has been proposed but not observed. This is due to the incompatibility of quantum mechanics and Einstein's theory of general relativity, regarded as the best explanation for gravity. In general relativity, gravity is explained as the geometric curving of spacetime.
The Feynman diagram calculations, which are a graphical representation of the perturbation theory approximation, invoke "force mediating particles", and when applied to analyze high-energy scattering experiments are in reasonable agreement with the data. However, perturbation theory (and with it the concept of a "force-mediating particle") fails in other situations. These include low-energy quantum chromodynamics, bound states, and solitons. The interactions between all the particles described by the Standard Model are summarized by the diagrams on the right of this section.
Higgs boson
The Higgs particle is a massive scalar elementary particle theorized by Peter Higgs (and others) in 1964, when he showed that Goldstone's 1962 theorem (generic continuous symmetry, which is spontaneously broken) provides a third polarisation of a massive vector field. Hence, Goldstone's original scalar doublet, the massive spin-zero particle, was proposed as the Higgs boson, and is a key building block in the Standard Model. It has no intrinsic spin, and for that reason is classified as a boson with spin-0.
The Higgs boson plays a unique role in the Standard Model, by explaining why the other elementary particles, except the photon and gluon, are massive. In particular, the Higgs boson explains why the photon has no mass, while the W and Z bosons are very heavy. Elementary-particle masses and the differences between electromagnetism (mediated by the photon) and the weak force (mediated by the W and Z bosons) are critical to many aspects of the structure of microscopic (and hence macroscopic) matter. In electroweak theory, the Higgs boson generates the masses of the leptons (electron, muon, and tau) and quarks. As the Higgs boson is massive, it must interact with itself.
Because the Higgs boson is a very massive particle and also decays almost immediately when created, only a very high-energy particle accelerator can observe and record it. Experiments to confirm and determine the nature of the Higgs boson using the Large Hadron Collider (LHC) at CERN began in early 2010 and were performed at Fermilab's Tevatron until its closure in late 2011. Mathematical consistency of the Standard Model requires that any mechanism capable of generating the masses of elementary particles must become visible at energies above 1.4 TeV; therefore, the LHC (designed to collide two proton beams) was built to answer the question of whether the Higgs boson actually exists.
On 4 July 2012, two of the experiments at the LHC (ATLAS and CMS) both reported independently that they had found a new particle with a mass of about 125 GeV/c2 (about 133 proton masses, on the order of 10^−25 kg), which is "consistent with the Higgs boson". On 13 March 2013, it was confirmed to be the searched-for Higgs boson.
Theoretical aspects
Construction of the Standard Model Lagrangian
Technically, quantum field theory provides the mathematical framework for the Standard Model, in which a Lagrangian controls the dynamics and kinematics of the theory. Each kind of particle is described in terms of a dynamical field that pervades space-time.
The construction of the Standard Model proceeds following the modern method of constructing most field theories: by first postulating a set of symmetries of the system, and then by writing down the most general renormalizable Lagrangian from its particle (field) content that observes these symmetries.
The global Poincaré symmetry is postulated for all relativistic quantum field theories. It consists of the familiar translational symmetry, rotational symmetry and the inertial reference frame invariance central to the theory of special relativity. The local SU(3) × SU(2) × U(1) gauge symmetry is an internal symmetry that essentially defines the Standard Model. Roughly, the three factors of the gauge symmetry give rise to the three fundamental interactions. The fields fall into different representations of the various symmetry groups of the Standard Model (see table). Upon writing the most general Lagrangian, one finds that the dynamics depends on 19 parameters, whose numerical values are established by experiment. The parameters are summarized in the table above.
Quantum chromodynamics sector
The quantum chromodynamics (QCD) sector defines the interactions between quarks and gluons, which is a Yang–Mills gauge theory with SU(3) symmetry, generated by $T^a = \lambda^a/2$. Since leptons do not interact with gluons, they are not affected by this sector. The Dirac Lagrangian of the quarks coupled to the gluon fields is given by

$$\mathcal{L}_{\text{QCD}} = \overline{\psi}\, i\gamma^\mu D_\mu \psi - \frac{1}{4} G^a_{\mu\nu} G^{a\,\mu\nu},$$

where $\psi$ is a three component column vector of Dirac spinors, each element of which refers to a quark field with a specific color charge (i.e. red, blue, and green) and summation over flavor (i.e. up, down, strange, etc.) is implied.

The gauge covariant derivative of QCD is defined by $D_\mu \equiv \partial_\mu - i g_s \frac{1}{2} \lambda^a G^a_\mu$, where
$\gamma^\mu$ are the Dirac matrices,
$G^a_\mu$ is the 8-component ($a = 1, 2, \dots, 8$) SU(3) gauge field,
$\lambda^a$ are the 3 × 3 Gell-Mann matrices, generators of the SU(3) color group,
$G^a_{\mu\nu}$ represents the gluon field strength tensor, and
$g_s$ is the strong coupling constant.
The QCD Lagrangian is invariant under local SU(3) gauge transformations; i.e., transformations of the form $\psi \rightarrow U\psi$, where $U = e^{i \lambda^a \phi^a(x)/2}$ is a 3 × 3 unitary matrix with determinant 1, making it a member of the group SU(3), and $\phi^a(x)$ is an arbitrary function of spacetime.
Electroweak sector
The electroweak sector is a Yang–Mills gauge theory with the symmetry group $\mathrm{U}(1) \times \mathrm{SU}(2)_\mathrm{L}$,

$$\mathcal{L}_{\text{EW}} = \overline{Q}_{Lj}\, i\gamma^\mu D_\mu Q_{Lj} + \overline{u}_{Rj}\, i\gamma^\mu D_\mu u_{Rj} + \overline{d}_{Rj}\, i\gamma^\mu D_\mu d_{Rj} + \overline{\ell}_{Lj}\, i\gamma^\mu D_\mu \ell_{Lj} + \overline{e}_{Rj}\, i\gamma^\mu D_\mu e_{Rj} - \frac{1}{4} W^a_{\mu\nu} W^{a\,\mu\nu} - \frac{1}{4} B_{\mu\nu} B^{\mu\nu},$$

where the subscript $j$ sums over the three generations of fermions; $Q_{Lj}$, $u_{Rj}$, and $d_{Rj}$ are the left-handed doublet, right-handed singlet up type, and right-handed singlet down type quark fields; and $\ell_{Lj}$ and $e_{Rj}$ are the left-handed doublet and right-handed singlet lepton fields.
The electroweak gauge covariant derivative is defined as $D_\mu \equiv \partial_\mu - i g' \frac{1}{2} Y_\mathrm{W} B_\mu - i g \frac{1}{2} \vec{\tau}_\mathrm{L} \cdot \vec{W}_\mu$, where
$B_\mu$ is the U(1) gauge field,
$Y_\mathrm{W}$ is the weak hypercharge – the generator of the U(1) group,
$\vec{W}_\mu$ is the 3-component SU(2) gauge field,
$\vec{\tau}_\mathrm{L}$ are the Pauli matrices – infinitesimal generators of the SU(2) group – with subscript L to indicate that they only act on left-chiral fermions,
$g'$ and $g$ are the U(1) and SU(2) coupling constants respectively,
$W^{a}_{\mu\nu}$ ($a = 1, 2, 3$) and $B_{\mu\nu}$ are the field strength tensors for the weak isospin and weak hypercharge fields.
Notice that the addition of fermion mass terms into the electroweak Lagrangian is forbidden, since terms of the form $m\overline{\psi}\psi$ do not respect gauge invariance. Neither is it possible to add explicit mass terms for the U(1) and SU(2) gauge fields. The Higgs mechanism is responsible for the generation of the gauge boson masses, and the fermion masses result from Yukawa-type interactions with the Higgs field.
Higgs sector
In the Standard Model, the Higgs field is an $\mathrm{SU}(2)_\mathrm{L}$ doublet of complex scalar fields with four degrees of freedom:

$$\varphi = \begin{pmatrix} \varphi^+ \\ \varphi^0 \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} \varphi_1 + i\varphi_2 \\ \varphi_3 + i\varphi_4 \end{pmatrix},$$

where the superscripts + and 0 indicate the electric charge of the components. The weak hypercharge of both components is 1. Before symmetry breaking, the Higgs Lagrangian is

$$\mathcal{L}_\mathrm{H} = \left(D_\mu \varphi\right)^\dagger \left(D^\mu \varphi\right) - V(\varphi),$$

where $D_\mu$ is the electroweak gauge covariant derivative defined above and $V(\varphi)$ is the potential of the Higgs field. The square of the covariant derivative leads to three and four point interactions between the electroweak gauge fields $W^a_\mu$ and $B_\mu$ and the scalar field $\varphi$. The scalar potential is given by

$$V(\varphi) = -\mu^2 \varphi^\dagger \varphi + \lambda \left(\varphi^\dagger \varphi\right)^2,$$

where $\mu^2 > 0$, so that $\varphi$ acquires a non-zero vacuum expectation value, which generates masses for the electroweak gauge fields (the Higgs mechanism), and $\lambda > 0$, so that the potential is bounded from below. The quartic term describes self-interactions of the scalar field $\varphi$.
The minimum of the potential is degenerate with an infinite number of equivalent ground state solutions, which occurs when $\varphi^\dagger \varphi = \frac{\mu^2}{2\lambda}$. It is possible to perform a gauge transformation on $\varphi$ such that the ground state is transformed to a basis where $\varphi_1 = \varphi_2 = \varphi_4 = 0$ and $\varphi_3 = \frac{\mu}{\sqrt{\lambda}} \equiv v$. This breaks the symmetry of the ground state. The expectation value of $\varphi$ now becomes

$$\langle \varphi \rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ v \end{pmatrix},$$

where $v$ has units of mass and sets the scale of electroweak physics. This is the only dimensional parameter of the Standard Model and has a measured value of ~246 GeV/c2.
After symmetry breaking, the masses of the W and Z bosons are given by $m_\mathrm{W} = \frac{1}{2} g v$ and $m_\mathrm{Z} = \frac{1}{2} \sqrt{g^2 + g'^2}\, v$, which can be viewed as predictions of the theory. The photon remains massless. The mass of the Higgs boson is $m_\mathrm{H} = \sqrt{2\mu^2} = \sqrt{2\lambda}\, v$. Since $\mu$ and $\lambda$ are free parameters, the Higgs's mass could not be predicted beforehand and had to be determined experimentally.
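As a rough numerical cross-check of these relations (a sketch assuming the measured values $v \approx 246$ GeV, $m_\mathrm{W} \approx 80.4$ GeV, and $m_\mathrm{Z} \approx 91.2$ GeV, which are not quoted in this section):

$$g = \frac{2 m_\mathrm{W}}{v} \approx \frac{2 \times 80.4}{246} \approx 0.65, \qquad \sqrt{g^2 + g'^2} = \frac{2 m_\mathrm{Z}}{v} \approx 0.74, \qquad \frac{m_\mathrm{W}}{m_\mathrm{Z}} = \cos\theta_\mathrm{W} \approx 0.88,$$

giving $g' \approx 0.35$ and $\sin^2\theta_\mathrm{W} \approx 0.22$, consistent with the measured weak mixing angle.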
Yukawa sector
The Yukawa interaction terms are:

$$\mathcal{L}_{\text{Yukawa}} = (Y_\mathrm{u})_{mn} (\overline{Q}_\mathrm{L})_m \tilde{\varphi} (u_\mathrm{R})_n + (Y_\mathrm{d})_{mn} (\overline{Q}_\mathrm{L})_m \varphi (d_\mathrm{R})_n + (Y_\mathrm{e})_{mn} (\overline{\ell}_\mathrm{L})_m \varphi (e_\mathrm{R})_n + \mathrm{h.c.},$$

where $Y_\mathrm{u}$, $Y_\mathrm{d}$, and $Y_\mathrm{e}$ are matrices of Yukawa couplings, with the $mn$ term giving the coupling of the generations $m$ and $n$, and h.c. means Hermitian conjugate of the preceding terms. The fields $Q_\mathrm{L}$ and $\ell_\mathrm{L}$ are the left-handed quark and lepton doublets. Likewise, $u_\mathrm{R}$, $d_\mathrm{R}$, and $e_\mathrm{R}$ are the right-handed up-type quark, down-type quark, and lepton singlets. Finally, $\varphi$ is the Higgs doublet and $\tilde{\varphi} = i\tau_2 \varphi^*$ is its charge conjugate state.
The Yukawa terms are invariant under the SU(2) × U(1) gauge symmetry of the Standard Model and generate masses for all fermions after spontaneous symmetry breaking.
Fundamental interactions
The Standard Model describes three of the four fundamental interactions in nature; only gravity remains unexplained. In the Standard Model, such an interaction is described as an exchange of bosons between the objects affected, such as a photon for the electromagnetic force and a gluon for the strong interaction. Those particles are called force carriers or messenger particles.
Gravity
Despite being perhaps the most familiar fundamental interaction, gravity is not described by the Standard Model, due to contradictions that arise when combining general relativity, the modern theory of gravity, and quantum mechanics. However, gravity is so weak at microscopic scales that it is essentially unmeasurable. The graviton is postulated to be the mediating particle, but has not yet been proved to exist.
Electromagnetism
Electromagnetism is the only long-range force in the Standard Model. It is mediated by photons and couples to electric charge. Electromagnetism is responsible for a wide range of phenomena including atomic electron shell structure, chemical bonds, electric circuits and electronics. Electromagnetic interactions in the Standard Model are described by quantum electrodynamics.
Weak nuclear force
The weak interaction is responsible for various forms of particle decay, such as beta decay. It is weak and short-range, due to the fact that the weak mediating particles, W and Z bosons, have mass. W bosons have electric charge and mediate interactions that change the particle type (referred to as flavor) and charge. Interactions mediated by W bosons are charged current interactions. Z bosons are neutral and mediate neutral current interactions, which do not change particle flavor. Thus Z bosons are similar to the photon, aside from them being massive and interacting with the neutrino. The weak interaction is also the only interaction to violate parity and CP. Parity violation is maximal for charged current interactions, since the W boson interacts exclusively with left-handed fermions and right-handed antifermions.
In the Standard Model, the weak force is understood in terms of the electroweak theory, which states that the weak and electromagnetic interactions become united into a single electroweak interaction at high energies.
Strong nuclear force
The strong nuclear force is responsible for hadronic and nuclear binding. It is mediated by gluons, which couple to color charge. Since gluons themselves have color charge, the strong force exhibits confinement and asymptotic freedom. Confinement means that only color-neutral particles can exist in isolation, therefore quarks can only exist in hadrons and never in isolation, at low energies. Asymptotic freedom means that the strong force becomes weaker, as the energy scale increases. The strong force overpowers the electrostatic repulsion of protons and quarks in nuclei and hadrons respectively, at their respective scales.
While quarks are bound in hadrons by the fundamental strong interaction, which is mediated by gluons, nucleons are bound by an emergent phenomenon termed the residual strong force or nuclear force. This interaction is mediated by mesons, such as the pion. The color charges inside the nucleon cancel out, meaning most of the gluon and quark fields cancel out outside of the nucleon. However, some residue is "leaked", which appears as the exchange of virtual mesons, that causes the attractive force between nucleons. The (fundamental) strong interaction is described by quantum chromodynamics, which is a component of the Standard Model.
Tests and predictions
The Standard Model predicted the existence of the W and Z bosons, gluon, top quark and charm quark, and predicted many of their properties before these particles were observed. The predictions were experimentally confirmed with good precision.
The Standard Model also predicted the existence of the Higgs boson, which was found in 2012 at the Large Hadron Collider, the final fundamental particle predicted by the Standard Model to be experimentally confirmed.
Challenges
Self-consistency of the Standard Model (currently formulated as a non-abelian gauge theory quantized through path-integrals) has not been mathematically proved. While regularized versions useful for approximate computations (for example lattice gauge theory) exist, it is not known whether they converge (in the sense of S-matrix elements) in the limit that the regulator is removed. A key question related to the consistency is the Yang–Mills existence and mass gap problem.
Experiments indicate that neutrinos have mass, which the classic Standard Model did not allow. To accommodate this finding, the classic Standard Model can be modified to include neutrino mass, although it is not obvious exactly how this should be done.
If one insists on using only Standard Model particles, this can be achieved by adding a non-renormalizable interaction of leptons with the Higgs boson. On a fundamental level, such an interaction emerges in the seesaw mechanism where heavy right-handed neutrinos are added to the theory.
This is natural in the left-right symmetric extension of the Standard Model and in certain grand unified theories. As long as new physics appears below or around 10^14 GeV, the neutrino masses can be of the right order of magnitude.
Theoretical and experimental research has attempted to extend the Standard Model into a unified field theory or a theory of everything, a complete theory explaining all physical phenomena including constants. Inadequacies of the Standard Model that motivate such research include:
The model does not explain gravitation, although physical confirmation of a theoretical particle known as a graviton would account for it to a degree. Though it addresses strong and electroweak interactions, the Standard Model does not consistently explain the canonical theory of gravitation, general relativity, in terms of quantum field theory. The reason for this is, among other things, that quantum field theories of gravity generally break down before reaching the Planck scale. As a consequence, we have no reliable theory for the very early universe.
Some physicists consider it to be ad hoc and inelegant, requiring 19 numerical constants whose values are unrelated and arbitrary. Although the Standard Model, as it now stands, can explain why neutrinos have masses, the specifics of neutrino mass are still unclear. It is believed that explaining neutrino mass will require an additional 7 or 8 constants, which are also arbitrary parameters.
The Higgs mechanism gives rise to the hierarchy problem if some new physics (coupled to the Higgs) is present at high energy scales. In these cases, in order for the weak scale to be much smaller than the Planck scale, severe fine tuning of the parameters is required; there are, however, other scenarios that include quantum gravity in which such fine tuning can be avoided. There are also issues of quantum triviality, which suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar particles.
The model is inconsistent with the emerging Lambda-CDM model of cosmology. Contentions include the absence of an explanation in the Standard Model of particle physics for the observed amount of cold dark matter (CDM) and its contributions to dark energy, which are many orders of magnitude too large. It is also difficult to accommodate the observed predominance of matter over antimatter (matter/antimatter asymmetry). The isotropy and homogeneity of the visible universe over large distances seems to require a mechanism like cosmic inflation, which would also constitute an extension of the Standard Model.
Currently, no proposed theory of everything has been widely accepted or verified.
See also
Yang–Mills theory
Fundamental interaction:
Quantum electrodynamics
Strong interaction: Color charge, Quantum chromodynamics, Quark model
Weak interaction: Electroweak interaction, Fermi's interaction, Weak hypercharge, Weak isospin
Gauge theory: Introduction to gauge theory
Generation
Higgs mechanism: Higgs boson, Alternatives to the Standard Higgs Model
Lagrangian
Open questions: CP violation, Neutrino masses, QCD matter, Quantum triviality
Quantum field theory
Standard Model: Mathematical formulation of, Physics beyond the Standard Model
Electron electric dipole moment
Notes
References
Further reading
Introductory textbooks
Advanced textbooks
Highlights the gauge theory aspects of the Standard Model.
Highlights dynamical and phenomenological aspects of the Standard Model.
920 pages.
952 pages.
670 pages. Highlights group-theoretical aspects of the Standard Model.
Journal articles
External links
"The Standard Model explained in Detail by CERN's John Ellis" omega tau podcast.
The Standard Model on the CERN website explains how the basic building blocks of matter interact, governed by four fundamental forces.
Particle Physics: Standard Model, Leonard Susskind lectures (2010).
Concepts in physics
Particle physics | Standard Model | Physics | 5,764 |
57,528,170 | https://en.wikipedia.org/wiki/Rayleigh%E2%80%93Lorentz%20pendulum | The Rayleigh–Lorentz pendulum (or Lorentz pendulum) is a simple pendulum subjected to a slowly varying frequency due to an external action (the frequency is varied by varying the pendulum length); it is named after Lord Rayleigh and Hendrik Lorentz. This problem formed the basis for the concept of adiabatic invariants in mechanics. On account of the slow variation of the frequency, it can be shown that the ratio of the average energy to the frequency is constant.
History
The pendulum problem was first formulated by Lord Rayleigh in 1902, although some mathematical aspects had been discussed earlier by Léon Lecornu in 1895 and Charles Bossut in 1778. Unaware of Rayleigh's work, at the first Solvay conference in 1911, Hendrik Lorentz proposed a question, "How does a simple pendulum behave when the length of the suspending thread is gradually shortened?", in order to clarify the quantum theory of the time. Albert Einstein responded the next day that both the energy and the frequency of the quantum pendulum change such that their ratio is constant, so that the pendulum remains in the same quantum state as the initial state. These two separate works formed the basis for the concept of the adiabatic invariant, which found applications in various fields and in the old quantum theory. In 1958, Subrahmanyan Chandrasekhar took interest in the problem and studied it, setting off renewed interest; it was subsequently studied by many other researchers, such as John Edensor Littlewood.
Mathematical description
The equation of simple harmonic motion with frequency $\omega(t)$ for the displacement $x(t)$ is given by

$$\ddot{x} + \omega^2(t)\, x = 0.$$

If the frequency is constant, the solution is simply given by $x = A \cos(\omega t + \phi)$. But if the frequency is allowed to vary slowly with time, or precisely, if the characteristic time scale of the frequency variation is much larger than the period of oscillation, i.e.,

$$\frac{1}{\omega}\frac{d\omega}{dt} \ll \omega,$$

then it can be shown that

$$\frac{\bar{E}}{\omega} = \text{constant},$$

where $\bar{E}$ is the energy averaged over an oscillation. Since the frequency is changing with time due to the external action, conservation of energy no longer holds, and the energy over a single oscillation is not constant: during an oscillation the frequency changes (however slowly), and so does the energy. Therefore, to describe the system, one defines the average energy per unit mass for a given potential $V(x)$ as follows

$$\bar{E} = \frac{\oint \left( \tfrac{1}{2}\dot{x}^2 + V(x) \right) dt}{\oint dt},$$

where the closed integral denotes that it is taken over a complete oscillation. Defined this way, it can be seen that the averaging weights each element of the orbit by the fraction of time that the pendulum spends in that element. For the simple harmonic oscillator, it reduces to

$$\bar{E} = \tfrac{1}{2}\, \omega^2 A^2,$$

where both the amplitude $A$ and the frequency $\omega$ are now functions of time.
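A minimal numerical sketch of this invariance (the integration below assumes an illustrative drift law ω(t) = 1 + 0.001 t; function names and parameters are not from the source):

```python
from scipy.integrate import solve_ivp

def omega(t):
    return 1.0 + 1e-3 * t  # drifts slowly: d(omega)/dt << omega**2

def rhs(t, y):
    x, v = y
    return [v, -omega(t) ** 2 * x]  # x'' + omega(t)^2 x = 0

sol = solve_ivp(rhs, (0.0, 2000.0), [1.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

for t in (0.0, 500.0, 1000.0, 2000.0):
    x, v = sol.sol(t)
    energy = 0.5 * v**2 + 0.5 * omega(t)**2 * x**2  # energy per unit mass
    print(f"t={t:6.0f}  E={energy:.5f}  E/omega={energy / omega(t):.5f}")
# E roughly triples as omega rises from 1 to 3, but E/omega stays nearly constant.
```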
References
Classical mechanics | Rayleigh–Lorentz pendulum | Physics | 535 |
9,912,921 | https://en.wikipedia.org/wiki/Zintl%20phase | In chemistry, a Zintl phase is a product of a reaction between a group 1 (alkali metal) or group 2 (alkaline earth metal) element and a main group metal or metalloid (from groups 13, 14, 15, or 16). It is characterized by intermediate metallic/ionic bonding. Zintl phases are a subgroup of brittle, high-melting intermetallic compounds that are diamagnetic or exhibit temperature-independent paramagnetism and are poor conductors or semiconductors.
This type of solid is named after German chemist Eduard Zintl who investigated them in the 1930s. The term "Zintl Phases" was first used by Laves in 1941. In his early studies, Zintl noted that there was an atomic volume contraction upon the formation of these products and realized that this could indicate cation formation. He suggested that the structures of these phases were ionic, with complete electron transfer from the more electropositive metal to the more electronegative main group element. The structure of the anion within the phase is then considered on the basis of the resulting electronic state. These ideas are further developed in the Zintl-Klemm-Busmann concept, where the polyanion structure should be similar to that of the isovalent element. Further, the anionic sublattice can be isolated as polyanions (Zintl ions) in solution and are the basis of a rich subfield of main group inorganic chemistry.
History
A "Zintl Phase" was first observed in 1891 by M. Joannis, who noted an unexpected green colored solution after dissolving lead and sodium in liquid ammonia, indicating the formation of a new product. It was not until many years later, in 1930, that the stoichiometry of the new product was identified as Na4Pb94− by titrations performed by Zintl et al.; and it was not until 1970 that the structure was confirmed by crystallization with ethylenediamine (en) by Kummer.
In the intervening years and in the years since, many other reaction mixtures of metals were explored to provide a great number of examples of this type of system. There are hundreds of both compounds composed of group 14 elements and group 15 elements, plus dozens of others beyond those groups, all spanning a variety of different geometries. Corbett has contributed improvements to the crystallization of Zintl ions by demonstrating the use of chelating ligands, such as cryptands, as cation sequestering agents.
More recently, Zintl phase and ion reactivity in more complex systems, with organic ligands or transition metals, have been investigated, as well as their use in practical applications, such as for catalytic purposes or in materials science.
Zintl phases
Zintl phases are intermetallic compounds that have a pronounced ionic bonding character. They are made up of a polyanionic substructure and group 1 or 2 counter ions, and their structure can be understood by a formal electron transfer from the electropositive element to the more electronegative element in their composition. Thus, the valence electron concentration (VEC) of the anionic element is increased, and it formally moves to the right in its row of the periodic table. Generally the anion does not reach an octet, so to reach that closed-shell configuration, bonds are formed. The structure can be explained by the 8 − N rule (replacing the number of valence electrons, N, by the VEC), making it comparable to an isovalent element; a worked example follows below. The formed polyanionic substructures can be chains (one-dimensional), rings, and other two- or three-dimensional networks or molecule-like entities.
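A worked example of this counting, using NaTl (the classic Zintl phase mentioned under Exceptions below); the arithmetic is a sketch of the Zintl–Klemm idea rather than a quotation from the source:

$$\mathrm{Na} \rightarrow \mathrm{Na}^+ + e^-, \qquad \mathrm{VEC}(\mathrm{Tl}^-) = 3 + 1 = 4, \qquad 8 - \mathrm{VEC} = 4 \ \text{bonds per Tl},$$

so Tl− is isoelectronic with a group 14 element and, like carbon in diamond, forms a four-bonded diamond-like network.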
The Zintl line is a hypothetical boundary drawn between groups 13 and 14. It separates the columns based on the tendency for group 13 elements to form metals when reacted with electropositive group 1 or 2 elements and for group 14 and above to form ionic solids. The 'typical salts' formed in these reactions become more metallic as the main group element becomes heavier.
Synthesis
Zintl phases can be prepared in regular solid-state reactions, usually performed under an inert atmosphere or in a molten salt solution, typically by direct reduction of the corresponding oxides; they can also be obtained from solution-phase reactions in liquid ammonia or mercury. The product can be purified in some cases via zone refining, though often careful annealing will result in large single crystals of a desired phase.
Characterization
Many of the usual methods are useful for determining physical and structural properties of Zintl phases. Some Zintl phases can be decomposed into a Zintl ion—the polyanion that composes the anionic substructure of the phase—and counter ion, which can be studied as described below. The heat of formation of these phases can be evaluated. Often their magnitude is comparable to those of salt formation, providing evidence for the ionic character of these phases. Density measurements indicate a contraction of the product compared to reactants, similarly indicating ionic bonding within the phase. X-ray spectroscopy gives additional information about the oxidation state of the elements, and correspondingly the nature of their bonding. Conductivity and magnetization measurements can also be taken. Finally, the structure of a Zintl phase or ion is most reliably confirmed via X-ray crystallography.
Examples
An illustrative example: There are two types of Zintl ions in K12Si17: 2× [Si4]4− (pseudo-P4, or according to Wade's rules, 12 = 2n + 4 skeletal electrons corresponding to a nido form of a trigonal bipyramid) and 1× [Si9]4− (according to Wade's rules, 22 = 2n + 4 skeletal electrons corresponding to a nido form of a bicapped square antiprism).
Examples from Müller's 1973 review paper with known structures are listed in the table below.
Exceptions
There are examples of a new class of compounds that, on the basis of their chemical formulae, would appear to be Zintl phases, e.g., K8In11, which is metallic and paramagnetic. Molecular orbital calculations have shown that the anion is (In11)7− and that the extra electron is distributed over the cations and, possibly, the anion antibonding orbitals. Another exception is the metallic InBi. InBi fulfills the Zintl phase requisite of element-element bonds but not the requisite of the polyanionic structure fitting a normal valence compound, i.e., the Bi–Bi polyanionic structure does not correspond to a normal valence structure such as the diamond Tl− in NaTl.
Zintl ions
Zintl phases that contain molecule-like polyanions will often separate into their constituent anions and cations in liquid ammonia, ethylenediamine, crown ether, or cryptand solutions. Therefore, they are referred to as Zintl ions. The term 'clusters' is also used to emphasize them as groups with homonuclear bonding. The structures can be described by Wade's rules and occupy an area of transition between localized covalent bonds and delocalized skeletal bonding. Beyond the "aesthetic simplicity and beauty of their structures" and distinctive electronic properties, Zintl ions are also of interest in synthesis because of their unique and unpredictable behavior in solution.
The largest subcategory of Zintl ions is homoatomic clusters of group 14 or 15 elements. Some examples are listed below.
Many examples similarly exist for heteroatomic clusters, where the polyanion is composed of more than one main group element. Some examples are listed below. Zintl ions are also capable of reacting with ligands and transition metals; further heteroatomic examples (intermetalloid clusters) are discussed below. In some solvents, atom exchange can occur between heteroatomic clusters. Additionally, it is notable that fewer large cluster examples exist.
Examples
Homoatomic clusters
[Si4]4−
[Si5]2−
[Si9]2−
[Si9]4−
[Ge4]4−
[Ge5]2−
[Ge9]3−
[Sn4]4−
[Sn5]2−
[Pb4]4−
[Pb9]4−
[P4]2−
[P7]3−
[P11]3−
[As6]4−
[As7]3−
[Sb8]8−
[Sb11]3−
[Bi4]2−
[Bi7]3−
[Bi11]3−
Heteroatomic clusters
AsP3
[Ge2Sn2]4−
[Sn2Bi2]2−
[Sn3Bi3]5−
[Pb2Sb2]2−
Sn5Sb3
[InBi3]2−
Bi14Ge4
[GaBi3]2−
[In4Bi5]3−
[TlSn8]3−
[TlSn9]3−
[Sb@In8Sb12]3−
[Sb@In8Sb12]5−
Synthesis
Zintl ions are typically prepared through one of two methods. The first is a direct reduction route performed at low temperature. In this method, dry ammonia is condensed over a mixture of the two (or more) metals under an inert atmosphere. The reaction initially produces solvated electrons in ammonia that reduce the more electronegative element over the course of the reaction. This reaction can be monitored by a color change from blue (solvated electrons) to the color of the Zintl phase. The second method, performed at higher temperatures, is to dissolve a Zintl phase in liquid ammonia or another polar aprotic solvent such as ethylenediamine (on rare occasions DMF or pyridine is used). Some Zintl ions, such as Si- and Ge-based ions, can only be prepared via this indirect method because they cannot be reduced at low temperatures.
Characterization
The structure of Zintl ions can be confirmed through x-ray crystallography. Corbett has also improved the crystallization of Zintl ions by demonstrating the use of chelating ligands such as cryptands, as cation sequestering agents.
Many of the main group elements have NMR active nuclei, thus NMR experiments are also valuable for gaining structural and electronic information; they can reveal information about the flexibility of clusters. For example, differently charged species can be present in solution because the polyanions are highly reduced and may be oxidized by solvent molecules. NMR experiments have shown a low barrier to change and thus similar energies for different states. NMR is also useful for gaining information about the coupling between individual atoms of the polyanion and with the counter-ion, a coordinated transition metal, or ligand. Nucleus independent chemical shifts can also be an indicator for 3D aromaticity, which causes magnetic shielding at special points.
Additionally, EPR can be used to study paramagnetic clusters, of which there are a number of examples of the [E9]3− type, among others.
Reactivity
As highly reduced species in solution, Zintl ions offer many, often unexpected, reaction possibilities, and their discrete nature positions them as potentially important starting materials in inorganic synthesis.
In solution, individual Zintl ions can react with each other to form oligomers and polymers. In fact, anions with high nuclearity can be viewed as oxidative coupling products of monomers. After oxidation, the clusters may sometimes persist as radicals that can be used as precursors in other reactions. Zintl ions can oxidize without the presence of specific oxidizing agents through solvent molecules or impurities, for example in the presence of cryptand, which is often used to aid crystallization.
Zintl ion clusters can be functionalized with a variety of ligands in a similar reaction to their oligomerization. As such, functionalization competes with those reactions and both can be observed to occur. Organic groups, for example phenyl, TMS, and bromomethane, form exo bonds to the electronegative main group atoms. These ligands can also stabilize high nuclearity clusters, in particular heteroatomic examples.
Similarly in solids, Zintl phases can incorporate hydrogen. Such Zintl phase hydrides can be formed either by direct synthesis from the elements or element hydrides in a hydrogen atmosphere, or by a hydrogenation reaction of a pristine Zintl phase. Since hydrogen has an electronegativity comparable to that of the post-transition metal, it is incorporated as part of the polyanionic spatial structure. There are two structural motifs present. A monatomic hydride can be formed occupying an interstitial site that is coordinated exclusively by cations (interstitial hydride), or it can bind covalently to the polyanion (polyanionic hydride).
The Zintl ion itself can also act as a ligand in transition metal complexes. This reactivity is usually seen in clusters composed of more than 9 atoms, and it is more common for group 15 clusters. A change in geometry often accompanies complexation; however, zero electrons are contributed from the metal to the complex, so the electron count with respect to Wade's rules does not change. In some cases the transition metal caps a face of the cluster. Another mode of reaction is the formation of endohedral complexes, where the metal is encapsulated inside the cluster. These types of complexes lend themselves to comparison with the solid state structure of the corresponding Zintl phase. These reactions tend to be unpredictable and highly dependent on temperature, among other reaction conditions.
Examples
Group 14 anions functionalized with organic groups: [Ge9Mes]3−, [Ge9(CHCHCH2NH2)2]2−, [(CH2CH)Ge9Ge9(CHCH2)]4−, [Ge9(CHCHCHCH)Ge9]6−, [(CH2CH)Ge9(CH)4Ge9(CHCH2)]4−;
Silylated anions: Ge9Hyp3Tl, [Ge9Hyp3]−;
Intermetalloid deltahedral clusters: [Co@Sn9]4−, [Ni@Pb10]2−, [Au@Pb12]3−, [Mn@Pb12]3−, [Rh3@Sn24]5−;
Exo coordinated transition metal complexes: [(η2-Sn9)Hg(η2-Sn9)]6−, [Ge5Ni2(CO)3]2−, [Sn8TiCp]3−, [(tol)NbSn6Nb(tol)]2−;
[Ni5Sb17]4− (Ni4Sb4 ring inside Sb13 bowl).
Electronic structure and bonding
Wade's rules
The geometry and bonding of a Zintl ion cannot be easily described by classical two-center two-electron bonding theories; however, the geometries of Zintl ions can be well described by Wade's rules for boranes. Wade's rules offer an alternative model for the relationship between geometry and electron count in delocalized, electron-deficient systems. The rules were developed to predict the geometries of boranes from the number of electrons and can be applied to these polyanions by replacing the BH unit with a main group atom bearing an exo lone pair. Some unique clusters of Ge occur in non-deltahedral shapes that cannot be described by Wade's rules. The rules also become more convoluted in intermetallic clusters with transition metals, where the location of the additional electrons must be considered.
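As a minimal sketch of this counting scheme (added for illustration; it assumes each bare vertex atom of an E_n^q− cluster keeps one exo lone pair, so a vertex with v valence electrons contributes v − 2 skeletal electrons, and it uses the usual 2n+2/2n+4/2n+6 closo/nido/arachno thresholds):

```python
# Wade's-rules skeletal electron counting for bare homoatomic Zintl clusters.
VALENCE = {"Si": 4, "Ge": 4, "Sn": 4, "Pb": 4, "P": 5, "As": 5, "Sb": 5, "Bi": 5}

def classify(atoms, charge):
    """atoms: list of element symbols; charge: magnitude of the negative charge."""
    n = len(atoms)
    # each vertex contributes (v - 2) skeletal electrons; the charge supplies the rest
    skeletal = sum(VALENCE[a] - 2 for a in atoms) + charge
    for label, count in (("closo", 2*n + 2), ("nido", 2*n + 4), ("arachno", 2*n + 6)):
        if skeletal == count:
            return label, skeletal
    return "other", skeletal

print(classify(["Sn"] * 5, 2))  # ('closo', 12): [Sn5]2- is a trigonal bipyramid
print(classify(["Sn"] * 9, 4))  # ('nido', 22): [Sn9]4- is a monocapped square antiprism
```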
Zintl-Klemm-Busmann concept
The Zintl-Klemm-Busmann concept describes how in an anionic cluster, the atoms arrange in typical geometries found for the element to the right of it on the periodic table. So “the anionic lattice is isometric with elemental lattices having the same number of valence electrons.” In this formulation, the average charge on each atom of the cluster can be calculated by:
$\bar{q} = -\frac{n_c e_c}{n_a}$
where $n_a$ is the number of anion atoms, $n_c$ the number of cations, and $e_c$ the number of valence electrons donated by each cation; with VEC the valence electron concentration per anion atom, then:
$\mathrm{VEC} = e_a + \frac{n_c e_c}{n_a}$
where $e_a$ is the number of valence electrons of the anion element itself. The number of bonds per anion predicts structure based on the isoelectronic neighbor. This rule is also referred to as the 8 − N rule and can also be written as:
$b = 8 - \mathrm{VEC}.$
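As a worked illustration (a standard textbook example, not taken from the original text), consider the prototypical Zintl phase NaTl: each Na donates its single valence electron to Tl, so

$\mathrm{VEC}(\mathrm{Tl}) = 3 + \frac{1 \cdot 1}{1} = 4 \quad\text{and}\quad b = 8 - 4 = 4,$

so each Tl− is isoelectronic with a group 14 atom and, as the concept predicts, the Tl− anions form a four-bonded diamond-like network.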
Not all phases follow the Zintl-Klemm-Busmann concept, particularly when there is a high content of either the electronegative or the electropositive element; other exceptions exist as well.
Electronic theory
Wade's rules are successful in describing the geometry of the anionic sublattice of Zintl phases and of Zintl ions, but not their electronic structure. Other 'spherical shell models' use spherical harmonic wave functions for molecular orbitals (analogous to atomic orbitals) to describe the clusters as pseudo-elements. The Jellium model uses a spherical potential from the nuclei to give orbitals with global nodal properties. Again, this formulates the cluster as a 'super atom' with an electron configuration comparable to that of a single atom. The model is best applied to spherically symmetric systems, and two examples for which it works well are the icosahedral [Al13]− and [Sn@Cu12@Sn20]12− clusters. DFT or ab initio molecular orbital calculations similarly treat the clusters as atom-like, correspondingly labeling the orbitals S, P, D, etc. These closed shell configurations have prompted some investigation of 3D aromaticity. This concept was first suggested for fullerenes and corresponds to a 2(N+1)^2 rule in the spherical shell model. An indicator of this phenomenon is a negative nucleus-independent chemical shift (NICS) value at the center of the cluster or at certain additional high-symmetry points.
Use in catalysis and materials science
Some Zintl ions show the ability to activate small molecules. One example from Dehnen and coworkers is the capture of O2 by the intermetallic cluster [Bi9{Ru(cod)}2]3−. Another ruthenium intermetallic cluster, [Ru@Sn9]6−, was used as a precursor to selectively disperse the CO2 hydrogenation catalyst Ru-SnOx onto CeO2, resulting in nearly 100% selectivity for CO rather than methane.
In materials science, Ge94− has been used as a source of Ge in lithium ion batteries, where it can be deposited as a microporous layer of alpha-Ge. The discrete nature of Zintl ions opens the possibility for the bottom-up synthesis of nanostructured semiconductors and the surface modification of solids. The oxidation and polymerization of Zintl ions may also be a source of new materials. For example, polymerization of Ge clusters was used to create a guest-free germanium clathrate, in other words an open-framework form of pure Ge.
See also
Plumbide
Stannide
References
Additional resources
Video of preparation of K4Ge9 (subscription required)
Inorganic compounds
Intermetallics
Cluster chemistry | Zintl phase | Physics,Chemistry,Materials_science | 4,009 |
60,583,371 | https://en.wikipedia.org/wiki/Airport%20privacy | Airport privacy involves the right of personal privacy for passengers when it comes to screening procedures, surveillance, and personal data being stored at airports. This practice intertwines airport security measures and privacy, specifically the advancement of security measures following the 9/11 attacks in the United States and other global terrorist attacks. Several terrorist attacks, such as 9/11, have led airports all over the world to look to the advancement of new technology such as body and baggage screening, detection dogs, facial recognition, and the use of biometrics in electronic passports. Amid the introduction of new technology and security measures in airports and the growing number of travelers, there has been a rise in privacy risks and concerns.
History of airport policies
Before the 9/11 terrorist attacks, the only security measure in place in U.S. airports was the metal detector. A metal detector's ability to detect only metal weapons made it ineffective at detecting nonmetal threats such as liquids, sharp objects, or explosives. After the 9/11 terrorist attacks in the United States, the Transportation Security Administration (TSA) increased security measures throughout airports. Policies were made to prohibit carrying on liquids, sharp objects, and explosives. Airlines instructed passengers to arrive 2 hours before their flight's departure if traveling domestically and 3 hours if traveling internationally. After passing through screening, passengers were selected at random for additional screening, including bag checks. After an incident that involved a passenger carrying a bomb in their shoe, security screeners asked passengers to remove their shoes when passing through checkpoints. In February 2002, the TSA officially took over the responsibility for airport security. In 2009, airport security measures were once again shaken when a passenger, now commonly known as the "underwear bomber," smuggled a bomb into the airport facility in his underwear. Before these terrorist attacks, only 5 percent of bags were screened. Following these attacks, all bags are now subject to screening.
In 2008, the European Union considered the use of full body scanners to overcome the inability of metal detectors to detect nonmetal weapons, as well as the challenges of pat-downs. The European Commission came to the consensus that passengers must have the option to decline body scanning.
Technology and privacy
Body screening
Screening technology has advanced to detect any harmful materials under a traveler's clothes and also detect any harmful materials that may have been consumed internally. Full body scanners, or Advanced Imaging Technology (AIT), were introduced to U.S. airports in 2006. Two types of body screening currently being used at airports internationally are backscatter and millimeter wave scanners. Backscatter scanners use a narrow, low-intensity X-ray beam swept at high speed to portray a digital image of an individual's body. Millimeter wave scanners use millimeter waves to create a 3-D image based on the energy reflected from the individual's body.
In June 2010, the TSA commissioner's report required that screening follow a framework ensuring fundamental rights and health provisions for travelers. Members of particular groups, including disabled people, transgender people, older people, children, women and religious groups, have experienced additional negative effects on privacy. On April 15, 2010, a letter from the TSA stated that the TSA had in its possession about 2,000 body-scan photos from devices that it had claimed were unable to store data or images.
Alternative security measures are offered to travelers who wish to opt out of body screening. The United States and European Union allow a traveler the option of refusing screening procedures and instead going through a pat-down. However, if a traveler refuses the pat-down, then they are refused entry to the secure area of the airport terminal.
Baggage screening
Baggage screening of all bags entering the airport was not implemented until after the 9/11 terrorist attacks. Carry-on bags typically go through two stages of inspection: an X-ray screening of the contents of the bag and a manual inspection by a Security Screening Operator (SSO). A manual inspection is only necessary if an SSO has doubts about the contents of a bag. After security checkpoints, carry-on bags can be inspected further through random searches, and checked bags are screened for explosives or other dangerous items before being sent out to a passenger's respective flight.
Sniffer dogs/detection dogs
Detection dogs are utilized throughout airports, particularly in baggage claim sections. The role of a detection dog is to prevent any substance, such as illegal drugs or explosives, from entering or leaving the facility. The accuracy and physical capacity of these dogs have raised concerns and have led to the creation of sniffer devices. Similarly to dogs, these devices, also known as "chemical sniffers" or "electronic noses", are used to detect any trace of drugs or explosives. Odor, although not commonly thought to be connected with an individual's privacy, is unique to every individual. Every individual has a characteristic odor known as the “body odor signature” which can be used to identify gender, sexual orientation, health, diet, and so on. It can reveal emotions like fear and happiness and is viewed as a biological footprint.
In Lopez Ostra v Spain, the European Court of Human Rights ruled that odor does in fact have an effect on privacy and human rights.
Camera surveillance and the future of facial recognition
Surveillance cameras are placed strategically around airports to ensure the safety of everyone. An increase in camera surveillance, in turn, means an increased amount of personal data being stored. A human machine interface is controlled by a person who operates the surveillance system to assess a situation. The operator is in control of the cameras and determines where the person will appear on the next camera. Facial recognition is an emerging technology measure for airport security and has made its way into camera surveillance. A study done at the Palm Beach airport showed that the false alarm rate of face recognition surveillance was fairly low, with a success rate of almost 50% when it came to matching. The use of facial recognition during the Super Bowl brought up concerns in connection with the Fourth Amendment.
Facial profiling
In 2003, in an effort to improve the detection of terrorist threats, the TSA introduced the Screening Passengers by Observation Technique (SPOT). SPOT is a behavioral recognition system that looks at the way people conduct themselves through facial expressions and body movement. TSA behavior detection officers are stationed at airport security checkpoints and keep an eye out for behavior from travelers that may give off any suspicion of malice. SPOT has been highly critiqued for the times it has misidentified a threat, causing intrusive searches of travelers based on little more than a TSA official's hunch.
Biometric systems
Biometrics are human characteristics that are unique to every individual and usually do not change, such as fingerprints, speech, and facial features. Electronic gates, also known as e-gates, have become very common in airports because of their ability to verify travelers based on their biometric information. There are two types of privacy concerns when discussing e-gates: one is general privacy and the other is system-specific privacy. System-specific privacy focuses on beliefs regarding the capability of the system to protect privacy. Europe was the first to introduce e-gates in its airport facilities, and the future of airport technology for the United States points toward e-gates.
In the United States, there are two registered travel programs: the Registered Traveler (RT) system and the Global Entry (GE) system. These programs are designed to expedite immigration procedures for trusted travelers and are primarily based on fingerprint recognition.
Legality and precedence
The Fourth Amendment
The Fourth Amendment prohibits unreasonable searches and seizures while also ensuring the protection of an individual's privacy. The relationship between airport security measures such as screening and pat-downs and the Fourth Amendment has sparked a controversial debate. As security measures heightened following the 9/11 attacks, many travelers have voiced the opinion that these new measures violate the Fourth Amendment. However, airport officials have responded by claiming that screening measures and pat-downs do not violate the Fourth Amendment, because these procedures can be made a condition of purchasing airline tickets. In addition, the Fourth Amendment does not create an absolute right to privacy against all intrusive searches; instead, the constitutionality of a search is largely based on reasonableness and security.
The Privacy Act of 1974 protects personal information when it is being processed by the federal government.
The following cases provide examples to court rulings on intrusive searches:
United States v. Montoya de Hernandez
United States v. Montoya de Hernandez (1985) ruled that an individual can be subject to an intrusive search, such as a body cavity search, if authorities see it fit for the safety of others or themselves. This ruling coincides with the fact that a person is subject to additional screening at any point throughout the airport if airport authorities feel it is necessary or suspect a risk to safety.
Illinois v. Caballes and United States v. Place
The US Supreme Court ruled in Illinois v. Caballes (2005) and United States v. Place (1983) that warrantless searches such as dog sniffs can be conducted on an individual without the need for suspicion. In airports, detection dogs conduct searches on passengers throughout the facility by sniffing passengers' baggage at baggage claim sections.
United States v. Guapi
United States v. Guapi (1998) ruled that the police did not effectively communicate to the suspect that the search was optional. This type of issue occurs regularly in airports when a traveler is unaware of their privacy rights when it comes to unreasonable searches or the alternative search methods available.
Texas and the Tenth Amendment
Following the implementation of enhanced pat-downs in airports, the state of Texas challenged federal power by introducing two bills in the state legislature that would criminalize TSA officials conducting these pat-downs on travelers. Texas argued that the Tenth Amendment, which reserves all remaining power not delegated to the federal government to the states and the people, allowed it to use police powers to protect citizens within the state. However, in 2011, U.S. Department of Justice attorney Murphy used the Supremacy Clause to argue that airport security is part of the federal domain and cannot be controlled or changed by state laws.
See also
Airport Security
September 11 Attacks
Epassport
Biometrics
Full Body Scanner
Automated Border Control System
Detection Dog
Homeland Security
References
Privacy
Information privacy | Airport privacy | Engineering | 2,098 |
767,350 | https://en.wikipedia.org/wiki/Nitrophosphate%20process | The nitrophosphate process (also known as the Odda process) is a method for the industrial production of nitrogen fertilizers invented by Erling Johnson in the municipality of Odda, Norway around 1927.
The process involves acidifying phosphate rock with dilute nitric acid to produce a mixture of phosphoric acid and calcium nitrate.
Ca5(PO4)3OH + 10 HNO3 -> 3 H3PO4 + 5 Ca(NO3)2 + H2O
The mixture is cooled to below 0 °C, where the calcium nitrate crystallizes and can be separated from the phosphoric acid.
2 H3PO4 + 3 Ca(NO3)2 + 12 H2O -> 2 H3PO4 + 3 Ca(NO3)2.4H2O
The resulting calcium nitrate produces nitrogen fertilizer. The filtrate is composed mainly of phosphoric acid with some nitric acid and traces of calcium nitrate, and this is neutralized with ammonia to produce a compound fertilizer.
Ca(NO3)2 + 4 H3PO4 + 8 NH3 -> CaHPO4 + 2 NH4NO3 + 3(NH4)2HPO4
If potassium chloride or potassium sulfate is added, the result will be NPK fertilizer. The process was an innovation in that it required no expensive sulfuric acid and produced no gypsum waste (known in the context of phosphate production as phosphogypsum).
The calcium nitrate separated earlier can, as noted, be worked up as calcium nitrate fertilizer, but it is often converted into ammonium nitrate and calcium carbonate using carbon dioxide and ammonia.
Ca(NO3)2 + 2 NH3 + CO2 + H2O -> 2 NH4NO3 + CaCO3
Both products can be worked up together as straight nitrogen fertilizer.
Although Johnson created the process while working for the Odda Smelteverk, his company never employed it. Instead, it licensed the process to Norsk Hydro, BASF, Hoechst, and DSM. Each of these companies used the process, introduced variations, and licensed it to other companies. Today, only a few companies (e.g. Yara (Norsk Hydro), Acron, EuroChem, Borealis Agrolinz Melamine GmbH, Omnia, GNFC) still use the Odda process. Due to the alterations of the process by the various companies who employed it, the process is now generally referred to as the nitrophosphate process.
References
Chemical processes
Norwegian inventions | Nitrophosphate process | Chemistry | 540 |
40,835,006 | https://en.wikipedia.org/wiki/Invalid%20science | Invalid science consists of scientific claims based on experiments that cannot be reproduced or that are contradicted by experiments that can be reproduced. Recent analyses indicate that the proportion of retracted claims in the scientific literature is steadily increasing. The number of retractions has grown tenfold over the past decade, but they still make up approximately 0.2% of the 1.4m papers published annually in scholarly journals.
The U.S. Office of Research Integrity (ORI) investigates scientific misconduct.
Incidence
Science magazine ranked first for the number of articles retracted at 70, just edging out PNAS, which retracted 69. 32 of Science's retractions were due to fraud or suspected fraud, and 37 to error. A subsequent "retraction index" indicated that journals with relatively high impact factors, such as Science, Nature and Cell, had a higher rate of retractions. Of the more than 25 million papers in PubMed going back to the 1940s, under 0.1% had been retracted.
The fraction of retracted papers due to scientific misconduct was estimated at two-thirds, according to studies of 2,047 papers published since 1977. Misconduct included fraud and plagiarism. Another one-fifth were retracted because of mistakes, and the rest were pulled for unknown or other reasons.
A separate study analyzed 432 claims of genetic links for various health risks that vary between men and women. Only one of these claims proved to be consistently reproducible. Another meta-review found that of the 49 most-cited clinical research studies published between 1990 and 2003, more than 40 percent were later shown to be either totally wrong or significantly incorrect.
Biological sciences
In 2012 biotech firm Amgen was able to reproduce just six of 53 important studies in cancer research. Earlier, a group at Bayer, a drug company, successfully repeated only one fourth of 67 important papers. From 2000 to 2010, roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.
Paleontology
Nathan Myhrvold failed repeatedly to replicate the findings of several papers on dinosaur growth. Dinosaurs added a layer to their bones each year. Tyrannosaurus rex was thought to have increased in size by more than 700 kg a year, until Myhrvold showed that this was a factor of 2 too large. In 4 of 12 papers he examined, the original data had been lost. In three, the statistics were correct, while three had serious errors that invalidated their conclusions. Two papers mistakenly relied on data from these three. He discovered that some of the papers' graphs did not reflect the data. In one case, he found that only four of nine points on the graph came from data cited in the paper.
Major retractions
Torcetrapib was originally hyped as a drug that could block a protein that converts HDL cholesterol into LDL with the potential to "redefine cardiovascular treatment". One clinical trial showed that the drug could increase HDL and decrease LDL. Two days after Pfizer announced its plans for the drug, it ended the Phase III clinical trial due to higher rates of chest pain and heart failure and a 60 percent increase in overall mortality. Pfizer had invested more than $1 billion in developing the drug.
An in-depth review of the most highly cited biomarkers (whose presence are used to infer illness and measure treatment effects) claimed that 83 percent of supposed correlations became significantly weaker in subsequent studies. Homocysteine is an amino acid whose levels correlated with heart disease. However, a 2010 study showed that lowering homocysteine by nearly 30 percent had no effect on heart attack or stroke.
Priming
Priming studies claim that decisions can be influenced by apparently irrelevant events that a subject witnesses just before making a choice. Nobel Prize-winner Daniel Kahneman alleges that much of it is poorly founded. Researchers have been unable to replicate some of the more widely cited examples. A paper in PLoS ONE reported that nine separate experiments could not reproduce a study purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan. A further systematic replication involving 40 different labs around the world did not replicate the main finding. However, this latter systematic replication showed that participants who did not think there was a relation between thinking about a hooligan or a professor were significantly more susceptible to the priming manipulation.
Potential causes
Competition
In the 1950s, when academic research accelerated during the Cold War, the total number of scientists was a few hundred thousand. In the new century, 6–7 million researchers are active. The number of research jobs has not matched this increase. Every year six new PhDs compete for every academic post. Replicating other researchers' results is not perceived to be valuable. The struggle to compete encourages exaggeration of findings and biased data selection. A recent survey found that one in three researchers knows of a colleague who has at least somewhat distorted their results.
Publication bias
Major journals reject in excess of 90% of submitted manuscripts and tend to favor the most dramatic claims. The statistical measures that researchers use to test their claims allow a fraction of false claims to appear valid. Invalid claims are more likely to be dramatic (because they are false). Without replication, such errors are less likely to be caught.
Conversely, failures to prove a hypothesis are rarely even offered for publication. "Negative results" now account for only 14% of published papers, down from 30% in 1990. Knowledge of what is not true is as important as of what is true.
Peer review
Peer review is the primary validation technique employed by scientific publications. However, a prominent medical journal tested the system and found major failings. It supplied research with induced errors and found that most reviewers failed to spot the mistakes, even after being told of the tests.
A pseudonymous fabricated paper on the effects of a chemical derived from lichen on cancer cells was submitted to 304 journals for peer review. The paper was filled with errors of study design, analysis and interpretation. 157 lower-rated journals accepted it. Another study sent an article containing eight deliberate mistakes in study design, analysis and interpretation to more than 200 of the British Medical Journal's regular reviewers. On average, they reported fewer than two of the problems.
Peer reviewers typically do not re-analyse data from scratch, checking only that the authors’ analysis is properly conceived.
Statistics
Type I and type II errors
Scientists divide errors into type I, incorrectly asserting the truth of a hypothesis (false positive), and type II, rejecting a correct hypothesis (false negative). Statistical checks assess the probability that data which seem to support a hypothesis come about simply by chance. If the probability is less than 5%, the evidence is rated "statistically significant". One definitional consequence is a type I error rate of one in 20.
Statistical power
In 2005 Stanford epidemiologist John Ioannidis showed that the idea that only one paper in 20 gives a false-positive result was incorrect. He claimed, "most published research findings are probably false." He found three categories of problems: insufficient "statistical power" (avoiding type II errors); the unlikeliness of the hypothesis; and publication bias favoring novel claims.
A statistically powerful study can identify factors with only small effects on data. In general, studies that run the experiment more times on more subjects have greater power. A power of 0.8 means that of ten true hypotheses tested, the effects of two are missed. Ioannidis found that in neuroscience the typical statistical power is 0.21; another study found that psychology studies average 0.35.
Unlikeliness is a measure of the degree of surprise in a result. Scientists prefer surprising results, leading them to test hypotheses that range from unlikely to very unlikely. Ioannidis claimed that in epidemiology, some one in ten hypotheses should be true. In exploratory disciplines like genomics, which rely on examining voluminous data about genes and proteins, only one in a thousand should prove correct.
In a discipline in which 100 out of 1,000 hypotheses are true, studies with a power of 0.8 will find 80 and miss 20. Of the 900 incorrect hypotheses, 5% or 45 will be accepted because of type I errors. Adding the 45 false positives to the 80 true positives gives 125 positive results, or 36% specious. Dropping statistical power to 0.4, optimistic for many fields, would still produce 45 false positives but only 40 true positives, less than half.
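This arithmetic condenses into a single expression for the share of positive results that are specious; the short sketch below (added for illustration) simply reproduces the numbers above:

```python
# Fraction of "positive" findings that are false positives, given the number of
# hypotheses tested, how many are actually true, the statistical power, and the
# type I error rate (significance threshold).

def specious_share(n_total, n_true, power, alpha):
    true_positives = n_true * power
    false_positives = (n_total - n_true) * alpha
    return false_positives / (true_positives + false_positives)

print(specious_share(1000, 100, 0.8, 0.05))  # 45 / 125 = 0.36
print(specious_share(1000, 100, 0.4, 0.05))  # 45 / 85 ≈ 0.53, over half specious
```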
Negative results are more reliable. Statistical power of 0.8 produces 875 negative results of which only 20 are false, giving an accuracy of over 97%. Negative results however account for a minority of published results, varying by discipline. A study of 4,600 papers found that the proportion of published negative results dropped from 30% to 14% between 1990 and 2007.
Subatomic physics sets an acceptable false-positive rate of one in 3.5 million (known as the five-sigma standard). However, even this does not provide perfect protection. The problem invalidates some three-quarters of machine learning studies, according to one review.
Statistical significance
Statistical significance is a measure for testing statistical correlation. It was introduced by the English statistician Ronald Fisher in the 1920s. It defines a "significant" result as any data point that would be produced by chance less than 5 (or more stringently, 1) percent of the time. A significant result is widely seen as an important indicator that the correlation is not random.
While correlations track the relationship between truly independent measurements, such as smoking and cancer, they are much less effective when variables cannot be isolated, a common circumstance in biological systems. For example, statistics found a high correlation between lower back pain and abnormalities in spinal discs, although it was later discovered that serious abnormalities were present in two-thirds of pain-free patients.
Minimum threshold publishers
Journals such as PLoS One use a “minimal-threshold” standard, seeking to publish as much science as possible, rather than to pick out the best work. Their peer reviewers assess only whether a paper is methodologically sound. Almost half of their submissions are still rejected on that basis.
Unpublished research
Only 22% of the clinical trials financed by the National Institutes of Health (NIH) released summary results within one year of completion, even though the NIH requires it. Fewer than half published within 30 months; a third remained unpublished after 51 months. When other scientists rely on invalid research, they may waste time on lines of research that are themselves invalid. The failure to report failures means that researchers waste money and effort exploring blind alleys already investigated by other scientists.
Fraud
In 21 surveys of academics (mostly in the biomedical sciences but also in civil engineering, chemistry and economics) carried out between 1987 and 2008, 2% admitted fabricating data, but 28% claimed to know of colleagues who engaged in questionable research practices.
Lack of access to data and software
Clinical trials are generally too costly to rerun. Access to trial data is the only practical approach to reassessment. A campaign to persuade pharmaceutical firms to make all trial data available won its first convert in February 2013 when GlaxoSmithKline became the first to agree.
Software used in a trial is generally considered to be proprietary intellectual property and is not available to replicators, further complicating matters. Journals that insist on data-sharing tend not to do the same for software.
Even well-written papers may not include sufficient detail and/or tacit knowledge (subtle skills and extemporisations not considered notable) for the replication to succeed. One cause of replication failure is insufficient control of the protocol, which can cause disputes between the original and replicating researchers.
Reform
Statistics training
Geneticists have begun more careful reviews, particularly of the use of statistical techniques. The effect was to stop a flood of specious results from genome sequencing.
Protocol registration
Registering research protocols in advance and monitoring them over the course of a study can prevent researchers from modifying the protocol midstream to highlight preferred results. Providing raw data for other researchers to inspect and test can also better hold researchers to account.
Post-publication review
Replacing peer review with post-publication evaluations can encourage researchers to think more about the long-term consequences of excessive or unsubstantiated claims. That system was adopted in physics and mathematics with good results.
Replication
Few researchers, especially junior workers, seek opportunities to replicate others' work, partly to protect relationships with senior researchers.
Reproduction benefits from access to the original study's methods and data. More than half of 238 biomedical papers published in 84 journals failed to identify all the resources (such as chemical reagents) necessary to reproduce the results. In 2008 some 60% of researchers said they would share raw data; in 2013 just 45% did. Journals have begun to demand that at least some raw data be made available, although only 143 of 351 randomly selected papers covered by some data-sharing policy actually complied.
The Reproducibility Initiative is a service allowing life scientists to pay to have their work validated by an independent lab. In October 2013 the initiative received funding to review 50 of the highest-impact cancer findings published between 2010 and 2012. Blog Syn is a website run by graduate students that is dedicated to reproducing chemical reactions reported in papers.
In 2013 replication efforts received greater attention. Nature and related publications introduced an 18-point checklist for life science authors in May, in its effort to ensure that its published research can be reproduced. Expanded "methods" sections and all data were to be available online. The Centre for Open Science opened as an independent laboratory focused on replication. The journal Perspectives on Psychological Science announced a section devoted to replications. Another project announced plans to replicate 100 studies published in the first three months of 2008 in three leading psychology journals.
Major funders, including the European Research Council, the US National Science Foundation and Research Councils UK have not changed their preference for new work over replications.
See also
Metascience (research)
Replication crisis
Reproducibility Project
Retraction Watch
Séralini affair
Statistical correlation
References
External links
Scientific misconduct | Invalid science | Technology | 2,911 |
21,153,971 | https://en.wikipedia.org/wiki/Potassium%20picrate | Potassium picrate, or potassium 2,4,6-trinitrophenolate, is an organic chemical, a picrate of potassium. It is a reddish yellow or green crystalline material. It is a primary explosive. Anhydrous potassium picrate forms orthorhombic crystals.
History
Potassium picrate was first prepared in impure form in the mid 17th century by Johann Rudolf Glauber by dissolving wood in nitric acid and neutralizing with potassium carbonate. It is commonly made by neutralizing picric acid with potassium carbonate. It has been used in industry since the 1860s.
Potassium picrate and picric acid were formerly used in pyrotechnics to produce whistle effects, but since mixes that do not involve primary explosives have been developed, it is no longer used in that industry. Its chief applications were as a component of explosives (with potassium nitrate and charcoal), propellants (with the same substances in the poudre Dessignole of the 1870s French Navy), and in explosive primers (with lead picrate and potassium chlorate).
Description
Potassium picrate is not a very powerful explosive. It is somewhat shock-sensitive. In contact with flame it deflagrates with a loud sound. If ignited in a confined space, it will detonate. It is more sensitive than picric acid.
In contact with metals (e.g. lead, calcium, iron), potassium picrate, like ammonium picrate and picric acid, forms picrates of said metals. These are often more dangerous and more sensitive explosives. Contact with such materials therefore should be prevented.
Potassium picrate is used to determine the concentration of nonionic surfactants in water; materials detectable by this method are called potassium picrate active substances (PPAS).
Synthesis
As with other picrates, potassium picrate may be produced by the neutralization of picric acid with the corresponding carbonate.
As picric acid is barely soluble in water, the reaction must be done in an appropriate solvent such as methanol.
First dissolving the picric acid in methanol and then adding potassium carbonate yields potassium picrate. Temperature control is important to prevent detonation or excessive methanol evaporation.
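The overall neutralization can be summarized as follows (an illustrative balanced equation, not given in the original text, written in the same arrow style used elsewhere in this document):
2 C6H3N3O7 + K2CO3 -> 2 KC6H2N3O7 + H2O + CO2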
Sensitivity
According to Urbanski, potassium picrate detonated 10% of the time when struck by a 2 kg mass dropped from a height of 21 cm.
By comparison, the more sensitive anhydrous lead picrate detonated 10% of the time when struck by the same mass dropped from a height of 2 cm.
See also
Dunnite
Picric acid
Lead picrate
References
Urbanski, Tadeusz (1964), Chemistry and Technology of Explosives, Volume 1, New York: Pergamon Press.
Picrates
Potassium compounds
Explosive chemicals | Potassium picrate | Chemistry | 571 |
52,200,124 | https://en.wikipedia.org/wiki/NGC%20342 | NGC 342 is a lenticular galaxy in the constellation Cetus. It was discovered on September 27, 1864 by Albert Marth. It was described by Dreyer as "very faint, very small."
References
External links
0342
18640927
Cetus
Lenticular galaxies
003631 | NGC 342 | Astronomy | 61 |
64,064,108 | https://en.wikipedia.org/wiki/ITF-6 | ITF-6 is an implementation of the Interleaved 2 of 5 (ITF) barcode used to encode an add-on to ITF-14 and ITF-16 barcodes. It was originally developed as part of a JIS specification for physical distribution centers. Unlike ITF-14, it was not standardized by an ISO committee, but it is widely used to encode data supplementing a Global Trade Item Number, such as item quantity or container weight.
History
In 1983, the Logistics Symbol Committee proposed the Interleaved 2 of 5 barcode as a method to improve the JAN code. In 1985, a logistics symbol JIS drafting committee was set up at the Distribution System Development Center, and final examination toward JIS standardization began. Then, in 1987, it was standardized as JIS-X-0502, the standard physical distribution barcode symbology ITF-14/16/6.
The ITF barcode has an add-on version for displaying the weight, etc.: a 5-digit numerical value and a 6th check character can be encoded as ITF-6 after ITF-14 or ITF-16 (obsolete since 2010).
Currently, ITF-6 is not standardized by an ISO committee and exists only as part of JIS standards. However, it is widely used by manufacturers to encode additional data and is supported by a wide range of barcode scanners.
Uses
Although the ITF-6 barcode is not included in ISO standards, it is widely used as an add-on to encode the quantity of items in a package or the item weight. At this time, it is used only with ITF-14 (Global Trade Item Number), but up to 2010 it was also used with ITF-16 (Extended Symbology for Physical Distribution), which was standardized only in Japan.
From the left, ITF-6 contains 5 significant digits; the last digit is a check digit, which is calculated the same way as UPC checksums. If a decimal point is required, it is placed between the 3rd and 4th digits:
NNNNN(C/D) - without decimal point;
NNN.NN(C/D) - with decimal point.
ITF-6 is supported by various barcode generating software and barcode scanners.
Checksum
The checksum is calculated like other UPC checksums, weighting the digits alternately by 3 and 1 from the left:
Example for the first 5 digits 12345:
10 - ((3*1 + 2 + 3*3 + 4 + 3*5) mod 10) = 7. Check digit is 7.
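A minimal sketch of this calculation (assuming, as in the worked example above, weights of 3 on the 1st, 3rd and 5th digits and 1 on the others, counted from the left):

```python
def itf6_check_digit(digits: str) -> int:
    """digits: the 5 significant digits, e.g. '12345'."""
    # odd positions (0-indexed even) are weighted by 3, even positions by 1
    total = sum((3 if i % 2 == 0 else 1) * int(d) for i, d in enumerate(digits))
    return (10 - total % 10) % 10  # the modulo keeps a multiple of 10 at 0, not 10

print(itf6_check_digit("12345"))  # 7, so the full symbol encodes 123457
```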
See also
Automated identification and data capture (AIDC)
Barcode
Global Trade Item Number
ITF-14
Interleaved 2 of 5
Japanese Industrial Standards
References
External links
Free ITF-6 generator
JISX0502:1994 (in Japanese)
Standard Symbology for Physical Distribution
Automatic identification and data capture
Barcodes
Encodings | ITF-6 | Technology | 571 |
16,977,319 | https://en.wikipedia.org/wiki/Absolute%20molar%20mass | Absolute molar mass determination is a process used to characterize molecules by measuring their molar mass directly, without reference to calibration standards.
History
The first absolute measurements of molecular weights (i.e. made without reference to standards) were based on fundamental physical characteristics and their relation to the molar mass. The most useful of these were membrane osmometry and sedimentation.
Another absolute instrumental approach was also possible with the development of light scattering theory by Albert Einstein, Chandrasekhara Venkata Raman, Peter Debye, Bruno H. Zimm, and others. The problem with measurements made using membrane osmometry and sedimentation was that they only characterized the bulk properties of the polymer sample. Moreover, the measurements were excessively time consuming and prone to operator error. In order to gain information about a polydisperse mixture of molar masses, a method for separating the different sizes was developed. This was achieved by the advent of size exclusion chromatography (SEC). SEC is based on the fact that the pores in the packing material of chromatography columns could be made small enough for molecules to become temporarily lodged in their interstitial spaces. As the sample makes its way through a column, the smaller molecules spend more time traveling in these void spaces than the larger ones, which have fewer places to "wander". The result is that a sample is separated according to its hydrodynamic volume (Vh). As a consequence, the big molecules come out first, and then the small ones follow in the eluent. By choosing a suitable column packing material it is possible to define the resolution of the system. Columns can also be combined in series to increase resolution or the range of sizes studied.
The next step is to convert the time at which the samples eluted into a measurement of molar mass. This is possible because, if the molar mass of a standard is known, the time at which this standard elutes should correspond to a specific molar mass. Using multiple standards, a calibration curve of time versus molar mass can be developed. This is significant for polymer analysis because a single polymer sample can contain many different components, whose complexity and distribution affect the physical properties. However, this technique has shortcomings. For example, unknown samples are always measured in relation to known standards, and these standards may or may not have similarities to the sample of interest. The measurements made by SEC are then mathematically converted into data similar to that found by the existing techniques.
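As a minimal sketch of such a conventional calibration (the standards, volumes, and masses below are hypothetical, chosen only to illustrate the fit-and-interpolate procedure):

```python
import numpy as np

elution_volume = np.array([10.0, 12.0, 14.0, 16.0])   # mL, hypothetical standards
molar_mass = np.array([1e6, 1e5, 1e4, 1e3])           # g/mol of those standards

# Fit log10(molar mass) against elution volume (a straight line for this column).
slope, intercept = np.polyfit(elution_volume, np.log10(molar_mass), 1)

def mass_from_volume(v):
    """Molar mass of an unknown eluting at volume v, read off the calibration line."""
    return 10 ** (slope * v + intercept)

print(mass_from_volume(13.0))  # ~3.2e4 g/mol on this hypothetical column
```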
The problem was that the system was calibrated according to the Vh characteristics of polymer standards that are not directly related to the molar mass. If the relationship between the molar mass and Vh of the standard is not the same as that of the unknown sample, then the calibration is invalid. Thus, to be accurate, the calibration must use the same polymer, of the same conformation, in the same eluent and have the same interaction with the solvent as the hydration layer changes Vh.
Benoit et al. showed that taking the hydrodynamic volume into account would solve the problem. In his publication, Benoit showed that all synthetic polymers elute on the same curve when the logarithm of the intrinsic viscosity multiplied by the molar mass (a quantity proportional to the hydrodynamic volume) is plotted against elution volume. This is the basis of universal calibration, which requires a viscometer to measure the intrinsic viscosity of the polymers. Universal calibration was shown to work for branched polymers and copolymers as well as starburst polymers.
For good chromatography, there must be no interaction with the column other than that produced by size. As the demands on polymer properties increased, the necessity of getting absolute information on the molar mass and size also increased. This was especially important in pharmaceutical applications where slight changes in molar mass (e.g. aggregation) or shape may result in different biological activity. These changes can actually have a harmful effect instead of a beneficial one.
To obtain molar mass, light scattering instruments need to measure the intensity of light scattered at zero angle. This is impractical, as the laser source would outshine the light scattered at zero angle. The two alternatives are to measure very close to zero angle, or to measure at many angles and extrapolate to zero angle using a model (Rayleigh, Rayleigh–Gans–Debye, Berry, Mie, etc.).
Traditional light scattering instruments worked by taking readings from multiple angles, each being measured in series. A low angle light scattering system was developed in the early 1970s that allowed a single measurement to be used to calculate the molar mass. Although measurements at low angles are better for fundamental physical reasons (molecules tend to scatter more light in lower angle directions than in higher angles), low angle scattering events caused by dust and contamination of the mobile phase easily overwhelm the scattering from the molecules of interest. When the low-angle laser light scattering (LALLS) became popular in the 1970s and mid-1980s, good quality disposable filters were not readily available and hence multi-angle measurements gained favour.
Multi-angle light scattering was invented in the mid-1980s, and such instruments were able to make measurements at different angles simultaneously, but it was not until the late 1980s that the connection of multi-angle laser light scattering (MALS) detectors to SEC systems became a practical proposition, enabling both molar mass and size to be determined for each slice of the polymer fraction.
Applications
Light scattering measurements can be applied to synthetic polymers, proteins, pharmaceuticals and particles such as liposomes, micelles, and encapsulated proteins. Measurements can be made in one of two modes: unfractionated (batch mode) or continuous flow mode (with SEC, HPLC or any other flow fractionation method). Batch mode experiments can be performed either by injecting a sample into a flow cell with a syringe or with the use of discrete vials. These measurements are most often used to measure timed events like antibody-antigen reactions or protein assembly. Batch mode measurements can also be used to determine the second virial coefficient (A2), a value that gives a measure of the likelihood of crystallization or aggregation in a given solvent. Continuous flow experiments can be used to study material eluting from virtually any source. More conventionally, the detectors are coupled to a variety of different chromatographic separation systems. The ability to determine the mass and size of the materials eluting then combines the advantage of the separation system with an absolute measurement of the mass and size of the species eluting.
The addition of an SLS detector coupled downstream of a chromatographic system allows the utility of SEC or a similar separation to be combined with the advantage of an absolute detection method. The light scattering signal depends only on the product of molar mass and concentration of the eluting species; the elution time is irrelevant, and the separation can be changed for different samples without recalibration. In addition, a non-size-based separation method such as HPLC or IC can also be used.
As the light scattering detector is mass dependent, it becomes more sensitive as the molar mass increases. Thus it is an excellent tool for detecting aggregation. The higher the aggregation number, the more sensitive the detector becomes.
Low-angle (laser)-light scattering (LALS) method
LALS measures at a very low angle, where the scattering vector is almost zero. LALS does not need any model to fit the angular dependence and hence gives more reliable molecular weight measurements for large molecules. LALS alone, however, gives no indication of the root mean square radius.
Multi-angle (laser)-light scattering (MALS) method
MALS measurements work by calculating the amount of light scattered at each angle detected. The calculation is based on the intensity of light measured and the quantum efficiency of each detector. A model is then used to extrapolate the intensity of light scattered to zero angle, which is related to the molar mass.
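In the Rayleigh–Gans–Debye regime this extrapolation is commonly based on the standard light-scattering relation (a textbook form, added here for clarity rather than quoted from the original):

$\frac{Kc}{R(\theta)} = \frac{1}{M_w P(\theta)} + 2A_2 c, \qquad K = \frac{4\pi^2 n_0^2 (dn/dc)^2}{N_A \lambda_0^4},$

where $R(\theta)$ is the excess Rayleigh ratio, $c$ the concentration, $M_w$ the weight-average molar mass, $P(\theta)$ the form factor (with $P(0) = 1$), $A_2$ the second virial coefficient, $n_0$ the solvent refractive index, $dn/dc$ the refractive index increment, $N_A$ Avogadro's number, and $\lambda_0$ the vacuum wavelength.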
As previously noted, the MALS detector can also provide information about the size of the molecule, namely its root mean square radius (RMS radius or Rg). This is different from the Rh mentioned above, which takes the hydration layer into account. The purely mathematical root mean square radius is a mass-weighted average over the radii of the elements making up the molecule.
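Written out (a standard definition, added for clarity), for a molecule made of elements of mass $m_i$ at distance $r_i$ from its center of mass:

$R_g^2 = \frac{\sum_i m_i r_i^2}{\sum_i m_i}.$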
Bibliography
A. Einstein, Ann. Phys. 33 (1910), 1275
C.V. Raman, Indian J. Phys. 2 (1927), 1
P.Debye, J. Appl. Phys. 15 (1944), 338
B.H. Zimm, J. Chem. Phys. 13 (1945), 141
B.H. Zimm, J. Chem. Phys. 16 (1948), 1093
B.H. Zimm, R.S. Stein and P. Doty, Pol. Bull. 1 (1945), 90
M. Fixman, J. Chem. Phys. 23 (1955), 2074
A.C. Ouano and W. Kaye J. Poly. Sci. A1(12) (1974), 1151
Z. Grubisic, P. Rempp, and H. Benoit, J. Polym. Sci., 5 (1967), 753
Flow Through MALS detector, DLS 800, Science Spectrum Inc.
P.J. Wyatt, C. Jackson and G.K. Wyatt Am. Lab 20(6) (1988), 86
P.J. Wyatt, D. L. Hicks, C. Jackson and G.K. Wyatt Am. Lab. 20(6) (1988), 106
C. Jackson, L.M. Nilsson and P.J. Wyatt J. Appl. Poly. Sci. 43 (1989), 99
Chemical properties
Mass | Absolute molar mass | Physics,Chemistry,Mathematics | 2,034 |
70,805,681 | https://en.wikipedia.org/wiki/Fudania%20jinshanensis | Fudania jinshanensis is a gram-positive species of bacteria from the family Actinomycetaceae, which has been isolated from the faeces of an antelope (Pantholops hodgsonii).
References
Actinomycetales
Bacteria described in 2019
Monotypic bacteria genera | Fudania jinshanensis | Biology | 63 |
276,174 | https://en.wikipedia.org/wiki/Time-scale%20calculus | In mathematics, time-scale calculus is a unification of the theory of difference equations with that of differential equations, unifying integral and differential calculus with the calculus of finite differences, offering a formalism for studying hybrid systems. It has applications in any field that requires simultaneous modelling of discrete and continuous data. It gives a new definition of a derivative such that if one differentiates a function defined on the real numbers then the definition is equivalent to standard differentiation, but if one uses a function defined on the integers then it is equivalent to the forward difference operator.
History
Time-scale calculus was introduced in 1988 by the German mathematician Stefan Hilger. However, similar ideas have been used before and go back at least to the introduction of the Riemann–Stieltjes integral, which unifies sums and integrals.
Dynamic equations
Many results concerning differential equations carry over quite easily to corresponding results for difference equations, while other results seem to be completely different from their continuous counterparts. The study of dynamic equations on time scales reveals such discrepancies, and helps avoid proving results twice—once for differential equations and once again for difference equations. The general idea is to prove a result for a dynamic equation where the domain of the unknown function is a so-called time scale (also known as a time-set), which may be an arbitrary closed subset of the reals. In this way, results apply not only to the set of real numbers or set of integers but to more general time scales such as a Cantor set.
The three most popular examples of calculus on time scales are differential calculus, difference calculus, and quantum calculus. Dynamic equations on a time scale have a potential for applications such as in population dynamics. For example, they can model insect populations that evolve continuously while in season, die out in winter while their eggs are incubating or dormant, and then hatch in a new season, giving rise to a non-overlapping population.
Formal definitions
A time scale (or measure chain) is a closed subset of the real line $\mathbb{R}$. The common notation for a general time scale is $\mathbb{T}$.
The two most commonly encountered examples of time scales are the real numbers $\mathbb{R}$ and the discrete time scale $h\mathbb{Z}$.
A single point in a time scale is defined as $t : t \in \mathbb{T}$.
Operations on time scales
The forward jump and backward jump operators represent the closest point in the time scale on the right and left of a given point $t$, respectively. Formally:
$\sigma(t) = \inf\{s \in \mathbb{T} : s > t\}$ (forward shift/jump operator)
$\rho(t) = \sup\{s \in \mathbb{T} : s < t\}$ (backward shift/jump operator)
The graininess $\mu$ is the distance from a point to the closest point on the right and is given by:
$\mu(t) = \sigma(t) - t.$
For a right-dense $t$, $\sigma(t) = t$ and $\mu(t) = 0$.
For a left-dense $t$, $\rho(t) = t$.
Classification of points
For any $t \in \mathbb{T}$, $t$ is:
left dense if $\rho(t) = t$
right dense if $\sigma(t) = t$
left scattered if $\rho(t) < t$
right scattered if $\sigma(t) > t$
dense if both left dense and right dense
isolated if both left scattered and right scattered
Continuity
Continuity of a time scale is redefined as equivalent to density. A time scale is said to be right-continuous at point $t$ if it is right dense at point $t$. Similarly, a time scale is said to be left-continuous at point $t$ if it is left dense at point $t$.
Derivative
Take a function $f : \mathbb{T} \to \mathbb{R}$ (where $\mathbb{R}$ could be any Banach space, but is set to the real line for simplicity).
Definition: The delta derivative (also Hilger derivative) $f^{\Delta}(t)$ exists if and only if:
For every $\varepsilon > 0$ there exists a neighborhood $U$ of $t$ such that:
$|f(\sigma(t)) - f(s) - f^{\Delta}(t)(\sigma(t) - s)| \le \varepsilon |\sigma(t) - s|$ for all $s$ in $U$.
Take $\mathbb{T} = \mathbb{R}$. Then $\sigma(t) = t$, $\mu(t) = 0$, and $f^{\Delta} = f'$ is the derivative used in standard calculus. If $\mathbb{T} = \mathbb{Z}$ (the integers), then $\sigma(t) = t + 1$, $\mu(t) = 1$, and $f^{\Delta} = \Delta f$ is the forward difference operator used in difference equations.
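As a minimal numerical sketch (illustrative only; the finite list standing in for a time scale and the function below are hypothetical), the jump operator, graininess, and delta derivative can be implemented directly; at a right-scattered point the delta derivative reduces to the forward difference quotient:

```python
def sigma(T, t):
    """Forward jump operator: closest point of the time scale T strictly right of t."""
    right = [s for s in T if s > t]
    return min(right) if right else t  # convention: sigma(max T) = max T

def mu(T, t):
    """Graininess: distance from t to the closest point on its right."""
    return sigma(T, t) - t

def delta_derivative(T, f, t):
    """Delta derivative at a right-scattered point t (where mu(t) > 0)."""
    return (f(sigma(T, t)) - f(t)) / mu(T, t)

T = list(range(10))               # the time scale Z, restricted to 0..9
f = lambda t: t ** 2
print(delta_derivative(T, f, 3))  # (4**2 - 3**2) / 1 = 7 = 2t + 1, the forward difference
```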
Integration
The delta integral is defined as the antiderivative with respect to the delta derivative. If $f$ has a continuous derivative $f^{\Delta}(t)$, one sets
$\int_r^s f^{\Delta}(t)\,\Delta t = f(s) - f(r).$
Laplace transform and z-transform
A Laplace transform can be defined for functions on time scales, which uses the same table of transforms for any arbitrary time scale. This transform can be used to solve dynamic equations on time scales. If the time scale is the non-negative integers then the transform is equal to a modified Z-transform:
$\mathcal{Z}'\{x[z]\} = \frac{\mathcal{Z}\{x[z+1]\}}{z+1}$
Partial differentiation
Partial differential equations and partial difference equations are unified as partial dynamic equations on time scales.
Multiple integration
Multiple integration on time scales is treated in Bohner (2005).
Stochastic dynamic equations on time scales
Stochastic differential equations and stochastic difference equations can be generalized to stochastic dynamic equations on time scales.
Measure theory on time scales
Associated with every time scale is a natural measure defined via
$\mu^{\Delta}(A) = \lambda(\rho^{-1}(A)),$
where $\lambda$ denotes Lebesgue measure and $\rho$ is the backward shift operator defined on $\mathbb{R}$. The delta integral turns out to be the usual Lebesgue–Stieltjes integral with respect to this measure
$\int_r^s f(t)\,\Delta t = \int_{[r,s)} f(t)\,d\mu^{\Delta}(t)$
and the delta derivative turns out to be the Radon–Nikodym derivative with respect to this measure
$f^{\Delta}(t) = \frac{df}{d\mu^{\Delta}}(t).$
Distributions on time scales
The Dirac delta and Kronecker delta are unified on time scales as the Hilger delta:
$\delta_a^{\mathbb{H}}(t) = \begin{cases} \frac{1}{\mu(a)}, & t = a \\ 0, & t \neq a \end{cases}$
Fractional calculus on time scales
Fractional calculus on time scales is treated in Bastos, Mozyrska, and Torres.
See also
Analysis on fractals for dynamic equations on a Cantor set.
Multiple-scale analysis
Method of averaging
Krylov–Bogoliubov averaging method
References
Further reading
Dynamic Equations on Time Scales Special issue of Journal of Computational and Applied Mathematics (2002)
Dynamic Equations And Applications Special Issue of Advances in Difference Equations (2006)
Dynamic Equations on Time Scales: Qualitative Analysis and Applications Special issue of Nonlinear Dynamics And Systems Theory (2009)
External links
The Baylor University Time Scales Group
Timescalewiki.org
Dynamical systems
Calculus
Recurrence relations | Time-scale calculus | Physics,Mathematics | 1,141 |
26,039,201 | https://en.wikipedia.org/wiki/Queries%20per%20second | Queries per second (QPS) is a measure of the amount of search traffic an information-retrieval system, such as a search engine or a database, receives in one second. The term is used more broadly for any request–response system, where it can more correctly be called requests per second (RPS).
High-traffic systems must be mindful of QPS to know when to scale to handle greater load.
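As an illustration of how a service might track this, the following Python sketch (the class name and window choice are ours) estimates the current rate over a sliding one-second window:

import time
from collections import deque

class RateMeter:
    # Rough queries-per-second estimate over a sliding time window.
    def __init__(self, window=1.0):
        self.window = window
        self.stamps = deque()

    def record(self):
        now = time.monotonic()
        self.stamps.append(now)
        # Discard timestamps that have fallen out of the window.
        while self.stamps and now - self.stamps[0] > self.window:
            self.stamps.popleft()

    def qps(self):
        return len(self.stamps) / self.window

A monitoring loop would call record() once per incoming request and compare qps() against a scaling threshold.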
See also
Transactions per second
References
Units of frequency
Information retrieval evaluation | Queries per second | Mathematics,Technology | 100 |
18,865,825 | https://en.wikipedia.org/wiki/Mercury-manganese%20star | A mercury-manganese star (also HgMn star) is a type of chemically peculiar star with a prominent spectral line at 398.4 nm, due to absorption from ionized mercury. These stars are of spectral type B8, B9, or A0, corresponding to surface temperatures between about 10,000 and 15,000 K, with two distinctive characteristics:
An atmospheric excess of elements like phosphorus, manganese, gallium, strontium, yttrium, zirconium, platinum and mercury.
A lack of a strong dipole magnetic field.
Their rotation is relatively slow, and as a consequence their atmosphere is relatively calm. It is thought, but has not been proven, that some types of atoms sink under the force of gravity, while others are lifted towards the exterior of the star by radiation pressure, making a heterogeneous atmosphere.
List
The following table includes the brightest stars in this group.
References
Star types
Mercury (element)
Manganese | Mercury-manganese star | Astronomy | 201 |
25,260,257 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20December%2027%2C%202084 | A total solar eclipse will occur at the Moon's ascending node of orbit on Wednesday, December 27, 2084, with a magnitude of 1.0396. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A total solar eclipse occurs when the Moon's apparent diameter is larger than the Sun's, blocking all direct sunlight, turning day into darkness. Totality occurs in a narrow path across Earth's surface, with the partial solar eclipse visible over a surrounding region thousands of kilometres wide. Occurring about 21 hours before perigee (on December 28, 2084, at 6:00 UTC), the Moon's apparent diameter will be larger.
The path of totality will be visible from parts of the Crozet Islands. A partial solar eclipse will also be visible for parts of Southern Africa, Antarctica, and Australia.
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
Related eclipses
Eclipses in 2084
A partial solar eclipse on January 7.
A total lunar eclipse on January 22.
An annular solar eclipse on July 3.
A partial lunar eclipse on July 17.
A total solar eclipse on December 27.
Metonic
Preceded by: Solar eclipse of March 10, 2081
Followed by: Solar eclipse of October 14, 2088
Tzolkinex
Preceded by: Solar eclipse of November 15, 2077
Followed by: Solar eclipse of February 7, 2092
Half-Saros
Preceded by: Lunar eclipse of December 22, 2075
Followed by: Lunar eclipse of January 1, 2094
Tritos
Preceded by: Solar eclipse of January 27, 2074
Followed by: Solar eclipse of November 27, 2095
Solar Saros 133
Preceded by: Solar eclipse of December 17, 2066
Followed by: Solar eclipse of January 8, 2103
Inex
Preceded by: Solar eclipse of January 16, 2056
Followed by: Solar eclipse of December 8, 2113
Triad
Preceded by: Solar eclipse of February 26, 1998
Followed by: Solar eclipse of October 29, 2171
Solar eclipses of 2083–2087
Saros 133
Metonic series
Tritos series
Inex series
Notes
References
2084 in science | Solar eclipse of December 27, 2084 | Astronomy | 599 |
265,128 | https://en.wikipedia.org/wiki/Cost-effectiveness%20analysis | Cost-effectiveness analysis (CEA) is a form of economic analysis that compares the relative costs and outcomes (effects) of different courses of action. Cost-effectiveness analysis is distinct from cost–benefit analysis, which assigns a monetary value to the measure of effect. Cost-effectiveness analysis is often used in the field of health services, where it may be inappropriate to monetize health effect. Typically the CEA is expressed in terms of a ratio where the denominator is a gain in health from a measure (years of life, premature births averted, sight-years gained) and the numerator is the cost associated with the health gain. The most commonly used outcome measure is quality-adjusted life years (QALY).
Cost–utility analysis is similar to cost-effectiveness analysis. Cost-effectiveness analyses are often visualized on a plane consisting of four quadrants, with cost represented on one axis and effectiveness on the other. Whereas cost-effectiveness analysis focuses on maximising the average level of an outcome, distributional cost-effectiveness analysis extends the core methods of CEA to incorporate concerns for the distribution of outcomes as well as their average level, making trade-offs between equity and efficiency; these more sophisticated methods are of particular interest when analysing interventions to tackle health inequality.
Applications
The concept of cost-effectiveness is applied to the planning and management of many types of organized activity. It is widely used in many aspects of life.
In military acquisitions
In the acquisition of military tanks, for example, competing designs are compared not only for purchase price, but also for such factors as their operating radius, top speed, rate of fire, armor protection, and caliber and armor penetration of their guns. If a tank's performance in these areas is equal or even slightly inferior to its competitor, but substantially less expensive and easier to produce, military planners may select it as more cost-effective than the competitor.
Conversely, if the difference in price is near zero, but the more costly competitor would convey an enormous battlefield advantage through special ammunition, radar fire control and laser range finding, enabling it to destroy enemy tanks accurately at extreme ranges, military planners may choose it instead – based on the same cost-effectiveness principle.
In pharmacoeconomics
In the context of pharmacoeconomics, the cost-effectiveness of a therapeutic or preventive intervention is the ratio of the cost of the intervention to a relevant measure of its effect. Cost refers to the resource expended for the intervention, usually measured in monetary terms such as dollars or pounds. The measure of effects depends on the intervention being considered. Examples include the number of people cured of a disease, the mm Hg reduction in diastolic blood pressure and the number of symptom-free days experienced by a patient. The selection of the appropriate effect measure should be based on clinical judgment in the context of the intervention being considered.
A special case of CEA is cost–utility analysis, where the effects are measured in terms of years of full health lived, using a measure such as quality-adjusted life years (QALY) or disability-adjusted life years. Cost-effectiveness is typically expressed as an incremental cost-effectiveness ratio (ICER), the ratio of change in costs to the change in effects. A complete compilation of cost-utility analyses in the peer-reviewed medical and public health literature is available from the Cost-Effectiveness Analysis Registry website.
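As a worked illustration with hypothetical numbers: if a new treatment costs $110,000 and yields 6 QALYs while the comparator costs $50,000 and yields 4 QALYs, then
ICER = (C₁ − C₀) / (E₁ − E₀) = ($110,000 − $50,000) / (6 − 4 QALYs) = $30,000 per QALY gained.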
A 1995 study that reviewed the cost-effectiveness of over 500 life-saving interventions found that the median cost-effectiveness was $42,000 per life-year saved. A 2006 systematic review found that industry-funded studies often concluded with cost-effectiveness ratios below $20,000 per QALY, and that low-quality studies and those conducted outside the US and EU were less likely to be below this threshold. While the two conclusions of this article may indicate that industry-funded ICER measures are of lower methodological quality than those published by non-industry sources, there is also a possibility that, due to the nature of retrospective or other non-public work, publication bias may exist rather than methodology biases. There may be an incentive for an organization not to develop or publish an analysis that does not demonstrate the value of its product. Additionally, peer-reviewed journal articles should have a strong and defensible methodology, as that is the expectation of the peer-review process.
In energy efficiency investments
CEA has been applied to energy efficiency investments in buildings to calculate the value of energy saved in $/kWh. The energy in such a calculation is virtual in the sense that it was never consumed but rather saved due to some energy efficiency investment being made. Such savings are sometimes called negawatts. The benefit of the CEA approach in energy systems is that it avoids the need to guess future energy prices for the purposes of the calculation, thus removing the major source of uncertainty in the appraisal of energy efficiency investments.
See also
References
External links
Tufts CEA Registry
Global Health Cost-Effectiveness Analysis Registry
Why some drugs are not worth it BBC report
World Health Organization – CHOICE (Choosing Interventions that are Cost Effective)
ISPOR-CO, The Colombian Chapter of The International Society for Pharmacoeconomics and Outcomes Research
International Cost Estimating and Analysis Association
Costs
Health economics
Health informatics
Health care quality
Decision analysis | Cost-effectiveness analysis | Biology | 1,074 |
20,362 | https://en.wikipedia.org/wiki/Merge%20algorithm | Merge algorithms are a family of algorithms that take multiple sorted lists as input and produce a single list as output, containing all the elements of the input lists in sorted order. These algorithms are used as subroutines in various sorting algorithms, most famously merge sort.
Application
The merge algorithm plays a critical role in the merge sort algorithm, a comparison-based sorting algorithm. Conceptually, the merge sort algorithm consists of two steps:
Recursively divide the list into sublists of (roughly) equal length, until each sublist contains only one element, or in the case of iterative (bottom up) merge sort, consider a list of n elements as n sub-lists of size 1. A list containing a single element is, by definition, sorted.
Repeatedly merge sublists to create a new sorted sublist until the single list contains all elements. The single list is the sorted list.
The merge algorithm is used repeatedly in the merge sort algorithm.
An example merge sort is given in the illustration. It starts with an unsorted array of 7 integers. The array is divided into 7 partitions; each partition contains 1 element and is sorted. The sorted partitions are then merged to produce larger, sorted, partitions, until 1 partition, the sorted array, is left.
Merging two lists
Merging two sorted lists into one can be done in linear time and linear or constant space (depending on the data access model). The following pseudocode demonstrates an algorithm that merges input lists (either linked lists or arrays) A and B into a new list C. The function head yields the first element of a list; "dropping" an element means removing it from its list, typically by incrementing a pointer or index.
algorithm merge(A, B) is
inputs A, B : list
returns list
C := new empty list
while A is not empty and B is not empty do
if head(A) ≤ head(B) then
append head(A) to C
drop the head of A
else
append head(B) to C
drop the head of B
// By now, either A or B is empty. It remains to empty the other input list.
while A is not empty do
append head(A) to C
drop the head of A
while B is not empty do
append head(B) to C
drop the head of B
return C
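A direct Python transcription of this pseudocode, using array indices rather than destructive "drop" operations (a minimal sketch):

def merge(a, b):
    # Merge two sorted lists into a new sorted list; ties favor a, keeping the merge stable.
    c = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            c.append(a[i]); i += 1
        else:
            c.append(b[j]); j += 1
    # One input is exhausted; copy the remainder of the other.
    c.extend(a[i:])
    c.extend(b[j:])
    return c

print(merge([1, 3, 5], [2, 3, 6]))  # [1, 2, 3, 3, 5, 6]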
When the inputs are linked lists, this algorithm can be implemented to use only a constant amount of working space; the pointers in the lists' nodes can be reused for bookkeeping and for constructing the final merged list.
In the merge sort algorithm, this subroutine is typically used to merge two sub-arrays of a single array. This can be done by copying the sub-arrays into a temporary array, then applying the merge algorithm above. The allocation of a temporary array can be avoided, but at the expense of speed and programming ease. Various in-place merge algorithms have been devised, sometimes sacrificing the linear-time bound to produce an O(n log n) algorithm.
K-way merging
k-way merging generalizes binary merging to an arbitrary number k of sorted input lists. Applications of k-way merging arise in various sorting algorithms, including patience sorting and an external sorting algorithm that divides its input into blocks that fit in memory, sorts these one by one, then merges these blocks.
Several solutions to this problem exist. A naive solution is to do a loop over the lists to pick off the minimum element each time, and repeat this loop until all lists are empty:
Input: a list of lists.
While any of the lists is non-empty:
Loop over the lists to find the one with the minimum first element.
Output the minimum element and remove it from its list.
In the worst case, this algorithm performs Θ(kn) element comparisons to perform its work if there are a total of n elements in the k lists.
It can be improved by storing the lists in a priority queue (min-heap) keyed by their first element:
Build a min-heap h of the k lists, using the first element of each list as the key.
While any of the lists is non-empty:
Let i = find-min(h), the index of the list whose current first element is smallest.
Output the first element of list i and remove it from its list.
Re-heapify h.
Searching for the next smallest element to be output (find-min) and restoring heap order can now be done in O(log k) time, and the full problem can be solved in O(n log k) time.
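A sketch of this heap-based approach using Python's standard heapq module (the function name is ours):

import heapq

def k_way_merge(lists):
    # Min-heap of (value, list index, element index); O(n log k) overall.
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        value, i, j = heapq.heappop(heap)  # find-min
        out.append(value)
        if j + 1 < len(lists[i]):  # push the next element of list i, restoring heap order
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out

print(k_way_merge([[1, 4], [2, 5], [3, 6]]))  # [1, 2, 3, 4, 5, 6]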
A third algorithm for the problem is a divide and conquer solution that builds on the binary merge algorithm:
If k = 1, output the single input list.
If k = 2, perform a binary merge.
Else, recursively merge the first ⌈k/2⌉ lists and the final ⌊k/2⌋ lists, then binary merge these (see the sketch after this list).
When the input lists to this algorithm are ordered by length, shortest first, it requires fewer than n⌈log k⌉ comparisons, i.e., less than half the number used by the heap-based algorithm; in practice, it may be about as fast or slow as the heap-based algorithm.
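A compact sketch of this divide-and-conquer scheme, reusing the binary merge() from the earlier example:

def dc_merge(lists):
    # Recursively merge the first half and the second half, then binary merge.
    if len(lists) == 1:
        return lists[0]
    mid = len(lists) // 2
    return merge(dc_merge(lists[:mid]), dc_merge(lists[mid:]))

print(dc_merge([[1, 4], [2, 5], [3, 6]]))  # [1, 2, 3, 4, 5, 6]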
Parallel merge
A parallel version of the binary merge algorithm can serve as a building block of a parallel merge sort. The following pseudocode demonstrates this algorithm in a parallel divide-and-conquer style (adapted from Cormen et al.). It operates on two sorted arrays A and B and writes the sorted output to array C. The notation A[i...j] denotes the part of A from index i through j, exclusive.
algorithm merge(A[i...j], B[k...ℓ], C[p...q]) is
inputs A, B, C : array
i, j, k, ℓ, p, q : indices
let m = j - i,
n = ℓ - k
if m < n then
swap A and B // ensure that A is the larger array: i, j still belong to A; k, ℓ to B
swap m and n
if m ≤ 0 then
return // base case, nothing to merge
let r = ⌊(i + j)/2⌋
let s = binary-search(A[r], B[k...ℓ])
let t = p + (r - i) + (s - k)
C[t] = A[r]
in parallel do
merge(A[i...r], B[k...s], C[p...t])
merge(A[r+1...j], B[s...ℓ], C[t+1...q])
The algorithm operates by splitting either A or B, whichever is larger, into (nearly) equal halves. It then splits the other array into a part with values smaller than the midpoint of the first, and a part with larger or equal values. (The binary search subroutine returns the index in B where A[r] would be, if it were in B; this is always a number between k and ℓ.) Finally, each pair of halves is merged recursively, and since the recursive calls are independent of each other, they can be done in parallel. A hybrid approach, in which a serial algorithm is used for the recursion base case, has been shown to perform well in practice.
The work performed by the algorithm for two arrays holding a total of n elements, i.e., the running time of a serial version of it, is O(n). This is optimal since n elements need to be copied into C. To calculate the span of the algorithm, it is necessary to derive a recurrence relation. Since the two recursive calls of merge are in parallel, only the costlier of the two calls needs to be considered. In the worst case, the maximum number of elements in one of the recursive calls is at most 3n/4, since the array with more elements is perfectly split in half. Adding the O(log n) cost of the binary search, we obtain this recurrence as an upper bound:
T(n) = T(3n/4) + O(log n).
The solution is T(n) = Θ(log² n), meaning that it takes that much time on an ideal machine with an unbounded number of processors.
Note: The routine is not stable: if equal items are separated by splitting A and B, they will become interleaved in C; also, swapping A and B will destroy the order if equal items are spread among both input arrays. As a result, when used for sorting, this algorithm produces a sort that is not stable.
Parallel merge of two lists
There are also algorithms that introduce parallelism within a single instance of merging of two sorted lists. These can be used in field-programmable gate arrays (FPGAs), specialized sorting circuits, as well as in modern processors with single-instruction multiple-data (SIMD) instructions.
Existing parallel algorithms are based on modifications of the merge part of either the bitonic sorter or odd-even mergesort. In 2018, Saitoh M. et al. introduced MMS for FPGAs, which focused on removing a multi-cycle feedback datapath that prevented efficient pipelining in hardware. Also in 2018, Papaphilippou P. et al. introduced FLiMS, which improved the hardware utilization and performance by requiring only a small number of pipeline stages of compare-and-swap units to merge multiple elements per FPGA cycle.
Language support
Some computer languages provide built-in or library support for merging sorted collections.
C++
The C++ Standard Template Library has the function std::merge, which merges two sorted ranges of iterators, and std::inplace_merge, which merges two consecutive sorted ranges in-place. In addition, the std::list (linked list) class has its own merge method which merges another list into itself. The type of the elements merged must support the less-than (<) operator, or it must be provided with a custom comparator.
C++17 allows for differing execution policies, namely sequential, parallel, and parallel-unsequenced.
Python
Python's standard library (since 2.6) also has a merge function in the heapq module, which takes multiple sorted iterables and merges them into a single iterator.
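For example:

from heapq import merge
print(list(merge([1, 3, 5], [2, 4], [0, 6])))  # [0, 1, 2, 3, 4, 5, 6]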
See also
Merge (revision control)
Join (relational algebra)
Join (SQL)
Join (Unix)
References
Further reading
Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition. Addison-Wesley, 1997. . Pages 158–160 of section 5.2.4: Sorting by Merging. Section 5.3.2: Minimum-Comparison Merging, pp. 197–207.
External links
High Performance Implementation of Parallel and Serial Merge in C# with source in GitHub and in C++ GitHub
Articles with example pseudocode
Sorting algorithms | Merge algorithm | Mathematics | 2,149 |
39,536,649 | https://en.wikipedia.org/wiki/Mikl%C3%B3s%20B%C3%B3na | Miklós Bóna (born in Székesfehérvár) is an American mathematician of Hungarian origin.
Bóna completed his undergraduate studies in Budapest and Paris, then obtained his Ph.D. at MIT in 1997 as a student of Richard P. Stanley. Since 1999, he has taught at the University of Florida, where in 2010 he was inducted to the Academy of Distinguished Teaching Scholars.
Bóna's main fields of research include the combinatorics of permutations, as well as enumerative and analytic combinatorics. Since 2010, he has been one of the editors-in-chief of the Electronic Journal of Combinatorics.
Books
External links
Professional home page
References
20th-century American mathematicians
20th-century Hungarian mathematicians
21st-century American mathematicians
21st-century Hungarian mathematicians
University of Florida faculty
Combinatorialists
1967 births
Living people
Massachusetts Institute of Technology alumni | Miklós Bóna | Mathematics | 179 |
858,492 | https://en.wikipedia.org/wiki/OpenVPN | OpenVPN is a virtual private network (VPN) system that implements techniques to create secure point-to-point or site-to-site connections in routed or bridged configurations and remote access facilities. It implements both client and server applications.
OpenVPN allows peers to authenticate each other using pre-shared secret keys, certificates or username/password. When used in a multiclient-server configuration, it allows the server to release an authentication certificate for every client, using signatures and certificate authority.
It uses the OpenSSL encryption library extensively, as well as the TLS protocol, and contains many security and control features. It uses a custom security protocol that utilizes SSL/TLS for key exchange. It is capable of traversing network address translators (NATs) and firewalls.
OpenVPN has been ported to and embedded in several systems. For example, DD-WRT has the OpenVPN server function. SoftEther VPN, a multi-protocol VPN server, also has an implementation of the OpenVPN protocol.
It was written by James Yonan and is free software, released under the terms of the GNU General Public License version 2 (GPLv2). Additionally, commercial licenses are available.
Architecture
Encryption
OpenVPN uses the OpenSSL library to provide encryption of both the data and control channels. It lets OpenSSL do all the encryption and authentication work, allowing OpenVPN to use all the ciphers available in the OpenSSL package. It can also use the HMAC packet authentication feature to add an additional layer of security to the connection (referred to as an "HMAC Firewall" by the creator). It can also use hardware acceleration to get better encryption performance. Support for mbed TLS is available starting from version 2.3.
Authentication
OpenVPN has several ways to authenticate peers with each other. OpenVPN offers pre-shared keys, certificate-based, and username/password-based authentication. Preshared secret key is the easiest, and certificate-based is the most robust and feature-rich. In version 2.0 username/password authentications can be enabled, both with or without certificates. However, to make use of username/password authentications, OpenVPN depends on third-party modules.
Networking
OpenVPN can run over User Datagram Protocol (UDP) or Transmission Control Protocol (TCP) transports, multiplexing created SSL tunnels on a single TCP/UDP port (RFC 3948 for UDP).
From 2.3.x series on, OpenVPN fully supports IPv6 as protocol of the virtual network inside a tunnel and the OpenVPN applications can also establish connections via IPv6.
It has the ability to work through most proxy servers (including HTTP) and is good at working through network address translation (NAT) and getting out through firewalls. The server configuration has the ability to "push" certain network configuration options to the clients. These include IP addresses, routing commands, and a few connection options. OpenVPN offers two types of interfaces for networking via the Universal TUN/TAP driver. It can create either a layer-3 based IP tunnel (TUN), or a layer-2 based Ethernet TAP that can carry any type of Ethernet traffic. OpenVPN can optionally use the LZO compression library to compress the data stream. Port 1194 is the official IANA assigned port number for OpenVPN. Newer versions of the program now default to that port. A feature in the 2.0 version allows for one process to manage several simultaneous tunnels, as opposed to the original "one tunnel per process" restriction on the 1.x series.
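For illustration, a minimal client configuration file might look like the following (the server hostname and certificate paths are placeholders; only standard directives are shown, and any real deployment will differ):

client
dev tun                       # layer-3 TUN interface
proto udp                     # UDP transport; TCP is also supported
remote vpn.example.com 1194   # placeholder server; 1194 is the IANA-assigned port
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt                     # certificate authority
cert client.crt               # client certificate
key client.key                # client private key
verb 3                        # logging verbosity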
OpenVPN's use of common network protocols (TCP and UDP) makes it a desirable alternative to IPsec in situations where an ISP may block specific VPN protocols in order to force users to subscribe to a higher-priced, "business grade" service tier. For example, Comcast previously declared that their @Home product was, and had always been, designated as a residential service and did not allow the use of commercial applications. Their argument was that conducting remote work via a VPN can adversely affect the network performance of their regular residential subscribers. They offered an alternative, @Home Professional, which cost more than the @Home product; anyone wishing to use a VPN therefore had to subscribe to the higher-priced, business-grade service tier.
When OpenVPN uses Transmission Control Protocol (TCP) transports to establish a tunnel, performance will be acceptable only as long as there is sufficient excess bandwidth on the un-tunneled network link to guarantee that the tunneled TCP timers do not expire. If this becomes untrue, performance falls off dramatically due to the TCP meltdown problem.
Security
OpenVPN offers various internal security features. It supports up to 256-bit encryption through the OpenSSL library, although some service providers may configure lower strengths. OpenVPN also supports Perfect Forward Secrecy (PFS), which regenerates encryption keys at set intervals, ensuring that even if one key is compromised, previous and future data remains secure. Additionally, OpenVPN can be configured with various encryption ciphers, such as ChaCha20 and AES-256. It runs in userspace instead of requiring IP stack (therefore kernel) operation. OpenVPN has the ability to drop root privileges, use mlockall to prevent swapping sensitive data to disk, enter a chroot jail after initialization, and apply a SELinux context after initialization.
OpenVPN runs a custom security protocol based on SSL and TLS, rather than supporting IKE, IPsec, L2TP or PPTP.
OpenVPN offers support of smart cards via PKCS#11-based cryptographic tokens.
Extensibility
OpenVPN can be extended with third-party plug-ins or scripts, which can be called at defined entry points. The purpose of this is often to extend OpenVPN with more advanced logging, enhanced authentication with username and passwords, dynamic firewall updates, RADIUS integration and so on. The plug-ins are dynamically loadable modules, usually written in C, while the scripts interface can execute any scripts or binaries available to OpenVPN. In the OpenVPN source code there are some examples of such plug-ins, including a PAM authentication plug-in. Several third-party plug-ins also exist to authenticate against LDAP or SQL databases such as SQLite and MySQL.
Header
Platforms
It is available on Solaris, Linux, OpenBSD, FreeBSD, NetBSD, QNX, macOS and Windows XP and later. OpenVPN is available for mobile operating systems including Maemo, Windows Mobile 6.5 and below, iOS 3GS+ devices, jailbroken iOS 3.1.2+ devices, Android 4.0+ devices, and Android devices that have had the Cyanogenmod aftermarket firmware flashed or have the correct kernel module installed. It is not compatible with some mobile phone OSes, including Palm OS. It is not a "web-based" VPN shown as a web page such as Citrix or Terminal Services Web access; the program is installed independently and configured by editing text files manually, rather than through a GUI-based wizard. OpenVPN is not compatible with VPN clients that use the IPsec over L2TP or PPTP protocols. The entire package consists of one binary for both client and server connections, an optional configuration file, and one or more key files depending on the authentication method used.
Firmware implementations
OpenVPN has been integrated into several router firmware packages allowing users to run OpenVPN in client or server mode from their network routers. A router running OpenVPN in client mode, for example, allows any device on a network to access a VPN without needing the capability to install OpenVPN.
Notable firmware packages with OpenVPN integration include:
OpenVPN has also been implemented in some manufacturer router firmware.
Software implementations
OpenVPN has been integrated into SoftEther VPN, an open-source multi-protocol VPN server, to allow users to connect to the VPN server from existing OpenVPN clients.
OpenVPN is also integrated into Vyos, an open-source routing operating system forked from the Vyatta software router.
Licensing
OpenVPN is available in two versions:
OpenVPN Community Edition, which is a free and open-source version
OpenVPN Access Server (OpenVPN-AS) is based on the Community Edition, but provides additional paid and proprietary features like LDAP integration, SMB server, Web UI management and provides a set of installation and configuration tools that are reported to simplify the rapid deployment of a VPN remote-access solution. The Access Server edition relies heavily on iptables for load balancing and it has never been available on Windows for this reason. This version is also able to dynamically create client ("OpenVPN Connect") installers, which include a client profile for connecting to a particular Access Server instance. However, the user does not need to have an Access Server client in order to connect to the Access Server instance; the client from the OpenVPN Community Edition can be used.
See also
OpenConnect
OpenSSH
Secure Socket Tunneling Protocol (SSTP)
stunnel
Tunnelblick
WireGuard
References
External links
Community website
Tech Talks
2001 software
Free security software
Tunneling protocols
Unix network-related software
Virtual private networks
Free software programmed in C | OpenVPN | Engineering | 2,041 |
64,304,070 | https://en.wikipedia.org/wiki/Naomi%20Chayen | Naomi Chayen is a biochemist and structural biologist. She is a professor of Biomedical Sciences at Imperial College London, where she leads the Crystallization Group in Computational and Systems Medicine. She is best known for developing the microbatch method and inventing novel nucleants for protein crystallization which have been applied to high-throughput screening for rational drug design.
Education and career
Chayen earned her first degree in pharmacy at the Hebrew University of Jerusalem. During her undergraduate studies, she visited the Kennedy Institute of Rheumatology to learn histochemistry. She subsequently pursued MSc and PhD research at the Kennedy Institute. In 1983, Chayen submitted her thesis on stimulus-response coupling in smooth muscle cells and received a PhD from Brunel University London.
Chayen began her first postdoctoral fellowship at Imperial College London, where she studied the biophysics of muscle proteins. When her grant was not renewed, she joined the lab of David Mervyn Blow to develop novel protein crystallization techniques. There, she began her influential work of utilizing phase diagrams to optimize conditions for crystal growth.
Currently, Chayen is a professor of Biomedical Sciences and head of the Crystallization Group in Computational and Systems Medicine at Imperial College London.
Research
Chayen is best known for her invention of novel protein crystallization methods. In 1990, she first published a method of suspending droplets of protein solution and precipitant solutions in low-density paraffin oil to prevent evaporation during the microbatch crystallization process. The microbatch process can be suitable for membrane proteins, which are ordinarily difficult to crystallize. Chayen's method has since been applied towards the analysis of many biomolecules that are relevant to human diseases such as cancer, HIV, diabetes, and heart disease.
In addition to her work on microbatch methods, Chayen invented a novel gel-glass nucleant now known as "Naomi's Nucleant." Naomi's Nucleant has been used to crystallize more than 20 proteins, the most of any single nucleant. In 2015, she collaborated with Subrayal Reddy at University of Central Lancashire to develop the first non-protein nucleant, a semi-liquid molecularly imprinted polymer designed for high-throughput screening. The nucleant was commercialized as "Chayen Reddy MIP."
Chayen's current research interests include protein crystallization, structural biology, and structural genomics and proteomics.
Awards and honors
Chayen holds nine patents and has launched several commercial products for protein crystallization, such as "Chayen Reddy MIP" and "Naomi's nucleant." In addition, she has won the following awards:
Women of Outstanding Achievement for Innovation and Entrepreneurship Commendation, WISE Campaign (2012)
Investigator of the Year, Select Biosciences Life Sciences Awards (2011)
Innovator of the Year, CWT everywoman in Technology Awards (2011)
Chayen was the Sterling Drug Visiting Professor of Pharmacology at Yale School of Medicine in 2009. She was formerly the president of the International Organization for Biological Crystallization.
References
Biochemists
Women biochemists
Living people
Alumni of Brunel University London
People associated with Imperial College London
Year of birth missing (living people) | Naomi Chayen | Chemistry,Biology | 672 |
43,361,371 | https://en.wikipedia.org/wiki/60%20Serpentis | 60 Serpentis, also known as c Serpentis, is a single, orange-hued star in Serpens Cauda, the eastern section of the constellation Serpens. It is faintly visible to the naked eye with an apparent visual magnitude of 5.38. The distance to this star, as estimated from its annual parallax shift of , is approximately 130 light years. It is moving further from the Sun with a heliocentric radial velocity of +28 km/s, having approached as close as some 1.9 million years ago.
This is an evolved K-type giant star with a stellar classification of K0 III, having used up its core hydrogen and expanded. At the age of around 1.26 billion years, it currently belongs to the so-called "red clump", which indicates it is on the horizontal branch and is generating energy through helium fusion at its core. The star has an estimated 1.8 times the mass of the Sun and 8 times the Sun's radius. It is radiating 35 times the Sun's luminosity from its enlarged photosphere at an effective temperature of about 5,059 K.
References
K-type giants
Horizontal-branch stars
Serpens
Serpentis, c
Durchmusterung objects
Serpentis, 60
170474
090642
6935 | 60 Serpentis | Astronomy | 273 |
23,907,457 | https://en.wikipedia.org/wiki/Accounting%20rate%20of%20return | The accounting rate of return, also known as average rate of return, or ARR, is a financial ratio used in capital budgeting. The ratio does not take into account the concept of time value of money. ARR calculates the return generated from the net income of a proposed capital investment. The ARR is a percentage return. Say, if ARR = 7%, then it means that the project is expected to earn seven cents out of each dollar invested (yearly). If the ARR is equal to or greater than the required rate of return, the project is acceptable. If it is less than the desired rate, it should be rejected. When comparing investments, the higher the ARR, the more attractive the investment. More than half of large firms calculate ARR when appraising projects.
The key advantage of ARR is that it is easy to compute and understand. The main disadvantage of ARR is that it disregards the time factor in terms of time value of money or risks for long term investments. The ARR is built on evaluation of profits, and it can be easily manipulated with changes in depreciation methods. The ARR can give misleading information when evaluating investments of different size.
Basic formulas
ARR = average annual accounting profit / average investment
where:
average investment = (book value at the beginning of the project + book value at the end of the project) / 2
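As a worked illustration with hypothetical figures: a project with an average annual accounting profit of $20,000, an initial book value of $100,000, and an end-of-life book value of $20,000 has an average investment of ($100,000 + $20,000) / 2 = $60,000, giving ARR = $20,000 / $60,000 ≈ 33.3%.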
Pitfalls
This technique is based on profits rather than cash flow; it ignores the cash flow from the investment. Therefore, it can be affected by non-cash items such as bad debts and depreciation when calculating profits. The choice of depreciation method can be manipulated and lead to higher profits.
This technique does not adjust for the risk to long term forecasts.
ARR doesn't take into account the time value of money.
See also
Average accounting return
References
Financial ratios
Capital budgeting
Corporate development
Management cybernetics | Accounting rate of return | Mathematics | 357 |
166,010 | https://en.wikipedia.org/wiki/Vorticity | In continuum mechanics, vorticity is a pseudovector (or axial vector) field that describes the local spinning motion of a continuum near some point (the tendency of something to rotate), as would be seen by an observer located at that point and traveling along with the flow. It is an important quantity in the dynamical theory of fluids and provides a convenient framework for understanding a variety of complex flow phenomena, such as the formation and motion of vortex rings.
Mathematically, the vorticity ω is the curl of the flow velocity v:
ω = ∇ × v,
where ∇ is the nabla operator. Conceptually, the vorticity could be determined by marking parts of a continuum in a small neighborhood of the point in question, and watching their relative displacements as they move along the flow. The vorticity would be twice the mean angular velocity vector of those particles relative to their center of mass, oriented according to the right-hand rule. By its own definition, the vorticity vector is a solenoidal field, since
∇ · ω = 0.
In a two-dimensional flow, ω is always perpendicular to the plane of the flow, and can therefore be considered a scalar field.
Mathematical definition and properties
Mathematically, the vorticity of a three-dimensional flow is a pseudovector field, usually denoted by ω, defined as the curl of the velocity field v describing the continuum motion. In Cartesian coordinates:
ω = ∇ × v = (∂v_z/∂y − ∂v_y/∂z, ∂v_x/∂z − ∂v_z/∂x, ∂v_y/∂x − ∂v_x/∂y).
In words, the vorticity tells how the velocity vector changes when one moves by an infinitesimal distance in a direction perpendicular to it.
In a two-dimensional flow where the velocity is independent of the z-coordinate and has no z-component, the vorticity vector is always parallel to the z-axis, and therefore can be expressed as a scalar field ω multiplied by a constant unit vector ẑ:
ω = ω ẑ, where ω = ∂v_y/∂x − ∂v_x/∂y.
The vorticity is also related to the flow's circulation Γ (line integral of the velocity) along a closed path by the (classical) Stokes' theorem: for any infinitesimal surface element C with normal direction n and area dA, the circulation dΓ along the perimeter of C is the dot product ω · (n dA), where ω is the vorticity at the center of C.
Since vorticity is an axial vector, it can be associated with a second-order antisymmetric tensor Ω (the so-called vorticity or rotation tensor), which is said to be the dual of ω. The relation between the two quantities, in index notation, is given by
Ω_ij = (1/2) ε_ijk ω_k,   ω_i = ε_ijk Ω_jk,
where ε_ijk is the three-dimensional Levi-Civita tensor. The vorticity tensor is simply the antisymmetric part of the velocity gradient tensor, i.e.,
Ω_ij = (1/2)(∂v_j/∂x_i − ∂v_i/∂x_j).
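As a numerical illustration (a minimal sketch, not part of the source), the scalar vorticity of a two-dimensional velocity field sampled on a uniform grid can be estimated with finite differences; for rigid-body rotation the result recovers twice the angular velocity:

import numpy as np

def vorticity_2d(vx, vy, dx, dy):
    # z-component of vorticity: dv_y/dx - dv_x/dy on a uniform 2-D grid.
    dvy_dx = np.gradient(vy, dx, axis=1)
    dvx_dy = np.gradient(vx, dy, axis=0)
    return dvy_dx - dvx_dy

# Rigid-body rotation with angular velocity 0.5: v = (-0.5*y, 0.5*x).
y, x = np.mgrid[-1:1:64j, -1:1:64j]
dx, dy = x[0, 1] - x[0, 0], y[1, 0] - y[0, 0]
omega = vorticity_2d(-0.5 * y, 0.5 * x, dx, dy)
print(omega.mean())  # ~1.0, i.e., twice the angular velocity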
Examples
In a mass of continuum that is rotating like a rigid body, the vorticity is twice the angular velocity vector of that rotation. This is the case, for example, in the central core of a Rankine vortex.
The vorticity may be nonzero even when all particles are flowing along straight and parallel pathlines, if there is shear (that is, if the flow speed varies across streamlines). For example, in the laminar flow within a pipe with constant cross section, all particles travel parallel to the axis of the pipe; but faster near that axis, and practically stationary next to the walls. The vorticity will be zero on the axis, and maximum near the walls, where the shear is largest.
Conversely, a flow may have zero vorticity even though its particles travel along curved trajectories. An example is the ideal irrotational vortex, where most particles rotate about some straight axis, with speed inversely proportional to their distances to that axis. A small parcel of continuum that does not straddle the axis will be rotated in one sense but sheared in the opposite sense, in such a way that their mean angular velocity about their center of mass is zero.
Example flows (figure omitted): a rigid-body-like vortex, with flow speed v ∝ r; a parallel flow with shear; and an irrotational vortex, with v ∝ 1/r, where r is the distance to the center of the vortex and ∝ indicates proportionality. The figure showed the absolute and relative velocities around a highlighted point in each case: the vorticity is nonzero for the rigid-body-like vortex and the parallel flow with shear, and zero for the irrotational vortex.
Another way to visualize vorticity is to imagine that, instantaneously, a tiny part of the continuum becomes solid and the rest of the flow disappears. If that tiny new solid particle is rotating, rather than just moving with the flow, then there is vorticity in the flow. In the figure below, the left subfigure demonstrates no vorticity, and the right subfigure demonstrates existence of vorticity.
Evolution
The evolution of the vorticity field in time is described by the vorticity equation, which can be derived from the Navier–Stokes equations.
In many real flows where the viscosity can be neglected (more precisely, in flows with high Reynolds number), the vorticity field can be modeled by a collection of discrete vortices, the vorticity being negligible everywhere except in small regions of space surrounding the axes of the vortices. This is true in the case of two-dimensional potential flow (i.e. two-dimensional zero viscosity flow), in which case the flowfield can be modeled as a complex-valued field on the complex plane.
Vorticity is useful for understanding how ideal potential flow solutions can be perturbed to model real flows. In general, the presence of viscosity causes a diffusion of vorticity away from the vortex cores into the general flow field; this flow is accounted for by a diffusion term in the vorticity transport equation.
Vortex lines and vortex tubes
A vortex line or vorticity line is a line which is everywhere tangent to the local vorticity vector. Vortex lines are defined by the relation
dx/ω_x = dy/ω_y = dz/ω_z,
where ω = (ω_x, ω_y, ω_z) is the vorticity vector in Cartesian coordinates.
A vortex tube is the surface in the continuum formed by all vortex lines passing through a given (reducible) closed curve in the continuum. The 'strength' of a vortex tube (also called vortex flux) is the integral of the vorticity across a cross-section of the tube, and is the same everywhere along the tube (because vorticity has zero divergence). It is a consequence of Helmholtz's theorems (or equivalently, of Kelvin's circulation theorem) that in an inviscid fluid the 'strength' of the vortex tube is also constant with time. Viscous effects introduce frictional losses and time dependence.
In a three-dimensional flow, vorticity (as measured by the volume integral of the square of its magnitude) can be intensified when a vortex line is extended — a phenomenon known as vortex stretching. This phenomenon occurs in the formation of a bathtub vortex in outflowing water, and the build-up of a tornado by rising air currents.
Vorticity meters
Rotating-vane vorticity meter
A rotating-vane vorticity meter was invented by Russian hydraulic engineer A. Ya. Milovich (1874–1958). In 1913 he proposed a cork with four blades attached as a device qualitatively showing the magnitude of the vertical projection of the vorticity and demonstrated a motion-picture photography of the float's motion on the water surface in a model of a river bend.
Rotating-vane vorticity meters are commonly shown in educational films on continuum mechanics (famous examples include the NCFMF's "Vorticity" and "Fundamental Principles of Flow" by Iowa Institute of Hydraulic Research).
Specific sciences
Aeronautics
In aerodynamics, the lift distribution over a finite wing may be approximated by assuming that each spanwise segment of the wing has a semi-infinite trailing vortex behind it. It is then possible to solve for the strength of the vortices using the criterion that there be no flow induced through the surface of the wing. This procedure is called the vortex panel method of computational fluid dynamics. The strengths of the vortices are then summed to find the total approximate circulation about the wing. According to the Kutta–Joukowski theorem, lift per unit of span is the product of circulation, airspeed, and air density.
Atmospheric sciences
The relative vorticity is the vorticity relative to the Earth induced by the air velocity field. This air velocity field is often modeled as a two-dimensional flow parallel to the ground, so that the relative vorticity vector is generally scalar rotation quantity perpendicular to the ground. Vorticity is positive when – looking down onto the Earth's surface – the wind turns counterclockwise. In the northern hemisphere, positive vorticity is called cyclonic rotation, and negative vorticity is anticyclonic rotation; the nomenclature is reversed in the Southern Hemisphere.
The absolute vorticity is computed from the air velocity relative to an inertial frame, and therefore includes a term due to the Earth's rotation, the Coriolis parameter.
The potential vorticity is absolute vorticity divided by the vertical spacing between levels of constant (potential) temperature (or entropy). The absolute vorticity of an air mass will change if the air mass is stretched (or compressed) in the vertical direction, but the potential vorticity is conserved in an adiabatic flow. As adiabatic flow predominates in the atmosphere, the potential vorticity is useful as an approximate tracer of air masses in the atmosphere over the timescale of a few days, particularly when viewed on levels of constant entropy.
The barotropic vorticity equation is the simplest way for forecasting the movement of Rossby waves (that is, the troughs and ridges of 500 hPa geopotential height) over a limited amount of time (a few days). In the 1950s, the first successful programs for numerical weather forecasting utilized that equation.
In modern numerical weather forecasting models and general circulation models (GCMs), vorticity may be one of the predicted variables, in which case the corresponding time-dependent equation is a prognostic equation.
Related to the concept of vorticity is the helicity H, defined as
H = ∫_V v · ω dV,
where the integral is over a given volume V. In atmospheric science, helicity of the air motion is important in forecasting supercells and the potential for tornadic activity.
See also
Barotropic vorticity equation
D'Alembert's paradox
Enstrophy
Palinstrophy
Velocity potential
Vortex
Vortex tube
Vortex stretching
Horseshoe vortex
Wingtip vortices
Fluid dynamics
Biot–Savart law
Circulation
Vorticity equations
Kutta–Joukowski theorem
Atmospheric sciences
Prognostic equation
Carl-Gustaf Rossby
Hans Ertel
References
Bibliography
Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London
"Weather Glossary"' The Weather Channel Interactive, Inc.. 2004.
"Vorticity". Integrated Publishing.
Further reading
Ohkitani, K., "Elementary Account Of Vorticity And Related Equations". Cambridge University Press. January 30, 2005.
Chorin, Alexandre J., "Vorticity and Turbulence". Applied Mathematical Sciences, Vol 103, Springer-Verlag. March 1, 1994.
Majda, Andrew J., Andrea L. Bertozzi, "Vorticity and Incompressible Flow". Cambridge University Press; 2002.
Tritton, D. J., "Physical Fluid Dynamics". Van Nostrand Reinhold, New York. 1977.
Arfken, G., "Mathematical Methods for Physicists", 3rd ed. Academic Press, Orlando, Florida. 1985.
External links
Weisstein, Eric W., "Vorticity". Scienceworld.wolfram.com.
Doswell III, Charles A., "A Primer on Vorticity for Application in Supercells and Tornadoes". Cooperative Institute for Mesoscale Meteorological Studies, Norman, Oklahoma.
Cramer, M. S., "Navier–Stokes Equations -- Vorticity Transport Theorems: Introduction". Foundations of Fluid Mechanics.
Parker, Douglas, "ENVI 2210 – Atmosphere and Ocean Dynamics, 9: Vorticity". School of the Environment, University of Leeds. September 2001.
Graham, James R., "Astronomy 202: Astrophysical Gas Dynamics". Astronomy Department, UC Berkeley.
"The vorticity equation: incompressible and barotropic fluids".
"Interpretation of the vorticity equation".
"Kelvin's vorticity theorem for incompressible or barotropic flow".
"Spherepack 3.1 ". (includes a collection of FORTRAN vorticity program)
"Mesoscale Compressible Community (MC2) Real-Time Model Predictions". (Potential vorticity analysis)
Continuum mechanics
Fluid dynamics
Meteorological quantities
Rotation
5,810,868 | https://en.wikipedia.org/wiki/Epoetin%20alfa | Epoetin alfa, sold under the brand name Epogen among others, is a human erythropoietin produced in cell culture using recombinant DNA technology. Epoetin alfa is an erythropoiesis-stimulating agent. It stimulates erythropoiesis (increasing red blood cell levels) and is used to treat anemia, commonly associated with chronic kidney failure and cancer chemotherapy. Epoetin alfa was developed by Amgen.
It is on the World Health Organization's List of Essential Medicines. It was approved for medical use in the European Union in August 2007.
Medical uses
Epoetin alfa is indicated for the treatment of anemia due to chronic kidney disease; anemia due to zidovudine treatment in people with HIV infection; anemia due to the effects of concomitant myelosuppressive chemotherapy; and for the reduction of allogeneic red blood cell transfusions.
Anemia caused by kidney disease
For people who require dialysis or have chronic kidney disease, iron should be given with erythropoietin, depending on some laboratory parameters such as ferritin and transferrin saturation.
Erythropoietin is also used to treat anemia in people, and cats and dogs, with chronic kidney disease who are not on dialysis (those in Stage 3 or 4 disease and those living with a kidney transplant). There are two types of erythropoietin for people, and cats and dogs, with anemia due to chronic kidney disease (not on dialysis).
Anemia in critically ill people
Erythropoietin is used to treat people with anemia resulting from critical illness.
In a randomized controlled trial, erythropoietin was shown to not change the number of blood transfusions required by critically ill patients. A surprising finding in this study was a small mortality reduction in patients receiving erythropoietin. This result was statistically significant after 29 days but not at 140 days. The mortality difference was most marked in patients admitted to the ICU for trauma. The authors provide several hypotheses for potential etiologies of this reduced mortality, but, given the known increase in thrombosis and increased benefit in trauma patients as well as marginal nonsignificant benefit (adjusted hazard ratio of 0.9) in surgery patients, it could be speculated that some of the benefit might be secondary to the procoagulant effect of erythropoietin. Regardless, this study suggests further research may be necessary to see which critical care patients, if any, might benefit from administration of erythropoietin.
Adverse effects
Epoetin alfa is generally well tolerated. Common side effects include high blood pressure, headache, disabling cluster migraine (resistant to remedies), joint pain, and clotting at the injection site. Rare cases of stinging at the injection site, skin rash, and flu-like symptoms (joint and muscle pain) have occurred within a few hours following administration. More serious side effects, including allergic reactions, seizures and thrombotic events (e.g., heart attacks, strokes, and pulmonary embolism) rarely occur. Chronic self-administration of the drug has been shown to cause increases in blood hemoglobin and hematocrit to abnormally high levels, resulting in dyspnea and abdominal pain.
Erythropoietin is associated with an increased risk of adverse cardiovascular complications in patients with kidney disease if it is used to target an increase of hemoglobin levels above 13.0 g/dl.
Early treatment (before an infant is 8 days old) with erythropoietin correlated with an increase in the risk of retinopathy of prematurity in premature and anemic infants, raising concern that the angiogenic actions of erythropoietin may exacerbate retinopathy. Since anemia itself increases the risk of retinopathy, the correlation with erythropoietin treatment may be incidental.
Safety advisories in anemic cancer patients
Amgen advised the U.S. Food and Drug Administration (FDA) regarding the results of the DAHANCA 10 clinical trial. The DAHANCA 10 data monitoring committee found that three-year loco-regional cancer control in subjects treated with Aranesp was significantly worse than for those not receiving Aranesp (p=0.01).
In response to these advisories, the FDA released a Public Health Advisory
on 9 March 2007, and a clinical alert for doctors in February 2007, about the use of erythropoiesis-stimulating agents (ESAs) such as epogen and darbepoetin. The advisory recommended caution in using these agents in cancer patients receiving chemotherapy or off chemotherapy, and indicated a lack of clinical evidence to support improvements in quality of life or transfusion requirements in these settings.
Several publications and FDA communications have increased the level of concern related to adverse effects of ESA therapy in selected groups. In a revised black box warning, the FDA notes significant risks, advising that ESAs should be used only in patients with cancer when treating anemia specifically caused by chemotherapy, and not for other causes of anemia. Further, the warning states that ESAs should be discontinued once the patient's chemotherapy course has been completed.
Interactions
Drug interactions with erythropoietin include:
Major: lenalidomide—risk of thrombosis
Moderate: cyclosporine—risk of high blood pressure may be greater in combination with EPO. EPO may lead to variability in blood levels of cyclosporine.
Minor: ACE inhibitors and angiotensin receptor blockers may interfere with hematopoiesis, possibly by decreasing the synthesis of endogenous erythropoietin and decreasing bone marrow production of red blood cells.
Society and culture
The publication of an editorial questioning the benefits of high-dose epoetin was canceled by the marketing branch of a journal after being accepted by the editorial branch, highlighting concerns about conflicts of interest in publishing.
In 2011, author Kathleen Sharp published a book, Blood Feud: The Man Who Blew the Whistle on One of the Deadliest Prescription Drugs Ever,
alleging drug maker Johnson & Johnson encouraged doctors to prescribe epoetin in high doses, particularly for cancer patients, because this would increase sales by hundreds of millions of dollars. Former sales representatives Mark Duxbury and Dean McClennan, claimed that the bulk of their business selling epoetin to hospitals and clinics was Medicare fraud, totaling billion.
Economics
The average cost per patient in the US was in 2009.
Epoetin alfa has accounted for the single greatest drug expenditure paid by the US Medicare system; in 2010, the program paid for the medication.
Biosimilars
In August 2007, Binocrit, Epoetin Alfa Hexal, and Abseamed were approved for use in the European Union.
Research
Neurological diseases
Erythropoietin has been hypothesized to be beneficial in treating certain neurological diseases such as schizophrenia and stroke. Some research has suggested that erythropoietin improves the survival rate in children with cerebral malaria, which is caused by the malaria parasite's blockage of blood vessels in the brain. However, the possibility that erythropoietin may be neuroprotective is inconsistent with the poor transport of the chemical into the brain and the low levels of erythropoietin receptors expressed on neuronal cells.
Psychiatric diseases
Randomized controlled clinical trials have shown promising results for EPO in improving cognition, which is often intractable with the current treatments for mood disorders and schizophrenia. These domains include the speed of complex cognitive processing across attention, memory, and executive function.
Preterm infants
Infants born early often require transfusions with red blood cells and have low levels of erythropoietin. Erythropoietin has been studied as a treatment option to reduce anemia in preterm infants. Treating infants less than 8 days old with erythropoietin may slightly reduce the need for red blood cell transfusions, but increases the risk of retinopathy. Due to the limited clinical benefit and increased risk of retinopathy, early or late erythropoietin treatment is not recommended for preterm infants.
References
Antianemic preparations
Growth factors
Erythropoiesis-stimulating agents
Amgen
Drugs developed by Johnson & Johnson
Recombinant proteins | Epoetin alfa | Chemistry,Biology | 1,765 |
60,803,057 | https://en.wikipedia.org/wiki/Microtechnique | Microtechnique is an aggregate of methods used to prepare micro-objects for studying. It is currently being employed in many fields in life science. Two well-known branches of microtechnique are botanical (plant) microtechnique and zoological (animal) microtechnique.
With respect to both plant microtechnique and animal microtechnique, four types of methods are commonly used in recent micro experiments: whole mounts, smears, squashes, and sections. Plant microtechnique includes direct microscopic examinations, freehand sections, clearing, maceration, embedding, and staining. Moreover, three preparation methods used in zoological micro observations are the paraffin method, the celloidin method, and the freezing method.
History
The early development of microtechnique in botany is closely related to that in zoology. Zoological and botanical discoveries are adopted by both zoologists and botanists.
The field of microtechnique dates from the end of the 1930s, when the principle of dry preparation emerged. Since Hooke discovered cells, microtechnique has developed together with the emergence of early microscopes, and it then advanced over the period 1800–1875. After 1875, modern micro methods emerged. In recent years, both traditional methods and modern microtechnique have been in use in many experiments.
Commonly used methods
Some general microtechniques can be used in both plant and animal micro observation. Whole mounts, smears, squashes, and sections are four commonly used methods for preparing plant and animal specimens for specific purposes.
Whole mounts
Whole mounts are usually used when observers need to examine a whole organism or do detailed research on a specific organ structure. This method requires objects from which moisture can be removed, like seeds and micro fossils.
According to purpose, whole mounts can be divided into three categories: temporary whole mounts, semi-permanent whole mounts, and permanent whole mounts. Temporary whole mounts are usually used for teaching activities in class. Semi-permanent whole mounts are prepared for longer use, up to fourteen days; in this preparation, Canada balsam is used to seal the specimens, and the method is used to observe unicellular and colonial algae, fungal spores, moss protonemata, and prothalli. The third category is the permanent whole mount, for which two methods are usually used: the hygrobutol method and the glycerine-xylol method.
Smears
Smearing is an easy way of preparing slides and is used in many laboratories. Smears can be employed when making slide specimens by spreading liquid or semi-liquid materials, or loose tissues and cells of animals and plants, evenly on the slide. When a solid material is smeared, the material is placed on the glass slide and a blade is used to press it from one side, so that the cells are pressed out and distributed evenly on the slide in a thin layer, as when an anther is smeared.
Squashes
Squashes are preparations in which objects are crushed with force. This method is suitable for preparing transparent and tender tissues. When preparing squash slides, specimens should be thin and transparent so that objects can be observed clearly under the microscope.
In this technique, the material is placed on the glass slide and teased apart with a scalpel or dissecting needle, and a drop of dye solution is then added. After these steps, a second slide is applied to cover the first, and pressure is applied evenly to break the material apart and disperse the cells. Alternatively, the specimen can be extruded between the cover slide and the slide with even pressure.
Sections
Sections are thin slices, which are needed in all studies of cellular structures. This technique can be used for preparing tissues of animals and plants. For use under optical microscopy, the thickness of the material should be between 2 and 25 micrometers; for observation under electron microscopy, sections should be from 20 to 30 nanometers. A microtome can be used to cut sufficiently thin slices. If the objects cannot satisfy the thickness requirement, the material must be dehydrated using alcohol before sectioning. Three commonly used sectioning methods are the freehand section technique, the paraffin method, and the celloidin method.
Methods used in plant micro-experiments
Botanical microtechnique is an aggregate of methods providing micro visualization of genes and gene products in an entire plant, and a study providing valuable experimental information. It involves classical methods developed over a hundred years ago as well as new methods developed to expand the scope and depth of botanical micro studies. Both traditional and new microtechniques are useful for experimental research, and some will have a significant influence on further study. Different methods are used to prepare plant specimens, including direct microscopic examinations, freehand sections, clearing, maceration, embedding, and staining.
Direct microscopic examinations
Direct microscopic examination is a simple way of observing micro-objects. This method is also useful for checking whether mold grows on the surface of specimens, and it can serve as an initial step of a micro experiment.
Freehand section
Freehand slicing is a method of making thin slices of fresh or fixed experimental materials (generally plants with a low degree of lignification) with a hand-held blade, without special instruments or special chemical reagents.
Clearing
The clearing technique produces translucent slides by removing part of the cytoplasmic content and then treating the tissue with high-refractive-index reagents. This method is suitable for preparing whole mount slides. Clearing is the procedure of using clearing reagents to remove alcohol and make the tissue translucent; xylene is the most popular clearing agent.
Maceration
Maceration is the process of separating the constituent cells of tissues, enabling observers to study whole cells in three-dimensional detail. The chemical maceration method uses chemicals to soften the tissue of organs or parts and dissolve the connections between cells so that different cells can be identified.
Embedding
Embedding is an intermediate stage of the sectioning process. When preparing specimens, it is difficult to make uniform slices because the tissue is soft; it is therefore necessary to infiltrate the tissue with a substance that hardens the whole tissue and facilitates slicing. This process is called embedding. The substance used is the embedding medium, the choice of which depends on the type of microscope, the type of microtome, and the type of tissue. Paraffin wax, whose melting point is 56 to 62°C, is commonly used for embedding.
Staining
Since few plant tissues have color, there is little chromatic difference between plant tissues, which makes it difficult to differentiate botanical structures. Material is therefore usually dyed before mounting. This process is called staining, and it makes it possible to distinguish one part of the sample from another by color. Acid dyes can be used when staining micro slides; for example, acid dyes are used when coloring nuclei, while other cellular components are stained using alkaline dyes. There are also staining machines, which allow tissue to be stained automatically.
Microtechnique used for animal observation
Zoological microtechnique is the art of preparation for microscopic animal observation. Although many microtechniques can be used in both plant and animal micro experiments, some methods differ when employed in different fields. The commonly used preparation methods in zoological micro observation are the paraffin method, the celloidin method, and the freezing method, along with miscellaneous techniques.
Paraffin method
Infiltration and embedding
This process usually consists of infiltration, embedding, sectioning, affixing, and processing the sections. Following the initial stage, fixation, the next step is dehydration, which removes the water in the tissue using alcohol. The tissue can then be infiltrated and embedded with wax; a tissue specimen can be kept for several years after it has been embedded in wax. Paraffin wax, which is soft and colorless, is the most commonly used reagent.
Sectioning
Sectioning a tissue can be done with either a microtome knife or a razor blade as the cutting blade.
The microtome knife is used for fine sectioning and is necessary when preparing very thin sections; when using such a knife, the operator must be extremely careful. This instrument is sometimes impractical, so a razor blade is used for general work to prepare sections above 9 microns (1 micron equals 1/1000 of a millimeter). Furthermore, the razor blade works better than the microtome knife when thick sections of no less than 20 microns are required.
Affixing and processing
After sectioning, the prepared slices are affixed to slides. There are two commonly used affixatives, Haupt's and Mayer's. Haupt's affixative contains 100 cc (cubic centimetres) of distilled water, 1 g gelatin, 2 g phenol crystals, and 15 cc glycerine. Mayer's affixative consists of 5 cc egg albumen, 50 cc glycerine, and 1 g sodium salicylate. The general steps of affixing paraffin sections are: 1. Clean the required slides. 2. Mark the cleaned slides. 3. Drop affixative on each slide. 4. Put on another slide. 5. Spread the affixative. 6. Drop floating medium. 7. Divide the paraffin ribbon into the required lengths. 8. Transfer the sections. 9. Add more floating medium if incomplete floating occurs. 10. Raise the temperature. 11. Remove the slides and redundant floating medium. 12. Dry the sections.
Processing paraffin sections includes: 1. Deparaffination. 2. Removing the deparaffinizing solution. 3. Hydration. 4. Staining. 5. Dehydration. 6. Dealcoholisation and clearing. 7. Mounting the cover slide.
Celloidin method
The celloidin technique is the procedure of embedding a specimen in celloidin, and it can be used for embedding large, hard objects. Celloidin is a digestive fiber, which is flammable and soluble in acetone, clove oil, and a mixture of anhydrous alcohol and ether. Celloidin turns into a turbid white emulsion when it meets water, so a dry container is required to hold it.
The method of celloidin slicing is to fix and dehydrate the tissue, treat it with the anhydrous alcohol-ether mixture, and then impregnate, embed, and slice the tissue with celloidin. This slicing method can handle large tissues and has the advantage that no heating is involved, so the tissues do not shrink. However, the technique has some shortcomings: the slices cannot be cut very thin (no less than 20 microns), and impregnation with celloidin is time-consuming.
Freezing method
The freezing technique is the most commonly used sectioning method. It preserves the immune activity of various antigens well, and both fresh tissue and fixed tissue can be frozen. It is also a technique used for freezing sections of either fresh or fixed plant tissues.
During the freezing procedure, the water in tissues readily forms ice crystals, which often affect antigen localization. It is generally believed that when ice crystals are small the effect is small, and when ice crystals are large the damage to the tissue structure is large; this phenomenon is more likely to occur in tissues with a higher moisture content. The size of an ice crystal is directly proportional to its growth rate and inversely proportional to the nucleation (formation) rate; that is, the greater the number of ice crystals formed, the smaller each crystal is. The formation of damaging ice crystals should therefore be minimized. The freezing method allows tissues to be sectioned rapidly and biopsies to be taken without using reagents; the procedure should be performed rapidly to limit the formation of ice crystals.
See also
Microtechnology
Histology
References
Scientific techniques
Microbiology | Microtechnique | Chemistry,Biology | 2,626 |
21,167,712 | https://en.wikipedia.org/wiki/Comparator%20hypothesis | The comparator hypothesis is a psychological model of associative learning and performance. To understand the model, it helps to consider how associative learning is usually studied. For example, to study the learning of an association between cues, such as lights and sounds, and an outcome such as food, an experimenter typically pairs the cues and the food a number of times (the learning phase) and then tests with one or more of the cues to see if a response has been learned (the test phase). Most theories of associative learning have assumed that phenomena of interest (see Classical conditioning for a list of phenomena) depend on what happens during the learning phase. The comparator hypothesis assumes, on the contrary, that what happens during the learning phase is fairly simple, and that most interesting phenomena depend on what happens during the test phase. The comparator hypothesis arose primarily in response to so-called “cue competition” effects. If for example in classical conditioning, two conditioned stimuli A and B are presented with an unconditioned stimulus, one may find on test that the subject responds to A or to B or to both or not very much to either. How can one account for such varied results?
First proposed by Ralph Miller, the comparator hypothesis is a model of Pavlovian associations which posits that cue competition effects arise at the time of test, that is during performance, not during learning. The model assumes, essentially, that during conditioning the subject acquires both CS-US and context-US associations. At the time of the test, the associations are compared, and a response to a CS occurs only if the CS-US association is stronger than the context-US association. The model was initially proposed to account for unexplained variations in cue competition effects such as recovery from blocking, but has been expanded to apply more broadly to learning phenomena. The success of the hypothesis has led to modifications in existing theories, such as Wagner's SOP and the Rescorla-Wagner model, enabling them to explain such phenomena as retrospective reevaluation, but other phenomena such as counteraction still pose difficulties for most models.
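The test-time comparison can be made concrete with a small numeric sketch. Everything below is an illustrative assumption rather than Miller's formal model: the associative strengths are learned with a simple linear-operator rule, and the context is simply assumed to gain strength more slowly than the CS.

```python
# Illustrative sketch of the comparator rule, not Miller's formal model.
# Strengths grow by a linear-operator rule during the learning phase;
# the response decision is made only at the time of test.

def train(trials, rate_cs=0.30, rate_ctx=0.15):
    """trials: sequence of 1.0 (US present) / 0.0 (US absent)."""
    v_cs, v_ctx = 0.0, 0.0
    for us in trials:
        v_cs += rate_cs * (us - v_cs)     # CS-US association
        v_ctx += rate_ctx * (us - v_ctx)  # context-US association
    return v_cs, v_ctx

v_cs, v_ctx = train([1.0] * 10)
# Comparator rule: a response to the CS occurs only if its direct
# association with the US exceeds the context's association with the US.
print(v_cs, v_ctx, v_cs > v_ctx)
```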
Recently Ghirlanda and Ibadullayev studied Stout and Miller's mathematical formulation of the comparator hypothesis, and compared its predictions with a variety of experimental results. They concluded that "...all versions of comparator theory make a number of surprising predictions, some of which appear hard to reconcile with empirical data."
References
Psychology of learning
Motivation | Comparator hypothesis | Biology | 520 |
63,371,137 | https://en.wikipedia.org/wiki/Pyrocene | Pyrocene is a proposed term for a new geologic epoch or age characterized by the influence of human-caused fire activity on Earth. The concept focuses on the many ways humans have applied and removed fire from the Earth, including the burning of fossil fuels and the technologies that have enabled people to leverage their influence and become the dominant species on the planet. The Pyrocene offers a fire-centric perspective on human history that is an alternative to or complementary term for the Anthropocene. Like the Anthropocene, the concept suggests that human activity has shaped the Earth's geology and identifies fire as humanity's primary tool for shaping the planet and its environment.
Pyrocene was first proposed by environmental historian Stephen J. Pyne in 2015. Since that time, it has been adopted by journalists and scholars in the fields of wildland fire, ecology, and environmental policy focused on the impacts of climate change and increased risk of wildfires around the globe. This research has focused on extreme wildfires in Hawaii, California, Spain, Portugal, Romania, Australia, and Canada. It has relied on the concept of Pyrocene to highlight how climate change, land-use changes, and direct human ignition have increased the frequency and intensity of these conflagrations.
Etymology
The word Pyrocene is formed from two Ancient Greek words: pyros, the Greek word for fire, and the suffix "-cene", from kainós, meaning "new". The concept is that this epoch is "entirely new". The suffix "-cene" is used for several epochs of the geologic time scale.
Overview of the concept
The concept of the Pyrocene argues that humanity's collective fire practices have become an informing presence and a geological force on Earth. Fire practices include all those activities that start and stop fires among living biomass, but also those that involve fossil biomass and those pyrotechnologies that enable people to leverage their influence.
The foundational premise suggests that humanity and fire formed an alliance that increased the range and power of each. While fire has been on Earth for over 420 million years, as long as terrestrial plants (Silurian-Devonian terrestrial revolution), humanity has altered its regimes and expanded it into a planetary presence. In return, fire has enabled and accompanied humans to every landscape on Earth and even the Moon. Humanity enjoys a species monopoly over fire's manipulation, establishing it as the keystone species for fire on Earth and making fire its unique ecological signature.
Together, humans and fire have upset biogeochemical cycles, including carbon, rewired energy flows, and, through the accumulation of emitted greenhouse gases into the atmosphere, perturbed global climate to such an extent that climate history has become a subnarrative of fire history.
According to Pyne's concept of the Pyrocene, there have been three kinds of fire. The first kind of fire was natural fire, which began with the evolution of plants. The second kind of fire was human fire, which the species domesticated for heat, light, cooking, and the control of landscapes. The third kind of fire involves extracting and burning fossil fuels through chemical combustion and industrial machinery. This third kind of fire has created by-products in the form of pollution and greenhouse gases on a scale that has overwhelmed the atmosphere and the planet's capacity to cope with the accelerated rate at which humans burn these fuel sources.
History of the concept
Fire historian Stephen Pyne first used the term in an article, "Fire Age," published by Aeon in 2015, then announced it in more fully developed form in 2019, again in Aeon. In 2019, Pyne used the concept to frame the September fire-focused issue of Natural History magazine and to provide a coda to a revision of Fire: A Brief History. In 2021 he condensed his notions into a small book, The Pyrocene: How We Created a Fire Age, and What Happens Next. The book was translated into Italian, Portuguese, Chinese, and Danish, and reviewed by both Nature and Science.
The Pyrocene has been frequently invoked in news articles about catastrophic wildfires in places like California, Hawaii, and Australia, and about the growing impact of wildfire smoke. Journalists have used the term in outlets including the New York Times, New York Magazine, Wired, and the Los Angeles Times. The New Yorker has included it in a review of recent books with a fire theme, and an overview has appeared in Scientific American.
In his original conception, Pyne imagined the Pyrocene as coextensive with the Holocene (the current geological epoch, beginning approximately 11,700 years ago), commencing as a fire-wielding species interacted more widely with a fire-warming Earth. He introduced the term "pyric transition" to describe the subsequent phase change that occurred when humans began to burn fossil biomass (or what he calls "lithic landscapes") in place of surface biomass ("living landscapes"). Burning in living landscapes has a long evolutionary history of ecological checks and balances. Burning fossil biomass lacks those baffles and barriers; the available sources overwhelm the sinks, unhinging the air, the seas, and terrestrial biotas. Both realms of combustion share an unbroken narrative of humanity's relationship to fire. Pyne has proposed the metaphor of a fire age, the fire-informed equivalent of an ice age, to suggest the cumulative magnitude of these effects. The Pyrocene is thus a successor epoch to the Pleistocene.
Interpretations
In the original conception of the term, Pyne advocated for a long Pyrocene that spans the entirety of the Holocene. He acknowledges, however, that the evidence and case for pre-industrial global change through anthropogenic fire is only recently emerging. Most commentators reserve the term for a shorter Pyrocene era that begins with humanity's use of fossil biomass, which changed the use of fire in quantity and kind. Others have modified the concept to refer to that still briefer era of accelerated fossil-fuel burning after World War II. Some consider it a feature best restricted to the 21st century with its eruption of serial conflagrations.
Wildlife ecologist Gavin Jones and others have defined the Pyrocene as "the modern, human-caused era of extreme fire characterized by greater negative impacts to society and ecosystems than in the past." This definition focuses on the more recent impact of human activity on fire regimes, including climate change and fire suppression activities. Jones has argued that fire is a key driver of evolution and situated his research in the framework of the Pyrocene to investigate how species are evolving in response to more frequent and intense wildfires.
Likewise, Australian fire researcher Hamish Clarke has situated the Pyrocene specifically within the recent era of extreme fires exacerbated by changing populations and land use patterns. Others have invoked Pyne's concept of the Pyrocene when discussing recent catastrophic wildfires but have interpreted it as a new era coming after the Holocene, and as an alternative concept to a short Anthropocene.
Interpretations that set the start of this era in the 21st century have suggested that the Earth has just begun to enter the Pyrocene and that the planet can escape an era defined by fire through actions such as the reduction of greenhouse gases in the atmosphere and changes to land use practices, including a selective restoration of a second fire.
References
Geologic time scales
Fire | Pyrocene | Chemistry | 1,502 |
8,591,345 | https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Auriga | This is the list of notable stars in the constellation Auriga, sorted by decreasing brightness.
See also
List of stars by constellation
References
List
Auriga | List of stars in Auriga | Astronomy | 33 |
13,738,930 | https://en.wikipedia.org/wiki/Half-title | The half-title or bastard title is a page carrying nothing but the title of a book—as opposed to the title page, which also lists subtitle, author, publisher and edition. The half-title is usually counted as the first page (p. i) in a printed book. The half-title can have some ornamentation of the book's title, or it can be plain text.
The purpose of the half-title page is to protect the full title page and its traditional counterpart, the frontispiece, during the bookbinding process. When the completed interior pages of the book are bound together to form the book block, the half-title page serves as the outermost layer of paper at the front of the book. Several hundreds or thousands of book blocks may need to be moved or stored for a period of time before they are bound into their covers, during which the half-title page protects the more intricately-designed pages that follow from rubbing and dust.
Archaic uses of the terms half-title and bastard title may refer to two separate pages on both sides of the full title page that have been conflated in modern usage. Theodore Low De Vinne distinguished between half-title (by his definition, a "caption title") and bastard title in his series The Practice of Typography, saying:
The half-title should not be confounded with the bastard title. The half-title follows the title and begins the first page of text; the bastard title, usually a single line in capital letters, precedes the full title, and takes a separate leaf with blank verso.
See also
Book design
References
Book design | Half-title | Engineering | 341 |
684,851 | https://en.wikipedia.org/wiki/National%20Development%20Plan | National Development Plan (NDP) is the title given by the Irish Government to a scheme of organised large-scale expenditure on (mainly) national infrastructure. The first five-year plan ran from 1988 to 1993, the second was a six-year plan from 1994 to 1999, and the third ran as a seven-year plan from 2000 to 2006. A fourth National Development Plan ran from 2007 to 2011, spending €70 million a day during this period. The main elements of the third plan were the development of a national motorway network between the major cities in Ireland; the upgrading of the rail network was a secondary scheme.
The ESRI conducted a review of the latest NDP in 2023.
Achievements
Road network
By the start of 2009 substantial progress had been made on the motorway network, with all sections of the five major inter-urban motorways either under construction or complete. The M1 motorway from Dublin towards Belfast has been completed as far as the border with Northern Ireland. The last section of N1/M1 route to be completed was the motorway/dual carriageway upgrade that crossed the border to become an upgraded A1 route as far as Newry. This was the first cross-border road project and was opened to traffic on 2 August 2007, thus completing the N1/M1 route.
The N7/M7 motorway from Dublin to Limerick was completed in 2010. Major work was undertaken to extend the motorway westwards from Portlaoise to Limerick from 2006 onwards when work on the Nenagh to Limerick section commenced. Construction commenced on the Portlaoise to Castletown section in 2007 while work commenced on the Castletown to Nenagh section in 2008. All sections were completed at various times in 2010. Other upgrades during this period include the Naas road upgrade, which was finished in August 2006. This involved the widening of a section of the route to three lanes in each direction and the removal of several at-grade junctions.
The M4/N4 from Dublin towards Sligo (and providing the link to the N6/M6 for Galway) now reaches as far as the Midlands. By September 2008, the M6 motorway was contiguous from Kinnegad to Athlone. The rest of the M6 route was completed in 2009 as far as Galway (where it will tie in with the proposed M17/M18 schemes). All sections of the M9 motorway to Waterford are also completed. The M9 Carlow bypass element opened in May 2008, while the Waterford-Knocktopher scheme opened in 2010. The M8 Dublin-Cork road motorway moved substantially towards completion in 2008 with the opening of the 37 km Cashel-Mitchelstown scheme and the redesignation of the Cashel bypass to motorway standard. In December 2008, the 40 km M8 Cullahill-Cashel scheme was opened to traffic. All other sections of this route were under construction by late 2007. The 16 km Fermoy to Mitchelstown segment opened to traffic on 25 May 2009. The M8 was completed in 2010 following the opening of the M7/M8 Portlaoise to Castletown/Cullahill route where the M8 intersects with the M7. The N11/M11 is also receiving upgrades along with the new controversial 47 km M3 motorway.
In 2010, upgrade works on the now-complete M50 motorway, Dublin's inner ring road, were finalised. The upgrade of parts of the M50 was appended to the NDP. One of the upgrades was the M50 Dublin Port Tunnel project, a major scheme involving tunnelling from the M1 north of the city centre through to the Docklands east of the city centre. The tunnel was officially opened by the Taoiseach, Bertie Ahern, on 20 December 2006. Other upgrades included changing the N4/M50 junction to a free-flow layout and upgrading the motorway to three or four lanes in each direction between many junctions. Works to upgrade the N7/M50 and M1/M50 junctions to free-flow or near free-flow layouts, and to upgrade the rest of the route to three- or four-lane motorway, were completed in late 2009 and during 2010.
By early 2009, some progress had been made on the Atlantic Corridor, which aims to link Letterkenny to Waterford via upgrades of the N15, N17, N18, N20 and N25. The upgrade would have seen dual-carriageway/motorway links between Letterkenny and Sligo, Sligo and Galway, Galway and Limerick, Limerick and Cork, and Cork and Waterford. In November 2008 work commenced on the 22 km section of the N18 HQDC from Crusheen to Gort. Various other sections of the route are at planning stages. The M20 Cork-Limerick route is at public consultation stage, and an EIS and motorway order are due to be published around April next year. The M17 from Galway to Tuam, along with the M18 from Gort (which ties in with the aforementioned M17), was completed in 2017. The N25 Waterford City bypass was completed on 19 October 2009, 10 months ahead of schedule. The N25 New Ross bypass was opened in January 2020.
Rail network
Dublin suburban routes benefited from large amounts of new rolling stock, in the form of suburban railcars. These operate north to Dundalk, northwest to Maynooth, southwest to Portlaoise and south to Gorey. The electrified section of the north-south route through Dublin had extra EMUs brought into service. Dublin also saw the opening of a new tram system, the Luas in 2004.
Major projects undertaken were the upgrade of Heuston Station in Dublin to nine platforms and the new railcar servicing depot in Drogheda. Many other stations, particularly the Dublin suburban stations, were upgraded and modernised, with lifts for example on new footbridges. Other measures to improve disabled access were implemented, and park and ride facilities were developed.
InterCity travel benefited from new suburban railcars freeing up InterCity rolling stock previously in use. 67 new locomotive-hauled InterCity carriages were introduced in 2006. Over 100 "regional railcars" were ordered, in the form DMUs currently in use on peripheral services that are not commuter-only. Services were increased on routes such as that from Limerick to Ennis.
Planned developments included enhancing suburban rail in Cork, where a section of rail was reopened to Midleton, east of the city. Other projects under discussion, some or all of which are unlikely to be undertaken, include:
A railway line from Dublin to Navan.
Developing a rail link to Dublin Airport (possibly as an underground Metro link).
Opening a new city centre railway terminus for Sligo and western commuter services at Spencer Dock.
Creating the DART Underground tunnel south from the Docklands railway station in Spencer Dock to Pearse Station and west to Heuston Station.
Reopening a Western Railway Corridor from Limerick to Sligo. The section from Ennis to Athenry was reopened in 2010, with the sections from Athenry to Tuam, as well as preserving right of way north of Tuam, cancelled indefinitely in 2011.
Reopening of Limerick-Tralee and Athlone-Mullingar railway lines.
See also
Transport in Ireland
Transport 21
History of roads in Ireland
Motorways in the Republic of Ireland
References
External links
Department of Public Expenditure & Reform Capital Investment Plan
Rail Users Ireland
Economy of the Republic of Ireland
Economic development
Human development
Social change
Regional development agencies | National Development Plan | Biology | 1,519 |
21,265,944 | https://en.wikipedia.org/wiki/HPE%20Systems%20Insight%20Manager | HPE Systems Insight Manager (HPE SIM, formerly HP Systems Insight Manager or HP SIM) is a proprietary systems management tool designed to help manage HPE servers.
HPE SIM is the basis for the HPE system management tools and is part of HPE's unified infrastructure management strategy.
Web-Based Enterprise Management capabilities were rolled into this product as of revision 5.6, along with the ability to analyze the System Event Log.
References
External links
HPE Systems Insight Manager Homepage
Systems Insight Manager
System administration | HPE Systems Insight Manager | Technology | 102 |
172,564 | https://en.wikipedia.org/wiki/Chinese%20postman%20problem | In graph theory and combinatorial optimization, Guan's route problem, the Chinese postman problem, postman tour or route inspection problem is to find a shortest closed path or circuit that visits every edge of an (connected) undirected graph at least once. When the graph has an Eulerian circuit (a closed walk that covers every edge once), that circuit is an optimal solution. Otherwise, the optimization problem is to find the smallest number of graph edges to duplicate (or the subset of edges with the minimum possible total weight) so that the resulting multigraph does have an Eulerian circuit. It can be solved in polynomial time, unlike the Travelling Salesman Problem which is NP-hard. It is different from the Travelling Salesman Problem in that the travelling salesman cannot repeat visited nodes and does not have to visit every edge.
The problem was originally studied by the Chinese mathematician Meigu Guan in 1960, whose Chinese paper was translated into English in 1962. The original name "Chinese postman problem" was coined in his honor; different sources credit the coinage either to Alan J. Goldman or Jack Edmonds, both of whom were at the U.S. National Bureau of Standards at the time.
A generalization takes as input any set T of evenly many vertices, and must produce as output a minimum-weight edge set in the graph whose odd-degree vertices are precisely those of T. This output is called a T-join. This problem, the T-join problem, is also solvable in polynomial time by the same approach that solves the postman problem.
Undirected solution and T-joins
The undirected route inspection problem can be solved in polynomial time by an algorithm based on the concept of a T-join.
Let T be a set of vertices in a graph. An edge set J is called a T-join if the collection of vertices that have an odd number of incident edges in J is exactly the set T. A T-join exists whenever every connected component of the graph contains an even number of vertices in T. The T-join problem is to find a T-join with the minimum possible number of edges or the minimum possible total weight.
For any T, a smallest T-join (when it exists) necessarily consists of paths that join the vertices of T in pairs. The paths will be such that the total length or total weight of all of them is as small as possible. In an optimal solution, no two of these paths will share any edge, but they may have shared vertices. A minimum T-join can be obtained by constructing a complete graph on the vertices of T, with edges that represent shortest paths in the given input graph, and then finding a minimum weight perfect matching in this complete graph. The edges of this matching represent paths in the original graph, whose union forms the desired T-join.
Both constructing the complete graph, and then finding a matching in it, can be done in O(n3) computational steps.
For the route inspection problem, T should be chosen as the set of all odd-degree vertices. By the assumptions of the problem, the whole graph is connected (otherwise no tour exists), and by the handshaking lemma it has an even number of odd vertices, so a T-join always exists. Doubling the edges of a T-join causes the given graph to become an Eulerian multigraph (a connected graph in which every vertex has even degree), from which it follows that it has an Euler tour, a tour that visits each edge of the multigraph exactly once. This tour will be an optimal solution to the route inspection problem.
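To make the undirected procedure concrete, here is a minimal Python sketch under stated simplifications: the minimum-weight perfect matching on the odd-degree vertices is found by brute force rather than by Edmonds' blossom algorithm (which is what actually gives the O(n3) bound), and the graph representation is chosen for this example only. The function returns the optimal tour length together with the odd-vertex pairs whose connecting shortest paths must be doubled; an explicit circuit then follows by running Hierholzer's algorithm on the resulting Eulerian multigraph.

```python
# Sketch of the undirected Chinese postman solution via a minimum T-join.
from math import inf

def all_pairs_shortest(nodes, edges):
    # Floyd-Warshall all-pairs shortest path lengths (undirected graph).
    d = {(u, v): (0 if u == v else inf) for u in nodes for v in nodes}
    for (u, v), w in edges.items():
        d[u, v] = min(d[u, v], w)
        d[v, u] = min(d[v, u], w)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if d[i, k] + d[k, j] < d[i, j]:
                    d[i, j] = d[i, k] + d[k, j]
    return d

def min_pairing(t, d):
    # Brute-force minimum-weight perfect matching of the vertex list t,
    # with shortest-path distances d as pair costs (|t| is always even).
    if not t:
        return 0.0, []
    first, rest = t[0], t[1:]
    best_cost, best_pairs = inf, None
    for i, mate in enumerate(rest):
        cost, pairs = min_pairing(rest[:i] + rest[i + 1:], d)
        if d[first, mate] + cost < best_cost:
            best_cost, best_pairs = d[first, mate] + cost, pairs + [(first, mate)]
    return best_cost, best_pairs

def postman_tour_length(nodes, edges):
    # Tour length = total edge weight + weight of the minimum T-join,
    # where T is the set of odd-degree vertices.
    degree = {u: 0 for u in nodes}
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = [u for u in nodes if degree[u] % 2 == 1]
    extra, pairs = min_pairing(odd, all_pairs_shortest(nodes, edges))
    return sum(edges.values()) + extra, pairs

# Square a-b-c-d plus a chord a-c: vertices a and c have odd degree, and
# the cheapest fix doubles the shortest a-c path (length 2), giving 6 + 2 = 8.
nodes = ["a", "b", "c", "d"]
edges = {("a", "b"): 1, ("b", "c"): 1, ("c", "d"): 1, ("d", "a"): 1, ("a", "c"): 2}
print(postman_tour_length(nodes, edges))
```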
Directed solution
On a directed graph, the same general ideas apply, but different techniques must be used. If the directed graph is Eulerian, one need only find an Euler cycle. If it is not, one must find T-joins, which in this case entails finding paths from vertices with an in-degree greater than their out-degree to those with an out-degree greater than their in-degree such that they would make in-degree of every vertex equal to its out-degree. This can be solved as an instance of the minimum-cost flow problem in which there is one unit of supply for every unit of excess in-degree, and one unit of demand for every unit of excess out-degree. As such it is solvable in O(|V|2|E|) time. A solution exists if and only if the given graph is strongly connected.
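A sketch of the directed case, assuming the networkx library is available, integer edge weights, and a strongly connected input digraph with a 'weight' attribute on every edge; the flow network construction follows the reduction just described, with node demands set to the out-degree/in-degree imbalance.

```python
# Directed Chinese postman: which edges to duplicate, via min-cost flow.
import networkx as nx

def duplications_for_euler(digraph):
    flow_net = nx.DiGraph()
    for u in digraph.nodes:
        # networkx convention: positive demand means the node must receive
        # net flow. A vertex with excess out-degree needs extra incoming
        # duplicated edges, so demand = out-degree - in-degree.
        flow_net.add_node(u, demand=digraph.out_degree(u) - digraph.in_degree(u))
    for u, v, data in digraph.edges(data=True):
        # Capacity is left unbounded: an edge may be re-walked any number
        # of times, and each re-walk costs the edge weight again.
        flow_net.add_edge(u, v, weight=data["weight"])
    flow = nx.min_cost_flow(flow_net)
    # flow[u][v] is how many extra times edge (u, v) must be traversed.
    return {(u, v): f for u, nbrs in flow.items() for v, f in nbrs.items() if f > 0}
```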
Applications
Various combinatorial problems have been reduced to the Chinese Postman Problem, including finding a maximum cut in a planar graph
and a minimum-mean length circuit in an undirected graph.
Variants
A few variants of the Chinese Postman Problem have been studied and shown to be NP-complete.
The windy postman problem is a variant of the route inspection problem in which the input is an undirected graph, but where each edge may have a different cost for traversing it in one direction than for traversing it in the other direction. In contrast to the solutions for directed and undirected graphs, it is NP-complete.
The Mixed Chinese postman problem: for this problem, some of the edges may be directed and can therefore only be visited from one direction. When the problem calls for a minimal traversal of a digraph (or multidigraph) it is known as the "New York Street Sweeper problem."
The k-Chinese postman problem: find k cycles all starting at a designated location such that each edge is traversed by at least one cycle. The goal is to minimize the cost of the most expensive cycle.
The "Rural Postman Problem": solve the problem with some edges not required.
See also
Travelling salesman problem
Arc routing
Mixed Chinese postman problem
References
External links
NP-complete problems
Computational problems in graph theory | Chinese postman problem | Mathematics | 1,197 |
189,371 | https://en.wikipedia.org/wiki/Lactobacillus | Lactobacillus is a genus of gram-positive, aerotolerant anaerobes or microaerophilic, rod-shaped, non-spore-forming bacteria. Until 2020, the genus Lactobacillus comprised over 260 phylogenetically, ecologically, and metabolically diverse species; a taxonomic revision of the genus assigned lactobacilli to 25 genera (see below).
Lactobacillus species constitute a significant component of the human and animal microbiota at a number of body sites, such as the digestive system, and the female genital system. In women of European ancestry, Lactobacillus species are normally a major part of the vaginal microbiota. Lactobacillus forms biofilms in the vaginal and gut microbiota, allowing them to persist during harsh environmental conditions and maintain ample populations. Lactobacillus exhibits a mutualistic relationship with the human body, as it protects the host against potential invasions by pathogens, and in turn, the host provides a source of nutrients. Lactobacilli are among the most common probiotic found in food such as yogurt, and it is diverse in its application to maintain human well-being, as it can help treat diarrhea, vaginal infections, and skin disorders such as eczema.
Metabolism
Lactobacilli are homofermentative, i.e. hexoses are metabolised by glycolysis to lactate as major end product, or heterofermentative, i.e. hexoses are metabolised by the Phosphoketolase pathway to lactate, CO2 and acetate or ethanol as major end products. Most lactobacilli are aerotolerant and some species respire if heme and menaquinone are present in the growth medium. Aerotolerance of lactobacilli is manganese-dependent and has been explored (and explained) in Lactiplantibacillus plantarum (previously Lactobacillus plantarum). Lactobacilli generally do not require iron for growth.
The Lactobacillaceae are the only family of the lactic acid bacteria (LAB) that includes homofermentative and heterofermentative organisms; in the Lactobacillaceae, homofermentative or heterofermentative metabolism is shared by all strains of a genus. Lactobacillus species are all homofermentative, do not express pyruvate formate lyase, and most species do not ferment pentoses. In L. crispatus, pentose metabolism is strain specific and acquired by lateral gene transfer.
Genomes
The genomes of lactobacilli are highly variable, ranging in size from 1.2 to 4.9 Mb (megabases). Accordingly, the number of protein-coding genes ranges from 1,267 to about 4,758 genes (in Fructilactobacillus sanfranciscensis and Lentilactobacillus parakefiri, respectively). Even within a single species there can be substantial variation. For instance, strains of L. crispatus have genome sizes ranging from 1.83 to 2.7 Mb, or 1,839 to 2,688 open reading frames. Lactobacillus contains a wealth of compound microsatellites in the coding region of the genome, which are imperfect and have variant motifs. Many lactobacilli also contain multiple plasmids. A recent study has revealed that plasmids encode the genes which are required for adaptation of lactobacilli to the given environment.
Species
The genus Lactobacillus comprises the following species:
Lactobacillus acetotolerans Entani et al. 1986
Lactobacillus acidophilus (Moro 1900) Hansen and Mocquot 1970 (Approved Lists 1980)
"Lactobacillus alvi" Kim et al. 2011
Lactobacillus amylolyticus Bohak et al. 1999
Lactobacillus amylovorus Nakamura 1981
Lactobacillus apis Killer et al. 2014
"Lactobacillus backi" Bohak et al. 2006
Lactobacillus bombicola Praet et al. 2015
Lactobacillus colini Zhang et al. 2017
Lactobacillus crispatus (Brygoo and Aladame 1955) Moore and Holdeman 1970 (Approved Lists 1980)
Lactobacillus delbrueckii (Leichmann 1896) Beijerinck 1901 (Approved Lists 1980)
Lactobacillus equicursoris Morita et al. 2010
Lactobacillus fornicalis Dicks et al. 2000
Lactobacillus gallinarum Fujisawa et al. 1992
Lactobacillus gasseri Lauer and Kandler 1980
Lactobacillus gigeriorum Cousin et al. 2012
"Lactobacillus ginsenosidimutans" Jung et al. 2013
Lactobacillus hamsteri Mitsuoka and Fujisawa 1988
Lactobacillus helsingborgensis Olofsson et al. 2014
Lactobacillus helveticus (Orla-Jensen 1919) Bergey et al. 1925 (Approved Lists 1980)
Lactobacillus hominis Cousin et al. 2013
Lactobacillus iners Falsen et al. 1999
Lactobacillus intestinalis (ex Hemme 1974) Fujisawa et al. 1990
Lactobacillus jensenii Gasser et al. 1970 (Approved Lists 1980)
"Lactobacillus jinshani" Yu et al. 2020
Lactobacillus johnsonii Fujisawa et al. 1992
Lactobacillus kalixensis Roos et al. 2005
Lactobacillus kefiranofaciens Fujisawa et al. 1988
Lactobacillus kimbladii Olofsson et al. 2014
Lactobacillus kitasatonis Mukai et al. 2003
Lactobacillus kullabergensis Olofsson et al. 2014
Lactobacillus melliventris Olofsson et al. 2014
Lactobacillus mulieris Rocha et al. 2020
Lactobacillus nasalidis Suzuki-Hashido et al. 2021
Lactobacillus panisapium Wang et al. 2018
Lactobacillus paragasseri Tanizawa et al. 2018
Lactobacillus pasteurii Cousin et al. 2013
Lactobacillus porci Kim et al. 2018
Lactobacillus psittaci Lawson et al. 2001
"Lactobacillus raoultii" Nicaise et al. 2018
Lactobacillus rodentium Killer et al. 2014
Lactobacillus rogosae Holdeman and Moore 1974 (Approved Lists 1980)
Lactobacillus taiwanensis Wang et al. 2009
"Lactobacillus thermophilus" Ayers and Johnson 1924
"Lactobacillus timonensis" Afouda et al. 2017
Lactobacillus ultunensis Roos et al. 2005
Lactobacillus xujianguonis Meng et al. 2020
Taxonomy
The genus Lactobacillus currently contains 44 species which are adapted to vertebrate hosts or to insects. In recent years, other members of the genus Lactobacillus (formerly known as the Leuconostoc branch of Lactobacillus) have been reclassified into the genera Atopobium, Carnobacterium, Weissella, Oenococcus, and Leuconostoc. The Pediococcus species P. dextrinicus has been reclassified as a Lapidilactobacillus dextrinicus and most lactobacilli were assigned to Paralactobacillus or one of the 23 novel genera of the Lactobacillaceae. Two websites inform on the assignment of species to the novel genera or species (http://www.lactobacillus.uantwerpen.be/; http://www.lactobacillus.ualberta.ca/).
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature and the phylogeny is based on whole-genome sequences.
Human health
Vaginal tract
Lactobacillus s.s. species are considered "keystone species" in the vaginal flora of reproductive-age women. Most, but not all, healthy women have vaginal floras dominated by one of four species of Lactobacillus: L. iners, L. crispatus, L. gasseri and L. jensenii. Other women have a more diverse mix of anaerobic microorganisms, though are still considered to have a healthy microbiome.
Interactions with pathogens
Lactobacilli produce lactic acid, which contributes to the vaginal acidity, and this lowered pH is generally accepted to be the main mechanism controlling the composition of the vaginal microflora.
Lactobacilli are also proposed to produce hydrogen peroxide, which inhibits the growth and virulence of the fungal pathogen Candida albicans in vitro, though this is argued not to be the main mechanism in vivo.
In vitro studies have also shown that lactobacilli reduce the pathogenicity of C. albicans through the production of organic acids and certain metabolites. Both the presence of metabolites, such as sodium butyrate, and the decrease in environmental pH caused by the organic acids reduce the growth of hyphae in C. albicans, which reduces its pathogenicity. Lactobacilli also reduce the pathogenicity of C. albicans by reducing C. albicans biofilm formation. Biofilm formation is reduced by both the competition from lactobacilli, and the formation of defective biofilms which is linked to the reduced hypha growth mentioned earlier. On the other hand, following antibiotic therapy, certain Candida species can suppress the regrowth of lactobacilli at body sites where they cohabitate, such as in the gastrointestinal tract.
In addition to its effects on C. albicans, Lactobacillus sp. also interact with other pathogens. For example, Limosilactobacillus reuteri (formerly Lactobacillus reuteri) can inhibit the growth of many different bacterial species by using glycerol to produce the antimicrobial substance called reuterin. Another example is Ligilactobacillus salivarius (formerly Lactobacillus salivarius), which interacts with many pathogens through the production of salivaricin B, a bacteriocin.
Probiotics
Because of the interactions with other microbes, fermenting bacteria like lactic acid bacteria (LAB) are now in use as probiotics with many applications.
Lactobacilli administered in combination with other probiotics benefits cases of irritable bowel syndrome (IBS), although the extent of efficacy is still uncertain. The probiotics help treat IBS by returning homeostasis when the gut microbiota experiences unusually high levels of opportunistic bacteria. In addition, lactobacilli can be administered as probiotics during cases of infection by the ulcer-causing bacterium Helicobacter pylori. Helicobacter pylori is linked to cancer, and antibiotic resistance impedes the success of current antibiotic-based eradication treatments. When probiotic lactobacilli are administered along with the treatment as an adjuvant, its efficacy is substantially increased and side effects may be lessened. In addition, lactobacilli with other probiotic organisms in ripened milk and yogurt aid the development of immunity in the intestinal mucosa in humans by raising the number of IgA(+) cells.
Gastroesophageal reflux disease (GERD) is a common condition associated with bile acid-induced oxidative stress and accumulation of reactive oxygen species (ROS) in esophageal tissues that cause inflammation and DNA damage. In an experimental model of GERD, Lactobacillus species (L. acidophilus, L. plantarum and L. fermentum) facilitated the repair of DNA damage caused by bile-induced ROS. For patients with GERD, there is significant interest in the anti-inflammatory effect of Lactobacilli that may help prevent progression to Barrett’s esophagus and esophageal adenocarcinoma.
Given the known microbial associations, lactobacilli are currently available as probiotics to help control urogenital and vaginal infections, such as bacterial vaginosis (BV). Lactobacilli produce bacteriocins to suppress pathogenic growth of certain bacteria, as well as lactic acid, which lowers the vaginal pH to around 4.5 or less, hampering the survival of other bacteria.
In children, lactobacilli such as Lacticaseibacillus rhamnosus (previously L. rhamnosus) are associated with a reduction of atopic eczema, also known as dermatitis, due to anti-inflammatory cytokines secreted by this probiotic bacteria.
Oral health
Some lactobacilli have been associated with cases of dental caries (cavities). Lactic acid can corrode teeth, and the Lactobacillus count in saliva has been used as a "caries test" for many years. Lactobacilli characteristically cause existing carious lesions to progress, especially those in coronal caries. The issue is, however, complex, as recent studies show probiotics can allow beneficial lactobacilli to populate sites on teeth, preventing streptococcal pathogens from taking hold and inducing dental decay. The scientific research of lactobacilli in relation to oral health is a new field, and only a few studies and results have been published. Some studies have provided evidence of certain lactobacilli that can act as probiotics for oral health. Some species, but not all, show evidence of defending against dental caries; owing to these studies, such probiotics have been incorporated into chewing gum and lozenges. There is also evidence that certain lactobacilli are beneficial in the defense against periodontal diseases such as gingivitis and periodontitis.
Food production
Species of Lactobacillus (and related genera) comprise many food fermenting lactic acid bacteria and are used as starter cultures in industry for controlled fermentation in the production of wine, yogurt, cheese, sauerkraut, pickles, beer, cider, kimchi, cocoa, kefir, and other fermented foods, as well as animal feeds and the bokashi soil amendment. Lactobacillus species are dominant in yogurt, cheese, and sourdough fermentations.
Their importance in fermentation comes from both metabolism of the food itself, as well as the inhibition of growth of other potentially pathogenic microbes. The antibacterial and antifungal activity of lactobacilli relies on production of bacteriocins and low molecular weight compounds that inhibits these microorganisms.
Sourdough bread is made either spontaneously, by taking advantage of the bacteria naturally present in flour, or by using a "starter culture", which is a symbiotic culture of yeast and lactic acid bacteria growing in a water and flour medium. The bacteria metabolize sugars into lactic acid, which lowers the pH of their environment and creates the signature sourness associated with yogurt, sauerkraut, etc.
In many traditional pickling processes, vegetables are submerged in brine, and salt-tolerant lactobacilli feed on natural sugars found in the vegetables. The resulting mix of salt and lactic acid is a hostile environment for other microbes, such as fungi, and the vegetables are thus preserved—remaining edible for long periods.
Lactobacilli, especially pediococci and L. brevis, are some of the most common beer spoilage organisms. They are, however, essential to the production of sour beers such as Belgian lambics and American wild ales, giving the beer a distinct tart flavor.
Scientist Elie Metchnikoff won a Nobel prize in 1908 for his work on LAB, the connection to food, and possible usage as a probiotic.
See also
Lactobacillus anticaries
Lactic acid fermentation
MRS agar
Pediococcus
Probiotics
Proteobiotics
Carbon monoxide-releasing molecules
References
External links
Lactobacillus at Milk the Funk Wiki
Lactobacillus at BacDive - the Bacterial Diversity Metadatabase
Lactobacillaceae
Food microbiology
Gut flora bacteria
Garde manger
Gram-positive bacteria
Bacteria genera | Lactobacillus | Biology | 3,624 |
16,568,424 | https://en.wikipedia.org/wiki/Garenoxacin | Garenoxacin (INN) is a quinolone antibiotic for the treatment of Gram-positive and Gram-negative bacterial infections.
Garenoxacin was discovered by Toyama Chemical Co., Ltd. of Tokyo, Japan, and is currently being marketed in Japan under the tradename Geninax. Schering-Plough holds worldwide rights for garenoxacin, except for Japan, South Korea, and China.
On February 13, 2006, Schering-Plough announced that the United States Food and Drug Administration had accepted the New Drug Application (NDA) for garenoxacin, and had been granted a 10-month review. As of 2015, however, it has not been approved in the US.
Schering-Plough later withdrew its application to the United States Food and Drug Administration (August 20, 2006) for approval of the antibiotic garenoxacin.
The European Medicines Agency (EMA) had also been formally notified by Schering-Plough Europe (July 25, 2007) of its decision to withdraw the application for a centralized marketing authorization for garenoxacin as well. Based on the CHMP review of the data regarding safety and efficacy (risk/benefit), the CHMP considered the application for garenoxacin to be unapprovable.
References
Quinolone antibiotics
Isoindolines
Phenol ethers
Cyclopropyl compounds
Carboxylic acids | Garenoxacin | Chemistry | 302 |
147,983 | https://en.wikipedia.org/wiki/Preliminary%20Design%20of%20an%20Experimental%20World-Circling%20Spaceship | The "Preliminary Design of an Experimental World-Circling Spaceship" was a 1946 proposal by Project RAND for a United States satellite program. Robert M. Salter, James E. Lipp and one other person at RAND served as the editors of the report.
The Preliminary Design of an Experimental World-Circling Spaceship states, "A satellite vehicle with appropriate instrumentation can be expected to be one of the most potent scientific tools of the Twentieth Century. The achievement of a satellite craft would produce repercussions comparable to the explosion of the atomic bomb."
References
External links
Preliminary Design of an Experimental World-Circling Spaceship
Spaceflight
Hypothetical spacecraft
RAND Corporation | Preliminary Design of an Experimental World-Circling Spaceship | Astronomy,Technology | 130 |
39,080,774 | https://en.wikipedia.org/wiki/Stabilized%20inverse%20Q%20filtering | Stabilized inverse Q filtering is a data processing technology for enhancing the resolution of reflection seismology images in which the stability of the method used is considered. Q is the anelastic attenuation factor or seismic quality factor, a measure of the energy loss as the seismic wave moves. Whenever we make computations with a seismic model, we have to consider the problem of instability and try to obtain a stabilized solution for seismic inverse Q filtering.
Basics
When a wave propagates through subsurface materials, both energy dissipation and velocity dispersion take place. Inverse Q filtering is a method to restore the energy loss due to dissipation (amplitude compensation) and to correct the time shift of the data due to velocity dispersion.
Wang has written a comprehensive book on the subject, Seismic Inverse Q Filtering (2008), and discusses how to stabilize the method. He writes:
“The phase-only inverse Q filter mentioned above is unconditionally stable. However, if including the accompanying amplitude compensation in the inverse Q filter, stability is a major issue of concern in implementation.”
Hale (1981) found that the inverse Q filter overcompensated the amplitudes of later events in a seismic trace. Therefore, in order to obtain reasonable amplitudes, the amplitude spectrum of the computed filter has to be clipped at some maximum gain to prevent undue amplification at later times. On the basis of this concept, Wang proposed a stabilized inverse Q filtering approach able to compensate simultaneously for both attenuation and dispersion. The unclipped version of Wang's solution, which is based on the theory of wavefield downward continuation, is presented in the Wikipedia article seismic inverse Q filtering. This outline computes a clipped version by introducing low-pass filtering; both Hale and Wang introduced low-pass filtering as a method of stabilization.
Calculations
We have the equation for seismic inverse Q filtering from Wang. Time is denoted τ, angular frequency ω, and i is the imaginary unit; Q_r and ω_r are reference values representing the damping and frequency at a chosen reference point. To demonstrate stability we can simply bypass the reference frequency and use a simpler equation, and the sum of the resulting plane waves gives the time-domain seismic signal.
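A plausible rendering of these three relations, following the notation of Wang's wavefield downward-continuation approach (the exponent γ = 1/(πQ_r) and the exact placement of the reference values are assumptions based on that source):

$$U(\tau+\Delta\tau,\omega)=U(\tau,\omega)\,\exp\!\left[\left|\frac{\omega}{\omega_r}\right|^{-\gamma}\frac{\omega\,\Delta\tau}{2Q_r}\right]\exp\!\left[\,i\left|\frac{\omega}{\omega_r}\right|^{-\gamma}\omega\,\Delta\tau\right],\qquad \gamma=\frac{1}{\pi Q_r} \tag{1}$$

$$U(\tau+\Delta\tau,\omega)=U(\tau,\omega)\,\exp\!\left(\frac{\omega\,\Delta\tau}{2Q}\right)\exp\!\left(i\,\omega\,\Delta\tau\right) \tag{2}$$

$$u(\tau)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}U(\tau,\omega)\,\mathrm{d}\omega \tag{2.b}$$

In (1) the first exponential compensates the amplitude loss (attenuation) and the second corrects the phase (dispersion); (2) is the continuation without the reference frequency, and (2.b) sums the plane waves into the time-domain trace.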
Figure 1 presents the solution of (2/2.b) for a seismic model with different Q values, and it clearly shows the numerical instability. The numbers on top of figure 1 correspond to the Q values (1 = Q1, 2 = Q2, etc.). The results are close to those presented in Wang's book (each trace is scaled individually, so artefacts are stronger on trace 5 than on trace 4). Wang, however, also considered phase compensation; the computations here are for amplitude-only inversion, since phase compensation is always stable and is therefore unnecessary for demonstrating instability.
Low-pass filtering and inverse Q filtering
In practice, the artefacts caused by numerical instability can be suppressed by a low-pass filter. Hale wrote that unclipped IQF of a seismogram amplified the Nyquist frequency by a factor of 7×10⁶ when the ratio t/Q = 10, and concluded that for typical seismograms longer than 1000 samples with a Q value around 100, data is seldom pure enough to warrant the use of unclipped IQF. Wang introduced a cutoff frequency, setting up a criterion for the stabilization by a mathematical formula. Considering Hale's article, however, it could be sufficient simply to remove the Nyquist frequency, that is, to place the cutoff frequency just below the Nyquist frequency. Fig. 2 shows a seismic model giving benchmark data for inverse Q filtering (red graph); IQF of this model amplifies the Nyquist frequency by a factor of a little less than 5×10⁶.
Figure 3 is the amplitude-only inverse Q filtered trace of figure 2 for Q = 50 (trace 4). The result clearly indicates the numerical instability: artefacts are seen throughout the whole trace.
To remove the artefacts, a low-pass filter is applied to the trace of figure 3. MATLAB's Signal Processing Toolbox was used to create the low-pass filter (a zero-phase IIR filter) shown in fig. 4, with a cutoff frequency at 120 Hz. The amplitude response of the filter is in blue and the phase in green.
The result of filtering the trace of fig. 3 with the low-pass filter of fig. 4 is shown in fig. 5. All artefacts are removed, leaving the impulse response, which can be compared with the original model in fig. 2.
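A minimal Python sketch of this workflow, using scipy in place of the MATLAB toolbox; the spike model, sampling interval, Q value and 120 Hz cutoff are illustrative assumptions rather than the article's exact data:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def inverse_q_amplitude(trace, dt, Q):
    """Amplitude-only inverse Q filter following equation (2).

    For each output time tau = k*dt, every frequency component is
    boosted by exp(w*tau/(2Q)) and the compensated plane waves are
    summed at that sample; the gain grows fastest at the Nyquist
    frequency, which is the source of the instability.
    """
    n = len(trace)
    w = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)   # angular frequencies
    U = np.fft.rfft(trace)
    out = np.empty(n)
    for k in range(n):
        gain = np.exp(w * (k * dt) / (2.0 * Q))
        out[k] = np.fft.irfft(U * gain, n)[k]    # plane-wave sum at tau
    return out

def lowpass_zero_phase(trace, dt, cutoff_hz, order=6):
    """Zero-phase IIR low-pass (Butterworth + filtfilt), cf. fig. 4."""
    b, a = butter(order, cutoff_hz * 2.0 * dt)   # cutoff / Nyquist
    return filtfilt(b, a, trace)

# Compensate a spike model for Q = 50, then suppress the Nyquist-band
# artefacts with a 120 Hz cutoff, as in figures 3-5.
dt = 0.002                                       # 2 ms sampling
trace = np.zeros(512)
trace[100] = 1.0
compensated = inverse_q_amplitude(trace, dt, Q=50)
stabilized = lowpass_zero_phase(compensated, dt, cutoff_hz=120.0)
```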
Frequency-response
A study of the frequency response of the traces of figure 3 (unclipped) and figure 5 (clipped) gives more insight into the filtering process. Figure 6 shows the magnitude of the frequency response as a function of digital frequency before filtering. This representation gives a good picture of what happens around the Nyquist frequency when the low-pass filter is applied: unstable energy accumulates close to the Nyquist frequency. After filtering, the unstable energy around the Nyquist frequency is completely removed, and fig. 7 gives the frequency response of the impulse response of fig. 5.
Notes
References
External links
Stabilized inverse Q filtering by Knut Sørsdal
Seismology measurement
Geophysics | Stabilized inverse Q filtering | Physics | 1,117 |
19,654,433 | https://en.wikipedia.org/wiki/Minesweeper%20%28video%20game%29 | Minesweeper is a logic puzzle video game genre generally played on personal computers. The game features a grid of clickable tiles, with hidden "mines" (depicted as naval mines in the original game) scattered throughout the board. The objective is to clear the board without detonating any mines, with help from clues about the number of neighboring mines in each field. Variants of Minesweeper have been made that expand on the basic concepts, such as Minesweeper X, Crossmines, and Minehunt. Minesweeper has been incorporated as a minigame in other games, such as RuneScape and Minecraft's 2015 April Fools' Day update.
The origin of Minesweeper is unclear. According to TechRadar, the first version of the game was 1990's Microsoft Minesweeper, but Eurogamer says Mined-Out by Ian Andrew (1983) was the first Minesweeper game. Curt Johnson, the creator of Microsoft Minesweeper, acknowledges that his game's design was borrowed from another game; it was not Mined-Out, but he does not remember which game it was.
Gameplay
Minesweeper is a puzzle video game. In the game, mines (that resemble naval mines in the classic theme) are scattered throughout a board, which is divided into cells. Cells have three states: unopened, opened and flagged. An unopened cell is blank and clickable, while an opened cell is exposed. Flagged cells are those marked by the player to indicate a potential mine location.
A player selects a cell to open it. If a player opens a mined cell, the game ends. Otherwise, the opened cell displays either a number, indicating the number of mines in the cells orthogonally or diagonally adjacent to it, or a blank tile (or "0"), in which case all adjacent non-mined cells are automatically opened. Players can also flag a cell, visualised by a flag placed on the location, to denote that they believe a mine to be there. Flagged cells are still considered unopened, and a player can click on them to open them. In some versions of the game, when the number of adjacent mines is equal to the number of adjacent flagged cells, all adjacent non-flagged unopened cells will be opened, a process known as chording.
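The opening rule above amounts to a flood fill from the selected cell. A minimal sketch, assuming a board encoded as a set of mine coordinates plus a dict of opened cells (this encoding is illustrative, not any particular implementation's):

```python
from collections import deque

def neighbors(r, c, rows, cols):
    """Yield the up-to-eight cells adjacent to (r, c)."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < rows and 0 <= c + dc < cols:
                yield r + dr, c + dc

def open_cell(r, c, mines, rows, cols, opened):
    """Open (r, c); return False on a mine, else flood-fill blanks.

    opened maps each revealed cell to its adjacent-mine count; a count
    of 0 (a blank tile) automatically opens all of its neighbors.
    """
    if (r, c) in mines:
        return False                       # mine hit: game over
    queue = deque([(r, c)])
    while queue:
        cell = queue.popleft()
        if cell in opened:
            continue
        count = sum(n in mines for n in neighbors(*cell, rows, cols))
        opened[cell] = count
        if count == 0:                     # blank: auto-open neighbors
            queue.extend(neighbors(*cell, rows, cols))
    return True
```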
Objective and strategy
A game of Minesweeper begins when the player first selects a cell on a board. In some variants the first click is guaranteed to be safe, and some further guarantee that all adjacent cells are safe as well. During the game, the player uses information given from the opened cells to deduce further cells that are safe to open, iteratively gaining more information to solve the board. The player is also given the number of remaining mines in the board, known as the minecount, which is the total number of mines minus the number of flagged cells (so the minecount can be negative if too many flags have been placed).
To win a game of Minesweeper, all non-mine cells must be opened without opening a mine. There is no score, but there is a timer recording the time taken to finish the game. Difficulty can be increased by adding mines or starting with a larger grid. Most variants of Minesweeper that are not played on a fixed board offer three default board configurations, usually known as Beginner, Intermediate, and Expert, in order of increasing difficulty. Beginner is usually on an 8×8 or 9×9 board containing 10 mines, Intermediate on a 16×16 board with 40 mines, and Expert on a 30×16 board with 99 mines; however, there is usually an option to customise board size and mine count.
History
According to TechRadar, Minesweeper was created by Microsoft in the 1990s, but Eurogamer commented that Minesweeper gained a lot of inspiration from a "lesser known, tightly designed game", Mined-Out by Ian Andrew for the ZX Spectrum in 1983. According to Andrew, Microsoft copied Mined-Out for Microsoft Minesweeper. The Microsoft version made its first appearance in 1990, in the Windows Entertainment Pack; the game was later included in the standard installation of Windows 3.1. The game was written by Robert Donner and Curt Johnson. Johnson stated that Microsoft Minesweeper's design was borrowed from another game, but it was not Mined-Out, and he does not remember which game it was. In 2001, a group called the International Campaign to Ban Winmine campaigned for the game's topic to be changed from landmines. The group commented that the game "is an offence against the victims of the mines". In response, a later version of Minesweeper, found in Windows Vista, offered a tileset with flowers replacing mines.
Another early version, predating the Microsoft release, is the SunOS (Unix) game Mines, released in 1987 and written by Tom Anderson. According to minesweeper.com, it was ported to the X Window System in 1990.
The game is frequently bundled with operating systems and desktop environments, including Minesweeper for IBM's OS/2, Microsoft Windows, KDE, GNOME and Palm OS. Microsoft Minesweeper was included by default in Windows until Windows 8 (2012). Microsoft replaced this with a free-to-play version of the game, downloadable from the Microsoft Store, which is "riddled with ads", according to How-To Geek.
Variants
Variants of Minesweeper have been made that expand on the basic concepts and add new game design elements. Minesweeper X is a clone of the Microsoft version with improved randomization and more statistics, and is popular with players of the game intending to reach a fast time. Arbiter and Viennasweeper are also clones, and are used similarly to Minesweeper X. Crossmines is a more complex version of the game's base idea, adding linked mines and irregular blocks. BeTrapped transposes the game into a mystery game setting. There are several direct clones of Microsoft Minesweeper available online.
Minesweeper Q was released in 2011 by the independent developer Spica and is another clone of the Microsoft version available as a mobile and tablet app for iOS users. It includes quick flagging and quick open mode. Users also have the option to change their board appearance from the classic gray/mines to flowers or clouds. Because this version is limited to use on mobile and iPad it is not ideal for players aiming to reach a fast time.
Minesweeper was made part of RuneScape through a minigame called Vinesweeper. The non-Japanese releases of Pokémon HeartGold and SoulSilver contained a variation of both Minesweeper and Picross. The video game Minecraft included a version of Minesweeper in its 2015 April Fools' Day update. The HP-48G graphing calculator includes a variant called "Minehunt", where the player has to move safely from one corner of the playfield to the other. The only clues given are how many mines are in the squares surrounding the player's current position. Google search includes a version of Minesweeper as an easter egg, available by searching the game's name.
A logic puzzle variant of minesweeper, suitable for playing on paper, starts with some squares already revealed. The player cannot reveal any more squares, but must instead mark the remaining mines correctly. Unlike the usual form of minesweeper, these puzzles usually have a unique solution. These puzzles appeared under the name "tentaizu" (天体図), Japanese for a star map, in Southwest Airlines' magazine Spirit in 2008–2009.
Competitive play
Competitive Minesweeper players aim to complete the game as fast as possible. The players memorize patterns to reduce times. Some players use a technique called the "1.5 click", which aids in revealing mines, while other players do not flag mines at all. The game is played competitively in tournaments. A community of dedicated players has emerged; this community was centralized on websites such as Minesweeper.info. As of 2015, according to the Guinness Book of World Records, the fastest time to complete all three difficulties of Minesweeper is 38.65 seconds by Kamil Murański in 2014.
Computational complexity
In 2000, Richard Kaye published a proof that it is NP-complete to determine whether a given grid of uncovered, correctly flagged, and unknown squares, the labels of the former also given, has an arrangement of mines consistent with the rules of the game. The argument is constructive: a method to quickly convert any Boolean circuit into such a grid that is consistent if and only if the circuit is satisfiable; membership in NP is established by using the arrangement of mines as a certificate. If, however, a minesweeper board is already guaranteed to be consistent, solving it is not known to be NP-complete, but it has been proven to be co-NP-complete. In the latter case, minesweeper exhibits a phase transition analogous to k-SAT: when more than 25% of squares are mined, solving a board requires guessing an exponentially unlikely set of mines. Kaye also proved that infinite Minesweeper is Turing-complete.
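The decision problem in Kaye's proof can be phrased as a consistency check. A brute-force sketch makes its exponential nature explicit (exponential in the number of unknown squares, as one would expect of a naive NP check; the board encoding here is an illustrative assumption):

```python
from itertools import product

def consistent(rows, cols, labels, flags, unknowns):
    """Is some mine placement over the unknown squares consistent?

    labels:   dict {(r, c): n} of uncovered squares and their counts
    flags:    set of correctly flagged squares (taken to be mines)
    unknowns: list of squares that are neither uncovered nor flagged
    """
    def nbrs(r, c):
        return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < rows and 0 <= c + dc < cols]

    # Try every subset of the unknown squares as the set of hidden mines.
    for bits in product((False, True), repeat=len(unknowns)):
        mines = flags | {sq for sq, b in zip(unknowns, bits) if b}
        if all(sum(n in mines for n in nbrs(*sq)) == count
               for sq, count in labels.items()):
            return True
    return False
```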
See also
Board puzzles with algebra of binary variables
References
Inline citations
General references
— An open-access paper explaining Kaye's NP-completeness result.
External links
Richard Kaye's Minesweeper pages
Microsoft Minesweeper playable in the browser on the Internet Archive
Casual games
Linux games
NP-complete problems
Puzzle video games
Video games about bomb disposal
Windows games | Minesweeper (video game) | Mathematics | 1,968 |
26,145,195 | https://en.wikipedia.org/wiki/Plastic | Plastics are a wide range of synthetic or semisynthetic materials that use polymers as a main ingredient. Their plasticity makes it possible for plastics to be molded, extruded, or pressed into solid objects of various shapes. This adaptability, combined with a wide range of other properties, such as being lightweight, durable, flexible, nontoxic, and inexpensive to produce, has led to their widespread use around the world. Most plastics are derived from natural gas and petroleum, and a small fraction from renewable materials; one such material is polylactic acid.
Between 1950 and 2017, 9.2 billion metric tons of plastic are estimated to have been made; more than half of this has been produced since 2004. In 2023, preliminary figures indicate that over 400 million metric tons of plastic were produced worldwide. If global trends on plastic demand continue, it is estimated that annual global plastic production will reach over 1.3 billion tons by 2060. Major applications include packaging (40%) and building/construction (20%).
The success and dominance of plastics since the early 20th century has had major benefits for mankind, ranging from medical devices to lightweight construction materials. The sewage systems of many countries rely on the resilience and adaptability of polyvinyl chloride. At the same time, plastics are the basis of widespread environmental concerns, due to their slow decomposition rate in natural ecosystems. Most plastic produced has not been reused; some is unsuitable for reuse, and much is captured in landfills or persists as plastic pollution. Particular concern focuses on microplastics. Marine plastic pollution, for example, creates garbage patches. Of all the plastic discarded so far, some 14% has been incinerated and less than 10% has been recycled.
In developed economies, about a third of plastic is used in packaging and roughly the same in buildings in applications such as piping, plumbing or vinyl siding. Other uses include automobiles (up to 20% plastic), furniture, and toys. In the developing world, the applications of plastic may differ; 42% of India's consumption is used in packaging. Worldwide, about 50 kg of plastic is produced annually per person, with production doubling every ten years.
The world's first fully synthetic plastic was Bakelite, invented in New York in 1907, by Leo Baekeland, who coined the term "plastics". Dozens of different types of plastics are produced today, such as polyethylene, which is widely used in product packaging, and polyvinyl chloride (PVC), used in construction and pipes because of its strength and durability. Many chemists have contributed to the materials science of plastics, including Nobel laureate Hermann Staudinger, who has been called "the father of polymer chemistry", and Herman Mark, known as "the father of polymer physics".
Etymology
The word plastic derives from the Greek πλαστικός (plastikos), meaning "capable of being shaped or molded;" in turn, it is from πλαστός (plastos) meaning "molded." As a noun, the word most commonly refers to the solid products of petrochemical-derived manufacturing.
The noun plasticity refers specifically here to the deformability of the materials used in the manufacture of plastics. Plasticity allows molding, extrusion or compression into a variety of shapes: films, fibers, plates, tubes, bottles and boxes, among many others. Plasticity also has a technical definition in materials science outside the scope of this article; it refers to the non-reversible change in form of solid substances.
Structure
Most plastics contain organic polymers. The vast majority of these polymers are formed from chains of carbon atoms, with or without the attachment of oxygen, nitrogen or sulfur atoms. These chains comprise many repeating units formed from monomers. Each polymer chain consists of several thousand repeating units. The backbone is the part of the chain that is on the main path, linking together a large number of repeat units. To customize the properties of a plastic, different molecular groups called side chains hang from this backbone; they are usually attached to the monomers before the monomers themselves are linked together to form the polymer chain. The structure of these side chains influences the properties of the polymer.
Classifications
Plastics are usually classified by the chemical structure of the polymer's backbone and side chains. Important groups classified in this way include the acrylics, polyesters, silicones, polyurethanes, and halogenated plastics. Plastics can be classified by the chemical process used in their synthesis, such as condensation, polyaddition, and cross-linking. They can also be classified by their physical properties, including hardness, density, tensile strength, thermal resistance, and glass transition temperature. Plastics can additionally be classified by their resistance and reactions to various substances and processes, such as exposure to organic solvents, oxidation, and ionizing radiation. Other classifications of plastics are based on qualities relevant to manufacturing or product design for a particular purpose. Examples include thermoplastics, thermosets, conductive polymers, biodegradable plastics, engineering plastics and elastomers.
Thermoplastics and thermosetting polymers
One important classification of plastics is the degree to which the chemical processes used to make them are reversible or not.
Thermoplastics do not undergo chemical change in their composition when heated and thus can be molded repeatedly. Examples include polyethylene (PE), polypropylene (PP), polystyrene (PS), and polyvinyl chloride (PVC).
Thermosets, or thermosetting polymers, can melt and take shape only once: after they have solidified, they stay solid and retain their shape permanently. If reheated, thermosets decompose rather than melt. Examples of thermosets include epoxy resin, polyimide, and Bakelite. The vulcanization of rubber is an example of this process. Before heating in the presence of sulfur, natural rubber (polyisoprene) is a sticky, slightly runny material, and after vulcanization, the product is dry and rigid.
Thermosets consist of closely cross-linked polymers (the cross-links are shown as red dots in the figure).
Elastomers consist of wide-meshed cross-linked polymers; the wide mesh allows the material to stretch under tensile load.
Thermoplastics consist of non-crosslinked polymers, often with a semi-crystalline structure (shown in red). They have a glass transition temperature and are fusible.
Commodity, engineering and high-performance plastics
Commodity plastics
Around 70% of global production is concentrated in six major polymer types, the so-called commodity plastics. Unlike most other plastics, these can often be identified by their resin identification code (RIC), as in the lookup sketch after this list:
Polyethylene terephthalate (PET or PETE)
High-density polyethylene (HDPE or PE-HD)
Polyvinyl chloride (PVC or V)
Low-density polyethylene (LDPE or PE-LD)
Polypropylene (PP)
Polystyrene (PS)
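A minimal lookup of the standard RIC numbering for these six resins; code 7 ("Other") is added for completeness:

```python
# Standard resin identification codes for the commodity plastics above.
RESIN_CODES = {
    1: "PET",    # polyethylene terephthalate
    2: "HDPE",   # high-density polyethylene
    3: "PVC",    # polyvinyl chloride
    4: "LDPE",   # low-density polyethylene
    5: "PP",     # polypropylene
    6: "PS",     # polystyrene
    7: "Other",  # resins outside the six commodity classes
}

print(RESIN_CODES[4])  # -> LDPE
```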
Polyurethanes (PUR) and PP&A fibers are often also included as major commodity classes, although they usually lack RICs, as they are chemically quite diverse groups. These materials are inexpensive, versatile and easy to work with, making them the preferred choice for the mass production of everyday objects. Their biggest single application is in packaging, with some 146 million tonnes used this way in 2015, equivalent to 36% of global production. Due to their dominance, many of the properties and problems commonly associated with plastics, such as pollution stemming from their poor biodegradability, are ultimately attributable to commodity plastics.
A huge number of plastics exist beyond the commodity plastics, with many having exceptional properties.
Engineering plastics
Engineering plastics are more robust and are used to manufacture products such as vehicle parts, building and construction materials, and some machine parts. In some cases, they are polymer blends formed by mixing different plastics together (ABS, HIPS etc.). Engineering plastics can replace metals in vehicles, lowering their weight and improving fuel efficiency by 6–8%. Roughly 50% of the volume of modern cars is made of plastic, but this only accounts for 12–17% of the vehicle weight.
Acrylonitrile butadiene styrene (ABS): electronic equipment cases (e.g., computer monitors, printers, keyboards) and drainage pipes
High-impact polystyrene (HIPS): refrigerator liners, food packaging, and vending cups
Polycarbonate (PC): compact discs, eyeglasses, riot shields, security windows, traffic lights, and lenses
Polycarbonate + acrylonitrile butadiene styrene (PC + ABS): a blend of PC and ABS that creates a stronger plastic used in car interior and exterior parts, and in mobile phone bodies
Polyethylene + acrylonitrile butadiene styrene (PE + ABS): a slippery blend of PE and ABS used in low-duty dry bearings
Polymethyl methacrylate (PMMA) (acrylic): contact lenses (of the original "hard" variety), glazing (best known in this form by its various trade names around the world; e.g. Perspex, Plexiglas, and Oroglas), fluorescent-light diffusers, and rear light covers for vehicles. It also forms the basis of artistic and commercial acrylic paints, when suspended in water with the use of other agents.
Silicones (polysiloxanes): heat-resistant resins used mainly as sealants but also used for high-temperature cooking utensils and as a base resin for industrial paints
Urea-formaldehyde (UF): one of the aminoplasts used as a multi-colorable alternative to phenolics: used as a wood adhesive (for plywood, chipboard, hardboard) and electrical switch housings
High-performance plastics
High-performance plastics are usually expensive, with their use limited to specialized applications that make use of their superior properties.
Aramids: best known for their use in the manufacture of body armor, this class of heat-resistant and strong synthetic fibers also has applications in aerospace and military and includes Kevlar, Nomex, and Twaron.
Ultra-high-molecular-weight polyethylenes (UHMWPE)
Polyetheretherketone (PEEK): strong, chemical- and heat-resistant thermoplastic; its biocompatibility allows for use in medical implant applications and aerospace moldings. It is one of the most expensive commercial polymers.
Polyetherimide (PEI) (Ultem): a high-temperature, chemically stable polymer that does not crystallize
Polyimide: a high-temperature plastic used in materials such as Kapton tape
Polysulfone: high-temperature melt-processable resin used in membranes, filtration media, water heater dip tubes and other high-temperature applications
Polytetrafluoroethylene (PTFE), or Teflon: heat-resistant, low-friction coatings used in non-stick surfaces for frying pans, plumber's tape, and water slides
Polyamide-imide (PAI): high-performance engineering plastic extensively used in high-performance gears, switches, transmissions, and other automotive components and aerospace parts
Amorphous and crystalline plastics
Many plastics are completely amorphous (without a highly ordered molecular structure), including thermosets, polystyrene, and poly(methyl methacrylate) (PMMA). Crystalline plastics exhibit a pattern of more regularly spaced atoms, such as high-density polyethylene (HDPE), polybutylene terephthalate (PBT), and polyether ether ketone (PEEK). However, some plastics are partially amorphous and partially crystalline in molecular structure, giving them both a melting point and one or more glass transitions (the temperature above which the extent of localized molecular flexibility is substantially increased). These so-called semi-crystalline plastics include polyethylene, polypropylene, polyvinyl chloride, polyamides (nylons), polyesters and some polyurethanes.
Conductive polymers
Intrinsically conducting polymers (ICPs) are organic polymers that conduct electricity. While a conductivity of up to 80 kilosiemens per centimeter (kS/cm) in stretch-oriented polyacetylene has been achieved, it does not approach that of most metals. For example, copper has a conductivity of several hundred kS/cm.
Biodegradable plastics and bioplastics
Biodegradable plastics
Biodegradable plastics are plastics that degrade (break down) upon exposure to environmental factors such as sunlight, ultraviolet radiation, moisture, bacteria, enzymes, or wind abrasion. Attacks by insects, such as waxworms and mealworms, can also be considered forms of biodegradation. Aerobic degradation requires the plastic to be exposed at the surface, whereas anaerobic degradation would be effective in landfill or composting systems. Some companies produce biodegradable additives to further promote biodegradation. Although starch powder can be added as a filler to facilitate degradation of some plastics, such treatment does not lead to complete breakdown. Some researchers have genetically engineered bacteria to synthesize completely biodegradable plastics, such as polyhydroxybutyrate (PHB); however, these were still relatively expensive as of 2021.
Bioplastics
While most plastics are produced from petrochemicals, bioplastics are made substantially from renewable plant materials like cellulose and starch. Due both to the finite limits of fossil fuel reserves and to rising levels of greenhouse gases caused primarily by the burning of those fuels, the development of bioplastics is a growing field. Global production capacity for bio-based plastics is estimated at 327,000 tonnes per year. In contrast, global production of polyethylene (PE) and polypropylene (PP), the world's leading petrochemical-derived polyolefins, was estimated at over 150 million tonnes in 2015.
Plastic industry
The plastic industry includes the global production, compounding, conversion and sale of plastic products. Although the Middle East and Russia produce most of the required petrochemical raw materials, the production of plastic is concentrated in the global East and West. The plastic industry comprises a huge number of companies and can be divided into several sectors:
Production
Between 1950 and 2017, 9.2 billion tonnes of plastic are estimated to have been made, with more than half of this having been produced since 2004. Since the birth of the plastic industry in the 1950s, global production has increased enormously, reaching 400 million tonnes a year in 2021; this is up from 381 million metric tonnes in 2015 (excluding additives). From the 1950s, rapid growth occurred in the use of plastics for packaging, in building and construction, and in other sectors. If global trends on plastic demand continue, it is estimated that by 2050 annual global plastic production will exceed 1.1 billion tonnes.
Plastics are produced in chemical plants by the polymerization of their starting materials (monomers), which are almost always petrochemical in nature. Such facilities are normally large and are visually similar to oil refineries, with sprawling pipework running throughout. The large size of these plants allows them to exploit economies of scale. Despite this, plastic production is not particularly monopolized, with about 100 companies accounting for 90% of global production. This includes a mixture of private and state-owned enterprises. Roughly half of all production takes place in East Asia, with China being the largest single producer. Major international producers include:
Dow Chemical
LyondellBasell
ExxonMobil
SABIC
BASF
Sibur
Shin-Etsu Chemical
Indorama Ventures
Sinopec
Braskem
Historically, Europe and North America have dominated global plastics production. However, since 2010 Asia has emerged as a significant producer, with China accounting for 31% of total plastic resin production in 2020. Regional differences in the volume of plastics production are driven by user demand, the price of fossil fuel feedstocks, and investments made in the petrochemical industry. For example, since 2010 over US$200 billion has been invested in the United States in new plastic and chemical plants, stimulated by the low cost of raw materials. In the European Union (EU), too, heavy investments have been made in the plastics industry, which employs over 1.6-million people with a turnover of more than 360 billion euros per year. In China in 2016 there were over 15,000 plastic manufacturing companies, generating more than US$366 billion in revenue.
In 2017, the global plastics market was dominated by thermoplastics, polymers that can be melted and recast. Thermoplastics include polyethylene (PE), polyethylene terephthalate (PET), polypropylene (PP), polyvinyl chloride (PVC), polystyrene (PS) and synthetic fibers, which together represent 86% of all plastics.
Compounding
Plastic is not sold as a pure unadulterated substance, but is instead mixed with various chemicals and other materials, which are collectively known as additives. These are added during the compounding stage and include substances such as stabilizers, plasticizers and dyes, which are intended to improve the lifespan, workability or appearance of the final item. In some cases, this can involve mixing different types of plastic together to form a polymer blend, such as high impact polystyrene. Large companies may do their own compounding prior to production, but some producers have it done by a third party. Companies that specialize in this work are known as compounders.
The compounding of thermosetting plastic is relatively straightforward, as it remains liquid until it is cured into its final form. For thermosoftening materials, which are used to make the majority of products, it is necessary to melt the plastic in order to mix in the additives. This involves heating it to anywhere between roughly 150 and 300 °C, depending on the polymer. Molten plastic is viscous and exhibits laminar flow, leading to poor mixing. Compounding is therefore done using extrusion equipment, which can supply the necessary heat and mixing to give a properly dispersed product.
The concentrations of most additives are usually quite low; however, high levels can be added to create masterbatch products. The additives in these are concentrated but still properly dispersed in the host resin. Masterbatch granules can be mixed with cheaper bulk polymer and release their additives during processing to give a homogeneous final product. This can be cheaper than working with a fully compounded material and is particularly common for the introduction of color.
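The dilution arithmetic behind this is straightforward; a small sketch, where the 40% masterbatch loading and 2% target concentration are hypothetical values chosen for illustration:

```python
def letdown_ratio(masterbatch_pct, target_pct):
    """Parts of bulk polymer per one part masterbatch (by mass) needed
    to dilute a masterbatch to the target additive concentration."""
    return masterbatch_pct / target_pct - 1.0

# A 40% colorant masterbatch diluted to 2% in the final compound needs
# 19 parts bulk resin for every part of masterbatch.
print(letdown_ratio(40.0, 2.0))  # -> 19.0
```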
Converting
Companies that produce finished goods are known as converters (sometimes processors). The vast majority of plastics produced worldwide are thermosoftening and must be heated until molten in order to be molded. Various sorts of extrusion equipment exist which can then form the plastic into almost any shape.
Film blowing - Plastic films (carrier bags, sheeting)
Blow molding - Small thin-walled hollow objects in large quantities (drinks bottles, toys)
Rotational molding - Large thick-walled hollow objects (IBC tanks)
Injection molding - Solid objects (phone cases, keyboards)
Spinning - Produces fibers (nylon, spandex etc.)
For thermosetting materials the process is slightly different: the plastics are liquid to begin with and must be cured to give solid products, but much of the equipment is broadly similar.
The most commonly produced plastic consumer products include packaging made from LDPE (e.g. bags, containers, food packaging film), containers made from HDPE (e.g. milk bottles, shampoo bottles, ice cream tubs), and PET (e.g. bottles for water and other drinks). Together these products account for around 36% of plastics use in the world. Most of them (e.g. disposable cups, plates, cutlery, takeaway containers, carrier bags) are used for only a short period, many for less than a day. The use of plastics in building and construction, textiles, transportation and electrical equipment also accounts for a substantial share of the plastics market. Plastic items used for such purposes generally have longer life spans. They may be in use for periods ranging from around five years (e.g. textiles and electrical equipment) to more than 20 years (e.g. construction materials, industrial machinery).
Plastic consumption differs among countries and communities, with some form of plastic having made its way into most people's lives. North America (i.e. the North American Free Trade Agreement or NAFTA region) accounts for 21% of global plastic consumption, closely followed by China (20%) and Western Europe (18%). In North America and Europe, there is high per capita plastic consumption (94 kg and 85 kg/capita/year, respectively). In China, there is lower per capita consumption (58 kg/capita/year), but high consumption nationally because of its large population.
Gallery
Applications
The largest application for plastics is as packaging materials, but they are used in a wide range of other sectors, including: construction (pipes, gutters, door and windows), textiles (stretchable fabrics, fleece), consumer goods (toys, tableware, toothbrushes), transportation (headlights, bumpers, body panels, wing mirrors), electronics (phones, computers, televisions) and as machine parts. In optics, plastics are used to manufacture aspheric lenses.
Additives
Additives are chemicals blended into plastics to improve their performance or appearance; they are one of the reasons why plastic is used so widely. Plastics are composed of chains of polymers. Many different chemicals are used as plastic additives, and a randomly chosen plastic product generally contains around 20 of them. The identities and concentrations of additives are generally not listed on products.
In the EU, over 400 additives are used in high volumes. In a global market analysis, 5,500 additives were found. At a minimum, all plastic contains some polymer stabilizers, which permit it to be melt-processed (molded) without suffering polymer degradation. Additives in polyvinyl chloride (PVC), used widely for sanitary plumbing, can constitute up to 80% of the total volume. Unadulterated plastic ("barefoot resin") is rarely sold.
Leaching
Additives may be weakly bound to the polymers or react in the polymer matrix. Although additives are blended into plastic they remain chemically distinct from it and can gradually leach back out during normal use, when in landfills, or following improper disposal in the environment. Additives may also degrade to form other compounds that could be more benign or more toxic. Plastic fragmentation into microplastics and nanoplastics can allow chemical additives to move in the environment far from the point of use. Once released, some additives and derivatives may persist in the environment and bioaccumulate in organisms. They can have adverse effects on human health and biota. A recent review by the United States Environmental Protection Agency (US EPA) revealed that out of 3,377 chemicals potentially associated with plastic packaging and 906 likely associated with it, 68 were ranked by ECHA as "highest for human health hazards" and 68 as "highest for environmental hazards".
Recycling
As additives change the properties of plastics they have to be considered during recycling. Presently, almost all recycling is performed by simply remelting and fabricating used plastic into new items. Additives present risks in recycled products due to their difficulty to remove. When plastic products are recycled, it is highly likely that the additives will be integrated into the new products. Plastic waste, even if it is all of the same polymer type, will contain varying types and amounts of additives. Mixing these together can give a material with inconsistent properties, which can be unappealing to industry. For example, mixing different colored plastics with different plastic colorants together can produce a discolored or brown material and for this reason plastic is usually sorted both by polymer type and color prior to recycling.
Lack of transparency and reporting across the value chain often results in lack of knowledge concerning the chemical profile of the final products. For example, products containing brominated flame retardants have been incorporated into new plastic products. Flame retardants are a group of chemicals used in electronic and electrical equipment, textiles, furniture and construction materials which should not be present in food packaging or child care products. A recent study found brominated dioxins as unintentional contaminants in toys made from recycled plastic electronic waste that contained brominated flame retardants. Brominated dioxins have been found to exhibit toxicity similar to that of chlorinated dioxins. They can have negative developmental effects and negative effects on the nervous system and interfere with mechanisms of the endocrine system.
Health effects
Plastics have proliferated in part because they are relatively benign. They are not acutely toxic, in large part because they are insoluble and/or indigestible owing to their large molecular weight. Their degradation products also are rarely toxic. The same cannot be said about some additives, which tend to be of lower molecular weight.
Controversies associated with plastics often relate to their additives, some of which are potentially harmful. For example, some flame retardants, such as octabromodiphenyl ether and pentabromodiphenyl ether, are unsuitable for food packaging. Other harmful additives include cadmium, chromium, lead and mercury (the last regulated under the Minamata Convention on Mercury); although previously used in plastic production, these are now banned in many jurisdictions. However, they are still routinely found in some plastic packaging, including for food.
Poor countries
Additives can also be problematic if waste is burned, especially when burning is uncontrolled or takes place in low-technology incinerators, as is common in many developing countries. Incomplete combustion can cause emissions of hazardous substances such as acid gases and ash, which can contain persistent organic pollutants (POPs) such as dioxins.
A number of additives identified as hazardous to humans and/or the environment are regulated internationally. The Stockholm Convention on Persistent Organic Pollutants is a global treaty to protect human health and the environment from chemicals that remain intact in the environment for long periods, become widely distributed geographically, accumulate in the fatty tissue of humans and wildlife, and have harmful impacts on human health or on the environment. The use of bisphenol A (BPA) in plastic baby bottles is banned in many parts of the world but is not restricted in some low-income countries.
Animals
In 2023, plasticosis, a new disease caused by the ingestion of plastic waste, was discovered in seabirds. Birds affected with this disease were found to have scarred and inflamed digestive tracts, which can impair their ability to digest food. "When birds ingest small pieces of plastic, they found, it inflames the digestive tract. Over time, the persistent inflammation causes tissues to become scarred and disfigured, affecting digestion, growth and survival."
Types of additive
Health effects
Plastics per se have low toxicity due to their insolubility in water and because they have a large molecular weight. They are biochemically inert. Additives in plastic products can be more problematic. For example, plasticizers like adipates and phthalates are often added to brittle plastics like PVC to make them pliable. Traces of these compounds can leach out of the product. Owing to concerns over the effects of such leachates, the EU has restricted the use of DEHP (di-2-ethylhexyl phthalate) and other phthalates in some applications, and the US has limited the use of DEHP, DBP, BBP, DINP, DIDP, and DnOP in children's toys and child-care articles through the Consumer Product Safety Improvement Act. Some compounds leaching from polystyrene food containers have been proposed to interfere with hormone functions and are suspected human carcinogens (cancer-causing substances). Other chemicals of potential concern include alkylphenols.
While a finished plastic may be non-toxic, the monomers used in the manufacture of its parent polymers may be toxic. In some cases, small amounts of those chemicals can remain trapped in the product unless suitable processing is employed. For example, the World Health Organization's International Agency for Research on Cancer (IARC) has recognized vinyl chloride, the precursor to PVC, as a human carcinogen.
Bisphenol A (BPA)
Some plastic products degrade to chemicals with estrogenic activity. The primary building block of polycarbonates, bisphenol A (BPA), is an estrogen-like endocrine disruptor that may leach into food. Research in Environmental Health Perspectives finds that BPA leached from the lining of tin cans, dental sealants and polycarbonate bottles can increase the body weight of lab animals' offspring. A more recent animal study suggests that even low-level exposure to BPA results in insulin resistance, which can lead to inflammation and heart disease. In January 2010, the Los Angeles Times reported that the US Food and Drug Administration (FDA) was spending $30 million to investigate indications of BPA's link to cancer. Bis(2-ethylhexyl) adipate, present in plastic wrap based on PVC, is also of concern, as are the volatile organic compounds present in new car smell. The EU has a permanent ban on the use of phthalates in toys. In 2009, the US government banned certain types of phthalates commonly used in plastic.
Environmental effects
Because the chemical structure of most plastics renders them durable, they are resistant to many natural degradation processes. Much of this material may persist for centuries or longer, given the demonstrated persistence of structurally similar natural materials such as amber.
Estimates differ as to the amount of plastic waste produced in the last century. By one estimate, one billion tons of plastic waste have been discarded since the 1950s. Others estimate a cumulative human production of 8.3-billion tons of plastic, of which 6.3-billion tons is waste, with only 9% getting recycled.
It is estimated that this waste is made up of 81% polymer resin and 13% polymer fibers, with additives accounting for the remainder. In 2018 more than 343 million tons of plastic waste were generated, 90% of which was post-consumer plastic waste (industrial, agricultural, commercial and municipal). The rest was pre-consumer waste from resin production and the manufacture of plastic products (e.g. materials rejected due to unsuitable color, hardness, or processing characteristics).
The Ocean Conservancy reported that China, Indonesia, Philippines, Thailand, and Vietnam dump more plastic into the sea than all other countries combined. The rivers Yangtze, Indus, Yellow, Hai, Nile, Ganges, Pearl, Amur, Niger, and Mekong "transport 88% to 95% of the global [plastics] load into the sea."
The presence of plastics, particularly microplastics, within the food chain is increasing. In the 1960s microplastics were observed in the guts of seabirds, and since then have been found in increasing concentrations. The long-term effects of plastics in the food chain are poorly understood. In 2009 it was estimated that 10% of modern waste was plastic, although estimates vary by region. Meanwhile, 50% to 80% of debris in marine areas is plastic. Plastic is also widely used in agriculture, and there is more plastic in soil than in the oceans. The presence of plastic in the environment harms ecosystems and human health.
Research on the environmental impacts has typically focused on the disposal phase. However, the production of plastics is also responsible for substantial environmental, health and socioeconomic impacts.
Prior to the Montreal Protocol, CFCs had been commonly used in the manufacture of the plastic polystyrene, the production of which had contributed to depletion of the ozone layer.
Efforts to minimize environmental impact of plastics may include lowering of plastics production and use, waste- and recycling-policies, and the proactive development and deployment of alternatives to plastics such as for sustainable packaging.
Microplastics
Decomposition of plastics
Plastics degrade by a variety of processes, the most significant of which is usually photo-oxidation; their chemical structure determines their fate. Marine degradation of polymers takes much longer as a result of the saline environment and the cooling effect of the sea, contributing to the persistence of plastic debris in certain environments. Recent studies have shown, however, that plastics in the ocean decompose faster than previously thought, due to exposure to sun, rain, and other environmental conditions, resulting in the release of toxic chemicals such as bisphenol A. However, due to the increased volume of plastics in the ocean, decomposition has slowed down. The Ocean Conservancy has predicted the decomposition rates of several plastic products: an estimated 50 years for a foam plastic cup, 400 years for a plastic beverage holder, 450 years for a disposable diaper, and 600 years for fishing line.
Microbial species capable of degrading plastics are known to science, some of which are potentially useful for disposal of certain classes of plastic waste.
In 1975, a team of Japanese scientists studying ponds containing waste water from a nylon factory discovered a strain of Flavobacterium that digests certain byproducts of nylon 6 manufacture, such as the linear dimer of 6-aminohexanoate. Nylon 4 (polybutyrolactam) can be degraded by the ND-10 and ND-11 strains of Pseudomonas sp. found in sludge, resulting in GABA (γ-aminobutyric acid) as a byproduct.
Several species of soil fungi can consume polyurethane, including two species of the Ecuadorian fungus Pestalotiopsis. They can consume polyurethane both aerobically and anaerobically (such as at the bottom of landfills).
Methanogenic microbial consortia degrade styrene, using it as a carbon source. Pseudomonas putida can convert styrene oil into various biodegradable polyhydroxyalkanoates.
Microbial communities isolated from soil samples mixed with starch have been shown to be capable of degrading polypropylene.
The fungus Aspergillus fumigatus effectively degrades plasticized PVC. Phanerochaete chrysosporium has been grown on PVC in a mineral salt agar. P. chrysosporium, Lentinus tigrinus, A. niger, and A. sydowii can also effectively degrade PVC.
Phenol-formaldehyde, commonly known as Bakelite, is degraded by the white rot fungus P. chrysosporium.
Acinetobacter has been found to partially degrade low-molecular-weight polyethylene oligomers. When used in combination, Pseudomonas fluorescens and Sphingomonas can degrade over 40% of the weight of plastic bags in less than three months. The thermophilic bacterium Brevibacillus borstelensis (strain 707) was isolated from a soil sample and found capable of using low-density polyethylene as a sole carbon source when incubated at 50 °C. Pre-exposure of the plastic to ultraviolet radiation broke chemical bonds and aided biodegradation; the longer the period of UV exposure, the greater the promotion of the degradation.
Hazardous molds have been found aboard space stations that degrade rubber into a digestible form.
Several species of yeasts, bacteria, algae and lichens have been found growing on synthetic polymer artifacts in museums and at archaeological sites.
In the plastic-polluted waters of the Sargasso Sea, bacteria have been found that consume various types of plastic; however, it is unknown to what extent these bacteria effectively clean up poisons rather than simply release them into the marine microbial ecosystem.
Plastic-eating microbes also have been found in landfills.
Nocardia can degrade PET with an esterase enzyme.
The fungus Geotrichum candidum, found in Belize, has been found to consume the polycarbonate plastic found in CDs.
Futuro houses are made of fiberglass-reinforced polyesters, polyester-polyurethane, and PMMA. One such house was found to be harmfully degraded by Cyanobacteria and Archaea.
Recycling
Pyrolysis
By heating to above 500 °C (932 °F) in the absence of oxygen (pyrolysis), plastics can be broken down into simpler hydrocarbons, which can be used as feedstocks for the fabrication of new plastics. These hydrocarbons can also be used as fuels.
Greenhouse gas emissions
According to the Organisation for Economic Co-operation and Development, plastic contributed greenhouse gases in the equivalent of 1.8 billion tons of carbon dioxide (CO2) to the atmosphere in 2019, 3.4% of global emissions. They say that by 2060, plastic could emit 4.3 billion tons of greenhouse gas a year. The effect of plastics on global warming is mixed. Plastics are generally made from fossil gas or petroleum; thus, the production of plastics creates further fugitive emissions of methane when the fossil gas or petroleum is produced. Additionally, much of the energy used in plastic production is not sustainable energy; for example, high temperature from burning fossil gas. However, plastics can also limit methane emissions; for example, packaging to reduce food waste.
A study from 2024 found that, compared to glass and aluminum, plastic may actually have less of a negative effect on the environment and therefore might be the best option for most food packaging and other common uses. The study, which involved European researchers, found that "replacing plastics with alternatives is worse for greenhouse gas emissions in most cases" and that in "15 of the 16 applications a plastic product incurs fewer greenhouse gas emissions than their alternatives".
Production of plastics
Production of plastics from crude oil requires 7.9 to 13.7 kWh/lb (taking into account the average efficiency of US utility stations of 35%). Producing silicon and semiconductors for modern electronic equipment is even more energy consuming: 29.2 to 29.8 kWh/lb for silicon, and about 381 kWh/lb for semiconductors. This is much higher than the energy needed to produce many other materials. For example, to produce iron (from iron ore) requires 2.5–3.2 kWh/lb of energy; glass (from sand, etc.) 2.3–4.4 kWh/lb; steel (from iron) 2.5–6.4 kWh/lb; and paper (from timber) 3.2–6.4 kWh/lb.
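Since these figures are quoted per pound, a small conversion sketch to per-kilogram values may help comparison (1 lb ≈ 0.4536 kg; the ranges are those quoted above, with semiconductors omitted for brevity):

```python
# Convert the quoted embodied-energy ranges from kWh/lb to kWh/kg.
KG_PER_LB = 0.45359237

ranges_kwh_per_lb = {
    "plastic (from crude oil)": (7.9, 13.7),
    "silicon": (29.2, 29.8),
    "iron (from ore)": (2.5, 3.2),
    "glass (from sand)": (2.3, 4.4),
    "steel (from iron)": (2.5, 6.4),
    "paper (from timber)": (3.2, 6.4),
}

for material, (lo, hi) in ranges_kwh_per_lb.items():
    # a per-pound figure divided by kg-per-lb gives the per-kg figure
    print(f"{material}: {lo / KG_PER_LB:.1f}-{hi / KG_PER_LB:.1f} kWh/kg")
```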
Incineration of plastics
Quickly burning plastics at very high temperatures breaks down many toxic components, such as dioxins and furans. This approach is widely used in municipal solid waste incineration. Municipal solid waste incinerators also normally treat the flue gas to decrease pollutants further, which is needed because uncontrolled incineration of plastic produces carcinogenic polychlorinated dibenzo-p-dioxins. Open-air burning of plastic occurs at lower temperatures and normally releases such toxic fumes.
In the European Union, municipal waste incineration is regulated by the Industrial Emissions Directive, which stipulates a minimum temperature of 850 °C for at least two seconds.
Facilitation of natural degradation
The Dubia cockroach (Blaptica dubia) is claimed to help degrade commercial polystyrene. This biodegradation appears to be carried out by plastic-degrading bacteria inhabiting the gut of the cockroach, and biodegradation products have also been found in its feces.
History
The development of plastics has evolved from the use of naturally plastic materials (e.g., gums and shellac) to the use of the chemical modification of those materials (e.g., natural rubber, cellulose, collagen, and milk proteins), and finally to completely synthetic plastics (e.g., bakelite, epoxy, and PVC). Early plastics were bio-derived materials such as egg and blood proteins, which are organic polymers. In around 1600 BC, Mesoamericans used natural rubber for balls, bands, and figurines. Treated cattle horns were used as windows for lanterns in the Middle Ages. Materials that mimicked the properties of horns were developed by treating milk proteins with lye. In the nineteenth century, as chemistry developed during the Industrial Revolution, many materials were reported. The development of plastics accelerated with Charles Goodyear's 1839 discovery of vulcanization to harden natural rubber.
Parkesine, invented by Alexander Parkes in 1855 and patented the following year, is considered the first man-made plastic. It was manufactured from cellulose (the major component of plant cell walls) treated with nitric acid as a solvent. The output of the process (commonly known as cellulose nitrate or pyroxilin) could be dissolved in alcohol and hardened into a transparent and elastic material that could be molded when heated. By incorporating pigments into the product, it could be made to resemble ivory. Parkesine was unveiled at the 1862 International Exhibition in London and earned Parkes a bronze medal.
In 1893, French chemist Auguste Trillat discovered the means to insolubilize casein (milk proteins) by immersion in formaldehyde, producing material marketed as galalith. In 1897, mass-printing press owner Wilhelm Krische of Hanover, Germany, was commissioned to develop an alternative to blackboards. The resultant horn-like plastic made from casein was developed in cooperation with the Austrian chemist (Friedrich) Adolph Spitteler (1846–1940). Although unsuitable for the intended purpose, other uses would be discovered.
The world's first fully synthetic plastic was Bakelite, invented in New York in 1907 by Leo Baekeland, who coined the term plastics. Many chemists have contributed to the materials science of plastics, including Nobel laureate Hermann Staudinger, who has been called "the father of polymer chemistry", and Herman Mark, known as "the father of polymer physics". After World War I, improvements in chemistry led to an explosion of new forms of plastics, with mass production beginning in the 1940s and 1950s. Among the earliest examples in the wave of new polymers were polystyrene (first produced by BASF in the 1930s) and polyvinyl chloride (first created in 1872 but commercially produced in the late 1920s). In 1923, Durite Plastics, Inc., was the first manufacturer of phenol-furfural resins. In 1933, polyethylene was discovered by Imperial Chemical Industries (ICI) researchers Reginald Gibson and Eric Fawcett.
The discovery of polyethylene terephthalate (PETE) is credited to employees of the Calico Printers' Association in the UK in 1941; it was licensed to DuPont for the US and ICI otherwise, and as one of the few plastics appropriate as a replacement for glass in many circumstances, resulting in widespread use for bottles in Europe. In 1954 polypropylene was discovered by Giulio Natta and began to be manufactured in 1957. Also in 1954 expanded polystyrene (used for building insulation, packaging, and cups) was invented by Dow Chemical. Since the 1960s, plastic production has surged with the advent of polycarbonate and HDPE, widely used in various products. In the 1980s and 1990s, plastic recycling and the development of biodegradable plastics began to flourish to mitigate environmental impacts. From 2000 to the present, bioplastics from renewable sources and awareness of microplastics have spurred extensive research and policies to control plastic pollution.
Policy
Work is currently underway to develop a global treaty on plastic pollution. On March 2, 2022, UN Member States voted at the resumed fifth UN Environment Assembly (UNEA-5.2) to establish an Intergovernmental Negotiating Committee (INC) with the mandate of advancing a legally-binding international agreement on plastics. The resolution is entitled "End plastic pollution: Towards an international legally binding instrument." The mandate specifies that the INC must begin its work by the end of 2022 with the goal of "completing a draft global legally binding agreement by the end of 2024."
Further reading
See also
Plastic in the sense of malleable
References
Substantial parts of this text originated from An Introduction to Plastics v1.0 by Greg Goebel (March 1, 2001), which is in the public domain.
Sources
External links
Dielectrics
Articles containing video clips | Plastic | Physics | 9,420 |
10,041,299 | https://en.wikipedia.org/wiki/Iraq%20oil%20law%20%282007%29 | The Iraq Oil Law, also referred to as the Iraq Hydrocarbon Law was a piece of legislation submitted to the Iraqi Council of Representatives in May 2007 that laid out a framework for the regulation and development of Iraq's oil fields.
Start of process
The legislative process began when the U.S.-backed Iraqi cabinet approved a new oil law that was set to give foreign companies the long-term contracts and the safe legal framework they had been waiting for. The law rattled labour unions and international campaigners, who say oil production should remain in the hands of Iraqis.
On March 10, 2007, prominent Iraqi parliamentarians, politicians, ex-ministers and oil technocrats urged the Baghdad parliament to reject Iraq's controversial hydrocarbon law, fearing that the new legislation would further divide the country already witnessing civil strife.
On April 28, 2007, discussions turned contentious among the more than 60 Iraqi oil officials reviewing Iraq's draft hydrocarbons bill in the United Arab Emirates. The dispute highlighted the need for further negotiations on the proposed law, which had stalled in talks for nearly eight months before being pushed through Iraq's Cabinet without most key provisions.
By December 2, 2007, the Bush administration was concerned that recent security gains in Iraq may be undermined by continuing political gridlock, and started pushing the Iraqi government to complete long-delayed reform legislation within six months.
Administrative law response
On June 30, 2008, a group of American advisers led by a small State Department team played an integral part in drawing up contracts between the Iraqi government and five major Western oil companies to exploit some of the largest fields in Iraq, American officials said.
In June 2008, the Iraqi Oil Ministry announced plans to go ahead with small one- or two-year no-bid contracts to ExxonMobil, Shell, Total and BP — once partners in the Iraq Petroleum Company — along with Chevron and smaller firms to service Iraq's largest fields. Several United States senators had criticized the deal, arguing it was hindering efforts to pass the hydrocarbon law.
On July 1, 2008, Iraq's government invited foreign firms to help boost the production of the country's major oil fields, beginning a global competition for access to the world's third-largest reserves.
By February 2009, Iraq had "sweetened" the terms it was offering international oil companies vying to develop the country's reserves, in the first concrete example of a global shift in power beginning to sweep through the oil industry.
Iraq, which pre-qualified about 45 companies to bid on oil projects, plans to award contracts for the six partly developed and four undeveloped fields offered in its second licensing round by mid-December.
History
The Iraqi oil industry had been completely nationalized by 1972. The government in the 1990s, under the presidency of Saddam Hussein, gave production share agreements (PSAs) to Russian and Chinese companies which gave a profit percentage of less than 10 percent.
The Bush administration hired the consulting firm BearingPoint to help write the law in 2004. The bill was approved by the Iraqi cabinet in February 2007. The Bush administration considers the passage of the law a benchmark for the government of Prime Minister Nuri Kamal al-Maliki.
One stumbling block was the unpopularity of the law, as it is perceived by the Iraqi people. An opinion poll conducted in 2007 by Oil Change International and other groups shows 63% of Iraqis surveyed would "prefer Iraq's oil to be developed and produced by Iraqi state-owned companies [than] by foreign companies". This explains why the law had stalled in the Iraqi parliament.
Profit sharing
The new law authorizes production share agreements (PSAs), which guarantee a profit for foreign oil companies.
The central government distributes remaining oil revenues throughout the nation on a per capita basis. The draft law allows Iraq's provinces freedom from the central government in giving exploration and production contracts. Iraq's constitution allows governorates to form semi-independent regions, fully controlling their own natural resources.
Criticism of the new law
Some critics have claimed that the new Iraqi Oil law was not needed since Iraq has the cheapest oil to extract. Other analysts have claimed that the no-bid contracts given to U.S. oil companies constitute exploitation, since many non-U.S. companies would provide the same service for shorter contracts and a lower percentage of revenue.
See also
Energy law
Iraq withdrawal benchmarks
Iraq Petroleum Company
National Energy Policy Development Group
Ray Lee Hunt
References
External links
Republic of Iraq Ministry of Oil
Text of law in English provided by the Kurdish Regional Government (PDF)
Iraq oil law Detailed timeline at the History Commons
2007 in Iraq
George W. Bush administration controversies
Energy policy
Energy in Iraq
Petroleum politics
Oil and gas law
Iraqi legislation
Proposed laws
Economic history of the Iraq War
May 2007 events in Iraq | Iraq oil law (2007) | Chemistry,Environmental_science | 960 |
1,197,753 | https://en.wikipedia.org/wiki/Styphnic%20acid | Styphnic acid (from Greek stryphnos "astringent"), or 2,4,6-trinitro-1,3-benzenediol, is a yellow astringent acid that forms hexagonal crystals. It is used in the manufacture of dyes, pigments, inks, medicines, and explosives such as lead styphnate. It is itself a low-sensitivity explosive, similar to picric acid, but explodes upon rapid heating.
History
It was discovered in 1808 by Michel Eugène Chevreul, who was researching ways of producing colorants from tropical logwoods. Upon boiling Pernambuco wood extract with nitric acid, he filtered off crystals later understood to be an impure form of styphnic acid. In the mid-1840s chemists purified and systematically studied the substance, with Rudolf Christian Böttger and Heinrich Will giving it its modern name; in 1871 J. Schreder proved that it is trinitroresorcinol.
Preparation and chemistry
It may be prepared by the nitration of resorcinol with a mixture of nitric and sulfuric acid.
This compound is an example of a trinitrophenol.
Like picric acid, it is a moderately strong acid, capable of displacing carbon dioxide from solutions of sodium carbonate, for example.
It may be reacted with weakly basic oxides, such as those of lead and silver, to form the corresponding salts.
The solubility of picric acid and styphnic acid in water is less than their corresponding mono- and di-nitro compounds, and far less than their non-nitrated precursor phenols, so they may be purified by fractional crystallisation.
References
Nitrophenols
Explosive chemicals
Resorcinols
Organic acids | Styphnic acid | Chemistry | 373 |
31,417,877 | https://en.wikipedia.org/wiki/C10H14O3 | The molecular formula C10H14O3 (molar mass: 182.22 g/mol, exact mass: 182.0943 u) may refer to:
Hagemann's ester, or ethyl-2-methyl-4-oxo-2-cyclohexenecarboxylate
Mephenesin | C10H14O3 | Chemistry | 87 |
11,774,653 | https://en.wikipedia.org/wiki/Alternaria%20panax | Alternaria panax is a fungal plant pathogen, which causes Alternaria blight of ginseng.
References
panax
Fungal plant pathogens and diseases
Food plant pathogens and diseases
Eudicot diseases
Fungi described in 1912
Fungus species | Alternaria panax | Biology | 50 |
10,456,310 | https://en.wikipedia.org/wiki/Amtolmetin%20guacil | Amtolmetin guacil is a non-steroidal anti-inflammatory drug (NSAID). It is a prodrug of tolmetin sodium.
Background
Tolmetin sodium is an approved NSAID, marketed for the treatment of rheumatoid arthritis, osteoarthritis and juvenile rheumatoid arthritis. In humans, tolmetin sodium is absorbed rapidly with peak plasma levels observed 30 minutes after administration. It is eliminated rapidly with a mean plasma elimination t½ of approximately 1 hour. The preparation of slow release formulations or chemical modification of NSAIDs to form prodrugs has been suggested as a method to reduce the gastrotoxicity of these agents.
Amtolmetin guacil is a non-acidic prodrug of tolmetin, having NSAID properties similar to tolmetin with additional analgesic, antipyretic, and gastroprotective properties. Amtolmetin is formed by amidation of tolmetin with glycine.
Pharmacology
Most is absorbed on oral administration. It is concentrated in the gastric wall. Highest concentration is reached 2 hours after administration.
Amtolmetin guacil can be hydrolysed to produce the metabolites tolmetin, MED5 and guaiacol.
Elimination is complete within 24 hours, mostly in urine as glucuronide products (77%), with a smaller faecal fraction (7.5%).
It is advised to take the drug on an empty stomach.
Anti-inflammatory action persists for up to 72 hours after a single administration.
Mechanism of action
Amtolmetin guacil stimulates capsaicin receptors present on the gastrointestinal walls because of the presence of a vanillic moiety, and also releases NO, which is gastroprotective. It also inhibits prostaglandin synthesis and cyclooxygenase (COX).
References
Nonsteroidal anti-inflammatory drugs
Pyrroles
Prodrugs
2-Methoxyphenyl compounds | Amtolmetin guacil | Chemistry | 430 |
1,080,945 | https://en.wikipedia.org/wiki/Terry%20Matthews | Sir Terence Hedley Matthews (born 6 June 1943) is a Welsh-Canadian business magnate, serial high-tech entrepreneur, and Wales' first billionaire. He was the richest man in Wales until 2012, when he was surpassed by Sir Michael Moritz.
He has founded or funded over 100 companies in the high-tech communications field, most notably Mitel and Newbridge Networks. He is the chairman of Wesley Clover and the Swansea Bay City Region board. He owns the Celtic Manor Resort, KRP Properties, the Brookstreet Hotel, and the Marshes Golf Club.
Early life
Matthews was born in Newport, South Wales, at the then Lydia Beynon Maternity Hospital. Matthews returned as an adult to include the manor house that housed the hospital within the Celtic Manor Resort. He grew up in the town of Newbridge, Caerphilly. He studied at Swansea University and received a bachelor's degree in Electronics in 1969.
Career
After an apprenticeship at British Telecom's research lab at Martlesham Heath, Matthews left Britain and joined MicroSystems International, a chipmaking operation affiliated with Northern Telecom (which became Nortel Networks) in Ottawa, Ontario, Canada.
Mitel
Matthews' first enterprise was started in collaboration with fellow Briton and Microsystems employee Michael Cowpland in 1972. To raise seed money for future enterprises they had planned, the pair intended to import and sell electric lawnmowers built in the UK. Conventionally, and from accounts by Matthews, the name Mitel is thought to be a contraction of "Mike and Terry Lawnmowers". However, Cowpland is quoted as saying that it stands for "Mike and Terry ELectronics". This first endeavour was a fiasco; the shipping company carrying the first batch lost the container. When the lawnmowers finally arrived, the ground was covered with snow in the Canadian winter and no one would buy them. Matthews later said "That taught me a key lesson – the importance of timing. The shipping company lost the lawnmowers! By the time they showed up no-one wanted them, as you can't cut grass when it's covered with snow."
Mitel became a technology consultancy company run from home to the various companies around Ottawa's emerging high-tech district. Mitel's clients included the National Research Council, the Communications Research Centre, and a handful of pioneering start-ups including SHL Systemhouse, which was later purchased by EDS and now is part of HP and Quasar Systems (now Cognos).
Obtaining funding from a $4,000 bank loan, as well as from their own savings and a group of angel investors (notably Kent Plumley), the two developed a telephony DTMF tone receiver based on Cowpland's PhD thesis. This was a major advance in the technology, since they were able to sell receivers at a fraction of the cost of competing versions, while gaining returns of 1000%. Additionally, Mitel later became a chip manufacturer with the acquisition of the Silek foundry in Bromont, Quebec.
Later, the pair realized that the then-new technology of microprocessors and other semiconductor devices would make a similar change in the market for small PBXs. The SX200 PBX launched to immense success, being cheaper to purchase, quicker to install and far more functional. Mitel became one of the more successful manufacturers of small PBX systems and telecom semiconductors in the world, floating on the New York Stock Exchange in 1981.
In 1985, British Telecom bought a controlling interest in Mitel. Cowpland would later form the company that became Corel, and Matthews later founded Newbridge Networks.
Newbridge Networks
Matthews founded Newbridge Networks in 1986 and drove it to become a leader in the worldwide data networking industry, manufacturing data communications products, especially ATM devices and routers. In 2000 the company employed more than 6,500 people and recorded FY 1999 revenue of $1.8 billion. Later in 2000, Newbridge was acquired by Alcatel for $7.1 billion. Matthews' personal stake in Newbridge was valued at over a billion dollars, and as a result of the transaction he became the largest single shareholder in Alcatel.
Return to Mitel
In 2000, he reacquired the Mitel PBX business and company name, taking it private. He has invested heavily ($600 million by 2006) in Mitel to turn it into a broadband communications company. The company has made significant investments in enterprise Voice over IP telephony technology. Mitel's manufacturing business was spun off as BreconRidge. The company acquired Intertel in April 2007. Mitel went public again in 2010. Mitel announced the purchase of Aastra Technologies in November 2013. Mitel announced the purchase of Mavenir Systems in March 2015. Mitel announced the purchase of Polycom in April 2016, however, the deal fell through and Polycom instead merged with private equity firm Siris Capital Group. In July 2017, Mitel announced they had reached a deal to buy ShoreTel for $430 million, increasing the size of the company to 4,200 employees. On 24 April 2018, the company announced it would be bought by an investor group led by Searchlight Capital Partners.
Other businesses
Wesley Clover – Matthews is the founder and chairman of the investment group. Headquartered in Ottawa, Ontario, Canada, Wesley Clover has offices in the United States, United Kingdom, India, China and Russia. Investments in Information and communication technologies, digital media, and real estate.
Celtic Manor Resort - A 5* resort with a golf club attached to it, in Newport, South Wales
KRP Properties - Matthews company which manages land and buildings in the Kanata North Business Park
March Networks - Provider of IP video solutions. IPO on 27 April 2005. This venture was the first successful high tech IPO in the Ottawa area since 1999.
Bridgewater Systems – a software company enabling mobile service providers to personalize, manage, and deliver applications. Acquired by Amdocs in June 2011 for $215 Million
Ubiquity Software – Developer of SIP/IMS Application Server software platform, which was acquired by Avaya in Feb 2007 for about U.S. $150M and is the basis of Avaya's next generation solutions.
Convedia – a maker of VoIP/IMS Media Servers, which was acquired by RadiSys in 2006 for $US 105 Million and is the basis of Radisys' MediaEngine solutions.
Celtic House – Matthews and Roger Maggs co-founded Celtic House International Corporation (CHIC) in 1994. The objective was to invest in emerging high technology companies, primarily in the UK and Canada. In 2002, CHIC was re-structured to form Celtic House Venture Partners, a limited partnership owned and managed independent of Matthews.
Newport Networks - a maker of large carrier-grade session border controllers. On 18 March 2009, Newport Networks, with its two remaining employees, de-listed from the AIM. This is despite having announced twelve months earlier that the company had signed a major OEM agreement which was expected to substantially increase its product sales. The company has subsequently been wound up and liquidated.
Alacrity Incubator and Accelerator Program - Matthews founded Alacrity in 2009 along with his son, Owen Matthews, and Simon Gibson. Alacrity is a non-profit organization aimed at connecting established Western Canadian companies with venture capitalists and investment firms.
Additionally, Matthews serves on the board of directors for a number of high tech companies including Solace (formerly Solace Systems), and chairs CounterPath Corporation and Benbria Corporation.
Wales
Matthews spends a considerable amount of time in his native Wales, working on such ventures as the Celtic Manor Resort, a leisure complex in Newport, near the south Wales coast, chosen to host the 2010 Ryder Cup golf tournament and the 2014 NATO summit. The Celtic Manor hosted the ISPS Handa Wales Open for 15 years. The original Celtic Manor Hotel was formerly a maternity hospital and is Matthews' birthplace.
Matthews bought Celtic Manor in 1980, ploughing £100m into the project. Firstly the 19th century Manor House was renovated, while in 1991 plans were unveiled to develop two new golf courses and a convention centre between the manor and the River Usk. Matthews had become friends with golf course architect, the late Robert Trent Jones Snr, whose family roots were in Aberystwyth. Work began on the 'Roman Road' course in 1992, named after the main route connecting the former Roman fortress of Caerleon with the town of Caerwent, which crosses the land. In 1994 the 4,000-yard 'Coldra Woods' golf course was started, as well as the £10m golf clubhouse hotel.
Celtic Manor was Hotel of the Year in 2002, and won two European Design Awards in 2001 - one for the interior designers, Goff Associates, and one for its spa from the International Spa Association. Matthews commented that: "I think the resort can act as a magnet to draw new investment into Wales from across the UK and overseas. I did my best to put up a building that you can see from the West End of London and I didn't come far short of it!".
In September 2014, Matthews was appointed chairman of the Swansea Bay City Region Board. The Swansea Bay City Region Board is backing a Swansea Tidal Lagoon project.
In January 2016, he purchased the Hilton Hotel in Newport, which has since undergone significant renovation, and has been incorporated into the Celtic Manor Resort Collection as the Coldra Court Hotel.
Philanthropy
Matthews founded the Wesley Clover Foundation, a not-for-profit, philanthropic corporation that supports entrepreneurship, healthcare, education, and community initiatives. The foundation leased the properties formerly known as the Nepean National Equestrian Park and the Ottawa Municipal Campground to create Wesley Clover Parks. Matthews hosts an annual gala, Lumière, that has raised over $1 million for charity.
Honours
Matthews was appointed an Officer of the Order of the British Empire (OBE) in the 1994 Birthday Honours for services to the communications industry.
In 1998, he was elected as a Fellow of the Royal Academy of Engineering (FREng).
On 16 June 2001, he was conferred the honour of Knight Bachelor in the 2001 Birthday Honours for services to Industry and to Wales. He was knighted by HM The Queen at an investiture in Buckingham Palace.
Matthews has received honorary doctorates from the University of Wales (Glamorgan and Swansea) and Carleton University. In 2006, he was awarded an Honorary Degree (Doctor of Engineering) from the University of Bath.
In 2011, he was appointed Patron of the Cancer Stem Cell Research Institute at Cardiff University.
On 29 December 2017, he was appointed an Officer of the Order of Canada (OC) in the 2018 Canadian honours, for his exceptional achievements as a high-tech entrepreneur and investor, and for his contributions to community development through charitable endeavours. He was invested by the Governor General of Canada at Rideau Hall on 20 November 2018.
Personal life
His daughter, Karen, is executive director at Wesley Clover Parks. His son Owen lives in Victoria, British Columbia, is a General Partner at Wesley Clover, and is founder and chairman of the Alacrity Foundation. His youngest son Trevor is an actor, founder and CEO of Brookstreet Pictures.
See also
List of Canadians by net worth
List of people from Ottawa
References
External links
Wesley Clover website
Wesley Clover
Welsh billionaires
Canadian billionaires
Businesspeople from Ottawa
British technology company founders
British Telecom people
20th-century Canadian businesspeople
21st-century Canadian businesspeople
20th-century Welsh businesspeople
21st-century Welsh businesspeople
Canadian chairpersons of corporations
Canadian technology company founders
Canadian businesspeople in real estate
Canadian hoteliers
Canadian computer businesspeople
Canadian investors
Canadian technology chief executives
Businesspeople in software
Welsh knights
Businesspeople awarded knighthoods
Canadian Knights Bachelor
Officers of the Order of the British Empire
Alumni of Swansea University
People associated with the University of Wales
Fellows of the Royal Academy of Engineering
Fellows of the Institution of Engineering and Technology
Officers of the Order of Canada
People from Newbridge, Caerphilly
People from Newport, Wales
Welsh emigrants to Canada
Welsh company founders
Nortel employees
1943 births
Living people
Canadian Officers of the Order of the British Empire
21st-century Welsh landowners | Terry Matthews | Engineering | 2,499 |
49,548,813 | https://en.wikipedia.org/wiki/Robot%20leg | A robot leg (or robotic leg) is a mechanical leg that performs the same functions as a human leg, and is typically programmed to execute those functions. It is similar to a prosthetic leg, but can be controlled electrically or mechanically. To have the robotic leg emulate human leg behaviors, surgeons must redirect the nerves that previously controlled some of the person's lower-leg muscles so that they cause the thigh muscles to contract. Sensors embedded in the robotic leg measure the electrical pulses created by both a re-innervated muscle contraction and the existing thigh muscle.
Mechanism
A robotic leg attaches to an individual who has had a lower-extremity amputation of a portion of a leg or foot. Doctors and technicians measure the remaining limb structure so that the robotic leg, serving as the person's prosthesis, fits ideally. After they attach the robotic leg, they embed sensors in it that measure the electrical activity created by re-innervated muscle contraction and the existing thigh muscle.
Applications
Robotic legs have become a popular target of research and development in the past few years. Robotic legs can be applied to people who have had limb amputating surgery caused by a major trauma, or a disease like cancer. Robotic legs have also seen much applicable use in military combat injuries. Massachusetts Institute of Technology (MIT) has been working to create a “bionic” or robotic leg that can function as if it had organic muscles.
References
See also
Robot arm
Legged robot
Robotics | Robot leg | Engineering | 316 |
44,385,148 | https://en.wikipedia.org/wiki/UNIX%20Network%20Programming | Unix Network Programming is a book written by W. Richard Stevens. It was published in 1990 by Prentice Hall and covers many topics regarding UNIX networking and computer network programming. The book focuses on the design and development of network software under UNIX. The book provides descriptions of how and why a given solution works and includes 15,000 lines of C code. The book's summary describes it as "for programmers seeking an in depth tutorial on sockets, transport level interface (TLI), interprocess communications (IPC) facilities under System V and BSD UNIX." The book has been translated into several languages, including Chinese, Italian, German, Japanese and others.
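For flavour, the BSD sockets API the book teaches looks like this when exercised through Python's thin wrapper over the same system calls (the host and request below are illustrative, and the snippet needs network access to run):

```python
import socket

# socket(2), connect(2), send(2) and recv(2), via Python's wrapper.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("example.com", 80))
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = s.recv(4096)

print(reply.decode(errors="replace").splitlines()[0])  # e.g. the status line
```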
Later editions have expanded into two volumes, Volume 1: The Sockets Networking API and Volume 2: Interprocess Communications.
In the movie Wayne's World 2, the book is briefly referenced.
References
External links
Unix Network Programming, Vol. 1
Prentice Hall interview with Rich Stevens, author of Unix Programming, Volume 1: Networking APIs, Sockets and XTI, 2/e
UNIX Network Programming, Volume 1, Second Edition Aug 1, 1998, By David Bausum
Computer books
1990 books | UNIX Network Programming | Technology | 238 |
7,465,773 | https://en.wikipedia.org/wiki/Phil%20D%27Amato | Dr. Phil D’Amato is the central character in three science fiction mystery novelettes and three novels written by Paul Levinson. The first novelette, "The Chronology Protection Case", was adapted into a radio play which was nominated for an Edgar Award by the Mystery Writers of America. The first novel, The Silk Code, won the Locus Award for the Best First Novel of 1999. The fictional D'Amato, who has a PhD in forensic science, is a detective with the NYPD.
Novelettes
"The Chronology Protection Case"
Dr. Phil D’Amato debuted in "The Chronology Protection Case", published in the American magazine Analog in 1995. The novelette was nominated for Nebula and Sturgeon Awards.
It has been reprinted five times:
The Mammoth Book of Time Travel SF edited by Mike Ashley, 2013
The Best Time Travel Stories of All Time edited by Barry N. Malzberg, 2003
Nebula Awards 32: SFWA's Choices for the Best Science Fiction and Fantasy of the Year edited by Jack Dann, 1998
Supernatural Sleuths edited by Charles G. Waugh & Martin H. Greenberg, 1996
Infinite Edge, a webzine, 1997
The novelette was adapted into a radio play written by Mark Shanahan (with Paul Levinson and Jay Kensinger) in 2002, and performed at New York City's Museum of Television and Radio. In addition to being nominated for the Edgar Allan Poe Award for Best Play, the radio play was later recorded and released as an audiobook by Listen & Live/Audible.com in 2004.
"The Chronology Protection Case" was also adapted into a student film by director Jay Kensinger, which premiered at the I-Con SF Convention in 2002, and was later released on DVD by MODVEC Productions. A re-cut version of the movie, in black-and-white and with a new extended ending, was released in 2013 on Amazon Prime Video.
"The Chronology Protection Case" extrapolates from Stephen Hawking’s chronology protection conjecture, and posits a vengeful universe that seeks to protect itself from time travel by killing any scientists who discover or even begin to understand how to do it.
"The Copyright Notice Case"
D’Amato returned in "The Copyright Notice Case", published in Analog in 1996. The novelette was nominated for a Nebula Award, won the HOMer Award, and was reprinted in Levinson’s anthology, Bestseller: Wired, Analog, and Digital Writings in 1999.
The novelette explores what might happen had an inviolable copyright notice been embedded in human DNA in the prehistoric past. Phil meets Jenna Katen for the first time in this story.
"The Mendelian Lamp Case"
D’Amato’s last appearance in short fiction to date came in "The Mendelian Lamp Case", a novelette published in Analog in 1997.
It was reprinted three times:
The Hard SF Renaissance edited by David G. Hartwell & Kathryn Cramer, 2002
Science Fiction Theater edited by Brian Forbes, 1999
Year's Best SF3 edited by David G. Hartwell, 1998
While investigating the mysterious death of a friend, D'Amato discovers an Amish-like group who use innocent-looking bio-technology for nefarious ends.
Novels
The Silk Code
Levinson continues the intrigue of "The Mendelian Lamp Case" in his first novel, The Silk Code (Tor Books, 1999). Phil D’Amato uses his debut in longer form to explore not only bio-technology and groups masquerading as Amish, but the possible survival of Neanderthals into the present day.
The Silk Code won the Locus Award for Best First Novel of 1999. A Polish translation, Kod Jedwabiu, was published in 2003. An "author's cut" Kindle edition was published by JoSara Media in 2012.
The Consciousness Plague
D’Amato’s next novelistic outing was in The Consciousness Plague, published in 2002 by Tor Books. Here D’Amato gets caught up in the possibility that our very consciousness may be engendered by microorganisms that live in the brain. Paths of exploration in this novel range from Lindisfarne to Julian Jaynes’s Origin of Consciousness in the Breakdown of the Bicameral Mind. The Consciousness Plague won the Media Ecology Association’s Mary Shelley Award and was translated into Polish as Zaraza Swiadomosci and published in 2005. An audiobook narrated by Mark Shanahan was released by Listen and Live in 2005, and nominated for the Audie Award that year. An "author's cut" Kindle edition was published by JoSara Media in 2013.
The Pixel Eye
Phil D’Amato’s most recent appearance is in The Pixel Eye, published in 2003 by Tor Books. In this chilling post 9/11 tale set in New York City, D’Amato contends with squirrels and other critters whose brains are outfitted with microchips that transmit everything they see and hear. Holography figures prominently in the story. Although all of the D’Amato stories are set in New York, The Pixel Eye has the most extensive New York ambiences - from Central Park to the New York Public Library on Fifth Avenue to Grand Central Terminal. The Pixel Eye was nominated for the Prometheus Award in 2004. An "author's cut" Kindle edition was published by JoSara Media in 2014.
Critical commentary
Multiple Nebula and Hugo Award winning author Connie Willis said "Forensic detective Phil D’Amato is one of my favorite characters."
See also
The Plot to Save Socrates
Paul Levinson
References
Fictional American Jews
Fictional detectives
Fictional American police detectives
D'Amato, Phil
Science fiction characters
Speculative crime and thriller fiction
Literary characters introduced in 1995
Characters in American novels of the 20th century
Characters in American novels of the 21st century
Fiction about Neanderthals
Fiction about cyborgs
Fiction about holography | Phil D'Amato | Biology | 1,248 |
77,581,443 | https://en.wikipedia.org/wiki/Aron%20Walsh | Aron Walsh (born February 28, 1983) is a chemist known for his research in the fields of computational chemistry and materials science.
Early life and education
Walsh received his undergraduate degree in chemistry from Trinity College Dublin. He went on to complete his PhD in computational chemistry at the same institution. His postdoctoral research included a Marie Curie Fellowship at University College London and a fellowship at the National Renewable Energy Laboratory in the United States.
Academic career
Walsh began his academic career as a Royal Society University Research Fellow at the University of Bath, where he also served as a professor of Materials Theory. He holds a faculty position at Imperial College London leading the Materials Design Group.
Research contributions
Walsh's research integrates quantum mechanics with data-driven machine learning and multi-scale modeling approaches.
Awards and honours
Royal Society of Chemistry Harrison-Meldola Memorial Prize (2013)
Marsh Prize for Best Chemistry Publication (2014) by the University of Bath.
Publications and editorial work
Walsh has written or co-written over 500 research articles. Additionally, he serves as an Associate Editor for the Journal of the American Chemical Society (JACS).
References
1983 births
Living people
Alumni of Trinity College Dublin
Computational chemists
Materials scientists and engineers
Academics of Imperial College London
Academics of the University of Bath | Aron Walsh | Chemistry,Materials_science,Engineering | 253 |
4,558,674 | https://en.wikipedia.org/wiki/Bitext%20word%20alignment | Bitext word alignment or simply word alignment is the natural language processing task of identifying translation relationships among the words (or more rarely multiword units) in a bitext, resulting in a bipartite graph between the two sides of the bitext, with an arc between two words if and only if they are translations of one another. Word alignment is typically done after sentence alignment has already identified pairs of sentences that are translations of one another.
Bitext word alignment is an important supporting task for most methods of statistical machine translation. The parameters of statistical machine translation models are typically estimated by observing word-aligned bitexts, and conversely automatic word alignment is typically done by choosing that alignment which best fits a statistical machine translation model. Circular application of these two ideas results in an instance of the expectation-maximization algorithm.
This approach to training is an instance of unsupervised learning, in that the system is not given examples of the kind of output desired, but is trying to find values for the unobserved model and alignments which best explain the observed bitext. Recent work has begun to explore supervised methods which rely on presenting the system with a (usually small) number of manually aligned sentences. In addition to the benefit of the additional information provided by supervision, these models are typically also able to more easily take advantage of combining many features of the data, such as context, syntactic structure, part-of-speech, or translation lexicon information, which are difficult to integrate into the generative statistical models traditionally used.
Besides the training of machine translation systems, other applications of word alignment include translation lexicon induction, word sense discovery, word sense disambiguation and the cross-lingual projection of linguistic information.
Training
IBM Models
The IBM models are used in statistical machine translation to train a translation model and an alignment model. They are an instance of the expectation–maximization algorithm: in the expectation step the translation probabilities within each sentence are computed; in the maximization step they are accumulated into global translation probabilities. A minimal sketch of Model 1 training is given after the feature list below.
Features:
IBM Model 1: lexical alignment probabilities
IBM Model 2: absolute positions
IBM Model 3: fertilities (supports insertions)
IBM Model 4: relative positions
IBM Model 5: fixes deficiencies (ensures that no two words can be aligned to the same position)
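As an illustration, expectation–maximization training for IBM Model 1 can be sketched in a few lines of Python. This is a minimal, hypothetical implementation for toy data, omitting the special NULL word and any smoothing; it is not drawn from the toolkits listed below.

```python
from collections import defaultdict

def train_ibm_model1(bitext, iterations=10):
    """Estimate lexical translation probabilities t(f|e) with EM.

    bitext is a list of (e_sentence, f_sentence) pairs, each a list
    of words; e is the source side, f the target side.
    """
    f_vocab = {f for _, fs in bitext for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))  # uniform initialization

    for _ in range(iterations):
        count = defaultdict(float)  # expected counts c(f, e)
        total = defaultdict(float)  # expected counts c(e)
        # E-step: distribute each f over the e's that may have produced it.
        for es, fs in bitext:
            for f in fs:
                norm = sum(t[(f, e)] for e in es)
                for e in es:
                    p = t[(f, e)] / norm
                    count[(f, e)] += p
                    total[e] += p
        # M-step: renormalize expected counts into probabilities.
        for (f, e), c in count.items():
            t[(f, e)] = c / total[e]
    return t

pairs = [("das haus".split(), "the house".split()),
         ("das buch".split(), "the book".split()),
         ("ein buch".split(), "a book".split())]
t = train_ibm_model1(pairs)
print(round(t[("house", "haus")], 2))  # approaches 1.0 as EM converges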
HMM
Vogel et al. developed an approach featuring lexical translation probabilities and relative alignment by mapping the problem to a hidden Markov model. The states and observations represent the source and target words respectively. The transition probabilities model the alignment probabilities. In training, the translation and alignment probabilities can be obtained from the posterior state-occupation and transition probabilities computed by the forward-backward algorithm.
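A compact sketch of the forward pass for such an alignment HMM, where the hidden states are source positions and each target word is an observation (the toy matrices are hypothetical; a real model ties transition probabilities to the jump width between source positions):

```python
import numpy as np

def forward_likelihood(init, trans, emit):
    """Likelihood of a target sentence under an HMM alignment model.

    init[i]:     probability of starting in source position i
    trans[i, j]: probability of jumping from source position i to j
    emit[j, k]:  probability that source position j emits target word k
    """
    alpha = init * emit[:, 0]
    for k in range(1, emit.shape[1]):
        alpha = (alpha @ trans) * emit[:, k]
    return alpha.sum()

init = np.array([0.5, 0.5])            # 2 source positions
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
emit = np.array([[0.9, 0.1, 0.5],      # 3 target words
                 [0.1, 0.9, 0.5]])
print(forward_likelihood(init, trans, emit))
```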
Software
GIZA++ (free software under GPL)
The most widely used alignment toolkit, implementing the famous IBM models with a variety of improvements
The Berkeley Word Aligner (free software under GPL)
Another widely used aligner implementing alignment by agreement, and discriminative models for alignment
Nile (free software under GPL)
A supervised word aligner that is able to use syntactic information on the source and target side
pialign (free software under the Common Public License)
An aligner that aligns both words and phrases using Bayesian learning and inversion transduction grammars
Natura Alignment Tools (NATools, free software under GPL)
UNL aligner (free software under Creative Commons Attribution 3.0 Unported License)
Geometric Mapping and Alignment (GMA) (free software under GPL)
HunAlign (free software under LGPL-2.1)
Anymalign (free software under GPL)
References
Machine translation | Bitext word alignment | Technology | 747 |
9,035,542 | https://en.wikipedia.org/wiki/Extragalactic%20cosmic%20ray | Extragalactic cosmic rays are very-high-energy particles that flow into the Solar System from beyond the Milky Way galaxy. While at low energies, the majority of cosmic rays originate within the Galaxy (such as from supernova remnants), at high energies the cosmic ray spectrum is dominated by these extragalactic cosmic rays. The exact energy at which the transition from galactic to extragalactic cosmic rays occurs is not clear, but it is in the range 10^17 to 10^18 eV.
Observation
The observation of extragalactic cosmic rays requires detectors with an extremely large surface area, due to the very limited flux. As a result, extragalactic cosmic rays are generally detected with ground-based observatories, by means of the extensive air showers they create. These ground-based observatories can be either surface detectors, which observe the air shower particles which reach the ground, or air fluorescence detectors (also called 'fly's eye' detectors), which observe the fluorescence caused by the interaction of the charged air shower particles with the atmosphere. In either case, the ultimate aim is to find the mass and energy of the primary cosmic ray which created the shower. Surface detectors accomplish this by measuring the density of particles at the ground, while fluorescence detectors do so by measuring the depth of shower maximum (the depth from the top of the atmosphere at which the maximum number of particles are present in the shower). The two currently operating high energy cosmic ray observatories, the Pierre Auger Observatory and the Telescope Array, are hybrid detectors which use both of these methods. This hybrid methodology allows for a full three-dimensional reconstruction of the air shower, and gives much better directional information as well as more accurate determination of the type and energy of the primary cosmic ray than either technique on its own.
Pierre Auger Observatory
The Pierre Auger Observatory, located in the Mendoza province in Argentina, consists of 1660 surface detectors, each separated by 1.5 km and covering a total area of 3000 km^2, and 27 fluorescence detectors at 4 different locations overlooking the surface detectors. The observatory has been in operation since 2004, and began operating at full capacity in 2008 once construction was completed. The surface detectors are water Cherenkov detectors, each detector being a tank 3.6 m in diameter. One of the Pierre Auger Observatory's most notable results is the detection of a dipole anisotropy in the arrival directions of cosmic rays with energy greater than 8 × 10^18 eV, which was the first conclusive indication of their extragalactic origin.
Telescope Array
The Telescope Array is located in the state of Utah in the United States of America, and consists of 507 surface detectors separated by 1.2 km and covering a total area of 700 km^2, and 3 fluorescence detector stations with 12-14 fluorescence detectors at each station. The Telescope Array was constructed by a collaboration between the teams formerly operating the Akeno Giant Air Shower Array (AGASA), which was a surface detector array in Japan, and the High Resolution Fly's Eye (HiRes), which was an air fluorescence detector also located in Utah. The Telescope Array was initially designed to detect cosmic rays with energy above 10^19 eV, but an extension to the project, the Telescope Array Low Energy extension (TALE), is currently underway and will allow observation of cosmic rays with energies above 3 × 10^16 eV.
Spectrum and Composition
Two clear and long-known features of the spectrum of extragalactic cosmic rays are the 'ankle', which is a flattening of the spectrum at around 5 × 10^18 eV, and suppression of the cosmic ray flux at high energies (above about 4 × 10^19 eV). More recently the Pierre Auger Observatory also observed a steepening of the cosmic ray spectrum above the ankle, before the steep cutoff above 10^19 eV. The spectrum measured by the Pierre Auger Observatory does not appear to depend on the arrival direction of the cosmic rays. However, there are some discrepancies between the spectrum (specifically the energy at which the suppression of flux occurs) measured by the Pierre Auger Observatory in the Southern hemisphere and the Telescope Array in the Northern hemisphere. It is unclear whether this is the result of an unknown systematic error or a true difference between the cosmic rays arriving at the Northern and Southern hemispheres.
The interpretation of these features of the cosmic ray spectrum depends on the details of the model assumed. Historically the ankle is interpreted as the energy at which the steep Galactic cosmic ray spectrum transitions to a flat extragalactic spectrum. However diffusive shock acceleration in supernova remnants, which is the predominant source of cosmic rays below 10^15 eV, can accelerate protons only up to 3 × 10^15 eV and iron up to 8 × 10^16 eV. Thus there must be an additional source of Galactic cosmic rays up to around 10^18 eV. On the other hand, the 'dip' model assumes that the transition between Galactic and extragalactic cosmic rays occurs at about 10^17 eV. This model assumes that extragalactic cosmic rays are composed purely of protons, and the ankle is interpreted as being due to pair production arising from interactions of cosmic rays with the Cosmic Microwave Background (CMB). This suppresses the cosmic ray flux and thus causes a flattening of the spectrum. Older data, as well as more recent data from the Telescope Array, favour a pure proton composition. However recent Auger data suggest a composition which is dominated by light elements up to 2 × 10^18 eV, but becomes increasingly dominated by heavier elements with increasing energy. In this case a source of the protons below 2 × 10^18 eV is needed.
The suppression of flux at high energies is generally assumed to be due to the Greisen–Zatsepin–Kuz'min (GZK) effect in the case of protons, or due to photodisintegration by the CMB (the Gerasimova-Rozental or GR effect) in the case of heavy nuclei. However it could also be because of the nature of the sources, that is because of the maximum energy to which sources can accelerate cosmic rays.
As mentioned above, the Telescope Array and the Pierre Auger Observatory give different results for the most likely composition. However, the data used to infer composition from these two observatories are consistent once all systematic effects are taken into account. The composition of extragalactic cosmic rays is thus still ambiguous.
Origin
Unlike solar or galactic cosmic rays, little is known about the origins of extragalactic cosmic rays. This is largely due to a lack of statistics: only about 1 extragalactic cosmic ray particle per square kilometer per year reaches the Earth's surface. The possible sources of these cosmic rays must satisfy the Hillas criterion,
E ≲ qBR (in natural units where c = 1),
where E is the energy of the particle, q its electric charge, B the magnetic field in the source and R the size of the source. This criterion comes from the fact that for a particle to be accelerated to a given energy, its Larmor radius must be less than the size of the accelerating region. Once the Larmor radius of the particle is greater than the size of the accelerating region, it escapes and does not gain any more energy. As a consequence of this, heavier nuclei (with a greater number of protons), if present, can be accelerated to higher energies than protons within the same source.
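Restoring factors of c, the criterion gives a maximum energy E_max = ZeBRc for a fully relativistic particle of charge Ze. A short numeric sketch (the source parameters below are illustrative order-of-magnitude values, not measurements of any particular object):

```python
C_LIGHT = 2.99792458e8   # speed of light, m/s
KPC = 3.0857e19          # one kiloparsec in metres

def hillas_energy_ev(z, b_tesla, r_metres):
    """Maximum energy E_max = Z e B R c from the Hillas criterion.

    The elementary charge cancels when converting joules to
    electronvolts, leaving E_max[eV] = Z * B[T] * R[m] * c[m/s].
    """
    return z * b_tesla * r_metres * C_LIGHT

# A microgauss field (1e-10 T) over a kiloparsec-scale region confines
# protons only up to roughly 10^18 eV (about an EeV).
print(f"{hillas_energy_ev(1, 1e-10, 1.0 * KPC):.1e} eV")   # ~9.3e17
# Iron (Z = 26) in the same region reaches ~26x higher energy.
print(f"{hillas_energy_ev(26, 1e-10, 1.0 * KPC):.1e} eV")  # ~2.4e19
```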
Active galactic nuclei
Active galactic nuclei (AGNs) are well known to be some of the most energetic objects in the universe, and are therefore often considered as candidates for the production of extragalactic cosmic rays. Given their extremely high luminosity, AGNs can accelerate cosmic rays to the required energies even if only 1/1000 of their energy is used for this acceleration. There is some observational support for this hypothesis. Analysis of cosmic ray measurements with the Pierre Auger Observatory suggests a correlation between the arrival directions of cosmic rays of the highest energies of more than 5 × 10^19 eV and the positions of nearby active galaxies. In 2017, IceCube detected a high energy neutrino with energy 290 TeV whose direction was consistent with a flaring blazar, TXS 0506+056, which strengthened the case for AGNs as a source of extragalactic cosmic rays. Since high-energy neutrinos are assumed to come from the decay of pions produced by the interaction of correspondingly high-energy protons with the Cosmic Microwave Background (CMB) (photo-pion production), or from the photodisintegration of energetic nuclei, and since neutrinos travel essentially unimpeded through the universe, they can be traced back to the source of high-energy cosmic rays.
Clusters of galaxies
Galaxy clusters continuously accrete gas and galaxies from filaments of the cosmic web. As the cold gas which is accreted falls into the hot intracluster medium, it gives rise to shocks at the outskirts of the cluster, which could accelerate cosmic rays through the diffusive shock acceleration mechanism. Large scale radio halos and radio relics, which are expected to be due to synchrotron emission from relativistic electrons, show that clusters do host high energy particles. Studies have found that shocks in clusters can accelerate iron nuclei to 10^20 eV, which is nearly as much as the most energetic cosmic rays observed by the Pierre Auger Observatory. However, if clusters do accelerate protons or nuclei to such high energies, they should also produce gamma ray emission due to the interaction of the high-energy particles with the intracluster medium. This gamma ray emission has not yet been observed, which is difficult to explain.
Gamma ray bursts
Gamma ray bursts (GRBs) were originally proposed as a possible source of extragalactic cosmic rays because the energy required to produce the observed flux of cosmic rays was similar to their typical luminosity in γ-rays, and because they could accelerate protons to energies of 10^20 eV through diffusive shock acceleration. Long gamma ray bursts (GRBs) are especially interesting as possible sources of extragalactic cosmic rays in light of the evidence for a heavier composition at higher energies. Long GRBs are associated with the death of massive stars, which are well known to produce heavy elements. However, in this case many of the heavy nuclei would be photo-disintegrated, leading to considerable neutrino emission also associated with GRBs, which has not been observed. Some studies have suggested that a specific population of GRBs known as low-luminosity GRBs might resolve this, as the lower luminosity would lead to less photo-dissociation and neutrino production. These low luminosity GRBs could also simultaneously account for the observed high-energy neutrinos. However it has also been argued that these low-luminosity GRBs are not energetic enough to be a major source of high energy cosmic rays.
Neutron stars
Neutron stars are formed from the core collapse of massive stars, and as with GRBs can be a source of heavy nuclei. In models with neutron stars - specifically young pulsars or magnetars - as the source of extragalactic cosmic rays, heavy elements (mainly iron) are stripped from the surface of the object by the electric field created by the magnetized neutron star's rapid rotation. This same electric field can accelerate iron nuclei up to 10^20 eV. The photodisintegration of the heavy nuclei would produce lighter elements with lower energies, matching the observations of the Pierre Auger Observatory. In this scenario, the cosmic rays accelerated by neutron stars within the Milky Way could fill in the 'transition region' between Galactic cosmic rays produced in supernova remnants and extragalactic cosmic rays.
See also
Ultra-high-energy cosmic ray
References
Astrophysics
Astroparticle physics
Cosmic rays | Extragalactic cosmic ray | Physics,Astronomy | 2,419 |
42,483,816 | https://en.wikipedia.org/wiki/Boiler%20stay | A boiler stay is an internal structural element of a boiler. Where the shell of a boiler or other pressure vessel is made of cylindrical or (part) spherical elements, the internal pressure will be contained without distortion. However, flat surfaces of any significant size will distort under pressure, tending to bulge.
Types
Stays of various types are used to support these surfaces by tying them together to resist pressure. Some boiler configurations require a great deal of staying. A large locomotive boiler may require several thousand stays to support the firebox. In water tube boilers, stays were sometimes used between their main chambers, and could themselves be water tubes. A knuckle joint is used for diagonal stays in a boiler.
A cylindrical firebox may be self-supporting without stays because of its shape.
Gallery
See also
Boiler blowdown
References
External links
Efficient Oil Boilers
Boilers
Steam boiler components | Boiler stay | Chemistry,Engineering | 174 |
72,579,502 | https://en.wikipedia.org/wiki/HD%20170642 | HD 170642, also designated as HR 6942 or rarely 13 G. Coronae Australis, is a single star located in the southern constellation Corona Australis. It is faintly visible to the naked eye as a white hued star with an apparent magnitude of 5.16. The object is located relatively close at a distance of 229 light years based on Hipparcos parallax measurements, but it is approaching the Solar System with a somewhat constrained heliocentric radial velocity of . At its current distance, HD 170642's brightness is diminished by 0.28 magnitudes due to interstellar dust. It has an absolute magnitude of +0.93.
This is an ordinary A-type main-sequence star with a stellar classification of A3 V. Other sources include broad/nebulous absorption lines due to rapid rotation. It has 2.25 times the mass of the Sun and is estimated to be 480 million years old. HD 170642 has a radius of . When combined with an effective temperature of , it radiates 32.6 times the luminosity of the Sun from its photosphere. The star is metal enriched, having an iron abundance 74% greater than the Sun's. Like many hot stars HD 170642 spins rapidly, having a projected rotational velocity of .
References
A-type main-sequence stars
Corona Australis
Coronae Australis, 13
CD-39 12696
170642
090887
6942 | HD 170642 | Astronomy | 305 |
61,498,989 | https://en.wikipedia.org/wiki/Hexagonal%20architecture%20%28software%29 | The hexagonal architecture, or ports and adapters architecture, is an architectural pattern used in software design. It aims at creating loosely coupled application components that can be easily connected to their software environment by means of ports and adapters. This makes components exchangeable at any level and facilitates test automation.
Origin
The hexagonal architecture was invented by Alistair Cockburn in an attempt to avoid known structural pitfalls in object-oriented software design, such as undesired dependencies between layers and contamination of user interface code with business logic. It was discussed at first on the Portland Pattern Repository wiki; in 2005 Cockburn renamed it "Ports and adapters". In April 2024, Cockburn published a comprehensive book on the subject, coauthored with Juan Manuel Garrido de Paz.
The term "hexagonal" comes from the graphical conventions that shows the application component like a hexagonal cell. The purpose was not to suggest that there would be six borders/ports, but to leave enough space to represent the different interfaces needed between the component and the external world.
Principle
The hexagonal architecture divides a system into several loosely-coupled interchangeable components, such as the application core, the database, the user interface, test scripts and interfaces with other systems. This approach is an alternative to the traditional layered architecture.
Each component is connected to the others through a number of exposed "ports". Communication through these ports follow a given protocol depending on their purpose. Ports and protocols define an abstract API that can be implemented by any suitable technical means (e.g. method invocation in an object-oriented language, remote procedure calls, or Web services).
The granularity of the ports and their number are not constrained:
a single port could in some case be sufficient (e.g. in the case of a simple service consumer);
typically, there are ports for event sources (user interface, automatic feeding), notifications (outgoing notifications), database (in order to interface the component with any suitable DBMS), and administration (for controlling the component);
in an extreme case, there could be a different port for every use case, if needed.
Adapters are the glue between components and the outside world. They tailor the exchanges between the external world and the ports that represent the requirements of the inside of the application component. There can be several adapters for one port, for example, data can be provided by a user through a GUI or a command-line interface, by an automated data source, or by test scripts.
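As a minimal sketch (the names are hypothetical, not taken from Cockburn's book): the application core defines a port as an interface it depends on, and interchangeable adapters, including a test double, implement it.

```python
from typing import Protocol

# Port: an interface the application core defines and depends on.
class NotificationPort(Protocol):
    def send(self, recipient: str, message: str) -> None: ...

# Application core: pure business logic, unaware of any technology.
class OrderService:
    def __init__(self, notifier: NotificationPort) -> None:
        self.notifier = notifier

    def place_order(self, customer: str, item: str) -> None:
        # ... business rules would go here ...
        self.notifier.send(customer, f"Order confirmed: {item}")

# Adapters: technology-specific implementations of the port.
class SmtpAdapter:
    def send(self, recipient: str, message: str) -> None:
        print(f"[SMTP] to {recipient}: {message}")  # stand-in for real email

class InMemoryAdapter:
    """Test adapter: records messages instead of sending them."""
    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []
    def send(self, recipient: str, message: str) -> None:
        self.sent.append((recipient, message))

# The same core runs against either adapter.
OrderService(SmtpAdapter()).place_order("alice", "book")
test_adapter = InMemoryAdapter()
OrderService(test_adapter).place_order("bob", "pen")
assert test_adapter.sent == [("bob", "Order confirmed: pen")]
```

Because the core depends only on the port, adapters can be swapped without touching the business logic, which is what makes the components exchangeable and easy to test.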
Criticism
The term "hexagonal" implies that there are 6 parts to the concept, whereas there are only 4 key areas. The term’s usage comes from the graphical conventions that shows the application component like a hexagonal cell. The purpose was not to suggest that there would be six borders/ports, but to leave enough space to represent the different interfaces needed between the component and the external world.
According to Martin Fowler, the hexagonal architecture has the benefit of using similarities between presentation layer and data source layer to create symmetric components made of a core surrounded by interfaces, but with the drawback of hiding the inherent asymmetry between a service provider and a service consumer that would better be represented as layers.
Evolution
According to some authors, the hexagonal architecture is at the origin of the microservices architecture.
Variants
The onion architecture proposed by Jeffrey Palermo in 2008 is similar to the hexagonal architecture: it also externalizes the infrastructure with interfaces to ensure loose coupling between the application and the database. It further decomposes the application core into several concentric rings using inversion of control.
The clean architecture proposed by Robert C. Martin in 2012 combines the principles of the hexagonal architecture, the onion architecture and several other variants. It provides additional levels of detail of the component, which are presented as concentric rings. It isolates adapters and interfaces (user interface, databases, external systems, devices) in the outer rings of the architecture and leaves the inner rings for use cases and entities. The clean architecture uses the principle of dependency inversion with the strict rule that dependencies shall only exist between an outer ring to an inner ring and never the contrary.
See also
Architecture patterns
Cell-based architecture
Layer (object-oriented design)
Composite structure diagram
Object oriented analysis and design
References
Software design
Architectural pattern (computer science)
Object-oriented programming | Hexagonal architecture (software) | Engineering | 895 |
21,112,598 | https://en.wikipedia.org/wiki/For%20position%20only | In graphic design and printing, the phrases for position only or for placement only, or the initialism FPO, indicate materials that have been used as placeholders in a layout prior to it being declared finished and ready for publication.
These placeholders, commonly either blank frames or stock photographs, indicate that as-yet-unavailable images or illustrations will be placed in the final layout. That allows design work to continue without risk that the pagination or other elements of the layout will be changed when the final images become available, and similarly in web design provides a means for developers to finish coding and testing the website without having to wait for the actual image files.
They can also be used to produce proof copies of the layout in order to assess its general appearance, similar to the use in typesetting of generic filler text blocks like "lorem ipsum", which give an indication of how the finished type will look in the layout.
Because of the risk that placeholder material might accidentally be published, it is clearly marked with an FPO indicator in the form of a simulated watermark or overprint, stamp, or the like in the expectation that it will make the placeholder's presence obvious to designers working on the layout; reviewing proof copies; or, as a last resort, checking the finished publication. Preventing the accidental "escape" of FPO material is particularly important when the placeholder used is a stock photograph or other copyrighted work that the publisher does not have permission to use in public.
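As a rough illustration of the convention, the following Python snippet (assuming the Pillow imaging library is installed; the dimensions, colors, and file name are arbitrary choices) generates a placeholder frame overprinted with an FPO marker:

```python
from PIL import Image, ImageDraw

W, H = 400, 300
img = Image.new("RGB", (W, H), "lightgray")              # blank placeholder frame
draw = ImageDraw.Draw(img)
draw.rectangle([0, 0, W - 1, H - 1], outline="gray", width=3)
draw.line([0, 0, W, H], fill="gray")                     # diagonal cross, a common
draw.line([0, H, W, 0], fill="gray")                     # placeholder convention
draw.text((W // 2 - 15, H // 2 - 5), "FPO", fill="red")  # simulated overprint
img.save("fpo_placeholder.png")
```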
References
Printing terminology
Web design | For position only | Engineering | 311 |
53,413,131 | https://en.wikipedia.org/wiki/Planigon | In geometry, a planigon is a convex polygon that can fill the plane with only copies of itself (isotopic to the fundamental units of monohedral tessellations). In the Euclidean plane there are 3 regular planigons (equilateral triangles, squares, and regular hexagons), 8 semiregular planigons, and 4 demiregular planigons, which can tile the plane only with other planigons.
All angles of a planigon are whole divisors of 360°. Tilings are made by edge-to-edge connections by perpendicular bisectors of the edges of the original uniform lattice, or centroids along common edges (they coincide).
Tilings made from planigons can be seen as dual tilings to the regular, semiregular, and demiregular tilings of the plane by regular polygons.
History
In the 1987 book Tilings and Patterns, Branko Grünbaum calls the vertex-uniform tilings Archimedean, in parallel to the Archimedean solids. Their dual tilings are called Laves tilings in honor of crystallographer Fritz Laves; they are also called Shubnikov–Laves tilings after Aleksei Shubnikov. John Conway calls the uniform duals Catalan tilings, in parallel to the Catalan solid polyhedra.
The Laves tilings have vertices at the centers of the regular polygons, and edges connecting centers of regular polygons that share an edge. The tiles of the Laves tilings are called planigons. This includes the 3 regular tiles (triangle, square and hexagon) and 8 irregular ones. Each vertex has edges evenly spaced around it. Three dimensional analogues of the planigons are called stereohedrons.
These tilings are listed by their face configuration, the number of faces at each vertex of a face. For example V4.8.8 (or V4.8²) means isosceles triangle tiles with one corner where four triangles meet, and two corners where eight triangles meet.
Construction
The Conway operation of dual interchanges faces and vertices. In Archimedean solids and k-uniform tilings alike, the new vertex coincides with the center of each regular face, or the centroid. In the Euclidean (plane) case; in order to make new faces around each original vertex, the centroids must be connected by new edges, each of which must intersect exactly one of the original edges. Since regular polygons have dihedral symmetry, we see that these new centroid-centroid edges must be perpendicular bisectors of the common original edges (e.g. the centroid lies on all edge perpendicular bisectors of a regular polygon). Thus, the edges of k-dual uniform tilings coincide with centroid-to-edge-midpoint line segments of all regular polygons in the k-uniform tilings.
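The construction is easy to check numerically. A small Python sketch (using two unit squares of the square tiling, the simplest case) verifies that the centroid-to-centroid dual edge perpendicularly bisects the shared edge:

```python
import numpy as np

# Two unit squares of the square tiling sharing the edge x = 1, 0 <= y <= 1.
sq1 = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
sq2 = sq1 + [1, 0]
c1, c2 = sq1.mean(axis=0), sq2.mean(axis=0)   # centroids (0.5, 0.5) and (1.5, 0.5)

edge_mid = np.array([1.0, 0.5])               # midpoint of the shared edge
dual_dir = c2 - c1                            # direction of the dual edge
edge_dir = np.array([0.0, 1.0])               # direction of the shared edge

print(np.allclose((c1 + c2) / 2, edge_mid))   # True: dual edge bisects the edge
print(np.isclose(dual_dir @ edge_dir, 0.0))   # True: and is perpendicular to it
```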
Using the 12-5 Dodecagram (Above)
All 14 uniform usable regular vertex planigons (VRPs) also hail from the 12-5 dodecagram (where each segment subtends 5π/6 radians, or 150 degrees).
The incircle of this dodecagram demonstrates that all the 14 VRPs are cocyclic, as alternatively shown by circle packings. The ratio of the incircle to the circumcircle is cos(75°) = (√6 − √2)/4 ≈ 0.259, and the convex hull is precisely the regular dodecagons in the k-uniform tiling. The equilateral triangle, square, regular hexagon, and regular dodecagon are shown above with the VRPs.
In fact, any group of planigons can be constructed from the edges of such a polygram, where n is the number of sides of the regular polygons adjacent to each involved vertex figure. This is because the circumradius of any regular n-gon (from the vertex to the centroid) is the same as the distance from the center of the polygram to its line segments, since all polygrams admit incircles of inradii tangent to all their sides.
Regular Vertices
In Tilings and Patterns, Grünbaum also constructed the Laves tilings using monohedral tiles with regular vertices. A vertex is regular if all angles emanating from it are equal. In other words:
All vertices are regular,
All Laves planigons are congruent.
In this way, all Laves tilings are unique except for the square tiling (1 degree of freedom), barn pentagonal tiling (1 degree of freedom), and hexagonal tiling (2 degrees of freedom):
When applied to higher dual co-uniform tilings, all dual coregular planigons can be distorted except for the triangles (AAA similarity), with examples below:
Derivation of all possible planigons
For edge-to-edge Euclidean tilings, the interior angles of the convex polygons meeting at a vertex must add to 360 degrees. A regular n-gon has internal angle 180(n − 2)/n degrees. There are seventeen combinations of regular polygons whose internal angles add up to 360 degrees, each being referred to as a species of vertex; in four cases there are two distinct cyclic orders of the polygons, yielding twenty-one types of vertex.
In fact, with the vertex (interior) angles αᵢ, we can find all combinations of admissible corner angles according to the following rules:
Every vertex has at least degree 3 (a degree-2 vertex must have two straight angles or one reflex angle);
Every vertex has at most degree 6, since each interior angle is at least 60° (that of the equilateral triangle) and 6 × 60° = 360°;
The vertex angles add to 360°, and each must be the interior angle of a regular polygon with a positive integer number of sides (from the sequence 60°, 90°, 108°, 120°, …, 180(n − 2)/n, …).
Using the rules generates the list below:
*The starred vertex types cannot coexist with any other vertex types.
The solution to Challenge Problem 9.46, Geometry (Rusczyk), is in the Degree 3 Vertex column above. A triangle with a hendecagon (11-gon) yields a 13.2-gon, a square with a heptagon (7-gon) yields a 9.3333-gon, and a pentagon with a hexagon yields a 7.5-gon; since none of these is an integer number of sides, no such vertex exists. Hence there are 17 such combinations (21 vertex types, counting distinct cyclic orders) of regular polygons which meet at a vertex.
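Because the interior angle of a regular n-gon is 180 − 360/n degrees, the angles at a degree-d vertex sum to 360° exactly when the polygon sides satisfy 1/n₁ + … + 1/n_d = (d − 2)/2. A short Python sketch of that search (the enumeration strategy here is an illustration, not taken from the sources) recovers the seventeen species:

```python
from fractions import Fraction

def species(d, n_min, target, partial, out):
    """Pick d polygon sizes n (non-decreasing) with sum(1/n) == target."""
    if d == 0:
        if target == 0:
            out.append(tuple(partial))
        return
    if target <= 0:
        return
    n = n_min
    while Fraction(d, n) >= target:   # even d copies of 1/n must still reach target
        partial.append(n)
        species(d - 1, n, target - Fraction(1, n), partial, out)
        partial.pop()
        n += 1

found = []
for d in range(3, 7):                 # vertex degrees 3..6, per the rules above
    species(d, 3, Fraction(d - 2, 2), [], found)

print(len(found))                     # 17 species
print(sorted(found))                  # e.g. (3, 7, 42), ..., (3, 3, 3, 3, 3, 3)
```

Counting the four species that admit two distinct cyclic orders twice raises the total to the twenty-one vertex types.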
Planigons in the plane
Only eleven of these angle combinations can occur in a Laves Tiling of planigons.
In particular, if three polygons meet at a vertex and one has an odd number of sides, the other two polygons must be the same. If they are not, they would have to alternate around the first polygon, which is impossible if its number of sides is odd. By that restriction these six cannot appear in any tiling of regular polygons:
On the other hand, these four can be used in k-dual-uniform tilings:
Finally, assuming unit side length, all regular polygons and usable planigons have side-lengths and areas as shown below in the table:
Number of Dual Uniform Tilings
Every dual uniform tiling is in a 1:1 correspondence with the corresponding uniform tiling, by construction of the planigons above and superimposition.
Such periodic tilings may be classified by the number of orbits of vertices, edges and tiles. If there are k orbits of planigons, a tiling is known as k-dual-uniform or k-isohedral; if there are t orbits of dual vertices, as t-isogonal; if there are e orbits of edges, as e-isotoxal.
k-dual-uniform tilings with the same vertex faces can be further identified by their wallpaper group symmetry, which is identical to that of the corresponding k-uniform tiling.
1-dual-uniform tilings include 3 regular tilings, and 8 Laves tilings, with 2 or more types of regular degree vertices. There are 20 2-dual-uniform tilings, 61 3-dual-uniform tilings, 151 4-dual-uniform tilings, 332 5-dual-uniform tilings and 673 6-dual-uniform tilings. Each can be grouped by the number m of distinct vertex figures, which are also called m-Archimedean tilings.
Finally, if the number of types of planigons is the same as the uniformity (m = k below), then the tiling is said to be dual Krotenheerdt. In general, the uniformity is greater than or equal to the number of types of vertices (m ≥ k), as different types of planigons necessarily have different orbits, but not vice versa. Setting m = n = k, there are 11 such dual tilings for n = 1; 20 such dual tilings for n = 2; 39 such dual tilings for n = 3; 33 such dual tilings for n = 4; 15 such dual tilings for n = 5; 10 such dual tilings for n = 6; and 7 such dual tilings for n = 7.
Regular and Laves tilings
The 3 regular and 8 semiregular Laves tilings are shown, with planigons colored according to area as in the construction:
Higher Dual Uniform Tilings
Insets of Dual Planigons into Higher Degree Vertices
A degree-six vertex can be replaced by a center regular hexagon and six edges emanating thereof;
A degree-twelve vertex can be replaced by six deltoids (a center deltoidal hexagon) and twelve edges emanating thereof;
A degree-twelve vertex can be replaced by six Cairo pentagons, a center hexagon, and twelve edges emanating thereof (by dissecting the degree-6 vertex in the center of the previous example).
This is done above for the dual of the 3-4-6-12 tiling. The corresponding uniform process is dissection, and is shown here.
2-Dual-Uniform
There are 20 tilings made from 2 types of planigons, the dual of 2-uniform tilings (Krotenheerdt Duals):
3-Dual-Uniform
There are 39 tilings made from 3 types of planigons (Krotenheerdt Duals):
4-Dual-Uniform
There are 33 tilings made from 4 types of planigons (Krotenheerdt Duals):
5-Dual-Uniform
There are 15 5-uniform dual tilings with 5 unique planigons:
Krotenheerdt duals with six planigons
There are 10 6-uniform dual tilings with 6 unique planigons:
Krotenheerdt duals with seven planigons
There are 7 7-uniform dual tilings with 7 unique planigons:
The last two dual uniform-7 tilings have the same vertex types, even though they look nothing alike!
From n = 8 onward, there are no n-uniform tilings with n vertex types, and no n-uniform duals with n distinct (semi)planigons.
Fractalizing Dual k-Uniform Tilings
There are many ways of generating new k-dual-uniform tilings from other k-uniform tilings; three ways, each scaling the original tiling by a fixed factor, are seen below:
Large Fractalization
To enlarge the planigons V3².4.12 and V3.4.3.12 using the truncated trihexagonal method, a suitable scale factor must be applied:
Big Fractalization
In the two 9-uniform tilings below, a big fractalization is achieved by a scale factor of 3 in all planigons. In the case of s, C, B, H, its own planigon is in the exact center:
The two 9-uniform tilings are shown below, fractalizations of the demiregulars DC and DB, and a general example on S2TC:
Miscellaneous
Centroid-Centroid Construction
Dual co-uniform tilings (red) along with the originals (blue) of selected tilings. Generated by centroid-edge midpoint construction by polygon-centroid-vertex detection, rounding the angle of each co-edge to the nearest 15 degrees. Since the unit size of tilings varies from 15 to 18 pixels and every regular polygon slightly differs, there is some overlap or breaks of dual edges (an 18-pixel size generator incorrectly generates co-edges from five 15-pixel size tilings, classifying some squares as triangles).
Other Edge-Edge Construction Comparisons
Other edge-edge construction comparisons. Rotates every 3 seconds.
Affine Linear Expansions
Below are affine linear expansions of other uniform tilings, from the original to the dual and back:
The first 12-uniform tiling contains all planigons with three types of vertices, and the second 12-uniform tiling contains all types of edges.
Optimized Tilings
If an m-n tiling means an m-dual-uniform, n-Catalaves tiling, then there exist an 11-9 tiling, a 13-10 tiling, a 15-11 tiling, a 19-12 tiling, two 22-13 tilings, and a 24-14 tiling. There also exist a 13-8 slab tiling and a 14-10 non-clock tiling. Finally, there are 7-5 tilings using all clock planigons:
Circle Packing
Each uniform tiling corresponds to a circle packing, in which circles of diameter 1 are placed at all vertex points, corresponding to the planigons. Below are the circle packings of the Optimized Tilings and all-edge tiling:
5-dual-uniform 4-Catalaves tilings
A slideshow of all 94 5-dual-uniform tilings with 4 distinct planigons. Changes every 6 seconds, cycles every 60 seconds.
Clock Tilings
All tilings with regular dodecagons are shown below, alternating between uniform and dual co-uniform every 5 seconds:
65 k-Uniform Tilings
A comparison of 65 k-uniform planar tilings and their dual uniform tilings. The two lower rows coincide and are to scale:
References
Planigon tessellation cellular automata Alexander Korobov, 30 September 1999
B. N. Delone, “Theory of planigons”, Izv. Akad. Nauk SSSR Ser. Mat., 23:3 (1959), 365–386
Types of polygons
Euclidean tilings | Planigon | Physics,Mathematics | 3,001 |
173,077 | https://en.wikipedia.org/wiki/Starwisp | Starwisp is a hypothetical unmanned interstellar probe design proposed by the late Robert L. Forward. It is propelled by a microwave sail, similar to a solar sail in concept, but powered by microwaves from a human-made source. It would fly through the target system without slowing down.
Description
"Starwisp" is a concept for an ultra-low-mass interstellar probe pushed by a microwave beam. It was proposed by scientist and author Robert L. Forward in 1985, and further work was published by Geoffrey A. Landis in 2000. The proposed device uses beam-powered propulsion in the form of a high-power microwave antenna pushing a sail. The probe itself would consist of a mesh of extremely fine carbon wires about 100 m across, with the wires spaced the same distance apart as the 3 mm wavelength of the microwaves that will be used to push it.
Forward proposed that the wires would incorporate nanoscale computer circuitry, sensors, microwave power collection systems and microwave radio transmitters fabricated on the wire surfaces, giving the probe data collection and transmission capability. Being distributed across the entire sail, no "rigging" is needed, as would be the case if the mission electronics were placed in a separate probe that was pulled by the sail.
The original Starwisp concept assumed that the microwaves would be efficiently reflected, with the wire mesh surface acting as a superconductor and nearly perfectly efficient mirror. This assumption is not valid. Landis showed that a grid will absorb a significant fraction of the power incident on it, and therefore cannot stay cool enough to be superconducting. The design is thermally limited, hence the use of carbon as the material in Landis's concept.
Low mass was the key feature of the Starwisp probe. In Landis's calculations, the mesh has a density of only 100 kg/km2, for a total mass of 1 kg, plus a payload of 80 grams.
Although the diffraction limit severely constrains the range of the transmitting antenna, the probe is designed to have an acceleration of 24 m/s², so that it can reach a significant fraction of the speed of light within a very short distance, before passing out of range. The antenna uses a microwave lens 560 km in diameter, would transmit 56 GW of power, and would accelerate the probe to 10% of the speed of light.
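A back-of-the-envelope check of those figures, using non-relativistic kinematics (a sketch with rounded constants, not a mission calculation):

```python
c = 299_792_458.0            # speed of light, m/s
a = 24.0                     # stated acceleration, m/s^2
v = 0.10 * c                 # target cruise speed, 10% of c

t = v / a                    # time under constant acceleration
d = v**2 / (2 * a)           # distance covered while accelerating

print(f"time  = {t / 86400:.1f} days")     # ~14.5 days of beam time
print(f"range = {d / 1.496e11:.0f} AU")    # ~125 AU before passing out of range
```

The roughly two weeks of acceleration is on the order of the few days of antenna time mentioned below.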
The probe would cruise without power for decades until it finally approached the target star, at which point the antenna which launched it would again target its beam on Starwisp. This would be done when the Starwisp was about 80% of the way to its destination, so that the beam and Starwisp would arrive there at the same time. At such extreme long range the antenna would be unable to provide any propulsion, but Starwisp would be able to use its wire sail to collect and convert some of the microwave energy into electricity to operate its sensors and transmit the data it collects back home. Starwisp would not slow down at the target star, performing a high-speed flyby mission instead.
Since the antenna is only required for a few days at Starwisp's launch and again for another few days several decades later to power it while it passes its target, Starwisp probes might be mass-produced and launched by the maser every few days. In this manner, a continuous stream of data could be collected about distant solar systems even though any given Starwisp probe only spends a few days travelling through it. Alternatively, the launching transmitter could be used in the interim to transmit power to Earth for commercial use, as with a solar power satellite.
Possible methods of fabrication
Constructing such a delicate probe would be a significant challenge. One proposed method would be to "paint" the probe and its circuitry onto an enormous sheet of plastic which degrades when exposed to ultraviolet light, and then wait for the sheet to evaporate away under the assault of solar UV after it has been deployed in space.
Another proposed method noted that the Starwisp probe wires were of the same physical scale as wires and circuit elements on modern computer microchips and could be produced by the same photolithographic fabrication technologies as those of computer chips. The probe would have to be built in sections the size of current chip fabrication silicon wafers and then connected together.
Technical problems
A major problem this design would face would be the radiation encountered en route. Travelling at 20% of light speed, ordinary interstellar hydrogen would become a significant radiation hazard, and the Starwisp would be without shielding and likely without active self-repair capability. Another problem would be keeping the acceleration of the Starwisp uniform enough across its sail area so that its delicate wires would not tear or be twisted out of shape. Distorting the shape of the Starwisp even slightly could result in a runaway catastrophe, since one portion of the Starwisp would be reflecting microwaves in a different direction than the other portion and be thrust even farther out of shape. Such delicate and finely-balanced control may prove impossible to realize.
The possibility of using a dusty plasma sail in which a dusty substance that is maintained as a plasma within space is responsible for the reflection of electromagnetic radiation could circumvent problems associated with radiation damage to the medium responsible for the transfer of radiation pressure (the dusty plasma sail might not be as easy to damage as a thin film or the like). Dusty plasma sails can also adapt their three-dimensional structure in real time to ensure reflection perpendicular to any incident light/microwave beam.
In fiction
In a science fiction story Forward suggested that the beam from a solar power satellite could be used to push a Starwisp probe while the solar power satellite was being tested after construction.
See also
Breakthrough Starshot, a funded proposal for laser-propelled Starwisp-type spacecraft
References
External links
Light Sails
Small Laser-propelled Interstellar Probe
Setting Sail for the Stars
Beamed Power Propulsion To The Stars
Hypothetical spacecraft
Interstellar travel
1985 in science | Starwisp | Astronomy,Technology | 1,218 |
3,279,703 | https://en.wikipedia.org/wiki/Gleason%20grading%20system | The Gleason grading system is used to help evaluate the prognosis of men with prostate cancer using samples from a prostate biopsy. Together with other parameters, it is incorporated into a strategy of prostate cancer staging which predicts prognosis and helps guide therapy. A Gleason score is given to prostate cancer based upon its microscopic appearance.
Cancers with a higher Gleason score are more aggressive and have a worse prognosis. Pathological scores range from 2 to 10, with higher numbers indicating greater risks and higher mortality. The system is widely accepted and used for clinical decision making even as it is recognised that certain biomarkers, like ACP1 expression, might yield higher predictive value for future disease course.
The histopathologic diagnosis of prostate cancer has implications for the possibility and methodology of Gleason scoring. For example, it is not recommended in signet-ring adenocarcinoma or urothelial carcinoma of the prostate, and the scoring should discount the foamy cytoplasms seen in foamy gland carcinoma.
A total score is calculated based on how cells look under a microscope, with the first half of the score based on the dominant, or most common cell morphology (scored 1 to 5), and the second half based on the non-dominant cell pattern with the highest grade (scored 1 to 5). These two numbers are then combined to produce a total score for the cancer.
Specimens and processing
Most often, a urologist or radiologist will remove a cylindrical sample (biopsy) of prostate tissue through the rectum (or, sometimes the perineum), using hollow needles, and biomedical scientists in a histology laboratory prepare microscope slides for H&E staining and immunohistochemistry for diagnosis by a pathologist. If the prostate is surgically removed, a pathologist will slice the prostate for a final examination.
Histologic patterns
A pathologist microscopically examines the biopsy specimen for certain "Gleason" patterns. These Gleason patterns are associated with the following features:
Pattern 1 - The cancerous prostate closely resembles normal prostate tissue. The glands are small, well-formed, and closely packed. This corresponds to a well differentiated carcinoma.
Pattern 2 - The tissue still has well-formed glands, but they are larger and have more tissue between them, implying that the stroma has increased. This also corresponds to a moderately differentiated carcinoma.
Pattern 3 - The tissue still has recognizable glands, but the cells are darker. At high magnification, some of these cells have left the glands and are beginning to invade the surrounding tissue or having an infiltrative pattern. This corresponds to a moderately differentiated carcinoma.
Pattern 4 - The tissue has few recognizable glands. Many cells are invading the surrounding tissue in neoplastic clumps. This corresponds to a poorly differentiated carcinoma.
Pattern 5 - The tissue does not have any or only a few recognizable glands. There are often just sheets of cells throughout the surrounding tissue. This corresponds to an anaplastic carcinoma.
In the present form of the Gleason system, prostate cancers of Gleason patterns 1 and 2 are rarely seen. Gleason pattern 3 is by far the most common.
Primary, secondary and tertiary grades
After analyzing the tissue samples, the pathologist then assigns a grade to the observed patterns of the tumor specimen.
Primary grade - assigned to the dominant pattern of the tumor (has to be greater than 50% of the total pattern seen).
Secondary grade - assigned to the next-most frequent pattern (has to be less than 50%, but at least 5%, of the pattern of the total cancer observed).
Tertiary grade - increasingly, pathologists provide details of the "tertiary" component. This is where there is a small component of a third (generally more aggressive) pattern.
Scores and prognoses
The pathologist then sums the pattern-number of the primary and secondary grades to obtain the final Gleason score. If only two patterns are seen, the first number of the score is that of the tumor's primary grade while the second number is that of the secondary grade, as described in the previous section. If three patterns are seen, the first number of the score would be the primary grade and the second number the pattern with the highest grade.
For example, if the primary tumor grade was 2 and the secondary tumor grade was 3 but some cells were found to be grade 4, the Gleason score would be 2+4=6. This is a slight change from the pre-2005 Gleason system where the second number was the secondary grade (i.e., the grade of the second-most common cell line pattern).
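The combination rule lends itself to a small function. A sketch in Python (the function name and interface are invented for illustration):

```python
from typing import Optional

def gleason_score(primary: int, secondary: Optional[int] = None,
                  tertiary: Optional[int] = None) -> str:
    """Combine Gleason patterns (1-5) under the post-2005 convention:
    primary grade + highest remaining grade; a lone pattern is doubled."""
    if secondary is None:          # only one pattern seen: double it
        second_number = primary
    elif tertiary is None:         # two patterns: primary + secondary
        second_number = secondary
    else:                          # three patterns: primary + highest other grade
        second_number = max(secondary, tertiary)
    return f"{primary}+{second_number}={primary + second_number}"

print(gleason_score(2, 3, 4))      # 2+4=6, matching the example above
print(gleason_score(1))            # 1+1=2, a single pattern doubled
```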
(Figure key: blue = Gleason pattern 3 region; yellow = Gleason pattern 4 region; red = Gleason pattern 5 region.)
Gleason scores range from 2 to 10, with 2 representing the most well-differentiated tumors and 10 the least-differentiated tumors. Gleason scores have often been categorized into groups that show similar biologic behavior: low-grade (well-differentiated), intermediate-grade, moderate to poorly differentiated or high-grade.
More recently, an investigation of the Johns Hopkins Radical Prostatectomy Database (1982-2011) led to the proposed reporting of Gleason grades and prognostic grade groups as:
Gleason score ≤ 6 (prognostic grade group I);
Gleason score 3+4=7 (prognostic grade group II) indicating the majority is pattern 3;
Gleason score 4+3=7 (prognostic grade group III) where pattern 4 is dominant;
Gleason score 4+4=8 (prognostic grade group IV);
Gleason scores 9-10 (prognostic grade group V).
Prostate cancers with a Gleason score ≤ 6 usually have rather good prognoses.
Grading mechanism
The Gleason grade of architectural pattern is sometimes referred to as the Gleason architectural pattern.
The Gleason grade is based on tissue architectural patterns rather than purely cytological changes. These tissue patterns are classified into 5 grades, numbered 1 through 5. Lower numbers indicate more differentiation, with pattern 5 being the least differentiated. Differentiation is the degree to which the tissue, in this case the tumor, resembles native tissue. Greater resemblance (lower grade) is typically associated with a better prognosis.
However, the Gleason score is not simply the highest grade (least differentiated) pattern within the tumor. Rather, it is a combination of the two most frequent patterns seen. This recognizes that prostatic carcinomas have multiple patterns and that prognosis is more accurately determined by adding the scores of the two most prevalent patterns. Using this system, the grades of the most prevalent and second most prevalent patterns (if at least 5% of the total) are added together to yield the overall Gleason score.
For example, if the most prevalent pattern/grade is 2, and the second most prevalent grade is 1, then the Gleason score is 2+1=3. If the neoplasm has only one pattern, the grade of that pattern is doubled to obtain the score. For example, if a tumor is entirely grade 1, the Gleason score would be 1+1=2. The most differentiated tumor would have the lowest score, Gleason 2 (1+1), while the most undifferentiated neoplasm (not resembling native prostate tissue) would have the highest score, Gleason 10 (5+5). Gleason scores range from 2 to 10; by definition there is no score of 0 or 1.
Cytological differences between normal prostate and neoplastic glands are evident in changes to the typical two cell layers of the gland. In prostatic adenocarcinoma, the basal (bottom, usually cuboidal type) cell layer is lost, with only the top layer (usually columnar to pseudostratified) remaining.
Score descriptions
Using this system, the most well-differentiated tumors have a Gleason score/grade of 2, and the least-differentiated tumors a score of 10. The range is by definition 2-10, with architectural patterns graded 1-5 and always added together or doubled, as described above. Gleason scores are often grouped together based on similar behaviour: Grade 2-4 as well-differentiated; Grade 5-6 as intermediately differentiated; Grade 7 as moderately to poorly differentiated (either 3+4=7, where the majority is pattern 3, or 4+3=7, in which pattern 4 dominates and indicates less differentiation); and Grade 8-10 as high-grade.
Gleason 1
Gleason pattern 1 is the most well-differentiated tumor pattern. It is a well-defined nodule of single/separate, closely/densely packed, back-to-back gland pattern that does not invade into adjacent healthy prostatic tissue. The glands are round to oval shaped and proportionally large, compared to Gleason pattern 3 tumors, and are approximately equal in size and shape to one another.
Gleason 2
Gleason 2 consists of fairly well-circumscribed nodules of single, separate glands. However, the glands are looser in arrangement and not as uniform as in pattern 1. Minimal invasion by neoplastic glands into the surrounding healthy prostate tissue may be seen. As in Gleason 1, the glands are usually larger than those of Gleason 3 patterns, and are round to oval in shape. Thus the main difference between Gleason 1 and 2 is the density of packing of the glands; invasion is possible in Gleason 2, but by definition not in Gleason 1.
Gleason 3
Gleason 3 is a clearly infiltrative neoplasm, with extension into adjacent healthy prostate tissue. The glands vary in size and shape, and are often long or angular. They are usually small or micro-glandular in comparison to Gleason grades 1 or 2, although some may be medium to large in size. The small glands of Gleason 3, in contrast to the small and poorly defined glands of pattern 4, are distinct glandular units; mentally, one could draw a circle around each glandular unit in Gleason 3.
Gleason 4
Gleason pattern 4 glands are no longer the single, separate glands seen in patterns 1-3. They appear fused and difficult to distinguish, with rare lumen formation (whereas patterns 1-3 usually all have open lumens, or spaces, within the glands), or they can be cribriform (resembling the cribriform plate: sieve-like, with many perforations). Fused glands are chains, nests, or groups of glands that are no longer entirely separated by stroma (the connective tissue that normally separates individual glands). Fused glands contain occasional stroma, giving the appearance of "partial" separation of the glands; because of this partial separation, fused glands sometimes have a scalloped appearance at their edges, like a slice of bread with a bite taken out of it.
Gleason 5
Gleason 5 neoplasms have no glandular differentiation, thus not resembling normal prostate tissue at all. They are composed of sheets (groups of cells that are almost planar in appearance), solid cords (rope-like groups of cells running through the other tissue patterns), or individual cells. Round glands with luminal spaces, as seen in the lower patterns that more closely resemble normal prostate glands, should not be seen.
Prognosis
Gleason scores 2-4 are typically found in smaller tumors located in the transitional zone (around the urethra). These are typically found incidentally on surgery for benign prostatic hyperplasia, which is not itself a precursor lesion for prostatic carcinoma.
The majority of treatable/treated cancers are of Gleason scores 5-7 and are detected on biopsy after an abnormal digital rectal exam or prostate-specific antigen evaluation. The cancer is typically located in the peripheral zone, usually the posterior portion, explaining the rationale for performing the digital rectal exam.
Tumors with Gleason scores 8-10 tend to be advanced neoplasms that are unlikely to be cured. Although some evidence suggests that prostate cancers will become more aggressive over time, Gleason scores typically remain stable for several years.
The Gleason scores then become part of the TNM or Whitmore-Jewett prostate cancer staging system to provide prognosis.
History
The Gleason scoring system takes its name from Donald Gleason, a pathologist at the Minneapolis Veterans Affairs Hospital, who developed it with colleagues at that facility in the 1960s.
In 2005 the International Society of Urological Pathology altered the Gleason system, refining the criteria and changing the attribution of certain patterns. It has been shown that this "modified Gleason score" has higher performance than the original one, and is currently assumed standard in urological pathology. In this form, it remains an important tool.
However, problematic aspects of the original Gleason grading system still characterize the 2005 revision. The predominant lowest score assigned is Gleason 3+3 = 6. Patients who are told their Gleason score is 6 out of 10 may interpret that they have a more aggressive intermediate cancer and experience greater anxiety. More importantly, some classification systems fail to clearly distinguish between Gleason 3+4 = 7 and Gleason 4+3 = 7, with the latter having a worse prognosis.
Therefore, in 2014 an international multidisciplinary conference was convened to revise the 2005 system. A 5-point Gleason Grade grouping similar to those such as PI-RADS used with prostate MRI evaluations was proposed to denote prognostically distinct stratification. Grade 1 would indicate the lowest-risk cancer while Grade 5 would indicate the most aggressive disease. The system was tested and validated against 20,000 prostatectomy specimens and at least 16,000 biopsy samples.
The majority of conference participants concurred on the superiority of the scale over the 2005 Gleason grading system, pointing to the likelihood that overtreatment could be avoided for those patients whose cancer was assigned Grade 1. The World Health Organization's 2016 edition of Pathology and Genetics: Tumours of the Urinary System and Male Genital Organs has accepted the 2014 system, which can be used in conjunction with the 2005 Gleason system.
See also
Histopathologic diagnosis of prostate cancer
References
External links
Pathology slides and explanation. [Free]
WHO, Geneva Foundation for Medical Education and Research, Prostate cancer, Gleason score. 51 images.
Cancer staging
Histopathology
Urology
Medical scoring system
Prostate cancer | Gleason grading system | Chemistry | 3,085 |
11,733,308 | https://en.wikipedia.org/wiki/Friedel%27s%20law | Friedel's law, named after Georges Friedel, is a property of Fourier transforms of real functions.
Given a real function f(x), its Fourier transform
F(k) = ∫ f(x) e^(ikx) dx (the integral taken over all x)
has the following properties:
F(−k) = F*(k),
where F* is the complex conjugate of F.
Centrosymmetric points (k, −k) are called Friedel's pairs.
The squared amplitude |F|² is centrosymmetric: |F(k)|² = |F(−k)|².
The phase φ of F is antisymmetric: φ(k) = −φ(−k).
Friedel's law is used in X-ray diffraction, crystallography and scattering from a real potential within the Born approximation. Note that a twin operation (French: opération de maclage) is equivalent to an inversion centre, and the intensities from the individuals are equivalent under Friedel's law.
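The symmetry is easy to verify numerically. A minimal sketch using NumPy's FFT (the discrete sign and bin-ordering conventions differ from the integral above, but the law is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(256)            # an arbitrary real-valued "density"
F = np.fft.fft(f)

F_neg = np.roll(F[::-1], 1)             # reorder bins so index k holds F(-k)
print(np.allclose(F_neg, np.conj(F)))               # True: F(-k) = F*(k)
print(np.allclose(np.abs(F)**2, np.abs(F_neg)**2))  # True: |F|^2 centrosymmetric
```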
References
Fourier analysis
Crystallography | Friedel's law | Physics,Chemistry,Materials_science,Engineering | 148 |
60,844,925 | https://en.wikipedia.org/wiki/Prasiola%20crispa | Prasiola crispa is a small terrestrial green alga. It has been recorded world-wide mostly from cold-temperate to polar regions.
Taxonomy
The species, first described as Ulva crispa Lightfoot, is the type of the genus Prasiola.
A lectotype has been designated for the species; the accompanying notes give the type location as north-facing walls that were favoured as urinals.
The specific epithet is said to translate as "crisped", a reference to the irregular convolutions of the species.
Description
This is a small green alga growing to about 6 cm long. The frond is rounded in shape and flattened, generally one cell thick, with the cells arranged in rows or in groups of four.
It seems to be an important food source for Antarctic collembolans.
The species has been used as a model for studying the effects of high intensities of UV radiation on photosynthesis.
Reproduction
Reproduction is by akinetes and aplanospores.
Distribution
Recorded world-wide mostly from cold-temperate to polar regions, e.g. from Iceland, the British Isles including the Isle of Man, New Zealand, Japan and the Pacific shores of North America. In Antarctica, the species lives near penguin colonies.
Conservation status
In Iceland, it is red listed as a vulnerable species (VU).
References
Prasiolales
Seaweeds
Species described in 1777
Lichen photobiont | Prasiola crispa | Biology | 297 |
54,440,879 | https://en.wikipedia.org/wiki/Butixocort | Butixocort, also known as tixocortol butyrate, is a synthetic glucocorticoid corticosteroid.
References
Butyrate esters
Corticosteroid esters
Diketones
Glucocorticoids
Pregnanes
Thiols | Butixocort | Chemistry | 62 |
64,187,174 | https://en.wikipedia.org/wiki/Hydraulic%20signaling%20in%20plants | Hydraulic signals in plants are detected as changes in the organism's water potential that are caused by environmental stress like drought or wounding. The cohesion and tension properties of water allow for these water potential changes to be transmitted throughout the plant.
Plants respond to external stimuli through thigmomorphogenesis. For example, bending a shoot can cause arrestment of growth on another area of the plant. These types of nonlocal responses can be induced by long-distance signaling. Long-distance communication in plants must satisfy two things: first, signaling must occur rapidly to an apical area of the plant; second, the signal must be perceived at the apical site and be converted to a physiological or thigmomorphogenetic response. One form of long-distance signaling is through hydraulic pulses from the roots to the shoots of a plant. Tree branches and stems contain microchannels that make up the xylem network and serve to carry water longitudinally. Stimuli like wounding can cause tension and compression of plant tissues, which pinches the cross section of the shoot. Hydraulic signaling begins with a local response like water expulsion, creating a suction in the vascular system. The compression of the cross section will then lead to a general increase of hydraulic pressure in the channels of the shoot. This extensive change in hydraulic pressure will lead to activation of hydraulic sensors.
Water potential
The driving force of the movement of water is the water potential gradient. Water potential is defined by comparing the potential energy of water to that of pure water at standard conditions. This gradient must be maintained from the soil through the plant and into the air via transpiration. In the xylem, water moves down the water potential gradient, from higher to lower (more negative) potentials, the differences being determined by soil water availability and vapor pressure deficit. If this gradient is flipped, the translocation of water will occur in the opposite direction. The water potential is the combination of the pressure potential, the osmotic potential and the gravitational contribution. The translocation of water can be restricted by resistances such as stomatal aperture and xylem-structure-related resistance to flow.
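In symbols, using a standard formulation (the notation here is conventional, not taken from this article's sources):

```latex
% Total water potential as the sum of its components:
\Psi_w = \Psi_p + \Psi_s + \Psi_g
% pressure (turgor) term + osmotic (solute) term + gravitational term.
% Water flows from higher to lower \Psi_w; across a hydraulic resistance R,
% the flux J between points 1 and 2 is approximately
J = \frac{\Psi_{w,1} - \Psi_{w,2}}{R}
```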
Long-distance signaling
In order for plants to respond and adapt to external stimuli, long distance signaling is required. In general terms, long distance signaling is defined as the ability to have a widespread response when just one distinct area is stimulated. In plants, water uptake must be tightly controlled, so long-distance signaling by hydraulic cues coordinate plants above and below-ground organs. The daily physiological behavior of plants is tightly controlled by hydraulic signals. Gradients of water potentials are transferred across the plant through hydraulic signals. If the hydraulic signal originated in the root, it will result in local water potential changes, and consequently turgor changes. The water potential changes can be due to dry soil, water loss via transpiration or physically wounding the plant. These local water potential changes are then transmitted quickly over long-distances as hydraulic signals. Hydraulic signaling is fast and effective because of the cohesion and tension properties of water. Hydraulic signals can be propagated downward or upward, relaying water potential gradients throughout the entire plant.
Hydraulic signals can be sensed in a few ways all relating to how an increase in water potential affects the plants. Because water leaves the cell, there is a reduction in the pressure potential and an increase in solute concentration. This is one way the hydraulic signal can be sensed, through sensing the osmotic environment. Increase in water potential also causes mechanical forces on the cell wall and plasma membrane of the cell. This is the second way to sense hydraulic signaling, by sensing the changes in the mechanical forces on the cell wall.
Experimental methods for studying hydraulic signaling
Linear beam theory
Biomimetic systems can be used to mimic the microchannels inside branches. These synthetic plant systems are made from polydimethylsiloxane (PDMS), 3D-molded like branches, and filled with a silicone oil (viscosity 1 Pa·s). The channels are connected to a differential pressure sensor. The initial branch is straight, and the internal oil pressure P_ref equals the atmospheric pressure P_0 (P_ref − P_0 = 0). Hydraulic pulses are then induced by automated linear motor displacement, creating a bend in the synthetic branch that raises the overpressure to a value |P_ref − P_0| = ΔP. Returning the branch to its initial state brings ΔP back to 0. The observation is that overpressure increases quadratically with bending strain, and the response changes with variation in beam rigidity.
Nonlinear poroelastic coupling
In a nonlinear poroelastic system, elastic tubes begin straight. When bent, elastic strain increases proportionally with the distance from the initial position. This induces a bending elastic energy per unit of volume that is quadratically related to the transverse radius. The system will lower this elastic energy by squeezing its cross section. This transverse compression leads to a decrease in the channel volume creating a global increase in pressure. Therefore, the mechanism of generating hydraulic pressure is due to the coupling of bending and the transverse deformation of the elastic beam.
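A compact way to see this coupling, in standard Euler–Bernoulli beam notation (assumed here, not drawn from the cited experiments):

```latex
% Bending strain grows linearly with distance z from the neutral axis,
% where R_c is the radius of curvature of the bend:
\varepsilon(z) = \frac{z}{R_c}
% so the stored elastic energy per unit volume is quadratic in z:
u(z) = \tfrac{1}{2} E \, \varepsilon(z)^2 = \frac{E z^2}{2 R_c^2}
% Squeezing the cross section trades away part of this energy; the channel
% volume lost in the squeeze drives the global pressure rise, hence
\Delta P \propto \varepsilon^2
```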
Hydraulic signaling in natural branches
Louf et al. have conducted research on hydraulic signaling in three species: P. sylvestris, Quercus ilex, and P. alba. Their findings can be summarized by these points: bending of a branch leads to an increase in xylem water pressure; the magnitude of the response depends on the species of plant and the environmental conditions; and hydraulic pulses were found to be greater in trees grown outside, with stiffer properties, showing that elasticity plays a role in hydraulic pulses.
Mechanism
The overall pathway of hydraulic signaling in plants is similar to that of a sensory pathway, starting with basic perception of the signal by a sensor, which then converts the hydraulic signal into a chemical signal: abscisic acid or ABA. This conversion to a chemical signal leads to the control of different physiological responses in the plant since ABA is a plant hormone known to mediate many plant developmental processes including organ size, stomatal closure, and dormancy in the plants’ seeds and buds.
Hydraulic signals are primarily detected as decreases in water potential, usually caused by increases in solute concentration or drought. This decrease in water potential is systemic and transferred throughout the plant vascular network via the xylem. Water follows down the water potential gradient from parenchyma cells into the xylem, ultimately leading to a decrease in pressure potential and osmotic potential in the adjacent cells to the xylem. The hydraulic sensor, which is yet to be known, resides on the inner membrane of the parenchyma cells and detects the decreases in pressure and solute potential through an unknown mechanism. After detection, the unidentified sensor initiates a signal cascade, leading to a calcium transient and subsequent reactive oxygen species (ROS) formation. These ROS are proposed to go on to target ABA biosynthesis enzymes, leading to synthesis of ABA in the parenchyma and later export to regions of the plant requiring the appropriate responses. In an example, ABA response to a hydraulic signal from the roots- a decrease in water potential- is thought to reach the guard cells to stimulate stomatal closure. Despite an unidentified hydraulic sensor(s) and the mechanism of which this sensor detects decreases in pressure and solute potential in the parenchyma, this primary site of ABA biosynthesis is thought to additionally participate as the main location of hydraulic signal perception, vital to mediation of water potential in the plant.
ABA
Abscisic acid (ABA) is a phytohormone that plays a significant role in the plants’ response to drought conditions. During drought, its biosynthesis is triggered and controls many physiological responses. ABA triggers root growth at low concentrations and closes stomata to prevent water loss from transpiration. ABA is essential for hydraulic signals because of its response to local water potential changes. ABA is also known to increase hydraulic conductance by increasing aquaporin expression.
Hydraulic sensors
Although sensor(s) for hydraulic signals are unknown and still being investigated, several sensor candidates have been suggested. One candidate for a hydraulic signal sensor has been MCA1, a plasma membrane protein correlated with mechanosensing via calcium-mediated influx in Arabidopsis thaliana. Research has found that MCA1 increased cytoplasmic calcium concentrations in response to a mechanosensory input: plasma membrane distortion in Arabidopsis.
Another sensor candidate proposed are PERKs, members of the proline-rich receptor kinase family in Arabidopsis as well. PERK4 specifically plays a crucial role in abscisic acid (ABA) signalling and response and has shown to be an ABA- and calcium-activated protein kinase. Both MCA1 and PERK4 appear to correlate with cytoplasmic calcium gradients and an early response to hydraulic signals since calcium is known to be involved in plants’ early responses to mechanosensation.
Despite research on these sensor candidates, both ABA and calcium gradient participation in early events of hydraulic signaling have made it particularly difficult to distinguish the order of which each part plays in the hydraulic signaling pathway.
MCA1
MCA1 has been identified as a candidate for a Ca2+ permeable mechanosensitive channel in Arabidopsis thaliana. Overexpression of plasma membrane protein MCA1 causes an increase in calcium uptake from the roots, which then causes an increase in free calcium in the cytoplasm. MCA1 expression in yeast mutants lacking a high affinity calcium influx system will also increase calcium uptake.
MCA2
MCA2 is a paralog of MCA1 that was identified in Arabidopsis thaliana. Protein sequencing technology reveals that the two genes are 72.7% identical and 89.4% similar in amino acid sequence, making MCA2 a reasonable gene to use in studies to determine the function of MCA1 in calcium uptake. Reverse transcription PCR analysis indicates that MCA2 is expressed in the plasma membrane in leaves, flowers, roots, and stems. Knockout of the MCA2 gene causes a decrease in calcium uptake in the roots, relative to the wildtype, suggesting that the MCA2 gene is involved in calcium uptake.
Using GUS staining, researchers were able to find expressions of MCA1 and MCA2 in the pericycle and endodermis of the root in Arabidopsis. No expression was identified in the cortex or epidermis. Rise in cytosolic calcium levels in the pericycle and endodermis under drought conditions suggest that these cells play a role in calcium signaling. The spatial expression of MCA1 and MCA2 and the changes in calcium concentration in the pericycle and endodermis suggests that both MCA1 and MCA2 play a role in symplastic calcium transport and signaling.
PERK4
Proline-rich extensin-like receptor kinase 4 (PERK4) is a gene expressed in the roots and flowers of Arabidopsis thaliana that localizes to the plasma membrane and plays a role in ABA signaling. Using protein motif analysis, a membrane localization signal, a transmembrane domain, and an intracellular kinase domain were identified in PERK4. To study the role of PERK4 and ABA, mutants were made by inserting T-DNA. PERK4 mutants showed a decrease in ABA sensitivity, which affects seedling germination and root tip growth. Mutating PERK4 causes cytosolic free calcium levels in roots to decrease relative to the wild type. PERK4 has been proposed to function in early-stage ABA signalling, inhibiting root elongation by disturbing cytoplasmic calcium gradients.
Ongoing research
Arabidopsis thaliana has been a primary model system in the search for the hydraulic sensor but has not yet produced a certain answer. Screens for plant mutants affected in hydraulic signaling are necessary, yet none have been reported so far. Some plant mutants with lesions upstream of ABA action have been distinguished by using the Arabidopsis line pAtH-B6::LUC. The years leading up to 2013 produced further hydraulic sensor candidates, such as osmosensors and turgor sensors; however, research is ongoing as to the specific roles they may play in hydraulic signaling in plants.
References
Plant physiology | Hydraulic signaling in plants | Biology | 2,514 |
7,944,158 | https://en.wikipedia.org/wiki/Project%20Gnome%20%28nuclear%20test%29 | Project Gnome was the first nuclear test of Project Plowshare and was the first continental nuclear weapon test since Trinity to be conducted outside of the Nevada Test Site, and the second test in the state of New Mexico after Trinity. It was tested in southeastern New Mexico on December 10, 1961, approximately 40 km (25 mi) southeast of Carlsbad, New Mexico.
Background
First announced in 1958, Gnome was delayed by the testing moratorium between the United States and the Soviet Union that lasted from November 1958 until September 1961, when the Soviet Union resumed nuclear testing, thus ending the moratorium. The site selected for Gnome is located roughly 40 km (25 mi) southeast of Carlsbad, New Mexico, in an area of salt and potash mines, along with oil and gas wells.
Unlike most nuclear tests, which were focused on weapon development, Shot Gnome was designed to focus on scientific experiments:
"Study the possibility of using the heat produced by a nuclear explosion to produce steam for the production of electric power."
"Explore the feasibility of recovering radioisotopes for scientific and industrial applications."
"Use the high flux of neutrons produced by the detonation for a variety of measurements that would contribute to the scientific knowledge in general and to the reactor development program in particular."
It was discovered during the 1957 Plumbbob-Rainier tests that an underground nuclear detonation creates large quantities of heat as well as radioisotopes, but that most would quickly become trapped in the molten rock and become unusable as the rock resolidified. For this reason, it was decided that Gnome would be detonated in bedded rock salt. The plan was to then pipe water through the molten salt and use the generated steam to produce electricity. The hardened salt could be subsequently dissolved in water to extract the radioisotopes. Gnome was considered extremely important to the future of nuclear science, because it could show that nuclear weapons might be used in peaceful applications. The Atomic Energy Commission invited representatives from various nations, the U.N., the media, interested scientists and some Carlsbad residents.
While Gnome is considered the first test of Project Plowshare, it was also part of the Vela program, which was established to improve the ability of the United States to detect underground and high-altitude nuclear detonations. Vela Uniform was the phase of the program concerned with underground testing. Everything from seismic signals, radiation, ground wave patterns, electromagnetic pulse, and acoustic measurements were studied at Gnome under Vela Uniform.
Gnome shot and aftereffects
Gnome was placed underground at the end of a tunnel that was supposed to be self-sealing upon detonation. Gnome was detonated on 10 December 1961, with a yield of 3.1 kilotons. Even though the Gnome shot was supposed to seal itself, the plan did not quite work. Two to three minutes after detonation, smoke and steam began to rise from the shaft. Consequently, some radiation was released and detected off-site, but it quickly decayed. The cavity volume was calculated to be with an average radius of in the lower portion measured.
The Gnome detonation created a cavity about wide and almost high with a floor of melted rock and salt. A new shaft was drilled near the original and, on 17 May 1962, crews entered the Gnome Cavity. Even though almost six months had passed since the detonation, the temperature inside the cavity was still around . Inside, they found stalactites made of melted salt, as well as the walls of the cavity covered in salt.
The intense radiation of the detonation colored the salt multiple shades of blue, green, and violet. Nonetheless, the explorers encountered only five milliroentgen, and it was considered safe for them to enter the cavern and cross its central rubble pile. While the three-kiloton explosion had melted 2400 tons of salt, the explosion had caused the collapse of the sides and top of the chamber, adding 28,000 tons of rubble that mixed with the molten salt and rapidly reduced its temperature. This was the reason the drilling program had originally been unsuccessful, finding temperatures of only , without high pressure steam, though the boreholes had encountered occasional pockets of molten salt at up to deeper amid the rubble.
Historical site
The Gnome-Coach Site is open to the public and managed by the U.S. Department of Energy Office of Legacy Management. Today, all that exists on the surface to show what occurred below is a small concrete monument with two weathered plaques.
The historical plaque at ground zero reads:
United States Atomic Energy Commission
Dr. Glenn T. Seaborg, Chairman
Project Gnome
December 10, 1961
The first nuclear detonation in the Plowshare Program to develop peaceful uses for nuclear explosives was conducted below this spot at a depth of 1,216 feet in a stratum of rock salt. The explosive, equivalent to 3,100 tons of TNT, was detonated at the end of a horizontal passage leading from a vertical shaft located 1,116 feet southwest of this point. Among the many objectives was the production and recovery of useful radioactive isotopes, the study of heat recovery, the conduct of neutron physics experiments, and the provision of a seismic source for geophysical studies.
See also
Project Gasbuggy
Waste Isolation Pilot Plant
References
External links
U.S. Department of Energy: Project Gnome
Project Gnome: US Atomic Energy Commission Plowshare Program (1961)
American nuclear test sites
American nuclear weapons testing
Eddy County, New Mexico
Explosions in 1961
Gnome
Gnome
1961 in New Mexico
1961 in the United States
Gnome
Gnome | Project Gnome (nuclear test) | Chemistry | 1,134 |
13,025,464 | https://en.wikipedia.org/wiki/2007%20Rio%20de%20Janeiro%20train%20collision | The Rio de Janeiro train collision occurred on August 30, 2007, when two trains collided in the Nova Iguaçu suburb of Rio de Janeiro, Brazil. Eleven people were killed.
The accident happened at 16:09 at a junction near Austin station in Nova Iguaçu, in the region of Baixada Fluminense on the outskirts of Rio de Janeiro. A passenger train traveling at approximately 60 mph and carrying 850 people collided with the back of an empty six-car passenger train that was merging between two parallel tracks, killing 8 people and injuring 111, 15 of them seriously.
The trains were so badly damaged in the collision, that rescue workers were forced to use blowtorches to cut through the mangled metal to reach the passengers. The occupied train was operated by the company Supervia.
References
See also
Train wreck
2007 disasters in Brazil
Railway accidents in 2007
Train collisions in Brazil
Public transport in Rio de Janeiro (city)
August 2007 events in South America | 2007 Rio de Janeiro train collision | Technology | 189 |
23,812,732 | https://en.wikipedia.org/wiki/Green%20building%20council | A Green Building Council (GBC) is any national non-profit, non-government organization that is part of a global network recognized by the World Green Building Council. A green building council's goal is to promote a transformation of the built environment towards one that is sustainable (buildings and cities that are environmentally sensitive, economically viable, socially just and culturally significant).
List of recognised GBCs
As of December 2022, there were at least 41 nations with established GBCs, 10 recognized as "emerging" members, 26 countries in the prospective category and 2 in the affiliate category. As of the end of 2020, there were around 70 GBCs at various stages of development.
The Green Building Council Russia (RuGBC) formed in 2009 and is seeking Emerging Status. Growth in the CIS countries accompanies growth in the number of green construction projects in those countries, that is, those certified to LEED or BREEAM standard.
The 41 established councils are
Argentina Green Building Council
Green Building Council of Australia
Austrian Sustainable Building Council
Green Building Council Brasil
Canada Green Building Council
Chile Green Building Council
China Green Building Council
Colombia Green Building Council
Green Building Council Costa Rica
Croatia Green Building Council
Dutch Green Building Council
Emirates Green Building Council
Green Building Council Finland
France Green Building Council (this NGO merged with the French HQE Association in 2016 to form the HQE Association-France GBC, which brought together more than 200 members)
German Sustainable Building Council
Guatemala Green Building Council
Hong Kong Green Building Council
Indian Green Building Council
Green Building Council Indonesia
Irish Green Building Council
Italy Green Building Council
Jordan Green Building Council
Japan Green Building Consortium
Korea Green Building Council
Malaysia Green Building Council
Mexico Green Building Council
New Zealand Green Building Council
Pakistan Green Building Council
Panama Green Building Council
Peru Green Building Council
Philippine Green Building Council
Polish Green Building Council
Singapore Green Building Council
Green Building Council of South Africa
Green Building Council España
Sweden Green Building Council
Taiwan Green Building Council
Turkish Green Building Council
UK Green Building Council
U.S. Green Building Council
List of Emerging Green Building Councils
Bulgarian Green Building Council
Swiss Sustainable Building Council
Egypt Green Building Council
Sustainable Building Council Greece
Kuwait Green Building Council
Kazakhstan Green Building Council
Lebanon Green Building Council
Paraguay Green Building Council
El Salvador Green Building Council
List of Prospective Green Building Councils
Bahrain Green Building Council
Green Building Council Bolivia
Botswana Green Building Council
Green Building Council Cameroon
CEES Ecuador Green Building Council
Ghana Green Building Council
Green Building Council Iceland
Cambodia Green Building Council
Green Building Council of Sri Lanka
Luxembourg Green Building Council
Morocco Green Building Council
Green Building Council Mauritius
Green Building Council Namibia
Green Building Council Nigeria
Panama Green Building Council
Palestine Green Building Council
Serbia Green Building Council
Rwanda Green Building Organization
Green Building Council Slovenia
Tunisia Green Building Council
Tanzania Green Building Council
Uganda Green Building Council
Uruguay Green Building Council
Venezuela Green Building Council
Vietnam Green Building Council
Green Building Council Zimbabwe
See also
Sustainable architecture
Sustainable city
References
International environmental organizations
Sustainable building | Green building council | Engineering | 565 |
24,508,930 | https://en.wikipedia.org/wiki/Gymnopilus%20latus | Gymnopilus latus is a species of mushroom in the family Hymenogastraceae.
See also
List of Gymnopilus species
External links
Gymnopilus latus at Index Fungorum
latus
Fungi of North America
Taxa named by William Alphonso Murrill
Fungus species | Gymnopilus latus | Biology | 63 |
14,476,384 | https://en.wikipedia.org/wiki/Mass%20versus%20weight | In common usage, the mass of an object is often referred to as its weight, though these are in fact different concepts and quantities. Nevertheless, one object will always weigh more than another with less mass if both are subject to the same gravity (i.e. the same gravitational field strength).
In scientific contexts, mass is the amount of "matter" in an object (though "matter" may be difficult to define), but weight is the force exerted on an object's matter by gravity. At the Earth's surface, an object whose mass is exactly one kilogram weighs approximately 9.81 newtons, the product of its mass and the gravitational field strength there. The object's weight is less on Mars, where gravity is weaker; more on Saturn, where gravity is stronger; and very small in space, far from significant sources of gravity, but it always has the same mass.
Material objects at the surface of the Earth have weight despite such sometimes being difficult to measure. An object floating freely on water, for example, does not appear to have weight since it is buoyed by the water. But its weight can be measured if it is added to water in a container which is entirely supported by and weighed on a scale. Thus, the "weightless object" floating in water actually transfers its weight to the bottom of the container (where the pressure increases). Similarly, a balloon has mass but may appear to have no weight or even negative weight, due to buoyancy in air. However the weight of the balloon and the gas inside it has merely been transferred to a large area of the Earth's surface, making the weight difficult to measure. The weight of a flying airplane is similarly distributed to the ground, but does not disappear. If the airplane is in level flight, the same weight-force is distributed to the surface of the Earth as when the plane was on the runway, but spread over a larger area.
A better scientific definition of mass is its description as being a measure of inertia, which is the tendency of an object to not change its current state of motion (to remain at constant velocity) unless acted on by an external unbalanced force. Gravitational "weight" is the force created when a mass is acted upon by a gravitational field and the object is not allowed to free-fall, but is supported or retarded by a mechanical force, such as the surface of a planet. Such a force constitutes weight. This force can be added to by any other kind of force.
While the weight of an object varies in proportion to the strength of the gravitational field, its mass is constant, as long as no energy or matter is added to the object. For example, although a satellite in orbit (essentially a free-fall) is "weightless", it still retains its mass and inertia. Accordingly, even in orbit, an astronaut trying to accelerate the satellite in any direction is still required to exert force, and needs to exert ten times as much force to accelerate a 10-ton satellite at the same rate as one with a mass of only 1 ton.
Overview
Mass is (among other properties) an inertial property; that is, the tendency of an object to remain at constant velocity unless acted upon by an outside force. Under Sir Isaac Newton's centuries-old laws of motion and an important formula that sprang from his work, F = ma, an object with a mass, m, of one kilogram accelerates, a, at one meter per second per second (about one-tenth the acceleration due to Earth's gravity) when acted upon by a force, F, of one newton.
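A minimal numeric sketch of that relationship (the helper function is illustrative, not from any standard library; the figures are the ones quoted above):

```python
# Newton's second law, F = m * a:
# a net force of one newton accelerates a one-kilogram mass at 1 m/s^2.
def acceleration(force_newtons: float, mass_kg: float) -> float:
    """Return the acceleration (m/s^2) produced by a net force on a mass."""
    return force_newtons / mass_kg

print(acceleration(1.0, 1.0))   # 1.0 m/s^2
print(acceleration(9.81, 1.0))  # 9.81 m/s^2 -- free fall at Earth's surface
```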
Inertia is seen when a bowling ball is pushed horizontally on a level, smooth surface, and continues in horizontal motion. This is quite distinct from its weight, which is the downwards gravitational force of the bowling ball one must counter when holding it off the floor. The weight of the bowling ball on the Moon would be one-sixth of that on the Earth, although its mass remains unchanged. Consequently, whenever the physics of recoil kinetics (mass, velocity, inertia, inelastic and elastic collisions) dominate and the influence of gravity is a negligible factor, the behavior of objects remains consistent even where gravity is relatively weak. For instance, billiard balls on a billiard table would scatter and recoil with the same speeds and energies after a break shot on the Moon as on Earth; they would, however, drop into the pockets much more slowly.
In the physical sciences, the terms "mass" and "weight" are rigidly defined as separate measures, as they are different physical properties. In everyday use, as all everyday objects have both mass and weight and one is almost exactly proportional to the other, "weight" often serves to describe both properties, its meaning being dependent upon context. For example, in retail commerce, the "net weight" of products actually refers to mass, and is expressed in mass units such as grams or ounces (see also Pound: Use in commerce). Conversely, the load index rating on automobile tires, which specifies the maximum structural load for a tire in kilograms, refers to weight; that is, the force due to gravity. Before the late 20th century, the distinction between the two was not strictly applied in technical writing, so that expressions such as "molecular weight" (for molecular mass) are still seen.
Because mass and weight are separate quantities, they have different units of measure. In the International System of Units (SI), the kilogram is the basic unit of mass, and the newton is the basic unit of force. The non-SI kilogram-force is also a unit of force typically used in the measure of weight. Similarly, the avoirdupois pound, used in both the Imperial system and U.S. customary units, is a unit of mass, and its related unit of force is the pound-force.
Converting units of mass to equivalent forces on Earth
When an object's weight (its gravitational force) is expressed in "kilograms", this actually refers to the kilogram-force (kgf or kg-f), also known as the kilopond (kp), which is a non-SI unit of force. All objects on the Earth's surface are subject to a gravitational acceleration of approximately 9.8 m/s2. The General Conference on Weights and Measures fixed the value of standard gravity at precisely 9.80665 m/s2 so that disciplines such as metrology would have a standard value for converting units of defined mass into defined forces and pressures. Thus the kilogram-force is defined as precisely 9.80665 newtons. In reality, gravitational acceleration (symbol: g) varies slightly with latitude, elevation and subsurface density; these variations are typically only a few tenths of a percent. See also Gravimetry.
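The conversion is a single multiplication by standard gravity; a short sketch (the function names are illustrative):

```python
G_STANDARD = 9.80665  # standard gravity in m/s^2, fixed by the CGPM

def kgf_to_newtons(kilograms_force: float) -> float:
    """1 kgf (kilopond) is defined as exactly 9.80665 N."""
    return kilograms_force * G_STANDARD

def weight_newtons(mass_kg: float, g: float = G_STANDARD) -> float:
    """Weight (gravitational force) of a mass under local gravity g."""
    return mass_kg * g

print(kgf_to_newtons(1.0))   # 9.80665
print(weight_newtons(75.0))  # ~735.5 N for a 75 kg person
```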
Engineers and scientists understand the distinctions between mass, force, and weight. Engineers in disciplines involving weight loading (force on a structure due to gravity), such as structural engineering, convert the mass of objects like concrete and automobiles (expressed in kilograms) to a force in newtons (by multiplying by some factor around 9.8; 2 significant figures is usually sufficient for such calculations) to derive the load of the object. Material properties like elastic modulus are measured and published in terms of the newton and pascal (a unit of pressure related to the newton).
Buoyancy and weight
Usually, the relationship between mass and weight on Earth is highly proportional; objects that are a hundred times more massive than a one-liter bottle of soda almost always weigh a hundred times more—approximately 1,000 newtons, which is the weight one would expect on Earth from an object with a mass slightly greater than 100 kilograms. Yet, this is not always the case and there are familiar objects that violate this proportionality.
A common helium-filled toy balloon is something familiar to many. When such a balloon is fully filled with helium, it has buoyancy—a force that opposes gravity. When a toy balloon becomes partially deflated, it often becomes neutrally buoyant and can float about the house a meter or two off the floor. In such a state, there are moments when the balloon is neither rising nor falling and—in the sense that a scale placed under it has no force applied to it—is, in a sense, perfectly weightless (actually, as noted below, weight has merely been redistributed along the Earth's surface so it cannot be measured). Though the rubber comprising the balloon has a mass of only a few grams, which might be almost unnoticeable, the rubber still retains all its mass when inflated.
Again, unlike the effect that low-gravity environments have on weight, buoyancy does not make a portion of an object's weight vanish; the missing weight is instead being borne by the ground, which leaves less force (weight) being applied to any scale theoretically placed underneath the object in question (though one may perhaps have some trouble with the practical aspects of accurately weighing something individually in that condition). If one were, however, to weigh a small wading pool that someone then entered and began floating in, they would find that the full weight of the person was being borne by the pool and, ultimately, the scale underneath the pool. Whereas a buoyant object (on a properly working scale for weighing buoyant objects) would weigh less, the object/fluid system becomes heavier by the value of the object's full mass once the object is added. Since air is a fluid, this principle applies to object/air systems as well; large volumes of air—and ultimately the ground—support the weight a body loses through mid-air buoyancy.
The effects of buoyancy do not just affect balloons; both liquids and gases are fluids in the physical sciences, and when all macrosize objects larger than dust particles are immersed in fluids on Earth, they have some degree of buoyancy. In the case of either a swimmer floating in a pool or a balloon floating in air, buoyancy can fully counter the gravitational weight of the object being weighed, for a weighing device in the pool. However, as noted, an object supported by a fluid is fundamentally no different from an object supported by a sling or cable—the weight has merely been transferred to another location, not made to disappear.
The mass of "weightless" (neutrally buoyant) balloons can be better appreciated with much larger hot air balloons. Although no effort is required to counter their weight when they are hovering over the ground (when they can often be within one hundred newtons of zero weight), the inertia associated with their appreciable mass of several hundred kilograms or more can knock fully grown men off their feet when the balloon's basket is moving horizontally over the ground.
Buoyancy and the resultant reduction in the downward force of objects being weighed underlies Archimedes' principle, which states that the buoyancy force is equal to the weight of the fluid that the object displaces. If this fluid is air, the force may be small.
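A worked sketch of the principle with round numbers (the ~70-liter body volume is an assumption chosen only for illustration):

```python
RHO_AIR = 1.2       # kg/m^3, typical air density near sea level
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def buoyant_force(volume_m3: float, fluid_density: float, g: float = G) -> float:
    """Archimedes' principle: buoyancy equals the weight of the displaced fluid."""
    return fluid_density * volume_m3 * g

print(buoyant_force(0.07, RHO_AIR))    # ~0.82 N of lift from the displaced air
print(buoyant_force(0.07, RHO_WATER))  # ~687 N if fully submerged in water
```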
Buoyancy effects of air on measurement
Normally, the effect of air buoyancy on objects of normal density is too small to be of any consequence in day-to-day activities. For instance, buoyancy's diminishing effect upon one's body weight (a relatively low-density object) is only about one part in 800 of gravity (for pure water it is about one part in 830). Furthermore, variations in barometric pressure rarely affect a person's weight more than ±1 part in 30,000. However, in metrology (the science of measurement), the precision mass standards for calibrating laboratory scales and balances are manufactured with such accuracy that air density is accounted for to compensate for buoyancy effects. Given the extremely high cost of platinum-iridium mass standards like the international prototype of the kilogram (the mass standard in France that defined the magnitude of the kilogram), high-quality "working" standards are made of special stainless steel alloys with densities of about 8,000 kg/m3, which occupy greater volume than those made of platinum-iridium, which have a density of about 21,550 kg/m3. For convenience, a standard value of buoyancy relative to stainless steel was developed for metrology work and this results in the term "conventional mass". Conventional mass is defined as follows: "For a mass at 20 °C, ‘conventional mass’ is the mass of a reference standard of density 8,000 kg/m3 which it balances in air with a density of 1.2 kg/m3." The effect is a small one, 150 ppm for stainless steel mass standards, but the appropriate corrections are made during the manufacture of all precision mass standards so they have the true labeled mass.
Whenever a high-precision scale (or balance) in routine laboratory use is calibrated using stainless steel standards, the scale is actually being calibrated to conventional mass; that is, true mass minus 150 ppm of buoyancy. Since objects with precisely the same mass but with different densities displace different volumes and therefore have different buoyancies and weights, any object measured on this scale (compared to a stainless steel mass standard) has its conventional mass measured; that is, its true mass minus an unknown degree of buoyancy. In high-accuracy work, the volume of the article can be measured to mathematically null the effect of buoyancy.
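The 150 ppm correction quoted above follows directly from the ratio of air density to standard density; a small sketch of the arithmetic:

```python
rho_air = 1.2        # kg/m^3, conventional air density used in metrology
rho_steel = 8000.0   # kg/m^3, stainless steel working standards
rho_pt_ir = 21550.0  # kg/m^3, platinum-iridium standards

# The fraction of weight "lost" to air buoyancy is rho_air / rho_standard.
for name, rho in (("steel", rho_steel), ("Pt-Ir", rho_pt_ir)):
    print(f"{name}: {1e6 * rho_air / rho:.0f} ppm")
# steel: 150 ppm (the correction quoted above); Pt-Ir: ~56 ppm
```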
Types of scales and what they measure
When one stands on a balance-beam-type scale at a doctor’s office, they are having their mass measured directly. This is because balances ("dual-pan" mass comparators) compare the gravitational force exerted on the person on the platform with that on the sliding counterweights on the beams; gravity is the force-generating mechanism that allows the needle to diverge from the "balanced" (null) point. These balances could be moved from Earth's equator to the poles and give exactly the same measurement, i.e. they would not spuriously indicate that the doctor's patient became 0.3% heavier; they are immune to the gravity-countering centrifugal force due to Earth's rotation about its axis. But if one steps onto spring-based or digital load cell-based scales (single-pan devices), one is having one's weight (gravitational force) measured; and variations in the strength of the gravitational field affect the reading. In practice, when such scales are used in commerce or hospitals, they are often adjusted on-site and certified on that basis, so that the mass they measure, expressed in pounds or kilograms, is at the desired level of accuracy.
Use in United States commerce
In the United States of America the United States Department of Commerce, the Technology Administration, and the National Institute of Standards and Technology (NIST) have defined the use of mass and weight in the exchange of goods under the Uniform Laws and Regulations in the areas of legal metrology and engine fuel quality in NIST Handbook 130:
K. "Mass" and "Weight" [See Section K. NOTE]
The mass of an object is a measure of the object’s inertial property, or the amount of matter it contains. The weight of an object is a measure of the force exerted on the object by gravity, or the force needed to support it. The pull of gravity on the earth gives an object a downward acceleration of about 9.8 m/s2. In trade and commerce and everyday use, the term "weight" is often used as a synonym for "mass". The "net mass" or "net weight" declared on a label indicates that the package contains a specific amount of commodity exclusive of wrapping materials. The use of the term "mass" is predominant throughout the world, and is becoming increasingly common in the United States.
(Added 1993)
Section K. NOTE: When used in this law (or regulation), the term "weight" means "mass". (see paragraphs K. "Mass" and "Weight" and L. Use of the Terms "Mass" and "Weight" in Section I. Introduction of NIST Handbook 130 for an explanation of these terms.)
(Note Added 1993)
L. Use of the Terms "Mass" and "Weight" [See Section K. NOTE]
When used in this handbook, the term "weight" means "mass". The term "weight" appears when inch-pound units are cited, or when both inch-pound and SI units are included in a requirement. The terms "mass" or "masses" are used when only SI units are cited in a requirement. The following note appears where the term "weight" is first used in a law or regulation.
U.S. federal law, which supersedes this handbook, also defines weight, particularly Net Weight, in terms of the avoirdupois pound or mass pound. From 21 CFR § 101.105 Declaration of net quantity of contents when exempt:
(a) The principal display panel of a food in package form shall bear a declaration of the net quantity of contents. This shall be expressed in the terms of weight, measure, numerical count, or a combination of numerical count and weight or measure. The statement shall be in terms of fluid measure if the food is liquid, or in terms of weight if the food is solid, semisolid, or viscous, or a mixture of solid and liquid; except that such statement may be in terms of dry measure if the food is a fresh fruit, fresh vegetable, or other dry commodity that is customarily sold by dry measure. If there is a firmly established general consumer usage and trade custom of declaring the contents of a liquid by weight, or a solid, semisolid, or viscous product by fluid measure, it may be used. Whenever the Commissioner determines that an existing practice of declaring net quantity of contents by weight, measure, numerical count, or a combination in the case of a specific packaged food does not facilitate value comparisons by consumers and offers opportunity for consumer confusion, he will by regulation designate the appropriate term or terms to be used for such commodity.
(b)(1) Statements of weight shall be in terms of avoirdupois pound and ounce.
See also 21 CFR § 201.51 – Declaration of net quantity of contents for general labeling and prescription labeling requirements.
See also
Apparent weight
Gravimeter
Pound (force)
References
Concepts in physics
Mass
Force
Conceptual distinctions | Mass versus weight | Physics,Mathematics | 3,818 |
26,357,305 | https://en.wikipedia.org/wiki/Alternative%20Energy%20Promotion%20Centre | The Alternative Energy Promotion Centre (AEPC; Nepali: वैकल्पिक ऊर्जा प्रवर्द्धन केन्द्र, Vaikalpik Urja Pravardhan Kendra) is an independently functioning government institution established by the Government of Nepal with the objectives to popularize and promote the use of renewable energy technologies, raise the living standards of the rural people, protect the environment, and develop commercially viable renewable energy industries in the country. It is governed by nine member board representing government, non-government, industry and financial sectors. AEPC was established on November 3, 1996 (Kartik 18, 2053 B.S.) under then Ministry of Science and Technology of the Government of Nepal. Currently, it is under the Ministry of Energy, Water Resources and Irrigation.
According to The Guardian, the solar box system uses copper indium gallium selenide (CIGS) solar cells that are bonded with a tensile fabric. The strength of the combined material can cope with being rolled in and out, and it can be in full operation a few minutes after it is deployed: "It is like a microgrid in a box. It has all of the components integrated into it that you need to run a 24 hour microgrid."
See also
Micro hydropower in Nepal
References
Energy in Nepal
Sustainable energy
Renewable energy organizations
1996 establishments in Nepal | Alternative Energy Promotion Centre | Engineering | 280 |
33,102,937 | https://en.wikipedia.org/wiki/Activating%20function | The activating function is a mathematical formalism that is used to approximate the influence of an extracellular field on an axon or neurons. It was developed by Frank Rattay and is a useful tool to approximate the influence of functional electrical stimulation (FES) or neuromodulation techniques on target neurons. It points out locations of high hyperpolarization and depolarization caused by the electrical field acting upon the nerve fiber. As a rule of thumb, the activating function is proportional to the second-order spatial derivative of the extracellular potential along the axon.
Equations
In a compartment model of an axon, the activating function of compartment n, $f_n$, is derived from the driving term of the external potential, or the equivalent injected current

$$f_n = \frac{1}{c}\left(\frac{V^e_{n-1} - V^e_n}{R_{n-1}/2 + R_n/2} + \frac{V^e_{n+1} - V^e_n}{R_{n+1}/2 + R_n/2} + \cdots\right),$$

where $c$ is the membrane capacity, $V^e_n$ the extracellular voltage outside compartment $n$ relative to the ground, and $R_n$ the axonal resistance of compartment $n$.
The activating function represents the rate of membrane potential change if the neuron is in resting state before the stimulation. Its physical dimensions are V/s or mV/ms. In other words, it represents the slope of the membrane voltage at the beginning of the stimulation.
Following McNeal's simplifications for long fibers of an ideal internode membrane, with both membrane capacity and conductance assumed to be 0, the differential equation determining the membrane potential $V^m_n$ for each node is:

$$\frac{dV^m_n}{dt} = \left[ -i_{\mathrm{ion},n} + \frac{d\,\Delta x}{4\rho_i L}\left( \frac{V^m_{n-1} - 2V^m_n + V^m_{n+1}}{\Delta x^2} + \frac{V^e_{n-1} - 2V^e_n + V^e_{n+1}}{\Delta x^2} \right) \right] \Big/ c_m,$$

where $d$ is the constant fiber diameter, $\Delta x$ the node-to-node distance, $L$ the node length, $\rho_i$ the axoplasmatic resistivity, $c_m$ the capacity and $i_{\mathrm{ion}}$ the ionic currents. From this the activating function follows as:

$$f_n = \frac{d\,\Delta x}{4\rho_i L c_m}\,\frac{V^e_{n-1} - 2V^e_n + V^e_{n+1}}{\Delta x^2}.$$

In this case the activating function is proportional to the second-order spatial difference of the extracellular potential along the fiber. If $L = \Delta x$ and $\Delta x \to 0$, then:

$$f = \frac{d}{4\rho_i c_m}\,\frac{\partial^2 V^e}{\partial x^2}.$$

Thus $f$ is proportional to the second-order spatial differential along the fiber.
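The rule of thumb can be illustrated numerically. The following sketch (not from the literature; the point-source electrode model, current, resistivity and distances are illustrative assumptions) computes the extracellular potential of a point electrode above a straight fiber and the second-order spatial difference that the activating function is proportional to:

```python
import numpy as np

I = -1e-3    # electrode current in amperes (cathodic pulse, assumed)
rho_e = 3.0  # extracellular resistivity in ohm*m (assumed)
z = 1e-3     # electrode-to-fiber distance in meters (assumed)

x = np.linspace(-0.01, 0.01, 201)  # positions along the fiber (m)
r = np.sqrt(x**2 + z**2)           # distance from electrode to fiber point
V_e = rho_e * I / (4 * np.pi * r)  # point-source extracellular potential (V)

dx = x[1] - x[0]
# Second-order spatial difference, the driving term of the activating
# function: f_n ~ (V[n-1] - 2*V[n] + V[n+1]) / dx^2.
f = (V_e[:-2] - 2 * V_e[1:-1] + V_e[2:]) / dx**2

print("most depolarized site (m):", x[1:-1][np.argmax(f)])  # under the electrode
```

For a cathodic (negative) current the second difference peaks directly under the electrode, predicting depolarization there with flanking hyperpolarized side lobes, consistent with the interpretation below.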
Interpretation
Positive values of $f$ suggest a depolarization of the membrane potential and negative values a hyperpolarization of the membrane potential.
References
Bioelectrochemistry
Computational neuroscience | Activating function | Chemistry | 400 |
5,287,081 | https://en.wikipedia.org/wiki/Redu%20Station | Redu Station is an ESTRACK radio antenna station for communication with spacecraft. The station is located in Wallonia, about one kilometer away from the village of Redu, Belgium. The ground terminals provide tracking capabilities in C band, L-band, S-band, Ku band, and Ka band as well as provide in-orbit tests of telecommunication satellites.
External links
ESA webpage on ESTRACK, including links to all stations
ESA/ESTRACK Redu station page
ESA Redu ground station gallery
ESTRACK facilities
Buildings and structures in Luxembourg (Belgium)
Libin, Belgium | Redu Station | Astronomy | 121 |
18,717,526 | https://en.wikipedia.org/wiki/Thin-film%20drug%20delivery | Thin-film drug delivery uses a dissolving film or oral drug strip to administer drugs via absorption in the mouth (buccally or sublingually) and/or via the small intestines (enterically). A film is prepared using hydrophilic polymers that rapidly dissolves on the tongue or buccal cavity, delivering the drug to the systemic circulation via dissolution when contact with liquid is made.
Thin-film drug delivery has emerged as an advanced alternative to the traditional tablets, capsules and liquids often associated with prescription and OTC medications. Similar in size, shape and thickness to a postage stamp, thin-film strips are typically designed for oral administration, with the user placing the strip on or under the tongue (sublingual) or along the inside of the cheek (buccal). These drug delivery options allow the medication to bypass first-pass metabolism, thereby making the medication more bioavailable. As the strip dissolves, the drug can enter the blood stream enterically, buccally or sublingually. For systemic transmucosal drug delivery, the buccal mucosa is the preferred region compared to the sublingual mucosa. Oral thin films (oral dissolvable strips) address several of the disadvantages of tablets and capsules, such as dysphagia or the inability to adjust dosing to patient parameters, which often result in a lack of treatment adherence, especially in low-resource settings.
Different buccal delivery products have been marketed or are proposed for certain diseases like trigeminal neuralgia, Ménière's disease, diabetes, and addiction. There are many commercial non-drug product to use thin films like Mr. Mint and Listerine PocketPaks breath freshening strips. Since then, thin-film products for other breath fresheners, as well as a number of cold, flu, anti-snoring and gastrointestinal medications, have entered the marketplace. There are currently several projects in development that will deliver prescription drugs using the thin-film dosage form.
Formulation of oral drug strips involves the application of both aesthetic and performance characteristics such as strip-forming polymers, plasticizers, active pharmaceutical ingredient, sweetening agents, saliva stimulating agent, flavoring agents, coloring agents, stabilizing and thickening agents. From the regulatory perspectives, all excipients used in the formulation of oral drug strips should be approved for use in oral pharmaceutical dosage forms.
Oral drug strip development
Strip forming polymers
The polymer employed should be non-toxic, non-irritant and devoid of leachable impurities. It should have good wetting and spreadability properties. The polymer should exhibit sufficient peel, shear and tensile strengths. The polymer should be readily available and should not be very expensive. The film obtained should be tough enough that it is not damaged during handling or transportation. A combination of microcrystalline cellulose and maltodextrin has been used to formulate oral strips of piroxicam made by the hot-melt extrusion technique. Pullulan has been the most widely used film former (used in Listerine PocketPaks, Benadryl, etc.).
Plasticizers
Plasticizer is a vital ingredient of the OS formulation. It helps to improve the flexibility and reduces the brittleness of the strip. Plasticizer significantly improves the strip properties by reducing the glass transition temperature of the polymer. Glycerol, Propylene glycol, low molecular weight polyethylene glycols, phthalate derivatives like dimethyl, diethyl and dibutyl phthalate, Citrate derivatives such as tributyl, triethyl, acetyl citrate, triacetin and castor oil are some of the commonly used plasticizer excipients.
Active pharmaceutical ingredient
Since the size of the dosage form has limitation, high-dose molecules are difficult to be incorporated in OS. Generally 5%w/w to 30%w/w of active pharmaceutical ingredients can be incorporated in the oral strip.
Sweetening, flavoring and coloring agents
An important aspect of thin-film drug technology is its taste and color. A sweet taste is especially important for the pediatric population. Both natural and artificial sweeteners are used to improve the flavor of mouth-dissolving formulations, since flavor preferences vary from individual to individual. Pigments such as titanium dioxide are incorporated for coloring.
Stabilizing and thickening agents
The stabilizing and thickening agents are employed to improve the viscosity and consistency of dispersion or solution of the strip preparation solution or suspension before casting. Drug content uniformity is a requirement for all dosage forms, particularly those containing low dose highly potent drugs. To uniquely meet this requirement, thin film formulations contain uniform dispersions of drug throughout the whole manufacturing process. Since this criterion is essential for the quality of the thin film and final pharmaceutical dosage form, the use of Laser Scanning Confocal Microscopy (LSCM) was recommended to follow the manufacturing process.
Oral strips in development
An increasing number of film-based therapeutics are in development, including:
Sildenafil citrate, indicated for the treatment of erectile dysfunction (ED), is being developed for use by Cure Pharmaceutical.
Montelukast indicated for the treatment of dementia, asthma and allergy, is being developed variously for uses as a film by IntelGenx and Aquestive Therapeutics (formerly known as Monosol Rx).
Midatech, a company specializing in nanotechnology, is partnering with Aquestive Therapeutics to create a film-based insulin. (Sachs Associates. 5th Annual European Life Science CEP Forum for Partnering and Investing. March 6–7, 2012. Zurich, Switzerland.)
Rizatriptan indicated for the treatment of migraine, is being developed for use as a film by IntelGenx, Aquestive Therapeutics, and Zim Laboratories Ltd.
Aquestive Therapeutics is also developing a testosterone film-based therapeutic for the treatment of male hypogonadism. The product is currently in phase 1.
Undergraduate biomedical engineering students at Johns Hopkins University have created a new drug delivery system based on the thin-film technology used by a breath freshener. Laced with a vaccine against rotavirus, the strips could be used to provide the vaccine to infants in impoverished areas.
Other molecules like sildenafil citrate, tadalafil, methylcobalamin and vitamin D3 are also developed by IntelGenx Zim Laboratories Ltd.
Oak Therapeutics, a drug delivery company, has developed an oral thin film (oral disposable strip) for the treatment of xerostomia, and has isoniazid-rifapentine (for the treatment of latent tuberculosis) and abacavir-lamivudine-dolutegravir (ALD, for the treatment of HIV/AIDS) oral dissolvable strips in development, funded by grants from the National Institutes of Health (NIH).
References
Drug delivery devices
Dosage forms
Thin films | Thin-film drug delivery | Chemistry,Materials_science,Mathematics,Engineering | 1,472 |
58,448,717 | https://en.wikipedia.org/wiki/Aspergillus%20nomius | Aspergillus nomius is a species of fungus in the genus Aspergillus. It is from the Flavi section. The species was first described in 1987. It has been reported to produce aflatoxin B1, aflatoxin B2, aflatoxin G1, aflatoxin G2, aspergillic acid, kojic acid, nominine, paspaline, pseurotin, and tenuazonic acid. A. nomius has been identified as the cause of human infections.
Growth and morphology
A. nomius has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
nomius
Fungi described in 1987
Fungus species | Aspergillus nomius | Biology | 178 |
23,009,573 | https://en.wikipedia.org/wiki/Galileoscope | The Galileoscope is a small mass-produced refractor telescope, designed with the intention of increasing public interest in astronomy and science. It was developed for the International Year of Astronomy 2009. It is meant to be an inexpensive means by which millions of people can view the same things seen by Galileo Galilei, such as the craters of Earth's Moon, four of Jupiter's moons, and the Pleiades. The small telescope has an aperture of and a relatively long focal length of 500 mm, for a focal ratio of f/10.
Design and configurations
The Galileoscope uses a standard 1.25-inch (31.75 mm) focuser, giving the telescope a great deal of versatility, since this is the standard size for eyepieces used in most amateur and some professional telescopes. This means the Galileoscope can be used with relatively cheap extra eyepieces to produce magnifications up to 100, or even 200 times (with a 5 mm eyepiece in combination with the included 2× Barlow lens). However, a magnification of more than 125× would not be recommended for a scope this size, because its focal ratio limits sharpness beyond this. Additionally, the design of the slide-in/out focusing tube, without any gears or knobs, makes it nearly impossible to focus above the 125× limit.
It also utilizes achromatic glass lenses in the objective – the large lens in front – as well as in the eyepiece (four lenses of two types of high-quality plastic, in a Plössl configuration) to prevent chromatic aberration, producing a clearer image. Single lenses, as are often used in cheap scopes, refract light of different colors at different angles (chromatic aberration); in practice this means all images have bluish blurred edges on one side and reddish on the other, making the image very unclear. By using two types of glass for the two objective lenses, this is compensated to some degree, resulting in a sharper and clearer image. Depending on the configuration, 4, 6 or 8 lenses are used. The 4-lens configuration results in a telescope in some ways similar to Galileo's, with 17× magnification and a very small field of view. The 6-lens configuration provides 25× magnification, and the 8-lens configuration allows for 50× magnification. The user may easily switch between these configurations by changing the eyepiece.
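The quoted magnifications follow from dividing the 500 mm focal length by the eyepiece focal length; a sketch of the arithmetic (the 20 mm stock eyepiece is inferred from the 25× figure above):

```python
FOCAL_LENGTH_MM = 500  # Galileoscope objective focal length

def magnification(eyepiece_focal_mm: float, barlow: float = 1.0) -> float:
    """Objective focal length / eyepiece focal length, times any Barlow factor."""
    return FOCAL_LENGTH_MM / eyepiece_focal_mm * barlow

print(magnification(20))            # 25x with a 20 mm eyepiece
print(magnification(20, barlow=2))  # 50x adding the included 2x Barlow
print(magnification(5, barlow=2))   # 200x, beyond the practical ~125x limit
```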
See also
List of telescope types
References
External links
Specifications
Review by Phil Plait, author of BadAstronomy.com
Review on www.cloudynights.com was selected a gear of the year 2009
http://www.galileoscope.org - the Galileoscope website
Assembly instructions
The moon viewed through a Galileoscope
Telescope manufacturers | Galileoscope | Astronomy | 562 |
8,286,275 | https://en.wikipedia.org/wiki/Suspension%20polymerization | In polymer chemistry, suspension polymerization is a heterogeneous radical polymerization process that uses mechanical agitation to mix a monomer or mixture of monomers in a liquid phase, such as water, while the monomers polymerize, forming spheres of polymer. The monomer droplets (size of the order 10-1000 μm) are suspended in the liquid phase. The individual monomer droplets can be considered as undergoing bulk polymerization. The liquid phase outside these droplets help in better conduction of heat and thus tempering the increase in temperature.
While choosing a liquid phase for suspension polymerization, low viscosity, high thermal conductivity and low variation of viscosity with temperature are generally preferred. The primary advantage of suspension polymerization over other types of polymerization is that a higher degree of polymerization can be achieved without monomer boil-off. During this process, there is often a possibility of the monomer droplets sticking to each other and causing creaming in the solution. To prevent this, the mixture is carefully stirred or a protective colloid is often added. One of the most common suspending agents is polyvinyl alcohol (PVA). Usually, monomer conversion is complete, unlike in bulk polymerization, and the initiator used is monomer-soluble.
This process is used in the production of many commercial resins, including polyvinyl chloride (PVC), a widely-used plastic, styrene resins including polystyrene, expanded polystyrene, and high-impact polystyrene, as well as poly(styrene-acrylonitrile) and poly(methyl methacrylate).
Particle properties
Suspension polymerization is divided into two main types, depending on the morphology of the particles that result. In bead polymerization, the polymer is soluble in its monomer and the result is a smooth, translucent bead. In powder polymerization, the polymer is not soluble in its monomer and the resultant bead will be porous and irregular. The morphology of the polymer can be changed by adding a monomer diluent, an inert liquid that is insoluble with the liquid matrix. The diluent changes the solubility of the polymer in the monomer and gives a measure of control over the porosity of the resulting polymer.
The polymer beads that result can range in size from 100 nm to 5 mm. The size is controlled by the stirring speed, the volume fraction of monomer, the concentration and identity of the stabilizers used, and the viscosities of the different components. The following equation, derived empirically, summarizes some of these interactions:

$$\bar{d} = k \cdot \frac{D_v \cdot R \cdot \nu_m \cdot \varepsilon}{D_s \cdot N \cdot \nu_l \cdot C_s}$$

Here $\bar{d}$ is the average particle size, k includes parameters related to the reaction vessel design, Dv is the reaction vessel diameter, Ds is the diameter of the stirrer, R is the volume ratio of the monomer to the liquid matrix, N is the stirring speed, νm and νl are the viscosities of the monomer phase and liquid matrix respectively, ε is the interfacial tension of the two phases, and Cs is the concentration of stabilizer. The most common way to control the particle size is to change the stirring speed.
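Read as a scaling law, the relation implies, for example, that doubling the stirring speed roughly halves the droplet size. A minimal sketch under that reading (every number is an illustrative placeholder, not measured data):

```python
# Empirical particle-size relation from above; all arguments in SI-like units.
def mean_particle_size(k, Dv, Ds, R, N, visc_monomer, visc_matrix,
                       interfacial_tension, Cs):
    """Estimate of mean droplet size in suspension polymerization."""
    return (k * Dv * R * visc_monomer * interfacial_tension
            / (Ds * N * visc_matrix * Cs))

base = dict(k=1.0, Dv=1.0, Ds=0.33, R=0.4, visc_monomer=0.6e-3,
            visc_matrix=1.0e-3, interfacial_tension=0.04, Cs=0.01)

# Doubling the stirring speed N halves the predicted droplet size,
# which is why stirring speed is the most common control knob.
for N in (5.0, 10.0):
    print(N, mean_particle_size(N=N, **base))
```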
See also
Dispersion polymerization
Radical polymerization
Polymer
Polymerization
Step-growth polymerization
Emulsion polymerization
Superabsorbent polymer
References
Polymerization reactions
41,907,076 | https://en.wikipedia.org/wiki/Lists%20of%20constellations | The following lists of constellations are available:
IAU designated constellations – a list of the current, or "modern", constellations
Former constellations – a list of former constellations
Chinese constellations – traditional Chinese astronomy constellations
List of Nakshatras – sectors along the Moon's ecliptic
IAU designated constellations by area – a list of constellations ranked by area
IAU designated constellations by geographical visibility – a list of constellations listed by the latitudes from which they can be seen
See also
Lists of astronomical objects
Asterism (astronomy) | Lists of constellations | Astronomy | 118 |
67,910,765 | https://en.wikipedia.org/wiki/Hovlinc | Hovlinc RNA is a self-cleaving ribozyme of about 168 nucleotides found in a very long noncoding RNA (vlincRNA) in humans, chimpanzees, and gorillas. The word "hovlinc" comes from "hominin vlincRNA-located" RNA. Hovlinc is only a fourth known case of a ribozyme in human. Self-cleavage activity of Hovlinc has been shown in human, chimpanzees and bonobos, but is absent in gorillas, raising questions about Hovlinc's biological function and evolution.
There are only a few known examples of ribozymes in human, including Hovlinc, Mammalian CPEB3 ribozyme, Hammerhead ribozyme (HH9 and HH10) and B2 SINE ribozyme. Presumably Hovlinc acquired its self-cleaving activity about 10 to 13 million of years ago, which coincides with the last common ancestor of humans, gorillas, and chimpanzees. Hovlinc presents catalytic activity in hominids but not in gorillas where a mutation abolished the self-cleavage activity.
Hovlinc is a very structured RNA that contains four stem loops joined in a central loop, it also features large pseudoknots that help to bring together the second and fourth helices, which helps the RNA to get the more compacted structure that allows catalytic activity.
References
External links
Hovlinc family in Rfam
Ribozymes
Nucleotides
RNA
Genetics | Hovlinc | Chemistry,Biology | 336 |
15,659,728 | https://en.wikipedia.org/wiki/List%20of%20heritage%20railroads%20in%20the%20United%20States | This is a list of heritage railroads in the United States; there are currently no such railroads in two U.S. states, Mississippi and North Dakota.
Heritage railroads by state
Alabama
Heart of Dixie Railroad Museum, Shelby & Southern Railroad and Calera & Shelby Railroad
North Alabama Railroad Museum, Mercury and Chase Railroad
Wales West Light Railway
Alaska
Tanana Valley Railroad Museum in Pioneer Park (1899 engine)
White Pass and Yukon Route
Arizona
Arizona Railway Museum (No excursions listed)
Arizona State Railroad Museum (In planning stages)
Arizona Street Railway Museum (Phoenix Trolley Museum)
Grand Canyon Railway
McCormick-Stillman Railroad Park in Scottsdale
Old Pueblo Trolley
Superstition Narrow Gauge Railroad
Verde Canyon Railroad
Arkansas
Arkansas and Missouri Railroad
Eureka Springs and North Arkansas Railway
Fort Smith Trolley Museum
Metro Streetcar
California
Angels Flight
Billy Jones Wildcat Railroad, uses repurposed narrow gauge steam engines and is partly the inspiration for Walt Disney's theme park, Disneyland
Calico and Odessa Railroad
California State Railroad Museum
California Western Railroad, also called The Skunk Train
Disneyland Railroad (three locomotives are historic)
Fillmore and Western Railway - short line used by Hollywood film industry. (Lease agreement ended in 2021)
Ghost Town & Calico Railroad in Knott's Berry Farm
Golden Gate Railroad Museum (No excursions listed)
Napa Valley Wine Train
Nevada County Narrow Gauge Railroad & Transportation Museum
Niles Canyon Railway
Nut Tree Railroad
Pacific Coast Railroad in Santa Margarita
Pacific Southwest Railway Museum
Placerville & Sacramento Valley Railroad, oldest railroad west of the Mississippi
Port of LA Waterfront Red Car, a rebuilt part of the original Pacific Electric Railway system (Closed in 2015)
Poway–Midland Railroad
Sierra Railway - Railtown 1897 State Historic Park
Red Car Trolley
Redwood Valley Railway
Roaring Camp & Big Trees Narrow Gauge Railroad
Sacramento RiverTrain
Sacramento Southern Railroad
San Bernardino Railroad Historical Society (For AT&SF 3751 excursion trips)
San Diego Trolley Silver Line
San Francisco Municipal Railway
E Embarcadero streetcar line
F Market & Wharves streetcar line
San Francisco cable car system
San Jose Steam Railroad Museum (Proposed)
San Luis Obispo Railroad Museum
Santa Cruz, Big Trees and Pacific Railway
Sierra Railroad
Sonoma TrainTown Railroad
Southern California Railway Museum (Formerly known as the Orange Empire Railway Museum from 1956-2018)
Western Pacific Railroad Museum
Western Railway Museum
Yosemite Mountain Sugar Pine Railroad
Colorado
Colorado Railroad Museum
Como Roundhouse, Railroad Depot and Hotel Complex
Cripple Creek and Victor Narrow Gauge Railroad
Cumbres and Toltec Scenic Railroad
Durango and Silverton Narrow Gauge Railroad
Fort Collins trolley
Georgetown Loop Railroad
Leadville, Colorado and Southern Railroad
Manitou and Pikes Peak Railway
Pikes Peak Historical Street Railway Foundation
Platte Valley Trolley
Rio Grande Scenic Railroad (Currently in receivership)
Royal Gorge Route Railroad
Ski Train
Tiny Town Railroad
Connecticut
Connecticut Trolley Museum
Danbury Railway Museum
Essex Steam Train operated by the Valley Railroad Company
Naugatuck Railroad and Railroad Museum of New England
Shore Line Trolley Museum
Delaware
Wilmington and Western Railroad
Florida
Florida Railroad Museum
Gold Coast Railroad Museum
Kirby Family Farm Train
Orlando & Northwestern Railway (Closed in 2020)
Seminole Gulf Railway
Serengeti Express in Busch Gardens Tampa
Sugar Express
TECO Line Streetcar
Tavares, Eustis & Gulf Railroad (Closed in 2017)
Walt Disney World Railroad (four locomotives are historic)
Wildlife Express Train at Disney's Animal Kingdom
Georgia
Agrirama Logging Train
Azalea Sprinter
Blue Ridge Scenic Railway
Chickamauga Train from Tennessee Valley Railroad Museum
Georgia Coastal Railway
River Street Streetcar (Out of service)
Georgia State Railroad Museum
SAM Shortline Excursion Train
Six Flags Over Georgia Park Railroad in Six Flags Over Georgia
Southeastern Railway Museum
Stone Mountain Scenic Railroad
Tallulah Falls Railroad Museum
Hawaii
Grove Farm Plantation Museum Railroad
Hawaiian Railway
Kauai plantation railroad
Lahaina Ka'anapali & Pacific Railroad (Closed in 2014, currently defunct)
Idaho
Silverwood Theme Park
Thunder Mountain Line (Closed in 2015)
Illinois
Fox River Trolley Museum
Illinois Railway Museum
Monticello Railway Museum
Silver Creek & Stephenson Railroad
Indiana
Fort Wayne Railroad Historical Society (For NKP 765 excursion trips, future)
Hesston Steam Museum (For Hesston and Galena Creek excursions)
Hoosier Valley Railroad Museum
Indiana Railway Museum
National New York Central Railroad Museum
Ohio River Scenic Railway
Whitewater Valley Railroad
Nickel Plate Express
Iowa
Boone and Scenic Valley Railroad
Fourth Street Elevator
Midwest Central Railroad
Midwest Electric Railway
Kansas
Abilene and Smoky Valley Railroad
Midland Railway
Kentucky
Big South Fork Scenic Railway
Bluegrass Railroad and Museum
Kentucky Railway Museum
Kentucky Steam Heritage Corporation (For C&O 2716 excursion trips)
My Old Kentucky Dinner Train (Between Bardstown and Limestone Springs)
Louisiana
Canal Streetcar Line
Louisiana Steam Train Association
Old Hickory Railroad
Riverfront Streetcar Line
St. Charles Streetcar Line
Maine
Belfast & Moosehead Lake Railway
Boothbay Railway Village
Downeast Scenic Railroad
Maine Narrow Gauge Railroad Museum
Midcoast Railservice (Coastliner)
Sandy River and Rangeley Lakes Railroad
Seashore Trolley Museum
Wiscasset, Waterville and Farmington Railway
Maryland
B&O Railroad Museum
Baltimore Streetcar Museum
National Capital Trolley Museum
Walkersville Southern Railroad
Western Maryland Scenic Railroad
Massachusetts
Berkshire Scenic Railway Museum
Cape Cod Central Railroad
Edaville Railroad
Lowell National Historical Park Trolley Line
MBTA Mattapan Trolley
Michigan
Coopersville and Marne Railway
Huckleberry Railroad
Lake Linden & Torch Lake Railroad (Track is on museum grounds)
Little River Railroad
Michigan Transit Museum
Neo Wilson Memorial Railway
Quincy and Torch Lake Cog Railway
Southern Michigan Railroad Society
Steam Railroading Institute
Weiser Railroad
Minnesota
Como-Harriet Streetcar Line
Gopher State Railway Museum
Excelsior Streetcar Line
Ironhorse Central Railroad
Lake Superior and Mississippi Railroad
Minnesota Transportation Museum
North Shore Scenic Railroad
Northfield & Cannon Valley Railroad (Closed in 2007)
Osceola and St. Croix Valley Railway
Mississippi
Cleveland Train Museum
Missouri
Belton, Grandview and Kansas City Railroad
Branson Scenic Railway
Delmar Loop Trolley
Frisco Silver Dollar Line in Silver Dollar City
St. Louis, Iron Mountain and Southern Railway
Tommy G. Robertson Railroad in Six Flags St. Louis
Worlds of Fun Park Railroad in Worlds of Fun
National Museum of Transportation (trolley on grounds)
Montana
Alder Gulch Shortline Railroad
Central Montana Rail, Inc.
Nebraska
Fremont and Elkhorn Valley Railroad (Ended service)
Nebraska Railroad Museum (In transition to move to permanent home)
Omaha Zoo Railroad in Henry Doorly Zoo and Aquarium
Nevada
Nevada Northern Railway
Nevada State Railroad Museum
Nevada Southern Railroad Museum
Virginia and Truckee Railroad
New Hampshire
Conway Scenic Railroad
Granite State Scenic Railway (formerly known as Hobo Railroad)
Wilton Scenic Railroad
Winnipesaukee Scenic Railroad
Cafe Lafayette Dinner Train
Mount Washington Cog Railway
Silver Lake Railroad
White Mountain Central Railroad
New Jersey
Black River and Western Railroad
Woodstown Central Railroad
Cape May Seashore Lines
Delaware River Rail Excursions
New Jersey Museum of Transportation
Whippany Railway Museum
New Mexico
Cumbres and Toltec Scenic Railroad
New Mexico Steam Locomotive and Railroad Historical Society (For AT&SF 2926 excursion trips)
Sky Railway
New York
Adirondack Railroad
Arcade and Attica Railroad
Buffalo Cattaraugus and Jamestown Scenic Railway
Catskill Mountain Railroad
Cooperstown and Charlotte Valley Railroad
Delaware and Ulster Railroad
Medina Railroad Museum
New York Museum of Transportation
New York Transit Museum
Railroad Museum of Long Island
Rochester & Genesee Valley Railroad Museum
Saratoga Corinth & Hudson Railway
Saratoga and North Creek Railway (Closed in April 2018)
Trolley Museum of New York
Troy and New England Railway
North Carolina
Craggy Mountain Line
Great Smoky Mountains Railroad
Handy Dandy Railroad
New Hope Valley Railway
North Carolina Transportation Museum
Tweetsie Railroad
Ohio
Age of Steam Roundhouse (Several operating steam locomotives, but no excursions listed)
Cedar Point and Lake Erie Railroad in Cedar Point
Cuyahoga Valley Scenic Railroad
Hocking Valley Scenic Railway
Kings Island & Miami Valley Railroad in Kings Island
Lake Shore Railway Association (Lorain and West Virginia Railway)
Lebanon Mason Monroe Railroad
Toledo, Lake Erie and Western Railway
Zanesville and Western Scenic Railroad
Cincinnati Union Terminal
Oklahoma
El Reno Trolley
Oklahoma Railway Museum
Oregon
Astoria Riverfront Trolley
Eagle Cap Excursion Train
Mount Hood Railroad
Oregon Coast Scenic Railroad
Oregon Electric Railway Museum
Oregon Pacific Railroad
Oregon Rail Heritage Center
Santiam Excursion Train
Sumpter Valley Railway
Train Mountain Railroad
Willamette Shore Trolley
Pennsylvania
Allentown and Auburn Railroad
Bellefonte Historical Railroad
Colebrookdale Railroad
Dry Gulch Railroad in Hersheypark
Duquesne Incline
East Broad Top Railroad
Everett Railroad
Horseshoe Curve Incline
Johnstown Inclined Plane
Kiski Junction Railroad (Operations suspended 2016)
Lehigh Gorge Scenic Railway
Middletown and Hummelstown Railroad
Monongahela Incline
New Hope and Ivyland Railroad (New Hope Railroad)
Northern Central Railway of York
Oil Creek and Titusville Railroad
Pioneer Tunnel Coal Mine & Steam Train
Railroaders Memorial Museum
Reading Blue Mountain and Northern Railroad
Reading Railroad Heritage Museum
Rockhill Trolley Museum
SEPTA Route 15
Steamtown National Historic Site
Stewartstown Railroad
The Stourbridge Line
Strasburg Rail Road
Tioga Central Railroad (Closed in 2019)
Wanamaker, Kempton and Southern Railroad
Wawa and Concordville Railroad (Closed in 1968)
West Chester Railroad
Westmoreland Scenic Railroad (Closed in 2004)
Williams Grove Historical Steam Engine Association
Rhode Island
Newport and Narragansett Bay Railroad
South Carolina
Rockton, Rion and Western Railroad/South Carolina Railroad Museum/Rockton and Rion Railroad Historic District
South Dakota
Black Hills Central Railroad
Prairie Village, Herman and Milwaukee Railroad
Tennessee
Dollywood Express in Dollywood
Lookout Mountain Incline Railway
MATA Trolley (Memphis)
Southern Appalachia Railway Museum
Tennessee Central Railway Museum
Tennessee Valley Railroad Museum
Three Rivers Rambler
Texas
Austin Steam Train Association (presently runs only diesel equipment)
Galveston Island Trolley
Grapevine Vintage Railroad
Jefferson and Cypress Bayou Railway
Longhorn and Western Railroad (Texas Transportation Museum)
McKinney Avenue Transit Authority
Rosenberg Railroad Museum
Six Flags & Texas Railroad in Six Flags Over Texas
Six Flags Fiesta Texas Park Railroad in Six Flags Fiesta Texas
Texas Railroad Museum (Proposed)
Texas State Railroad
Utah
Golden Spike National Historic Site (Promontory Summit, Utah)
Heber Valley Railroad
Lagoon Wild Kingdom Train
Vermont
Green Mountain Railroad
Steamtown, U.S.A. (Moved to Scranton, Pennsylvania as Steamtown National Historic Site)
Virginia
Busch Gardens Railway in Busch Gardens Williamsburg
Virginia Scenic Railway (Buckingham Branch)
James River Rambler (Buckingham Branch)
Virginia Museum of Transportation (For N&W 611 excursion trips)
Washington
Anacortes Railway (Defunct)
Chehalis–Centralia Railroad
Chelatchie Prairie Railroad
George Benson Waterfront Streetcar Line (Closed in 2005)
Issaquah Valley Trolley (Closed in 2020, reopening proposed)
Inland Northwest Rail Museum
Lake Whatcom Railway (Out of Service since 2019)
Mt. Rainier Scenic Railroad
Pend Oreille Valley Railroad (Ended excursion service in 2016)
Snoqualmie Valley Railroad
Yakima Electric Railway Museum
West Virginia
Cass Scenic Railroad State Park
Durbin and Greenbrier Valley Railroad
Potomac Eagle Scenic Railroad
Wisconsin
East Troy Electric Railroad
Kenosha Streetcar
Kettle Moraine Scenic Railway (Closed in October 2001)
Lumberjack Steam Train
Mid-Continent Railway Museum
National Railroad Museum, Ashwaubenon (includes a small rail loop)
Osceola and St. Croix Valley Railway
Riverside and Great Northern Railway
Wisconsin Great Northern Railroad
Wyoming
Evanston Roundhouse
Union Pacific Steam Program (Cheyenne, Wyoming)
Wyoming Transportation Museum
Territories
Puerto Rico
Train of the South
See also
List of heritage railways
List of heritage railways in Canada
List of scenic railroads
Heritage streetcar
References
External links
United States
United States railway-related lists | List of heritage railroads in the United States | Engineering | 2,267 |
49,136,553 | https://en.wikipedia.org/wiki/Vector%20generalized%20linear%20model | In statistics, the class of vector generalized linear models (VGLMs) was proposed to
enlarge the scope of models catered for by generalized linear models (GLMs).
In particular, VGLMs allow for response variables outside the classical exponential family
and for more than one parameter. Each parameter (not necessarily a mean) can be transformed by a link function.
The VGLM framework is also large enough to naturally accommodate multiple responses; these are
several independent responses each coming from a particular statistical distribution with
possibly different parameter values.
Vector generalized linear models are described in detail in Yee (2015).
The central algorithm adopted is the iteratively reweighted least squares method,
for maximum likelihood estimation of usually all the model parameters. In particular,
Fisher scoring is implemented in this way, which, for most models,
uses the first and expected second derivatives of the log-likelihood function.
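As a concrete illustration (a minimal sketch, not the VGAM implementation), the following fits a Poisson regression — one of the simplest VGLMs, with a single parameter and a log link — by Fisher scoring, in which the expected information supplies the working weights of each IRLS step:

```python
import numpy as np

def fisher_scoring_poisson(X, y, iters=25):
    """Fit Poisson regression with a log link by Fisher scoring / IRLS.

    Each step solves the weighted least-squares system
        (X^T W X) beta = X^T W z,
    with W = diag(mu) the expected information weights and
    z = eta + (y - mu) / mu the working (adjusted) response.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta           # linear predictor
        mu = np.exp(eta)         # inverse of the log link
        W = mu                   # expected information weights
        z = eta + (y - mu) / mu  # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.poisson(np.exp(0.5 + 0.8 * X[:, 1]))
print(fisher_scoring_poisson(X, y))  # roughly [0.5, 0.8]
```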
Motivation
GLMs essentially cover one-parameter models from the classical exponential family,
and include 3 of the most important statistical regression models:
the linear model, Poisson regression for counts, and logistic regression
for binary responses.
However, the exponential family is far too limiting for regular data analysis.
For example, for counts, zero-inflation, zero-truncation and overdispersion are regularly
encountered, and the makeshift adaptations made to the binomial and
Poisson models in the form of quasi-binomial and
quasi-Poisson can be argued as being ad hoc and unsatisfactory.
But the VGLM framework readily handles models such as
zero-inflated Poisson regression,
zero-altered Poisson (hurdle) regression,
positive-Poisson regression, and
negative binomial regression.
As another example, for the linear model,
the variance of a normal distribution is relegated
to a scale parameter and is often treated
as a nuisance parameter (if it is considered a parameter at all).
But the VGLM framework allows the variance to be modelled using covariates.
As a whole, one can loosely think of VGLMs as GLMs that handle many models
outside the classical exponential family and are not restricted to estimating
a single mean.
During estimation,
rather than using weighted least squares
during IRLS, one uses generalized least squares to handle the
correlation between the M linear predictors.
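As a concrete illustration of the point about modelling the variance, here is a minimal R sketch, assuming the VGAM package and a hypothetical data frame mydata with columns y, x2 and x3; the zero = NULL argument, in versions of VGAM that support it, frees the scale parameter from being intercept-only:

library(VGAM)
# Both linear predictors (the mean and the log standard deviation)
# are allowed to depend on the covariates x2 and x3.
fit <- vglm(y ~ x2 + x3, uninormal(zero = NULL), data = mydata)
coef(fit, matrix = TRUE)  # one column of coefficients per linear predictor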
Data and notation
We suppose that the response or outcome or the dependent variable(s), $y = (y_1, \ldots, y_{Q_1})^T$, are assumed to be generated from a particular distribution. Most distributions are univariate, so that $Q_1 = 1$, and an example of $Q_1 = 2$ is the bivariate normal distribution.
Sometimes we write our data as
$(x_i, y_i)$ for $i = 1, \ldots, n$. Each of the n observations is considered to be
independent.
Then $y_i = (y_{i1}, \ldots, y_{iQ_1})^T$.
The $w_i$ are known positive prior weights, and often $w_i = 1$.
The explanatory or independent variables are written $x = (x_1, \ldots, x_p)^T$,
or when i is needed, as $x_i = (x_{i1}, \ldots, x_{ip})^T$.
Usually there is an intercept, in which case
$x_1 = 1$ or $x_{i1} = 1$.
Actually, the VGLM framework allows for S responses, each of dimension $Q_1$.
In the above S = 1. Hence the dimension of $y$ is more generally $Q = S \, Q_1$. One handles S responses by code such
as vglm(cbind(y1, y2, y3) ~ x2 + x3, ..., data = mydata) for S = 3.
To simplify things, most of this article has S = 1.
Model components
The VGLM usually consists of four elements:
1. A probability density function or probability mass function from some statistical distribution which has a log-likelihood $\ell$, first derivatives $\partial \ell / \partial \theta_j$ and an expected information matrix that can be computed. The model is required to satisfy the usual MLE regularity conditions.
2. Linear predictors $\eta_j$ described below to model each parameter $\theta_j$, $j = 1, \ldots, M$,
3. Link functions $g_j$ such that $\eta_j = g_j(\theta_j)$,
4. Constraint matrices $H_k$, for each $k = 1, \ldots, p$, of full column-rank and known.
Linear predictors
Each linear predictor is a quantity which incorporates
information about the independent variables into the model.
The symbol $\eta$ (Greek "eta")
denotes a linear predictor and a subscript j is used to denote the jth one.
It relates the jth parameter to the explanatory variables, and
is expressed as linear combinations (thus, "linear")
of unknown parameters,
i.e., of regression coefficients $\beta_{(j)k}$.
The jth parameter, $\theta_j$, of the distribution depends on the
independent variables through
$\eta_j = g_j(\theta_j) = \beta_{(j)1} x_1 + \cdots + \beta_{(j)p} x_p = \beta_j^T x.$
Let $\eta = (\eta_1, \ldots, \eta_M)^T$ be the vector of
all the linear predictors. (For convenience we always let
$\eta$ be of dimension M.)
Thus all the covariates comprising $x$ potentially affect all the parameters through the linear predictors $\eta_j$. Later, we will allow the linear predictors to be generalized to additive predictors, in which each $\eta_j$ is a sum of smooth functions of each $x_k$, and each function is estimated from the data.
Link functions
Each link function provides the relationship between a linear predictor and a
parameter of the distribution.
There are many commonly used link functions, and their choice can be somewhat arbitrary. It makes sense to try to match the domain of the link function to
the range of the distribution's parameter value.
Notice above that the notation $g_j$ allows a different link function for each parameter.
They have similar properties as with generalized linear models; for example,
common link functions include the logit link for parameters in $(0, 1)$,
and the log link for positive parameters. The VGAM package has the function identitylink() for parameters that can assume both positive and negative values.
Constraint matrices
More generally, the VGLM framework allows for any linear constraints between the regression coefficients of each linear predictor. For example, we may want to set some of them equal to 0, or constrain some of them to be equal. We have
$\eta(x) = \sum_{k=1}^{p} \beta_k x_k = \sum_{k=1}^{p} H_k \beta_k^{*} x_k,$
where the $H_k$ are the constraint matrices.
Each constraint matrix is known and prespecified, and has M rows, and between 1 and M columns. The elements of constraint matrices are finite-valued, and often they are just 0 or 1.
For example, the value 0 effectively omits that element while a 1 includes it.
It is common for some models to have a parallelism assumption, which means that
$H_k = 1_M$ (a column vector of M ones) for $k = 2, \ldots, p$, and
for some models, for $k = 1$ too.
The special case when $H_k = I_M$ for
all $k = 1, \ldots, p$ is known as trivial constraints; all the
regression coefficients are estimated and are unrelated.
And $\theta_j$ is known as an intercept-only parameter
if the jth row of all the $H_k$ is equal to $0^T$ for $k = 2, \ldots, p$, i.e., $\eta_j$ equals an intercept only. Intercept-only parameters are thus modelled as simply as possible, as a scalar.
The unknown parameters, $\beta^{*} = (\beta_1^{*T}, \ldots, \beta_p^{*T})^T$,
are typically estimated by the method of maximum likelihood.
All the regression coefficients may be put into a matrix as follows:
$B = (\beta_1 \; \beta_2 \; \cdots \; \beta_M),$
a $p \times M$ matrix, so that $\eta = B^T x$.
The xij facility
Even more generally, one can allow the value of a variable $x_k$
to have a different value for each linear predictor $\eta_j$.
For example, if each linear predictor is for a different time point then
one might have a time-varying covariate.
For example,
in discrete choice models, one has
conditional logit models,
nested logit models,
generalized logit models,
and the like, to distinguish between certain variants and
fit a multinomial logit model to, e.g., transport choices.
A variable such as cost differs depending on the choice, for example,
taxi is more expensive than bus, which is more expensive than walking.
The xij facility of VGAM allows one to
generalize $\eta_j(x)$
to $\eta_j(x_{ij})$.
The most general formula is
$\eta_i = o_i + \sum_{k=1}^{p} \mathrm{diag}(x_{ik1}, \ldots, x_{ikM}) \, H_k \beta_k^{*}.$
Here the $o_i$ is an optional offset, which translates
to an $n \times M$ matrix in practice.
The VGAM package has an xij argument that allows
the successive elements of the diagonal matrix to be inputted.
Software
Yee (2015) describes an implementation
in the R language
called VGAM.
Currently this software fits approximately 150 models/distributions.
The central modelling functions are vglm() and vgam().
The family argument is assigned a VGAM family function,
e.g., family = negbinomial for negative binomial regression,
family = poissonff for Poisson regression,
family = propodds for the proportional odds model or
cumulative logit model for ordinal categorical regression.
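A hedged sketch of typical calls (the data frame mydata and its columns are hypothetical, and exact family arguments may differ slightly across VGAM versions):

library(VGAM)
fit1 <- vglm(counts ~ x2 + x3, negbinomial, data = mydata)  # negative binomial regression
fit2 <- vglm(counts ~ x2 + x3, poissonff, data = mydata)    # Poisson regression
fit3 <- vglm(ordfac ~ x2 + x3, propodds, data = mydata)     # proportional odds model
summary(fit1)

Here ordfac would be an ordered factor, since propodds models cumulative probabilities.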
Fitting
Maximum likelihood
We are maximizing a log-likelihood
$\ell = \sum_{i=1}^{n} w_i \, \ell_i,$
where the $w_i$ are positive and known prior weights.
The maximum likelihood estimates can be found
using an iteratively reweighted least squares algorithm using
Fisher's scoring method, with updates of the form
$\beta^{(a+1)} = \beta^{(a)} + \mathcal{I}(\beta^{(a)})^{-1} \, u(\beta^{(a)}),$
where $u(\beta^{(a)})$ is the score vector (the first derivative of the log-likelihood) and $\mathcal{I}(\beta^{(a)})$ is
the Fisher information matrix at iteration a.
It is also called the expected information matrix, or EIM.
VLM
For the computation, the (small) model matrix X constructed
from the RHS of the formula in vglm()
and the constraint matrices are combined to form a big model matrix.
The IRLS is applied to this big X. This matrix is known as the VLM
matrix, since the vector linear model is the underlying least squares
problem being solved. A VLM is a weighted multivariate regression where the
variance-covariance matrix for each row of the response matrix is not
necessarily the same, and is known.
(In classical multivariate regression, all the errors have the
same variance-covariance matrix, and it is unknown).
In particular, the VLM minimizes the weighted sum of squares
$\sum_{i=1}^{n} (z_i - \eta_i)^T W_i (z_i - \eta_i).$
This quantity is minimized at each IRLS iteration.
The working responses (also known as pseudo-responses and adjusted dependent vectors) are
$z_i = \eta_i + W_i^{-1} \, \partial \ell_i / \partial \eta_i,$
where the $W_i$ are known as working weights or working weight matrices. They are symmetric and positive-definite. Using the EIM helps ensure that they are all positive-definite (and not just their sum) over much of the parameter space. In contrast, using Newton–Raphson would mean the observed information matrices would be used, and these tend to be positive-definite over a smaller subset of the parameter space.
Computationally, the Cholesky decomposition is used to invert the working weight matrices and to convert the overall generalized least squares problem into an ordinary least squares problem.
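The following toy R sketch, an illustration of the idea rather than VGAM's actual internals, shows how the Cholesky factor of one working weight matrix (here M = 2, with made-up values) turns a generalized least squares problem into an ordinary one:

# A symmetric positive-definite working weight matrix W, a working response z,
# and the corresponding two rows X of the VLM model matrix.
W <- matrix(c(2, 0.5, 0.5, 1), 2, 2)
z <- c(0.3, -1.2)
X <- diag(2)
U <- chol(W)         # upper-triangular factor: W = t(U) %*% U
z.tilde <- U %*% z   # premultiplying by U "whitens" the problem,
X.tilde <- U %*% X   # so ordinary least squares can be applied
lm.fit(X.tilde, z.tilde)$coefficients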
Examples
Generalized linear models
Of course, all generalized linear models are special cases of VGLMs.
But we often estimate all parameters by full maximum likelihood estimation rather
than using the method of moments for the scale parameter.
Ordered categorical response
If the response variable is an ordinal measurement with M + 1 levels, then one may fit a model of the form
$g(P(Y \leq j \mid x)) = \eta_j(x),$
for $j = 1, \ldots, M.$
Different links g lead to proportional odds models or ordered probit models,
e.g., the VGAM family function cumulative(link = probit) assigns a probit link to the cumulative
probabilities, therefore this model is also called the cumulative probit model.
In general they are called cumulative link models.
For categorical and multinomial distributions, the fitted values are an (M + 1)-vector of probabilities, with the property that all probabilities add up to 1. Each probability indicates the likelihood of occurrence of one of the M + 1 possible values.
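A hedged sketch of such a fit (hypothetical data; recent VGAM versions spell the link "probitlink", older ones "probit"):

library(VGAM)
# ordfac is an ordered factor with M + 1 levels; parallel = TRUE imposes
# the parallelism assumption on the cumulative probits.
fit <- vglm(ordfac ~ x2 + x3, cumulative(link = "probitlink", parallel = TRUE), data = mydata)
fitted(fit)  # an n x (M+1) matrix of probabilities, each row summing to 1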
Unordered categorical response
If the response variable is a nominal measurement,
or the data do not satisfy the assumptions of an ordered model, then one may fit a model of the following form:
$\eta_j = \log \left( P(Y = j \mid x) / P(Y = M + 1 \mid x) \right)$ for $j = 1, \ldots, M$. The above link is sometimes called the multilogit link,
and the model is called the multinomial logit model.
It is common to choose the first or the last level of the response as the
reference or baseline group; the above uses the last level.
The VGAM family function multinomial() fits the above model,
and it has an argument called refLevel that can be assigned
the level used as the reference group.
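For example, a minimal sketch (variable names hypothetical):

library(VGAM)
# Fit a multinomial logit model with the first level as the baseline group.
fit <- vglm(choice ~ x2 + x3, multinomial(refLevel = 1), data = mydata)
coef(fit, matrix = TRUE)  # M columns: log-odds of each level against level 1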
Count data
Classical GLM theory performs Poisson regression for count data.
The link is typically the logarithm, which is known as the canonical link.
The variance function is proportional to the mean:
$\mathrm{Var}(Y_i \mid x_i) = \phi \, \mu_i,$
where the dispersion parameter $\phi$ is typically fixed at exactly one. When it is not, the resulting quasi-likelihood model is often described as Poisson with overdispersion, or quasi-Poisson; $\phi$ is then commonly estimated by the method of moments and, as such,
confidence intervals for $\phi$ are difficult to obtain.
In contrast, VGLMs offer a much richer set of models to handle overdispersion with respect to the Poisson, e.g., the negative binomial distribution and several variants thereof. Another count regression model is the generalized Poisson distribution. Other possible models are the zeta distribution and the Zipf distribution.
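A short comparative sketch under these assumptions (hypothetical data; the family names below are those used in the VGAM package):

library(VGAM)
fit.pois <- vglm(y ~ x2, poissonff, data = mydata)    # classical Poisson regression
fit.nb   <- vglm(y ~ x2, negbinomial, data = mydata)  # allows overdispersion
fit.zip  <- vglm(y ~ x2, zipoisson, data = mydata)    # zero-inflated Poisson
c(AIC(fit.pois), AIC(fit.nb), AIC(fit.zip))

Because each model is fitted by full maximum likelihood, likelihood-based criteria such as AIC compare the fits directly, unlike in the quasi-Poisson approach.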
Extensions
Reduced-rank vector generalized linear models
RR-VGLMs are VGLMs where a subset of
the B matrix is of a lower rank.
Without loss of generality, suppose that $x = (x_1^T, x_2^T)^T$ is a partition of the covariate vector. Then the part of the B matrix corresponding to $x_2$ is of the form $B_2 = C A^T$, where $A$ and $C$
are thin matrices (i.e., with R columns), e.g., vectors if the rank R = 1. RR-VGLMs potentially offer several advantages when applied to certain
models and data sets. Firstly, if M and p are large then the number of regression coefficients
that are estimated by VGLMs is large (of order $M p$). Then RR-VGLMs can reduce the number of estimated regression coefficients enormously if R is low, e.g., R = 1
or R = 2. An example of a model where this is particularly useful is the RR-multinomial logit model, also known as the stereotype model.
Secondly, $\nu = C^T x_2$
is an R-vector of latent variables, and often these can be usefully interpreted.
If R = 1 then we can write $\nu = c^T x_2$,
so that the latent variable comprises loadings on the explanatory variables in $x_2$.
It may be seen that RR-VGLMs take optimal linear combinations of the variables in $x_2$,
and then a VGLM is fitted to the explanatory variables $(x_1^T, \nu^T)^T$. Thirdly, a biplot can be produced if R = 2, and this allows the model to be visualized.
It can be shown that RR-VGLMs are simply VGLMs where the constraint matrices for
the variables in $x_2$ are unknown and to be estimated.
It then transpires that $H_k = A$ for
such variables.
RR-VGLMs can be estimated by an alternating algorithm which fixes $A$
and estimates $C$, and then fixes $C$ and estimates $A$, etc.
In practice, some uniqueness constraints are needed for $A$
and/or $C$. In VGAM, the rrvglm() function uses corner constraints by default, which means that the top R rows of $A$ are set to $I_R$. RR-VGLMs were proposed in 2003.
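As a sketch, a rank-1 stereotype model (the RR-multinomial logit model mentioned below) might be fitted as follows, with hypothetical data:

library(VGAM)
# Rank = 1 replaces the coefficient block for x2, x3, x4 by the
# rank-1 product C %*% t(A).
fit <- rrvglm(choice ~ x2 + x3 + x4, multinomial, data = mydata, Rank = 1)
coef(fit, matrix = TRUE)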
Two to one
A special case of RR-VGLMs is when R = 1 and M = 2. This is dimension reduction from 2 parameters to 1 parameter. Then it can be shown that
$\eta_2 = t_1 + a_{21} \, \eta_1,$
where the elements $t_1$ and $a_{21}$ are estimated. Equivalently,
$\theta_2 = g_2^{-1}(t_1 + a_{21} \, g_1(\theta_1)).$
This formula provides a coupling of $\eta_1$ and $\eta_2$. It induces a relationship between two parameters of a model that can be useful, e.g., for modelling a mean-variance relationship. Sometimes there is some choice of link functions, therefore it offers a little flexibility when coupling the two parameters, e.g., a logit, probit, cauchit or cloglog link for parameters in the unit interval. The above formula is particularly useful for the negative binomial distribution, so that the RR-NB has variance function
$\mathrm{Var}(Y \mid x_2) = \mu + \delta_1 \, \mu^{\delta_2}.$
This has been called the NB-P variant by some authors. The $\delta_1$ and $\delta_2$ are estimated, and it is also possible to obtain approximate confidence intervals for them too.
Incidentally, several other useful NB variants can also be fitted, with the help of selecting the right combination of constraint matrices. For example, NB-1, NB-2 (the negbinomial() default) and NB-H; see Yee (2014) and Table 11.3 of Yee (2015).
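A hedged sketch of fitting the RR-NB (NB-P) variant (hypothetical data; zero = NULL lets the second linear predictor, for the size parameter, depend on covariates so that the rank-1 reduction has something to couple):

library(VGAM)
fit <- rrvglm(y ~ x2 + x3, negbinomial(zero = NULL), data = mydata, Rank = 1)
Coef(fit)  # estimates from which delta1 and delta2 can be derived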
RCIMs
The subclass of row-column interaction models
(RCIMs) has also been proposed; these are a special type of RR-VGLM.
RCIMs apply only to a matrix Y response and there are
no explicit explanatory variables $x$.
Instead, indicator variables for each row and column are explicitly set up, and an order-R
interaction of the form $\sum_{r=1}^{R} c_{ir} a_{jr}$ is allowed.
Special cases of this type of model include the Goodman RC association model
and the quasi-variances methodology as implemented by the qvcalc R package.
RCIMs can be defined as a RR-VGLM applied to Y with
$g(\theta_{ij}) = \beta_0 + \alpha_i + \gamma_j + \sum_{r=1}^{R} c_{ir} a_{jr}.$
For the Goodman RC association model, we have $\theta_{ij} = \mu_{ij}$ with a log link, so that
if R = 0 then it is a Poisson regression fitted to a matrix of counts with row effects and column effects; this has a similar idea to a no-interaction two-way ANOVA model.
Another example of an RCIM is if g is the identity link and the parameter is the median of an asymmetric Laplace distribution; then a no-interaction RCIM is similar to a technique called median polish.
In VGAM, rcim() and grc() functions fit the above models.
And also Yee and Hadi (2014)
show that RCIMs can be used to fit unconstrained quadratic ordination
models to species data; this is an example of indirect gradient analysis in
ordination (a topic in statistical ecology).
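A minimal sketch using the grc() function mentioned above (the matrix ymat of counts is hypothetical):

library(VGAM)
# Goodman's RC association model: row effects, column effects, and a
# rank-1 multiplicative interaction, fitted to a matrix of counts.
fit <- grc(ymat, Rank = 1)
summary(fit)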
Vector generalized additive models
Vector generalized additive models (VGAMs) are a major
extension to VGLMs in which the linear predictor is not restricted to be
linear in the covariates but is the
sum of smoothing functions applied to the $x_k$:
$\eta_j(x) = \beta_{(j)1} + \sum_{k=2}^{p} f_{(j)k}(x_k),$
where the $f_{(j)k}$ are smooth functions of $x_k$. These are M additive predictors.
Each smooth function is estimated from the data.
Thus VGLMs are model-driven while VGAMs are data-driven.
Currently, only smoothing splines are implemented in the VGAM package.
For M > 1 they are actually vector splines, which estimate the component functions
$f_{(1)k}(x_k), \ldots, f_{(M)k}(x_k)$ simultaneously.
Of course, one could use regression splines with VGLMs.
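A hedged sketch of a VGAM fit (hypothetical data; s() requests a smoothing spline with the stated degrees of freedom):

library(VGAM)
# x2 enters each additive predictor through a smooth function; x3 stays linear.
fit <- vgam(y ~ s(x2, df = 3) + x3, negbinomial, data = mydata)
plot(fit, se = TRUE)  # displays the estimated component functions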
The motivation behind VGAMs is similar to
that of
Hastie and Tibshirani (1990)
and
Wood (2017).
VGAMs were proposed in 1996.
Currently, work is being done to estimate VGAMs using P-splines
of Eilers and Marx (1996).
This allows for several advantages over using smoothing splines and vector backfitting, such as the
ability to perform automatic smoothing parameter selection more easily.
Quadratic reduced-rank vector generalized linear models
These add on a quadratic in the latent variable to the RR-VGLM class.
The result is that a bell-shaped curve can be fitted to each response, as
a function of the latent variable.
For R = 2, one has bell-shaped surfaces as a function of the 2
latent variables, somewhat similar to a
bivariate normal distribution.
Particular applications of QRR-VGLMs can be found in ecology,
in a field of multivariate analysis called ordination.
As a specific rank-1 example of a QRR-VGLM,
consider Poisson data with S species.
The model for Species s is the Poisson regression
$\log \mu_s(\nu) = \eta_s(\nu) = \beta_{(s)1} + \beta_{(s)2} \, \nu + \beta_{(s)3} \, \nu^2 = \log \mu_s^{*} - \tfrac{1}{2} \left( \frac{\nu - u_s}{t_s} \right)^2,$
for $s = 1, \ldots, S$. The right-most parameterization, which uses the symbols $\mu_s^{*}$, $u_s$ and $t_s$, has particular ecological meaning, because they relate to the species' abundance, optimum and tolerance respectively. For example, the tolerance is a measure of niche width, and a large value means that that species can live in a wide range of environments. In the above equation, one would need $\beta_{(s)3} < 0$ in order
to obtain a bell-shaped curve.
QRR-VGLMs fit Gaussian ordination models by maximum likelihood estimation, and
they are an example of direct gradient analysis.
The cqo() function in the VGAM package currently
calls optim() to search for the optimal
$C$, and given that, it is easy to calculate
the site scores $\nu_i = C^T x_{2i}$ and fit a suitable generalized linear model to them.
The function is named after the acronym CQO, which stands for
constrained quadratic ordination: the constrained is for direct
gradient analysis (there are environmental variables, and a linear combination
of these is taken as the latent variable) and the quadratic is for the
quadratic form in the latent variables $\nu$
on the $\eta$ scale.
Unfortunately QRR-VGLMs are sensitive to outliers in both the response
and explanatory variables, as well as being computationally expensive, and
may give a local solution rather than a global solution.
QRR-VGLMs were proposed in 2004.
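A hedged sketch of a rank-1 CQO fit for counts of S = 3 species (all names hypothetical; because of the local-solution issue just mentioned, results can depend on random initial values, so setting a seed is advisable):

library(VGAM)
set.seed(1)
fit <- cqo(cbind(sp1, sp2, sp3) ~ x2 + x3 + x4, poissonff, data = envdata, Rank = 1)
Coef(fit)  # reports, among other things, each species' optimum and tolerance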
See also
generalized linear models
R (software)
Regression analysis
Statistical model
Natural exponential family
References
Further reading
Actuarial science
Regression models | Vector generalized linear model | Mathematics | 4,124 |
6,419,756 | https://en.wikipedia.org/wiki/Petrophysics | Petrophysics (from the Greek πέτρα, petra, "rock" and φύσις, physis, "nature") is the study of physical and chemical rock properties and their interactions with fluids.
A major application of petrophysics is in studying reservoirs for the hydrocarbon industry. Petrophysicists work together with reservoir engineers and geoscientists to understand the porous media properties of the reservoir, in particular how the pores are interconnected in the subsurface, which controls the accumulation and migration of hydrocarbons. Some fundamental petrophysical properties determined are lithology, porosity, water saturation, permeability, and capillary pressure.
The petrophysicist's workflow measures and evaluates these petrophysical properties through well-log interpretation (i.e. at in-situ reservoir conditions) and core analysis in the laboratory. While the well is being drilled, or afterwards, different well-log tools are used to measure the petrophysical and mineralogical properties through radioactivity-based and acoustic technologies in the borehole. In addition, core plugs are taken from the well as sidewall core or whole core samples. These studies are combined with geological, geophysical, and reservoir engineering studies to model the reservoir and determine its economic feasibility.
While most petrophysicists work in the hydrocarbon industry, some also work in the mining, water resources, geothermal energy, and carbon capture and storage industries. Petrophysics is part of the geosciences, and its studies are used by petroleum engineering, geology, geochemistry, exploration geophysics and others.
Fundamental petrophysical properties
The following are the fundamental petrophysical properties used to characterize a reservoir:
Lithology: A description of the rock's physical characteristics, such as grain size, composition and texture. By studying the lithology of local geological outcrops and core samples, geoscientists can use a combination of log measurements, such as natural gamma, neutron, density and resistivity, to determine the lithology down the well.
Porosity: The fraction of pore-space volume relative to the bulk rock volume, symbolized as $\phi$. It is typically calculated using data from an instrument that measures the reaction of the rock to bombardment by neutrons or gamma rays but can also be derived from sonic and NMR logging. A helium porosimeter is the main technique to measure grain volume and porosity in the laboratory.
Water saturation: The fraction of the pore space occupied by water. This is typically calculated using data from an instrument that measures the resistivity of the rock and applying empirical or theoretical water saturation models; the most widely used worldwide is Archie's (1942) model (see the sketch after this list). It is known by the symbol $S_w$.
Permeability: The quantity of fluid (water or hydrocarbon) that can flow through a rock as a function of time and pressure, related to how interconnected the pores are; it is known by the symbol $k$. Formation testing is the only tool that can directly measure a rock formation's permeability down a well. When such a measurement is unavailable, which is common, an estimate for permeability can be derived from empirical relationships with other measurements such as porosity, NMR and sonic logging. Darcy's law is applied in the laboratory to measure the core plug permeability with an inert gas or liquid (i.e. one that does not react with the rock).
Formation thickness (h) of rock with enough permeability to deliver fluids to a well bore, this property is often called “net reservoir rock.” In the oil and gas industry, another quantity “net pay” is computed which is the thickness of rock that can deliver hydrocarbons to the well bore at a profitable rate.
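As referenced in the water saturation item above, here is a minimal R sketch of Archie's (1942) model; the constants a, m and n below are common textbook defaults rather than universal values, and the inputs are illustrative:

# Sw = ((a / phi^m) * (Rw / Rt))^(1/n), with porosity phi (fraction),
# formation water resistivity Rw and true formation resistivity Rt (ohm.m).
archie_sw <- function(phi, Rw, Rt, a = 1, m = 2, n = 2) {
  ((a / phi^m) * (Rw / Rt))^(1 / n)
}
archie_sw(phi = 0.20, Rw = 0.05, Rt = 10)  # about 0.35, i.e. 35% water saturation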
Rock mechanical properties
The rock's mechanical or geomechanical properties are also used within petrophysics to determine the reservoir strength, elastic properties, hardness, ultrasonic behaviour, index characteristics and in situ stresses.
Petrophysicists use acoustic and density measurements of rocks to compute their mechanical properties and strength. They measure the compressional (P) wave velocity of sound through the rock and the shear (S) wave velocity and use these with the density of the rock to compute the rock's compressive strength, which is the compressive stress that causes a rock to fail, and the rock's flexibility, which is the relationship between stress and deformation for a rock. Converted-wave analysis also helps determine the subsurface lithology and porosity.
Geomechanics measurements are useful for drillability assessment, wellbore and open-hole stability design, log strength and stress correlations, and formation and strength characterization. These measurements are also used to design dams, roads, foundations for buildings, and many other large construction projects. They can also help interpret seismic signals from the Earth, either manufactured seismic signals or those from earthquakes.
Methods of petrophysical analysis
Core analysis
As core samples are the only direct evidence of the reservoir's rock structure, core analysis provides the "ground truth" data, measured in the laboratory, for understanding the key petrophysical features of the in-situ reservoir. In the petroleum industry, rock samples are retrieved from the subsurface and measured by the core laboratories of oil or service companies. This process is time-consuming and expensive; thus, it can only be applied to some of the wells drilled in a field. Also, proper design, planning and supervision decrease data redundancy and uncertainty. Client and laboratory teams must work in alignment to optimise the core analysis process.
Well-logging
Well logging is a relatively inexpensive method of obtaining petrophysical properties downhole. Measurement tools are conveyed downhole using either wireline or logging-while-drilling (LWD) methods.
An example of wireline logs is shown in Figure 1. The first “track” shows the natural gamma radiation level of the rock. The gamma radiation level “log” shows increasing radiation to the right and decreasing radiation to the left. The rocks emitting less radiation have more yellow shading. The detector is very sensitive, and the amount of radiation is very low. In clastic rock formations, rocks with smaller amounts of radiation are more likely to be coarser-grained and have more pore space, while rocks with higher amounts of radiation are more likely to have finer grains and less pore space.
The second track in the plot records the depth below the reference point, usually the kelly bushing or rotary table, in feet, so these rock formations are 11,900 feet below the Earth's surface.
In the third track, the electrical resistivity of the rock is presented. The water in this rock is salty, and the electrolytes within the pore water conduct electricity, resulting in lower rock resistivity. This also indicates increased water saturation and decreased hydrocarbon saturation.
The fourth track shows the computed water saturation, both as “total” water (including the water bound to the rock) in magenta and the “effective water” or water that is free to flow in black. Both quantities are given as a fraction of the total pore space.
The fifth track shows the fraction of the total rock that is pore space filled with fluids (i.e. porosity). The display of the pore space is divided into green for oil and blue for movable water. The black line shows the fraction of the pore space which contains either water or oil that can move, or be "produced" (i.e. effective porosity), while the magenta line indicates the total porosity, meaning that it also includes the water that is permanently bound to the rock.
The last track represents the rock lithology, divided into sandstone and shale portions. The yellow pattern represents the fraction of the rock (excluding fluids) composed of coarser-grained sandstone. The gray pattern represents the fraction composed of finer-grained material, i.e. "shale." The sandstone is the part of the rock that contains the producible hydrocarbons and water.
Modelling
Reservoir models are built by reservoir engineers in specialised software, using the petrophysical dataset prepared by the petrophysicist, to estimate the amount of hydrocarbon present in the reservoir, the rate at which that hydrocarbon can be produced to the Earth's surface through wellbores, and the fluid flow in rocks. Similar models in the water resource industry compute how much water can be produced to the surface over long periods without depleting the aquifer.
Rock volumetric model for shaly sand formation
Shaly sand is a term referring to a mixture of shale or clay and sandstone. A significant portion of clay minerals and silt-size particles results in a fine-grained sandstone with higher density and rock complexity.
The shale/clay volume is an essential petrophysical parameter to estimate since it contributes to the rock bulk volume and must be correctly defined for accurate porosity and water saturation evaluation. As shown in Figure 2, for modelling clastic rock formations there are four components typically assumed for shaly or clayey sands: the rock matrix (grains), the clay portion that surrounds the grains, water, and hydrocarbons. The two fluids are stored only in the pore space of the rock matrix.
Due to the complex microstructure, for a water-wet rock, the following terms comprise a clastic reservoir formation:
Vma = volume of matrix grains.
Vdcl = volume of dry clay.
Vcbw = volume of clay bound water.
Vcl = volume of wet clay (Vdcl + Vcbw).
Vcap = volume of capillary bound water.
Vfw = volume of free water.
Vhyd = volume of hydrocarbon.
ΦT = Total porosity (PHIT), which includes both connected and non-connected pore throats.
Φe = Effective porosity, which includes only the interconnected pore throats.
Vb = bulk volume of the rock.
Key equations:
Vma + Vcl + Vcap + Vfw + Vhyd = 1
Rock matrix volume + wet clay volume + capillary-bound water volume + free water volume + hydrocarbon volume = bulk rock volume (i.e. the volume fractions, normalized by Vb, sum to one)
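A small R sketch of this volume balance under the conventions above (the numbers are invented for illustration; all quantities are expressed as fractions of the bulk volume Vb):

Vma  <- 0.60   # matrix grains
Vcl  <- 0.15   # wet clay (dry clay + clay-bound water)
Vcap <- 0.05   # capillary-bound water
Vfw  <- 0.08   # free water
Vhyd <- 0.12   # hydrocarbon
Vma + Vcl + Vcap + Vfw + Vhyd   # the fractions sum to 1
phi_e <- Vcap + Vfw + Vhyd      # effective porosity under this model (excludes clay-bound water)
phi_e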
Scholarly societies
The Society of Petrophysicists and Well Log Analysts (SPWLA) is an organisation whose mission is to increase the awareness of petrophysics, formation evaluation, and well logging best practices in the oil and gas industry and the scientific community at large.
See also
Archie's law
Formation evaluation
Gardner's relation
Petrology
References
Further reading
External links
Petrophysics Forum
Crains Petrophysical Handbook
RockPhysicists
Society of Petrophysicists and Well Log Analysts (SPWLA)
Petroleum engineering
Applied and interdisciplinary physics | Petrophysics | Physics,Engineering | 2,238 |
41,162,624 | https://en.wikipedia.org/wiki/Human%20mating%20strategies | In evolutionary psychology and behavioral ecology, human mating strategies are a set of behaviors used by individuals to select, attract, and retain mates. Mating strategies overlap with reproductive strategies, which encompass a broader set of behaviors involving the timing of reproduction and the trade-off between quantity and quality of offspring.
Relative to those of other animals, human mating strategies are unique in their relationship with cultural variables such as the institution of marriage. Humans may seek out individuals with the intention of forming a long-term intimate relationship, marriage, casual relationship, or friendship. The human desire for companionship is one of the strongest human drives. It is an innate feature of human nature and may be related to the sex drive. The human mating process encompasses the social and cultural processes whereby one person may meet another to assess suitability, the courtship process and the process of forming an interpersonal relationship. Commonalities, however, can be found between humans and nonhuman animals in mating behavior, as in the case of animal sexual behavior in general and assortative mating in particular.
Theoretical background
Parental investment
Research on human mating strategies is guided by the theory of sexual selection, and in particular, Robert Trivers' concept of parental investment. Trivers defined parental investment as "any investment by the parent in an individual offspring that increases the offspring's chance of surviving (and hence reproductive success) at the cost of the parent's ability to invest in other offspring." The support given to each offspring typically differs between the father and mother. Trivers posited that it is the differential parental investment between males and females that drives the process of sexual selection. In turn, sexual selection leads to the evolution of sexual dimorphism in mate choice, competitive ability, and courtship displays (see secondary sex characteristics).
Minimum parental investment is the least required care for successful reproduction. In humans, females have a higher minimum parental investment. They have to invest in internal fertilization, placentation, and gestation, followed by childbirth and lactation. While human males can invest heavily in their offspring as well, their minimum parental investment is still lower than that of females.
This same concept can be looked at from an economic perspective regarding the costs of engaging in sexual relations. Females incur the higher costs, as they carry the possibility of becoming pregnant among other costs. Conversely, males have comparatively minimal costs of having a sexual encounter. Therefore, evolutionary psychologists have predicted a number of sex differences in human mating psychologies.
Life history strategies
Life history theory helps to explain differences in timing of sexual relationships, quantity of sexual partners, and parental investment. According to this theory, organisms have a limited supply of energy, which they use to develop their bodies. This energy is put on a theoretical spectrum of how organisms prioritize energy use. At one end of the spectrum, the organism prioritizes speeding up physical development and reaching sexual maturation quickly, which is deemed a "fast" strategy. Organisms implementing a "fast" strategy seek to have sexual relationships earlier, multiple mates, and to invest less energy in their offspring. On the other end of the spectrum is "slow" strategy, in which organisms prioritize physical development. "Slow" strategy organisms seek to have sexual relationships later, fewer mates, and invest more heavily in their offspring.
Generally, fast strategies are developed in populations that are r-selected (r being the maximal intrinsic rate of natural increase), and slow strategies are developed in populations that are K-selected (K being the carrying capacity of the population, or how many individuals their environment can support). Species that are r-selected tend to reproduce faster, to be specialists, and to be smaller. K-selected organisms reproduce less over the course of their lifetimes, but individuals live longer; they are more likely to be larger and to be generalists. Species exist along an r-K continuum, rather than being one or the other. Humans are considered a K-selected species, meaning that on the whole, they pursue "slow" strategies relative to other species.
Life history characteristics include age at sexual maturity, gestation period, birth weight, litter size, postnatal growth rates, breastfeeding duration, birth spacing, length of juvenile dependence, level of parental investment, adult body size, and longevity. Variation in these traits between individuals, according to life-history theory, is due to homeostasis, reproduction, and growth. For example, if more of one species' resources are going towards reproduction than physical growth, then the age at which they reach sexual maturity will be earlier than a species that devotes more energy to physical growth.
These strategies are unconscious and help increase the organism's reproductive success in a given environment. Early childhood environments may play a part in which strategy a person unconsciously pursues. In a hostile environment, risk and unpredictability is increased and therefore survival is a higher priority. A "fast" strategy is more likely to be pursued by populations living in hostile environments in order to reach maturity and reproduce quickly. In less risky environments, populations are more likely to pursue a "slow" strategy to physically develop first and then reproduce. This concept has been applied to humans as well, though there are differences in life history strategy application both between and within species.
Challenges with applying life history strategies to humans
The binary between "fast" and "slow" mating strategies as applied to humans can be misleading. Those who pursue "fast" strategies may face criticism in the form of cross-cultural contempt or ethical and/or religious critique. For example, in societies which portray women as more likely to pursue slow strategies, female sexual behavior may be taboo.
One theory, "psychosocial acceleration theory", refers to the predictions about human development of "fast" or "slow" strategies given individuals' experience of their environment while young. It predicts that people born into harsher environments (in which they have less control over the threats around them) are more likely to reach sexual maturity faster and to reproduce earlier, due to phenotypic plasticity (external cues prompting change in physiology and behavior). Evolutionary psychologists use three metrics to describe environments that predict which life history strategy people will choose: resource availability, harshness, and unpredictability. Harshness and unpredictability come into play when resource availability is satisfied, because without resources, individuals have few opportunities to mature and reproduce. For example, in humans, low resource availability could refer to food insecurity, and unpredictability could refer to frequently moving houses or switching schools. Smoking, poor health status, and low personal care are all traits that have been shown to be correlated with earlier sexual experiences, earlier births, and more short-term sex partners. Although psychologists describe these traits as a "cascade", in which a set of childhood experiences and traits affect later-in-life sexual behavior in specific, grouped ways, studies show that sexual consequences can vary across cultures and class and might not be as linearly related to childhood experiences as has been assumed.
Human life history theories in psychology focus on behavioral choices like mate choice and parenting effort (see Evolutionary Anthropology), while in evolutionary ecology, they focus on allocation of energy to maximize success and reproduction.
Several studies undermine the psychological application of life history theories in humans. For example, it has been found that extrinsic mortality (the harshness of an individual's environment) does not directly affect whether people adopt a fast or slow strategy. The reason that extrinsic mortality appears to do so is that it increases competition within populations: it is more accurate to say that harsh environments create situations of high competition, in which people are more likely to adopt fast strategies to maximize their chances of reproduction, than it is to say that individuals in harsh environments adopt fast strategies because otherwise they would die before reproducing.
Another study questioning life history theory in humans was a meta-analysis of pace-of-life studies. The pace-of-life syndrome hypothesis relates environmental factors (unpredictable environments, high predation, etc.) to behavior (earlier mating, more sexual partners, etc.), thus creating a link between behavior, phenotype, and the environment. The analysis, however, suggested that pace-of-life studies had few significant findings regarding differences between individuals due to environment. This means that the link between individuals experiencing difficult environments growing up and their later sexual behavior may be tenuous, or else too muddied with confounding variables to track.
Behavior sciences might not, in general, be a good framework with which to consider life-history theory. Biological life-history theory is based on tradeoffs between energy expenditure and the benefits of reproduction, and these tradeoffs are difficult to measure in humans due to: inability to ascertain tradeoffs among phenotypically different individuals, poor models for tradeoffs, and a reliance on allo-parental investment. It has been proposed that life history theory in humans could be made more useful by considering the principle of time preferences shared between evolutionary biology and psychology, recognizing that individuals will see their assets as more valuable in the present than in the future. Individuals who place a higher "discount rate" on their reproductive abilities, or see it as much more valuable now than later, are more likely to mate earlier and pursue fast strategies.
Sex similarities
Assortative mating
Human mating is inherently non-random. Despite the common trope "opposites attract", humans generally prefer mates who share the same or similar traits, such as genetics, quantitative phenotypes like height or body-mass index, skin pigmentation, the level of physical attractiveness, disease risk (including cancers and mental disorders), race or ethnicity, facial features, socioeconomic factors (such as (potential) income level and occupational prestige), cultural backgrounds, moral values, religious beliefs, political orientation, (perceived) personality traits (such as conscientiousness or extraversion), behavioral characteristics (such as the level of generosity or the propensity for alcoholism), educational attainment, and IQ or general intelligence. Furthermore, in the past, marriage across status lines was more common. Women typically looked for a man of high status (hypergamy), a sign of access to resources. However, men were usually willing to marry down the socioeconomic ladder (hypogamy) if the woman was young, good looking, and possessed domestic skills (proxies of fertility). In the modern world, people tend to desire well-educated and intelligent children; this goal is better achieved by marrying bright people with high incomes, resulting in the intensification of economic assortative mating. Indeed, better educated parents tend to have children who are not only well-educated but also healthy and successful. For this reason, when judging the value of a potential mate, people commonly consider the other person's grasp of grammar (a proxy of socioeconomic status of educational level), teeth quality (indicators of health and age), and self-confidence (psychological stability). Furthermore, the age gap between two partners has also declined. In other words, men and women became more symmetrical in the socioeconomic traits they desire in a mate. Among the aforementioned traits, the correlations in age, race or ethnicity, religion, educational attainment, and intelligence between spouses are the most pronounced, while height is one of the most heritable, with mating partners sharing 89% of the genetic variations affecting the preference for height.
It is not unusual for couples to look alike (as if they were related). Besides assortative mating, some people are unconsciously attracted by their own faces or prefer familiar-looking ones for ease of cognitive processing. People who are emotionally close to their opposite-sex parents may be prone to unknowingly selecting mates bearing resemblance to said parents, who served as role models for what a desirable mate should be like, a phenomenon called sexual imprinting.
Public secondary school is the last time people of various backgrounds are lumped together in the same setting. After that, they begin sorting themselves out by various measures of social screening. Among those marrying late (relative to the time when they left school), socioeconomic status is especially important. In societies where the numbers of highly educated and career-minded women are increasing, the role of socioeconomic status is likely to be even more important in the future. These women generally do not choose to mate with men who are less occupationally and educationally accomplished than they are. For this reason, in societies where they outnumber men, the competition for high-quality males has been intensifying. This trend first emerged in Europe and North America but has been spreading to other places as well.
Positive assortative mating raises the chances of a given trait being passed on to the couple's offspring, strengthens the bond between the parents, and increases genetic similarity between family members, whereupon in-group altruism and inclusive fitness are enhanced. That the two partners are culturally compatible reduces uncertainty in lifestyle choices and ensures social support. In some cases, homogamy can also increase the couple's fertility and the number of offspring surviving till adulthood. On the other hand, there is evolutionary pressure against mating with people too genetically similar to oneself, such as members of the same nuclear family. In addition, children born into parents who are cousins have an increased risk of autosomal recessive genetic disorders, and this risk is higher in populations that are already highly ethnically homogeneous. Children of more distantly related cousins have less risk of these disorders, though still higher than the average population. Therefore, humans tend to maximize the genetic similarity of their mates while avoiding excessive inbreeding or incest. First-cousin marriages nowadays are rare and are in fact prohibited in a number of jurisdictions worldwide. In general, humans seem to prefer mates who are (the equivalent of) second or higher-parity cousins. Genetic analyses suggest that the genomic correlation between spouses is comparable to that between second cousins. In the past, there was indeed some awareness of the dangers of inbreeding, as can be seen in legal prohibitions in some societies, while in the current era, better transportation infrastructure makes it less likely to occur. Moreover, modern transportation has diminished residential propinquity as a factor in assortative mating. But cultural anthropologists have noted that avoidance of inbreeding cannot be the sole basis for the incest taboo because the boundaries of the incest prohibition vary widely between cultures, and not necessarily in ways that maximize the avoidance of inbreeding. A study indicated that between 1800 and 1965 in Iceland, more children and grandchildren were produced from marriages between third or fourth cousins (people with common great-great- or great-great-great-grandparents) than from other degrees of consanguinity.
While human assortative mating is usually positive, in the case of the major histocompatibility complex (MHC) on chromosome 6, humans tend to be more attracted to those who are genetically different in this region, judging from their odors. This promotes MHC heterogeneity in their offspring, making them more resistant to pathogens. Another example of negative assortative mating is among people with traits linked to high testosterone (such as analytical thinking and spatial reasoning) and those traits due to high estrogen (empathy and social skills). They generally find each other appealing.
Assortative mating is partly due to social effects. For instance, religious people are more likely to meet their potential mates in their places of worship while highly educated people typically meet their future spouses in institutions of higher learning. Nevertheless, it can have a quantitatively discernible impact upon the human genome and as such has implications for human evolution even in the presence of population stratification. Pleiotropy, or the phenomenon in which a single gene can influence multiple traits, and assortative mating are responsible for the correlations between some sexually selected traits in humans, such as height and IQ, which are weakly positively correlated. In a knowledge-based economy, educational and socioeconomic assortative mating contributes to the growth in household income inequality, as parents with higher incomes and levels of education tend to invest more in their offspring, giving them an edge later in life.
Dating
People date to assess each other's suitability as a partner in an intimate relationship or as a spouse. Dating rules may vary across different cultures, and some societies may even replace the dating process by a courtship instead.
Double standards and infidelity
Both men and women apply one set of standards for themselves and another for their partners. In particular, what counts as sexual contact is different depending on the person engaging in the act, oneself or one's partner. If the person in question is the one to do it, they are unlikely to consider it infidelity compared to when their partner does it. Nevertheless, women are more likely than men to be judged harshly for their promiscuity, even in the most gender-egalitarian of modern societies like Norway. In fact, women are the most aggressive in shaming other women for being promiscuous.
Flirting
To bond or express sexual interest, people flirt. Social anthropologist Kate Fox posits two main types of flirting: flirting for fun and flirting with intent. Flirting for fun can take place between friends, co-workers, or total strangers who wish to get to know each other. This type of flirting does not seek sexual intercourse or romantic relationship, but increases the bonds between two people.
Flirting with intent plays a role in mate-selection. The person flirting sends out signals of sexual availability to another, and hopes to see the interest returned to encourage continued flirting. Flirting can involve non-verbal signs, such as an exchange of glances, hand-touching, hair-touching, or verbal signs, such as chatting up, flattering comments, and exchange of telephone numbers to enable further contact.
Kissing
While parental kissing was common throughout human history, romantic or sexual kissing was by no means universal. Historical evidence suggests that this practice arose independently in different complex or stratified societies, such as India, Mesopotamia, and Egypt during the Bronze Age, but did not necessarily spread to other places. Kissing is also more common in colder climates. As is the case with other primates, humans kiss to determine mate suitability and enhance reproduction.
Matchmaking
Historically, one of the roles of the family was to select spouses of the opposite sex but from the same race or ethnicity and religion for the children. In many cultural traditions, a date may be arranged by a third party, who may be a family member, acquaintance, or (professional) matchmaker. Such a matchmaker could be a religious leader in a community where religious attendance is common. In some cultures, a marriage may be arranged by the couple's parents or an outside party. In some cultures, such as India, arranged marriages are common while in others, such as the United States, these are deemed unacceptable. From the 2000s onward, internet dating—a new form of matchmaking—has become increasingly popular.
Sex differences
Short-term and long-term mating
Due to differential parental investment, the less investing sex should display more intrasexual competitiveness. This is because they can invest less in each offspring and therefore can reproduce at a higher frequency, which allows them to compete for more mates. Additionally, the higher investing sex should be more choosy in their mate. Since they have a higher minimum parental investment, they carry greater costs with each sexual encounter. These costs lead them to have higher selection standards and therefore they are more choosy.
In humans, females have the higher obligatory biological parental investment. In short-term mating, females are choosier as they have the bigger parental investment. In long-term mating, males and females are equally choosy as they have the same amount of parental investment. Therefore, female and male intrasexual competition and female and male choosiness is equally high in long-term mating but not in short-term mating.
Since males have the lower obligatory parental investment, they should pursue a short-term mating strategy more often than females. Short-term mating is characterized by casual, low-commitment sexual relationships with many partners that do not last a long time. Additionally, males benefit more from short-term mating than females do. Because males generally pursue short-term mating strategies, their reproductive success is more variable than that of females: some males are able to have a very large number of offspring, while many others have few or none. Due to this short-term mating strategy, males have a greater desire for sexual variety, need less time to consent to intercourse, and seek short-term mates more than females.
However, females also pursue short-term mates, though their motivations differ from those of males. Females can benefit from short-term mating in numerous ways. First, it allows for a quick extraction of resources. A woman in a stressful situation may benefit from the protection of a male, and short-term mating is one way to achieve this, as seen in contemporary anthropological studies of asylum seekers.
One prominent hypothesis is that ancestral women selectively engaged in short-term mating with men capable of transmitting genetic benefits to their offspring such as health, disease resistance, or attractiveness (see good genes theory and sexy son hypothesis). Since women cannot inspect men's genes directly, they may have evolved to infer genetic quality from certain observable characteristics (see indicator traits). One prominent candidate for a "good genes" indicator includes absence of fluctuating asymmetry, or the degree to which men have perfect bodily symmetry. Other candidates include masculine facial features, behavioral dominance, and low vocal pitch. Evolutionary psychologists have therefore indicated that women pursuing a short-term mating strategy have higher preferences for these good gene indicators, and men who possess good genes indicators are more successful in pursuing short-term mating strategies than men who do not. Indeed, research indicates that self-perceived physical attractiveness, absence of fluctuating asymmetry, and low vocal pitch are positively related to short-term mating success in men but not in women.
Conversely, long-term mating is marked by serious committed sexual relationships with relatively few partners. While males generally pursue a short-term mating strategy when possible, females typically pursue a long-term mating strategy. Long-term strategies are characterized by extended courtships, high investment, and few sexual partners. While pursuing a long-term strategy, females are able to get resources from males over the course of the relationship. Female mating psychology is generally more focused on finding high quality mates rather than increasing the quantity of their mates, which is reflected in their pursuit of a long-term strategy. Additionally, they also benefit from higher parental investment by males. Women are thought to seek long-term partners with resources (such as shelter and food) that provide aid and support survival of offspring. To achieve this, women are thought to have evolved extended sexuality. The key benefit for males pursuing a long-term strategy is higher parental certainty. However, both sexes pursue both strategies and get benefits from both strategies. Additionally, humans typically do not pursue the extremes of either short or long-term mating strategies.
It is possible that females are more prone to psychological depression than males if they are subject to K-selection. Because women's reproductive decisions carry more risk than men's, postpartum depression could be an evolutionarily adaptive signal to women that they have faced a bad investment opportunity. By the same token, some researchers have hypothesized that postpartum depression is more likely to occur in mothers who are suffering a fitness cost, in order to inform them that they should reduce or withdraw investment in their infants. Moreover, there is some evidence that postpartum depression could function as a bargaining strategy, in which parents who were not receiving adequate support from their partners withdrew their investment in order to elicit additional support. In support of this, Hagen found that postpartum depression in one spouse was related to increased levels of child investment by the other spouse.
Mate value
Mate value corresponds to the likelihood of an individual's future reproductive success. It reflects the individual's ability to produce healthy offspring in the future, based on the individual's age and sex. The mate value of each sex is determined by what the opposite sex desires in a mate, so male mate value is determined by what females desire and vice versa. Over time, individuals with higher mate values had higher reproductive success. The qualities that make up mate value evolved into what is considered physically attractive. Thus individuals with a high mate value are perceived to be more attractive by the opposite sex than those with low mate value. Additionally, individuals with a high mate value are able to be more choosy about their mates and reproduce more often than those with a low mate value. Due to biological differences between the sexes, it is predicted that there are differences in what the sexes desire in a mate, and therefore that male and female mate values differ.
Mate value is perceived through signals and cues. Signals are characteristics that have been selected for because they offer reliable changes in receiver behavior that lead to higher reproductive success for the receiver. Conversely, cues have not been selected for to carry meaning, but instead are byproducts. However, with sexual selection, cues can become signals over time. Costly signals are ones that require intense effort for the signaler to send. Because they require high investment, costly signals are typically honest signals of underlying genetic qualities. However, signals that are not costly enough can be faked and therefore are not associated with the underlying benefits.
Evolutionary psychologists have predicted that men generally place a greater value on youth and physical attractiveness in a mate than do women. Youth is associated with reproductive value in women, because their ability to have offspring decreases dramatically over time compared to men. Therefore, males typically prefer to mate with females who are younger than themselves, except when they are maturing in their teens. The features that men find physically attractive in women are thought to signal health and fertility. Examples of the determinants of female attractiveness include the waist-to-hip ratio and curvaceousness. While this is found across cultures, there are differences with regard to what the ideal waist-to-hip ratio is, ranging from 0.6 in China, South America, and some of Africa to 0.8 in Cameroon and among the Hadza tribe of Tanzania. In the United States, divergent preferences of African- and European-Americans have been noted. There is also evidence of variation across time, even within a single culture or civilization. On the other hand, there is evidence that a mother's waist-hip ratio before pregnancy is correlated with her child's cognitive ability, as hip fat contains long-chain polyunsaturated fatty acids, which are critical for the development of the fetus's brain.
One factor that affects a woman's waist-to-hip ratio is her gynoid fat distribution, where energy is stored for pregnancy and early infant care, including breastfeeding. A female human's waist–hip ratio is at its optimal minimum during times of peak fertility—late adolescence and early adulthood—before increasing later in life.
Additionally, physical attractiveness signals genetic quality for both males and females. Men who preferentially mated with healthy, fertile, and reproductively valuable women would have left more descendants than men who did not. Since men's reproductive value does not decline as steeply with age as women's does, women are not expected to exhibit as strong a preference for youth in a mate.
Male mate value, however, is partly based on the male's ability to acquire resources. This is because one of the costs of pregnancy is a limited ability to gather resources for oneself. Resource acquisition also signals the male's ability to commit to and invest in the female and her offspring. Male resource investment increases the likelihood that the offspring will survive and reproduce. Due to this, females are typically attracted to older males, since they are likely to have a greater ability to provide resources and a higher social status. Evolutionary psychologists have speculated that women are relatively more attracted to ambition and social status in a mate because they associate these characteristics with men's access to resources. Women who preferentially mated with men capable of investing resources in themselves and their offspring, thereby ensuring their offspring's survival, would have left more descendants than women who did not. Male mate value is also determined by physical and social dominance, which are signals of high-quality genes. In addition, women tend to be attracted to men who are taller than they themselves are and who display a high degree of facial symmetry, masculine facial dimorphism, upper body strength, broad shoulders, a relatively narrow waist, and a V-shaped torso.
Body odor, which contains pheromones, is another crucial criterion in assessing the suitability of a mate. In humans, some olfactory receptors are directly connected to the parts of the brain controlling reproductive behavior. Men are able to detect women's sexual arousal by smell, and a woman's smell may increase a man's level of arousal.
Sexual desire
Sexual selection theory states that because of their lower minimum parental investment, men can achieve greater reproductive success by mating with multiple women than women can achieve by mating with multiple men. Evolutionary psychologists therefore argue that ancestral men who possessed a desire for multiple short-term sex partners, to the extent that they were capable of attracting them, would have left more descendants than men without such a desire. Ancestral women, by contrast, would have maximized reproductive success not by mating with as many men as possible, but by selectively mating with those men who were most able and willing to invest resources in their offspring. Gradually, in a bid to compete for resources from potential mates, women evolved to show extended sexuality.
One classic study of college students at Florida State University found that among 96 subjects chosen for attractiveness, approached on campus by opposite-sex confederates and asked if they wanted to "go to bed" with him/her, 75% of the men said yes while 0% of the women said yes. Evidence also indicates that, across cultures, men report a greater openness to casual sex, a larger desired number of sexual partners, and a greater desire to have sex sooner in a relationship. These sex differences have been shown to be reliable across various studies and methodologies. However, there is some controversy as to the scope and interpretation of these sex differences.
Evolutionary research often indicates that men have a strong desire for casual sex, unlike women. Men are often depicted as wanting numerous female sexual partners to maximize reproductive success. Evolutionary mechanisms for short-term mating are evident today. Mate-guarding behaviors and sexual jealousy point to an evolutionary history in which sexual relations with multiple partners became a recurrent adaptive problem, while the willingness of modern-day men to have sex with attractive strangers, and the prevalence of extramarital affairs in similar frequencies cross-culturally, are evidence of an ancestral past in which polygamous mating strategies were adopted.
Flanagan and Cardwell argue that men could not pursue this strategy without willing female partners. Every time a man has a new sexual partner, the woman also has a new sexual partner. It has been proposed, therefore, that casual sex and numerous sexual partners may also confer some benefit to females: they would produce more genetically diverse offspring as a result, which would increase their chances of successfully rearing children to adolescence, or independence.
Error management theory states that psychological processes should be biased to minimize the costs of making incorrect judgments and decisions. Since males generally pursue a short-term mating strategy, the cost of missing an opportunity for sexual intercourse is higher for them than the cost of a rebuffed advance. Therefore, the cost for a male of thinking a female does not desire sexual intercourse when in fact she does is higher than that of perceiving a female wants sexual intercourse when she actually does not. Conversely, since females generally pursue a long-term strategy, the costs of ill-considered sexual intercourse are higher for them than the costs of foregoing it. Therefore, the cost for a female of perceiving that a male wants to invest when he does not is higher than that of perceiving that a male does not want to invest when in fact he does. Due to these cost asymmetries, males and females have developed separate psychological mechanisms whereby males over-perceive female desire for sex and females under-perceive male commitment. However, males accurately perceive female commitment and females accurately perceive male sexual interest.
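The cost asymmetry can be made concrete with a small numeric sketch (the probability and cost figures below are illustrative assumptions, not data from the research described here):

    # Error management: with asymmetric error costs, the biased judgment
    # is the one that minimizes expected cost.
    p_interest = 0.3           # assumed chance the other person is interested
    cost_false_alarm = 1.0     # cost of wrongly inferring interest
    cost_missed_chance = 10.0  # cost of overlooking genuine interest

    # Expected cost of each blanket perception strategy:
    over_perceive = (1 - p_interest) * cost_false_alarm   # always infer interest
    under_perceive = p_interest * cost_missed_chance      # never infer interest

    print(over_perceive, under_perceive)  # 0.7 vs 3.0: over-perceiving is cheaper

With misses costlier than false alarms, the over-perceiving bias has the lower expected cost, which is the direction of bias the theory predicts.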
Mate retention
In addition to acquiring and attracting mates, humans need to retain their mate over a certain period of time. This is especially important in long-term, pair-bonded relationships. It has been hypothesized that feelings of love have evolved to keep humans in their mating relationships. It has been shown that feelings of love motivate individuals to pursue their current partner and stray away from alternatives. Additionally, proclaiming feelings of love increases attachment and commitment to the current partner. Further, when proclaiming or recalling love and commitment, oxytocin, a hormone associated with pair-bonding activities, increases in the bloodstream. This links physiological indicators with mate retention behaviors.
Despite this link, maintaining a pair-bonded relationship can be difficult, especially around alternative mates. When presented with alternative mates with a high mate value, humans tend to view their current relationship less favorably. This occurs when males are presented with physically attractive females, and it occurs for females when they are presented with socially dominant males. However, there are psychological counter-measures to these processes. First, individuals in a committed relationship tend to devalue alternative mate options, thus finding them less attractive. Second, these individuals do not always consider potential alternatives. Instead they pay less attention to alternative mates and therefore do not undergo the devaluation process. These mechanisms tend to happen unconsciously and help the individual maintain their current relationship.
There are several strategies that an individual can use to retain their mate. First, individuals engage in more mate retention behaviors when their mate is of high value; therefore, males with more physically attractive mates and females with mates who have more resources engage in more mate retention behaviors. Additionally, to retain their mates, males undertake resource displays and females enhance their physical appearance. Finally, jealousy helps maintain relationships. Jealousy is associated with the threat of mate loss and helps individuals engage in behaviors to keep their current mate. However, males and females differ in what cues jealousy. Since males face the problem of paternal uncertainty, they become more jealous than females over sexual infidelity. Historically, however, females needed male resources for offspring investment; therefore, females become more jealous over emotional infidelity, as it threatens the devotion of resources to them and their offspring.
Intrasexual competition
For both sexes, high social status and ample access to resources are important for evolutionary success. But each sex has its own strategies for competing against members of the same sex. To safeguard their genetic interests, girls and women tend to form alliances with kin, affines (in-laws), and a few select female friends. Instead of direct competition, females tend to disguise their efforts to outclass their competitors in order to avoid physical harm and violence unless they are already of high status, in which case they can rely on greater protection and greater access to resources. Other strategies include enforcing equality within a social clique in order to minimize competition and excluding other females—that is, potential competitors—from one's social circles.
Individual differences
Sociosexual Orientation Inventory
Just as there are differences between the sexes in mating strategies, there are differences within the sexes, and such within-sex variation is substantial. Individual differences in mating strategies are commonly measured using the Sociosexual Orientation Inventory (SOI), a questionnaire that includes items assessing past sexual behavior, anticipated future sexual behavior, and openness to casual sex. Higher scores on the SOI indicate a sexually unrestricted mating strategy, marked by openness to casual sex and more partners. Conversely, lower scores on the SOI indicate a sexually restricted mating strategy, marked by a focus on higher commitment and fewer partners.
Several studies have found that scores on the SOI are related to mate preferences, with more sexually restricted individuals preferring personal/parenting qualities in a mate (e.g. responsibility and loyalty), and less sexually restricted individuals preferring qualities related to physical attractiveness and social visibility. Other studies have shown that SOI scores are related to personality traits (i.e. extraversion, erotophilia, and low agreeableness), conspicuous consumption in men as a means to attract women, and increased allocation of visual attention to attractive opposite-sex faces.
Short-term vs. long-term mating
Evolutionary psychologists have proposed that individuals adopt conditional mating strategies, adjusting their mating tactics to relevant environmental or internal conditions; this is called strategic pluralism. Strategic pluralism holds that humans do not pursue the same mating strategy all of the time. Different motivations and environmental influences determine the mating strategy that a person will adopt. Long-term and short-term mating behaviors are triggered in the individual by the current strategy being pursued. Therefore, not only are there differences between the sexes in long-term and short-term mating, but there are also differences within the sexes. To the extent that ancestral men were capable of pursuing short-term mating strategies with multiple women, they tended to do so. However, not every male is able to pursue this option. Additionally, even though most women pursue a long-term mating strategy, some pursue a short-term strategy.
Differences among males
When possible, males will typically pursue a short-term mating strategy. The ability to do so depends upon their mate value, so males with a high mate value are more likely to pursue a short-term mating strategy. High mate value males have been shown to have sexual intercourse earlier and more often than low mate value males. Self-esteem and physical attractiveness have been shown to be related to males pursuing a short-term mating strategy. Additionally, males with more testosterone have been shown to pursue more short-term strategies.
However, not all males pursue a short-term mating strategy, for several reasons. First, long-term mating has its own advantages, as discussed above. Second, while males of higher mate value and status have opportunities to pursue short-term mates, low mate value males typically do not have the same opportunities. Since females generally prefer long-term mating strategies, the few who would mate in the short term are already paired with high mate value males. Additionally, the benefits of short-term mating for females are only obtained through high mate value males. Therefore, low status males are more likely to pursue a long-term mating strategy.
Differences among females
While more attractive males tend to pursue a short-term mating strategy, more attractive females tend to pursue a more long-term mating strategy. Additionally, younger females are more likely to pursue a short-term mating strategy, as well as those who are not satisfied with their current partner.
The ovulatory cycle has been shown to influence a female's mating strategy. In the late follicular phase, women are the most fertile in the ovulatory cycle. During this time, there is evidence that females tend to pursue a short-term oriented mating strategy over a long-term one. Additionally, female sexual desires increase as well as their attraction towards more masculine males.
Additionally, female mating strategies can change across the lifetime. In their early thirties, females experience a peak in sexual desire. In turn, this increase influences females to pursue a more long-term or short-term oriented strategy, depending on the mate value of their current partner.
Mating plasticity
Research on the conditional nature of mating strategies has revealed that long-term and short-term mating preferences can be fairly plastic. Following exposure to cues that would have affected mating in the ancestral past, both men and women appear to adjust their mating preferences in ways that would have historically enhanced their fitness. Such cues include the need to care for young, danger from animals and other humans, and resource availability. Additionally, there is evidence that the female sex drive is more plastic than the male sex drive, because females are the selecting sex. Since females typically chose when and with whom to engage in sex, this plasticity of the sex drive could be an effect of female mate choice.
Asexuality
While the general lack of sexual attraction—asexuality—has traditionally been viewed as a problem to be rectified, research since the 2010s has cast doubt upon this view. Research on how asexual individuals forge social relationships, including romantic ones, is ongoing.
Environmental predictors
Culture
Evolutionary psychologists have investigated different strategies and environmental influences across different cultures and confirmed that men tend to report a greater preference for youth and physical attractiveness in a mate than do women. Additionally, women tend to report a greater preference for ambition and social status in a mate than do men. The specific role that culture plays in modulating sex differences in mate preferences is subject to debate. Cultural variations in mate preference can be due to the evolved differences between males and females in a given culture.
Culture also has a link to mating strategies in the form of marriage systems in the society. Specifically, pathogens have been linked to whether a society is more likely to have polygynous or monogamous marriage systems. Cultures with high pathogen stress are more likely to have polygynous marriage systems, especially exogamous polygamy systems. This is helpful for both males and females, as males obtain greater genetic diversity for their offspring and females have access to healthy males, which are typically lacking in high pathogen societies. Conversely, monogamy is often absent from high pathogen environments, but common in low pathogen environments.
Further, since physical attractiveness is thought to signal health and disease resistance, evolutionary psychologists have predicted that, in societies high in pathogen prevalence, people value attractiveness more in a mate. Indeed, research has confirmed that pathogen prevalence is associated with preferences for attractiveness across nations. Women in nations with high pathogen prevalence also show greater preferences for facial masculinity. Researchers have also reasoned that sexual contact with multiple individuals increases the risk of disease transmission, thereby increasing the costs of pursuing a short-term mating strategy. Consistent with this reasoning, higher pathogen prevalence is associated with lower national SOI scores. Finally, several studies have found that experimentally manipulating disease salience has a causal influence on attractiveness preferences and SOI scores in predicted directions.
Sex ratio
The local operational sex ratio has been shown to have an impact on mating strategies. This is defined as the ratio of marriage-age males to marriage-age females, with a high ratio representing more males and a low ratio representing more females in the local area. When there is an imbalance of sexes, the rare sex typically has more choice, while the plentiful sex has to compete more strategically for the rare sex. This leads to the plentiful sex competing in the specific areas that the rare sex finds attractive. Additionally, the plentiful sex will adopt more of the rare sex's mating strategy. In a population with a low sex ratio, females will adopt a more short-term mating strategy and will compete more intensely on things like physical attractiveness. On the other hand, in a society with a high sex ratio, males will adopt a more long-term strategy to attract females. (See going steady.) For example, in the major metropolitan areas of China, females are generally in short supply and as such are more likely to have their preferences fulfilled when seeking a mate, while many men are simply left out of the dating market. On the other hand, on the island of Manhattan and on some Western university campuses, females are in excess and as such compete intensely for male attention, giving rise to hookup culture and short-term mating apps such as Tinder.
In 2005, the evolutionary psychologist David Schmitt conducted a multinational survey of sexual attitudes and behaviors involving 48 countries, the International Sexuality Description Project (ISDP). Schmitt assessed relationships between several societal-level variables and average scores on the SOI. One variable predicted to influence a nation's average SOI score was the operational sex ratio (OSR). This prediction was confirmed: OSR was significantly positively correlated with national SOI scores. Another variable that Schmitt predicted would influence SOI scores was the need for biparental care. In societies where extensive care from both parents is needed to ensure offspring survival, the costs of having sex with an uncommitted partner are much higher. Schmitt found significant negative correlations between several indices of the need for biparental care (e.g. infant mortality, child malnutrition, and low birth-weight infants) and national SOI scores.
Income, education, and individual empowerment
During times of economic distress, women would be highly reluctant to commit to low-status men in long-term relationships, and men would delay marriage, if they ever marry at all, in order to accumulate enough resources to attract attention. Consequently, both marriage and birth rates would drop. In addition, because the number of children a woman can have over her lifetime is much smaller than that of a man, under harsh economic realities women tend to sacrifice their careers in favor of domestic duties in order to safeguard their genetic interests. Traditional gender roles would be reinforced as a result.
Historically, marriage was the best option to gain independence from one's parents, and people generally married early in life after short periods of courtship. This is no longer true in modern society, where people are more independent of their parents and are willing to wait longer to find an ideal mate (a "soulmate"). Consequently, the average age at first marriage has increased, while many individuals are choosing to remain single. Furthermore, in a country where few children are born out of wedlock, such as Japan, those who are uninterested in having children tend not to get married.
Some sex differences in mate preferences may be attenuated by national levels of gender equity and gender empowerment. For example, as women gain more access to resources, their mate preferences change. While in the past women typically needed to get married in order to ensure their own financial security, modern women are more likely to be able to achieve this on their own and as such are in a position to set high standards for potential mates. Finding a mate with access to material resources becomes less of a priority compared to finding someone with domestic skills who can provide emotional support. As sociologist Philip Cohen explains, "It's an advantage to not need to be married, in terms of economics or social pressure. People can improve their career status and happiness on their own terms, and they can set the terms for potential mates in the future." In light of these findings, it has been suggested that both female physical attractiveness and male access to resources can be thought of as "necessities" in a mate, while other qualities, such as humor, can be categorized as "luxuries".
In the modern era, the availability of reliable contraception has severed the tie between sexual intercourse and reproduction. Furthermore, access to the combined oral contraceptive pill has been found to change a woman's taste in men. Women not on the pill tend to prefer men whose major histocompatibility complex (MHC) genes are different from their own, whereas those on the pill tend to find men with similar MHC genes more attractive.
Since the late twentieth century, marriages across the developed world have become unstable. Divorces have become much more common while people increasingly choose to remain single. In addition, as a culture becomes more individualistic, public support for traditional gender roles declines. Marriage becomes increasingly viewed as an option, rather than an obligation. In fact, since the 1960s, marriage has stopped being primarily focused on having and raising children and has instead become focused on the fulfillment of adults. Unmarried women were no longer considered "sick" or "immoral" the way they were in the past. In addition, neither working mothers nor single parenthood (what used to be called illegitimacy) was socially ostracized the way they used to be, at least in the Western world.

But while marriage rates have declined, the prevalence of cohabitation has gone up. Cohabitation may help determine the suitability of a mate before marriage. At the same time, significant numbers deem marriage to be an outdated institution and an overwhelming majority think it is unnecessary for a fulfilling or happy life, though they may remain open to that option. Meanwhile, married men have become noticeably less willing to disrupt the careers of their wives.

Whereas in the past women had typically looked for men of high social status while men had not, by the late twentieth century men also looked for women of high earning potential, resulting in even more pronounced educational and economic assortative mating. More generally, higher rates of university attendance and workforce participation by women affect the marital expectations of both sexes, in that men and women became more symmetrical in what they desired in a mate. The share of marriages in which both spouses were of the same educational level steadily rose. Moreover, it was no longer possible for a couple in which one spouse had no more than a high-school diploma to earn about the national average; on the other hand, couples in which both partners have at least a bachelor's degree could expect to earn significantly above the national average. People thus have a clear economic incentive to seek out a mate with at least as high a level of education in order to maximize their potential income. A societal outcome of this was that as household gender equality improved because women had more choices, income inequality widened. Part of the reason why people increasingly married their socioeconomic and educational peers was technological change. Innovations that became commercially available in the twentieth century, such as the refrigerator or the washing machine, reduced the amount of time people needed to spend on housework, which diminished the importance of domestic skills.
Impact of and on culture
Adolescent behavior
From the neurological perspective, the well-known tendencies of teenagers to be emotional, impulsive, and to take high risks are due to the fact that the limbic system (responsible for emotional thought) develops faster than the prefrontal cortex (responsible for logical reasoning). From the evolutionary viewpoint, this mismatch is adaptive in that it helps young people connect with other people (by being emotional) and learn to negotiate the complexities of life (by taking risks yet being more sensitive to rewards). As a result, teenagers are more prone to feelings of fear, anxiety, and depression than adults. In order to attract potential mates, males are especially prone to take risks and showcase their athleticism, whereas females tend to direct attention to their beauty. Young males (who have the highest reproductive variance) take more risks than any other group in both experiments and observations. By undertaking risky endeavors, males are thought to signal qualities that may be directly related to one's ability to provision and protect one's family, namely physical skill, good judgment, or bravery. Social dominance, confidence, and ambition could help in competition with other males, while social dominance, ambition, and wealth might alleviate the costs of failure. In addition, traits like bravery and physical prowess may also be valued by cooperative partners due to their benefits in group hunting and warfare, thereby increasing the potential audience for risk takers. The tendency of adolescent and young-adult males to engage in risky and aggressive behaviors is known as the 'young male syndrome'. A young man's self-worth is tied to being perceived as a 'real man', and his likelihood of committing or falling victim to a violent crime peaks between his late teens and late twenties. Young females, on the other hand, are under strong peer pressure to be physically attractive, potentially leading to problems with their body image. A teenage girl or young woman's bond with her first sexual partner is often deep. In both sexes, intense adolescent intrasexual competition, amorous infatuations, and sexual experimentation are common.
Psychological research indicates the existence of a "reminiscence bump" between the ages of 10 and 30, a period important in human development, when people receive a substantial amount of feedback on their social status and reproductive desirability. Due to sex differences in mating strategies, it is more difficult for a female to alter the course of her reproductive career than it is for a male. In fact, females not only mature more quickly but also were historically more likely than males to marry and bear their first children before the age of 20. As a consequence, by late adolescence it is, from the perspective of evolution, crucial that a girl find herself a high-quality mate.
Whereas ancestral humans lived in small bands of related people of all ages, modern secondary school students share the same social environment with people of the same age from diverse backgrounds, an evolutionary novelty. In the ancestral past, social competition during adolescence proved crucial to future social and reproductive success, hence the strong desire to be popular. Today, it is possible for people to relocate to a different place or transfer to another school. Still, curiosity about the lives of others for the sake of comparison remains. Teenagers are also quite conformist with regard to their peers, for under ancestral conditions, social ostracism was generally deadly. In 21st-century society, youths who rebel against the dominant culture or figures of authority tend to become more homogeneous with respect to their own subculture, making their behavior run counter to any claims of counterculture. This synchronization occurs even if more than two choices are available, such as multiple styles of beard rather than whether or not to have a beard. Mathematician Jonathan Touboul, who studies how information propagation through society affects human behavior, calls this the hipster effect.
Consumer psychology
According to psychologist Gad Saad, consumer behavior can only be truly understood in light of evolutionary psychology because consumer behavior "is rooted in a shared biological heritage based around four key Darwinian factors: survival, reproduction, kin selection, and reciprocal altruism." Consequently, products that can manipulate or enhance a person's body odor (perfumes and deodorants) and looks (cosmetics and plastic surgeries) are profitable businesses. In Brazil at the end of the twentieth century, for instance, there were more people selling Avon cosmetics than there were members of the armed forces. Similarly, in the United States, more money was spent on cosmetics and plastic surgeries than on education or social services.
One way to signal one's socioeconomic status is conspicuous consumption: purchasing luxurious items which provide little to no utility over less costly versions, thereby prioritizing self-promotion over economic sense. It is a common behavior across social classes and often involves strategic planning to maximize the audience of the display and the strength of the signal. Most signaling explanations of conspicuous consumption predict that the targets of the signal will predominantly be potential mates. Among males, the information signaled is thought to go beyond genetic quality and signal the potential for investment, which can be attractive to those pursuing both long-term and short-term mating strategies. Among females, a suggested benefit of conspicuous consumption in mating contexts is its hypothesized ability to demonstrate the commitment of one's partner and to signal one's mate quality to rivals, both of which may help in intrasexual competition and deter mate poaching. Conspicuous consumption may also be useful for problems outside of acquiring mates. It can involve attempts at attracting other cooperative partners, who stand to gain from the signaler's ability to confer benefits should they form an alliance. As in mating contexts, there may also be benefits to intimidating rivals, thereby decreasing the likelihood of direct competition for resources in the future. Its prevalence across cultures and social classes suggests that humans may be well suited to balancing the costs and benefits of the signal.
The notion that "sex sells" is now commonly accepted and utilized by advertisers. Nevertheless, some cultures (such as France) are more receptive to sex in advertising than others (such as South Korea).
Sensational journalism and gossip
Despite common objections, sensational news stories continue to attract a large audience. A 2003 analysis of 736 stories from 1700 to 2001 by Hank Davis and S. Lyndsay McLeod reveals that these stories could be categorized according to themes with reproductive value, such as cheater detection and treatment of offspring. Davis and McLeod propose that sensational journalism serves the same purpose as gossip. Gossip is the sharing of both positive and negative information about a third person who may or may not be absent from the group, and as such is useful for acquiring potentially valuable information about the social structure, rivals, and allies. It may also be used for the purposes of intrasexual competition, or the denigration of rivals in order to elevate oneself, with men gossiping about access to resources (wealth and achievement) and women about looks and reputations. However, women appear to be more likely to gossip than men and to think of it more positively than men. Furthermore, much gossip concerns social affairs. According to Frank T. McAndrew, the same psychological reasons that underlie more traditional forms of gossip carry over to gossip about "celebrities" in the modern world because, on the evolutionary timescale, the birth of celebrity culture is a recent phenomenon.
Romance novels, fan fiction, and pornography
As defined by the Romance Writers of America, a romance novel features "a central love story and an emotionally satisfying and optimistic ending." Many also carry erotic undertones. Indeed, evolutionary psychologists have gained valuable insights into women's mate choice by studying romance novels popular among women, such as those sold by Harlequin. Popular contemporary female romance novels conform to strategies common among women, for example by avoiding short-term relationships, and as such pertain to their genetic interests. Five of the most common words in such novels are, in order of most to least frequent, 'love', 'bride', 'baby', 'man', and 'marriage' and the most common themes are commitment, reproduction, high-value—i.e. masculine—males, and resources. Romance novels sell rather well, with around 10,000 new titles appearing each year in the U.S. alone.
Fan fiction is the online equivalent of romance novels. During the first two decades of the 21st century, writing and reading fan fiction became a prevalent activity worldwide. Demographic data from various repositories revealed that those who read and wrote fan fiction were overwhelmingly young, in their teens and twenties, and female. For example, an analysis of the site fanfiction.net published in 2019 by data scientists Cecilia Aragon and Katie Davis showed that some 60 billion words of content were added during the previous 20 years by 10 million English-speaking people whose median age was 15½ years. Much of fan fiction concerns the romantic pairing of fictional characters of interest, or 'shipping'. Fan fiction writers base their work on various internationally popular cultural phenomena such as K-pop, Star Trek, Harry Potter, Doctor Who, and My Little Pony, known as 'canon', as well as other things they consider important to their lives, like natural disasters. Socially dominant men—the so-called "alpha males"—are the most popular among women.
Males, by contrast, are generally more interested in pornography because it carries the same cues to female fertility that they look for under mating conditions. Online pornography is now ubiquitous and popularly consumed. In their book A Billion Wicked Thoughts (2011), analyzing search-engine results, cognitive scientists Ogi Ogas and Sai Gaddam wrote, "Men's brains are designed to objectify females. The shapely curves of female ornamentation indicate how many years of healthy childbearing remain across a woman's entire lifetime." By letting her test subjects watch erotic materials of various kinds—straight sex, gay sex, and bonobos—sexologist Meredith Chivers discovered an excellent agreement between the self-reported arousal of men and the amount of blood flow to their genitals. Men were only aroused by videos of straight sex. On the other hand, Chivers found a clear mismatch between the self-reports of women and what her devices measured. While women seemed easily aroused by videos of all three categories, increased blood flow alone was not enough to induce reported arousal. This seems to correspond with the different mating behaviors of men and women.
Music, film, and television
A 2011 study by Dawn R. Hobbs and Gordon G. Gallup of songs dating back over four centuries shows that reproductive messaging has been a common theme among the most popular songs. Hobbs and Gallup observe that their "content analysis of these messages revealed 18 reproductive themes that read like topics taken from an outline for a course on evolutionary psychology." An overwhelming majority (about 92%) of the songs that made it to the Billboard Top 10 in 2009 contained reproductive messages. In fact, "further analyses showed that the bestselling songs in all three charts featured significantly more reproductive messages than those that failed to make it into the Top Ten." Among contemporary English-language songs, country music tends to focus on commitment, parenting, and rejection; pop music on sex appeal, reputation, short-term strategies, and fidelity assurance; and rhythm and blues (R&B) and hip hop on sex appeal, resources, the sex act, and status.
Hobbs and Gallup classified the reproductive messaging of the songs into 18 categories, including genitalia (e.g. "Baby Got Back" (1992) by Sir Mix-A-Lot), courtship displays and long-term mating ("I Wanna Hold Your Hand" (1963) by The Beatles), short-term mating ("LoveGame" (2009) by Lady Gaga), foreplay and arousal ("Sugar, Sugar" (1969) by The Archies), sex act ("Honky Tonk Women" (1969) by the Rolling Stones), sexual prowess ("Sixty Minute Man" (1951) by Billy Ward and the Dominoes), promiscuity, reputation, and derogation ("Roxanne" (1978) by the Police), commitment and fidelity ("Love Story" (2008) by Taylor Swift), access to resources ("For the Love of Money" (1973) by the O'Jays), rejection ("Red Light" (2009) by David Nail), infidelity, cheater detection, and mate poaching ("I Heard It Through the Grapevine" (1966) by Marvin Gaye), and parenting ("It Won't Be Like This For Long" (2008) by Darius Rucker).
Nevertheless, the evolutionary purpose of music, if such exists, remains unclear. Some researchers, like Charles Darwin and Geoffrey Miller, propose that it is a form of courtship that has evolved by means of sexual selection, whereas others, such as Steven Pinker and Gary Marcus, reject it as "auditory cheesecake"—no more than a purely cultural invention that is a by-product of evolved traits such as cognition and language.
A similar pattern is found in popular movies, where themes of survival (fighting epic battles), reproduction (courtship), kin selection (treatment of family members), and altruism (saving a stranger's life) are ubiquitous. Indeed, as in the case with novels or mythology, the number of basic plots is rather small. However, even though the standard assumption in many movies and stories is that people are looking to get married or at least desire a long-term partner, this is not necessarily true in real life.
Online dating
Online dating services have made it much easier for those who would otherwise never meet because their social circles do not intersect (perfect strangers) to find one another and pursue a romantic relationship. They are especially useful for middle-aged individuals, who have fewer options in real life compared to those in their 20s. Compared to heterosexual couples, same-sex couples are much more likely to have met online. Such platforms also offer goldmines of information for social scientists studying human mating behavior. Nevertheless, as of 2017, no new pattern had been identified; on the contrary, scientists have only found the strengthening of gender stereotypes, namely the attention to a prospective mate's socioeconomic status among women, the preference for youth and beauty among men, and deliberate self-misrepresentation among both sexes. No longer do people looking for a mate have to confine themselves to their own backgrounds, though in practice the data still indicate assortative mating.
Concerns that online dating makes people more "superficial" by giving them an incentive to judge one another based on looks are unfounded, since this is how humans normally behave. Moreover, while there are online dating sites geared towards short-term sexual relationships (hookups), others are designed to help those looking for a long-term arrangement, including marriage. Individuals who pursue the latter option are no less successful in finding the right mates.
Politics and religions
In general, the emotion of disgust can be divided into three categories: pathogen disgust, sexual disgust, and moral disgust. Sexual disgust leads to the avoidance of individuals and behaviors that jeopardize one's long-term mating success. Moral disgust is being repelled by socially abnormal behaviors.
Some evolutionary psychologists have argued that mating strategies can influence political attitudes. According to this perspective, different mating strategies are in direct strategic conflict. For instance, the stability of long-term partnerships may be threatened by the availability of short-term sexual opportunities. Therefore, public policy measures that impose costs on casual sex may benefit people pursuing long-term mating strategies by reducing the availability of short-term mating opportunities outside of committed relationships. One public policy measure that imposes costs on people pursuing short-term mating strategies, and may thereby appeal to sexually restricted individuals, is the banning of abortion. In a doctoral dissertation, the psychologist Jason Weeden conducted statistical analyses on public and undergraduate datasets supporting the hypothesis that attitudes towards abortion are more strongly predicted by mating-relevant variables than by variables related to views on the sanctity of life.
Weeden and colleagues have also argued that attitudes towards drug legalization are driven by individual differences in mating strategies. Insofar as sexually restricted individuals associate recreational drug use with promiscuity, they may be motivated to oppose drug legalization. Consistent with this, one study found that the strongest predictor of attitudes towards drug legalization was scores on the SOI. This relationship remained strong even when controlling for personality traits, political orientation, and moral values. By contrast, nonsexual variables typically associated with attitudes towards drug legalization were strongly attenuated or eliminated when controlling for SOI and other sexuality-related measures. These findings were replicated in Belgium, Japan, and the Netherlands. Weeden and colleagues have made similar arguments and have conducted similar analyses in regard to religiosity; that is, religious institutions may function to facilitate high-fertility, monogamous mating and reproductive strategies.
On the other hand, there is evidence that as a society becomes wealthier, more urbanized, and more secular, religion is becoming increasingly irrelevant in the role of matchmaking.
See also
Mate choice in humans
Recent human evolution
Parental investment in humans
Sociosexuality
Online dating service
Alternative mating strategy
References
External links
Victorian mate choice by evolutionary psychologist Geoffrey Miller (8:46). Transcript.
Evolutionary psychology
Mating
Sexology
Sexuality | Human mating strategies | Biology | 13,767 |
311,544 | https://en.wikipedia.org/wiki/Kenneth%20Appel | Kenneth Ira Appel (October 8, 1932 – April 19, 2013) was an American mathematician who in 1976, with colleague Wolfgang Haken at the University of Illinois at Urbana–Champaign, proved the four-color theorem, solving one of the most famous problems in mathematics. They proved that any two-dimensional map, with certain limitations, can be filled in with four colors without any adjacent "countries" sharing the same color.
Biography
Appel was born in Brooklyn, New York, on October 8, 1932. He grew up in Queens, New York, and was the son of a Jewish couple, Irwin Appel and Lillian Sender Appel. He worked as an actuary for a brief time and then served in the U.S. Army for two years at Fort Benning, Georgia, and in Baumholder, Germany. In 1959, he finished his doctoral program at the University of Michigan, and he also married Carole S. Stein in Philadelphia. The couple moved to Princeton, New Jersey, where Appel worked for the Institute for Defense Analyses from 1959 to 1961. His main work at the Institute for Defense Analyses was doing research in cryptography. Toward the end of his life, in 2012, he was elected a Fellow of the American Mathematical Society. He died in Dover, New Hampshire, on April 19, 2013, after being diagnosed with esophageal cancer in October 2012.
Kenneth Appel was also the treasurer of the Strafford County Democratic Committee. He played tennis through his early 50s. He was a lifelong stamp collector, a player of the game of Go and a baker of bread. He and Carole had two sons, Andrew W. Appel, a noted computer scientist, and Peter H. Appel, and a daughter, Laurel F. Appel, who died on March 4, 2013. He was also a member of the Dover school board from 2010 until his death.
Schooling and teaching
Kenneth Appel received his bachelor's degree from Queens College in 1953. After serving in the army, he attended the University of Michigan, where he earned his M.A. in 1956 and his Ph.D. in 1959. Roger Lyndon, his doctoral advisor, was a mathematician whose main mathematical focus was group theory.
After working for the Institute for Defense Analyses, Appel joined the Mathematics Department faculty at the University of Illinois in 1961 as an assistant professor. While there, Appel did research in group theory and computability theory. In 1967 he became an associate professor, and in 1977 he was promoted to professor. It was while he was at this university that he and Wolfgang Haken proved the four color theorem. For their work on and proof of this theorem, they were awarded the Delbert Ray Fulkerson Prize in 1979 by the American Mathematical Society and the Mathematical Programming Society.
While at the University of Illinois, Appel supervised five doctoral students; their work is recorded in the Mathematics Genealogy Project.
In 1993 Appel moved to New Hampshire as Chairman of the Mathematics Department at the University of New Hampshire. In 2003 he retired as professor emeritus. During his retirement he volunteered in mathematics enrichment programs in Dover and in southern Maine public schools. He believed "that students should be afforded the opportunity to study mathematics at the level of their ability, even if it is well above their grade level."
Contributions to mathematics
The four color theorem
Kenneth Appel is known for his work in topology, the branch of mathematics that explores certain properties of geometric figures. His biggest accomplishment was proving the four color theorem in 1976 with Wolfgang Haken. The New York Times wrote in 1976:
Now the four-color conjecture has been proved by two University of Illinois mathematicians, Kenneth Appel and Wolfgang Haken. They had an invaluable tool that earlier mathematicians lacked—modern computers. Their present proof rests in part on 1,200 hours of computer calculation during which about ten billion logical decisions had to be made. The proof of the four-color conjecture is unlikely to be of applied significance. Nevertheless, what has been accomplished is a major intellectual feat. It gives us an important new insight into the nature of two-dimensional space and of the ways in which such space can be broken into discrete portions.
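The content of the theorem is easy to check by machine for any particular small map. The sketch below (an illustration added here; it is a simple brute-force search, not the Appel-Haken method) finds a four-coloring of a map given as an adjacency list of neighboring regions:

    def four_color(adj, colors=4):
        """Backtracking search for a proper coloring of a map/graph."""
        regions = list(adj)
        assignment = {}

        def ok(region, color):
            # A color is usable if no already-colored neighbor has it.
            return all(assignment.get(nbr) != color for nbr in adj[region])

        def solve(i):
            if i == len(regions):
                return True
            for color in range(colors):
                if ok(regions[i], color):
                    assignment[regions[i]] = color
                    if solve(i + 1):
                        return True
                    del assignment[regions[i]]  # backtrack
            return False

        return assignment if solve(0) else None

    # Five regions with their neighbors (a small planar map):
    adj = {'A': {'B', 'C'}, 'B': {'A', 'C', 'D'},
           'C': {'A', 'B', 'D'}, 'D': {'B', 'C', 'E'}, 'E': {'D'}}
    print(four_color(adj))  # e.g. {'A': 0, 'B': 1, 'C': 2, 'D': 0, 'E': 1}

What Appel and Haken established is that such a coloring exists for every planar map; their proof reduced the problem to a large but finite set of configurations, which the computer then checked.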
At first, many mathematicians were unhappy with the fact that Appel and Haken were using computers, since this was new at the time, and even Appel said, "Most mathematicians, even as late as the 1970s, had no real interest in learning about computers. It was almost as if those of us who enjoyed playing with computers were doing something non-mathematical or suspect." The actual proof was described in a book-length article, Every Planar Map is Four Colorable, Contemporary Mathematics, vol. 98, American Mathematical Society, 1989.
The proof has been one of the most controversial of modern mathematics because of its heavy dependence on computer number-crunching to sort through possibilities, which drew criticism from many in the mathematical community for its inelegance: "a good mathematical proof is like a poem—this is a telephone directory!" Appel and Haken agreed in a 1977 interview that it was not "elegant, concise, and completely comprehensible by a human mathematical mind".
Nevertheless, the proof was the start of a change in mathematicians' attitudes toward computers—which they had largely disdained as a tool for engineers rather than for theoreticians—leading to the creation of what is sometimes called experimental mathematics.
Group theory
Kenneth Appel's other publications include an article with P.E. Schupp titled Artin Groups and Infinite Coxeter Groups. In this article Appel and Schupp introduced four theorems that are true about Coxeter groups and then proved them to be true for Artin groups. The proofs of these four theorems used the "results and methods of small cancellation theory."
References
External links
Kenneth I. Appel Biography
Author profile in the database zbMATH
1932 births
University of Michigan alumni
20th-century American mathematicians
21st-century American mathematicians
Graph theorists
University of Illinois Urbana-Champaign faculty
Fellows of the American Mathematical Society
2013 deaths
Mathematicians from New York (state)
Scientists from Brooklyn | Kenneth Appel | Mathematics | 1,254 |
38,776,376 | https://en.wikipedia.org/wiki/2-Furoic%20acid | 2-Furoic acid is an organic compound, consisting of a furan ring and a carboxylic acid side-group. Along with other furans, its name is derived from the Latin word furfur, meaning bran, from which these compounds were first produced. The salts and esters of furoic acids are known as furoates. 2-Furoic acid is most widely encountered in food products as a preservative and a flavouring agent, where it imparts a sweet, earthy flavour.
History
The compound was first described by Carl Wilhelm Scheele in 1780, who obtained it by the dry distillation of mucic acid. For this reason it was initially known as pyromucic acid. This was the first known synthesis of a furan compound, the second being furfural in 1821.
Despite this, it was furfural which came to set naming conventions for later furans.
Preparation and synthesis
2-Furoic acid can be synthesized by the oxidation of either furfuryl alcohol or furfural. This can be achieved either chemically or biocatalytically.
The current industrial route involves the Cannizzaro reaction of furfural in an aqueous NaOH solution. This is a disproportionation reaction and produces a 1:1 ratio of 2-furoic acid and furfuryl alcohol (a 50% yield of each). It remains economical because both products have commercial value. The biocatalytic route involves the microorganism Nocardia corallina. This produces 2-furoic acid in higher yields: 98% from 2-furfuryl alcohol and 88% from 2-furfural, but has yet to be commercialised.
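The overall stoichiometry is that of a standard Cannizzaro disproportionation (equation added here for illustration; the free acid is obtained from the furoate salt on acidification):

\[
2\,\mathrm{C_4H_3O\text{-}CHO} + \mathrm{NaOH} \longrightarrow \mathrm{C_4H_3O\text{-}CO_2Na} + \mathrm{C_4H_3O\text{-}CH_2OH}
\]

where C4H3O- denotes the 2-furyl group: two equivalents of furfural give one of sodium 2-furoate and one of furfuryl alcohol.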
Applications and occurrences
In terms of commercial uses, 2-furoic acid is often used in the production of furoate esters, some of which are drugs and pesticides.
In foods
It is a flavoring ingredient and achieved a generally recognized as safe (GRAS) status in 1995 by the Flavor and Extract Manufacturers Association (FEMA). 2-Furoic acid has a distinct odor described as sweet, oily, herbaceous, and earthy.
2-Furoic acid helps sterilize and pasteurize many foods. It forms in situ from 2-furfural. 2-Furoic acid is also formed during coffee roasting, at concentrations of up to 205 mg/kg.
Optic properties
2-Furoic acid crystals are highly transparent in the 200–2000 nm wavelength region, are stable up to 130 °C, and generally have low absorption in the UV, visible, and IR ranges. In optical and dielectric studies, 2-furoic acid crystals may act as paraelectrics below 318 K and as ferroelectrics above 318 K.
Microbial metabolism
2-Furoic acid can be the sole source of carbon and energy for the organism Pseudomonas putida. The organism aerobically degrades the compound.
Hazards
The LD50 is 100 mg/kg (oral, rats).
References
Further reading
Reagents for organic chemistry
Aromatic acids
2-Furyl compounds | 2-Furoic acid | Chemistry | 648 |
8,770,941 | https://en.wikipedia.org/wiki/Empathy-altruism | Empathy-altruism is a form of altruism based on moral emotions or feelings for others.
Social exchange theory holds that seemingly altruistic behavior occurs when the benefit to the altruist outweighs the cost the altruist bears; such behavior is therefore self-interested. In contrast, C. Daniel Batson holds that people help others in need out of genuine concern for the well-being of the other person. The key ingredient to such helping is empathic concern. According to Batson's empathy-altruism hypothesis, if someone feels empathy towards another person, they will help them, regardless of what they can gain from it. An alternative hypothesis is empathy-joy, which states that a person helps because they find pleasure in seeing another person experience relief. When a person does not feel empathy, the standards of social exchange theory apply.
Evidence
There has been significant debate over whether other-helping behavior is motivated by self- or other-interest. The prime actors in this debate are Daniel Batson, arguing for empathy-altruism, and Robert Cialdini, arguing for self-interest.
Batson recognizes that people sometimes help for selfish reasons. He and his team were interested in finding ways to distinguish between motives. In one experiment, students were asked to listen to tapes from a radio program. One of the interviews was with a woman named Carol, who talked about her bad car accident in which both of her legs were broken, her struggles and how behind she was becoming in class. Students who were listening to this particular interview were given a letter asking the student to share lecture notes and meet with her. The experimenters changed the level of empathy by telling one group to try to focus on how she was feeling (high empathy level) and the other group not to be concerned with that (low empathy level). The experimenters also varied the cost of not helping: the high cost group was told that Carol would be in their psychology class after returning to school and the low cost group believed she would finish the class at home. The results confirmed the empathy-altruism hypothesis: those in the high empathy group were almost equally likely to help her in either circumstance, while the low empathy group helped out of self-interest (seeing her in class every day made them feel guilty if they did not help).
Countering hypotheses
Batson and colleagues set out to show that empathy motivates other-regarding helping behavior not out of self-interest but out of true interest in the well-being of others.
They addressed two hypotheses that counter the empathy-altruism hypothesis:
Empathy-specific reward: Empathy triggers the need for social reward which can be gained by helping.
Empathy-specific punishment: Empathy triggers the fear of social punishment which can be avoided by helping.
See also
Affective neuroscience
C. Sue Carter
Edward O. Wilson
Frans de Waal
Helping behavior
Jean Decety
Moral emotions
Social neuroscience
Stephen Porges
Sympathy
W. D. Hamilton
References
Further reading
Batson, C. D., & Leonard, B. (1987). "Prosocial Motivation: Is it ever Truly Altruistic?" Advances in Experimental Social Psychology (Vol. 20, pp. 65–122): Academic Press.
Decety, J. & Batson, C.D. (2007). "Social neuroscience approaches to interpersonal sensitivity." Social Neuroscience, 2(3-4), 151–157.
Decety, J. & Ickes, W. (Eds.). (2009). The Social Neuroscience of Empathy. Cambridge: MIT Press, Cambridge.
Thompson, E. (2001). "Empathy and consciousness." Journal of Consciousness Studies, 8, 1–32.
Zahn-Waxler, C., & Radke-Yarrow, M. (1990). "The origins of empathic concern." Motivation and Emotion, 14, 107–125.
Altruism
Moral psychology
Empathy | Empathy-altruism | Biology | 808 |
245,978 | https://en.wikipedia.org/wiki/Elimination%20theory | In commutative algebra and algebraic geometry, elimination theory is the classical name for algorithmic approaches to eliminating some variables between polynomials of several variables, in order to solve systems of polynomial equations.
Classical elimination theory culminated with the work of Francis Macaulay on multivariate resultants, as described in the chapter on Elimination theory in the first editions (1930) of Bartel van der Waerden's Moderne Algebra. After that, elimination theory was ignored by most algebraic geometers for almost thirty years, until the introduction of new methods for solving polynomial equations, such as Gröbner bases, which were needed for computer algebra.
History and connection to modern theories
The field of elimination theory was motivated by the need for methods for solving systems of polynomial equations.
One of the first results was Bézout's theorem, which bounds the number of solutions (in the case considered in Bézout's time, that of two polynomials in two variables).
Apart from Bézout's theorem, the general approach was to eliminate variables in order to reduce the problem to a single equation in one variable.
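As a minimal, hedged sketch of elimination in practice (the circle-and-line system is a toy example added here, and the snippet assumes the SymPy library is available):

```python
# Eliminate y between two polynomials by taking their resultant:
# the resultant vanishes exactly where f and g have a common root in y.
from sympy import symbols, resultant, factor

x, y = symbols('x y')
f = x**2 + y**2 - 1          # a circle
g = x - y                    # a line

r = resultant(f, g, y)       # a polynomial in x alone
print(factor(r))             # 2*x**2 - 1 (up to sign)
```

The roots x = ±1/√2 of the eliminated equation are precisely the x-coordinates of the two points where the circle and the line intersect.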
The case of linear equations was completely solved by Gaussian elimination; the older method of Cramer's rule does not proceed by elimination and works only when the number of equations equals the number of variables. In the 19th century, this was extended to linear Diophantine equations and abelian groups with the Hermite normal form and Smith normal form.
Before the 20th century, different types of eliminants were introduced, including resultants and various kinds of discriminants. These eliminants are generally invariant under various changes of variables and are fundamental in invariant theory.
All these concepts are effective, in the sense that their definitions include a method of computation. Around 1890, David Hilbert introduced non-effective methods, and this was seen as a revolution, which led most algebraic geometers of the first half of the 20th century to try to "eliminate elimination". Nevertheless, Hilbert's Nullstellensatz may be considered to belong to elimination theory, as it asserts that a system of polynomial equations does not have any solution if and only if one may eliminate all unknowns to obtain the constant equation 1 = 0.
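As a toy illustration of this criterion (an example added here, not drawn from the article): the system
\[
x + 1 = 0, \qquad x - 1 = 0
\]
has no solution, and eliminating \(x\) by subtracting the second equation from the first yields \(2 = 0\), which, after dividing by 2, is the constant equation \(1 = 0\).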
Elimination theory culminated with the work of Leopold Kronecker, and finally Macaulay, who introduced multivariate resultants and U-resultants, providing complete elimination methods for systems of polynomial equations, which are described in the chapter on Elimination theory in the first editions (1930) of van der Waerden's Moderne Algebra.
Later, elimination theory was considered old-fashioned and removed from subsequent editions of Moderne Algebra. It was generally ignored until the introduction of computers, and more specifically of computer algebra, which again made relevant the design of efficient elimination algorithms, rather than merely existence and structural results. The main methods for this renewal of elimination theory are Gröbner bases and cylindrical algebraic decomposition, introduced around 1970.
Connection to logic
There is also a logical facet to elimination theory, as seen in the Boolean satisfiability problem. In the worst case, it is presumably computationally hard to eliminate variables. Quantifier elimination is a term used in mathematical logic to express that, in some theories, every formula is equivalent to a formula without quantifiers. This is the case for the theory of polynomials over an algebraically closed field, where elimination theory may be viewed as the theory of methods for making quantifier elimination algorithmically effective. Quantifier elimination over the reals is another example, which is fundamental in computational algebraic geometry.
See also
Buchberger's algorithm
Faugère's F4 and F5 algorithms
Resultant
Triangular decomposition
Main theorem of elimination theory
References
Israel Gelfand, Mikhail Kapranov, Andrey Zelevinsky, Discriminants, resultants, and multidimensional determinants. Mathematics: Theory & Applications. Birkhäuser Boston, Inc., Boston, MA, 1994. x+523 pp.
David Cox, John Little, Donal O'Shea, Using Algebraic Geometry. Revised second edition. Graduate Texts in Mathematics, vol. 185. Springer-Verlag, 2005, xii+558 pp.,
Algebraic geometry
Computer algebra | Elimination theory | Mathematics,Technology | 867 |
78,630,019 | https://en.wikipedia.org/wiki/C22H30N2O |
The molecular formula C22H30N2O (molar mass: 338.50 g/mol) may refer to:
Secofentanyl
SR-16435
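As a quick, hedged check of the stated molar mass (standard atomic weights are assumed; the last digit may differ slightly with other weight tables):

```python
# Recompute the molar mass of C22H30N2O from standard atomic weights.
weights = {'C': 12.011, 'H': 1.008, 'N': 14.007, 'O': 15.999}  # g/mol
counts = {'C': 22, 'H': 30, 'N': 2, 'O': 1}

molar_mass = sum(weights[el] * n for el, n in counts.items())
print(f"{molar_mass:.1f} g/mol")   # -> 338.5 g/mol, consistent with 338.50 above
```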
Molecular formulas | C22H30N2O | Physics,Chemistry | 55 |
35,626,853 | https://en.wikipedia.org/wiki/Bertrand%E2%80%93Edgeworth%20model | In microeconomics, the Bertrand–Edgeworth model of price-setting oligopoly looks at what happens when there is a homogeneous product (i.e. consumers want to buy from the cheapest seller) where there is a limit to the output of firms which are willing and able to sell at a particular price. This differs from the Bertrand competition model where it is assumed that firms are willing and able to meet all demand. The limit to output can be considered as a physical capacity constraint which is the same at all prices (as in Edgeworth's work), or to vary with price under other assumptions.
History
Joseph Louis François Bertrand (1822–1900) developed the model of Bertrand competition in oligopoly. This approach was based on the assumption that there are at least two firms producing a homogeneous product with constant marginal cost (this could be constant at some positive value, or zero, as in Cournot). Consumers buy from the cheapest seller. The Bertrand–Nash equilibrium of this model is to have all (or at least two) firms setting the price equal to marginal cost. The argument is simple: if one firm sets a price above marginal cost, then another firm can undercut it by a small amount (often called epsilon undercutting, where epsilon represents an arbitrarily small amount), so in equilibrium price equals marginal cost and profits are zero (this is sometimes called the Bertrand paradox).
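A minimal sketch of this undercutting dynamic (the marginal cost, starting prices, and fixed undercut step are illustrative assumptions, not part of the formal model, which takes epsilon arbitrarily small):

```python
# Two firms with a common marginal cost c repeatedly undercut each other
# by a small step epsilon until price is driven down to marginal cost.
c, epsilon = 10.0, 0.01      # marginal cost and undercut step (assumed)
prices = [15.0, 14.0]        # arbitrary starting prices

while min(prices) - epsilon > c:
    high = prices.index(max(prices))        # the dearer firm sells nothing,
    prices[high] = min(prices) - epsilon    # so it undercuts its rival

print(prices)   # both prices end within epsilon of c, so profits are ~0
```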
The Bertrand approach assumes that firms are willing and able to supply all demand: there is no limit to the amount that they can produce or sell. Francis Ysidro Edgeworth considered the case where there is a limit to what firms can sell (a capacity constraint): he showed that if there is a fixed limit to what firms can sell, then there may exist no pure-strategy Nash equilibrium (this is sometimes called the Edgeworth paradox).
Martin Shubik developed the Bertrand–Edgeworth model to allow for the firm to be willing to supply only up to its profit maximizing output at the price which it set (under profit maximization this occurs when marginal cost equals price). He considered the case of strictly convex costs, where marginal cost is increasing in output. Shubik showed that if a Nash equilibrium exists, it must be the perfectly competitive price (where demand equals supply, and all firms set price equal to marginal cost). However, this can only happen if market demand is infinitely elastic (horizontal) at the competitive price. In general, as in the Edgeworth paradox, no pure-strategy Nash equilibrium will exist. Huw Dixon showed that in general a mixed strategy Nash equilibrium will exist when there are convex costs. Dixon’s proof used the Existence Theorem of Partha Dasgupta and Eric Maskin. Under Dixon's assumption of (weakly) convex costs, marginal cost will be non-decreasing. This is consistent with a cost function where marginal cost is flat for a range of outputs, marginal cost is smoothly increasing, or indeed where there is a kink in total cost so that marginal cost makes a discontinuous jump upwards.
Later developments and related models
There have been several responses to the non-existence of pure-strategy equilibrium identified by Francis Ysidro Edgeworth and Martin Shubik. Whilst the existence of mixed-strategy equilibrium was demonstrated by Huw Dixon, it has not proven easy to characterize what the equilibrium actually looks like. However, Allen and Hellwig were able to show that in a large market with many firms, the average price set would tend to the competitive price.
It has been argued that non-pure strategies are not plausible in the context of the Bertrand–Edgeworth model. Alternative approaches have included:
Firms choose the quantity they are willing to sell up to at each price. This is a game in which price and quantity are chosen: Allen and Hellwig, and in a more general case Huw Dixon, showed that the perfectly competitive price is the unique pure-strategy equilibrium.
Firms have to meet all demand at the price they set as proposed by Krishnendu Ghosh Dastidar or pay some cost for turning away customers. Whilst this can ensure the existence of a pure-strategy Nash equilibrium, it comes at the cost of generating multiple equilibria. However, as shown by Huw Dixon, if the cost of turning customers away is sufficiently small, then any pure-strategy equilibria that exist will be close to the competitive equilibrium.
Introducing product differentiation, as proposed by Jean-Pascal Benassy. This is more of a synthesis of monopolistic competition with the Bertrand–Edgeworth model, but Benassy showed that if the elasticity of demand for the firm's output is sufficiently high, then any pure-strategy equilibrium that existed would be close to the competitive outcome.
"Integer pricing" as explored by Huw Dixon. Rather than treat price as a continuous variable, it is treated as a discrete variable. This means that firms cannot undercut each other by an arbitrarily small amount, one of the necessary ingredients giving rise to the non-existence of a pure strategy equilibrium. This can give rise to multiple pure-strategy equilibria, some of which may be distant from the competitive equilibrium price. More recently, Prabal Roy Chowdhury has combined the notion of discrete pricing with the idea that firms choose prices and the quantities they want to sell at that price as in Allen–Hellwig.
Epsilon equilibrium in the pure-strategy game. In an epsilon equilibrium, each firm is within epsilon of its optimal price. If the epsilon is small, this might be seen as a plausible equilibrium, due perhaps to menu costs or bounded rationality. For a given epsilon, if there are enough firms, then an epsilon equilibrium exists (this result depends on how one models the residual demand – the demand faced by higher-priced firms given the sales of the lower-priced firms).
References
Resources
Edgeworth and modern oligopoly, Theory Xavier Vives
The Pure Theory of Monopoly, Francis Edgeworth
Economics models
Competition (economics)
Game theory
Oligopoly | Bertrand–Edgeworth model | Mathematics | 1,236 |
51,118,020 | https://en.wikipedia.org/wiki/Samyung%20ENC | Samyung ENC is a South Korean manufacturer of marine communication and navigation systems. The company is publicly listed and traded on the KOSDAQ.
Market share
Samyung ENC is a leading company in a highly fragmented marine electronics industry whose other participants include Raymarine, Humminbird, Lowrance, Simrad, B&G, Magellan, Murphy, Naviop, Northstar, Sitex, TwoNav, Furuno, and Geonav.
Products
The product line includes high-frequency radios, GPS floaters, and very high frequency (VHF) transmitter-receivers.
References
Manufacturing companies established in 1978
Engineering companies of South Korea
Manufacturing companies based in Busan
Navigation system companies
Marine electronics
South Korean brands
South Korean companies established in 1978
Companies listed on the Korea Exchange | Samyung ENC | Engineering | 162 |
73,725,722 | https://en.wikipedia.org/wiki/Delmonas | Delmonas (or dalmonas; plural delmonai) is an item of women's folk costume originating in Lithuania Minor: an elaborately decorated purse, visibly attached to the waist by a band. This costume feature represents the Prussian Lithuanians and is not common to other ethnographic regions of Lithuania. In 2019 the delmonai of Lithuania Minor were inscribed into the Intangible Cultural Heritage Inventory of Lithuania as a form of folk art, traditional craftsmanship or agricultural activities.
Traditional appearance
Delmonas was usually sewn from velvet, wool, silk or cotton. Traditionally the fabric was dark in color, and the delmonas was embroidered with colorful threads and glass beads. The embroidery usually depicted flowers or other plants, birds, and sometimes the initials of the owner. Meaningful quotes, dates, initials, or occasionally figurines could also be embroidered on the pocket, which could be rectangular, rounded, flared, wavy-bottomed, trapezoidal, multiangular, or of another shape. Delmonai were usually embroidered by the women of Lithuania Minor themselves; however, they could also be ordered from a tailor.
Usage and history
Worn from the end of the 18th century to the beginning of the 20th century – when the region was part of the Kingdom of Prussia – delmonai served a practical function. Women could store small items in them while doing house chores or use them as a pocket when going outside. This practical feature was not hidden at first – the delmonas was the most decorated item of the costume. However, during periods of political turmoil, women of Lithuania Minor started to wear delmonai secretly. They would be hidden under the skirt and used to conceal money, documents or letters, especially by women who were willing or forced to leave their homeland. Many such women passed their delmonai on to younger generations, who later donated them to various museums in Lithuania. Women of the city of Klaipėda reportedly continued to wear delmonai up to the 1940s. The popularity of delmonai was restored during the folk costume revival of the 1960s and 1970s. Used as a sign of Lithuania Minor identity, delmonai are also worn as a modern accessory, continuing uninterrupted production in the Klaipėda region.
Gallery
Sources
Lithuanian folk art
National symbols of Lithuania
Intangible Cultural Heritage of Humanity
Textile arts
Parts of clothing
Lithuanian clothing | Delmonas | Technology | 479 |
23,519,569 | https://en.wikipedia.org/wiki/Duivenbode%27s%20six-wired%20bird-of-paradise | Duivenbode's six-wired bird-of-paradise, also known as Duivenbode's six-plumed bird-of-paradise, is a bird in the family Paradisaeidae that is an intergeneric hybrid between a western parotia and greater lophorina. The common name commemorates Maarten Dirk van Renesse van Duivenbode (1804–1878), Dutch trader of naturalia on Ternate.
History
Two adult male specimens are known of this hybrid, coming from the Geelvink Bay region of north-western New Guinea, and held in the American Museum of Natural History and the French Natural History Museum.
Notes
References
Hybrid birds of paradise
Birds of Western New Guinea
Intergeneric hybrids | Duivenbode's six-wired bird-of-paradise | Biology | 155 |
11,396,581 | https://en.wikipedia.org/wiki/Nodoid | In differential geometry, a nodoid is a surface of revolution with constant nonzero mean curvature obtained by rolling a hyperbola along a fixed line, tracing the focus, and revolving the resulting nodary curve around the line.
References
External links
Wolfram Demonstrations: Delaunay Nodoids
Surfaces | Nodoid | Mathematics | 60 |
29,389,753 | https://en.wikipedia.org/wiki/UDF%202457 | UDF 2457 is the Hubble Ultra Deep Field (UDF) identifier for a red dwarf star calculated to be about from Earth with a very dim apparent magnitude of 25.
The Milky Way galaxy is about 100,000 light-years in diameter, and the Sun is about 25,000 light-years from the Galactic Center. The small common star UDF 2457 may be one of the farthest known stars inside the main body of the Milky Way. Globular clusters (such as Messier 54 and NGC 2419) and stellar streams are located farther out in the galactic halo.
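As a hedged illustration of how such a distance can be estimated from brightness (the absolute magnitude below is an assumed, typical red-dwarf value, not a measured one for UDF 2457):

```python
# Distance modulus: m - M = 5 * log10(d / 10 pc), solved for d.
m = 25.0    # apparent magnitude stated above
M = 10.0    # ASSUMED absolute magnitude for a generic mid-M red dwarf

d_parsecs = 10 ** ((m - M + 5) / 5)
print(f"{d_parsecs * 3.2616:,.0f} light-years")  # ~33,000 ly under this assumption
```

Changing the assumed absolute magnitude by one magnitude shifts the estimate by a factor of about 1.6, which is why such photometric distances carry large uncertainties.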
See also
UDFj-39546284 – farthest galaxy seen by the Hubble Ultra-Deep Field
References
Fornax
M-type main-sequence stars
Hubble Space Telescope
02457 | UDF 2457 | Astronomy | 165 |
6,815,986 | https://en.wikipedia.org/wiki/Trenbolone%20acetate | Trenbolone acetate, sold under brand names such as Finajet and Finaplix among others, is an androgen and anabolic steroid (AAS) medication used in veterinary medicine, specifically to increase the profitability of livestock by promoting muscle growth in cattle. It is given by injection into muscle.
Side effects of trenbolone acetate include symptoms of masculinization like acne, increased body hair growth, scalp hair loss, voice changes, and increased sexual desire. The drug is a synthetic androgen and anabolic steroid and hence is an agonist of the androgen receptor (AR), the biological target of androgens like testosterone and dihydrotestosterone (DHT). It has strong anabolic effects and highly androgenic effects, as well as potent progestogenic effects, and weak glucocorticoid effects. Trenbolone acetate is an androgen ester and a short-lasting prodrug of trenbolone in the body.
Trenbolone acetate was discovered in 1963 and was introduced for veterinary use in the early 1970s. In addition to its veterinary use, trenbolone acetate is used to improve physique and performance, for which purpose it is purchased from black market suppliers. The drug is a controlled substance in many countries and so non-veterinary use is generally illicit.
Uses
Veterinary uses
In the livestock industry, trenbolone acetate is more often called Finaplix. It was developed to exert androgenic effects and promote muscle gain in cattle. These properties allow livestock to grow as much muscle as possible before they are transported to a slaughterhouse.
Methyl cellulose and yellow dye are usually present in pellets given to livestock. A single dosage generally consists of ten pellets, and a package of Finaplix usually consists of one cartridge containing one hundred pellets. The medication is administered by subcutaneous injection into the posterior ear using an implanter gun. Finaplix is consistently implanted until the animal is ready to be slaughtered. There is no withholding period. Due to the common practice of trenbolone acetate use in veterinary medicine, it is quite common to find traces of trenbolone metabolites in cattle worldwide.
Non-medical uses
Bodybuilding
Trenbolone acetate has never been approved for use in humans, and therefore guidelines for human consumption do not exist. However, athletes and bodybuilders have been using trenbolone acetate as a physique- and performance-enhancing drug for decades. Some argue there are many benefits for bodybuilders using trenbolone acetate as an AAS. Unlike exogenous testosterone, trenbolone acetate does not cause fluid retention, so bodybuilders appear leaner; it is therefore more commonly used during preparation for competitive events. Trenbolone acetate does not convert into an estrogenic metabolite, so there are no estrogenic side effects. Trenbolone enanthate is a commonly used AAS that lasts much longer than trenbolone acetate after intramuscular injection.
Medical uses
Trenbolone acetate was never approved for use in humans and hence has no medical uses. However, as an AAS, it would be expected to be effective for treating indications in which other AAS are useful, such as androgen deficiency, wasting syndromes, muscle atrophy, and certain types of anemia.
Trenbolone hexahydrobenzylcarbonate was previously produced for human use by Negma Pharmaceuticals of France in 1.5 ml ampoules containing 76.2 mg of the steroid.
Side effects
Trenbolone acetate, like any other AAS, has many side effects. Its strong androgenic properties stimulate virilization, making it unsuitable for women pursuing physique or performance enhancement. The side effects of trenbolone acetate are similar to other AAS; however, the negative side effects specific to trenbolone acetate are as follows:
Androgenic
Trenbolone acetate has androgenic activity. Common side effects include oily skin, acne, seborrhea, increased facial or body hair growth, and accelerated scalp hair loss. Severity of these side effects varies based on an individual's genetics. Men susceptible to hair loss have a higher chance of becoming permanently bald. In women, voice deepening, hirsutism, clitoral enlargement, and general virilization may occur.
Hypogonadism
Trenbolone acetate contributes greatly to development of muscle mass and feed efficiency; however, administration of any AAS suppresses natural testosterone production and therefore has the potential to cause hypogonadism.
Cardiovascular
Administration of any AAS can lead to cardiovascular issues. Trenbolone acetate can have a strongly negative impact on cholesterol levels by suppressing high-density lipoprotein (HDL) cholesterol production and stimulating low-density lipoprotein (LDL) cholesterol production. When compared to oral AAS, trenbolone acetate exerts a stronger negative effect on cholesterol levels.
"Tren cough"
The exact mechanisms underlying "tren cough" are not known; however, trenbolone acetate's androgenic effect activates a variety of lipid-like active compounds called prostaglandins, many of which are inflammatory and vasoconstrictive. Prostaglandins act on two signaling pathways: cyclooxygenase (COX) (also known as prostaglandin-endoperoxide synthase) and lipoxygenase (LOX) (EC 1.13.11.34, EC 1.13.11.33, etc.). The bradykinin peptide is well known to promote a cough reaction associated with ACE inhibitors prescribed for hypertension.
Estrogenic and progestogenic
Trenbolone is not estrogenic; therefore, use does not lead to excess fluid retention. However, due to trenbolone's potent progestogenic activity (it binds with high affinity to the progesterone receptor), gynecomastia, characterized by development and swelling of breast tissue, may occur.
Pharmacology
Pharmacodynamics
Trenbolone acetate is a prodrug of trenbolone. Like other AAS, trenbolone is an agonist of the androgen receptor (AR) and hence has anabolic and androgenic activity as well as antigonadotropic activity. The potency of trenbolone is not precisely known, although it is often claimed, without firm evidence, to be five times as high as that of testosterone. This figure is based on a book by William Llewellyn but has not been definitively proven. Trenbolone was never approved for human use, and therefore only limited data on the subject exist. The relevant research was mostly done in rats, which makes the 500/100 potency figure unreliable, as rats respond differently to androgens and are less sensitive to them. While some of the literature reports a 5-fold higher potency, two other scientific reviews report a 3-fold higher potency, leaving it unclear how large the relative potency actually is. Trenbolone is an agonist of the progesterone receptor (PR) and, in relation to this, has moderate to strong progestogenic activity. Conversely, trenbolone acetate is not a substrate for aromatase and hence lacks estrogenic activity. The compound also has weak glucocorticoid activity.
Similar to many other AAS, trenbolone acetate can increase production of insulin-like growth factor-1 (IGF-1). This naturally produced protein-based hormone affects every cell in the body of an organism and plays a large role in muscle recovery and rejuvenation. Greater muscle growth and cell division are facilitated by trenbolone acetate administration than by many other AAS. IGF-1 plays a significant role in the functions and properties of the central nervous system, pulmonary system, muscle tissue, ligaments, cartilage, and tendons.
Trenbolone acetate also has the ability to increase red blood cell count. With a larger number of red blood cells, blood oxygenation is enhanced. This allows for greater muscular endurance and therefore promotes a faster rate of recovery. Trenbolone acetate is capable of inhibiting glucocorticoids such as cortisol. Glucocorticoids act in opposition to androgens, promoting muscle tissue depletion and fat gain; administration of trenbolone acetate aims at decreasing the production of glucocorticoid hormones. Trenbolone acetate's contribution to feed efficiency, also known as nutrient efficiency, is what makes it an attractive AAS for agricultural purposes. Food is among the most anabolic substances any living organism can consume, and with the administration of trenbolone acetate every nutrient becomes more valuable, as the exposed body makes better use of the nutrients already consumed.
Pharmacokinetics
The acetate ester of trenbolone acetate allows for slow release post injection. This ester gives trenbolone an activated elimination half-life of about 3 days.
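A brief numerical illustration of what a roughly 3-day elimination half-life implies, assuming simple first-order kinetics (the starting amount is arbitrary):

```python
# First-order elimination: A(t) = A0 * exp(-ln(2) * t / t_half).
import math

t_half = 3.0    # days, the activated half-life stated above
A0 = 100.0      # arbitrary starting amount (%)

for day in (0, 3, 6, 9, 12):
    remaining = A0 * math.exp(-math.log(2) * day / t_half)
    print(f"day {day:2d}: {remaining:6.2f}% remaining")
# the amount halves every 3 days: 100.00, 50.00, 25.00, 12.50, 6.25
```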
Chemistry
Trenbolone acetate, or trenbolone 17β-acetate, is a synthetic estrane steroid and a derivative of nandrolone (19-nortestosterone). It is the C17β acetate ester of trenbolone, which itself is δ9,11-19-nortestosterone (δ9,11-19-NT) or estra-4,9,11-trien-17β-ol-3-one. Other trenbolone esters include trenbolone enanthate, trenbolone hexahydrobenzylcarbonate, and trenbolone undecanoate.
Structure–activity relationships
Trenbolone acetate is a modified form of nandrolone. The structure of trenbolone acetate is a 19-nor classification, which represents a structural change of the testosterone hormone. Trenbolone acetate lacks a carbon atom at the 19 position and carries a double bond at carbons 9 and 11. The position of these carbons slows its metabolism, which greatly increases its binding affinity to the AR, and inhibits it from undergoing aromatization into the corresponding estrogenic metabolite. Trenbolone acetate contains trenbolone modified with the addition of a carboxylic acid ester (acetic acid) at the 17β-hydroxyl group. This facilitates the slow release of the AAS from the area of injection.
History
Trenbolone acetate was first synthesized in 1963 and approved as a growth promoter for beef cattle in the livestock industry in the early 1970s. During this early period of its administration, trenbolone acetate was sold under the names Finajet and Finaject. The original manufacturer discontinued these injectable products during the late 1980s and instead produced subcutaneous pellets called Finaplix. These pellets aimed to increase muscle mass and lean tissue of cattle prior to slaughter, to increase the profitability of livestock when measured in total pounds of meat sold.
The drug appears to have been an early development project of Roussel Uclaf, a French pharmaceutical company, and by the early 1970s it was being sold as an injectable. There are a number of trenbolone esters, but trenbolone acetate is the only one known to be produced by veterinary AAS manufacturers.
Trenbolone acetate became popular among bodybuilders and athletes during the early 1980s. During this period, the AAS was transported illegally from Europe in large quantities. Although trenbolone acetate was very popular for a short time, large-scale supplies were discontinued in 1987. This decision was based upon public concern about sports doping and its negative effects on athletes.
Society and culture
Generic names
Trenbolone acetate is the generic name of the drug.
Brand names
Trenbolone acetate is or has been sold alone for veterinary use under the brand names Component TH, Component TS, Finaject, Finajet, Finaplix-H, and Finaplix-S. It is or has also been sold in combination with estradiol or estradiol benzoate for veterinary use under the brand names Revalor and Synovex.
Distribution and regulation
Trenbolone acetate, specifically referred to as Finaplix in the livestock industry, is available to purchase in veterinary drug markets. It generally comes in the form of implant pellets containing 20 mg of trenbolone acetate each. Preparations containing trenbolone acetate remain rare since its decline in production after the 1980s. Using AAS for any other purpose, or without a doctor's prescription, is illegal in most countries. Major sporting and bodybuilding organizations ban the use of controlled AAS, and the possession or sale of the drugs can lead to arrest and conviction for drug trafficking in many countries, including the United States and Australia. However, in the United Kingdom, owning AAS for personal use as a bodybuilding supplement is not illegal, but selling AAS without a valid medical license or reason is still against the law.
Doping in sports
Regardless of their legality, AAS are banned by most sporting leagues, which routinely conduct drug tests to detect AAS use. There are known cases of doping in sports with trenbolone acetate by professional athletes.
References
Further reading
External links
Acetate esters
Androgen esters
Anabolic–androgenic steroids
Estranes
Glucocorticoids
Ketones
Sex hormone esters and conjugates
Progestogens
Veterinary drugs | Trenbolone acetate | Chemistry | 2,984 |
70,074,257 | https://en.wikipedia.org/wiki/Staci%20Simonich | Staci Simonich is an American environmental scientist who is a professor and dean for the College of Agricultural Sciences at Oregon State University. Her research considers how chemicals move through the environment. She was appointed Fellow of the American Association for the Advancement of Science in 2021.
Family
Simonich has two children: Noah, a sophomore at Oregon State University, and Grace, a senior at Crescent Valley High School. Grace was adopted from South Korea and has been an outstanding student throughout high school.
Early life and education
Simonich grew up in Green Bay, Wisconsin. Her father worked in a paper mill. Her house was near the Fox River, which suffered from issues with pollution. These experiences inspired Simonich to work on environmental issues. Simonich was the first in her family to attend college. She studied chemistry at the University of Wisconsin–Green Bay. As part of her undergraduate research, she studied polychlorinated biphenyls in Green Bay. After graduating she moved to Indiana University Bloomington, where she studied the role of vegetation in removing organic pollutants from the atmosphere. During her doctoral research she studied polycyclic aromatic hydrocarbons in the atmosphere. Her research combines lab-based studies with field experiments and computational modelling. Simonich earned a Master of Business Administration at Oregon State University in 2020.
Research and career
Simonich joined Procter & Gamble, where she spent six years working on consumer food products. She investigated the environmental impacts of P&G ingredients.
Simonich joined Oregon State University in 2001 and continued her work on polycyclic aromatic hydrocarbons (PAHs). Elevated levels of combustion mean that emissions of PAHs are high in Asia. Simonich collected PAHs before, during and after the 2008 Summer Olympics and analyzed them for various forms of hydrocarbons. She established a series of remote sites across the Pacific Northwest to monitor atmospheric transport of the PAHs from Beijing to North America. She has shown that PAHs persist over long distances, that they react with other chemicals, and that they make use of various transport pathways.
Simonich has studied several different types of PAH and monitored their environmental impact. She is particularly interested in environmental remediation and ways to remove PAHs from soil. Unfortunately, some forms of bioremediation can lead to breakdown products that are more toxic than the original compounds.
Simonich was made Executive Associate Dean in 2020.
Awards and honors
2003 National Science Foundation CAREER Award
2011 Scientific and Technological Achievement Award Level III for Innovative Design, Implementation and Synthesis Assessing Impact of Airborne Contaminants on Western National Parks, United States Environmental Protection Agency
2013 Oregon State University Impact Award for Outstanding Scholarship
2013 Super Reviewer Award, Environmental Science & Technology (Journal)
2015 Oregon State University Excellence in Graduate Mentoring Award
2015 James and Mildred Oldfield/E.R. Jackman Team Award, in recognition of Oregon State University Superfund Research Program
2021 Elected Fellow of the American Association for the Advancement of Science
Selected publications
References
Year of birth missing (living people)
Living people
21st-century American scientists
21st-century American women scientists
Indiana University Bloomington alumni
University of Wisconsin–Green Bay alumni
Oregon State University alumni
Oregon State University faculty
Procter & Gamble people
People from Green Bay, Wisconsin
Environmental scientists
American women scientists | Staci Simonich | Environmental_science | 654 |
7,330,456 | https://en.wikipedia.org/wiki/Argonaute | The Argonaute protein family, first discovered for its evolutionarily conserved stem cell function, plays a central role in RNA silencing processes as essential components of the RNA-induced silencing complex (RISC). RISC is responsible for the gene silencing phenomenon known as RNA interference (RNAi). Argonaute proteins bind different classes of small non-coding RNAs, including microRNAs (miRNAs), small interfering RNAs (siRNAs) and Piwi-interacting RNAs (piRNAs). Small RNAs guide Argonaute proteins to their specific targets through sequence complementarity (base pairing), which then leads to mRNA cleavage, translation inhibition, and/or the initiation of mRNA decay.
The name of this protein family is derived from a mutant phenotype resulting from mutation of AGO1 in Arabidopsis thaliana, which was likened by Bohmert et al. to the appearance of the pelagic octopus Argonauta argo.
{{Infobox protein family
| Symbol = Piwi
| Name = Argonaute Piwi domain
| image = 1u04-argonaute.png
| caption = An argonaute protein from Pyrococcus furiosus. PDB . PIWI domain is on the right, PAZ domain to the left.
| Pfam = PF02171
| InterPro = IPR003165
| PROSITE = PS50822
| CDD = cd02826
}}
RNA interference
RNA interference (RNAi) is a biological process in which RNA molecules inhibit gene expression, via either destruction of specific mRNA molecules or suppression of translation. RNAi has a significant role in defending cells against parasitic nucleotide sequences. In eukaryotes, including animals, RNAi is initiated by the enzyme Dicer. Dicer cleaves long double-stranded RNA (dsRNA, often found in viruses) into short double-stranded fragments of around 20 nucleotides, called small interfering RNAs (siRNAs). The dsRNA is then separated into two single-stranded RNAs (ssRNA) – the passenger strand and the guide strand. Subsequently, the passenger strand is degraded, while the guide strand is incorporated into the RNA-induced silencing complex (RISC). The most well-studied outcome of RNAi is post-transcriptional gene silencing, which occurs when the guide strand pairs with a complementary sequence in a messenger RNA molecule and induces cleavage by Argonaute, which lies at the core of the RNA-induced silencing complex.
Argonaute proteins are the active part of the RNA-induced silencing complex, cleaving the target mRNA strand complementary to their bound siRNA. In principle, since Dicer produces short double-stranded fragments, two functional single-stranded siRNAs could be produced. However, only one of the two single strands is used to base pair with target mRNA: the guide strand, which is incorporated into the Argonaute protein and directs gene silencing. The other single strand, named the passenger strand, is degraded during RISC loading.
Once the Argonaute is associated with the small RNA, the enzymatic activity conferred by the PIWI domain cleaves only the passenger strand of the small interfering RNA. RNA strand separation and incorporation into the Argonaute protein are guided by the strength of the hydrogen bond interaction at the 5′-ends of the RNA duplex, known as the asymmetry rule. Also the degree of complementarity between the two strands of the intermediate RNA duplex defines how the miRNA are sorted into different types of Argonaute proteins.
In animals, Argonaute associated with miRNA binds to the 3′-untranslated region of mRNA and prevents the production of proteins in various ways. The recruitment of Argonaute proteins to targeted mRNA can induce mRNA degradation. The Argonaute-miRNA complex can also affect the formation of functional ribosomes at the 5′-end of the mRNA. The complex here competes with the translation initiation factors and/or abrogate ribosome assembly. Also, the Argonaute-miRNA complex can adjust protein production by recruiting cellular factors such as peptides or post translational modifying enzymes, which degrade the growing of polypeptides.
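A minimal sketch of the base-pairing idea behind this targeting (the "seed" heuristic of guide nucleotides 2–8 is standard, but the sequences are toy strings, and real target prediction also weighs pairing position, thermodynamics, and conservation):

```python
# Find matches to a miRNA "seed" (guide nucleotides 2-8) in a 3'UTR:
# a canonical target site carries the reverse complement of the seed.
COMP = {'A': 'U', 'U': 'A', 'G': 'C', 'C': 'G'}

def seed_sites(mirna, utr):
    seed = mirna[1:8]                                # nucleotides 2-8
    site = ''.join(COMP[b] for b in reversed(seed))  # reverse complement
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"   # toy guide strand (let-7-like)
utr = "AAGCUACCUCAAAACUACCUCAGG"   # toy 3'UTR containing two seed matches
print(seed_sites(mirna, utr))     # -> [3, 14]
```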
In plants, once de novo double-stranded (ds) RNA duplexes are generated with the target mRNA, an unknown RNase-III-like enzyme produces new siRNAs, which are then loaded onto the Argonaute proteins containing PIWI domains, lacking the catalytic amino acid residues, which might induce another level of specific gene silencing.
Functional domains and mechanism
The Argonaute (AGO) gene family encodes six characteristic domains: N- terminal (N), Linker-1 (L1), PAZ, Linker-2 (L2), Mid, and a C-terminal PIWI domain.
The PAZ domain is named for Drosophila Piwi, Arabidopsis Argonaute-1, and Arabidopsis Zwille (also known as pinhead, and later renamed argonaute-10), where the domain was first recognized to be conserved. The PAZ domain is an RNA binding module that recognizes single-stranded 3′ ends of siRNA, miRNA and piRNA, in a sequence independent manner.
PIWI is named after the Drosophila Piwi protein. Structurally resembling RNaseH, the PIWI domain is essential for the target cleavage. The active site with aspartate–aspartate–glutamate triad harbors a divalent metal ion, necessary for the catalysis. Family members of AGO that lost this conserved feature during evolution lack the cleavage activity. In human AGO, the PIWI motif also mediates protein-protein interaction at the PIWI box, where it binds to Dicer at an RNase III domain.
At the interface of the PIWI and Mid domains sits the 5′ phosphate of a siRNA, miRNA or piRNA, which is essential for function. Within Mid lies an MC motif, a homologous structure proposed to mimic the cap-binding motif found in eIF4E. It was later found that the MC motif is not involved in mRNA cap binding.
Family members
In humans, there are eight AGO family members, some of which are investigated intensively. However, even though AGO1–4 are capable of loading miRNA, endonuclease activity and thus RNAi-dependent gene silencing exclusively belongs to AGO2. Considering the sequence conservation of PAZ and PIWI domains across the family, the uniqueness of AGO2 is presumed to arise from either the N-terminus or the spacing region linking PAZ and PIWI motifs.
Several AGO family members in plants have also attracted study. AGO1 is involved in miRNA-related RNA degradation and plays a central role in morphogenesis. In some organisms, it is strictly required for epigenetic silencing. It is regulated by miRNA itself. AGO4 is not involved in RNAi-directed RNA degradation, but in DNA methylation and other epigenetic regulation, through the small RNA (smRNA) pathway. AGO10 is involved in plant development. AGO7 has a function distinct from AGO1 and AGO10, and is not found in gene silencing induced by transgenes. Instead, it is related to developmental timing in plants.
Disease and therapeutic tools
Argonaute proteins have been reported to be associated with cancers. For diseases that involve selective or elevated expression of particular identified genes, such as pancreatic cancer, the high sequence specificity of RNA interference might make it a suitable treatment, particularly appropriate for combating cancers associated with mutated endogenous gene sequences. Several small non-coding RNAs (microRNAs) have been reported to be related to human cancers; for example, miR-15a and miR-16a are frequently deleted and/or down-regulated in patients. Even though the biological functions of miRNAs are not fully understood, roles for miRNAs in the coordination of cell proliferation and cell death during development and metabolism have been uncovered. It is believed that miRNAs can direct negative or positive regulation at different levels, depending on the specific miRNA–target base-pair interaction and the cofactors that recognize them.
Because many viruses have RNA rather than DNA as their genetic material and go through at least one stage in their life cycle when they make double-stranded RNA, RNA interference has been considered a potentially evolutionarily ancient mechanism for protecting organisms from viruses. The small interfering RNAs produced by Dicer cause sequence-specific, post-transcriptional gene silencing by guiding an endonuclease, the RNA-induced silencing complex (RISC), to mRNA. This process has been seen in a wide range of organisms, such as the fungus Neurospora (in which it is known as quelling), plants (post-transcriptional gene silencing) and mammalian cells (RNAi). If there is complete or near-complete sequence complementarity between the small RNA and the target, the Argonaute protein component of RISC mediates cleavage of the target transcript; otherwise, the mechanism predominantly involves repression of translation.
Biotechnological applications of prokaryotic Argonaute proteins
In 2016, a group from Hebei University of Science and Technology reported genome editing using a prokaryotic Argonaute protein from Natronobacterium gregoryi. However, evidence for the application of Argonaute proteins as DNA-guided nucleases for genome editing has been questioned, with the retraction of the claim from the leading journal. In 2017, a group from the University of Illinois reported using a prokaryotic Argonaute protein taken from Pyrococcus furiosus (PfAgo), along with guide DNA, to edit DNA in vitro as artificial restriction enzymes. PfAgo-based artificial restriction enzymes were also used for storing data on native DNA sequences via enzymatic nicking.
References
External links
starBase database: a database for exploring microRNA–mRNA interaction maps from Argonaute CLIP-Seq (HITS-CLIP, PAR-CLIP) and Degradome-Seq data.
Ribonucleases
Molecular genetics
MicroRNA
RNA-binding proteins
RNA interference | Argonaute | Chemistry,Biology | 2,213 |
14,646,394 | https://en.wikipedia.org/wiki/Federal%20modernism | Federal modernism is an architectural style which emerged in the twentieth century encompassing various styles of modern architecture used in the design of federal buildings in the United States. Federal buildings in this style shunned ornamentation, focusing instead on functional efficiency and low costs. There is no universally accepted start date for federal modernism, with some early variants of modernism emerging as early as the 1920s, but the term is most often associated with the buildings built by the U.S. General Services Administration (GSA) in the 1950s through 1970s. Prominent architects associated with federal modernism include Ludwig Mies van der Rohe, Marcel Breuer, Walter Gropius, and Victor Lundy. Federal modernism has been criticized by some architects and politicians such as Donald Trump, either because they believe it lacks "authority" or due to a perceived lack of beauty.
History
Prior to the American Revolution, colonial America derived its public buildings from architectural styles and practices of Great Britain. After gaining independence, the American republic was influenced and inspired by classical Roman and Greek forms, representing the democratic ideals of law and citizenship of a new nation.
In 1852, after the population tripled in numbers, the Office of Construction and Office of the Supervising Architect were established under the Treasury Department to oversee federal design and construction and make the process more efficient and timely. Designs in this period moved away from classicism toward other styles like Renaissance Revival, and emphasized centralization and standardization.
While there is no universally accepted start date for federal modernism, in the early twentieth century the materials and building methods used in federal buildings changed, and reflected the styles of early modernism. These buildings utilized clean lines, flat surfaces, and simple geometric shapes, lacking the ornamentation prevalent in classical architecture. While classicism asserted permanence and authority, modernism celebrated innovation and freedom with its steel and glass materials.
During the New Deal, approximately 1,300 federally funded buildings were constructed nationwide in a simplified classical style. Sometimes referred to as "modern classic" or "stripped classic" mode, the style "was so named because the basic form and symmetry of classicism were retained, but much of the ornamentation and motifs were reduced or removed."
After the General Services Administration (GSA) was established in 1949 to “provide the resources needed by U.S. agencies to accomplish their missions,” federal buildings reflected an emphasis on functionalism rather than ornamentation. Federal modernism is most closely associated with the GSA buildings constructed between the 1950s and 1970s, which embodied this philosophy. Trends of functionalism included individual offices becoming less common while large open “universal” spaces became more common.
Characteristics
Lacking the ornamentation and ceremonial spaces of earlier styles, federal modernism instead incorporated sharp edges and emphasized functionality and efficiency. It often involved the use of many prefabricated elements, and inexpensive materials such as aluminum, concrete, and plastic. The lower cost of building in the modernist style helped it become widespread. With these changes, federal buildings began to resemble private office buildings, and it became challenging to differentiate between public and private structures in communities.
Modernist philosophy and the rapid pace of technological advancement led to buildings being constructed with intended lifespans of only 20-30 years, instead of centuries like their predecessors, due to “economics” and the “increasing requirements of comfort demanded by people”. This has led to many questions over whether it makes sense for the GSA to continually maintain and reinvest in these buildings as they age.
From the 1950s to 1970s, various styles of modern architecture were commonly used in federal buildings. These include International Style, New Formalism, Brutalism, and Expressionism.
Architects
Private architecture firms, along with government architects, produced designs in the federal modernist style for office buildings, courthouses, post offices, border stations, and museums. As a result of the inclusion of private firms, the demarcation between government and private architecture diminished.
Architects associated with federal modernism include prominent American modernist architects of the mid-twentieth century including Ludwig Mies van der Rohe, Marcel Breuer, Walter Gropius, and Victor Lundy.
Mies van der Rohe
Ludwig Mies van der Rohe was the chief designer of the Chicago Federal Center (also known as the Chicago Federal Complex) in Chicago, Illinois. He worked alongside the architects of the firms of Schmidt, Garden and Erikson, C.F. Murphy Associates, and A. Epstein and Sons. The construction took place between 1960-1974. The complex includes three buildings: the 45-story John C. Kluczynski Federal Building, the Everett McKinley Dirksen United States Courthouse, and a post office between these two towers.
A sculpture by Alexander Calder, "Flamingo," is installed in the complex's central plaza.
Walter Gropius and The Architects Collaborative
Walter Gropius, founder of the Bauhaus School, along with The Architects Collaborative, designed the John F. Kennedy Federal Building located at 15 Sudbury Street, Boston, Massachusetts.
Marcel Breuer
Marcel Breuer, who worked with Mies van der Rohe and Gropius as part of the Bauhaus School, designed the Robert C. Weaver Federal Building in Washington D.C. located at 451 7th Street, SW. It houses the headquarters of the U.S. Department of Housing and Urban Development.
Victor Lundy
In 1965, Victor Lundy designed the U.S. Tax Court Building located at 400 2nd Street NW, Washington, D.C., and it is listed on the National Register of Historic Places.
Reception
In 2007, some architects invited by the GSA to a forum complained that modernist courthouses did not have as much “gravitas, order and authority” as those built in the classical style.
Responses to federal modernism became subject to partisan bickering; in 2020, Donald Trump signed an executive order disapproving of modernism in federal buildings due to its perceived lack of beauty. Joe Biden subsequently overturned that executive order in 2021. And in response to Biden's executive order, Republicans in Congress introduced legislation in 2023 that would discourage the use of modernist architecture and instead favor classicism in federal building design.
References
External links
Federal Modernism on the General Services Administration website.
GSA modern building poster galleries.
Modernist architecture in the United States
American architectural styles | Federal modernism | Engineering | 1,273 |
71,366,072 | https://en.wikipedia.org/wiki/Phacopsis%20oroarcticae | Phacopsis oroarcticae is a species of lichenicolous (lichen-dwelling) fungus in the family Parmeliaceae. It was formally described as a new species in 2010 by Russian mycologist Mikhail P. Zhurbenko. The type specimen was collected from a stony polar desert in the Severnaya Zemlya Archipelago in Central Siberia, where it was found growing on the lobes of the foliose lichen Brodoa oroarctica; the species epithet refers to its host. Infection by the fungus results in bleached, swollen, and sometimes contorted lobes. It is the first Phacopsis species known to have Brodoa as a host.
References
Parmeliaceae
Fungi described in 2010
Fungi of Russia
Lichenicolous fungi
Taxa named by Mikhail Petrovich Zhurbenko
Fungus species | Phacopsis oroarcticae | Biology | 171 |
58,960 | https://en.wikipedia.org/wiki/Timeline%20of%20medicine%20and%20medical%20technology | This is a timeline of the history of medicine and medical technology.
Antiquity
3300 BC – During the Stone Age, early doctors used very primitive forms of herbal medicine in India.
3000 BC – Ayurveda The origins of Ayurveda have been traced back to around 3,000 BCE.
c. 2600 BC – Imhotep, the priest-physician who was later deified as the Egyptian god of medicine.
2500 BC – Iry: an Egyptian inscription speaks of Iry as eye-doctor of the palace, palace physician of the belly, guardian of the royal bowels, and he who prepares the important medicine (the name cannot be translated) and knows the inner juices of the body.
1900–1600 BC – Akkadian clay tablets on medicine survive primarily as copies from Ashurbanipal's library at Nineveh.
1800 BC – Code of Hammurabi sets out fees for surgeons and punishments for malpractice
1800 BC – Kahun Gynecological Papyrus
1600 BC – Hearst papyrus, coprotherapy and magic
1551 BC – Ebers Papyrus, coprotherapy and magic
1500 BC – Saffron used as a medicine on the Aegean island of Thera in ancient Greece
1500 BC – Edwin Smith Papyrus, an Egyptian medical text and the oldest known surgical treatise (no true surgery) no magic
1300 BC – Brugsch Papyrus and London Medical Papyrus
1250 BC – Asklepios
9th century – Hesiod reports an ontological conception of disease via the Pandora myth. Disease has a "life" of its own but is of divine origin.
8th century – Homer tells that Polydamna supplied the Greek forces besieging Troy with healing drugs. Homer also tells of battlefield surgery: Idomeneus tells Nestor, after Machaon had fallen, that a surgeon who can cut out an arrow and heal the wound with his ointments is worth a regiment.
700 BC – Cnidos medical school; also one at Cos
500 BC – Darius I orders the restoration of the House of Life (First record of a (much older) medical school)
500 BC – Bian Que becomes the earliest physician known to use acupuncture and pulse diagnosis
500 BC – The Sushruta Samhita is published, laying the framework for Ayurvedic medicine, giving many surgical procedures for first time such as lithotomy, forehead flap rhinoplasty, otoplasty and many more.
5th century BC – Empedocles proposes the doctrine of the four elements.
500 BC – Pills were used. They were presumably invented so that measured amounts of a medicinal substance could be delivered to a patient.
510–430 BC – Alcmaeon of Croton performs scientific anatomic dissections. He studied the optic nerves and the brain, arguing that the brain was the seat of the senses and intelligence. He distinguished veins from arteries and had at least a vague understanding of the circulation of the blood. Variously described by modern scholars as the Father of Anatomy; Father of Physiology; Father of Embryology; Father of Psychology; Creator of Psychiatry; Founder of Gynecology; and as the Father of Medicine itself. There is little evidence to support the claims, but he is, nonetheless, important.
fl. 425 BC – Diogenes of Apollonia
c. 425 BC – Herodotus tells us Egyptian doctors were specialists: Medicine is practiced among them on a plan of separation; each physician treats a single disorder, and no more. Thus the country swarms with medical practitioners, some undertaking to cure diseases of the eye, others of the head, others again of the teeth, others of the intestines, and some those which are not local.
496 – 405 BC – Sophocles "It is not a learned physician who sings incantations over pains which should be cured by cutting."
420 BC – Hippocrates of Cos maintains that diseases have natural causes and puts forth the Hippocratic Oath. Origin of rational medicine.
Medicine after Hippocrates
c. 400 BC – 1 BC – The Huangdi Neijing (Yellow Emperor's Classic of Internal Medicine) is published, laying the framework for traditional Chinese medicine
4th century BC – Philistion of Locri; Praxagoras distinguishes veins from arteries and determines that only arteries pulse
375–295 BC – Diocles of Carystus
354 BC – Critobulus of Cos extracts an arrow from the eye of Phillip II, treating the loss of the eyeball without causing facial disfigurement.
3rd century BC – Philinus of Cos, founder of the Empiricist school. Herophilos and Erasistratus practice androtomy (dissection of live and dead human beings).
280 BC – Herophilus, through dissection, studies the nervous system, distinguishing between sensory nerves and motor nerves and the brain; he also studies the anatomy of the eye and introduces medical terminology (in Latin translation, his "net-like" becomes retiform/retina).
270 – Huangfu Mi writes the Zhēnjiǔ jiǎyǐ jīng (The ABC Compendium of Acupuncture), the first textbook focusing solely on acupuncture.
250 BC – Erasistratus studies the brain, distinguishes between the cerebrum and cerebellum, and investigates the physiology of the brain, heart and eyes, and of the vascular, nervous, respiratory and reproductive systems.
219 – Zhang Zhongjing publishes Shang Han Lun (On Cold Disease Damage).
200 BC – the Charaka Samhita uses a rational approach to the causes and cure of disease and uses objective methods of clinical examination
124 – 44 BC – Asclepiades of Bithynia
116 – 27 BC – Marcus Terentius Varro proposes a prototypal germ theory of disease.
1st century AD – Rufus of Ephesus; Marcellinus, a physician of the first century AD; Numisianus
23 – 79 AD – Pliny the Elder writes Natural History
c. 25 BC – 50 AD – Aulus Cornelius Celsus compiles a medical encyclopedia (De Medicina)
50 – 70 AD – Pedanius Dioscorides writes De Materia Medica – a precursor of modern pharmacopoeias that was in use for almost 1600 years
2nd century AD – Aretaeus of Cappadocia
98 – 138 AD – Soranus of Ephesus
129 – 216 AD – Galen – clinical medicine based on observation and experience. The resulting tightly integrated and comprehensive system, offering a complete medical philosophy, dominated medicine throughout the Middle Ages and until the beginning of the modern era.
After Galen 200 AD
– Fabulla or Fabylla, medical writer
d. 260 – Gargilius Martialis, short Latin handbook on Medicines from Vegetables and Fruits
4th century – Magnus of Nisibis, Alexandrian doctor and professor, writes a book on urine
325 – 400 – Oribasius writes a 70-volume medical encyclopedia
362 – Julian orders xenones built, imitating Christian charity (proto hospitals)
369 – Basil of Caesarea founded at Caesarea in Cappadocia an institution (hospital) called Basileias, with several buildings for patients, nurses, physicians, workshops, and schools
375 – Ephrem the Syrian opened a hospital at Edessa. Such institutions spread and specialized: nosocomia for the sick, brephotrophia for foundlings, orphanotrophia for orphans, ptochia for the poor, xenodochia for poor or infirm pilgrims, and gerontochia for the old.
400 – The first hospital in Latin Christendom was founded by Fabiola at Rome
420 – Caelius Aurelianus a doctor from Sicca Veneria (El-Kef, Tunisia) handbook On Acute and Chronic Diseases in Latin.
447 – Cassius Felix of Cirta (Constantine, Ksantina, Algeria), medical handbook drew on Greek sources, Methodist and Galenist in Latin
480 – 547 Benedict of Nursia founder of "monastic medicine"
484 – 590 – Flavius Magnus Aurelius Cassiodorus
fl. 511–534 – Anthimus (Greek: Ἄνθιμος)
536 – Sergius of Reshaina dies; a Christian theologian-physician, he translated thirty-two of Galen's works into Syriac and wrote medical treatises of his own
525–605 – Alexander of Tralles (Alexander Trallianus)
500–550 – Aetius of Amida writes a medical encyclopedia in four books, each divided into four sections
Second half of 6th century – The Nestorians under the Sasanians build xenodocheions/bimārestāns, which would evolve into the complex secular "Islamic hospital" combining lay practice and Galenic teaching
550–630 – Stephanus of Athens
560 – 636 – Isidore of Seville
c. 620 – Aaron of Alexandria writes, in Syriac, 30 books on medicine, the "Pandects"; he was the first author in antiquity to mention the diseases of smallpox and measles. His work was translated into Arabic by Māsarjawaih, a Syrian Jewish physician, about AD 683
c. 630 – Paul of Aegina writes a medical encyclopedia in seven books; its very detailed treatment of surgery was later used by Albucasis
790–869 – Leo the Iatrosophist (also called the Mathematician or the Philosopher) writes the "Epitome of Medicine"
c. 800 – 873 – Al-Kindi (Alkindus) De Gradibus
820 – A Benedictine hospital is founded at Salerno; the School of Salerno would grow around it
d. 857 – Mesue the Elder (Yūḥannā ibn Māsawayh), Syriac Christian physician
c. 830–870 – Hunayn ibn Ishaq (Johannitius), a Syriac-speaking Christian who also knew Greek and Arabic; translator and author of several medical tracts
c. 838 – 870 – Ali ibn Sahl Rabban al-Tabari, writes an encyclopedia of medicine in Arabic.
d. c. 910 – Ishaq ibn Hunayn
9th century – Yahya ibn Sarafyun, a Syriac physician, known in Latin as Johannes Serapion or Serapion the Elder
c. 865–925 – Rhazes writes on pediatrics and makes the first clear distinction between smallpox and measles in his al-Hawi
d. 955 – Isaac Judaeus (Isḥāq ibn Sulaymān al-Isrāʾīlī), Egyptian-born Jewish physician
913–982 – Shabbethai Donnolo, alleged founding father of the School of Salerno, writes in Hebrew
d. 982–994 – 'Ali ibn al-'Abbas al-Majusi (Haly Abbas)
1000 – Albucasis (936–1018) writes on surgery in the Kitab al-Tasrif, describing surgical instruments
d. 1075 – Ibn Butlan, Christian physician of Baghdad, writes the Tacuinum sanitatis; the Arabic original and most of the Latin copies are in tabular format
1018–1087 – Michael Psellos (or Psellus), a Byzantine monk, writer, philosopher, politician, and historian; author of several books on medicine
c. 1030 – Avicenna writes The Canon of Medicine; the Canon remains a standard textbook in Muslim and European universities until the 18th century
c. 1071–1078 – Simeon Seth (Symeon Seth), an 11th-century Jewish Byzantine scholar, translates Arabic works into Greek
1084 – First documented hospital in England, at Canterbury
d. 1087 – Constantine the African
1083 – 1153 – Anna Komnene, Latinized as Comnena
1095 – The Congregation of the Antonines is founded to treat victims of "St. Anthony's fire", a skin disease.
Late 11th or early 12th century – Trotula
1123 – St Bartholomew's Hospital is founded by the court jester Rahere; Augustinian nuns originally cared for the patients. Mental patients were accepted along with others.
1127 – Stephen of Antioch translates the work of Haly Abbas
1100–1161 – Avenzoar, teacher of Averroes
1170 – Rogerius Salernitanus composes his Chirurgia, also known as The Surgery of Roger
1126 – 1198 – Averroes
d. c. 1161 – Matthaeus Platearius
1200–1499
1203 – Innocent III organizes the hospital of Santo Spirito at Rome, inspiring others all over Europe
c. 1210 – 1277 – William of Saliceto, also known as Guilielmus de Saliceto
1210 – 1295 – Taddeo Alderotti – Scholastic medicine
1240 – Bartholomeus Anglicus
1242 – Ibn al-Nafis suggests that the right and left ventricles of the heart are separate and discovers the pulmonary circulation and coronary circulation
c. 1248 – Ibn al-Baytar writes on botany and pharmacy and studies animal anatomy and veterinary medicine.
1249 – Roger Bacon writes about convex lens spectacles for treating long-sightedness
1257–1316 – Pietro d'Abano, also known as Petrus de Apono or Aponensis
1260 – Louis IX establishes Les Quinze-Vingts; originally a retreat for the blind, it became a hospital for eye diseases, and is now one of the most important medical centers in Paris
c. 1260–1320 – Henri de Mondeville
1284 – The Mansuri hospital is founded in Cairo
– Joannes Zacharias Actuarius, a Byzantine physician, writes the last great compendium of Byzantine medicine
1275–1326 – Mondino de Luzzi ("Mundinus") carries out the first systematic human dissections since Herophilus of Chalcedon and Erasistratus of Ceos 1,500 years earlier.
1288 – The hospital of Santa Maria Nuova is founded in Florence; it was strictly medical.
1300 – concave lens spectacles to treat myopia developed in Italy.
1310 – Pietro d'Abano's Conciliator
d. 1348 – Gentile da Foligno
1292–1350 – Ibn Qayyim al-Jawziya
1306–1390 – John of Arderne
d. 1368 – Guy de Chauliac
fl. 1460 – Heinrich von Pfolspeundt
1443–1502 – Antonio Benivieni, pioneer of pathological anatomy
1493–1541 – Paracelsus writes a surgical book on the relationship between medicine and surgery
1500–1799
Early 16th century:
Paracelsus, an alchemist by trade, rejects occultism and pioneers the use of chemicals and minerals in medicine; he burns the books of Avicenna, Galen, and Hippocrates.
Hieronymus Fabricius: his "Surgery" is mostly that of Celsus, Paul of Aegina, and Abulcasis, citing them by name.
Caspar Stromayr
1500?–1561 – Pierre Franco
Ambroise Paré (1510–1590) pioneered the treatment of gunshot wounds.
Bartholomeo Maggi at Bologna, Felix Wurtz of Zurich, Léonard Botal in Paris, and the Englishman Thomas Gale (the diversity of their geographical origins attests to the widespread interest of surgeons in the problem) all published works urging treatment similar to Paré's. But it was Paré's writings which were the most influential.
1518 – The College of Physicians, now known as the Royal College of Physicians of London, a British professional body of doctors of general medicine and its subspecialties, is founded by royal charter.
1510–1590 – Ambroise Paré, surgeon
1540 – 1604 – William Clowes – Surgical chest for military surgeons
1543 – Andreas Vesalius publishes De humani corporis fabrica, which corrects Greek medical errors and revolutionizes European medicine
1546 – Girolamo Fracastoro proposes that epidemic diseases are caused by transferable seedlike entities
1550 – 1612 – Peter Lowe
1553 – Miguel Servet describes the circulation of blood through the lungs.
1556 – Amato Lusitano describes venous valves in the azygos vein
1559 – Realdo Colombo describes the circulation of blood through the lungs in detail
1563 – Garcia de Orta founds tropical medicine with his treatise on Indian diseases and treatments
1570–1643 – John Woodall, whose "The Surgions Mate" recommended that ship surgeons use lemon juice to treat scurvy
1590 – The microscope is invented; it would play a major part in medical advancement
1596 – Li Shizhen publishes Běncǎo Gāngmù or Compendium of Materia Medica
1603 – Girolamo Fabrici studies leg veins and notices that they have valves which allow blood to flow only toward the heart
1621 – 1676 – Richard Wiseman
1628 – William Harvey explains the circulatory system in Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus
1683 – 1758 – Lorenz Heister
1688 – 1752 – William Cheselden
1701 – Giacomo Pylarini gives the first smallpox inoculations in Europe. They were widely practised in the East before then.
1714 – 1789 – Percivall Pott
1720 – Lady Mary Wortley Montagu champions smallpox inoculation in England
1728 – 1793 – John Hunter
1736 – Claudius Amyand performs the first successful appendectomy
1744–1795 – Pierre-Joseph Desault founds the first surgical periodical
1747 – James Lind discovers that citrus fruits prevent scurvy
1749 – 1806 – Benjamin Bell – Leading surgeon of his time and father of a surgical dynasty, author of "A System of Surgery"
1752 – 1832 – Antonio Scarpa
1763 – 1820 – John Bell
1766–1842 – Dominique Jean Larrey, surgeon to Napoleon
1768–1841 – Astley Cooper, surgeon; lectures on the principles and practice of surgery
1774 – 1842 – Charles Bell, surgeon
1774 – Joseph Priestley discovers nitrous oxide, nitric oxide, ammonia, hydrogen chloride and oxygen
1777–1835 – Baron Guillaume Dupuytren, head surgeon at Hôtel-Dieu de Paris; "the age of Dupuytren"
1785 – William Withering publishes "An Account of the Foxglove" the first systematic description of digitalis in treating dropsy
1790 – Samuel Hahnemann rages against the prevalent practice of bloodletting as a universal cure and founds homeopathy
1796 – Edward Jenner develops a smallpox vaccination method
1799 – Humphry Davy discovers the anesthetic properties of nitrous oxide
1800–1899
1800 – Humphry Davy announces the anaesthetic properties of nitrous oxide.
1804 – Morphine is first isolated by Friedrich Sertürner; this is generally believed to be the first isolation of an active ingredient from a plant.
1813–1883 – James Marion Sims pioneers vesico-vaginal fistula surgery; father of surgical gynecology.
1816 – René Laennec invents the stethoscope.
1827–1912 – Joseph Lister pioneers antiseptic surgery; father of modern surgery
1818 – James Blundell performs the first successful human transfusion.
1842 – Crawford Long performs the first surgical operation using anesthesia with ether.
1845 – John Hughes Bennett first describes leukemia as a blood disorder.
1846 – First painless surgery with general anesthetic.
1847 – Ignaz Semmelweis discovers how to prevent puerperal fever.
1849 – Elizabeth Blackwell is the first woman to gain a medical degree in the United States.
1850 – Female Medical College of Pennsylvania (later Woman's Medical College), the first medical college in the world to grant degrees to women, is founded in Philadelphia.
1858 – Rudolf Carl Virchow (13 October 1821 – 5 September 1902) publishes his theories of cellular pathology, which spelled the end of humoral medicine.
1861 – Louis Pasteur advances the germ theory of disease
1867 – Lister publishes Antiseptic Principle of the Practice of Surgery, based partly on Pasteur's work.
1870 – Louis Pasteur and Robert Koch establish the germ theory of disease.
1878 – Ellis Reynolds Shipp graduates from the Women's Medical College of Pennsylvania and begins practice in Utah.
1879 – First vaccine for cholera.
1881 – Louis Pasteur develops an anthrax vaccine.
1882 – Louis Pasteur develops a rabies vaccine.
1887 – Augustus Waller records the first human electrocardiogram; Willem Einthoven later develops practical electrocardiography (ECG/EKG) with his string galvanometer
1890 – Emil von Behring discovers antitoxins and uses them to develop tetanus and diphtheria vaccines.
1895 – Wilhelm Conrad Röntgen discovers medical use of X-rays in medical imaging
1900–1999
1901 – Karl Landsteiner discovers the existence of different human blood types
1901 – Alois Alzheimer identifies the first case of what becomes known as Alzheimer's disease
1906 – Frederick Hopkins suggests the existence of vitamins and suggests that a lack of vitamins causes scurvy and rickets
1907 – Paul Ehrlich develops a chemotherapeutic cure for sleeping sickness
1907 – Henry Stanley Plummer develops the first structured patient record and clinical number (Mayo Clinic)
1908 – Victor Horsley and R. Clarke invent the stereotactic method
1909 – First intrauterine device described by Richard Richter.
1910 – Hans Christian Jacobaeus performs the first laparoscopy on humans
1917 – Julius Wagner-Jauregg discovers the malarial fever shock therapy for general paresis of the insane
1921 – Edward Mellanby discovers vitamin D and shows that its absence causes rickets
1921 – Frederick Banting and Charles Best discover insulin – important for the treatment of diabetes
1921 – Fidel Pagés pioneers epidural anesthesia
1923 – First vaccine for diphtheria
1924 – Hans Berger discovers human electroencephalography
1926 – First vaccine for pertussis
1927 – First vaccine for tuberculosis
1927 – First vaccine for tetanus
1930 – First successful sex reassignment surgery performed on Lili Elbe in Dresden, Germany.
1932 – Gerhard Domagk develops a chemotherapeutic cure for streptococcal infections
1933 – Manfred Sakel discovers insulin shock therapy
1935 – Ladislas J. Meduna discovers metrazol shock therapy
1935 – First vaccine for yellow fever
1936 – Egas Moniz discovers prefrontal lobotomy for treating mental diseases; Enrique Finochietto develops the now ubiquitous self-retaining thoracic retractor
1938 – Ugo Cerletti and Lucio Bini discover electroconvulsive therapy
1938 – Howard Florey and Ernst Chain investigate penicillin and attempt to mass-produce it; they test it on the policeman Albert Alexander, who improved but died when the supply of penicillin ran out
1943 – Willem J. Kolff builds the first dialysis machine
1944 – Disposable catheter – David S. Sheridan
1946 – Chemotherapy – Alfred G. Gilman and Louis S. Goodman
1947 – Defibrillator – Claude Beck
1948 – Acetaminophen – Julius Axelrod, Bernard Brodie
1949 – First implant of intraocular lens, by Sir Harold Ridley
1949 – Mechanical assistor for anesthesia – John Emerson
1952 – Jonas Salk develops the first polio vaccine (available in 1955)
1952 – Cloning – Robert Briggs and Thomas King
1953 – First live birth from frozen sperm
1953 – Heart-lung machine – John Heysham Gibbon
1953 – Medical ultrasonography – Inge Edler
1954 – Joseph Murray performs the first human kidney transplant (on identical twins)
1954 – Ventouse – Tage Malmstrom
1955 – Tetracycline – Lloyd Conover
1956 – Metered-dose inhaler – 3M
1957 – William Grey Walter invents the brain EEG topography (toposcope)
1958 – Pacemaker – Rune Elmqvist
1959 – In vitro fertilization – Min Chueh Chang
1960 – Invention of cardiopulmonary resuscitation (CPR)
1960 – First combined oral contraceptive approved by the FDA
1962 – Hip replacement – John Charnley
1962 – Beta blocker James W. Black
1962 – Albert Sabin develops first oral polio vaccine
1963 – Artificial heart – Paul Winchell
1963 – Thomas Starzl performs the first human liver transplant
1963 – James Hardy performs the first human lung transplant
1963 – Valium (diazepam) – Leo H. Sternbach
1964 – First vaccine for measles
1965 – Frank Pantridge installs the first portable defibrillator
1965 – First commercial ultrasound
1966 – C. Walton Lillehei performs the first human pancreas transplant
1966 – Rubella Vaccine – Harry Martin Meyer and Paul D. Parkman
1967 – First vaccine for mumps
1967 – René Favaloro develops Coronary Bypass surgery
1967 – Christiaan Barnard performs the first human heart transplant
1968 – Powered prosthesis – Samuel Alderson
1968 – Controlled drug delivery – Alejandro Zaffaroni
1969 – Balloon catheter – Thomas Fogarty
1969 – Cochlear implant – William House
1970 – Cyclosporine, the first effective immunosuppressive drug, is introduced in organ transplant practice
1971 – MMR Vaccine – developed by Maurice Hilleman
1971 – Genetically modified organisms – Ananda Chakrabarty
1971 – Magnetic resonance imaging – Raymond Vahan Damadian
1971 – Computed tomography (CT or CAT Scan) – Godfrey Hounsfield
1971 – Transdermal patches – Alejandro Zaffaroni
1971 – Sir Godfrey Hounsfield invents the first commercial CT scanner
1972 – Insulin pump – Dean Kamen
1973 – Laser eye surgery (LASIK) – Mani Lal Bhaumik
1974 – Liposuction – Giorgio Fischer
1976 – First commercial PET scanner
1978 – First live birth from in vitro fertilisation (IVF)
1978 – Last fatal case of smallpox
1979 – Antiviral drugs – George Hitchings and Gertrude Elion
1980 – Raymond Damadian builds first commercial MRI scanner
1980 – Lithotripter – Dornier Research Group
1980 – First vaccine for hepatitis B – Baruch Samuel Blumberg
1980 – Cloning of interferons – Sidney Pestka
1981 – Artificial skin – John F. Burke and Ioannis V. Yannas
1981 – Bruce Reitz performs the first human heart-lung combined transplant
1982 – Human insulin – Eli Lilly
1982 – Willem Johan Kolff performs the first artificial heart transplant.
1985 – Automated DNA sequencer – Leroy Hood and Lloyd Smith
1985 – Polymerase chain reaction (PCR) – Kary Mullis
1985 – Surgical robot – Yik San Kwoh
1985 – DNA fingerprinting – Alec Jeffreys
1985 – Capsule endoscopy – Tarun Mullick
1986 – Fluoxetine HCl – Eli Lilly and Co
1987 – commercially available Statins – Merck & Co.
1987 – Tissue engineering – Joseph Vacanti & Robert Langer
1988 – Intravascular stent – Julio Palmaz
1988 – Laser cataract surgery – Patricia Bath
1989 – Pre-implantation genetic diagnosis (PGD) – Alan Handyside
1989 – DNA microarray – Stephen Fodor
1990 – Gamow bag – Igor Gamow
1992 – Description of Brugada syndrome (Pedro and Josep Brugada)
1992 – First vaccine for hepatitis A available
1992 – Electroactive polymers (artificial muscle) – SRI International
1992 – Intracytoplasmic sperm injection (ICSI) – Andre van Steirteghem
1995 – Adult stem cell use in regeneration of tissues and organs in vivo – B. G. Matapurkar (US patents 6227202 and 20020007223)
1996 – Dolly the Sheep cloned
1998 – Stem cell therapy – James Thomson
2000–2022
2000 – The Human Genome Project draft was completed.
2001 – The first telesurgery was performed by Jacques Marescaux.
2003 – Carlo Urbani, of Doctors Without Borders, alerts the World Health Organization to the threat of the SARS virus, triggering the most effective response to an epidemic in history. Urbani succumbs to the disease himself in less than a month.
2005 – Jean-Michel Dubernard performs the first partial face transplant.
2006 – First HPV vaccine approved.
2006 – The second rotavirus vaccine approved (first was withdrawn).
2007 – The visual prosthetic (bionic eye) Argus II.
2008 – Laurent Lantieri performs the first full face transplant.
2011 – First successful uterus transplant from a deceased donor, in Turkey
2013 – The first kidney was grown in vitro in the U.S.
2013 – The first human liver was grown from stem cells in Japan.
2014 – A 3D printer is used for the first-ever skull transplant.
2014 – Sonendo, a medical technology company based in Laguna Hills, California, introduces the GentleWave system in the United States for root canal treatments.
2016 – The first-ever artificial pancreas is created.
2019 – A heart is 3D-printed from a human patient's cells.
2020 – First vaccine for COVID-19.
2022 – The complete human genome is sequenced.
See also
Timeline of antibiotics
Timeline of vaccines
Timeline of hospitals
References
Matapurkar, B. G. (1995). US patents 6227202 and 20020007223: medical use of adult stem cells; desired metaplasia as a physiological phenomenon for regeneration of tissues and organs in vivo. Annals of the New York Academy of Sciences (1998).
Bynum, W. F. and Roy Porter, eds. Companion Encyclopedia of the History of Medicine (2 vol., 1997); 1840 pp; 72 long essays by scholars.
Conrad, Lawrence I. et al. The Western Medical Tradition: 800 BC to AD 1800 (1995).
Bynum, W. F. et al. The Western Medical Tradition: 1800–2000 (2006).
Loudon, Irvine, ed. Western Medicine: An Illustrated History (1997).
McGrew, Roderick. Encyclopedia of Medical History (1985).
Porter, Roy, ed. The Cambridge History of Medicine (2006); 416 pp.
Porter, Roy, ed. The Cambridge Illustrated History of Medicine (2001).
Singer, Charles, and E. Ashworth Underwood. A Short History of Medicine (2nd ed., 1962).
Watts, Sheldon. Disease and Medicine in World History (2003), 166 pp.
External links
Interactive timeline of medicine and medical technology (requires Flash plugin)
The Historyscoper