| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
76,556,523 | https://en.wikipedia.org/wiki/Drug%20antagonism | Drug antagonism refers to a medicine stopping the action or effect of another substance, preventing a biological response. The stopping actions are carried out by four major mechanisms, namely chemical, pharmacokinetic, receptor and physiological antagonism. The four mechanisms are widely used in reducing overstimulated physiological actions. Drug antagonists can be used in a variety of medications, including anticholinergics, antihistamines, etc. The antagonistic effect can be quantified by pharmacodynamics. Some can even serve as antidotes for toxicities and overdose.
Receptor Antagonism
Mechanism of Action
Receptors bind with endogenous ligands to produce a physiological effect and regulate the body and cellular homeostasis. In a ligand-receptor interaction, the ligand binds with the receptors to form a drug-receptor complex, producing a biological response. The biological nature of receptors can be enzymes, nucleic acids or cellular proteins. Common types of receptors include G-protein coupled receptors, nuclear receptors and ion channels.
A functional antagonist does not produce a biological response after binding to a receptor. It blocks the binding of endogenous ligands to the receptor and thus inhibits the subsequent physiological effect.
Types of receptor antagonism
Reversible and irreversible competitive antagonism
In competitive antagonism, both the agonist and the antagonist bind to the same active site. Increasing the agonist dose can reverse the effect of a reversible competitive antagonist. Irreversible competitive antagonism occurs when the antagonist binds to the same site on the receptor as the agonist but dissociates from the receptor very slowly or not at all, so that when the agonist is delivered, the antagonist occupancy does not change. Since a receptor can only hold one molecule at a time, competitive antagonists reduce the agonist occupancy (the percentage of receptors to which the agonist is bound). Raising the agonist concentration can restore the agonist occupancy and the subsequent tissue response because the two molecules compete for the same site; the antagonism is therefore surmountable. The extent to which a competitive antagonist shifts the agonist log concentration–effect curve to the right, while the curve keeps its maximum and slope, is measured by the dose ratio, which rises linearly with the antagonist concentration.
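The linear relationship between the dose ratio and the antagonist concentration noted above is commonly summarised by the Schild equation. In a standard textbook form (given here only as an illustration; r is the dose ratio, [B] the antagonist concentration and K_B the antagonist's equilibrium dissociation constant, none of which are symbols used elsewhere in this article):

r − 1 = [B] / K_B ,

so a plot of log(r − 1) against log [B] (a Schild plot) is a straight line of unit slope for a simple reversible competitive antagonist.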
Non-competitive (irreversible) antagonism
Allosteric antagonists
The agonist and the antagonist bind to different sites: the antagonist obstructs the chain of events, triggered by the agonist, at a point downstream of the agonist binding site on the receptor. The antagonist binds irreversibly to its own (allosteric) site.
One example is ketamine, which enters the NMDA receptor's ion channel pore and blocks it, stopping ions from passing through the channel. Also, medications like nifedipine and verapamil stop Ca2+ from entering the cell through the cell membrane and so non-selectively prevent drugs acting at any receptor coupled to these calcium channels from causing smooth muscle contraction.
Partial agonists and full agonists
In the presence of a full agonist exerting its maximal effect, a partial agonist can behave like a competitive antagonist and lower the effect of receptor binding, generating merely a submaximal response. These variations can be evaluated in terms of efficacy, which indicates the "strength" of the agonist–receptor complex in producing a tissue response and depends on both receptor occupancy and the response elicited. A particular medication of intermediate efficacy may appear as a partial agonist in one tissue (with a lower level of receptor expression) and as a full agonist in another (with a high level of receptor expression), across distinct cell types expressing the same receptor at varying densities.
Clinical use
Reversible and irreversible competitive antagonism
Competitive antagonists are usually structurally similar to the active compound, since they are structural analogues that have to bind to the same pocket. Reversible competitive antagonists such as antihistamines (Figure 1) compete with histamine (Figure 2) for binding to histamine receptors, blocking the allergic response caused by histamine. They are used in treating histamine-mediated allergies and allergic rhinitis.
Irreversible competitive antagonists like phenoxybenzamine do not dissociate from alpha-adrenergic receptors. Phenoxybenzamine is used to block the activity of alpha receptors in the sympathetic pathway and to treat the paroxysmal hypertension and sweating resulting from pheochromocytoma, as well as benign prostatic hyperplasia.
Chemical antagonism
Mechanism of Action
Chemical antagonism occurs when a chemical antagonist combines with a ligand to form an inactive product compound, inhibiting the response. In chemical antagonism, the receptors are not involved in the process, and the antagonist directly binds with or removes the ligand. It prevents the ligand from binding to the receptor. As the ligand cannot stimulate the receptor, no physiological effect is generated by the receptors and thus provides an inhibitory effect. The common types of chemical antagonism include chelating agents, neutralising antibodies and salt aggregation.
Clinical use
Chelating agent
Chelating agents are organic compounds capable of linking to metal ions. They are useful for removing toxic heavy metal ions from the body. Dimercaprol is a common chelating agent used to treat toxic exposure to arsenic, mercury, gold, and lead. As shown in Figure 3, the -SH ligands of dimercaprol compete with the -SH groups of natural enzymes for the heavy metal, forming a stable metal complex that is excreted in the urine. This action antagonises the toxic metal ions and helps remove them from the circulation. However, dimercaprol has a narrow therapeutic index (TI) and has largely been replaced by its derivative, 2,3-dimercaptosuccinic acid (DMSA).
Neutralising antibodies
Neutralising antibodies block pathogen entry into cells to prevent further infection and replication. Infliximab is a monoclonal antibody binding with tumour necrosis factor-alpha (TNF-alpha), inhibiting its pro-inflammatory action. Its efficacious anti-inflammatory action is clinically used in Crohn's Disease, active rheumatoid arthritis, psoriatic arthritis, and active ankylosing spondylitis.
Salt aggregation
Salt aggregation refers to reactions between a drug and an active compound that form a salt. Strongly anionic unfractionated heparin reacts with the strongly cationic protamine, an arginine-rich peptide, to generate a salt aggregate. The resulting salt aggregate has no anticoagulant activity. Protamine acts quickly, taking only five minutes to neutralise unfractionated heparin, and its half-life is only ten minutes.
Pharmacokinetics antagonism
Mechanism of Action
Pharmacokinetic antagonism occurs when one drug alters the pharmacokinetic profile (absorption, distribution, metabolism, excretion) of another drug, thereby reducing the action of the target drug. The rate of metabolic breakdown of the active drug may rise; alternatively, the rate at which the active drug is absorbed from the digestive system may fall, or the rate at which the drug is excreted by the kidneys may rise.
Clinical use
Drugs affecting absorption: antacid
Most drugs are taken orally and are absorbed through the gastrointestinal tract. Antacids raise the pH in the stomach and can cause premature release of enteric-coated drugs, which are designed to be protected from the acidic environment of the stomach. For example, proton-pump inhibitors (PPIs) are enteric coated to protect them from decomposition in an acidic environment. Co-administration of antacids with PPIs can lead to premature release of the PPIs into the acidic gastric environment, inactivating them before absorption. This type of pharmacokinetic antagonism should be carefully avoided to prevent loss of drug efficacy.
Since most drugs are either weakly acidic or weakly basic, a modified pH also affects where along the gastrointestinal tract the drug is in its non-ionised, absorbable form, thus affecting the time required for absorption and onset.
Drugs affecting metabolism: phenytoin
Many drugs are metabolised by a set of liver enzymes called CYP450s. The activity of these enzymes would determine the rate of pro-drug activation and the rate of inactivation of active drugs. For example, warfarin, a commonly-used anticoagulant drug in atrial fibrillation, is metabolised by an enzyme called CYP2C9. Phenytoin, a CYP2C9 inducer, would increase its activity and the rate of warfarin breakdown, thereby reducing its efficacy. Patients should avoid the co-administration of warfarin and phenytoin. In cases where both drugs must be used together, warfarin dosing may be titrated up to cope with the reduced efficacy.
Drugs affecting excretion: intravenous sodium bicarbonate
The kidneys excrete most drugs through the urine. Weakly acidic drugs ionise in alkaline urine, which makes them difficult to reabsorb. Therefore, in cases of aspirin (a weak acid) toxicity, intravenous sodium bicarbonate can be given to raise the urine pH, thereby increasing the excretion of aspirin in the urine. A similar approach can be used for toxicity from other weakly acidic drugs.
Physiological antagonism
Mechanism of action
Physiological antagonism refers to the situation in which an antagonist produces an effect opposite to that of the agonist without binding to the same site; a physiological antagonist binds to a different receptor from the one the agonist acts on.
Clinical use
Both insulin and glucagon are synthesised naturally in the human body to regulate blood glucose levels at homeostasis. Insulin binds to insulin receptors to decrease blood glucose levels, whilst glucagon binds to glucagon receptors to increase blood glucose levels. In cases of insulin-induced hypoglycaemia, glucagon injection could help increase blood glucose levels.
Another example is epinephrine (a bronchodilator) and histamine (a bronchoconstrictor). Epinephrine binds to adrenergic receptors to promote bronchodilation whilst histamine binds to histamine receptors which leads to bronchoconstriction. Since they have opposite effects in different pathways, they are considered physiological antagonists, and they are not advised to be taken together.
Quantifying effects of antagonists using pharmacodynamics
Pharmacodynamics
Pharmacodynamics (PD) is the core principle of quantifying the effects of antagonists by measuring the drug’s efficacy and safety. PD emphasises the relationship between the dose and response of a certain drug, which can be illustrated using a dose-response curve.
Efficacy
Efficacy is the maximal effect (Emax) that an agonist can produce. As a receptor antagonist does not activate the receptor after binding, it is said to have zero efficacy.
A competitive antagonist does not affect the Emax of the agonist. This is because the action of the antagonist is reversible and can be overcome by increasing the dose of the agonist; the maximum effect of the agonist can still be achieved by increasing the agonist concentration.
A non-competitive antagonist (or allosteric antagonist) lowers the Emax of an agonist. The Emax of the agonist falls as the antagonist concentration rises: a higher concentration of antagonist results in a lower Emax. The maximal efficacy of the agonist is reduced because the inhibition cannot be reversed by increasing the agonist concentration.
Potency
Potency is the amount of drug needed to give a certain therapeutic effect. It is affected by the drug’s affinity to the receptors and the number of receptors available. For antagonists, half maximal inhibitory concentration (IC50) is used to measure the potency of antagonists. IC50 means the concentration of antagonist needed to give a 50% inhibition. It can be directly compared with EC50, which is commonly used to measure the potency of an agonist. EC50 means the concentration of agonist needed to give a 50% response.
IC50 is significant in determining the optimal dose of antagonist. A high concentration of an antagonist in the body may result in toxicity in the cell and damage the cell membrane. A lower IC50 means the inhibitory effect can be met with a lower concentration of antagonist and, therefore a lower risk of toxicity. For example, the IC50 of antagonists on cancer cell growth is essential for determining the optimal dose which inhibits cancer cells while inducing less harmful systemic effects in the body.
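As an illustration of how IC50 enters a concentration–inhibition relationship, a commonly used empirical form (a sketch only, not specific to any particular drug; [I] denotes the inhibitor concentration) is

fractional inhibition = [I] / ([I] + IC50) ,

so that when [I] = IC50 the inhibition is exactly 50%, and a more potent antagonist (lower IC50) achieves the same degree of inhibition at a lower concentration.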
Therapeutic index
The therapeutic index (TI) is used to quantify the risks and benefits of a certain drug. It describes the relationship between toxic dose and minimum effective dose, thus providing an important insight into the safety of a drug. The Therapeutic Index is calculated using the following equation:
TI = TD50 / ED50, where TD50 is the dose at which toxicity presents in 50% of the population, and ED50 is the dose needed to produce 50% of maximal response. From the equation, a high TI indicates that the drug needs a high dose to induce toxicity in 50% of the population or a low dose to achieve the minimum effective dose, and vice versa.
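As a purely illustrative calculation with hypothetical numbers (not data for any real drug): a drug with TD50 = 500 mg and ED50 = 25 mg has TI = 500 / 25 = 20, whereas a drug with TD50 = 30 mg and ED50 = 20 mg has TI = 1.5, so the first drug has a much wider safety margin than the second.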
In the case of physiological antagonists, for example, insulin has a narrow TI. A narrow TI indicates that either excess or lack of insulin can cause significant risks. On one hand, lack of insulin may result in high blood glucose levels and kidney or cardiovascular damage. On the other hand, excess insulin may result in insulin-induced hypoglycemia as aforementioned. Another example is dimercaprol, a chemical antagonist in treating metal toxicity. Dimercaprol has a narrow TI so it is replaced by its derivative, 2,3-dimercaptosuccinic acid (DMSA).
Upregulation of receptors in functional antagonists
Upregulation of receptors is an increase in the number or sensitivity of receptors. The receptors involved in functional antagonism are regulated in sensitivity, number and location, so changes in receptors are common. Long-term use of an antagonist drug, or continuous exposure to an antagonist, may cause upregulation and hypersensitivity of the receptors, that is, an increase in both their number and their sensitivity. The increase in receptor number is due to increased receptor expression after prolonged inhibition. The upregulation of receptors is clinically important.
One example of upregulation of receptors is the upregulation of β-receptors caused by β-receptor antagonists (also called β-blockers). Prolonged use of β-blockers blocks β-receptors, causing cells (mainly myocardial cells) to increase their expression of β-receptors. After the blockade is removed, more receptors are available for stimulation, resulting in a higher sensitivity of β-receptors, known as hypersensitivity of β-receptors. Abrupt discontinuation of a β-blocker may therefore aggravate coronary artery disease, cause tachycardia, or even lead to sudden cardiac death. To prevent these adverse effects, the dose of the β-blocker must be reduced gradually over 10–14 days.
Antidotes
Antidotes are agents that can neutralise the effects of a poison or toxin. Antidotes counteract the effects of toxins in many ways, such as by blocking the absorption of the toxin, binding and neutralising the poison, opposing the toxin's end-organ function, or blocking the toxin's conversion to more hazardous metabolites. In addition to lowering the amount of free or active poison present, antidote delivery may also lessen the toxin's effects on organs through competitive inhibition, receptor blockage, or direct antagonistic interaction.
The therapeutic index or ratio (TD50/ED50), which is the ratio of the toxic dosage (TD) or fatal dose (LD) to the effective dose (ED), determines the level of safety associated with a substance.
Mechanism of action
Decrease the active toxin level
Agents that "bind" to the toxin can reduce free or active toxin present. It is possible for this binding to be nonspecific or specific.
Activated charcoal is the non-specific binding agent most frequently utilised as it has strong adsorption capacity and could prevent the toxin's enterohepatic recirculation. Chelation agents, immunotherapy, and bioscavenger therapy are examples of specific binders. Urinary alkalization or hemadsorption may improve elimination in some circumstances.
Block the site of action of the toxin
Blocking may occur at either the enzyme or the receptor level. At the enzyme level, there are two possible approaches: competitive inhibition or reactivation of enzyme activity. Ethyl alcohol or fomepizole used in methyl alcohol or ethylene glycol poisoning is a typical example of competitive enzyme inhibition: by competing with methyl alcohol and ethylene glycol for alcohol dehydrogenase (ADH), these drugs reduce the production of harmful metabolites. At the receptor level, the traditional antidotes include naloxone and flumazenil. Flumazenil functions as a competitive antagonist at the benzodiazepine site of the GABA-A receptor complex; this reduces the inward chloride current and reverses the CNS and respiratory depression. Flumazenil is useful in treating benzodiazepine-induced coma and preventing it from recurring.
Decrease the toxic metabolite
Antidotes can be employed either to mop up hazardous metabolites or to change them into less toxic forms once they have formed. For example, N-acetylcysteine replenishes hepatic glutathione stores, and this process leads to the conjugation of the poisonous metabolite N-acetyl-p-benzoquinone imine (NAPQI).
See also
Receptor antagonist
Receptor
Dose–response relationship
Pharmacodynamics
Antidotes
References
Medicinal chemistry
Drugs
Pharmacodynamics | Drug antagonism | Chemistry,Biology | 3,800 |
672,499 | https://en.wikipedia.org/wiki/Graph-structured%20stack | In computer science, a graph-structured stack (GSS) is a directed acyclic graph where each directed path represents a stack.
The graph-structured stack is an essential part of Tomita's algorithm, where it replaces the usual stack of a pushdown automaton. This allows the algorithm to encode the nondeterministic choices in parsing an ambiguous grammar, sometimes with greater efficiency.
In the following diagram, there are four stacks: {7,3,1,0}, {7,4,1,0}, {7,5,2,0}, and {8,6,2,0}.
Another way to simulate nondeterminism would be to duplicate the stack as needed. The duplication would be less efficient since vertices would not be shared. For this example, 16 vertices would be needed instead of 9.
Operations
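A minimal sketch of the supporting declarations that the member functions below rely on is given here for context. The names GSSnode, GSS, levels, findElemAtLevel and findPrevAtLevel come from the functions themselves, but the exact layout (and the use of std::vector) is an assumption, and the two lookup helpers are only declared, not defined, in this sketch.

#include <vector>
#include <cassert>
#include <stdexcept>

// Assumed node type: an element plus back-edges to the previous level.
struct GSSnode
{
    int elem;                                    // symbol or state stored in the node
    int level;                                   // depth of the node in the GSS
    std::vector<GSSnode*> prev;                  // shared edges to nodes one level below
    void add(GSSnode* p) { prev.push_back(p); }  // link this node to a predecessor
};

// Assumed container: one bucket of nodes per level, so nodes can be shared.
class GSS
{
    std::vector<std::vector<GSSnode*>> levels;
    GSSnode* findElemAtLevel(int level, int elem);       // node holding elem at level, or nullptr
    GSSnode* findPrevAtLevel(int level, GSSnode* node);   // a node at level that points back to node, or nullptr
public:
    GSSnode* add(GSSnode* prev, int elem);
    void remove(GSSnode* node);
};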
// Push elem on top of the stack that currently ends at prev. If a node
// holding elem already exists at the next level it is shared; otherwise
// a new node is created.
GSSnode* GSS::add(GSSnode* prev, int elem)
{
    int prevlevel = prev->level;
    assert((int)levels.size() >= prevlevel + 1);
    int level = prevlevel + 1;
    if ((int)levels.size() == level)
    {
        levels.resize(level + 1);   // open a new, empty top level
    }
    GSSnode* node = findElemAtLevel(level, elem);
    if (node == nullptr)            // no node with this element yet: create one
    {
        node = new GSSnode();
        node->elem = elem;
        node->level = level;
        levels[level].push_back(node);
    }
    node->add(prev);                // link the (possibly shared) node to its predecessor
    return node;
}
// Pop a node from the structure. Only a node that no higher-level node
// points back to (i.e. a node on top of some stack) may be removed.
void GSS::remove(GSSnode* node)
{
    if ((int)levels.size() > node->level + 1)
        if (findPrevAtLevel(node->level + 1, node))
            throw std::runtime_error("Can remove only from top.");
    for (size_t i = 0; i < levels[node->level].size(); i++)
        if (levels[node->level][i] == node)
        {
            levels[node->level].erase(levels[node->level].begin() + i);
            break;
        }
    delete node;
}
References
Masaru Tomita. Graph-Structured Stack and Natural Language Parsing. Annual Meeting of the Association for Computational Linguistics, 1988.
Elizabeth Scott, Adrian Johnstone. GLL Parsing. gll.pdf
Graph data structures
Application-specific graphs | Graph-structured stack | Technology | 534 |
1,632,111 | https://en.wikipedia.org/wiki/Hydrobromide | In chemistry, a hydrobromide is an acid salt resulting, or regarded as resulting, from the reaction of hydrobromic acid with an organic base (e.g. an amine). The compounds are similar to hydrochlorides.
Some drugs are formulated as hydrobromides, e.g. eletriptan hydrobromide.
See also
Bromide, inorganic salts of hydrobromic acid
Bromine, the element Br
Free base (chemistry)
Acid salts
Salts
Bromides | Hydrobromide | Chemistry | 103 |
27,330,293 | https://en.wikipedia.org/wiki/Global%20Internet%20Freedom%20Consortium | The Global Internet Freedom Consortium is a consortium of organizations that develop and deploy anti-censorship technologies for use by Internet users in countries whose governments restrict Web-based information access. The organization was reportedly begun in 2001 by Chinese-born scientists living in the United States reacting against Chinese government oppression of the Falun Gong.
Products
The main products are Freegate and Ultrasurf.
Funding
The organization states that the majority of its funding comes from its members. In May 2010, the group was offered a $1.5 million (USD) grant from the United States Department of State. This move received criticism from representatives of the Chinese government.
See also
Human rights in the People's Republic of China
Internet censorship
Internet censorship in the People's Republic of China
Political repression of cyber-dissidents
References
External links
Global Internet Freedom Consortium
Information technology organizations
Organizations established in 2001
Computer security organizations
Internet censorship | Global Internet Freedom Consortium | Technology | 179 |
2,695,433 | https://en.wikipedia.org/wiki/Wess%E2%80%93Zumino%20model | In theoretical physics, the Wess–Zumino model was the first known example of an interacting four-dimensional quantum field theory with linearly realised supersymmetry. In 1974, Julius Wess and Bruno Zumino studied, in modern terminology, the dynamics of a single chiral superfield (composed of a complex scalar and a spinor fermion) whose cubic superpotential leads to a renormalizable theory. It is a special case of 4D N = 1 global supersymmetry.
The treatment in this article largely follows that of Figueroa-O'Farrill's lectures on supersymmetry, and to some extent of Tong.
The model is an important model in supersymmetric quantum field theory. It is arguably the simplest supersymmetric field theory in four dimensions, and is ungauged.
The Wess–Zumino action
Preliminary treatment
Spacetime and matter content
In a preliminary treatment, the theory is defined on flat spacetime (Minkowski space). For this article, the metric has mostly plus signature. The matter content is a real scalar field , a real pseudoscalar field , and a real (Majorana) spinor field .
This is a preliminary treatment in the sense that the theory is written in terms of familiar scalar and spinor fields which are functions of spacetime, without developing a theory of superspace or superfields, which appear later in the article.
Free, massless theory
The Lagrangian of the free, massless Wess–Zumino model is
where
The corresponding action is
.
Massive theory
Supersymmetry is preserved when adding a mass term of the form
Interacting theory
Supersymmetry is preserved when adding an interaction term with coupling constant :
The full Wess–Zumino action is then given by putting these Lagrangians together:
Alternative expression
There is an alternative way of organizing the fields. The real fields and are combined into a single complex scalar field while the Majorana spinor is written in terms of two Weyl spinors: . Defining the superpotential
the Wess–Zumino action can also be written (possibly after relabelling some constant factors)
Upon substituting in , one finds that this is a theory with a massive complex scalar and a massive Majorana spinor of the same mass. The interactions are a cubic and quartic interaction, and a Yukawa interaction between and , which are all familiar interactions from courses in non-supersymmetric quantum field theory.
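In one common convention (shown here only as an illustrative sketch, since normalisation factors differ between references and the symbols below are the standard ones rather than those stripped from the text above), the superpotential is

W(\phi) = \tfrac{1}{2} m \phi^2 + \tfrac{1}{3} g \phi^3 ,

so that |W'(\phi)|^2 = |m\phi + g\phi^2|^2 supplies the scalar mass together with the cubic and quartic scalar interactions, while W''(\phi) = m + 2g\phi supplies the fermion (Majorana) mass term and the Yukawa coupling mentioned above.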
Using superspace and superfields
Superspace and superfield content
Superspace consists of the direct sum of Minkowski space with 'spin space', a four dimensional space with coordinates , where are indices taking values in More formally, superspace is constructed as the space of right cosets of the Lorentz group in the super-Poincaré group.
The fact that there are only 4 'spin coordinates' means that this is a theory with what is known as N = 1 supersymmetry, corresponding to an algebra with a single supercharge. The resulting 4|4-dimensional superspace is sometimes written R^{4|4}, and called super Minkowski space. The 'spin coordinates' are so called not due to any relation to angular momentum, but because they are treated as anti-commuting numbers, a property typical of spinors in quantum field theory due to the spin–statistics theorem.
A superfield is then a function on superspace, .
Defining the supercovariant derivative
a chiral superfield satisfies The field content is then simply a single chiral superfield.
However, the chiral superfield contains fields, in the sense that it admits the expansion
with Then can be identified as a complex scalar, is a Weyl spinor and is an auxiliary complex scalar.
These fields admit a further relabelling, with and This allows recovery of the preliminary forms, after eliminating the non-dynamical using its equation of motion.
Free, massless action
When written in terms of the chiral superfield , the action (for the free, massless Wess–Zumino model) takes on the simple form
where are integrals over spinor dimensions of superspace.
Superpotential
Masses and interactions are added through a superpotential. The Wess–Zumino superpotential is
Since is complex, to ensure the action is real its conjugate must also be added.
The full Wess–Zumino action is written
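As an illustrative sketch in one frequently used notation (conventions for the superspace measure and normalisation vary between references), the combined action takes the schematic form

S = \int d^4x\, d^2\theta\, d^2\bar\theta\, \bar\Phi \Phi + \int d^4x\, d^2\theta\, W(\Phi) + \int d^4x\, d^2\bar\theta\, \bar{W}(\bar\Phi) ,

where the first term reproduces the kinetic terms of the free theory and the last two terms add the superpotential and its conjugate.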
Supersymmetry of the action
Preliminary treatment
The action is invariant under the supersymmetry transformations, given in infinitesimal form by
where is a Majorana spinor-valued transformation parameter and is the chirality operator.
The alternative form is invariant under the transformation
.
Without developing a theory of superspace transformations, these symmetries appear ad-hoc.
Superfield treatment
If the action can be written as
where is a real superfield, that is, , then the action is invariant under supersymmetry.
Then the reality of means it is invariant under supersymmetry.
Extra classical symmetries
Superconformal symmetry
The massless Wess–Zumino model admits a larger set of symmetries, described at the algebra level by the superconformal algebra. As well as the Poincaré symmetry generators and the supersymmetry translation generators, this contains the conformal algebra as well as a conformal supersymmetry generator .
The conformal symmetry is broken at the quantum level by trace and conformal anomalies, which break invariance under the conformal generators for dilatations and for special conformal transformations respectively.
R-symmetry
The R-symmetry of supersymmetry holds when the superpotential is a monomial. This means either , so that the superfield is massive but free (non-interacting), or so the theory is massless but (possibly) interacting.
This is broken at the quantum level by anomalies.
Action for multiple chiral superfields
The action generalizes straightforwardly to multiple chiral superfields with . The most general renormalizable theory is
where the superpotential is
,
where implicit summation is used.
By a change of coordinates, under which transforms under , one can set without loss of generality. With this choice, the expression is known as the canonical Kähler potential. There is residual freedom to make a unitary transformation in order to diagonalise the mass matrix .
When , if the multiplet is massive then the Weyl fermion has a Majorana mass. But for the two Weyl fermions can have a Dirac mass, when the superpotential is taken to be
This theory has a symmetry, where rotate with opposite charges
Super QCD
For general , a superpotential of the form has a symmetry when rotate with opposite charges, that is under
.
This symmetry can be gauged and coupled to supersymmetric Yang–Mills to form a supersymmetric analogue to quantum chromodynamics, known as super QCD.
Supersymmetric sigma models
If renormalizability is not insisted upon, then there are two possible generalizations. The first of these is to consider more general superpotentials. The second is to consider in the kinetic term
to be a real function of and .
The action is invariant under transformations : these are known as Kähler transformations.
Considering this theory gives an intersection of Kähler geometry with supersymmetric field theory.
By expanding the Kähler potential in terms of derivatives of and the constituent superfields of , and then eliminating the auxiliary fields using the equations of motion, the following expression is obtained:
where
is the Kähler metric. It is invariant under Kähler transformations. If the kinetic term is positive definite, then is invertible, allowing the inverse metric to be defined.
The Christoffel symbols (adapted for a Kähler metric) are and
The covariant derivatives and are defined
and
The Riemann curvature tensor (adapted for a Kähler metric) is defined .
Adding a superpotential
A superpotential can be added to form the more general action
where the Hessians of are defined
.
See also
N = 4 supersymmetric Yang–Mills theory
Supermultiplet
References
Supersymmetric quantum field theory | Wess–Zumino model | Physics | 1,693 |
3,529,850 | https://en.wikipedia.org/wiki/Sun%20Certified%20Network%20Administrator | SCNA (an abbreviation of Sun Certified Network Administrator) is a certification for system administrators and covers LANs and Solaris.
Requirements
Candidates must pass a certification exam. The examination includes multiple-choice, scenario-based questions, drag-and-drop questions, and tests the candidate on Solaris network administration topics including how to configure and manage the network interface layer, the network (internet and transport layers), network applications, and the Solaris IP Filter.
Candidates must have three or more years of experience administering Sun systems in a networked environment.
Certification also requires already being a Sun Certified System Administrator for Solaris (any edition).
References
Sun Microsystems
Information technology qualifications | Sun Certified Network Administrator | Technology | 140 |
59,469,155 | https://en.wikipedia.org/wiki/Collidinic%20acid | Collidinic acid (pyridine-2,4,6-tricarboxylic acid) is an organic compound that belongs to the heterocycles (more precisely the heteroaromatics). It belongs to the group of pyridinetricarboxylic acids and consists of a pyridine ring which carries three carboxy groups in the 2-, 4- and 6-positions. The name is derived from 2,4,6-collidine (2,4,6-trimethylpyridine).
Preparation
The compound can be obtained from the oxidation of 2,4,6-collidine by potassium permanganate.
Uses
Collidinic acid can be used in the spectrophotometric determination of iron.
References
Pyridines
Tricarboxylic acids
Aromatic acids | Collidinic acid | Chemistry | 181 |
6,168,349 | https://en.wikipedia.org/wiki/Electric%20energy%20consumption | Electric energy consumption is energy consumption in the form of electrical energy. About a fifth of global energy is consumed as electricity: for residential, industrial, commercial, transportation and other purposes.
The global electricity consumption in 2022 was 24,398 terawatt-hour (TWh), almost exactly three times the amount of consumption in 1981 (8,132 TWh). China, the United States, and India accounted for more than half of the global share of electricity consumption. Japan and Russia followed with nearly twice the consumption of the remaining industrialized countries.
Overview
Electric energy is most often measured either in joules (J), or in watt hours (W·h).
1 W·s = 1 J
1 W·h = 3,600 W·s = 3,600 J
1 kWh = 3,600 kWs = 1,000 Wh = 3.6 million W·s = 3.6 million J
Electric and electronic devices consume electric energy to generate desired output (light, heat, motion, etc.). During operation, some part of the energy is lost depending on the electrical efficiency.
Electricity has been generated in power stations since 1882. The invention of the steam turbine in 1884 to drive the electric generator led to an increase in worldwide electricity consumption.
In 2022, the total worldwide electricity production was nearly 29,000 TWh. Total primary energy is converted into numerous forms, including, but not limited to, electricity, heat and motion. Some primary energy is lost during the conversion to electricity, as seen in the United States, where a little more than 60% was lost in 2022.
Electricity accounted for more than 20% of worldwide final energy consumption in 2022, with oil being less than 40%, coal being less than 9%, natural gas being less than 15%, biofuels and waste less than 10%, and other sources (such as heat, solar electricity, wind electricity and geothermal) being more than 5%. The total final electricity consumption in 2022 was split unevenly between the following sectors: industry (42.2%), residential (26.8%), commercial and public services (21.1%), transport (1.8%), and other (8.1%; i.e., agriculture and fishing). Since 1981, final electricity consumption has continued to decrease in the industrial sector and increase in the residential, commercial and public services sectors.
A sensitivity analysis on an adaptive neuro-fuzzy network model for electric demand estimation shows that employment is the most critical factor influencing electrical consumption. The study used six parameters as input data, employment, GDP, dwelling, population, heating degree day and cooling degree day, with electricity demand as output variable.
World electricity consumption
The table lists 45 electricity-consuming countries, which used about 22,000 TWh. These countries comprise about 90% of the final consumption of 190+ countries. The final consumption to generate this electricity is provided for every country. The data is from 2022.
In 2022, OECD's final electricity consumption was over 10,000 TWh. In that year, the industrial sector consumed about 42.2% of the electricity, with the residential sector consuming nearly 26.8%, the commercial and public services sectors consuming about 21.1%, the transport sector consuming nearly 1.8%, and the other sectors (such as agriculture and fishing) consuming nearly 8.1%. In recent decades, the consumption in the residential and commercial and public services sectors has grown, while the industry consumption has declined. More recently, the transport sector has witnessed an increase in consumption with the growth in the electric vehicle market.
Consumption per capita
The final consumption divided by the number of inhabitants provides a country's consumption per capita. In Western Europe, this is between 4 and 8 MWh/year (1 MWh = 1,000 kWh). In Scandinavia, the United States, Canada, Taiwan, South Korea, Australia, Japan and the United Kingdom, the per capita consumption is higher; however, in developing countries, it is much lower. The world's average was about 3 MWh/year in 2022. Very low consumption levels, such as those in the Philippines (not included in the table), indicate that many inhabitants are not connected to the electricity grid, and that is the reason why some of the world's most populous countries, including Nigeria and Bangladesh, do not appear in the table.
Electricity generation and GDP
The table lists 30 countries, which represent about 76% of the world population, 84% of the world GDP, and 85% of the world electricity generation. Productivity per electricity generation (concept similar to energy intensity) can be measured by dividing GDP over the electricity generated. The data is from 2019.
Electricity consumption by sector
The table below lists the 15 countries with the highest final electricity consumption, which comprised more than 70% of the global consumption in 2022.
Electricity outlook
Looking forward, increasing energy efficiency will result in less electricity being needed for a given demand in power, but demand will increase strongly on account of:
Economic growth in developing countries, and
Electrification of transport and heating. Combustion engines are replaced by electric drives, and for heating less gas and oil but more electricity is used, where possible with heat pumps.
The International Energy Agency expects revisions of subsidies for fossil fuels, which amounted to $550 billion in 2013, more than four times renewable energy subsidies. In this scenario, almost half of the increase in electricity consumption by 2040 is covered by more than 80% growth of renewable energy. Many new nuclear plants will be constructed, mainly to replace old ones. The nuclear part of electricity generation will increase from 11 to 12%. The renewable part goes up much more, from 21 to 33%. The IEA warns that in order to restrict global warming to 2 °C, carbon dioxide emissions must not exceed 1,000 gigatonnes (Gt) from 2014 onwards. This limit is reached in 2040, and emissions never drop to zero.
The World Energy Council sees world electricity consumption increasing to more than 40,000 TWh/a in 2040. The fossil part of generation depends on energy policy. It can stay around 70% in the so-called "Jazz" scenario where countries rather independently "improvise" but it can also decrease to around 40% in the "Symphony" scenario if countries work "orchestrated" for more climate friendly policy. Carbon dioxide emissions, 32 Gt/a in 2012, will increase to 46 Gt/a in Jazz but decrease to 26 Gt/a in Symphony. Accordingly, until 2040 the renewable part of generation will stay at about 20% in Jazz but increase to about 45% in Symphony.
An EU survey conducted on climate and energy consumption in 2022 found that 63% of people in the European Union want energy costs to be dependent on use, with the greatest consumers paying more. This is compared to 83% in China, 63% in the UK and 57% in the US. 24% of Americans surveyed believing that people and businesses should do more to cut their own usage (compared to 20% in the UK, 19% in the EU, and 17% in China).
Nearly half of those polled in the European Union (47%) and the United Kingdom (45%) want their government to focus on the development of renewable energies. This is compared to 37% in both the United States and China when asked to list their priorities on energy.
See also
Electricity generation
Electricity retailing
List of countries by energy intensity
List of countries by carbon dioxide emissions
List of countries by electricity consumption
List of countries by electricity production
List of countries by energy consumption per capita
List of countries by greenhouse gas emissions
List of countries by renewable electricity production
List of countries by energy consumption and production
World energy supply and consumption
References
External links
World Electricity production 2012
World Map and Chart of Energy Consumption by country by Lebanese-economy-forum, World Bank data
Electricity Information 2019 - IEA
Electric power
Consumption
Energy consumption
Energy development
Energy policy | Electric energy consumption | Physics,Engineering,Environmental_science | 1,630 |
3,994,748 | https://en.wikipedia.org/wiki/Carrier-to-noise%20ratio | In telecommunications, the carrier-to-noise ratio, often written CNR or C/N, is the signal-to-noise ratio (SNR) of a modulated signal. The term is used to distinguish the CNR of the radio frequency passband signal from the SNR of an analog base band message signal after demodulation. For example, with FM radio, the strength of the 100 MHz carrier with modulations would be considered for CNR, whereas the audio frequency analogue message signal would be for SNR; in each case, compared to the apparent noise. If this distinction is not necessary, the term SNR is often used instead of CNR, with the same definition.
Digitally modulated signals (e.g. QAM or PSK) are basically made of two CW carriers (the I and Q components, which are out-of-phase carriers). In fact, the information (bits or symbols) is carried by given combinations of phase and/or amplitude of the I and Q components. It is for this reason that, in the context of digital modulations, digitally modulated signals are usually referred to as carriers. Therefore, the term carrier-to-noise-ratio (CNR), instead of signal-to-noise-ratio (SNR), is preferred to express the signal quality when the signal has been digitally modulated.
High C/N ratios provide good quality of reception, for example low bit error rate (BER) of a digital message signal, or high SNR of an analog message signal.
Definition
The carrier-to-noise ratio is defined as the ratio of the received modulated carrier signal power C to the received noise power N after the receiver filters:

CNR = C / N .

When both carrier and noise are measured across the same impedance, this ratio can equivalently be given as:

CNR = (V_C / V_N)^2 ,

where V_C and V_N are the root mean square (RMS) voltage levels of the carrier signal and noise respectively.

C/N ratios are often specified in decibels (dB):

CNR_dB = 10 log_10(C / N),

or in terms of voltage:

CNR_dB = 20 log_10(V_C / V_N).
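As a purely illustrative numeric example (hypothetical values, not tied to any particular system): a received carrier power of 1 mW (0 dBm) accompanied by 1 μW (−30 dBm) of noise power in the receiver bandwidth gives C/N = 1000, i.e. a CNR of 30 dB.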
Measurements and estimation
The C/N ratio is measured in a manner similar to the way the signal-to-noise ratio (S/N) is measured, and both specifications give an indication of the quality of a communications channel.
In the famous Shannon–Hartley theorem, the C/N ratio is equivalent to the S/N ratio. The C/N ratio resembles the carrier-to-interference ratio (C/I, CIR), and the carrier-to-noise-and-interference ratio, C/(N+I) or CNIR.
C/N estimators are needed to optimize the receiver performance. Typically, it is easier to measure the total power than the ratio of signal power to noise power (or noise power spectral density), and that is why CNR estimation techniques are timely and important.
Carrier-to-noise density ratio
In satellite communications, carrier-to-noise-density ratio (C/N0) is the ratio of the carrier power C to the noise power density N0, expressed in dB-Hz.
When considering only the receiver as a source of noise, it is called carrier-to-receiver-noise-density ratio.
It determines whether a receiver can lock on to the carrier and if the information encoded in the signal can be retrieved, given the amount of noise present in the received signal. The carrier-to-receiver noise density ratio is usually expressed in dB-Hz.
The noise power density, N0=kT, is the receiver noise power per hertz, which can be written in terms of the Boltzmann constant k (in joules per kelvin) and the noise temperature T (in kelvins).
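A commonly quoted relation, sketched here under the assumption that the noise density is flat across the receiver noise bandwidth B: since N = N_0 B, the two measures are related in decibel terms by C/N (dB) = C/N_0 (dB-Hz) − 10 log_10(B / 1 Hz). For example, C/N_0 = 60 dB-Hz received in a 1 kHz bandwidth corresponds to C/N = 60 − 30 = 30 dB.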
See also
C/I: carrier-to-interference ratio
Eb/N0 (energy per bit relative to noise power spectral density)
Es/N0 (energy per symbol relative to noise power spectral density)
Signal-to-interference ratio (SIR or S/I)
Signal-to-noise ratio (SNR or S/N)
SINAD (ratio of signal-plus-noise-plus-distortion to noise-plus-distortion)
References
Further reading
Measuring GNSS Signal Strength
Noise (electronics)
Engineering ratios
Radio frequency propagation
Radio resource management
Interference | Carrier-to-noise ratio | Physics,Mathematics,Engineering | 874 |
9,331,271 | https://en.wikipedia.org/wiki/Molecular%20Systems%20Biology | Molecular Systems Biology is a peer-reviewed open-access scientific journal covering systems biology at the molecular level (examples include: genomics, proteomics, metabolomics, microbial systems, the integration of cell signaling and regulatory networks), synthetic biology, and systems medicine. It was established in 2005 and published by the Nature Publishing Group on behalf of the European Molecular Biology Organization. As of December 2013, it is published by EMBO Press.
References
External links
Molecular and cellular biology journals
Systems biology
Academic journals established in 2005
English-language journals
Monthly journals
European Molecular Biology Organization academic journals | Molecular Systems Biology | Chemistry,Biology | 119 |
41,549 | https://en.wikipedia.org/wiki/Phase%20noise | In signal processing, phase noise is the frequency-domain representation of random fluctuations in the phase of a waveform, corresponding to time-domain deviations from perfect periodicity (jitter). Generally speaking, radio-frequency engineers speak of the phase noise of an oscillator, whereas digital-system engineers work with the jitter of a clock.
Definitions
An ideal oscillator would generate a pure sine wave. In the frequency domain, this would be represented as a single pair of Dirac delta functions (positive and negative conjugates) at the oscillator's frequency; i.e., all the signal's power is at a single frequency. All real oscillators have phase modulated noise components. The phase noise components spread the power of a signal to adjacent frequencies, resulting in noise sidebands.
Consider the following noise-free signal:
Phase noise is added to this signal by adding a stochastic process represented by to the signal as follows:
Different phase noise processes possess different power spectral densities (PSD). For example, a white noise PSD is flat (follows an f^0 trend), a pink noise PSD follows a 1/f trend, and a brown noise PSD follows a 1/f^2 trend.
The single-sided (f > 0) phase noise PSD is given by the Fourier transform of the autocorrelation of the phase noise.
The noise can also be represented as the single-sided (f > 0) frequency noise PSD, or the fractional frequency stability PSD, which defines the frequency fluctuations in terms of the deviation from the carrier frequency.
The phase noise can also be given as the spectral purity: the single-sideband power in a 1 Hz bandwidth at a frequency offset f from the carrier frequency, referenced to the carrier power.
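A relation often quoted in the literature, stated here as a common convention rather than something defined in this article: for small phase deviations the single-sideband measure \mathcal{L}(f) and the phase noise PSD S_\varphi(f) are related by \mathcal{L}(f) \approx S_\varphi(f)/2, so a value of \mathcal{L}(f) expressed in dBc/Hz lies about 3 dB below S_\varphi(f) expressed in dB(rad^2)/Hz.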
Jitter conversions
Phase noise is sometimes also measured and expressed as a power obtained by integrating over a certain range of offset frequencies. For example, the phase noise may be −40 dBc integrated over the range of 1 kHz to 100 kHz. This integrated phase noise (expressed in degrees) can be converted to jitter (expressed in seconds) using the following formula:
In the absence of 1/f noise in a region where the phase noise displays a −20 dBc/decade slope (Leeson's equation), the RMS cycle jitter can be related to the phase noise by:
Likewise:
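The conversions referred to above can be sketched with common textbook relations (the symbols A, \varphi_{rms} and f_{osc} are introduced here only for illustration): an integrated single-sideband phase noise power of A dBc corresponds approximately to an RMS phase deviation \varphi_{rms} \approx \sqrt{2 \times 10^{A/10}} radians, and a phase deviation expressed in degrees converts to RMS jitter in seconds as t_{jitter} = \varphi_{rms}(\text{degrees}) / (360 \times f_{osc}), where f_{osc} is the oscillator (carrier) frequency.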
Measurement
Phase noise can be measured using a spectrum analyzer if the phase noise of the device under test (DUT) is large with respect to the spectrum analyzer's local oscillator. Care should be taken that observed values are due to the measured signal and not the shape factor of the spectrum analyzer's filters. Spectrum analyzer based measurement can show the phase-noise power over many decades of frequency; e.g., 1 Hz to 10 MHz. The slope with offset frequency in various offset frequency regions can provide clues as to the source of the noise; e.g., low frequency flicker noise decreasing at 30 dB per decade (= 9 dB per octave).
Phase noise measurement systems are alternatives to spectrum analyzers. These systems may use internal and external references and allow measurement of both residual (additive) and absolute noise. Additionally, these systems can make low-noise, close-to-the-carrier, measurements.
Linewidths
The sinusoidal output of an ideal oscillator is a Dirac delta function in the power spectral density centered at the frequency of the sinusoid. Such perfect spectral purity is not achievable in a practical oscillator. Spreading of the spectrum line caused by phase noise is characterized by the fundamental linewidth and the integral linewidth.
The fundamental linewidth, also known as the white-noise-limited linewidth or the intrinsic linewidth, is the linewidth of an oscillator's PSD in the presence of only white noise sources (noise with a flat, f^0 PSD, i.e. equal power across all frequencies). The fundamental linewidth takes a Lorentzian spectral line shape. White noise determines the behaviour of the Allan deviation plot at small averaging times.
The integral linewidth, also known as the effective linewidth or the total linewidth, is the linewidth of an oscillator's PSD in the presence of both white noise sources (flat PSD) and pink noise sources (noise with a PSD that follows a 1/f trend). Pink noise is sometimes called flicker noise, or simply 1/f noise. The integral linewidth takes a Voigt lineshape, a convolution of the white-noise-induced Lorentzian lineshape and the pink-noise-induced Gaussian lineshape. Pink noise produces a flat region on the Allan deviation plot at moderate averaging times; this flat line is also known as the flicker floor.
Additionally, the oscillator might experience frequency drift over long periods of time, slowly moving the center frequency of the Voigt lineshape. This drift acts as a brown noise source (noise with a PSD that follows a 1/f^2 trend) and dominates the Allan deviation plot at large averaging times.
Limiting system performance
A laser is a common oscillator that is characterized by its noise, and thus by its laser linewidth. The laser noise imposes fundamental limitations on the systems in which the laser is used, such as loss of sensitivity in radar and communications systems, lack of definition in imaging systems, and a higher bit error rate in digital systems.
Lasers with a near-infrared center wavelength are used in many atomic, molecular, and optical physics experiments to provide photons that interact with atoms. The requirements for the spectral purity at specific frequency offsets of the lasers used in qubit operation (such as clock transition lasers and state preparation lasers) are highly stringent because the coherence time of the qubit is directly related to the linewidth of the lasers.
See also
Allan variance
Flicker noise
Leeson's equation
Maximum time interval error
Noise spectral density
Spectral density
Spectral phase
Opto-electronic oscillator
References
Further reading
Ulrich L. Rohde, A New and Efficient Method of Designing Low Noise Microwave Oscillators, https://depositonce.tu-berlin.de/bitstream/11303/1306/1/Dokument_16.pdf
Ajay Poddar, Ulrich Rohde, Anisha Apte, “ How Low Can They Go, Oscillator Phase noise model, Theoretical, Experimental Validation, and Phase Noise Measurements”, IEEE Microwave Magazine, Vol. 14, No. 6, pp. 50–72, September/October 2013.
Ulrich Rohde, Ajay Poddar, Anisha Apte, “Getting Its Measure”, IEEE Microwave Magazine, Vol. 14, No. 6, pp. 73–86, September/October 2013
U. L. Rohde, A. K. Poddar, Anisha Apte, “Phase noise measurement and its limitations”, Microwave Journal, pp. 22–46, May 2013
A. K. Poddar, U.L. Rohde, “Technique to Minimize Phase Noise of Crystal Oscillators”, Microwave Journal, pp. 132–150, May 2013.
A. K. Poddar, U. L. Rohde, and E. Rubiola, “Phase noise measurement: Challenges and uncertainty”, 2014 IEEE IMaRC, Bangalore, Dec 2014.
Oscillators
Frequency-domain analysis
Telecommunication theory
Noise (electronics) | Phase noise | Physics | 1,582 |
77,656,313 | https://en.wikipedia.org/wiki/National%20Satellite%20Test%20Facility | The National Satellite Test Facility (NSTF) is a testing site for artificial satellites, located in Harwell, Oxfordshire, in the United Kingdom. It is the first dedicated satellite testing facility in the UK. Construction began in 2018 and was completed in 2024. The facility opened in May 2024. It was built through a collaboration between RAL Space (part of the Rutherford Appleton Laboratory) and the National Physical Laboratory.
Its first customers were Airbus Defence and Space UK.
References
Buildings and structures in Oxfordshire
Space programme of the United Kingdom
Science and technology in the United Kingdom | National Satellite Test Facility | Astronomy | 113 |
14,502,375 | https://en.wikipedia.org/wiki/Gaston%20Bonnier | Gaston Eugène Marie Bonnier (; 9 April 1853 – 2 January 1922) was a French botanist and plant ecologist.
Biography
Bonnier first studied at École Normale Supérieure in Paris from 1873 to 1876. Together with Charles Flahault, he studied at Uppsala University in 1878. They published two articles about their impressions:
Observations sur la flore cryptogamique de la Scandinavie
Sur la distribution des végétaux dans la region moyenne de la presqu’ile Scandinave (both with Charles Flahault 1879)
He became assistant professor, later full professor, of botany at Sorbonne in 1887 and, in addition, he founded a Plant Biological Laboratory in Fontainebleau in 1889. The same year, he co-founded the scientific journal Revue Générale de Botanique, which he edited until 1922.
He was an early exponent of experimental plant ecology. He transplanted alpine plants between the Alps and Pyrenees and the research garden in Fontainebleau. The results were published in:
Cultures expérimentales dans les Alpes et les Pyrénées. Revue Générale de Botanique 2 (1890): 513–546.
Les plantes arctiques comparées aux mêmes espèces des Alpes et des Pyrénées (1894).
Nouvelles observations sur les cultures expérimentales à diverses altitudes et cultures par semis. Revue Générale de Botanique 22 (1920): 305–326.
He authored several floras of France, such as
Nouvelle flore du Nord de la France et de la Belgique pour la détermination facile des plantes sans mots techniques. Vol. I. Tableaux synoptiques des plantes vasculaires de la flore de la France. P. Dupont, Paris, 1894 (With Georges de Layens (1834–1897)).
Vol. II. Nouvelle Flore des mousses et des hépatiques with Charles Isidore Douin (1858–1944). P. Dupont, Paris, 1895.
Vol. III. Nouvelle Flore des champignons with Julien Noël Costantin (1857–1936) and Léon Jean Marie Dufour (1862–1942). P. Dupont, Paris, 1895.
Flore complète illustrée de France, Suisse et Belgique. (1911).
Notable students of Gaston Bonnier include Henri Devaux, Maurice Bouly de Lesdain, Paul Becquerel, Louis Emberger, Paul Jaccard, and Albert Maige among others.
References
1853 births
1922 deaths
Members of the French Academy of Sciences
Corresponding members of the Saint Petersburg Academy of Sciences
20th-century French botanists
Academic staff of the University of Paris
French ecologists
19th-century French botanists
Plant ecologists
Lamarckism | Gaston Bonnier | Biology | 569 |
4,586,351 | https://en.wikipedia.org/wiki/Green%27s%20matrix | In mathematics, and in particular ordinary differential equations, a Green's matrix helps to determine a particular solution to a first-order inhomogeneous linear system of ODEs. The concept is named after George Green.
For instance, consider where is a vector and is an matrix function of , which is continuous for , where is some interval.
Now let be linearly independent solutions to the homogeneous equation and arrange them in columns to form a fundamental matrix:
Now is an matrix solution of .
This fundamental matrix will provide the homogeneous solution, and if added to a particular solution will give the general solution to the inhomogeneous equation.
Let be the general solution. Now,
This implies or where is an arbitrary constant vector.
Now the general solution is
The first term is the homogeneous solution and the second term is the particular solution.
Now define the Green's matrix
The particular solution can now be written
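Since the displayed formulas above depend on the notation used, a compact summary in one common notation is sketched here (the symbols x, A, g, X, c, G and t_0 are introduced only for this summary). For the system

x'(t) = A(t)\, x(t) + g(t)

with fundamental matrix X(t), the general solution is

x(t) = X(t)\, c + X(t) \int_{t_0}^{t} X^{-1}(s)\, g(s)\, ds ,

where the first term is the homogeneous solution and the second is the particular solution. Defining the Green's matrix G(t, s) = X(t) X^{-1}(s), the particular solution can be written

x_p(t) = \int_{t_0}^{t} G(t, s)\, g(s)\, ds .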
External links
An example of solving an inhomogeneous system of linear ODEs and finding a Green's matrix from www.exampleproblems.com.
Ordinary differential equations
Matrices | Green's matrix | Mathematics | 219 |
1,616,977 | https://en.wikipedia.org/wiki/Coefficient%20of%20haze | The coefficient of haze (also known as smoke shade) is a measurement of visibility interference in the atmosphere.
One way to measure this is to draw about 1000 cubic feet of air sample through an air filter and obtain the radiation intensity through the filter. The coefficient is then calculated based on the absorbance formula
where I is the radiation (400 nm light) intensity transmitted through the sampled filter, and I_0 is the radiation intensity transmitted through a clean (control) filter.
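A sketch of the formula referred to above, in one common convention (the exact scaling is an assumption here, as definitions vary slightly between agencies): the optical density of the sampled spot is OD = log_10(I_0 / I), and one COH unit is defined as an optical density of 0.01, so that

COH = 100 \times \log_{10}(I_0 / I) ,

usually reported per 1000 linear feet of air drawn through the filter.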
References
Further reading
Visibility | Coefficient of haze | Physics,Mathematics | 96 |
49,536,531 | https://en.wikipedia.org/wiki/Logging%20trail | A logging trail is a type of unpaved trail used to transport logged wood by means of machinery or horse power. In contrast to a logging road the logging trail is also free of gravel or other material and a pure earth trail.
Characteristics
For ease of transport when using machinery such as forwarders on sloped terrain, the logging trail often follows the gradient of the terrain down to a logging road or street for further transport of the wood. The trail is typically 3 m to 4 m wide, with adjacent trails spaced 20 m to 60 m apart.
In terrain steeper than 30% logging trails are usually constructed parallel to the contour of the terrain.
A logging trail may eventually convert into a hiking trail.
History
Logging trails became necessary with the advent of machine-driven logging. Before that period, loggers used horse power instead, with less need for structured logging trails.
See also
References
Types of thoroughfares
Trails
Landscape architecture
Log transport | Logging trail | Engineering | 187 |
993,533 | https://en.wikipedia.org/wiki/Inclusionary%20zoning | Inclusionary zoning (IZ) is municipal and county planning ordinances that require or provide incentives when a given percentage of units in a new housing development be affordable by people with low to moderate incomes. Such housing is known as inclusionary housing. The term inclusionary zoning indicates that these ordinances seek to counter exclusionary zoning practices, which exclude low-cost housing from a municipality through the zoning code. (For example, single-family zoning makes it illegal to build multi-family apartment buildings.) Non-profit affordable housing developers build 100% of their units as affordable, but need significant taxpayer subsidies for this model to work. Inclusionary zoning allows municipalities to have new affordable housing constructed without taxpayer subsidies. In order to encourage for-profit developers to build projects that include affordable units, cities often allow developers to build more total units (a "density bonus") than their zoning laws currently allow so that there will be enough profit generating market-rate units to offset the losses from the below market-rate units and still allow the project to be financially feasible. Inclusionary zoning can be mandatory or voluntary, though the great majority of units have been built as a result of mandatory programmes. There are variations among the set-aside requirements (percentage of units set-aside for low-income residents), affordability levels (what income level is considered "low-income"), and length of time the unit is deed-restricted as affordable housing.
In practice, these policies involve placing deed restrictions on 10–30% of new houses or apartments in order to make the cost of the housing affordable to lower-income households. The mix of "affordable housing" and "market-rate" housing in the same neighborhood is seen as beneficial by city planners and sociologists. Another goal of inclusionary zoning is to build mixed-income communities, rather than having poor households concentrated in specific city neighborhoods. Economists state that IZ functions as a price control on a percentage of units and has similar negative effects as other price controls (rent control) being that it discourages the supply of new housing. It can also be understood similar to impact fees as an "inclusionary tax" on market-rate units which raises the prices of new non-price-controlled units in that development and thereby diminishes the financial incentive to create new housing.
Most inclusionary zoning is enacted at the municipal or county level; when imposed by the state, as in Massachusetts, it has been argued that such laws usurp local control. In such cases, developers can use inclusionary zoning to avoid certain aspects of local zoning laws.
Historical background
During the mid- to late-20th century, new suburbs grew and expanded around American cities as middle-class house buyers, supported by federal loan programs such as Veterans Administration housing loan guarantees, left established neighborhoods and communities. These newly populated places were generally more economically homogeneous than the cities they encircled. Many suburban communities enacted local ordinances, often in zoning codes, to preserve the character of their municipality. One of the most commonly cited exclusionary practices is the stipulation that lots must be of a certain minimum size and houses must be set back from the street a certain minimum distance. In many cases, these housing ordinances prevented affordable housing from being built, because the large plots of land required to build within the code restrictions were cost-prohibitive for modestly priced houses. Communities have remained accessible to wealthier citizens because of these ordinances, effectively shutting the low income families out of desirable communities. Such zoning ordinances have not always been enacted with conscious intent to exclude lower income households, but it has been the unintended result of such policies.
By denying lower income families access to suburban communities, many feel that exclusionary zoning has contributed to the maintenance of inner city ghettos. Supporters of inclusionary zoning point out that low income households are more likely to become economically successful if they have middle class neighbors as peers and role models. When effective, inclusionary zoning reduces the concentration of poverty in slum districts where social norms may not provide adequate models of success. Education is one of the largest components in the effort to lift people out of poverty; access to high-quality public schools is another key benefit of reduced segregation. Statistically, a poor child in a school where 80% of the children are poor scores 13–15% lower compared to environments where the poor child's peers are 80% middle class. But this poor child, unlike their middle-class peers in market-rate housing, loses out on intergenerational wealth.
In many of the communities where inclusionary zoning has been put into practice, income requirements allow households that earn 80–120% of the median income to qualify for the "affordable" housing. This is because in many places high housing prices have prevented even median-income households from buying market-rate properties. This is especially prominent in California, where only 16% of the population could afford the median-priced home during 2005.
Potential benefits and limitations of IZ Policies
Potential benefits
Poor and working families would have access to a range of opportunities, including good employment opportunities, good schools, comprehensive transportation system and safe streets
Alleviating the problem of inadequate supply of Affordable Housing
Avoiding economic and racial segregation, which helps reduce crime rates and failing schools and improve social stability
Relatively small amount of public subsidies required for adopting IZ as a market-based tool
Potential limitations
Low production of affordable housing: IZ has produced approximately 150,000 units over several decades nationwide, compared to other schemes such as Housing Choice Vouchers, which help approximately two million households, and the LIHTC program, which has produced over two million affordable homes
Unstable production of affordable housing, which is highly affected by local housing-market conditions
Very little research on outcomes for participants in these programs. Although these affordable housing programs, by definition, offer lower-cost units that municipalities promote as inclusive, the deed restrictions imposed on participants in these programs result in additional economic disparities and other hardships not faced by market-rate homeowners.
Economics
Economists state that IZ functions as a price control on a percentage of units and has similar negative effects as other price controls (rent control) being that it discourages the supply of new housing. It can also be understood similar to impact fees as an "inclusionary tax" on market-rate units which raises the prices of new non-price-controlled units in that development and thereby diminishes the financial incentive to create new housing.
Differences in ordinances
Inclusionary zoning ordinances vary substantially among municipalities. These variables can include:
Mandatory or voluntary ordinance. While many cities require inclusionary housing, many more offer zoning bonuses, expedited permits, reduced fees, cash subsidies, or other incentives for developers who voluntarily build affordable housing.
Percentage of units to be dedicated as inclusionary housing. This varies quite substantially among jurisdictions, but appears to range from 10 to 30%.
Minimum size of development that the ordinance applies to. Most jurisdictions exempt smaller developments, but some require that even developments incurring only a fraction of an inclusionary housing unit pay a fee (see below).
Whether inclusionary housing must be built on site. Some programs allow housing to be built nearby, in cases of hardship.
Whether fees can be paid in lieu of building inclusionary housing. Fees-in-lieu allow a developer to "buy out" of an inclusionary housing obligation. This may seem to defeat the purpose of inclusionary zoning, but in some cases the cost of building one affordable unit on-site could purchase several affordable units off-site.
Income level or price defined as "affordable," and buyer qualification methods. Most ordinances seem to target inclusionary units to low- or moderate-income households which earn approximately the regional median income or somewhat below. Inclusionary housing typically does not create housing for those with very low incomes.
Whether inclusionary housing units are limited by price or by size (the City of Johannesburg for example provides for both options)
Appearance and integration of inclusionary housing units. Many jurisdictions require that inclusionary housing units be indistinguishable from market-rate units, but this can increase costs.
Longevity of price restrictions attached to inclusionary housing units, and allowable appreciation. Ordinances that allow the "discount" to expire essentially grant a windfall profit, similar to what market-rate owners would get. Municipalities dislike this because it would mean they would have to create more affordable units. Instead, participants in these programs subsidize themselves, relieving municipalities of the financial burden to keep these programs running. However, placing the brunt of the work and subsidies on the people in these programs raises questions. It can trap individuals in public housing programs, making it nearly impossible for them to move out until they pass away. If they could not afford market-rate housing 15 years ago, staying in a unit that restricts appreciation becomes a significant barrier to leaving public housing. In addition, requiring participants to do maintenance and take on all other homeowner liabilities on a home that is economically similar to a rental (since there is limited appreciation minus HOA fees, interest, taxes, etc.) can add further housing related stress.
Whether housing rehabilitation counts as "construction," either of market-rate or affordable units. Some cities, like New York City, allow developers to count rehabilitation of off-site housing as an inclusionary contribution.
Which types of housing construction the ordinance applies to. For example, high-rise housing costs more to build per square foot (thus raising compliance costs, perhaps prohibitively), so some ordinances exempt it from compliance.
Alternative solutions
While many suburban communities feature Section 8 housing for low income households, these properties are generally restricted to concentrated sections. In some cases, counties specify small districts where Section 8 properties are to be rented. In other cases, the market tends to self-segregate property by income. For instance, in Montgomery County, Pennsylvania, a wealthy suburban county bordering Philadelphia, only 5% of the county's population live in the borough of Norristown, yet 50% of the county's Section 8 properties are located there. The large low income resident population burdens Norristown's local government and school district, while much of the county remains unburdened.
Inclusionary zoning aims to reduce residential economic segregation by mandating that a mix of incomes be represented in a single development.
Controversy
Inclusionary zoning remains a controversial issue. Some affordable housing advocates seek to promote the policies in order to ensure that housing is available for a variety of income levels in more places. These supporters hold that inclusionary zoning produces needed affordable housing and creates income-integrated communities.
Yet other Affordable Housing advocates state the reverse is true, that Inclusionary Zoning can have the opposite effect and actually reduce affordable housing in a community. For example, in Los Angeles, California, inclusionary zoning apparently accelerated gentrification, as older, unprofitable buildings were razed and replaced with mostly high-rent housing, and a small percentage of affordable housing; the net result was less affordable housing. In New York, NY, inclusionary zoning allows for up to a 400% increase in luxury housing for every unit of affordable housing and for an additional 400% luxury housing when combined with the liberal use of development rights. Critics have stated the affordable housing can be directed to those making up to $200,000 through the improper use of an Area Median Income, and used as political tools by organizations tied to various politicians. New York City communities such as Harlem, the Lower East Side, Williamsburg, Chelsea and Hell's Kitchen have experienced significant secondary displacement through the use of Inclusionary Zoning.
Real Estate industry detractors note that inclusionary zoning levies an indirect tax on developers, so as to discourage them from building in areas that face supply shortages. Furthermore, to ensure that the affordable units are not resold for profit, deed restrictions generally fix a long-term resale price ceiling, eliminating a potential benefit of home ownership.
Free market advocates oppose attempts to fix given social outcomes by government intervention in markets. They argue inclusionary zoning constitutes an onerous land use regulation that exacerbates housing shortages.
Homeowners sometimes note that their property values will be reduced if low income families move into their community. Others counter that these concerns are thinly concealed classism and racism.
Some of the most widely publicized inclusionary zoning battles have involved the REIT AvalonBay Communities. According to the company's website, AvalonBay seeks to develop properties in "high barrier-to-entry markets" across the United States. In practice, AvalonBay uses inclusionary zoning laws, such as the Massachusetts Comprehensive Permit Act: Chapter 40B, to bypass local zoning laws and build large apartment complexes. In some cases, local residents fight back with a lawsuit. In Connecticut, similar developments by AvalonBay have resulted in attempts to condemn the land or reclaim it by eminent domain. In most cases AvalonBay has won these disputes and built extremely profitable apartments or condominiums.
Other legal battles have occurred in California, where many cities have implemented inclusionary zoning policies that typically require 10 percent to 15 percent of units to be affordable housing. The definition of affordable housing includes both low-income housing and moderate-income housing. In California, low-income housing is typically designed for households making 51 percent to 80 percent of the median income, and moderate-income housing is typically for households making 81 percent to 120 percent of the median income. Developers have attempted to fight back these requirements by challenging local inclusionary zoning ordinances through the court legal system. In the case Home Builders Association of Northern California v. City of Napa, the California First District Court of Appeal upheld the inclusionary zoning ordinances of City of Napa that require 10 percent of units of the new development project to be moderate income housing against the Home Builders Association that challenged the City of Napa. Cities have also attempted to impose inclusionary requirements on rental units. However, the Costa-Hawkins Rental Housing Act prohibits cities in California from imposing limitation on rental rates on vacant units. Subsequently, developers have won cases, such as Palmer/Sixth Street Properties, L.P. v. City of Los Angeles (2009), against cities that imposed inclusionary requirements on rental units, as the state law supersedes local ordinances.
Citizen groups and developers have also sought other ways to strengthen or defeat inclusionary zoning laws. For example, the initiative and referendum process in California allows citizen groups or developers to change local ordinances on affordable housing by popular vote. Any citizens or interest groups can participate in this process by gathering at least the required number of signatures so that the measure proposed can qualify to be on the ballot; once enough signatures are submitted and the ballot measure is cleared by election officials, the ballot measure is typically placed on the ballot for the upcoming election. One recent case is Proposition C in San Francisco. This ballot measure was placed on the ballot for the June 2016 California primary election. Passed in June 2016, this proposition amends the city's charter to increase the requirement for affordable housing for development projects of 25 units or more.
The clash between these various interests is reflected in this study published by the libertarian-leaning Reason Foundation's public policy think tank, and the response of a peer review of that research. Local governments reflect and in some cases balance these competing interests. In California, the League of Cities has created a guide to inclusionary zoning which includes a section on the pros and cons of the policies.
Failure in improving social integration coupled with increasing social cost
It is suggested that IZ policies may not effectively disperse low-income units throughout the region, which actually contradicts the aim of the policy itself. For instance, in Suffolk County it was found that IZ units were spatially concentrated in poor neighbourhoods with higher proportions of Black and Hispanic residents, who are considered minorities. Furthermore, 97.7% of the IZ units built from 1980 to 2000 were located in only 10% of the census tracts, namely the lowest-income neighbourhoods with a clustering of minorities. It is important to note that housing policy in Suffolk County is controlled by local government rather than regional government; therefore, without regional coordination of housing policy, it fails to consider the inter-municipal distribution of low-income households within the county. Besides, density bonuses given to property developers for the provision of IZ units have intensified the concentration of affordable units in poor neighborhoods (Ryan & Enderle as cited in Mukhija, Das, Regus et al., 2012). This shows that IZ policies may fail to disperse low-income units when they are carried out without taking regional coordination into account.
Moreover, when density bonuses are allocated to property developers for the provision of IZ units, the community bears the cost of increased population density and of sharing existing infrastructure.
In practice
Examples from the USA
More than 200 communities in the United States have some sort of inclusionary zoning provision.
Montgomery County, Maryland, is often held to be a pioneer in establishing inclusionary zoning policies. It is the sixth wealthiest county in the United States, yet it has built more than 10,000 units of affordable housing since 1974, many units door-to-door with market-rate housing.
All municipalities in the state of Massachusetts are subject to that state's General Laws Chapter 40B, which allows developers to bypass certain municipal zoning restrictions in those municipalities which have fewer than the statutorily defined 10% affordable housing units. Developers taking advantage of Chapter 40B must construct 20% affordable units as defined under the statute.
All municipalities in the state of New Jersey are subject to judicially imposed inclusionary zoning as a result of the New Jersey Supreme Court's Mount Laurel Decision and subsequent acts of the New Jersey state legislature.
A 2006 study found that 170 jurisdictions in California had some form of inclusionary housing. This was a 59% increase from 2003, when only 107 jurisdictions had inclusionary housing. In addition, state law requires that 15% of the housing units produced in redevelopment project areas must be affordable. At least 20% of revenue generated from a redevelopment project must be contributed to low-income and moderate-income housing. However, Governor Jerry Brown signed AB 1X 26, which dissolved all redevelopment agencies on February 1, 2012.
However, Los Angeles, California's inclusionary zoning ordinance for rental housing was invalidated in 2009 by the California Court of Appeal for the Second Appellate District because it directly conflicted with a provision of the state's Costa-Hawkins Rental Housing Act of 1996 which specifically gave all landlords the right to set the "initial rental rate" for new housing units.
Madison, Wisconsin's inclusionary zoning ordinance respecting rental housing was struck down by Wisconsin's 4th District Court of Appeals in 2006 because that appellate court construed inclusionary zoning to be rent control, which is prohibited by state statute. The Wisconsin Supreme Court declined the city's request to review the case. The ordinance was structured with a sunset in February 2009, unless extended by the Common Council. The Common Council did not extend the inclusionary zoning ordinance and therefore it expired and is no longer in effect.
International Examples
Johannesburg, South Africa
On 21 Feb 2019, the City of Johannesburg Council approved its "Inclusionary Housing Incentives, Regulations and Mechanisms 2019". The policy is the first of its kind in South Africa and provides four options for inclusionary housing (including price limited, size limited or negotiated options) where at least 30% of dwelling units in new developments of 20 units or more, must be inclusionary housing.
The trend of going mandatory over voluntary
While inclusionary zoning can be either mandatory or voluntary, some studies have shown that mandatory approaches are crucial to the success of inclusionary zoning programs in terms of providing a larger number of affordable housing units. Below are some examples showing the greater effect of mandatory practice over voluntary practice:
See also
Visitability - Social Integration Beyond Independent Living
Affordable housing
Residential segregation
Exclusionary zoning
Office of Fair Housing and Equal Opportunity
Woodward's building
Notes
References
Business and Professional People for the Public Interest Issue Brief #4 Inclusionary Housing in Montgomery County, MD,
Rusk, David; Nine Lessons for Inclusionary Zoning, National Inclusionary Housing Conference
Waring, Tom; "Section 8 needs a dose of reform, Hoeffel says" Northeast Times, May 15, 2002
Inclusionary Housing for the City of Chicago: Facts and Myths, North Park University
Affordable housing
Price controls
Zoning | Inclusionary zoning | Engineering | 4,149 |
917,280 | https://en.wikipedia.org/wiki/Nutrient%20sensing | Nutrient sensing is a cell's ability to recognize and respond to fuel substrates such as glucose. Each type of fuel used by the cell requires an alternate pathway of utilization and accessory molecules such as enzymes and cofactors. In order to conserve resources a cell will only produce molecules that it needs at the time. The level and type of fuel that is available to a cell will determine the type of enzymes it needs to express from its genome for utilization. Receptors on the cell membrane's surface designed to be activated in the presence of specific fuel molecules communicate to the cell nucleus via a means of cascading interactions. Nutrient receptors are receptors that are primarily designed to perform the function of nutrient sensing, whereas other receptors (e.g. insulin receptors, leptin receptors) are extensively multifunctional and perform many functions besides nutrient sensing. In this way the cell is aware of the available nutrients and is able to produce only the molecules specific to that nutrient type.
Nutrient sensing in mammalian cells
A rapid and efficient response to disturbances in nutrient levels is crucial for the survival of organisms from bacteria to humans. Cells have therefore evolved a host of molecular pathways that can sense nutrient concentrations and quickly regulate gene expression and protein modification to respond to any changes.
Cell growth is regulated by coordination of both extracellular nutrients and intracellular metabolite concentrations. AMP-activated kinase (AMPK) and mammalian target of rapamycin complex 1 (mTORC1) serve as key molecules that sense cellular energy and nutrient levels, respectively.
The interplay among nutrients, metabolites, gene expression, and protein modification are involved in the coordination of cell growth with extracellular and intracellular conditions.
Living cells use ATP as the most important direct energy source. Hydrolysis of ATP to ADP and phosphate (or AMP and pyrophosphate) provides energy for most biological processes. The ratio of ATP to ADP and AMP is a barometer of cellular energy status and is therefore tightly monitored by the cell. In eukaryotic cells, AMPK serves as a key cellular energy sensor and a master regulator of metabolism to maintain energy homeostasis.
Nutrient sensing and epigenetics
Nutrient sensing and signaling is a key regulator of epigenetic machinery in cancer. During glucose shortage, the energy sensor AMPK activates arginine methyltransferase CARM1 and mediates histone H3 hypermethylation (H3R17me2), leading to enhanced autophagy. In addition, O-GlcNAc transferase (OGT) signals glucose availability to TET3 and inhibits TET3 by both decreasing its dioxygenase activity and promoting its nuclear export. OGT is also known to directly modify histones with O-GlcNAc. These observations strongly suggest that nutrient signaling directly targets epigenetic enzymes to control epigenetic modifications.
Regulation of tissue growth
Nutrient sensing is a key regulator of tissue growth. The main mediator of cellular nutrient sensing is the protein kinase TOR (target of rapamycin). TOR receives information from levels of cellular amino acids and energy, and it regulates the activity of processes involved in cell growth, such as protein synthesis and autophagy. Insulin-like signaling is the main mechanism of systemic nutrient sensing and mediates its growth-regulatory functions largely through the protein kinase pathway. Other nutrition-regulated hormonal mechanisms contribute to growth control by modulating the activity of insulin-like signaling.
Nutrient sensing in plants
Higher plants require a number of essential nutrient elements for completing their life cycles. Mineral nutrients are mainly acquired by roots from the rhizosphere and are subsequently distributed to shoots. To cope with nutrient limitations, plants have evolved a set of elaborate responses consisting of sensing mechanisms and signaling processes to perceive and adapt to external nutrient availability.
Plants obtain most necessary nutrients by taking them up from the soil into their roots. Although plants cannot move to a new environment when nutrient availability is less than favorable, they can modify their development to favor root colonization of soil areas where nutrients are abundant. Therefore, plants perceive the availability of external nutrients, like nitrogen, and couple this nutrient sensing to an appropriate adaptive response.
Types of nutrients in plants
Potassium (K+) and phosphorus (P) are important macronutrients for crops but are often deficient in the field. Very little is known about how plants sense fluctuations in concentrations of K+ and P, and how such sensing is integrated at the organismic level into physiological and metabolic adaptations. Smaller amounts of other micronutrients are also important for the growth of the crop. All of these nutrients are equally important for the growth of the plant, and lack of one nutrient can result in poor growth, greater vulnerability to disease, or even death of the plant. These nutrients, along with energy from the sun, aid in the development of the plant.
Nitrogen sensing
As one of the most vital nutrients for the development and growth of all plants, nitrogen sensing and the signalling response are vital for plants to live. Plants absorb nitrogen through the soil in the form of either nitrate or ammonia. In soil with low oxygen levels, ammonia is the primary nitrogen source, but toxicity is carefully controlled through the transcription of ammonium transporters (AMTs). This metabolite and others including glutamate and glutamine have been shown to act as a signal of low nitrogen through regulation of nitrogen transporter gene transcription. NRT1.1, also known as CHL1, is the nitrate transceptor (transporter and receptor) found on the plasma membrane of plants. This is both a high and low affinity transceptor that senses varying concentrations of nitrate depending on its T101 residue phosphorylation. It has been shown that nitrate can also act as just a signal for plants, since mutants unable to metabolize it are still able to sense the ion. For example, many plants show increased expression of nitrate-regulated genes in low nitrate conditions and consistent mRNA transcription of such genes in soil high in nitrate. This demonstrates the ability to sense nitrate soil concentrations without metabolic products of nitrate and still exhibit downstream genetic effects.
Potassium Sensing
Potassium (K+), one of the essential macronutrients, is found in the soil. K+ is the most abundant cation in plant cells, yet its availability in the soil is often limited. Plants absorb K+ from the soil through channels that are found at the plasma membrane of root cells. Potassium is not assimilated into organic matter like other nutrients such as nitrate and ammonium but serves as a major osmoticum.
Brain and gut regulation of food intake
Maintaining a careful balance between stored energy and caloric intake is important to ensure that the body has enough energy to maintain itself, grow, and engage in activity. When balanced improperly, obesity and its accompanying disorders can result.
References
Nutrition
Receptors | Nutrient sensing | Chemistry | 1,384 |
53,906,579 | https://en.wikipedia.org/wiki/Bridge%20tender%27s%20house | A bridge tender's house is a structure near or upon a moveable bridge from which a bridge tender may operate the bridge and monitor river traffic, and in which they may reside. It may contain the controls and the mechanicals to operate the bridge.
References
External links
Moveable bridges
Bridge components | Bridge tender's house | Technology | 61 |
30,875,305 | https://en.wikipedia.org/wiki/Activation-synthesis%20hypothesis | The activation-synthesis hypothesis, proposed by Harvard University psychiatrists John Allan Hobson and Robert McCarley, is a neurobiological theory of dreams first published in the American Journal of Psychiatry in December 1977. The differences in neuronal activity of the brainstem during waking and REM sleep were observed, and the hypothesis proposes that dreams result from brain activation during REM sleep. Since then, the hypothesis has undergone an evolution as technology and experimental equipment has become more precise. Currently, a three-dimensional model called AIM Model, described below, is used to determine the different states of the brain over the course of the day and night. The AIM Model introduces a new hypothesis that primary consciousness is an important building block on which secondary consciousness is constructed.
Introduction
With the advancement of brain imaging technology, the sleep-waking cycle can be studied as never before. Thanks to these advanced methods of measurement, the brain can be objectively quantified and identified as being in one of three states: awake, REM sleep, and NREM sleep. It has been shown that global deactivation of the brain occurs from the waking state to NREM sleep, followed by a reactivation during REM sleep to a degree greater than during waking. Consciousness and its substates, primary consciousness and secondary consciousness, play a part in identifying the state of the brain. Primary consciousness is the simple awareness of perception and emotion; that is, the awareness of the world via advanced visual and motor coordination information the brain receives. Secondary consciousness is an advanced state that includes both primary consciousness and abstract analysis, or thinking, and metacognitive components, or the awareness of being aware. Most animals show some stages of primary consciousness, but only humans have been experimentally shown to experience secondary consciousness. The cycle of waking-NREM-REM sleep is essential to the mental health of mammals. It has been shown through experimentation that animals prevented from entering REM sleep show an immediate attempt to quickly re-enter REM stages, as well as long-term effects on motor coordination and habitual motor behaviors, eventually leading to the death of the animal. It has also been shown that homeothermic animals might require sleep to maintain body weight and temperature.
Background
Waking
The waking consciousness is the awareness of the world, our bodies, and ourselves. This includes humans experiencing the awareness of being aware of ourselves, an intrinsic ability to humans. It's the ability to look in a mirror and know that you are looking at yourself, and not just another human being. Wakefulness allows the distinction between tasks and default brain states, and also distinguishes between background and foreground processing. Being awake allows the person to not only be aware of themselves and the world, but also to have conscious motor coordination and understand the difference between need and want that comes from secondary consciousness.
Difference between sleep and dream
There is a difference between being just asleep and in a state of mind called dreaming. Sleeping can be described as the lack of conscious awareness of the outside world, meaning large portions of the brain that receive and interpret signals are deactivated during this time, while dreaming is a specific state of sleep in which enhanced brain activity has been shown to occur, theorizing the primary consciousness could be active during dreaming. Indeed, during dreams we are consciously aware of our surroundings, and assuredly have a certain perception and emotion throughout the course of the dream, suggesting that at least part of the primary consciousness is activated during the dream.
Dream
A dream has all features of primary consciousness but is produced in the brain without external stimulation. Unlike the waking state, the brain cannot recognize its own condition; that it is in the midst of the dream and is not the same as the real world. The brain has a single-minded state of primary consciousness during dreaming, which allows the brain to reach greater perception and awareness of a single scenario out of images and dreams. This is called the dream consciousness.
Four stages of sleep
The four sleep stages have been identified as follows: sleep onset stage I, late-night stage II, and deep sleep stages III and IV. Deep sleep stages III and IV all occur during the first half of the night, while lighter stages I and II occur during the later half. During standard sleep laboratory measurements, the states of sleep and waking have behavioral, polygraphic, and psychological manifestation within the pontine brainstem. These states are regulated by a reciprocal relationship between two types of neuronal cells, aminergic inhibitory cells such as serotonin and norepinephrine and cholinergic excitatory cells such as acetylcholine. Changes in the sleep stages occur when the activity curves of these neurons cross. REM sleep stage I is a state of sleep just above and most closely linked to sleep onset stage I.
NREM
NREM sleep can be described as the stages of sleep that show greatly decreased brain activity. There are four different stages of NREM sleep. The brain shows dulled or limited senses of perception, though the thought process has been shown to be logical and perseverative. Episodic movements of the body occur during these stages, though they are involuntary movements.
REM
REM sleep may be a more evolutionarily recent sleep state, and is prominent in most birds and mammals, although may exist in reptiles and other vertebrates to varying degrees. REM stands for rapid eye movement. It is generally a later sleep state following non-REM (NREM) sleep. It is regulated in part by the pontine brainstem. Infants spend most of their time in REM sleep, and rather than enter stage 1 sleep they may go directly to REM sleep. Most REM sleep occurs just above stage I of sleep, and experiences different mental abilities than during NREM sleep. The thought process is sometimes non-logical or even bizarre, sensation and perception is vivid but created internally by the brain, and the body's movements are inhibited. Most REM stages last 10–15 minutes, and the average human will go through 4–6 of these stages during sleep each night. Subsequent REM stages increase in duration, so the last REM stage before awakening is the longest and thus may have the most vivid dream imagery. It has been proposed that REM sleep is necessary for preparation of many integrative functions, of which one is consciousness. It supports the idea that sleep, and dreaming, is necessary or at least optimal preparation for the next day's processes. The scientific tracking of REM sleep stages can be measured by neuronal signals within the pontine brainstem. The interactions of aminergic inhibitory neurons and cholinergic excitatory neurons can be measured, and REM sleep occurs when aminergic cells are at their least active and cholinergic cells are at their most active.
Evolution of REM
It has been stated that REM sleep is a recent evolutionary behavior in homeothermic animals. In both birds and mammals, there is increased REM sleep in the early stages of life. In humans, REM sleep peaks during the third trimester of gestation, and quickly falls after birth as primary consciousness declines and secondary consciousness grows with the development of the brain. The developing control over stages of sleep and waking suggests that sleep and REM have developed as a way to self-activate in order to anticipate awake-state circumstances.
Neuronic modeling
Within the pons, the modeling and tracking of these aminergic inhibitory neurons and cholinergic excitatory neurons occurs via the study of PGO waves. These are phasic waves that occur in cycles, and originate from the pontine brainstem (P), lateral geniculate of thalamus (G), and occipital cortex (O). Aminergic monoamines serotonin, noradrenaline, histamine, and dopamine are balanced between acetylcholine cholinergic signals, and play a part in the regulation of cognition. Aminergic cell signal strength is lowest during REM sleep, increases during NREM, and is highest at waking. Cholinergic cell signal strength is highest during REM, declines during NREM, and is lowest at waking. Changes in sleep state and phase occur when two activity paths cross.
Theory
The development of consciousness is a gradual, time-consuming and lifelong process that builds upon and uses a more primitive virtual reality generator that is more definable in our dreams. As such, the development of secondary consciousness during the lifetime requires a blank consciousness that during REM sleep creates an imaginary self that has movements and experiences emotions. This is an experimental state not associated with awareness, and this state, or protoconscious, is able to be reached during childhood. This protoconsciousness is a protoself created early in life by the brain as a building block for consciousness to develop, and provides intrinsic predictions of external inputs created by dreaming.
Original activation-synthesis hypothesis model
Hobson and McCarley originally proposed in the 1970s that the differences in the waking-NREM-REM sleep cycle was the result of interactions between aminergic REM-off cells and cholinergic REM-on cells. This was perceived as the activation-synthesis model, stating that brain activation during REM sleep results in synthesis of dream creation. Hobson's five cardinal characteristics include: intense emotions, illogical content, apparent sensory impressions, uncritical acceptance of dream events, and difficulty in being remembered.
Current model – AIM
Thanks to the development of technology since the original proposal, new experimental data has been collected and additional mechanistic details of neuronal control have been developed. It has been determined that consciousness states can be described with three values, and the AIM model is a model that uses these values for representing the similarities and differences between waking and dreaming. It is a three-dimensional state-space model that describes different states of the brain and their variance throughout the day and night. It is composed of three different values: A – activation, I – input-output gating, and M – modulation. The model is limited however, in that it cannot yet explain the regional differences in brain activity that distinguish REM sleep from waking. Other limitations include the inability to quantifiably identify and measure M in humans. During waking and activation of primary and secondary consciousnesses, high values of A, I, and M have been observed, but during REM sleep high values of A but low I and M have been observed.
Protoconsciousness
The protoconsciousness is template of consciousness that occurs during sleep, and on which can be constructed other mental conscious processes. Early in childhood, it has been said that this protoconsciousness is where secondary aspects of consciousness are originally developed and tested by the primary consciousness, and the person can slowly develop increased secondary consciousness throughout their life as their protoconscious template is further expanded, developed, and creates more vivid ideas and representations of secondary consciousness.
Activation (A)
Large parts of the brain that are activated and sending signals during waking are inactive during NREM sleep and become reactivated during REM sleep. This is based on the fact that the brain and its neural circuitry are plastic and self-regulating, especially in their own activation and inactivation. This was observed in two experiments: the development of sleepiness after destruction of dopamine neurons in the substantia nigra of the midbrain, and the discovery of the reticular activating system, which relays sensory cues received through the eyes to the brain to begin the waking process and on which waking consciousness depends. Following these studies, it became clear that activity levels and quality of consciousness were functions of brain activation and deactivation.
Input-output gating (I)
It has been shown that the internal activation of the brain is associated with the inhibition of both external sensory input and motor output. This implies that the brain is actively kept offline during REM, and the brainstem guarantees the coordination of factors I and A via the input-output gate control within the brainstem. PGO waves play a part in the ability of the brain to remain asleep while constituting the building blocks for perception and fine motor control via their phasic coordination. It has therefore been proposed that PGO signals are used in the construction of visual imagery of dreams.
Modulation (M)
The neuromodulators released by aminergic neurons have a broad chemical influence on the brain; they instruct other neurons to keep or discard a record of information they have processed. The mechanics of modulation are not known at this time, and modulation has yet to be quantitatively identified. Qualitatively, aminergic modulation has been shown to be strong during waking but lower during sleep, but more studies need to be conducted. Numerous studies have emerged from the discipline of computational neuroscience that lend support to the AIM model. The theory of metalearning in particular describes how these neuromodulators facilitate dynamic learning, through a series of interpretive models all consistent with the AIM model.
Implications
The three-dimensional AIM model shows that during the cycle of brain states waking-NREM-REM, the brain is dynamically changing constantly, and that this state space described by the AIM has an infinite number of subregions other than the main three. It proposes that via a protoconsciousness brain activation during sleep is necessary for the development and maintenance of waking consciousness and other higher-order brain functions such as problem solving. It suggests the possibility that the state of waking consciousness is only present in humans due to the evolution of extensive cortical structures within the brain. Dreaming is a state of the brain that is similar to yet different from the waking consciousness, and interaction and correlation between the two is necessary for optimal performance from both. One study conducted measuring brain activity via EEG used Hobson's AIM model to show that quantitatively dream consciousness is remarkably similar to waking consciousness.
References
Sleep physiology
Dream
Unsolved problems in neuroscience
Consciousness | Activation-synthesis hypothesis | Biology | 2,850 |
26,414,007 | https://en.wikipedia.org/wiki/Rosemary%20A.%20Bailey | Rosemary A. Bailey (born 1947) is a British statistician who works in the design of experiments and the analysis of variance and in related areas of combinatorial design, especially in association schemes. She has written books on the design of experiments, on association schemes, and on linear models in statistics.
Education and career
Bailey's first degree and Ph.D. were in mathematics at the University of Oxford. She was awarded her doctorate in 1974 for a dissertation on permutation groups, Finite Permutation Groups supervised by Graham Higman. Bailey's career has not been in pure mathematics but in statistics where she has specialised in the algebraic problems associated with the design of experiments.
Bailey worked at the University of Edinburgh with David Finney and at The Open University. She spent 1981–91 in the Statistics Department of Rothamsted Experimental Station. In 1991 Bailey became Professor of Mathematical Sciences at Goldsmiths College in the University of London and then Professor of Statistics at Queen Mary, University of London where she is Professor Emerita of Statistics. She is currently Professor of Mathematics and Statistics in the School of Mathematics and Statistics at the University of St Andrews, Scotland.
Recognition
Bailey is a Fellow of the Institute of Mathematical Statistics and in 2015 was elected a Fellow of the Royal Society of Edinburgh.
Selected publications
References
External links
Homepage of Professor Bailey at Queen Mary University of London
Homepage of Professor Bailey at the School of Mathematics and Statistics, University of St Andrews
R.A. Bailey at theoremoftheday.org
20th-century English mathematicians
21st-century English mathematicians
Academics of Queen Mary University of London
Algebraists
Alumni of St Hugh's College, Oxford
Combinatorialists
English statisticians
Living people
Rothamsted statisticians
British women statisticians
1947 births
Academics of the University of St Andrews
Fellows of the Institute of Mathematical Statistics
Fellows of the Royal Society of Edinburgh
20th-century British women mathematicians
21st-century British women mathematicians | Rosemary A. Bailey | Mathematics | 391 |
49,146,103 | https://en.wikipedia.org/wiki/Sarcodon%20conchyliatus | Sarcodon conchyliatus is a species of tooth fungus in the family Bankeraceae. Found in Malaysia, it was described as new to science in 1971 by Dutch mycologist Rudolph Arnold Maas Geesteranus. The fruit bodies have finely tomentose caps that are dull ochraceous, greyish or brownish, and typically have drab to purplish tinges. The spines on the cap underside are not decurrent on the stipe. Maas Geesteranus placed the fungus in the section Virescentes, along with S. atroviridis and S. thwaitesii, all species with flesh that dries to a deep olive green color.
References
External links
Fungi described in 1971
Fungi of Asia
conchyliatus
Fungus species | Sarcodon conchyliatus | Biology | 156 |
11,542,343 | https://en.wikipedia.org/wiki/Transcription%20factor%20II%20A | Transcription factor TFIIA is a nuclear protein involved in the RNA polymerase II-dependent transcription of DNA. TFIIA is one of several general (basal) transcription factors (GTFs) that are required for all transcription events that use RNA polymerase II. Other GTFs include TFIID, a complex composed of the TATA binding protein TBP and TBP-associated factors (TAFs), as well as the factors TFIIB, TFIIE, TFIIF, and TFIIH. Together, these factors are responsible for promoter recognition and the formation of a transcription preinitiation complex (PIC) capable of initiating RNA synthesis from a DNA template.
Functions
TFIIA interacts with the TBP subunit of TFIID and aids in the binding of TBP to TATA-box containing promoter DNA. Interaction of TFIIA with TBP facilitates formation of and stabilizes the preinitiation complex. Interaction of TFIIA with TBP also results in the exclusion of negative (repressive) factors that might otherwise bind to TBP and interfere with PIC formation. TFIIA also acts as a coactivator for some transcriptional activators, assisting with their ability to increase, or activate, transcription. The requirement for TFIIA in in vitro transcription systems has been variable, and it can be considered as a GTF, a loosely associated TAF-like coactivator, or both. Genetic analysis in yeast has shown that TFIIA is essential for viability.
Structure
TFIIA is a heterodimer with two subunits: one large unprocessed (subunit 1, or alpha/beta; gene name ) and one small (subunit 2, or gamma; gene name ). It was originally believed to be a heterotrimer of an alpha (p35), a beta (p19) and a gamma subunit (p12). In humans, the sizes of the encoded proteins are approximately 55 kD and 12 kD. Both genes are present in species ranging from humans to yeast, and their protein products interact to form a complex composed of a beta barrel domain and an alpha helical bundle domain. It is the N-terminal and C-terminal regions of the large subunit that participate in interactions with the small subunit. These regions are separated by another domain whose sequence is always present in large subunits from various species but whose size varies and whose sequence is poorly conserved. A second gene encoding a large TFIIA subunit has been found in some higher eukaryotes. This gene, ALF/TFIIAtau (gene name ) is expressed only in oocytes and spermatocytes, suggesting it has a TFIIA-like regulatory role for gene expression only in germ cells.
References
External links
Gene expression
Transcription factors | Transcription factor II A | Chemistry,Biology | 574 |
37,092,814 | https://en.wikipedia.org/wiki/Pterobranchia%20mitochondrial%20code | The pterobranchia mitochondrial code (translation table 24) is a genetic code used by the mitochondrial genome of Rhabdopleura compacta (Pterobranchia). The Pterobranchia are one of the two groups in the Hemichordata which together with the Echinodermata and Chordata form the three major lineages of deuterostomes. AUA translates to isoleucine in Rhabdopleura as it does in the Echinodermata and Enteropneusta while AUA encodes methionine in the Chordata. The assignment of AGG to lysine is not found elsewhere in deuterostome mitochondria but it occurs in some taxa of Arthropoda. This code shares with many other mitochondrial codes the reassignment of the UGA STOP to tryptophan, and AGG and AGA to an amino acid other than arginine. The initiation codons in Rhabdopleura compacta are ATG and GTG.
Code 24 is very similar to mitochondrial code 33, which is used by the Cephalodiscidae, another group within the Pterobranchia.
The code
AAs = FFLLSSSSYY**CCWWLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSSKVVVVAAAADDEEGGGG
Starts = ---M---------------M---------------M---------------M------------
Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG
Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG
Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG
Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U).
Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V)
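The five rows above follow the standard layout: reading the Base1–Base3 and AAs rows column by column gives each codon and its amino acid, while an 'M' in the Starts row marks a possible initiation codon (the Starts row encodes four of them, of which ATG and GTG are the ones reported for Rhabdopleura compacta in the text above). A small sketch of how the table can be used programmatically is shown below; the example sequence is made up for illustration.

```python
# Sketch: build the codon -> amino-acid table for translation table 24 from the
# AAs/Starts/Base1/Base2/Base3 rows above and translate a short reading frame.
aas    = "FFLLSSSSYY**CCWWLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSSKVVVVAAAADDEEGGGG"
starts = "---M---------------M---------------M---------------M------------"
base1  = "TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG"
base2  = "TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG"
base3  = "TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG"

# '*' marks a stop codon; an 'M' in the Starts row marks a possible initiation codon.
codon_table  = {b1 + b2 + b3: aa for b1, b2, b3, aa in zip(base1, base2, base3, aas)}
start_codons = {b1 + b2 + b3 for b1, b2, b3, s in zip(base1, base2, base3, starts) if s == "M"}

def translate(seq: str) -> str:
    """Translate an in-frame DNA sequence, stopping at the first stop codon."""
    protein = []
    for i in range(0, len(seq) - 2, 3):
        aa = codon_table[seq[i:i + 3].upper()]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(sorted(start_codons))          # ['ATG', 'CTG', 'GTG', 'TTG'] per the Starts row
print(codon_table["AGG"])            # 'K' -- the lysine reassignment noted above
print(codon_table["TGA"])            # 'W' -- UGA read as tryptophan
print(translate("ATGAGGTGATTTTAA"))  # 'MKWF' (made-up example sequence)
```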
Differences from the standard code
See also
List of genetic codes
References
Molecular genetics
Gene expression
Protein biosynthesis | Pterobranchia mitochondrial code | Chemistry,Biology | 705 |
22,762,908 | https://en.wikipedia.org/wiki/Dual%20of%20BCH%20is%20an%20independent%20source | A certain family of BCH codes have a particularly useful property, which is that
treated as linear operators, their dual operators turns their input into an -wise independent source. That is, the set of vectors from the input vector space are mapped to an -wise independent source. The proof of this fact below as the following Lemma and Corollary is useful in derandomizing the algorithm for a -approximation to MAXEkSAT.
Lemma
Let $C \subseteq \mathbb{F}_2^n$ be a linear code such that $C^\perp$ has distance greater than $\ell$. Then $C$ is an $\ell$-wise independent source.
Proof of lemma
It is sufficient to show that given any $k \times \ell$ matrix $M$, where $k$ is greater than or equal to $\ell$, such that the rank of $M$ is $\ell$, for all $x \in \mathbb{F}_2^k$, $xM$ takes every value in $\mathbb{F}_2^\ell$ the same number of times.
Since $M$ has rank $\ell$, we can write $M$ as two matrices of the same size, $M_1$ and $M_2$, where $M_1$ has rank equal to $\ell$. This means that $xM$ can be rewritten as $x_1 M_1 + x_2 M_2$ for some $x_1$ and $x_2$.
If we consider $M$ written with respect to a basis where the first $\ell$ rows are the identity matrix, then $x_1$ has zeros wherever $M_2$ has nonzero rows, and $x_2$ has zeros wherever $M_1$ has nonzero rows.
Now any value $y$, where $y = xM$, can be written as $y = x_1 M_1 + x_2 M_2$ for some vectors $x_1, x_2$.
We can rewrite this as: $x_1 M_1 = y - x_2 M_2.$
Fixing the value of the last $k - \ell$ coordinates of $x_2$ (note that there are exactly $2^{k-\ell}$ such choices), we can rewrite this equation again as: $x_1 M_1 = b$ for some $b$.
Since $M_1$ has rank equal to $\ell$, there is exactly one solution $x_1$, so the total number of solutions is exactly $2^{k-\ell}$, proving the lemma.
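To make the lemma concrete, here is a minimal sketch that verifies it exhaustively on a small example. The choice of code is an illustrative assumption: $C$ is taken to be the [7, 3] simplex code, whose dual is the [7, 4, 3] Hamming code with minimum distance 3 > 2, so the lemma predicts that $C$ is a 2-wise independent source.

```python
# Exhaustive check of the lemma for C = the [7, 3] simplex code (dual of the
# [7, 4, 3] Hamming code, whose distance 3 exceeds ell = 2).
from itertools import product, combinations
from collections import Counter

# Generator matrix of the simplex code: its columns are all 7 nonzero vectors of F_2^3.
G = [[1, 0, 0, 1, 1, 0, 1],
     [0, 1, 0, 1, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1]]
k, n = len(G), len(G[0])

# Enumerate all 2^k codewords xG over F_2.
codewords = [tuple(sum(x[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
             for x in product([0, 1], repeat=k)]

# ell-wise independence: restricted to any ell coordinates, every pattern in
# {0,1}^ell must occur among the codewords the same number of times.
ell = 2
for coords in combinations(range(n), ell):
    counts = Counter(tuple(c[j] for j in coords) for c in codewords)
    assert set(counts.values()) == {len(codewords) // 2 ** ell}, (coords, counts)

print(f"{len(codewords)} codewords form a {ell}-wise independent source on {n} coordinates")
```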
Corollary
Recall that BCH2,m,d is an $[n = 2^m - 1,\; n - 1 - \lceil (d-2)/2 \rceil\, m,\; \ge d]$ linear code.
Let $C^\perp$ be BCH2,log n,ℓ+1. Then $C$ is an $\ell$-wise independent source of size $O(n^{\lfloor \ell/2 \rfloor})$.
Proof of corollary
The dimension $d$ of $C$ is just $\lceil (\ell+1-2)/2 \rceil \log n + 1 = \lfloor \ell/2 \rfloor \log n + 1$. So $2^d = 2 \cdot n^{\lfloor \ell/2 \rfloor}$.
So the cardinality of $C$ considered as a set is just $2^d = O(n^{\lfloor \ell/2 \rfloor})$, proving the Corollary.
References
Coding Theory notes at University at Buffalo
Coding Theory notes at MIT
Article proofs | Dual of BCH is an independent source | Mathematics | 409 |
3,946,940 | https://en.wikipedia.org/wiki/Alan%20Stern | Sol Alan Stern (born November 22, 1957) is an American engineer, planetary scientist and space tourist. He is the principal investigator of the New Horizons mission to Pluto and the Chief Scientist at Moon Express.
Stern has been involved in 24 suborbital, orbital, and planetary space missions, including eight for which he was the mission principal investigator. One of his projects was the Southwest Ultraviolet Imaging System, an instrument which flew on two space shuttle missions, STS-85 in 1997 and STS-93 in 1999.
Stern has also developed eight scientific instruments for planetary and near-space research missions and has been a guest observer on numerous NASA satellite observatories, including the International Ultraviolet Explorer, the Hubble Space Telescope, the International Infrared Observer and the Extreme Ultraviolet Observer. Stern was executive director of the Southwest Research Institute's Space Science and Engineering Division until becoming Associate Administrator of NASA's Science Mission Directorate in 2007. He resigned from that position after nearly a year.
His research has focused on studies of our solar system's Kuiper belt and Oort cloud, comets, the satellites of the outer planets, Pluto, and the search for evidence of planetary systems around other stars. He has also worked on spacecraft rendezvous theory, terrestrial polar mesospheric clouds, galactic astrophysics, and studies of tenuous satellite atmospheres, including the atmosphere of the Moon.
Life and career
Stern was born in New Orleans, Louisiana to Jewish parents Joel and Leonard Stern. He graduated from St. Mark's School of Texas in 1975. He then attended the University of Texas, Austin, where he received his bachelor's degrees in physics & astronomy and his master's degrees in aerospace engineering and planetary atmospheres. He earned a doctorate in astrophysics and planetary science from the University of Colorado, Boulder.
From 1983 to 1991, Stern held positions at the University of Colorado in the Center for Space and Geoscience Policy, the office of the vice president for Research, and the Center for Astrophysics and Space Astronomy. He received his doctorate in 1989. From 1991 to 1994 he was the leader of Southwest Research Institute's Astrophysical and Planetary Sciences group and was chair of NASA's Outer Planets Science Working Group. From 1994 to 1998 he was the leader of the Geophysical, Astrophysical, and Planetary Science section in Southwest Research Institute's Space Sciences Department, and from 1998 to 2005 he was the director of the Department of Space Studies at Southwest Research Institute. In 1995 he was selected to be a Space Shuttle mission specialist finalist, and in 1996 he was a candidate Space Shuttle payload specialist but did not have the opportunity to fly on the Space Shuttle.
In 2007, Stern was listed among Time magazine's 100 Most Influential People in The World.
On August 27, 2008, Stern was elected to the board of directors of the Challenger Center for Space Science Education.
In 2015, Stern was the recipient of Smithsonian Magazine's American Ingenuity Award in the Physical Sciences category.
On October 7, 2016, Stern was inducted into the Colorado Space Hall of Fame.
Inspiration for Pluto/Kuiper belt mission
On June 14, 2007, in an address to the Smithsonian Institution for their "Exploring the Solar System Lecture Series", Stern commented on the New Horizons mission:
Private sector experience
After completing a master's degree in aerospace engineering Stern spent seven years as an aerospace systems engineer, concentrating on spacecraft and payload systems at the NASA Johnson Space Center, Martin Marietta Aerospace, and the Laboratory for Atmospheric and Space Physics at the University of Colorado.
Stern is currently active as a consultant for private sector space efforts and has stated:
On June 18, 2008, Stern joined Odyssey Moon Limited (Isle of Man), a private industry effort, as a part-time Science Mission Director/consultant in their efforts to launch a robotic mission to the Earth's Moon by participating in the $30 Million Google Lunar X-Prize competition.
In December 2008, Stern joined Blue Origin, a company that was founded by Amazon.com's Jeff Bezos, as an independent representative for research and education missions. The company has stated that its objective is to develop a new vertical-take-off, vertical-landing vehicle known as New Shepard that is designed to take a small number of astronauts on a sub-orbital journey into space and reduce the cost of space transportation. The company is located in Kent, Washington and has flight tested some hardware.
In 2012, Stern co-founded Uwingu.
Space science mission
Stern has experience in instrument development, concentrating on ultraviolet technologies. Stern is a principal investigator (PI) in NASA's UV sounding rocket program, and was the project scientist on a Shuttle-deployable SPARTAN astronomical satellite. He was the PI of the advanced, miniaturized HIPPS Pluto breadboard camera/IR spectrometer/UV spectrometer payload for the NASA/Pluto-Kuiper Express mission, and he is the PI of the PERSI imager/spectrometer payload on NASA's New Horizons Pluto mission. Stern is also the PI of the Alice UV Spectrometer for the ESA/NASA Rosetta comet orbiter. He was a member of the New Millennium Deep Space 1 (DS1) mission science team, and is a Co-investigator on both the ESA SPICAM Mars UV spectrometer launched on Mars Express, and the Hubble Space Telescope Cosmic Origins Spectrograph (COS) installed in 2009. He is the PI of the SWUIS ultraviolet imager, which has flown two Shuttle missions, and the SWUIS-A airborne astronomical facility. In this capacity, Stern has flown numerous WB-57 and F-18 airborne research astronomy missions. Stern and his colleague, Dr. Daniel Durda, have been flying on the modified F/A-18 Hornet with a sophisticated camera system called the Southwest Ultraviolet Imaging System (SWUIS). They use the camera to search for a hypothetical group of asteroids (Vulcanoids) between the orbit of Mercury and the Sun that are so elusive and hard to see that scientists are not sure they exist.
NASA experience
Stern has served on various NASA committees, including the Lunar Exploration Science Working Group (LExSWG) and the Discovery Program Science Working Group (DPSWG), the Solar System Exploration Subcommittee (SSES), the New Millennium Science Working Group (NMSWG), and the Sounding Rocket Working Group (SRWG). He was Chair of NASA's Outer Planets Science Working Group (OPSWG) from 1991 to 1994 and served as a panel member for the National Research Council's 2003-2013 Decadal Survey on planetary science. Stern is a member of the AAAS, the AAS, and the AGU.
NASA Associate Administrator
Stern was appointed NASA's Associate Administrator for the Science Mission Directorate, essentially NASA's top-ranking official for science, in April 2007. In this position Stern directed an organization with 93 separate flight missions and a program of over 3,000 research grants. During his tenure a record 10 major new flight projects were started and deep reforms of the research and also the education and public outreach programs were put in place. Stern's style was characterised as "hard-charging" as he pursued a reform-minded agenda. He "made headlines for trying to keep agency missions on schedule and under budget" but faced "internal battles over funding". He was credited with making "significant changes that have helped restore the importance of science in NASA's mission".
On March 26, 2008, it was announced that Stern had resigned his position the previous day, effective April 11. He was replaced by Ed Weiler, who was to serve his second stint in the position. The resignation occurred on the same day that NASA Chief Michael D. Griffin overruled a decrease in funding for the Mars Exploration Rovers and Mars Odyssey missions that was intended to free up funds needed for the upcoming Mars Science Laboratory. NASA officials would neither confirm nor deny a connection between the two events.
Stern left to avoid cutting healthy programs and basic research in order to cover cost overruns. He believed that cost overruns in the Mars program should be accommodated from within the Mars program, and not taken from other NASA programs. Michael D. Griffin became upset with Stern for making major decisions without consulting him, while Stern was frustrated by Griffin's refusal to allow him to cut or delay politically sensitive projects. Griffin favored cutting "less popular parts" of the budget, including basic research, and Stern's refusal to do so led to his resignation.
Casting doubt on the theory that Stern resigned due to conflict with former Administrator Griffin is his statement of March 25, 2009 at spacepolitics.com:
On November 23, 2008, in an op-ed in The New York Times, Stern criticized NASA's inability to keep its spending under control. Stern said that, during his own time at NASA, "when I articulated this problem... and consistently curtailed cost increases, I found myself eventually admonished and then neutered by still higher ups, precipitating my resignation earlier this year." While complimenting NASA Administrator Michael D. Griffin, Stern suggested that Griffin's decision to again bail out an over-budget mission was motivated by fear "that any move to cancel the Mars mission would be rebuffed by members of Congress protecting local jobs."
Since leaving NASA, Stern has made criticisms of the budgetary process and has advocated for revamping its public appeal.
Planetary classification
Stern has become involved in the debate surrounding the 2006 definition of planet by the IAU. After the IAU's decision was made he was quoted as saying "It's an awful definition; it's sloppy science and it would never pass peer review" and claimed that Earth, Mars, Jupiter and Neptune have not fully cleared their orbital zones and has stated in his capacity as PI of the New Horizons project that "The New Horizons project [...] will not recognize the IAU's planet definition resolution of August 24, 2006."
A 2000 paper by Stern and Levison proposed a system of planet classification that included both the concepts of hydrostatic equilibrium and clearing the neighbourhood used in the new definition, with a proposed classification scheme labeling all sub-stellar objects in hydrostatic equilibrium as "planets" and subclassifying them into "überplanets" and "unterplanets" based on a mathematical analysis of the planet's ability to scatter other objects out of its orbit over a long period of time. Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune were classified as neighborhood-clearing "überplanets" and Pluto was classified as an "unterplanet".
Satellite planets and belt planets
Some large satellites are of similar size or larger than the planet Mercury, e.g. Jupiter's Galilean moons and Titan. Stern has argued that location should not matter and only geophysical attributes should be taken into account in the definition of a planet, and proposes the term satellite planet for a planet-sized object orbiting another planet. Likewise planet-sized objects in the asteroid belt or Kuiper belt should also be planets according to Stern. Others have used the neologism planemo (planetary-mass object) for the broad concept of "planet" advocated by Stern.
Selected bibliography
References
1957 births
American planetary scientists
Pluto's planethood
New Horizons
Jewish American scientists
Jewish engineers
St. Mark's School (Texas) alumni
Living people
Discoverers of trans-Neptunian objects
NASA people
University of Texas at Austin College of Natural Sciences alumni
University of Colorado Boulder alumni
Cockrell School of Engineering alumni | Alan Stern | Astronomy | 2,358 |
980,240 | https://en.wikipedia.org/wiki/Tachistoscope | A tachistoscope is a device that displays a picture, text, or an object for a specific amount of time. It can be used for various purposes such as to increase recognition speed, to show something too fast to be consciously recognized, or to test which elements of a display are memorable.
Early tachistoscopes were mechanical, using a flat masking screen containing a window. The screen concealed the picture or text until it moved, at a known speed, so that the window passed over the picture or text, revealing it. The screen continued to move until it hid the picture or text again. Later tachistoscopes used a shutter system typical of a camera in conjunction with a slide or transparency projector. Still later, tachistoscopes used brief illumination of the material to be displayed, such as from fast-onset and fast-offset fluorescent lamps. By the late 1990s, tachistoscopes had largely been replaced by computers for displaying pictures and text.
History
The first tachistoscope was originally described by the German physiologist A.W. Volkmann in 1859. Samuel Renshaw used it during World War II in the training of fighter pilots to help them identify aircraft silhouettes as friend or foe.
Applications
Before computers became universal, tachistoscopes were used extensively in psychological research to present visual stimuli for controlled durations. Some experiments employed pairs of tachistoscopes so that an experimental participant could be given different stimulation in each visual field.
Tachistoscopes were used during the late 1960s in public schools as an aid to increased reading comprehension for speed reading. There were two types: the student would look through a lens similar to an aircraft bombsight viewfinder and read letters, words, and phrases using manually advanced slide film. The second type projected words and phrases on a screen in sequence. Both types were followed up with comprehension and vocabulary testing.
Tachistoscopes continue to be used in market research, where they are typically used to compare the visual impact, or memorability of marketing materials or packaging designs. Tachistoscopes used for this purpose still typically employ slide projectors rather than computer monitors, due to
the increased fidelity of the image which can be displayed in this way and
the opportunity to show large or life-size images.
References
External links
How to Build and Use a Tachistoscope: https://web.archive.org/web/20220328115006/http://www.sykronix.com/researching/tscope.htm
Photography equipment
Optical devices | Tachistoscope | Materials_science,Engineering | 531 |
24,884,148 | https://en.wikipedia.org/wiki/C%20Aquarii | The Bayer designation c Aquarii is shared by three stars in the constellation Aquarius:
c1 Aquarii or 86 Aquarii
c2 Aquarii or 88 Aquarii
c3 Aquarii or 89 Aquarii
c Aquarii
Aquarii, c | C Aquarii | Astronomy | 57 |
40,914,072 | https://en.wikipedia.org/wiki/Immunoadsorption | Immunoadsorption is a procedure that removes specific antibodies, such as blood group antibodies, from the blood. It is used when pathogenic antibodies need to be removed from the circulation.
The procedure generally takes about three to four hours.
Immunoadsorption was developed in the 1990s as a method of extracorporeal removal of molecules from the blood, in particular molecules of the immune system.
A number of different devices/columns exist on the market, each with a different active component to which the molecule of interest attaches, allowing for selectivity in the molecules that are removed.
Immunoadsorption may be used as an alternative to plasma exchange in certain conditions. Evidence of benefit is lacking in those with kidney problems. Concerns include that it is expensive.
Procedure
Dual column system
Blood first passes to a plasma filter. The plasma then passes on to an immunoadsorption column before returning to the patient. As the plasma is passing through one column, the second column is being regenerated. Once the first column is saturated, the flow switches to the second column while the first is regenerated.
1st step: the separation of plasma from the blood cells
2nd step: passage of the plasma through the immunoadsorption column
Treatment prescriptions for immunoadsorption are based on plasma volumes, with different recommendations for each condition; depending on the condition being treated, sessions can be daily or intermittent.
The therapy
Immunoadsorption can be used in various autoimmune-mediated neurological diseases in order to remove autoimmune antibodies and other pathological constituents from the patient's blood.
It is increasingly recognized as a more specific alternative and generally appreciated for its potentially advantageous safety profile.
Immunoadsorption is also used in kidney transplantation for either the preparation of the ABO-incompatible or the highly sensitized kidney transplant candidate before transplantation, or the treatment of antibody-mediated rejection after transplantation.
Indication
The most frequently encountered complication of immunoadsorption is an allergic reaction to the filter or adsorption column. Medication may be given before the procedure to minimize the risk.
Other side effects during the treatment could be dizziness, nausea or feeling cold.
The use of immunoadsorption as a medical procedure is still limited in some countries of the world, especially in North America. The additional costs of immunoadsorption are balanced by the reduced length of stay as well as the reduced need for plasma-substituting solutions and for the handling of side effects.
References
Further reading
Immunology | Immunoadsorption | Biology | 509 |
59,761,321 | https://en.wikipedia.org/wiki/NGC%202985 | NGC 2985 is a spiral galaxy located in the constellation Ursa Major. It is located at a distance of circa 70 million light years from Earth, which, given its apparent dimensions, means that NGC 2985 is about 95,000 light years across. It was discovered by William Herschel on April 3, 1785.
The galaxy is seen with an inclination of 37 degrees. The galaxy has a bright nucleus from which emanate multiple tightly wound spiral fragments. Numerous blue knots are visible at the galactic disk. At the outer part of the galaxy lies a massive spiral arm that forms a pseudoring that encircles the galaxy. The inner part of the galaxy, where active star formation has been observed, has been found to be unstable, contrary to the outer stable one. It has been suggested that the presence of molecular clouds accounts for the instability of the region.
The nucleus of NGC 2985 is active, and based on its spectrum has been categorised as a LINER. The most accepted theory for the activity source is the presence of an accretion disk around a supermassive black hole. The mass of the supermassive black hole at the centre of NGC 2985 is estimated to be 160 million (10^8.2) solar masses, based on stellar velocity dispersion. The velocity dispersion is anisotropic, and changes with the azimuth. The rotational speed of the galaxy at its effective radius is 222.9 ± 31.2 km/s.
NGC 2985 is the brightest member of a galaxy group known as the NGC 2985 group. Other members of the group include NGC 3027, 25 arcminutes away. Other nearby galaxies include NGC 3252, and NGC 3403.
References
External links
NGC 2985 on SIMBAD
Unbarred spiral galaxies
Ursa Major
5253
06426
28316
Astronomical objects discovered in 1785
Discoveries by William Herschel | NGC 2985 | Astronomy | 385 |
74,550,274 | https://en.wikipedia.org/wiki/RO5256390 | RO5256390 or RO-5256390 is a drug developed by Hoffmann-La Roche which acts as an agonist for the trace amine associated receptor 1 (TAAR1). It is a full agonist of the rat, cynomolgus monkey, and human TAAR1, but a partial agonist of the mouse TAAR1.
Pharmacology
Pharmacodynamics
Actions
RO5256390 is a full agonist of the rat, cynomolgus monkey, and human TAAR1, but a high-efficacy partial agonist of the mouse TAAR1.
Effects
RO5256390 has been found to suppress the firing rates of ventral tegmental area (VTA) dopaminergic neurons and dorsal raphe nucleus (DRN) serotonergic neurons in mouse brain slices ex vivo. This effect was absent in slices from TAAR1 knockout mice. Similarly, acute RO5256390 suppressed VTA dopaminergic and DRN serotonergic neuronal excitability in rats in vivo, whereas the excitability of locus coeruleus (LC) noradrenergic neurons was unaffected. In contrast with acute exposure, however, chronic administration of RO5256390 for 14 days increased the excitability of VTA dopaminergic and DRN serotonergic neurons. The drug has been found to dose-dependently block cocaine-induced inhibition of dopamine clearance (reuptake inhibition) in rat nucleus accumbens (NAc) slices ex vivo whilst having no effect on dopamine clearance by itself.
RO5256390 has been found to fully suppress the hyperlocomotion (a psychostimulant-like effect) induced by cocaine in rodents. In addition, it dose-dependently inhibited the hyperlocomotion induced by the NMDA receptor antagonists phencyclidine (PCP) and L-687,414. RO5256390 is said to produce a brain activity pattern similar to that of the antipsychotic olanzapine in rodents and hence is presumed to have antipsychotic-like properties. In contrast to classical antipsychotics however, RO5256390 did not produce extrapyramidal-like symptoms in rodents and instead could reduce the catalepsy induced by haloperidol. RO5256390 has been found to dose-dependently inhibit cocaine self-administration and context-triggered cocaine-seeking behavior in rodents.
RO5256390 shows robust aversive and locomotor-suppressing effects in rodents that are dependent on TAAR1 activation. Similar aversive effects have also been observed with other TAAR1 agonists like RO5263397 and RO5166017. RO5256390 has been shown to decrease motor hyperactivity, novelty-induced locomotor activity, and induce anxiolytic-like effects in the spontaneously hypertensive rat (SHR), a rodent model of attention deficit hyperactivity disorder (ADHD). In contrast to the TAAR1 partial agonist RO5263397, RO5256390 did not produce antidepressant-like effects in rodents. Conversely however, both agents produced antidepressant-like effects in monkeys.
RO5256390 has been found to produce pro-cognitive effects in rodents and monkeys. It has been shown to strongly suppress rapid eye movement (REM) sleep in rodents. On the other hand, it did not promote wakefulness in rodents. RO5256390 has been shown to block compulsive and binge-like eating behavior in rats. For this reason, it is being investigated as a potential drug to treat binge eating disorder.
History
RO5256390 was first described in the scientific literature by 2013.
See also
RO5073012 – TAAR1 weak partial agonist
RO5166017 – TAAR1 partial or full agonist
RO5203648 – TAAR1 partial agonist
RO5263397 – TAAR1 partial agonist
EPPTB – TAAR1 antagonist/inverse agonist
References
Amines
Oxazolines
TAAR1 agonists
TAAR1 antagonists | RO5256390 | Chemistry | 896 |
7,356,531 | https://en.wikipedia.org/wiki/PAL-M | PAL-M is the analogue colour TV system used in Brazil since early 1972, making it the first South American country to broadcast in colour.
It is unique among analogue TV systems in that it combines the 525-line 30 frames-per-second System M with the PAL colour encoding system (using very nearly the NTSC colour subcarrier frequency), unlike all other countries which pair PAL with 625-line systems and NTSC with 525-line systems.
Colour broadcasts began on 19 February 1972, when a TV station in Caxias do Sul, TV Difusora, transmitted the Caxias do Sul Grape Festival in collaboration with TV Rio. Transition from black and white to colour on most programmes was not complete until 1978, and only became commonplace nationwide by 1980.
Origins
NTSC being the "natural" choice for countries with monochrome standard M, the choice of a different colour system poses problems of incompatibility with available hardware and the need to develop new television sets and production hardware. Walter Bruch, inventor of PAL, explains Brazil's choice of PAL over NTSC against these odds by an advertising campaign Telefunken and Philips carried out across South America in 1972, which included colour test broadcasts of popular shows (done with TV Globo) and technical demonstrations with executives of television stations.
Technical specifications
PAL-M signals are in general identical to North American NTSC signals, except for the encoding of the colour carrier. Both systems are based on the monochrome CCIR System M standard; therefore, PAL-M will display in monochrome with sound on an NTSC set, and vice versa. Nevertheless, due to the different gamma correction values (2.2 for NTSC, 2.8 for PAL-M), gray tones will be incorrect.
PAL-M is incompatible with 625-line based versions of PAL, because its frame rate, scan line, colour subcarrier and sound carrier specifications are different. It will therefore usually give a rolling and/or squashed monochrome picture with no sound on a native European PAL television, as do NTSC signals.
PAL-M details:
Transmission band: VHF/UHF
Fields: 60
Scan lines: 525
Active lines: 480
Channel bandwidth: 6 MHz
Video bandwidth: 4.2 MHz
Vision/sound carrier spacing: 4.5 MHz
Colour subcarrier: 3.575611 MHz
Assumed receiver gamma: 2.8
Color model: YUV
PAL-M colorimetry:
Colorimetry is similar to the original 1953 color NTSC specification:
Standard: BT.470-6
White point: C
Color primaries:
Red: x 0.67; y 0.33
Green: x 0.21; y 0.71
Blue: x 0.14; y 0.08
PAL-M systems conversion issues
PAL-M being a standard unique to one country, the need to convert it to/from other standards often arises.
Conversion to/from NTSC is easy, as only the colour carrier needs to be changed. Frame rate and scan lines can remain untouched.
Conversion to/from PAL/625 lines/25 frame/s and SECAM/625/25 signals involves changing the frame rates as well as the scan lines. This is achieved using complicated circuitry involving a digital frame store, the same method used for converting between NTSC and the 625/25 standards. The fact that the colour encoding of PAL-M and PAL/625/25 is the same does not help, as the entire signal goes through an A/D-D/A conversion process anyway.
However some special VHS video recorders are available which can allow viewers the flexibility of enjoying PAL-M recordings using a standard PAL (625/50 Hz) colour TV, or even through multi-system TV sets. Video recorders like Panasonic NV-W1E (AG-W1 for the USA), AG-W2, AG-W3, NV-J700AM, Aiwa HV-MX100, HV-MX1U, Samsung SV-4000W and SV-7000W feature a digital TV system conversion circuitry. Some recorders support the other way around, being able to playback standard PAL (625/50 Hz) in 50 Hz-compatible PAL-M TV sets, such as the Panasonic NV-FJ605.
PAL 60
The PAL colour system (either baseband or with any RF system, with the normal 4.43 MHz subcarrier unlike PAL-M) can also be applied to an NTSC-like 525-line picture to form what is often known as "PAL-60" (sometimes "PAL-60/525," "Pseudo-PAL," or "Quasi-PAL"). This non-standard signal is a method used in European domestic VCRs and DVD players for playback of NTSC material on PAL televisions. It's not identical to PAL-M and incompatible with it, because the colour subcarrier is at a different frequency; it will therefore display in monochrome on PAL-M and NTSC television sets.
Technological obsolescence
SBTVD and ABERT/SET tests
The analog PAL-M was scheduled to be supplanted by a digital high-definition system named Sistema Brasileiro de Televisão Digital (SBTVD), with the transition beginning by 2015 and finishing in 2018. From 1999 to 2000, the ABERT/SET group in Brazil did system comparison tests of ATSC, DVB-T and ISDB-T under the supervision of the CPqD foundation.
Originally, Brazil, like Argentina, Paraguay and Uruguay, planned to adopt the DVB-T standard. However, the ABERT/SET group selected ISDB-T, after field-test results showed that it was the most robust system under Brazilian reception conditions. Therefore, SBTVD was replaced by the Brazilian variant of the ISDB standard, ISDB-Tb, which incorporates SBTVD's characteristics into the originally Japanese digital norm.
See also
Broadcast television systems
References
Television in Brazil
Television technology
Television transmission standards
Video formats | PAL-M | Technology | 1,241 |
31,773,185 | https://en.wikipedia.org/wiki/Stargate%20%28asterism%29 | The Stargate Asterism or Stargate Cluster is an asterism in the constellation Corvus consisting of six stars, also known as STF 1659.
Gallery
See also
Lists of stars
References
Asterisms (astronomy) | Stargate (asterism) | Astronomy | 48 |
2,287,401 | https://en.wikipedia.org/wiki/Crooked%20spire | A crooked spire (also known as a twisted spire) is a spire showing a twist and/or a deviation from the vertical. A church tower usually consists of a square stone tower topped with a pyramidal wooden structure; the spire is usually clad with slates or lead to protect the wood. Through accident or design the spire may contain a twist, or it may not point perfectly straight upwards. Some, however, have been built or rebuilt with a deliberate twist, generally as a design choice.
There are about a hundred bell towers of this type in Europe.
Reasons for spires to twist and bend
Twisting can be caused by internal or external forces. Internal conditions, such as green or unseasoned wood, can cause some twisting until the wood is fully seasoned, after about 50 years. Also, the weight of any lead used in construction can cause the wood to twist. Dry wood will shrink, causing further movement.
External forces, such as water ingress that causes rot, can cause partial collapse, resulting in tilting. Heat from the sun on one side can also cause movement. Earthquakes have also occasionally caused twisting. Subsidence can cause leaning. Strong winds have been blamed at times, but there is little evidence to back this up. Finally, weak design can be at fault, for instance with a lack of cross-bracing, resulting in the ability of the tower to move.
One legend relating to Chesterfield says that a virgin once married in the church, and the church was so surprised that the spire turned around to look at the bride. Another version of the myth common in Chesterfield is that the devil twisted the spire when a virgin married in the church, saying that he would untwist it when the next virgin got married there. A third myth says that the devil perched on the spire and twisted his tail around it to hold on, the twist of his tail transmitting to the structure.
List of twisted spires
References
Towers | Crooked spire | Engineering | 387 |
13,500,312 | https://en.wikipedia.org/wiki/Ideotype | In systematics, an ideotype is a specimen identified as belonging to a specific taxon by the author of that taxon, but collected from somewhere other than the type locality.
The concept of ideotype in plant breeding was introduced by Donald in 1968 to describe the idealized appearance of a plant variety. It literally means 'a form denoting an idea'. According to Donald, an ideotype is a biological model which is expected to perform or behave in a particular manner within a defined environment: "a crop ideotype is a plant model, which is expected to yield a greater quantity or quality of grain, oil or other useful product when developed as a cultivar." Donald and Hamblin (1976) proposed the concepts of isolation, competition and crop ideotypes. Market ideotype, climatic ideotype, edaphic ideotype, stress ideotype and disease/pest ideotypes are its other concepts. The term ideotype has the following synonyms: model plant type, ideal model plant type and ideal plant type.
The term is also used in cognitive science and cognitive psychology, where Ronaldo Vigo (2011, 2013, 2014) introduced it to refer to a type of concept metarepresentation that is a compound memory trace consisting of the structural information detected by humans in categorical stimuli.
Notes
Molecular biology
Botanical nomenclature | Ideotype | Chemistry,Biology | 281 |
54,626,053 | https://en.wikipedia.org/wiki/UGC%206614 | UGC 6614 is a giant spiral galaxy located about 330 million light-years away in the constellation Leo. It has an estimated diameter of nearly 300,000 light-years.
Physical characteristics
UGC 6614 is classified as a low surface brightness (LSB) galaxy. The galaxy is nearly face-on and has a ring-like feature around its bulge, with distinctive extended spiral arms. The bulge of UGC 6614 is found to be red, similar to those of S0 and other elliptical galaxies, hinting at the existence of an old star population. In its center, globular clusters are present.
It is hypothesised UGC 6614 might be a giant elliptical galaxy, but because of repeated mergers with other disk galaxies, it shows a stellar disk structure, causing its spiral-like appearance.
UGC 6614 possibly shows the highest metallicity known for an LSB galaxy, with an estimated log(O/H) value of about −3 to −2.84. Its nucleus shows AGN activity at optical wavelengths and appears as a bright core in X-ray emission, according to XMM-Newton archival data.
Black hole
UGC 6614 contains a supermassive black hole in its center with an estimated mass of 3.8 × 10^6 solar masses.
Unconfirmed Supernova
AT 2020ojw, an astronomical transient, was discovered in UGC 6614 in July 2020 by ATLAS (Asteroid Terrestrial-impact Last Alert System). It had a magnitude of 18.4 and is a candidate supernova.
Group Membership
UGC 6614 is a member of a small group of 3 galaxies known as [T2015] nest 100958. [T2015] nest 100958 has a velocity dispersion of 244 km/s and an estimated mass of 1.38 × 10^13 M☉. Other members of the group include its brightest member, NGC 3767, and CGCG 097-024. The group is part of the Coma Supercluster.
See also
NGC 45
Low-surface-brightness galaxy
References
External links
Unbarred spiral galaxies
Leo (constellation)
06614
036122
+03-30-029
Ring galaxies
Low surface brightness galaxies
T2015 nest 100958 | UGC 6614 | Astronomy | 460 |
2,627,025 | https://en.wikipedia.org/wiki/Obligatory%20passage%20point | The concept of an obligatory passage point (OPP) was developed by sociologist Michel Callon in a seminal contribution to actor–network theory: Callon, Michel (1986), "Elements of a sociology of translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay". In John Law (Ed.), Power, Action and Belief: A New Sociology of Knowledge? London, Routledge: 196–233.
Obligatory passage points are a feature of actor-networks, usually associated with the initial (problematization) phase of a translation process. An OPP can be thought of as the narrow end of a funnel, that forces the actors to converge on a certain topic, purpose or question. The OPP thereby becomes a necessary element for the formation of a network and an action program. The OPP thereby mediates all interactions between actors in a network and defines the action program. Obligatory passage points allow for local networks to set up negotiation spaces that allow them a degree of autonomy from the global network of involved actors.
If a project is unable to impose itself as a strong OPP between the global and local networks, it has no control over global resources such as financial and political support, which can be misused or withdrawn. Additionally, a weak OPP is unable to take credit for the successes achieved within the local network, as outside actors are able to bypass its control and influence the local network directly.
An action program can comprise a number of different OPPs. An OPP can also be redefined as the problematization phase is revisited.
In Callon and Law's "Engineering and Sociology in a Military Aircraft Project", the project management of a project to design a new strategic jet fighter for the British Military became an obligatory passage point between representatives of government and aerospace engineers.
In recent years, the notion of the obligatory passage point has taken hold in information systems security and information privacy disciplines and journals. Backhouse et al. (2006) illustrated how practices and policies are standardized and institutionalized through OPP.
References
Actor-network theory | Obligatory passage point | Technology | 421 |
33,846,186 | https://en.wikipedia.org/wiki/Square%20root%20of%20a%202%20by%202%20matrix | A square root of a 2×2 matrix M is another 2×2 matrix R such that M = R², where R² stands for the matrix product of R with itself. In general, there can be zero, two, four, or even an infinitude of square-root matrices. In many cases, such a matrix R can be obtained by an explicit formula.
Square roots that are not the all-zeros matrix come in pairs: if R is a square root of M, then −R is also a square root of M, since (−R)(−R) = (−1)(−1)(RR) = R² = M. A 2×2 matrix with two distinct nonzero eigenvalues has four square roots. A positive-definite matrix has precisely one positive-definite square root.
A general formula
The following is a general formula that applies to almost any 2 × 2 matrix. Let the given matrix be
\[ M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, \]
where A, B, C, and D may be real or complex numbers. Furthermore, let τ = A + D be the trace of M, and δ = AD − BC be its determinant. Let s be such that s² = δ, and t be such that t² = τ + 2s. That is,
\[ s = \pm\sqrt{\delta}, \qquad t = \pm\sqrt{\tau + 2s}. \]
Then, if t ≠ 0, a square root of M is
\[ R = \frac{1}{t}\begin{pmatrix} A + s & B \\ C & D + s \end{pmatrix}. \]
Indeed, the square of R is
\[ R^2 = \frac{1}{t^2}\begin{pmatrix} (A+s)^2 + BC & B(\tau + 2s) \\ C(\tau + 2s) & (D+s)^2 + BC \end{pmatrix} = \begin{pmatrix} A & B \\ C & D \end{pmatrix} = M, \]
using s² + BC = AD, so that (A + s)² + BC = A(τ + 2s) = A t², and similarly (D + s)² + BC = D t².
Note that R may have complex entries even if M is a real matrix; this will be the case, in particular, if the determinant δ is negative.
The general case of this formula is when δ is nonzero, and τ² ≠ 4δ, in which case s is nonzero, and t is nonzero for each choice of sign of s. Then the formula above will provide four distinct square roots R, one for each choice of signs for s and t.
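A short numerical check of this formula can be written in Python (a sketch; the example matrix and the function name are chosen purely for illustration and are not from any source):

```python
import cmath

def sqrt_2x2(A, B, C, D, sign_s=1, sign_t=1):
    """One square root of [[A, B], [C, D]] via the trace/determinant formula above."""
    tau = A + D                      # trace of M
    delta = A * D - B * C            # determinant of M
    s = sign_s * cmath.sqrt(delta)   # s with s**2 == delta
    t = sign_t * cmath.sqrt(tau + 2 * s)
    if t == 0:
        raise ValueError("t = 0: this sign choice is not covered by the formula")
    return [[(A + s) / t, B / t],
            [C / t, (D + s) / t]]

# Example: M = [[33, 24], [48, 57]] has [[5, 2], [4, 7]] as one of its square roots.
A, B, C, D = 33.0, 24.0, 48.0, 57.0
R = sqrt_2x2(A, B, C, D)
RR = [[R[0][0] * R[0][0] + R[0][1] * R[1][0], R[0][0] * R[0][1] + R[0][1] * R[1][1]],
      [R[1][0] * R[0][0] + R[1][1] * R[1][0], R[1][0] * R[0][1] + R[1][1] * R[1][1]]]
print(R)    # approximately [[5, 2], [4, 7]]
print(RR)   # approximately [[33, 24], [48, 57]]: squaring R recovers M
```

Choosing the other signs for s and t yields the remaining square roots, when they exist.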
Special cases of the formula
If the determinant δ is zero, but the trace τ is nonzero, the general formula above will give only two distinct solutions, corresponding to the two signs of t. Namely,
\[ R = \frac{1}{t}\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \frac{M}{t}, \]
where t is any square root of the trace τ.
The formula also gives only two distinct solutions if δ is nonzero, and τ² = 4δ (the case of duplicate eigenvalues), in which case one of the choices for s will make the denominator t be zero. In that case, the two roots are
\[ R = \pm\frac{1}{t}\begin{pmatrix} A - s & B \\ C & D - s \end{pmatrix}, \]
where s is the square root of δ that makes τ − 2s nonzero, and t is any square root of τ − 2s.
The formula above fails completely if δ and τ are both zero; that is, if D = −A, and A² = −BC, so that both the trace and the determinant of the matrix are zero. In this case, if M is the null matrix (with A = B = C = D = 0), then the null matrix is also a square root of M, as is any matrix of the form
\[ R = \begin{pmatrix} \pm\sqrt{-bc} & b \\ c & \mp\sqrt{-bc} \end{pmatrix}, \]
where b and c are arbitrary real or complex values. Otherwise M has no square root.
Formulas for special matrices
Idempotent matrix
If M is an idempotent matrix, meaning that MM = M, then if it is not the identity matrix, its determinant is zero, and its trace equals its rank, which (excluding the zero matrix) is 1. Then the above formula has s = 0 and τ = 1, giving M and −M as two square roots of M.
Exponential matrix
If the matrix M can be expressed as a real multiple of the exponential of some matrix A, that is M = k·e^A for a real number k, then two of its square roots are ±√k·e^(A/2). In this case (for positive k and real A) the square root is real.
Diagonal matrix
If M is diagonal (that is, B = C = 0), one can use the simplified formula
\[ R = \begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix}, \]
where a = ±√A, and d = ±√D. This, for the various sign choices, gives four, two, or one distinct matrices, if none of, only one of, or both A and D are zero, respectively.
Identity matrix
Because it has duplicate eigenvalues, the 2×2 identity matrix has infinitely many symmetric rational square roots given by
\[ R = \frac{1}{r}\begin{pmatrix} s & t \\ t & -s \end{pmatrix}, \]
where r, s and t are any complex numbers (rational numbers, for a rational square root) such that s² + t² = r².
Matrix with one off-diagonal zero
If B is zero, but A and D are not both zero, one can use
\[ R = \begin{pmatrix} a & 0 \\ \dfrac{C}{a+d} & d \end{pmatrix}, \qquad a = \pm\sqrt{A},\; d = \pm\sqrt{D},\; a + d \neq 0. \]
This formula will provide two solutions if A = D or A = 0 or D = 0, and four otherwise. A similar formula can be used when C is zero, but A and D are not both zero.
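For the case B = 0, squaring the matrix above shows directly why the formula works (a short check, with a = ±√A, d = ±√D and a + d ≠ 0 as above):
\[
\begin{pmatrix} a & 0 \\ \dfrac{C}{a+d} & d \end{pmatrix}^{2}
= \begin{pmatrix} a^{2} & 0 \\ \dfrac{C(a+d)}{a+d} & d^{2} \end{pmatrix}
= \begin{pmatrix} A & 0 \\ C & D \end{pmatrix} = M .
\]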
Real matrices with real square roots
The algebra M(2, R) of 2×2 real matrices has three types of planar subalgebras, corresponding to the ordinary complex numbers, the dual numbers, and the split-complex numbers. Each subalgebra admits the exponential map. If p = exp(q) for some q in the subalgebra, then ±exp(q/2) are square roots of p. The condition that the matrix is the image under exp limits it to half the plane of dual numbers, and to a quarter of the plane of split-complex numbers, but does not constrain ordinary complex planes since the exponential mapping covers them. In the split-complex case there are two more square roots of p since each quadrant contains one.
References
Matrices | Square root of a 2 by 2 matrix | Mathematics | 1,047 |
6,040,372 | https://en.wikipedia.org/wiki/Physiology%20of%20dinosaurs | The physiology of dinosaurs has historically been a controversial subject, particularly their thermoregulation. Recently, many new lines of evidence have been brought to bear on dinosaur physiology generally, including not only metabolic systems and thermoregulation, but on respiratory and cardiovascular systems as well.
During the early years of dinosaur paleontology, it was widely considered that they were sluggish, cumbersome, and sprawling cold-blooded lizards. However, with the discovery of much more complete skeletons in western United States, starting in the 1870s, scientists could make more informed interpretations of dinosaur biology and physiology. Edward Drinker Cope, opponent of Othniel Charles Marsh in the Bone Wars, propounded at least some dinosaurs as active and agile, as seen in the painting of two fighting Laelaps produced under his direction by Charles R. Knight.
In parallel, the development of Darwinian evolution, and the discoveries of Archaeopteryx and Compsognathus, led Thomas Henry Huxley to propose that dinosaurs were closely related to birds. Despite these considerations, the image of dinosaurs as large reptiles had already taken root, and most aspects of their paleobiology were interpreted as being typically reptilian for the first half of the twentieth century. Beginning in the 1960s and with the advent of the Dinosaur Renaissance, views of dinosaurs and their physiology have changed dramatically, including the discovery of feathered dinosaurs in Early Cretaceous age deposits in China, indicating that birds evolved from highly agile maniraptoran dinosaurs.
History
Early interpretations
The study of dinosaurs began in the 1820s in England. Pioneers in the field, such as William Buckland, Gideon Mantell, and Richard Owen, interpreted the first, very fragmentary remains as belonging to large quadrupedal beasts. Their early work can be seen today in the Crystal Palace Dinosaurs, constructed in the 1850s, which present known dinosaurs as elephantine lizard-like reptiles. Despite these reptilian appearances, Owen speculated that dinosaur heart and respiratory systems were more similar to that of a mammal than a reptile.
Changing views and the dinosaur renaissance
In the late 1960s, similar ideas reappeared, beginning with John Ostrom's work on Deinonychus and bird evolution. His student, Bob Bakker, popularized the changing thought in a series of papers beginning with The superiority of dinosaurs in 1968. In these publications, he argued strenuously that dinosaurs were warm-blooded and active animals, capable of sustained periods of high activity. In most of his writings Bakker framed his arguments as new evidence leading to a revival of ideas popular in the late 19th century, frequently referring to an ongoing dinosaur renaissance. He used a variety of anatomical and statistical arguments to defend his case, the methodology of which was fiercely debated among scientists.
These debates sparked interest in new methods for ascertaining the palaeobiology of extinct animals, such as bone histology, which have been successfully applied to determining the growth-rates of many dinosaurs.
Today, it is generally thought that many or perhaps all dinosaurs had higher metabolic rates than living reptiles, but also that the situation is more complex and varied than Bakker originally proposed. For example, while smaller dinosaurs may have been true endotherms, the larger forms could have been inertial homeotherms, or that many dinosaurs could have had intermediate metabolic rates.
Feeding and digestion
The earliest dinosaurs were almost certainly predators, and shared several predatory features with their nearest non-dinosaur relatives like Lagosuchus, including: relatively large, curved, blade-like teeth in large, wide-opening jaws that closed like scissors; relatively small abdomens, as carnivores do not require large digestive systems. Later dinosaurs regarded as predators sometimes grew much larger, but retained the same set of features. Instead of chewing their food, these predators swallowed it whole.
The feeding habits of ornithomimosaurs and oviraptorosaurs are a mystery: although they evolved from a predatory theropod lineage, they have small jaws and lack the blade-like teeth of typical predators, but there is no evidence of their diet or how they ate and digested it.
Features of other groups of dinosaurs indicate they were herbivores. These features include:
Jaws that only slightly opened and closed so that all the teeth met at the same time
Large abdomens that could accommodate large amounts of vegetation and store it for the long time it takes to digest vegetation
Guts that likely contained endosymbiotic micro-organisms that digest cellulose, as no known animal can digest this tough material directly
Sauropods, which were herbivores, did not chew their food, as their teeth and jaws appear suitable only for stripping leaves off plants. Ornithischians, also herbivores, show a variety of approaches. The armored ankylosaurs and stegosaurs had small heads and weak jaws and teeth, and are thought to have fed in much the same way as sauropods. The pachycephalosaurs had small heads and weak jaws and teeth, but their lack of large digestive systems suggests a different diet, possibly fruits, seeds, or young shoots, which would have been more nutritious to them than leaves.
On the other hand, ornithopods such as Hypsilophodon, Iguanodon and various hadrosaurs had horny beaks for snipping off vegetation and jaws and teeth that were well-adapted for chewing. The horned ceratopsians had similar mechanisms.
It has often been suggested that at least some dinosaurs used swallowed stones, known as gastroliths, to aid digestion by grinding their food in muscular gizzards, and that this was a feature they shared with birds. In 2007 Oliver Wings reviewed references to gastroliths in scientific literature and found considerable confusion, starting with the lack of an agreed and objective definition of "gastrolith". He found that swallowed hard stones or grit can assist digestion in birds that mainly feed on grain but may not be essential—and that birds that eat insects in summer and grain in winter usually get rid of the stones and grit in summer. Gastroliths have often been described as important for sauropod dinosaurs, whose diet of vegetation required very thorough digestion, but Wings concluded that this idea was incorrect: gastroliths are found with only a small percentage of sauropod fossils; where they have been found, the amounts are too small and in many cases the stones are too soft to have been effective in grinding food; most of these gastroliths are highly polished, but gastroliths used by modern animals to grind food are roughened by wear and corroded by stomach acids; hence the sauropod gastroliths were probably swallowed accidentally. On the other hand, he concluded that gastroliths found with fossils of advanced theropod dinosaurs such as Sinornithomimus and Caudipteryx resemble those of birds, and that the use of gastroliths for grinding food may have appeared early in the group of dinosaurs from which these dinosaurs and birds both evolved.
Reproductive biology
When laying eggs, female birds grow a special type of bone in their limbs between the hard outer bone and the marrow. This medullary bone, which is rich in calcium, is used to make eggshells, and the birds that produced it absorb it when they have finished laying eggs. Medullary bone has been found in fossils of the theropods Tyrannosaurus and Allosaurus and of the ornithopod Tenontosaurus.
Because the line of dinosaurs that includes Allosaurus and Tyrannosaurus diverged from the line that led to Tenontosaurus very early in the evolution of dinosaurs, the presence of medullary bone in both groups suggests that dinosaurs in general produced medullary tissue. On the other hand, crocodilians, which are dinosaurs' second closest extant relatives after birds, do not produce medullary bone. This tissue may have first appeared in ornithodires, the Triassic archosaur group from which dinosaurs are thought to have evolved.
Medullary bone has been found in specimens of sub-adult size, which suggests that dinosaurs reached sexual maturity before they were full-grown. Sexual maturity at sub-adult size is also found in reptiles and in medium- to large-sized mammals, but birds and small mammals reach sexual maturity only after they are full-grown—which happens within their first year. Early sexual maturity is also associated with specific features of animals' life cycles: the young are born relatively well-developed rather than helpless; and the death-rate among adults is high.
Respiratory system
Air sacs
From about 1870 onwards scientists have generally agreed that the post-cranial skeletons of many dinosaurs contained many air-filled cavities (postcranial skeletal pneumaticity), especially in the vertebrae. Pneumatization of the skull (such as paranasal sinuses) is found in both synapsids and archosaurs, but postcranial pneumatization is found only in birds, non-avian saurischian dinosaurs, and pterosaurs.
For a long time these cavities were regarded simply as weight-saving devices, but Bakker proposed that they were connected to air sacs like those that make birds' respiratory systems the most efficient of all animals'.
John Ruben et al. (1997, 1999, 2003, 2004) disputed this and suggested that dinosaurs had a "tidal" respiratory system (in and out) powered by a crocodile-like hepatic piston mechanism – muscles attached mainly to the pubis pull the liver backwards, which makes the lungs expand to inhale; when these muscles relax, the lungs return to their previous size and shape, and the animal exhales. They also presented this as a reason for doubting that birds descended from dinosaurs.
Critics have claimed that, without avian air sacs, modest improvements in a few aspects of a modern reptile's circulatory and respiratory systems would enable the reptile to achieve 50% to 70% of the oxygen flow of a mammal of similar size, and that lack of avian air sacs would not prevent the development of endothermy. Very few formal rebuttals have been published in scientific journals of Ruben et al.'s claim that dinosaurs could not have had avian-style air sacs; but one points out that the Sinosauropteryx fossil on which they based much of their argument was severely flattened and therefore it was impossible to tell whether the liver was the right shape to act as part of a hepatic piston mechanism. Some recent papers simply note without further comment that Ruben et al. argued against the presence of air sacs in dinosaurs.
Researchers have presented evidence and arguments for air sacs in sauropods, "prosauropods", coelurosaurs, ceratosaurs, and the theropods Aerosteon and Coelophysis.
In advanced sauropods ("neosauropods") the vertebrae of the lower back and hip regions show signs of air sacs. In early sauropods only the cervical (neck) vertebrae show these features. If the developmental sequence found in bird embryos is a guide, air sacs actually evolved before the channels in the skeleton that accommodate them in later forms.
Evidence of air sacs has also been found in theropods. Studies indicate that fossils of coelurosaurs, ceratosaurs, and the theropods Coelophysis and Aerosteon exhibit evidence of air sacs. Coelophysis, from the late Triassic, is one of the earliest dinosaurs whose fossils show evidence of channels for air sacs. Aerosteon, a Late Cretaceous allosaur, had the most bird-like air sacs found so far.
Early sauropodomorphs, including the group traditionally called "prosauropods", may also have had air sacs. Although possible pneumatic indentations have been found in Plateosaurus and Thecodontosaurus, the indentations are very small. One study in 2007 concluded that prosauropods likely had abdominal and cervical air sacs, based on the evidence for them in sister taxa (theropods and sauropods). The study concluded that it was impossible to determine whether prosauropods had a bird-like flow-through lung, but that the air sacs were almost certainly present. A further indication for the presence of air sacs and their use in lung ventilation comes from a reconstruction of the air exchange volume (the volume of air exchanged with each breath) of Plateosaurus, which when expressed as a ratio of air volume per body weight at 29 ml/kg is similar to values of geese and other birds, and much higher than typical mammalian values.
So far no evidence of air sacs has been found in ornithischian dinosaurs. But this does not imply that ornithischians could not have had metabolic rates comparable to those of mammals, since mammals also do not have air sacs.
Three explanations have been suggested for the development of air sacs in dinosaurs:
Increase in respiratory capacity. This is probably the most common hypothesis, and fits well with the idea that many dinosaurs had fairly high metabolic rates.
Improving balance and maneuvrability by lowering the center of gravity and reducing rotational inertia. However this does not explain the expansion of air sacs in the quadrupedal sauropods.
As a cooling mechanism. It seems that air sacs and feathers evolved at about the same time in coelurosaurs. If feathers retained heat, their owners would have required a means of dissipating excess heat. This idea is plausible but needs further empirical support.
Calculations of the volumes of various parts of the sauropod Apatosaurus respiratory system support the evidence of bird-like air sacs in sauropods:
Assuming that Apatosaurus, like dinosaurs' nearest surviving relatives crocodilians and birds, did not have a diaphragm, the dead-space volume of a 30-ton specimen would be about 184 liters. This is the total volume of the mouth, trachea and air tubes. If the animal exhales less than this, stale air is not expelled and is sucked back into the lungs on the following inhalation.
Estimates of its tidal volume – the amount of air moved into or out of the lungs in a single breath – depend on the type of respiratory system the animal had: 904 liters if avian; 225 liters if mammalian; 19 liters if reptilian.
On this basis, Apatosaurus could not have had a reptilian respiratory system, as its tidal volume would have been less than its dead-space volume, so that stale air was not expelled but was sucked back into the lungs. Likewise, a mammalian system would only provide to the lungs about 225 − 184 = 41 liters of fresh, oxygenated air on each breath. Apatosaurus must therefore have had either a system unknown in the modern world or one like birds', with multiple air sacs and a flow-through lung. Furthermore, an avian system would only need a lung volume of about 600 liters while a mammalian one would have required about 2,950 liters, which would exceed the estimated 1,700 liters of space available in a 30-ton Apatosaurus′ chest.
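The arithmetic behind this comparison can be restated in a few lines of Python (a sketch that simply re-uses the estimates quoted above; no new figures are introduced):

```python
# Dead-space argument for a hypothetical 30-ton Apatosaurus, using the quoted estimates.
DEAD_SPACE_L = 184            # mouth + trachea + air tubes, in litres
TIDAL_VOLUME_L = {            # air moved per breath under each assumed lung model
    "avian": 904,
    "mammalian": 225,
    "reptilian": 19,
}

for model, tidal in TIDAL_VOLUME_L.items():
    fresh_air = tidal - DEAD_SPACE_L   # air actually reaching the gas-exchange surfaces
    verdict = "workable" if fresh_air > 0 else "stale air re-inhaled, not viable"
    print(f"{model:>9}: {tidal:4d} L/breath -> {fresh_air:5d} L fresh air ({verdict})")
```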
Dinosaur respiratory systems with bird-like air sacs may have been capable of sustaining higher activity levels than mammals of similar size and build can sustain. In addition to providing a very efficient supply of oxygen, the rapid airflow would have been an effective cooling mechanism, which is essential for animals that are active but too large to get rid of all the excess heat through their skins.
The palaeontologist Peter Ward has argued that the evolution of the air sac system, which first appears in the very earliest dinosaurs, may have been in response to the very low (11%) atmospheric oxygen of the Carnian and Norian ages of the Triassic Period.
Uncinate processes on the ribs
Birds have spurs called "uncinate processes" on the rear edges of their ribs, and these give the chest muscles more leverage when pumping the chest to improve oxygen supply. The size of the uncinate processes is related to the bird's lifestyle and oxygen requirements: they are shortest in walking birds and longest in diving birds, which need to replenish their oxygen reserves quickly when they surface. Non-avian maniraptoran dinosaurs also had these uncinate processes, and they were proportionately as long as in modern diving birds, which indicates that maniraptorans needed a high-capacity oxygen supply.
Plates that may have functioned the same way as uncinate processes have been observed in fossils of the ornithischian dinosaur Thescelosaurus, and have been interpreted as evidence of high oxygen consumption and therefore high metabolic rate.
Nasal turbinates
Nasal turbinates are convoluted structures of thin bone in the nasal cavity. In most mammals and birds these are present and lined with mucous membranes that perform two functions. They improve the sense of smell by increasing the area available to absorb airborne chemicals, and they warm and moisten inhaled air, and extract heat and moisture from exhaled air to prevent desiccation of the lungs.
John Ruben and others have argued that no evidence of nasal turbinates has been found in dinosaurs. All the dinosaurs they examined had nasal passages that were too narrow and short to accommodate nasal turbinates, so dinosaurs could not have sustained the breathing rate required for a mammal-like or bird-like metabolic rate while at rest, because their lungs would have dried out. However, objections have been raised against this argument. Nasal turbinates are absent or very small in some birds (e.g. ratites, Procellariiformes and Falconiformes) and mammals (e.g. whales, anteaters, bats, elephants, and most primates), although these animals are fully endothermic and in some cases very active. Other studies conclude that nasal turbinates are fragile and seldom found in fossils. In particular none have been found in fossil birds.
In 2014 Jason Bourke and others in Anatomical Record reported finding nasal turbinates in pachycephalosaurs.
Cardiovascular system
In principle one would expect dinosaurs to have had two-part circulations driven by four-chambered hearts, since many would have needed high blood pressure to deliver blood to their heads, which were high off the ground, but vertebrate lungs can only tolerate fairly low blood pressure. In 2000, a skeleton of Thescelosaurus, now on display at the North Carolina Museum of Natural Sciences, was described as including the remnants of a four-chambered heart and an aorta. The authors interpreted the structure of the heart as indicating an elevated metabolic rate for Thescelosaurus, not reptilian cold-bloodedness. Their conclusions have been disputed; other researchers published a paper where they assert that the heart is really a concretion of entirely mineral "cement". As they note: the anatomy given for the object is incorrect, for example the alleged "aorta" is narrowest where it meets the "heart" and lacks arteries branching from it; the "heart" partially engulfs one of the ribs and has an internal structure of concentric layers in some places; and another concretion is preserved behind the right leg. The original authors defended their position; they agreed that the chest did contain a type of concretion, but one that had formed around and partially preserved the more muscular portions of the heart and aorta.
Regardless of the object's identity, it may have little relevance to dinosaurs' internal anatomy and metabolic rate. Both modern crocodilians and birds, the closest living relatives of dinosaurs, have four-chambered hearts, although modified in crocodilians, and so dinosaurs probably had them as well. However such hearts are not necessarily tied to metabolic rate.
Growth and lifecycle
No dinosaur egg has been found that is larger than a basketball and embryos of large dinosaurs have been found in relatively small eggs, e.g. Maiasaura. Like mammals, dinosaurs stopped growing when they reached the typical adult size of their species, while mature reptiles continued to grow slowly if they had enough food. Dinosaurs of all sizes grew faster than similarly sized modern reptiles; but the results of comparisons with similarly sized "warm-blooded" modern animals depend on their sizes:
Tyrannosaurus rex showed a "teenage growth spurt":
½ ton at age 10
very rapid growth to around 2 tons in the mid-teens (about ½ ton per year).
negligible growth after the second decade.
A 2008 study of one skeleton of the hadrosaur Hypacrosaurus concluded that this dinosaur grew even faster, reaching its full size at the age of about 15; the main evidence was the number and spacing of growth rings in its bones. The authors found this consistent with a life-cycle theory that prey species should grow faster than their predators if they lose a lot of juveniles to predators and the local environment provides enough resources for rapid growth.
It appears that individual dinosaurs were rather short-lived, e.g. the oldest (at death) Tyrannosaurus found so far was 28 and the oldest sauropod was 38. Predation was probably responsible for the high death rate of very young dinosaurs and sexual competition for the high death rate of sexually mature dinosaurs.
Metabolism
Scientific opinion about the life-style, metabolism and temperature regulation of dinosaurs has varied over time since the discovery of dinosaurs in the mid-19th century. The activity of metabolic enzymes varies with temperature, so temperature control is vital for any organism, whether endothermic or ectothermic. Organisms can be categorized as poikilotherms (poikilo – changing), which are tolerant of internal temperature fluctuations, and homeotherms (homeo – same), which must maintain a constant core temperature. Animals can be further categorized as endotherms, which regulate their temperature internally, and ectotherms, which regulate temperature by the use of external heat sources.
"Warm-bloodedness" is a complex and rather ambiguous term, because it includes some or all of:
Homeothermy, i.e. maintaining a fairly constant body temperature. Modern endotherms maintain a variety of set-point temperatures: lowest in monotremes and sloths, somewhat higher in marsupials, higher still in most placentals, and highest in birds.
Tachymetabolism, i.e. maintaining a high metabolic rate, particularly when at rest. This requires a fairly high and stable body temperature, since biochemical processes run about half as fast if an animal's temperature drops by 10°C (a rule of thumb illustrated below, after this list); most enzymes have an optimum operating temperature and their efficiency drops rapidly outside the preferred range.
Endothermy, i.e. the ability to generate heat internally, for example by "burning" fat, rather than via behaviors such as basking or muscular activity. Although endothermy is in principle the most reliable way to maintain a fairly constant temperature, it is expensive; for example modern mammals need 10 to 13 times as much food as modern reptiles.
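The "half as fast per 10 °C" rule of thumb cited for tachymetabolism corresponds to a temperature coefficient Q10 of about 2. As a worked illustration (the temperatures used here are arbitrary example values), the rate R at temperature T_2 relative to a reference temperature T_1 is

R(T_2) = R(T_1) \, Q_{10}^{(T_2 - T_1)/10}, \qquad Q_{10} \approx 2,

so a drop from T_1 = 37 °C to T_2 = 27 °C gives R(T_2) \approx R(T_1) \times 2^{-1}, i.e. roughly half the original rate.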
Large dinosaurs may also have maintained their temperatures by inertial homeothermy, also known as "bulk homeothermy" or "mass homeothermy". In other words, the thermal capacity of such large animals was so high that it would take two days or more for their temperatures to change significantly, and this would have smoothed out variations caused by daily temperature cycles. This smoothing effect has been observed in large turtles and crocodilians, but Plateosaurus may have been the smallest dinosaur in which it would have been effective. Inertial homeothermy would not have been possible for small species nor for the young of larger species. Vegetation fermenting in the guts of large herbivores can also produce considerable heat, but this method of maintaining a high and stable temperature would not have been possible for carnivores or for small herbivores or the young of larger herbivores.
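A simple lumped-capacitance estimate shows why sheer bulk slows temperature change; the spherical-body idealisation and constant surface heat-transfer coefficient assumed here are illustrative, not measurements of any dinosaur. The thermal time constant of a body of density \rho, specific heat c, volume V and surface area A exchanging heat with coefficient h is

\tau = \frac{\rho \, c \, V}{h \, A}, \qquad \frac{V}{A} = \frac{r}{3} \text{ for a sphere of radius } r,

so \tau grows in proportion to linear size: doubling an animal's linear dimensions roughly doubles the time its core temperature needs to respond to a change in ambient temperature, which is why a multi-tonne body can ride out day–night cycles that strongly affect a small one.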
Since the internal mechanisms of extinct creatures are unknowable, most discussion focuses on homeothermy and tachymetabolism.
Assessment of metabolic rates is complicated by the distinction between the rates while resting and while active. In all modern reptiles and most mammals and birds the maximum rates during all-out activity are 10 to 20 times higher than minimum rates while at rest. However, in a few mammals these rates differ by a factor of 70. Theoretically it would be possible for a land vertebrate to have a reptilian metabolic rate at rest and a bird-like rate while working flat out. However, an animal with such a low resting rate would be unable to grow quickly. The huge herbivorous sauropods may have been on the move so constantly in search of food that their energy expenditure would have been much the same irrespective of whether their resting metabolic rates were high or low.
Theories
The main possibilities are that:
Dinosaurs were cold-blooded, like modern reptiles, except that the large size of many would have stabilized their body temperatures.
They were warm-blooded, more like modern mammals or birds than modern reptiles.
They were neither cold-blooded nor warm-blooded in modern terms, but had metabolisms that were different from and in some ways intermediate between those of modern cold-blooded and warm-blooded animals.
They included animals with two or three of these types of metabolism.
Dinosaurs were around for about 150 million years, so it is very likely that different groups evolved different metabolisms and thermoregulatory regimes, and that some developed different physiologies from the first dinosaurs.
If all or some dinosaurs had intermediate metabolisms, they may have had the following features:
Low resting metabolic rates—which would reduce the amount of food they needed and allow them to use more of that food for growth than do animals with high resting metabolic rates.
Inertial homeothermy
The ability to control heat loss by expanding and contracting blood vessels just under the skin, as many modern reptiles do.
Two-part circulations driven by four-chambered hearts.
High aerobic capacity, allowing sustained activity.
Robert Reid has suggested that such animals could be regarded as "failed endotherms". He envisaged both dinosaurs and the Triassic ancestors of mammals passing through a stage with these features. Mammals were forced to become smaller as archosaurs came to dominate ecological niches for medium to large animals. Their decreasing size made them more vulnerable to heat loss because it increased their ratios of surface area to mass, and thus forced them to increase internal heat generation and thus become full endotherms. On the other hand, dinosaurs became medium to very large animals and thus were able to retain the "intermediate" type of metabolism.
Bone structure
Armand de Ricqlès discovered Haversian canals in dinosaur bones, and argued that these were evidence of endothermy in dinosaurs. These canals are common in "warm-blooded" animals and are associated with fast growth and an active life style because they help to recycle bone to facilitate rapid growth and repair damage caused by stress or injuries. Dense secondary Haversian bone, which is formed during remodeling, is found in many living endotherms as well as dinosaurs, pterosaurs and therapsids. Secondary Haversian canals are correlated with size and age, mechanical stress and nutrient turnover. The presence of secondary Haversian canals suggests comparable bone growth and lifespans in mammals and dinosaurs. Bakker argued that the presence of fibrolamellar bone (produced quickly and having a fibrous, woven appearance) in dinosaur fossils was evidence of endothermy.
However, as a result of other, mainly later research, bone structure is not considered a reliable indicator of metabolism in dinosaurs, mammals or reptiles:
Dinosaur bones often contain lines of arrested growth (LAGs), formed by alternating periods of slow and fast growth; in fact many studies count growth rings to estimate the ages of dinosaurs. The formation of growth rings is usually driven by seasonal changes in temperature, and this seasonal influence has sometimes been regarded as a sign of slow metabolism and ectothermy. But growth rings are found in polar bears and in mammals that hibernate. The relationship between LAGs and seasonal growth dependency remains unresolved.
Fibrolamellar bone is fairly common in young crocodilians and sometimes found in adults.
Haversian bone has been found in turtles, crocodilians and tortoises, but is often absent in small birds, bats, shrews and rodents.
Nevertheless, de Ricqlès persevered with studies of the bone structure of dinosaurs and archosaurs. In mid-2008 he co-authored a paper that examined bone samples from a wide range of archosaurs, including early dinosaurs, and concluded that:
Even the earliest archosauriforms may have been capable of very fast growth, which suggests they had fairly high metabolic rates. Although drawing conclusions about the earliest archosauriformes from later forms is tricky, because species-specific variations in bone structure and growth rate are very likely, there are research strategies that can minimize the risk that such factors will cause errors in the analysis.
Archosaurs split into three main groups in the Triassic: ornithodirans, from which dinosaurs evolved, remained committed to rapid growth; crocodilians' ancestors adopted more typical "reptilian" slow growth rates; and most other Triassic archosaurs had intermediate growth rates.
An osteohistological analysis of vascular density and of the density, shape and area of osteocytes concluded that non-avian dinosaurs and the majority of archosauriforms (except Proterosuchus, crocodilians and phytosaurs) retained heat and had resting metabolic rates similar to those of extant mammals and birds.
Metabolic rate, blood pressure and flow
Endotherms rely heavily on aerobic metabolism and have high rates of oxygen consumption during activity and rest. The oxygen required by the tissues is carried by the blood, and consequently blood flow rates and blood pressures at the heart of warm-blooded endotherms are considerably higher than those of cold-blooded ectotherms. It is possible to estimate the minimum blood pressures of dinosaurs from the vertical distance between the heart and the top of the head, because this column of blood must have a pressure at the bottom at least equal to the hydrostatic pressure determined by the density of blood and gravity. Added to this pressure is that required to move the blood through the circulatory system. It was pointed out in 1976 that, because of their height, many dinosaurs had minimum blood pressures within the endothermic range, and that they must have had four-chambered hearts to separate the high pressure circuit to the body from the low pressure circuit to the lungs. It was not clear whether these dinosaurs had high blood pressure simply to support the blood column or to support the high blood flow rates required by endothermy or both.
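The hydrostatic part of this minimum pressure follows directly from the height of the blood column; the head height used below is purely an illustrative figure, not an estimate for any particular dinosaur. With blood density \rho \approx 1050 \ \mathrm{kg\,m^{-3}} and g \approx 9.8 \ \mathrm{m\,s^{-2}},

\Delta P = \rho g h \approx 10.3 \ \mathrm{kPa} \ (\approx 77 \ \mathrm{mmHg}) \text{ per metre of height } h,

so a head held, say, 5 m above the heart would require roughly 50 kPa (about 390 mmHg) just to support the column – several times the mean arterial pressure of a typical mammal (around 13 kPa, or 100 mmHg) – before adding the pressure needed to drive flow through the vessels.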
However, recent analysis of the tiny holes in fossil leg bones of dinosaurs provides a gauge for blood flow rate and hence metabolic rate. The holes are called nutrient foramina, and the nutrient artery is the major blood vessel passing through to the interior of the bone, where it branches into tiny vessels of the Haversian canal system. This system is responsible for replacing old bone with new bone, thereby repairing microbreaks that occur naturally during locomotion. Without this repair, microbreaks would build up, leading to stress fractures and ultimately catastrophic bone failure. The size of the nutrient foramen provides an index of blood flow through it, according to the Hagen-Poiseuille equation. The size is also related to the body size of the animal, so this effect is removed by allometric analysis. The blood flow index of the femoral nutrient foramen in living mammals increases in direct proportion to the animals' maximum metabolic rates, as measured during maximum sustained locomotion. The mammalian blood flow index is about 10 times greater than that of ectothermic reptiles. Ten species of fossil dinosaurs from five taxonomic groups reveal indices even higher than in mammals, when body size is accounted for, indicating that they were highly active, aerobic animals. Thus high blood flow rate, high blood pressure, a four-chambered heart and sustained aerobic metabolism are all consistent with endothermy.
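The Hagen–Poiseuille relation mentioned above explains why foramen size is such a sensitive index of flow. Treating the foramen as an idealised cylindrical conduit of radius r and length L carrying blood of viscosity \mu under a pressure drop \Delta p,

Q = \frac{\pi \, r^{4} \, \Delta p}{8 \, \mu \, L},

so volumetric flow Q scales with the fourth power of the radius: a foramen of twice the radius can, other things being equal, carry sixteen times the flow, which is why foramen size (corrected for body size) works as an index of blood flow rate.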
Growth rates
Dinosaurs grew from small eggs to several tons in weight relatively quickly. A natural interpretation of this is that dinosaurs converted food into body weight very quickly, which requires a fairly fast metabolism both to forage actively and to assimilate the food quickly. Developing bone found in juveniles is distinctly porous, which has been linked to vascularization and bone deposition rate, all suggesting growth rates close to those observed in modern birds.
But a preliminary study of the relationship between adult size, growth rate, and body temperature concluded that larger dinosaurs had higher body temperatures than smaller ones; Apatosaurus, the largest dinosaur in the sample, was estimated to have had a body temperature well above those of the smaller dinosaurs in the sample (for comparison, normal human body temperature is about 37 °C). Based on these estimations, the study concluded that large dinosaurs were inertial homeotherms (their temperatures were stabilized by their sheer bulk) and that dinosaurs were ectothermic (in colloquial terms, "cold-blooded", because they did not generate as much heat as mammals when not moving or digesting food). These results are consistent with the relationship between dinosaurs' sizes and growth rates (described above). Studies of the sauropodomorph Massospondylus and the early theropod Syntarsus (Megapnosaurus) reveal growth rates of 3 kg/year and 17 kg/year, respectively, much slower than those estimated for Maiasaura and observed in modern birds.
Oxygen isotope ratios in bone
The ratio of the isotopes 16O and 18O in bone depends on the temperature the bone formed at: the higher the temperature, the more 16O. Barrick and Showers (1999) analyzed the isotope ratios in two theropods that lived in temperate regions with seasonal variation in temperature, Tyrannosaurus (USA) and Giganotosaurus (Argentina):
dorsal vertebrae from both dinosaurs showed no sign of seasonal variation, indicating that both maintained a constant core temperature despite seasonal variations in air temperature.
ribs and leg bones from both dinosaurs showed greater variability in temperature and a lower average temperature as the distance from the vertebrae increased.
Barrick and Showers concluded that both dinosaurs were endothermic but at lower metabolic levels than modern mammals, and that inertial homeothermy was an important part of their temperature regulation as adults. Their similar analysis of some Late Cretaceous ornithischians in 1996 concluded that these animals showed a similar pattern.
However this view has been challenged. First, the evidence indicates homeothermy, but by itself cannot prove endothermy. Second, the production of bone may not have been continuous in areas near the extremities of limbs – in allosaur skeletons lines of arrested growth ("LAGs"; rather like growth rings) are sparse or absent in large limb bones but common in the fingers and toes. While there is no absolute proof that LAGs are temperature-related, they could mark times when the extremities were so cool that the bones ceased to grow. If so, the data about oxygen isotope ratios would be incomplete, especially for times when the extremities were coolest. Oxygen isotope ratios may therefore be an unreliable method of estimating temperatures if it cannot be shown that bone growth was equally continuous in all parts of the animal.
Predator–prey ratios
Bakker argued that:
cold-blooded predators need much less food than warm-blooded ones, so a given mass of prey can support far more cold-blooded predators than warm-blooded ones.
the ratio of the total mass of predators to prey in dinosaur communities was much more like that of modern and recent warm-blooded communities than that of recent or fossil cold-blooded communities.
hence predatory dinosaurs were warm-blooded. And since the earliest dinosaurs (e.g. Staurikosaurus, Herrerasaurus) were predators, all dinosaurs must have been warm-blooded.
This argument was criticized on several grounds and is no longer taken seriously (the following list of criticisms is far from exhaustive):
Estimates of dinosaur weights vary widely, and even a small variation can make a large difference to the calculated predator–prey ratio.
His sample may not have been representative. Bakker obtained his numbers by counting museum specimens, but these have a bias towards rare or especially well-preserved specimens, and do not represent what exists in fossil beds. Even fossil beds may not accurately represent the actual populations, for example smaller and younger animals have less robust bones and are therefore less likely to be preserved.
There are no published predator–prey ratios for large ectothermic predators, because such predators are very rare and mostly occur only on fairly small islands. Large ectothermic herbivores are equally rare. So Bakker was forced to compare mammalian predator–prey ratios with those of fish and invertebrate communities, where life expectancies are much shorter and other differences also distort the comparison.
The concept assumes that predator populations are limited only by the availability of prey. However other factors such as shortage of nesting sites, cannibalism or predation of one predator on another can hold predator populations below the limit imposed by prey biomass, and this would misleadingly reduce the predator–prey ratio.
Ecological factors can misleadingly reduce the predator–prey ratio, for example: a predator might prey on only some of the "prey" species present; disease, parasites and starvation might kill some of the prey animals before the predators get a chance to hunt them.
It is very difficult to state precisely what preys on what. For example, the young of herbivores may be preyed upon by lizards and snakes while the adults are preyed on by mammals. Conversely the young of many predators live largely on invertebrates and switch to vertebrates as they grow.
Posture and gait
Dinosaurs' limbs were erect and held under their bodies, rather than sprawling out to the sides like those of lizards and newts. The evidence for this is the angles of the joint surfaces and the locations of muscle and tendon attachments on the bones. Attempts to represent dinosaurs with sprawling limbs result in creatures with dislocated hips, knees, shoulders and elbows.
Carrier's constraint states that air-breathing vertebrates with two lungs that flex their bodies sideways during locomotion find it difficult to move and breathe at the same time. This severely limits stamina, and forces them to spend more time resting than moving.
Sprawling limbs require sideways flexing during locomotion (except for tortoises and turtles, which are very slow and whose armor keeps their bodies fairly rigid). However, despite Carrier's constraint, sprawling limbs are efficient for creatures that spend most of their time resting on their bellies and only move for a few seconds at a time—because this arrangement minimizes the energy costs of getting up and lying down.
Erect limbs increase the costs of getting up and lying down, but avoid Carrier's constraint. This indicates that dinosaurs were active animals because natural selection would have favored the retention of sprawling limbs if dinosaurs had been sluggish and spent most of their waking time resting. An active lifestyle requires a metabolism that quickly regenerates energy supplies and breaks down waste products which cause fatigue, i.e., it requires a fairly fast metabolism and a considerable degree of homeothermy.
Additionally, an erect posture demands precise balance, the result of a rapidly functioning neuromuscular system. This suggests endothermic metabolism, because an ectothermic animal would be unable to walk or run, and thus to evade predators, when its core temperature was lowered. Other evidence for endothermy includes limb length (many dinosaurs possessed comparatively long limbs) and bipedalism, both found today only in endotherms. Many bipedal dinosaurs possessed gracile leg bones with a short thigh relative to calf length. This is generally an adaptation to frequent sustained running, characteristic of endotherms which, unlike ectotherms, are capable of producing sufficient energy to stave off the onset of anaerobic metabolism in the muscle.
Bakker and Ostrom both pointed out that all dinosaurs had erect hindlimbs and that all quadrupedal dinosaurs had erect forelimbs; and that among living animals only the endothermic ("warm-blooded") mammals and birds have erect limbs (Ostrom acknowledged that crocodilians' occasional "high walk" was a partial exception). Bakker claimed this was clear evidence of endothermy in dinosaurs, while Ostrom regarded it as persuasive but not conclusive.
A 2009 study supported the hypothesis that endothermy was widespread in at least larger non-avian dinosaurs, and that it was plausibly ancestral for all dinosauriforms, based on the biomechanics of running, though it has also been suggested that endothermy appeared much earlier in archosauromorph evolution, perhaps even preceding the origin of Archosauriformes.
Feathers and filaments
There is now no doubt that many theropod dinosaur species had feathers, including Shuvuuia, Sinosauropteryx and Dilong (an early tyrannosaur). These have been interpreted as insulation and therefore evidence of warm-bloodedness.
But direct, unambiguous impressions of feathers have only been found in coelurosaurs (which include the birds and tyrannosaurs, among others), so at present feathers give us no information about the metabolisms of the other major dinosaur groups, e.g. coelophysids, ceratosaurs, carnosaurs, or sauropods. Filamentous integument was also present in at least some ornithischians, such as Tianyulong, Kulindadromeus and Psittacosaurus, not only suggesting endothermy in this group but also that feathers were already present in the first ornithodiran (the last common ancestor of dinosaurs and pterosaurs). Their absence in certain groups like Ankylosauria could be the result of suppression of feather genes. However, maximum likelihood reconstructions have suggested that filaments first appeared only within Coelurosauria and that the integumentary structures of Psittacosaurus, Tianyulong and Kulindadromeus evolved independently from filaments, though this result assumed that primitive pterosaur ancestors were scaly.
The fossilised skin of Carnotaurus (an abelisaurid and therefore not a coelurosaur) shows an unfeathered, reptile-like skin with rows of bumps, but the conclusion that Carnotaurus was necessarily featherless has been criticized because the impressions do not cover the whole body, being found only in the lateral region but not the dorsum. An adult Carnotaurus weighed about 2 tonnes, and mammals of this size and larger have either very short, sparse hair or naked skins, so perhaps the skin of Carnotaurus tells us nothing about whether smaller non-coelurosaurian theropods had feathers. The tyrannosauroid Yutyrannus is known to have possessed feathers despite weighing about 1.1 tonnes.
Skin-impressions of Pelorosaurus and other sauropods (dinosaurs with elephantine bodies and long necks) reveal large hexagonal scales, and some sauropods, such as Saltasaurus, had bony plates in their skin. The skin of ceratopsians consisted of large polygonal scales, sometimes with scattered circular plates. "Mummified" remains and skin impressions of hadrosaurids reveal pebbly scales. It is unlikely that the ankylosaurids, such as Euoplocephalus, had insulation, as most of their surface area was covered in bony knobs and plates. Likewise there is no evidence of insulation in the stegosaurs. Thus insulation, and the elevated metabolic rate that would favor its evolution, may have been limited to the theropods, or even just a subset of theropods. However, a lack of feathers or other insulation does not by itself indicate ectothermy or a low metabolism: the relative hairlessness of mammalian megafauna, pigs, human children and the hairless bat is compatible with endothermy.
Polar dinosaurs
Dinosaur fossils have been found in regions that were close to the poles at the relevant times, notably in southeastern Australia, Antarctica and the North Slope of Alaska. There is no evidence of major changes in the angle of the Earth's axis, so polar dinosaurs and the rest of these ecosystems would have had to cope with the same extreme variation of day length through the year that occurs at similar latitudes today (up to a full day with no darkness in summer, and a full day with no sunlight in winter).
Studies of fossilized vegetation suggest that the Alaska North Slope had a cool temperate climate in the last 35 million years of the Cretaceous (slightly cooler than Portland, Oregon but slightly warmer than Calgary, Alberta). Even so, the Alaska North Slope has no fossils of large cold-blooded animals such as lizards and crocodilians, which were common at the same time in Alberta, Montana, and Wyoming. This suggests that at least some non-avian dinosaurs were warm-blooded. It has been proposed that North American polar dinosaurs may have migrated to warmer regions as winter approached, which would allow them to inhabit Alaska during the summers even if they were cold-blooded. But a round trip between there and Montana would probably have used more energy than a cold-blooded land vertebrate produces in a year; in other words the Alaskan dinosaurs would have to be warm-blooded, irrespective of whether they migrated or stayed for the winter. A 2008 paper on dinosaur migration by Phil R. Bell and Eric Snively proposed that most polar dinosaurs, including theropods, sauropods, ankylosaurians, and hypsilophodonts, probably overwintered, although hadrosaurids like Edmontosaurus were probably capable of annual round trips.
It is more difficult to determine the climate of southeastern Australia when the dinosaur fossil beds were laid down, towards the end of the Early Cretaceous: these deposits contain evidence of permafrost, ice wedges, and hummocky ground formed by the movement of subterranean ice, which suggests very low mean annual temperatures; oxygen isotope studies of these deposits likewise indicate a cold mean annual temperature. However the diversity of fossil vegetation and the large size of some of the fossil trees exceed what is found in such cold environments today, and no-one has explained how such vegetation could have survived in the cold temperatures suggested by the physical indicators. An annual migration from and to southeastern Australia would have been very difficult for fairly small dinosaurs such as Leaellynasaura, a small herbivore, because seaways to the north blocked the passage to warmer latitudes. Bone samples from Leaellynasaura and Timimus, an ornithomimid, suggested these two dinosaurs had different ways of surviving the cold, dark winters: the Timimus sample had lines of arrested growth (LAGs for short; similar to growth rings), and it may have hibernated; but the Leaellynasaura sample showed no signs of LAGs, so it may have remained active throughout the winter. A 2011 study focusing on hypsilophodont and theropod bones also concluded that these dinosaurs did not hibernate through the winter, but stayed active.
Evidence for behavioral thermoregulation
Some dinosaurs, e.g. Spinosaurus and Ouranosaurus, had on their backs "sails" supported by spines growing up from the vertebrae. (This was also true, incidentally, for the synapsid Dimetrodon.) Such dinosaurs could have used these sails to:
take in heat by basking with the "sails" at right angles to the sun's rays.
lose heat by using the "sails" as radiators while standing in the shade or while facing directly towards or away from the sun.
But these were a very small minority of known dinosaur species.
One common interpretation of the plates on stegosaurs' backs is as heat exchangers for thermoregulation, as the plates are filled with blood vessels, which, theoretically, could absorb and dissipate heat.
This might have worked for a stegosaur with large plates, such as Stegosaurus, but other stegosaurs, such as Wuerhosaurus, Tuojiangosaurus and Kentrosaurus possessed much smaller plates with a surface area of doubtful value for thermo-regulation. However, the idea of stegosaurian plates as heat exchangers has recently been questioned.
Other evidence
Endothermy demands frequent respiration, which can result in water loss. In living birds and mammals, water loss is limited by pulling moisture out of exhaled air with mucus-covered respiratory turbinates, tissue-covered bony sheets in the nasal cavity. Several dinosaurs have olfactory turbinates, used for smell, but none have yet been identified with respiratory turbinates.
Because endothermy allows refined neuromuscular control, and because brain matter requires large amounts of energy to sustain, some speculate that increased brain size indicates increased activity and, thus, endothermy. The encephalization quotient (EQ) of dinosaurs, a measure of brain size calculated using brain endocasts, varies on a spectrum from bird-like to reptile-like. Using EQ alone, coelurosaurs appear to have been as active as living mammals, while theropods and ornithopods fall somewhere between mammals and reptiles, and other dinosaurs resemble reptiles.
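For orientation, one common formulation of the encephalization quotient (Jerison's, derived empirically for mammals; the constant and exponent are fitted values and other calibrations exist) is

\mathrm{EQ} = \frac{E}{0.12 \, P^{2/3}},

where E is brain mass and P is body mass (both in grams); an EQ near 1 indicates a brain about as large as expected for a typical mammal of that body mass, values above 1 a relatively larger brain, and values below 1 a relatively smaller one.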
A study published by Roger Seymour in 2013 added more support to the idea that dinosaurs were endothermic. After studying saltwater crocodiles, Seymour found that even if their large size could provide stable and high body temperatures, during activity the crocodiles' ectothermic metabolism provided less aerobic capacity and generated only 14% of the total muscle power of a similarly sized endothermic mammal before full fatigue. Seymour reasoned that dinosaurs would have needed to be endothermic, since they would have needed better aerobic abilities and higher power generation to compete with and dominate over mammals as active land animals throughout the Mesozoic era.
Early archosaur metabolism
It appears that the earliest dinosaurs had the features that form the basis for arguments for warm-blooded dinosaurs—especially erect limbs. This raises the question "How did dinosaurs become warm-blooded?" The most obvious possible answers are:
"Their immediate ancestors (archosaurs) were cold-blooded, and dinosaurs began developing warm-bloodedness very early in their evolution." This implies that dinosaurs developed a significant degree of warm-bloodedness in a very short time, possibly less than 20M years. But in mammals' ancestors the evolution of warm-bloodedness seems to have taken much longer, starting with the beginnings of a secondary palate around the beginning of the mid-Permian and going on possibly until the appearance of hair about 164M years ago in the mid Jurassic).
"Dinosaurs' immediate ancestors (archosaurs) were at least fairly warm-blooded, and dinosaurs evolved further in that direction." This answer raises 2 problems: (A) The early evolution of archosaurs is still very poorly understood – large numbers of individuals and species are found from the start of the Triassic but only 2 species are known from the very late Permian (Archosaurus rossicus and Protorosaurus speneri); (B) Crocodilians evolved shortly before dinosaurs and are closely related to them, but are cold-blooded (see below).
Crocodilians present some puzzles if one regards dinosaurs as active animals with fairly constant body temperatures. Crocodilians evolved shortly before dinosaurs and, second to birds, are dinosaurs' closest living relatives – but modern crocodilians are cold-blooded. This raises some questions:
If dinosaurs were to a large extent "warm-blooded", when and how fast did warm-bloodedness evolve in their lineage?
Modern crocodilians are cold-blooded but have several features associated with warm-bloodedness. How did they acquire these features?
Modern crocodilians are cold-blooded but can move with their limbs erect, and have several features normally associated with warm-bloodedness because they improve the animal's oxygen supply:
4-chambered hearts. Mammals and birds have four-chambered hearts. Non-crocodilian reptiles have three-chambered hearts, which are less efficient because they allow oxygenated and de-oxygenated blood to mix and therefore send some de-oxygenated blood out to the body instead of to the lungs. Modern crocodilians' hearts are four-chambered, but are smaller relative to body size and run at lower pressure than those of modern mammals and birds. They also have a bypass that makes them functionally three-chambered when under water, conserving oxygen.
a diaphragm, which aids breathing.
a secondary palate, which allows the animal to eat and breathe at the same time.
a hepatic piston mechanism for pumping the lungs. This is different from the lung-pumping mechanisms of mammals and birds but similar to what some researchers claim to have found in some dinosaurs.
So why did natural selection favor these features, which are important for active warm-blooded creatures but of little apparent use to cold-blooded aquatic ambush predators that spend most of their time floating in water or lying on river banks?
It was suggested in the late 1980s that crocodilians were originally active, warm-blooded predators and that their archosaur ancestors were warm-blooded. More recently, developmental studies indicate that crocodilian embryos develop fully four-chambered hearts first, and then develop the modifications that make their hearts function as three-chambered under water. Using the principle that ontogeny recapitulates phylogeny, the researchers concluded that the original crocodilians had fully four-chambered hearts and were therefore warm-blooded and that later crocodilians developed the bypass as they reverted to being cold-blooded aquatic ambush predators.
More recent research on archosaur bone structures and their implications for growth rates also suggests that early archosaurs had fairly high metabolic rates and that the Triassic ancestors of crocodilians dropped back to more typically "reptilian" metabolic rates.
If this view is correct, the development of warm-bloodedness in archosaurs (reaching its peak in dinosaurs) and in mammals would have taken more similar amounts of time. It would also be consistent with the fossil evidence:
The earliest crocodylomorphs, e.g. Terrestrisuchus, were slim, leggy terrestrial predators.
Erect limbs appeared quite early in archosaurs' evolution, and those of rauisuchians are very poorly adapted for any other posture.
See also
Dinosaur classification
Dinosaur renaissance
Evolution of dinosaurs
Evolutionary physiology
List of dinosaurs
Origin of birds
Argentine black and white tegu#Warm-bloodedness
References
External links
Thermophysiology and Biology of Giganotosaurus: Comparison with Tyrannosaurus by RE Barrick and WJ Showers (1999)
Heart of a Dinosaur Is Reported Found
Crocodile evolution no heart-warmer
Dinosaurs
Animal physiology | Physiology of dinosaurs | Biology | 11,516 |
48,593,539 | https://en.wikipedia.org/wiki/Seasonal%20tropical%20forest | Seasonal tropical forest, also known as moist deciduous, semi-evergreen seasonal, tropical mixed or monsoon forest, typically contains a range of tree species, only some of which drop some or all of their leaves during the dry season. This tropical forest is classified under the Walter system as having (i) a tropical climate with high overall rainfall (typically in the 1000–2500 mm range; 39–98 inches) and (ii) a very distinct wet season with an (often cooler, “winter”) dry season. These forests represent a range of habitats influenced by monsoon (Am) or tropical wet savanna (Aw/As) climates (as in the Köppen climate classification). Drier forests in the Aw/As climate zone are typically deciduous and placed in the tropical dry forest biome, with further transitional zones (ecotones) of savannah woodland and then tropical and subtropical grasslands, savannas, and shrublands.
Distribution
Seasonal (mixed) tropical forests can be found in many parts of the tropical zone, with examples found in:
In the Asia-Pacific region: seasonal forests predominate across large areas of Eastern Java, Wallacea, the Indian subcontinent and Indochina
Eastern Java monsoon forests
Wallacea Forest
Brahmaputra Valley semi-evergreen forests
Mondulkiri Province, Cambodia
Cat Tien National Park, Vietnam
Khao Yai National Park and Huai Kha Khaeng Wildlife Sanctuary, Thailand
Northern Australia: Cape York Peninsula (Queensland), Arnhem Land (Northern Territory), the Kimberley (Western Australia)
In the Americas
Atlantic forests of Brazil
Central and eastern Panama: with Barro Colorado Island especially well studied
In Africa
Coastal West Africa: Guinean seasonal forest: from south-western Gambia to eastern Ghana
Climate
The climate of seasonal forests is typically controlled by a system called the Intertropical Convergence Zone (ITCZ), located near the equator and created by the convergence of the trade winds from the Northern and Southern Hemispheres. The position of this band varies seasonally, moving north in the northern summer and south in the northern winter, and ultimately controls the wet and dry seasons in the tropics.
These regions appear to have experienced strong warming, at a mean rate of 0.26 degrees Celsius per decade, which coincides with a global rise in temperature resulting from anthropogenic inputs of greenhouse gases into the atmosphere. Studies have also found that precipitation has declined and that tropical Asia has experienced an increase in dry-season intensity, whereas Amazonia shows no significant change in precipitation or dry-season pattern. Additionally, El Niño-Southern Oscillation (ENSO) events drive the inter-annual climatic variability in temperature and precipitation, resulting in drought and increased intensity of the dry season. As anthropogenic warming increases, the intensity and frequency of ENSO events will also increase, rendering tropical rainforest regions susceptible to stress and increased mortality of trees and other plants.
Structure
As with tropical rainforests there are different canopy layers, but these may be less pronounced in mixed forests, which are often characterised by numerous lianas due to their growth advantage during the dry season. The colloquial term jungle, derived from the Sanskrit word for "forest", has no specific ecological meaning but originally referred to this type of primary and especially secondary forest in the Indian subcontinent. Determining which stands of mixed forest are primary and which are secondary can also be problematic, since the species mixture is influenced by factors such as soil depth and climate, as well as human interference.
Characteristic biology
The fauna and flora of seasonal tropical mixed forest are usually distinctive. Examples of the biodiversity and habitat type are often well described for National Parks in:
Africa represented by:
the northern part of Korup National Park in Cameroon (central region)
the Upper Guinean forests (West Africa)
Asia represented by Cat Tien National Park and Huai Kha Khaeng in the (Indochina region)
Pacific region: including the Queensland forest reserves
Central American wildlife is well represented in:
Costa Rica e.g. Corcovado National Park
the Soberanía National Park in Panama.
South American flora listed and represented in Rio Doce State Park
References
See also
International Tropical Timber Organization (ITTO)
List of tropical and subtropical moist broadleaf forests ecoregions
Trees of the world
Tropical dry forest
Tropical rainforest
Tropical vegetation
Terrestrial biomes
Moist broadleaf forests
Ecoregions
Forests | Seasonal tropical forest | Biology | 867 |
1,543,682 | https://en.wikipedia.org/wiki/DC%20One%20Million | "DC One Million" is a comic book crossover storyline which ran through an eponymous weekly miniseries and through special issues of almost all of the "DCU" titles published by DC Comics in November 1998. It featured a vision of the DC Universe in the 853rd century (85,201–85,300 AD), chosen because that is the century in which DC will have published issue #1,000,000 of Action Comics if it maintains a regular monthly publishing schedule. The miniseries was written by Grant Morrison and drawn by Val Semeiks.
Set-up
The core of the event was a four-issue miniseries, in which the 20th-century Justice League of America and the 853rd-century Justice Legion Alpha cooperate to defeat a plot by the supervillain Vandal Savage (who, as an immortal, survives into that far-flung century) and future Superman nemesis Solaris the Living Sun. Thirty-four other series then being published by DC also put out a single issue numbered #1,000,000, each of which either showed its characters' involvement in the central plot or gave a glimpse of what its characters' descendants/successors would be doing in the 853rd century. Hitman #1,000,000 was essentially a parody of the entire storyline. A trade paperback collection was subsequently published consisting of the four-issue mini-series and the tie-in issues that were necessary to follow the main plot. The series was then followed by a one-shot issue titled DC One Million 80-Page Giant #1,000,000 (1999), which was a collection of further adventures in the lives of the future heroes.
Plot
In the 853rd century, the original Superman ("Superman-Prime One Million") still lives, but has spent over 15,000 years in exile within his Fortress of Solitude, located at the heart of the Sun, to keep it alive. During this time of absence, everyone he knew and loved died one by one. One of his descendants is "Kal Kent", the Superman of the 853rd century.
The galaxy in this far future is protected by the Justice Legions, which were inspired by the 20th-century Justice League and the 31st-century Legion of Super-Heroes, among others. Justice Legion Alpha, which protects the solar system, includes Kal Kent and future analogues of Wonder Woman, Hourman, Starman, Aquaman, the Flash and Batman. Advanced terraforming processes have made all the Solar System's planets habitable, with the ones most distant from the Sun being warmed by Solaris, a "star computer" which was once a villain, but was reprogrammed by one of Superman's descendants.
Superman-Prime announces that he will soon return to humanity and, to celebrate, Justice Legion Alpha travels to the late 20th century to meet Superman's original teammates in the JLA and bring them and Superman to the future to participate in games and displays of power as part of the celebration.
Meanwhile, in Russia, Vandal Savage single-handedly defeats the Titans (Arsenal, Tempest, Jesse Quick and Supergirl) when they attempt to stop him from purchasing nuclear-powered Rocket Red suits. He then launches four Rocket Red suits (with a Titan trapped inside each of the four) in a nuclear strike on Washington D.C., Metropolis, Brussels and Singapore.
One member of the Justice Legion Alpha (the future Starman) has been bribed into betraying his teammates by Solaris, which has returned to its old habits. Before the original heroes can be returned to their own time, the future Hourman android collapses and releases a virus programmed by Solaris to attack machines and humans.
The virus affects the guidance systems of the Rocket Red suits and causes one of them to instead detonate over Montevideo, killing over 1 million people. Tempest (the Titan inside) had escaped long before the suit exploded by using the ice that formed on the suit at high altitude, although he subsequently blacked out and fell into the sea. The virus also drives humans insane, causing an increase in anger and paranoia worldwide. Believing that this was deliberately planned by the JLA to stop him, Savage launches an all-out war on superhumans using "blitz engines" he had created and hidden while allied with Adolf Hitler during World War II. The paranoia caused by the virus also leads the Justice Legion Alpha and the contemporary heroes to attack each other, although the Justice Legion Alpha manage to coordinate themselves enough to stop the other Rocket Red suits from hitting their targets.
The remnants of the JLA that stayed in the present and the Justice Legion Alpha overcome their paranoia when the future Superman and Steel realize the significance of the symbol they both wear; as the Huntress had pointed out to Steel earlier, wearing the 'S' means that he has to make the hard choices. The two JLAs are eventually able to stop the virus when it is discovered that it is a complex computer program looking for appropriate hardware. To provide this hardware, the heroes are forced to build the body of Solaris (including in it a DNA sample of Superman's wife Lois Lane) and the virus flees from Earth to this body, bringing Solaris to life. In a final act of repentance, the future Starman sacrifices himself to banish Solaris from the Solar System. The future Superman forces himself through time using confiscated time travel technology he finds in the Watchtower, almost dying in the process due to the drain on his powers.
Meanwhile, in the 853rd century, the original JLA are fighting an alliance between Solaris and Vandal Savage. Savage has found a sample of kryptonite on Mars (where it was left by the future Starman back in the 20th century), which he gives to Solaris. Savage has also hired Walker Gabriel to steal the time travel gauntlets of the 853rd century Flash (John Fox) to ensure the Justice Legion Alpha remains trapped in the past, but ultimately double-crosses Gabriel.
Solaris, in a final attack, slaughters thousands of superhumans so that it can fire the kryptonite into the sun and kill Superman-Prime before he emerges. The JLA's Green Lantern — a hero who uses a power that Solaris has never encountered before — causes Solaris to go supernova and he and the 853rd century Superman contain the resulting blast — but not before the kryptonite is released.
The future Vandal Savage teleports from Mars to Earth using the stolen Time-Gauntlets. It turns out, however, that Walker Gabriel and Mitch Shelley, the Resurrection Man (an immortal who had become Savage's greatest foe through the millennia), had sabotaged the Gauntlets so that Savage, instead of travelling only in space, also travels through time, arriving in Montevideo moments before the nuclear blast he caused centuries earlier, finally bringing his life to an end.
It is then revealed that a secret conspiracy — forewarned by the trouble in the 20th century, mainly in that the Huntress, inspired by the time capsules which students in her class were currently making, realized they had centuries to foil the plot — has spent the intervening centuries coming up with a foolproof plan for stopping Solaris. Their actions included replacing the hidden kryptonite with a disguised Green Lantern power ring, with which the original Superman emerges from the Sun and finishes off Solaris.
In the aftermath, the original Superman and the future Hourman use the DNA sample to recreate Lois Lane, complete with superpowers. Superman then also recreates Krypton, along with all its deceased inhabitants, in Earth's Solar system, and lives happily ever after with Lois.
Later, in the miniseries The Kingdom, it is established that this timeline is merely one of many possibilities and thus not definite due to the mutable effects of Hypertime.
Crossovers
Alongside the main DC One Million miniseries and the accompanying 80-Page Giant issue, the following ongoing DC Comics books also partook in the event:
Action Comics
Adventures of Superman
Aquaman
Azrael
Batman
Batman: Shadow of the Bat
Booster Gold
Catwoman
Chase
Chronos
Creeper
Detective Comics
Flash
Green Arrow
Green Lantern
Hitman
Hourman
Impulse
JLA
Legion of Super-Heroes (vol. 4)
Legionnaires
Lobo
Martian Manhunter
Nightwing
The Power of Shazam (vol. 2)
Resurrection Man
Robin
Starman (vol. 2)
Superboy
Supergirl
Superman (vol. 2)
Superman: The Man of Steel
Superman: The Man of Tomorrow
Wonder Woman
Young Heroes in Love
Young Justice
The Justice Legions
There are 24 Justice Legions, each based on 20th- and 30th-century superhero teams. Those featured include:
Justice Legion A is based on the Justice League.
Justice Legion B is based on the Titans. Members include Nightwing (a bat-like humanoid), Aqualad (a humanoid made from water), Troy (a younger version of the 853rd century Wonder Woman), Arsenal (a robot) and Joto (killed in a teleporter accident).
Justice Legion L is based on the Legion of Super-Heroes and protects an artificially created planetary system (all that remains of the United Planets). Members include Cosmicbot (a cyborg based on magnetism, modelled on Cosmic Boy), Titangirl (the combined psychic energy of all Titanians, based on Saturn Girl), Implicate Girl (who contains the abilities of all three trillion Carggites in her "third eye", loosely based on Triplicate Girl), Brainiac 417 (a disembodied intelligence, based on Brainiac 5 and Apparition), the M'onelves (who combine the powers of M'onel and Shrinking Violet) and humanoid versions of Umbra and Chameleon.
Justice Legion S consists of numerous Superboy clones, all with different powers. Members include Superboy 820 (with aquatic powers), Superboy 3541 (who can increase his size) and Superboy One Million (who can channel any of their powers through "the Eye"). They all (most notably One Million) resemble OMAC as much as Superboy. This was an intentional pun, as the title of the story was "One Million And Counting", which referred to the 1 million clones and formed the OMAC acronym.
Justice Legion T is based on Young Justice. Members include Superboy One Million (as referred to above), Robin the Toy Wonder (an optimistic robot sidekick to the 853rd century Batman) and Impulse (the living embodiment of random thoughts lost in the Speed Force).
Justice Legion Z (for Zoomorphs) is based on the Legion of Super-Pets. Members include Proty One Million and Master Mind. A version of Comet is also a member.
Other characters
Several other futuristic versions of DC characters appeared in the crossover, including:
Atom
Azrael
Booster Gold
Captain Marvel
Catwoman
Charade City
Gunfire
Lex Luthor
Supergirl
Later references
In 2008, 10 years after the crossover, an issue of Booster Gold (vol. 2) was published as Booster Gold #1,000,000 and was announced as an official DC One Million tie-in by DC Comics. This comic introduced Peter Platinum, the Booster Gold of the 853rd century.
Grant Morrison's All-Star Superman miniseries made several references to the DC One Million miniseries. The Superman from DC One Million makes an appearance and the series ends with Superman becoming an energy being who resides in the Sun after his body has been supercharged with yellow solar energy (similar in appearance to Superman-Prime) and Solaris makes an appearance as well.
Morrison's Batman #700 also briefly shows the One Million Batman and his sidekick—Robin, the Toy Wonder—alongside a number of future iterations of Batman.
The One Million Batman, Robin the Toy Wonder and One Million Superman play a significant role in Superman/Batman #79–80, in which Epoch battles Batmen and Supermen from various time periods.
By signing into WBID account in the video game Batman: Arkham Origins, the costume of the One Million version of Batman will be unlocked for use.
Awards
The original miniseries was a top vote-getter for the Comics Buyer's Guide Fan Award for Favorite Limited Series for 1999. The storyline was a top vote-getter for the Comics Buyer's Guide Award for Favorite Story for 1999.
Collected editions
DC One Million, later reprinted with the title JLA: One Million (208 pages, DC Comics, June 1999, , Titan Books, June 1999, , DC Comics, June 2004, ) collects:
DC One Million (by Grant Morrison, with pencils by Val Semeiks and inks by Prentis Rollins/Jeff Albrecht/Del Barras, four-issue miniseries)
Green Lantern #1,000,000 (by Ron Marz, with pencils by Bryan Hitch and inks by Andy Lanning/Paul Neary)
Resurrection Man #1,000,000 (by Dan Abnett/Andy Lanning, with art by Jackson Guice)
Starman #1,000,000 (by James Robinson, with pencils by Peter Snejbjerg and inks by Wade Von Grawbadger)
JLA #1,000,000 (by Grant Morrison, with pencils by Howard Porter and inks by John Dell)
Superman: The Man of Tomorrow #1,000,000 (by Mark Schultz, with pencils by Georges Jeanty and inks by Dennis Janke/Denis Rodier)
Detective Comics #1,000,000 (by Chuck Dixon, with pencils by Greg Land and inks by Drew Geraci)
DC One Million Omnibus (1,080 pages, DC Comics, October 2013, ) collects:
DC One Million #1–4, plus the #1,000,000 issues of Action Comics, Adventures Of Superman, Aquaman, Azrael, Batman, Batman: Shadow Of The Bat, Catwoman, Chase, Chronos, The Creeper, Detective Comics, The Flash, Green Arrow, Green Lantern, Hitman, Impulse, JLA, Legion of Super-Heroes, Legionnaires, Lobo, Martian Manhunter, Nightwing, Power Of Shazam, Resurrection Man, Robin, Starman, Superboy, Supergirl, Superman (vol. 2), Superman: The Man of Steel, Superman: The Man of Tomorrow, Wonder Woman and Young Justice; as well as Booster Gold #1,000,000, DC One Million 80-Page Giant #1 and Superman/Batman #79–80 (the Omnibus did not include the #1,000,000 issue of Young Heroes in Love, as it was a creator-owned series).
References
External links
Comics Buyer's Guide Fan Awards
Sequart on DC One Million
DC Comics dimensions
DC Comics planets
Comics about time travel
Comics by Grant Morrison
Fiction set in the 7th millennium or beyond
Works set in the future
Fiction about malware
Fiction about nanotechnology
Comics about artificial intelligence | DC One Million | Materials_science | 3,058 |
47,933,030 | https://en.wikipedia.org/wiki/Propadienone | Propadienone is an organic compound with molecular formula C3H2O consisting of a propadiene carbon framework with a ketone functional group. The structure of propadienone is not the same as that of propadiene or carbon suboxide. In propadienone, the oxygen atom carries a +1 formal charge and the C2 carbon a −1 formal charge.
See also
Propadiene
Carbon suboxide
References
Ketones | Propadienone | Chemistry | 85 |
20,723,955 | https://en.wikipedia.org/wiki/Flight%20envelope%20protection | Flight envelope protection is a human machine interface extension of an aircraft's control system that prevents the pilot of an aircraft from making control commands that would force the aircraft to exceed its structural and aerodynamic operating limits. It is used in some form in all modern commercial fly-by-wire aircraft. The professed advantage of flight envelope protection systems is that they restrict a pilot's excessive control inputs, whether in surprise reaction to emergencies or otherwise, from translating into excessive flight control surface movements. Notionally, this allows pilots to react quickly to an emergency while blunting the effect of an excessive control input resulting from "startle," by electronically limiting excessive control surface movements that could over-stress the airframe and endanger the safety of the aircraft.
In practice, these limitations have sometimes resulted in unintended human factors errors and accidents of their own.
One example of such a flight envelope protection device is an anti-stall system which is designed to prevent an aircraft from stalling, for example in the form of a stick pusher that pushes the aircraft nose downward based on an input signal from a stall warning system, or by means of other fly-by-wire actions. Anti-stall systems are used on most modern swept wing aircraft, and are used on a large variety of civilian and military jet airplanes.
Function
Aircraft have a flight envelope that describes their safe performance limits in regard to such things as minimum and maximum operating speeds and operating structural strength. Flight envelope protection calculates that flight envelope (and adds a margin of safety) and uses this information to stop pilots from making control inputs that would put the aircraft outside that flight envelope. The interference of the flight envelope protection system with the pilot's commands can happen in two different ways (which can also be combined):
Ignoring part or all of a control input that would bring an aircraft's state of flight closer to or even outside of its operational borders. This method is applied in most sidestick-controlled fly-by-wire aircraft with rate command.
Informing the pilot that the respective command is bringing the aircraft closer to the calculated operational borders; this communication can happen by simple alarms or tactile feedback. This method is often applied in aircraft with conventional controls.
For example, if the pilot uses the rearward side-stick to pitch the aircraft nose up, the control computers creating the flight envelope protection can prevent the pilot pitching the aircraft beyond the stalling angle of attack:
In the first case, if the pilot tries to apply even more rearward control, the flight envelope protection would cause the aircraft to ignore this command. Flight envelope protection can in this way increase aircraft safety by allowing the pilot to apply maximum control forces in an emergency while not at the same time inadvertently putting the aircraft outside the margins of its operational safety. Examples of where this might stop air accidents are when it allows a pilot to make a quick evasive maneuver in response to a ground proximity warning system warning, or in quick response to an approaching aircraft and a potential mid air collision. In this case without a flight envelope protection system, "you would probably hold back from maneuvering as hard as you could for fear of tumbling out of control, or worse. You would have to sneak up on it [2.5 G, the design limit], and when you got there you wouldn't be able to tell, because very few commercial pilots have ever flown 2.5 G. But in the A320, you wouldn't have to hesitate: you could just slam the controller all the way to the side and instantly get out of there as fast as the plane will take you." Thus the makers of the Airbus argue: "envelope protection doesn't constrain the pilot. It liberates the pilot from uncertainty – and thus enhances safety."
In the second case, e.g. when using a force-feedback-system to communicate with the pilot, if the pilot tries to apply even more rearward control, the flight envelope protection would present increasing counterforces on the controls so that the pilot has to apply increasing force in order to continue the control input that is perceived as dangerous by the flight envelope protection.
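As a rough illustration of the two interference styles described above, the sketch below contrasts a hard limit that simply ignores commands beyond the envelope with a soft limit that returns an increasing opposing force. The angle-of-attack thresholds and the linear force law are invented placeholder values, not any manufacturer's actual control law.

```python
# Minimal sketch of two flight-envelope-protection styles (illustrative only).
# The thresholds and the linear force law are assumed values, not real control laws.

ALPHA_PROT = 12.0   # angle of attack (deg) where protection starts to act (assumed)
ALPHA_MAX = 15.0    # maximum angle of attack the protection will ever command (assumed)

def hard_protection(commanded_alpha: float) -> float:
    """Sidestick-style protection: the part of the command beyond the limit is ignored."""
    return min(commanded_alpha, ALPHA_MAX)

def soft_protection(commanded_alpha: float) -> float:
    """Conventional-control style: return an opposing stick force (arbitrary units)
    that keeps growing as the command moves past the protection threshold,
    leaving final authority with the pilot."""
    if commanded_alpha <= ALPHA_PROT:
        return 0.0
    return (commanded_alpha - ALPHA_PROT) / (ALPHA_MAX - ALPHA_PROT) * 100.0

if __name__ == "__main__":
    for alpha in (10.0, 14.0, 18.0):
        print(alpha, hard_protection(alpha), round(soft_protection(alpha), 1))
```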
While most designers of modern fly-by-wire aircraft stick to one of these two solutions ('sidestick control & no feedback' or 'conventional control & feedback', see also below), there are also research approaches that combine both: as one study demonstrated, force feedback applied to the side-stick of an aircraft controlled via roll rate and g-load (as in, e.g., a modern Airbus aircraft) can be used to increase adherence to a safe flight envelope and thus reduce the risk of pilots entering dangerous states of flight outside the operational borders, while maintaining the pilots' final authority and increasing their situation awareness.
Airbus and Boeing
The Airbus A320 was the first commercial aircraft to incorporate full flight-envelope protection into its flight-control software. This was instigated by former Airbus senior vice president for engineering Bernard Ziegler. In the Airbus, the flight envelope protection cannot be overridden completely, although the crew can fly beyond flight envelope limits by selecting an alternate "control law". Boeing took a different approach with the 777 by allowing the crew to override flight envelope limits by using excessive force on the flight controls.
Incidents
China Airlines Flight 006
One objection raised against flight envelope protection is the incident that happened to China Airlines Flight 006, a Boeing 747SP-09, northwest of San Francisco in 1985. In this flight incident, the crew was forced to overstress (and structurally damage) the horizontal tail surfaces in order to recover from a roll and near-vertical dive. (This had been caused by an automatic disconnect of the autopilot and incorrect handling of a yaw brought about by an engine flame-out). The pilot recovered control with about 10,000 ft of altitude remaining (from its original high-altitude cruise). To do this, the pilot had to pull the aircraft with an estimated 5.5 G, or more than twice its design limits. Had the aircraft incorporated a flight envelope protection system, this excessive manoeuvre could not have been performed, greatly reducing chances of recovery.
Against this objection, Airbus has responded that an A320 in the situation of Flight 006 "never would have fallen out of the air in the first place: the envelope protection would have automatically kept it in level flight in spite of the drag of a stalled engine".
FedEx Flight 705
In April 1995, FedEx Flight 705, a McDonnell Douglas DC-10-30, was hijacked by a FedEx Flight Engineer who, facing a dismissal, attempted to hijack the plane and crash it into FedEx Headquarters so that his family could collect his life insurance policy. After being attacked and severely injured, the flight crew was able to fight back and land the plane safely. In order to keep the attacker off balance and out of the cockpit the crew had to perform extreme maneuvers, including a barrel roll and a dive so fast the airplane couldn't measure its airspeed.
Had the crew not been able to exceed the plane's flight envelope, they might not have been successful.
American Airlines Flight 587
American Airlines Flight 587, an Airbus A300, crashed in November 2001, when the vertical stabilizer broke off due to excessive rudder inputs made by the pilot.
A flight-envelope protection system could have prevented this crash, though it can still be argued that an override button should be provided for contingencies when the pilots are aware of the need to exceed normal limits.
US Airways Flight 1549
US Airways Flight 1549, an Airbus A320, experienced a dual engine failure after a bird strike and subsequently landed safely in the Hudson River in January 2009. The NTSB accident report mentions the effect of flight envelope protection: "The airplane’s airspeed in the last 150 feet of the descent was low enough to activate the alpha-protection mode of the airplane’s fly-by-wire envelope protection features... Because of these features, the airplane could not reach the maximum angle of attack (AoA) attainable in pitch normal law for the airplane weight and configuration; however, the airplane did provide maximum performance for the weight and configuration at that time... The flight envelope protections allowed the captain to pull full aft on the sidestick without the risk of stalling the airplane."
Qantas Flight 72
Qantas Flight 72 suffered an uncommanded pitch-down due to erroneous data from one of its ADIRU computers.
Air France Flight 447
Air France Flight 447, an Airbus A330, entered an aerodynamic stall from which it did not recover and crashed into the Atlantic Ocean in June 2009 killing all aboard.
Temporary inconsistency between measured speeds, likely a result of the obstruction of the pitot tubes by ice crystals, caused autopilot disconnection and reconfiguration to alternate law; a second consequence of the reconfiguration into alternate law was that stall protection no longer operated.
The crew made inappropriate control inputs that caused the aircraft to stall and did not recognize that the aircraft had stalled.
MCAS on the Boeing 737 MAX
In October 2018 and again in March 2019, the MCAS flight protection system's erroneous activation pushed two Boeing 737 MAX airliners into unrecoverable dives, killing 346 people and resulting in the worldwide grounding of the airliner.
See also
Aircraft flight control system
Flight envelope
Notes
Aerospace engineering
Aircraft controls
Aviation risks
Aviation safety
Avionics
Control engineering
Technology systems
User interfaces | Flight envelope protection | Technology,Engineering | 1,960 |
2,985,223 | https://en.wikipedia.org/wiki/Apomorphine | Apomorphine, sold under the brand name Apokyn among others, is a type of aporphine having activity as a non-selective dopamine agonist which activates both D2-like and, to a much lesser extent, D1-like receptors. It also acts as an antagonist of 5-HT2 and α-adrenergic receptors with high affinity. The compound is an alkaloid found in Nymphaea caerulea, the blue lotus, but is also historically known as a morphine decomposition product made by boiling morphine with concentrated acid, hence the -morphine suffix. Contrary to its name, apomorphine does not actually contain morphine or its skeleton, nor does it bind to opioid receptors. The apo- prefix relates to it being a morphine derivative ("[comes] from morphine").
Historically, apomorphine has been tried for a variety of uses, including as a way to relieve anxiety and craving in alcoholics, an emetic (to induce vomiting), for treating stereotypies (repeated behaviour) in farmyard animals, and more recently in treating erectile dysfunction. Currently, apomorphine is used in the treatment of Parkinson's disease. It is a potent emetic and should not be administered without an antiemetic such as domperidone. The emetic properties of apomorphine are exploited in veterinary medicine to induce therapeutic emesis in canines that have recently ingested toxic or foreign substances.
Apomorphine was also used as a private treatment of heroin addiction, a purpose for which it was championed by the author William S. Burroughs. Burroughs and others claimed that it was a "metabolic regulator" with a restorative dimension to a damaged or dysfunctional dopaminergic system. Despite anecdotal evidence that this offers a plausible route to an abstinence-based mode, no clinical trials have ever tested this hypothesis. A recent study indicates that apomorphine might be a suitable marker for assessing central dopamine system alterations associated with chronic heroin consumption. There is, however, no clinical evidence that apomorphine is an effective and safe treatment regimen for opiate addiction.
Medical uses
Apomorphine is used for intermittent hypomobility ("off" episodes) in advanced Parkinson's disease, where a decreased response to an anti-Parkinson drug such as L-DOPA causes muscle stiffness and loss of muscle control. While apomorphine can be used in combination with L-DOPA, the intention is usually to reduce the L-DOPA dosing, as by this stage the patient often has many L-DOPA-induced dyskinesias and periods of hypermobility. When an episode sets in, the apomorphine is injected subcutaneously or applied sublingually, and signs subside. It is used an average of three times a day. Some people use portable mini-pumps that continuously infuse them with apomorphine, allowing them to stay in the "on" state and to use apomorphine as an effective monotherapy.
Contraindications
The main and absolute contraindication to using apomorphine is the concurrent use of adrenergic receptor antagonists; combined, they cause a severe drop in blood pressure and fainting. Alcohol causes an increased frequency of orthostatic hypotension (a sudden drop in blood pressure when getting up), and can also increase the chances of pneumonia and heart attacks. Dopamine antagonists, by their nature of competing for sites at dopamine receptors, reduce the effectiveness of the agonistic apomorphine.
IV administration of apomorphine is highly discouraged, as it can crystallize in the veins and create a blood clot (thrombus) and block a pulmonary artery (pulmonary embolism).
Side effects
Nausea and vomiting are common side effects when first beginning therapy with apomorphine; antiemetics such as trimethobenzamide or domperidone, dopamine antagonists, are often used while first starting apomorphine. Around 50% of people grow tolerant enough to apomorphine's emetic effects that they can discontinue the antiemetic.
Other side effects include orthostatic hypotension and resultant fainting, sleepiness, dizziness, runny nose, sweating, paleness, and flushing. More serious side effects include dyskinesias (especially when taking L-DOPA), fluid accumulation in the limbs (edema), suddenly falling asleep, confusion and hallucinations, increased heart rate and heart palpitations, and persistent erections (priapism). The priapism is caused by apomorphine increasing arterial blood supply to the penis. This side effect has been exploited in studies attempting to treat erectile dysfunction.
Pharmacology
Mechanism of action
Apomorphine's R-enantiomer is an agonist of both D1 and D2 dopamine receptors, with higher activity at D2. The members of the D2 subfamily, consisting of D2, D3, and D4 receptors, are inhibitory G protein–coupled receptors. The D4 receptor in particular is an important target in the signaling pathway, and is connected to several neurological disorders. Shortage or excess of dopamine can prevent proper function and signaling of these receptors leading to disease states.
Apomorphine improves motor function by activating dopamine receptors in the nigrostriatal pathway, the limbic system, the hypothalamus, and the pituitary gland. It also increases blood flow to the supplementary motor area and to the dorsolateral prefrontal cortex (stimulation of which has been found to reduce the tardive dyskinesia effects of L-DOPA). Parkinson's has also been found to have excess iron at the sites of neurodegeneration; both the (R)- and (S)-enantiomers of apomorphine are potent iron chelators and radical scavengers.
Apomorphine also decreases the breakdown of dopamine in the brain (though it inhibits its synthesis as well). It is an upregulator of certain neural growth factors, in particular NGF but not BDNF, epigenetic downregulation of which has been associated with addictive behaviour in rats.
Apomorphine causes vomiting by acting on dopamine receptors in the chemoreceptor trigger zone of the medulla; this activates the nearby vomiting center.
Pharmacokinetics
While apomorphine has lower bioavailability when taken orally, due to not being absorbed well in the GI tract and undergoing heavy first-pass metabolism, it has a bioavailability of 100% when given subcutaneously. It reaches peak plasma concentration in 10–60 minutes. Ten to twenty minutes after that, it reaches its peak concentration in the cerebrospinal fluid. Its lipophilic structure allows it to cross the blood–brain barrier.
Apomorphine possesses measurable affinity for a number of receptors (note that a higher Ki indicates a lower affinity). It has a Ki of over 10,000 nM (and thus negligible affinity) for β-adrenergic, H1, and mACh receptors.
Apomorphine has a high clearance rate (3–5 L/kg/hr) and is mainly metabolized and excreted by the liver. It is likely that while the cytochrome P450 system plays a minor role, most of apomorphine's metabolism happens via auto-oxidation, O-glucuronidation, O-methylation, N-demethylation, and sulfation. Only 3–4% of the apomorphine is excreted unchanged and into the urine. The half-life is 30–60 minutes, and the effects of the injection last for up to 90 minutes.
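As a rough illustration of what a 30–60 minute half-life implies, the sketch below applies the standard first-order elimination relationship (the remaining fraction is 2^(−t/t½)); the 45-minute half-life used here is simply the midpoint of the quoted range, chosen for illustration.

```python
# Illustrative first-order elimination using the half-life range quoted above.
# The 45-minute half-life is an assumed midpoint, not a clinical parameter.

def remaining_fraction(t_min: float, half_life_min: float = 45.0) -> float:
    """Fraction of drug remaining after t_min minutes, assuming first-order kinetics."""
    return 0.5 ** (t_min / half_life_min)

if __name__ == "__main__":
    for t in (30, 60, 90):
        print(f"after {t} min: {remaining_fraction(t):.0%} remaining")
```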
Toxicity depends on the route of administration; the LD50s in mice were 300 mg/kg for the oral route, 160 mg/kg for intraperitoneal, and 56 mg/kg intravenous.
Chemistry
Properties
Apomorphine has a catechol structure similar to that of dopamine.
Synthesis
Several techniques exist for the creation of apomorphine from morphine. In the past, morphine had been combined with hydrochloric acid at high temperatures (around 150 °C) to achieve a low yield of apomorphine, ranging anywhere from 0.6% to 46%.
More recent techniques create the apomorphine in a similar fashion, by heating it in the presence of any acid that will promote the essential dehydration rearrangement of morphine-type alkaloids, such as phosphoric acid. The method then deviates by including a water scavenger, which is essential to remove the water produced by the reaction that can react with the product and lead to decreased yield. The scavenger can be any reagent that will irreversibly react with water such as phthalic anhydride or titanium chloride. The temperature required for the reaction varies based upon choice of acid and water scavenger. The yield of this reaction is much higher: at least 55%.
History
The pharmacological effects of the naturally-occurring analog aporphine in the blue lotus (Nymphaea caerulea) were known to the ancient Egyptians and Mayans, with the plant featuring in tomb frescoes and associated with entheogenic rites. It is also observed in Egyptian erotic cartoons, suggesting that they were aware of its erectogenic properties.
The modern medical history of apomorphine begins with its synthesis by Arppe in 1845 from morphine and sulfuric acid, although it was named sulphomorphide at first. Matthiesen and Wright (1869) used hydrochloric acid instead of sulfuric acid in the process, naming the resulting compound apomorphine. Initial interest in the compound was as an emetic, tested and confirmed safe by London doctor Samuel Gee, and for the treatment of stereotypies in farmyard animals. Key to the use of apomorphine as a behavioural modifier was the research of Erich Harnack, whose experiments in rabbits (which do not vomit) demonstrated that apomorphine had powerful effects on the activity of rabbits, inducing licking, gnawing and in very high doses convulsions and death.
Treatment of alcoholism
Apomorphine was one of the earliest pharmacotherapies used for alcoholism. The Keeley Cure (1870s to 1900) contained apomorphine, among other ingredients, but the first medical reports of its use for more than pure emesis come from James Tompkins and Charles Douglas. Tompkins reported on its effects after injection of 6.5 mg ("one tenth of a grain"), while Douglas saw two purposes for apomorphine. This use of small, continuous doses (1/30th of a grain, or 2.16 mg, by Douglas) of apomorphine to reduce alcoholic craving comes some time before Pavlov's discovery and publication of the idea of the "conditioned reflex" in 1903. The method was not limited to Douglas; the Irish doctor Francis Hare, who worked in a sanatorium outside London from 1905 onward, also used low-dose apomorphine as a treatment, describing it as "the most useful single drug in the therapeutics of inebriety". He also noted there appeared to be a significant prejudice against the use of apomorphine, both from the associations of its name and from doctors being reluctant to give hypodermic injections to alcoholics. In the US, the Harrison Narcotics Tax Act made working with any morphine derivatives extremely difficult, despite apomorphine itself not being an opiate.
In the 1950s the neurotransmitter dopamine was discovered in the brain by Katharine Montagu, and characterised as a neurotransmitter a year later by Arvid Carlsson, for which he would be awarded the Nobel Prize. A. N. Ernst then discovered in 1965 that apomorphine was a powerful stimulant of dopamine receptors. This, along with the use of sublingual apomorphine tablets, led to a renewed interest in the use of apomorphine as a treatment for alcoholism. A series of studies of non-emetic apomorphine in the treatment of alcoholism were published, with mostly positive results. However, there was little clinical consequence.
Parkinson's disease
The use of apomorphine to treat "the shakes" was first suggested by Weil in France in 1884, although seemingly not pursued until 1951. Its clinical use was first reported in 1970 by Cotzias et al., although its emetic properties and short half-life made oral use impractical. A later study found that combining the drug with the antiemetic domperidone improved results significantly. The commercialization of apomorphine for Parkinson's disease followed its successful use in patients with refractory motor fluctuations using intermittent rescue injections and continuous infusions.
Aversion therapy
Aversion therapy in alcoholism had its roots in Russia in the early 1930s, with early papers by Pavlov, Galant and Sluchevsky and Friken, and it would remain part of the Soviet treatment of alcoholism well into the 1980s. In the US a particularly notable devotee was Dr Voegtlin, who attempted aversion therapy using apomorphine in the mid to late 1930s. However, he found apomorphine less able to induce negative feelings in his subjects than the stronger and more unpleasant emetic emetine.
In the UK, however, the publication of J. Y. Dent's 1934 paper "Apomorphine in the treatment of Anxiety States" (Dent later went on to treat Burroughs) laid out the main method by which apomorphine would be used to treat alcoholism in Britain. His method in that paper is clearly influenced by the then-novel idea of aversion. However, even in 1934 he was suspicious of the idea that the treatment was a pure conditioned reflex – "though vomiting is one of the ways that apomorphine relieves the patient, I do not believe it to be its main therapeutic effect" – and by 1948 he had written further on the subject. This led to his development of lower-dose and non-aversive methods, which would inspire a positive trial of his method in Switzerland by Dr Harry Feldmann and later scientific testing in the 1970s, some time after his death. The use of apomorphine in aversion therapy had nonetheless spread beyond alcoholism: its use to treat homosexuality led to the death of British Army Captain Billy Clegg Hill in 1962, helping to cement its reputation as a dangerous drug used primarily in archaic behavioural therapies.
Opioid addiction
In his Deposition: Testimony Concerning a Sickness in the introduction to later editions of Naked Lunch (first published in 1959), William S. Burroughs wrote that apomorphine treatment was the only effective cure to opioid addiction he has encountered:
He goes on to lament the fact that, as of his writing, little to no research had been done on apomorphine or variations of the drug to study its effects on curing addiction, or the possibility of retaining the positive effects while removing the side effect of vomiting.
Despite his claims throughout his life, Burroughs never really cured his addiction and was back to using opiates within years of his apomorphine "cure". However, he insisted on apomorphine's effectiveness in several works and interviews.
Society and culture
Apomorphine has a vital part in Agatha Christie's detective story Sad Cypress.
The 1965 Tuli Kupferberg song "Hallucination Horrors" recommends apomorphine at the end of each verse as a cure for hallucinations brought on by a humorous variety of intoxicants; the song was recorded by The Fugs and appears on the album Virgin Fugs.
Research
There is renewed interest in the use of apomorphine to treat addiction, in both smoking cessation and alcoholism. As the drug is known to be reasonably safe for use in humans, it is a viable target for repurposing.
Apomorphine has been researched as a possible treatment for erectile dysfunction and female hypoactive sexual desire disorder, though its efficacy has been limited. Nonetheless, it was under development as a treatment for erectile dysfunction by TAP Pharmaceuticals under the brand name Uprima. In 2000, TAP withdrew its new drug application after an FDA review panel raised questions about the drug's safety, due to many clinical trial subjects fainting after taking the drug.
Alzheimer's disease
Apomorphine is reported to be an inhibitor of amyloid beta protein fiber formation, whose presence is a hallmark of Alzheimer's disease, and a potential therapeutic under the amyloid hypothesis.
Alternative administration routes
Two routes of administration are currently clinically utilized: subcutaneous (either as intermittent injections or continuous infusion) and sublingual. Other non-invasive administration routes were investigated as a substitute for parenteral administration, reaching different preclinical and clinical stages. These include: peroral, nasal, pulmonary, transdermal, rectal, and buccal, as well as iontophoresis methods.
Veterinary use
Apomorphine is used to induce vomiting in dogs after ingestion of various toxins or foreign bodies. It can be given subcutaneously, intramuscularly, intravenously, or, when a tablet is crushed, in the conjunctiva of the eye. The oral route is ineffective, as apomorphine cannot cross the blood–brain barrier fast enough, and blood levels don't reach a high enough concentration to stimulate the chemoreceptor trigger zone. It can remove around 40–60% of the contents in the stomach.
One of the reasons apomorphine is a preferred drug is its reversibility: in cases of prolonged vomiting, the apomorphine can be reversed with dopamine antagonists like the phenothiazines (for example, acepromazine). Giving apomorphine after giving acepromazine, however, will no longer stimulate vomiting, because apomorphine's target receptors are already occupied. An animal who undergoes severe respiratory depression due to apomorphine can be treated with naloxone.
Apomorphine does not work in cats, which have too few dopamine receptors.
Related compounds
Mdo-npa, the methylenedioxy analog of apomorphine, has greater bioavailability and a longer duration of action.
References
5-HT2A antagonists
5-HT2B antagonists
5-HT2C antagonists
Alpha blockers
Catechols
Dibenzoquinolines
D1-receptor agonists
D2-receptor agonists
D3 receptor agonists
D4 receptor agonists
D5 receptor agonists
Erectile dysfunction drugs
Sexual orientation and medicine
Emetics | Apomorphine | Chemistry | 4,000 |
62,582,674 | https://en.wikipedia.org/wiki/HD%2060803 | HD 60803 is a binary star system in the equatorial constellation of Canis Minor, located less than a degree to the northwest of the prominent star Procyon. It has a yellow hue and is visible to the naked eye as a dim point of light with a combined apparent visual magnitude of 5.904. The distance to this system is 135 light years, as determined from parallax measurements, and it is drifting farther away with a radial velocity of +4.6 km/s.
The binary nature of this star system was first noted by O. C. Wilson and A. Skumanich in 1964. It is a double-lined spectroscopic binary with an orbital period of 26.2 days and an eccentricity of 0.22. Both components are similar, G-type main-sequence stars; the primary has a stellar classification of G0V while the secondary has a class of G1V. The masses are similar to each other, and are 28–31% greater than the mass of the Sun. They have low rotation rates which may be quasi-synchronized with their orbital period.
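Taking the quoted 26.2-day period and component masses roughly 30% greater than the Sun's, Kepler's third law gives a back-of-the-envelope estimate of the orbital separation; this is an illustrative calculation, not a published measurement.

```python
# Rough semi-major-axis estimate from Kepler's third law: a^3 = M_total * P^2
# (a in AU, M in solar masses, P in years). Masses are the approximate values quoted above.

period_years = 26.2 / 365.25          # orbital period from the text
m_total = 1.3 + 1.3                   # both components ~30% more massive than the Sun (assumed)
a_au = (m_total * period_years ** 2) ** (1.0 / 3.0)

print(f"estimated separation: {a_au:.2f} AU")   # roughly 0.24 AU
```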
References
G-type main-sequence stars
Spectroscopic binaries
Canis Minor
Durchmusterung objects
060803
037031 | HD 60803 | Astronomy | 255 |
20,888,413 | https://en.wikipedia.org/wiki/Force%20lines | Force lines is a method used in solid mechanics for visualization of internal forces in a deformed body. A force line is a curve representing graphically the internal force acting within a body across imaginary internal surfaces. The force lines show the maximal internal forces and their directions.
Force lines drawing
The procedure for determining the force lines consists of two stages:
1) Defining the internal surface. The surface is perpendicular to maximum principal stress in every point of the solid.
2) Integration of internal stresses on the surface. Stress is a measure of the average amount of force exerted per unit area. The stress distribution can be obtained from known theoretical or numerical (Finite element method) analysis.
The researcher who builds up the force lines can choose a magnitude of the internal force and the initial border where the drawing procedure starts.
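The first stage of this procedure, finding the direction of maximum principal stress at a point, amounts to an eigen-decomposition of the local stress tensor. The sketch below does this for a single two-dimensional stress state; the numerical values of the stress components are arbitrary examples.

```python
# Direction of the maximum principal stress for a 2D stress state (illustrative values).
import numpy as np

# Cauchy stress tensor at a point: [[sigma_xx, tau_xy], [tau_xy, sigma_yy]] in MPa (assumed numbers)
stress = np.array([[120.0, 30.0],
                   [30.0, 40.0]])

# Principal stresses are the eigenvalues; their directions are the eigenvectors.
values, vectors = np.linalg.eigh(stress)
i_max = int(np.argmax(values))

print("principal stresses (MPa):", values)
print("max principal stress direction (unit vector):", vectors[:, i_max])
# A force-line surface through this point would be drawn perpendicular to this direction.
```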
Figure 1 shows an example of force lines in a body with a hole under tension. The force lines are denser near the hole. The visualization helps to explain the stress concentration.
Figure 2 shows the force lines in a body with a crack. The cracks are the most dangerous stress concentrator: the intensity of the force lines is high in the crack tip (see Fracture mechanics).
Figure 3 shows the case of pure bending of a beam with rectangular cross section. There are no internal forces at the neutral axis of the beam. The tensile and compressive force lines are symmetrical and are denser at the beam’s edge.
Application
The force lines pictures are used for
1) Analysis of stress concentration (Figure 1 and Figure 2): the number of force lines increases in areas with stress concentration.
2) Optimization of structures: reinforcing the structure in the areas with concentration of force lines and deleting the components where there are no force lines.
See also
Fracture
Engineering stress
Stress concentration
Stress intensity factor
Strength of materials
Structural fracture mechanics
References
Structural engineering
Materials science
Continuum mechanics
Fracture mechanics
Solid mechanics
Curves | Force lines | Physics,Materials_science,Engineering | 388 |
8,946 | https://en.wikipedia.org/wiki/Decipherment | In philology and linguistics, decipherment is the discovery of the meaning of the symbols found in extinct languages and/or alphabets. Decipherment is possible with respect to languages and scripts. One can also study or try to decipher how spoken languages that no longer exist were once pronounced, or how living languages used to be pronounced in prior eras.
Notable examples of decipherment include the decipherment of ancient Egyptian scripts and the decipherment of cuneiform. A notable decipherment in recent years is that of the Linear Elamite script. Today, at least a dozen languages remain undeciphered. Historically speaking, decipherments do not come suddenly through single individuals who "crack" ancient scripts. Instead, they emerge from the incremental progress brought about by a broader community of researchers.
Decipherment should not be confused with cryptanalysis, which aims to decipher special written codes or ciphers used in intentionally concealed secret communication (especially during war). It should also not be confused with determining the meaning of ambiguous text in a known language (interpretation).
Categories
According to Gelb and Whiting, the approach of decipherment depends on four categories of situations in an undeciphered language:
Type O: known writing and known language. Although decipherment in this case is trivial, useful information can be gleaned when a known language is written in an alphabet other than the one it is commonly written in. Studying the writing of the Phoenician or Sumerian languages in the Greek alphabet allows information about pronunciation and vocalization to be gleaned that cannot be obtained when studying the expression of these languages in their normal writing system.
Type I: unknown writing and known language. Deciphered languages in this category include Phoenician, Ugaritic, Cypriot, and Linear B. In this situation, alphabetic systems are the easiest to decipher, followed by syllabic languages, and finally the most difficult being logo-syllabic.
Type II: known writing and unknown language. An example is Linear A. Strictly speaking, this situation is not one of decipherment but of linguistic analysis. Decipherment in this category is considered extremely difficult to achieve on the basis of internal information only.
Type III: unknown writing and unknown language. Examples include the Archanes script and the Archanes formula, Phaistos disk, Cretan hieroglyphs, and Cypro-Minoan syllabary. When this situation occurs in an isolated culture and without the availability of outside information, decipherment is typically considered impossible.
Methods
There is no single recipe or linear method for decipherment, however; instead, philologists and linguists must rely on a set of established heuristic devices. Broadly, it is important to be familiar with the texts in which the script or language occurs, to have access to accurate drawings or photographs of those texts, to know their relative chronology, and to have background information on the context in which the texts were found (their geography, whether they come from a funerary monument, etc.).
These methods can be divided into approaches utilizing external or internal information.
External information
Many successful decipherments have proceeded from the discovery of external information, a common example being the use of multilingual inscriptions, such as the Rosetta Stone (with the same text in three scripts: Demotic, hieroglyphic, and Greek), which enabled the decipherment of Egyptian hieroglyphic. In principle, multilingual text may be insufficient for a decipherment as translation is not a linear and reversible process, but instead represents an encoding of the message in a different symbolic system. Translating a text from one language into a second, and then from the second language back into the first, rarely reproduces exactly the original writing. Likewise, unless a significant number of words are contained in the multilingual text, limited information can be gleaned from it.
Internal information
Internal approaches are multi-step: one must first ensure that the writing they are looking at represents real writing, as opposed to a grouping of pictorial representations or a modern-day forgery without further meaning. This is commonly approached with methods from the field of grammatology. Prior to decipherment of meaning, one can then determine the number of distinct graphemes (which, in turn, allows one to tell if the writing system is alphabetic, syllabic, or logo-syllabic; this is because such writing systems typically do not overlap in the number of graphemes they use), the sequence of writing (whether it be from left to right, right to left, top to bottom, etc.), and the determination of whether individual words are properly segmented when the alphabet is written (such as with the use of a space or a different special mark) or not. If a repetitive schematic arrangement can be identified, this can help in decipherment. For example, if the last line of a text has a small number, it can be reasonably guessed to be referring to the date, where one of the words means "year" and, sometimes, a royal name also appears. Another case is when the text contains many small numbers, followed by a word, followed by a larger number; here, the word likely means "total" or "sum". After one has exhausted the information that can be inferentially derived from probable content, they must transition to the systematic application of statistical tools. These include methods concerning the frequency of appearance of each symbol, the order in which these symbols typically appear, whether some symbols appear at the beginning or end of words, etc. There are situations where orthographic features of a language make it difficult if not impossible to decipher specific features (especially without certain outside information), such as when an alphabet does not express double consonants. Additional, and more complex methods, also exist. Eventually, the application of such statistical methods becomes exceedingly laborious, in which computers might be used to apply them automatically.
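A first pass at the statistical tools described above (how often each sign occurs, and whether particular signs favour word-initial or word-final position) can be sketched in a few lines; the space-segmented corpus string is an invented placeholder rather than a real undeciphered text.

```python
# Minimal sign-frequency and positional statistics over a toy, space-segmented "corpus".
from collections import Counter

corpus = "ba ka ta ba na ka ba ta"      # placeholder text, not a real undeciphered script
words = corpus.split()

sign_freq = Counter(ch for word in words for ch in word)
word_initial = Counter(word[0] for word in words)
word_final = Counter(word[-1] for word in words)

print("overall sign frequencies:", sign_freq.most_common())
print("signs at word beginnings:", word_initial.most_common())
print("signs at word endings:   ", word_final.most_common())
```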
Computational approaches
Computational approaches towards the decipherment of unknown languages began to appear in the late 1990s. Typically, there are two types of computational approaches used in language decipherment: approaches meant to produce translations in known languages, and approaches used to detect new information that might enable future efforts at translation. The second approach is more common, and includes things such as the detection of cognates or related words, discovery of the closest known language, word alignments, and more.
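One of the helper tasks mentioned above, cognate detection, is often approximated with simple string-similarity measures. The sketch below scores candidate pairs with a normalized Levenshtein distance; the word pairs are invented look-alikes used purely as stand-ins for a real bilingual word list.

```python
# Toy cognate scoring with normalized Levenshtein distance (smaller = more similar).
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

pairs = [("noche", "notte"), ("leche", "latte"), ("noche", "milk")]  # invented examples
for a, b in pairs:
    score = levenshtein(a, b) / max(len(a), len(b))
    print(f"{a!r} ~ {b!r}: normalized distance {score:.2f}")
```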
Artificial intelligence
In recent years, there has been a growing emphasis on methods utilizing artificial intelligence for the decipherment of lost languages, especially through natural language processing (NLP) methods. Proof-of-concept methods have independently re-deciphered Ugaritic and Linear B using data from similar languages, in this case Hebrew and Ancient Greek.
Deciphering pronunciation
Related to attempts to decipher the meaning of languages and alphabets are attempts to determine how extinct writing systems, or older versions of contemporary writing systems (such as English in the 1600s), were pronounced. Several methods and criteria have been developed in this regard. Important criteria include (1) rhymes and the testimony of poetry, (2) evidence from occasional spellings and misspellings, (3) interpretations of material in one language by authors writing in foreign languages, (4) information obtained from related languages, and (5) grammatical changes in spelling over time.
For example, analysis of poetry focuses on the use of wordplay or literary techniques between words that have a similar sound. Shakespeare's play Romeo and Juliet contains wordplay that relies on a similar sound between the words "soul" and "soles", allowing confidence that the similar pronunciation between the terms today also existed in Shakespeare's time. Another common source of information on pronunciation is when earlier texts use rhyme, such as when consecutive lines in poetry end in the similar or the same sound. This method does have some limitations however, as texts may use rhymes that rely on visual similarities between words (such as 'love' and 'remove') as opposed to auditory similarities, and that rhymes can be imperfect. Another source of information about pronunciation comes from explicit description of pronunciations from earlier texts, as in the case of the Grammatica Anglicana, such as in the following comment about the letter <o>: "In the long time it naturally soundeth sharp, and high; as in chósen, hósen, hóly, fólly [. . .] In the short time more flat, and a kin to u; as còsen, dòsen, mòther, bròther, lòve, pròve". Another example comes from detailed comments on pronunciations of Sanskrit from the surviving works of Sanskrit grammarians.
Challenges
Many challenges exist in the decipherment of languages, including when:
When it is not known which language is closest to it.
When the words in the script are not clearly segmented, like in some Iberian languages.
When the writing system is not known. Specifically, if there is little certainty about the number of graphemes that exist in a certain writing system, it cannot be determined whether that system is an alphabet, a syllabary, a logosyllabary, or something else.
When the reading direction is not known. For example, it may not be clear if a writing system is meant to be read from left to right, or from right to left.
When it is not known if a script uses punctuation or spaces between words.
When the language of a script subject to decipherment efforts is not known.
When there is a small dataset available to learn about the properties of a script. This could lead to issues such as an incomplete vocabulary being known for the script.
When the typical order between subjects, objects, and verbs is not known.
When it is not known whether or how certain words can change their form.
When it is not known when multiple symbols are used to represent the same sound, syllable, word, concept, or idea (allographs).
When it is not clear how the penmanship or the style of writing of a particular scribe relates to the style of writing of another scribe working in the same text (the same letters or words might be written in a way that looks different), in which case it is difficult to correlate information across multiple examples of the use of the writing system.
When it is not known if certain words change their meaning depending on the context they appear in (homonyms).
When the context of discovery of a writing is not known. This is because information about the location out of which a writing system came from can provide valuable information about its relationship to known languages.
When adequate digital datasets for documented writing systems are not available, limiting the ability to use computational methods for decipherment.
When sufficient hardware resources, such as high-performance computing, are not available (which might be necessary for more energy-intensive computational methods).
Relationship to cryptanalysis
Decipherment overlaps with another technical field known as cryptanalysis, a field that aims to decipher writings used in secret communication, known as ciphertext. A famous case of this was the cryptanalysis of the Enigma during World War II. Many other ciphers from past wars have only recently been cracked. Unlike in language decipherment, however, actors using ciphertext intentionally lay obstacles to prevent outsiders from uncovering the meaning of the communication system.
History
Interest in ancient scripts and dead languages began to arise by the Renaissance, if not earlier. Extensive information began to be collected about these scripts in the 16th and 17th centuries, and a typology of writing was established in the 17th century. The first serious decipherments, however, did not take place until the 18th century. In 1754, Swinton and Barthélemy independently deciphered the Aramaic script as represented in Palmyrene inscriptions from the 3rd century AD. In 1787, Silvestre de Sacy deciphered the Sasanian script, which was the script used in Ancient Persia to write down the Middle Iranian language used in the Sasanian empire. Both decipherments relied on bilingual texts where Greek was included as the second script. It was also in the 18th century when the methodological framework for deciphering scripts and languages began to be established. For example, in 1714, Leibniz advocated that parallel content in bilingual inscriptions could be specified by correlating where personal names occur in both inscriptions. By the 19th century, the prerequisites for decipherment began to become widely available. These included extensive knowledge about the scripts themselves, adequate editions of known texts from that script, philological skills, and the ability to reconstruct linguistic forms from the limited available evidence. The 19th century saw two major successes in decipherment: that of Egyptian hieroglyphic and cuneiform.
Notable decipherers
See also
Deciphered scripts
Cuneiform
Egyptian hieroglyphs
Kharoshthi
Linear B
Mayan
Staveless Runes
Cypriot Syllabary
Undeciphered scripts
Rongorongo (Decipherment of rongorongo)
Indus script
Cretan hieroglyphs
Byblos syllabary
Linear A
Cypro-Minoan syllabary
Espanca
Numidian language
Undeciphered texts
Phaistos Disc
Rohonc Codex
Voynich Manuscript
References
Further reading
Cryptography
Writing systems
Genetics terms
Philology
Decipherment | Decipherment | Mathematics,Engineering,Biology | 2,783 |
5,879 | https://en.wikipedia.org/wiki/Caesium | Caesium (IUPAC spelling; also spelled cesium in American English) is a chemical element; it has symbol Cs and atomic number 55. It is a soft, silvery-golden alkali metal with a melting point of , which makes it one of only five elemental metals that are liquid at or near room temperature. Caesium has physical and chemical properties similar to those of rubidium and potassium. It is pyrophoric and reacts with water even at . It is the least electronegative stable element, with a value of 0.79 on the Pauling scale. It has only one stable isotope, caesium-133. Caesium is mined mostly from pollucite. Caesium-137, a fission product, is extracted from waste produced by nuclear reactors. It has the largest atomic radius of all elements whose radii have been measured or calculated, at about 260 picometres.
The German chemist Robert Bunsen and physicist Gustav Kirchhoff discovered caesium in 1860 by the newly developed method of flame spectroscopy. The first small-scale applications for caesium were as a "getter" in vacuum tubes and in photoelectric cells. Caesium is widely used in highly accurate atomic clocks. In 1967, the International System of Units began using a specific hyperfine transition of neutral caesium-133 atoms to define the basic unit of time, the second.
Since the 1990s, the largest application of the element has been as caesium formate for drilling fluids, but it has a range of applications in the production of electricity, in electronics, and in chemistry. The radioactive isotope caesium-137 has a half-life of about 30 years and is used in medical applications, industrial gauges, and hydrology. Nonradioactive caesium compounds are only mildly toxic, but the pure metal's tendency to react explosively with water means that caesium is considered a hazardous material, and the radioisotopes present a significant health and environmental hazard.
Spelling
Caesium is the spelling recommended by the International Union of Pure and Applied Chemistry (IUPAC). The American Chemical Society (ACS) has used the spelling cesium since 1921, following Webster's New International Dictionary. The element was named after the Latin word caesius, meaning "bluish grey". In medieval and early modern writings caesius was spelled with the ligature æ as cæsius; hence, an alternative but now old-fashioned orthography is cæsium. For more on the spelling, see ae/oe vs e.
Characteristics
Physical properties
Of all elements that are solid at room temperature, caesium is the softest: it has a hardness of 0.2 Mohs. It is a very ductile, pale metal, which darkens in the presence of trace amounts of oxygen. When in the presence of mineral oil (where it is best kept during transport), it loses its metallic lustre and takes on a duller, grey appearance. It has a melting point of , making it one of the few elemental metals that are liquid near room temperature. The others are rubidium (), francium (estimated at ), mercury (), and gallium (); bromine is also liquid at room temperature (melting at ), but it is a halogen and not a metal. Mercury is the only stable elemental metal with a known melting point lower than caesium. In addition, the metal has a rather low boiling point, , the lowest of all stable metals other than mercury. Copernicium and flerovium have been predicted to have lower boiling points than mercury and caesium, but they are extremely radioactive and it is not certain if they are metals.
Caesium forms alloys with the other alkali metals, gold, and mercury (amalgams). At temperatures below , it does not alloy with cobalt, iron, molybdenum, nickel, platinum, tantalum, or tungsten. It forms well-defined intermetallic compounds with antimony, gallium, indium, and thorium, which are photosensitive. It mixes with all the other alkali metals (except lithium); the alloy with a molar distribution of 41% caesium, 47% potassium, and 12% sodium has the lowest melting point of any known metal alloy, at . A few amalgams have been studied: is black with a purple metallic lustre, while CsHg is golden-coloured, also with a metallic lustre.
The golden colour of caesium comes from the decreasing frequency of light required to excite electrons of the alkali metals as the group is descended. For lithium through rubidium this frequency is in the ultraviolet, but for caesium it enters the blue–violet end of the spectrum; in other words, the plasmonic frequency of the alkali metals becomes lower from lithium to caesium. Thus caesium transmits and partially absorbs violet light preferentially while other colours (having lower frequency) are reflected; hence it appears yellowish. Its compounds burn with a blue or violet colour.
Allotropes
Caesium exists in the form of different allotropes, one of them a dimer called dicaesium.
Chemical properties
Caesium metal is highly reactive and pyrophoric. It ignites spontaneously in air, and reacts explosively with water even at low temperatures, more so than the other alkali metals. It reacts with ice at temperatures as low as . Because of this high reactivity, caesium metal is classified as a hazardous material. It is stored and shipped in dry, saturated hydrocarbons such as mineral oil. It can be handled only under inert gas, such as argon. However, a caesium-water explosion is often less powerful than a sodium-water explosion with a similar amount of sodium. This is because caesium explodes instantly upon contact with water, leaving little time for hydrogen to accumulate. Caesium can be stored in vacuum-sealed borosilicate glass ampoules. In quantities of more than about , caesium is shipped in hermetically sealed, stainless steel containers.
The chemistry of caesium is similar to that of other alkali metals, in particular rubidium, the element above caesium in the periodic table. As expected for an alkali metal, the only common oxidation state is +1. It differs from this value in caesides, which contain the Cs− anion and thus have caesium in the −1 oxidation state. Under conditions of extreme pressure (greater than 30 GPa), theoretical studies indicate that the inner 5p electrons could form chemical bonds, where caesium would behave as the seventh 5p element, suggesting that higher caesium fluorides with caesium in oxidation states from +2 to +6 could exist under such conditions. Some slight differences arise from the fact that it has a higher atomic mass and is more electropositive than other (nonradioactive) alkali metals. Caesium is the most electropositive chemical element. The caesium ion is also larger and less "hard" than those of the lighter alkali metals.
Compounds
Most caesium compounds contain the element as the cation , which binds ionically to a wide variety of anions. One noteworthy exception is the caeside anion (), and others are the several suboxides (see section on oxides below). More recently, caesium is predicted to behave as a p-block element and capable of forming higher fluorides with higher oxidation states (i.e., CsFn with n > 1) under high pressure. This prediction needs to be validated by further experiments.
Salts of Cs+ are usually colourless unless the anion itself is coloured. Many of the simple salts are hygroscopic, but less so than the corresponding salts of lighter alkali metals. The phosphate, acetate, carbonate, halides, oxide, nitrate, and sulfate salts are water-soluble. Its double salts are often less soluble, and the low solubility of caesium aluminium sulfate is exploited in refining Cs from ores. The double salts with antimony (such as ), bismuth, cadmium, copper, iron, and lead are also poorly soluble.
Caesium hydroxide (CsOH) is hygroscopic and strongly basic. It rapidly etches the surface of semiconductors such as silicon. CsOH has previously been regarded by chemists as the "strongest base", reflecting the relatively weak attraction between the large Cs+ ion and OH−; it is indeed the strongest Arrhenius base. However, a number of compounds such as n-butyllithium, sodium amide, sodium hydride, and caesium hydride, which cannot be dissolved in water because they react violently with it and are instead used only in certain anhydrous polar aprotic solvents, are far more basic in terms of the Brønsted–Lowry acid–base theory.
A stoichiometric mixture of caesium and gold will react to form yellow caesium auride (Cs+Au−) upon heating. The auride anion here behaves as a pseudohalogen. The compound reacts violently with water, yielding caesium hydroxide, metallic gold, and hydrogen gas; in liquid ammonia it can be reacted with a caesium-specific ion exchange resin to produce tetramethylammonium auride. The analogous platinum compound, red caesium platinide (), contains the platinide ion that behaves as a pseudochalcogen.
Complexes
Like all metal cations, Cs+ forms complexes with Lewis bases in solution. Because of its large size, Cs+ usually adopts coordination numbers greater than 6, the number typical for the smaller alkali metal cations. This difference is apparent in the 8-coordination of CsCl. This high coordination number and softness (tendency to form covalent bonds) are properties exploited in separating Cs+ from other cations in the remediation of nuclear wastes, where 137Cs+ must be separated from large amounts of nonradioactive K+.
Halides
Caesium fluoride (CsF) is a hygroscopic white solid that is widely used in organofluorine chemistry as a source of fluoride anions. Caesium fluoride has the halite structure, which means that the Cs+ and F− pack in a cubic closest packed array as do Na+ and Cl− in sodium chloride. Notably, caesium and fluorine have the lowest and highest electronegativities, respectively, among all the known elements.
Caesium chloride (CsCl) crystallizes in the simple cubic crystal system. Also called the "caesium chloride structure", this structural motif is composed of a primitive cubic lattice with a two-atom basis, each with an eightfold coordination; the chloride atoms lie upon the lattice points at the edges of the cube, while the caesium atoms lie in the holes in the centre of the cubes. This structure is shared with CsBr and CsI, and many other compounds that do not contain Cs. In contrast, most other alkaline halides have the sodium chloride (NaCl) structure. The CsCl structure is preferred because Cs+ has an ionic radius of 174 pm and Cl− one of 181 pm.
Oxides
More so than the other alkali metals, caesium forms numerous binary compounds with oxygen. When caesium burns in air, the superoxide is the main product. The "normal" caesium oxide () forms yellow-orange hexagonal crystals, and is the only oxide of the anti- type. It vaporizes at , and decomposes to caesium metal and the peroxide at temperatures above . In addition to the superoxide and the ozonide , several brightly coloured suboxides have also been studied. These include , , , (dark-green), CsO, , as well as . The latter may be heated in a vacuum to generate . Binary compounds with sulfur, selenium, and tellurium also exist.
Isotopes
Caesium has 41 known isotopes, ranging in mass number (i.e. number of nucleons in the nucleus) from 112 to 152. Several of these are synthesized from lighter elements by the slow neutron capture process (S-process) inside old stars and by the R-process in supernova explosions. The only stable caesium isotope is 133Cs, with 78 neutrons. Although it has a large nuclear spin (+), nuclear magnetic resonance studies can use this isotope at a resonating frequency of 11.7 MHz.
The radioactive 135Cs has a very long half-life of about 2.3 million years, the longest of all radioactive isotopes of caesium. 137Cs and 134Cs have half-lives of 30 and two years, respectively. 137Cs decomposes to a short-lived 137mBa by beta decay, and then to nonradioactive barium, while 134Cs transforms into 134Ba directly. The isotopes with mass numbers of 129, 131, 132 and 136, have half-lives between a day and two weeks, while most of the other isotopes have half-lives from a few seconds to fractions of a second. At least 21 metastable nuclear isomers exist. Other than 134mCs (with a half-life of just under 3 hours), all are very unstable and decay with half-lives of a few minutes or less.
The isotope 135Cs is one of the long-lived fission products of uranium produced in nuclear reactors. However, this fission product yield is reduced in most reactors because the predecessor, 135Xe, is a potent neutron poison and frequently transmutes to stable 136Xe before it can decay to 135Cs.
The beta decay from 137Cs to 137mBa results in gamma radiation as the 137mBa relaxes to ground state 137Ba, with the emitted photons having an energy of 0.6617 MeV. 137Cs and 90Sr are the principal medium-lived products of nuclear fission, and the prime sources of radioactivity from spent nuclear fuel after several years of cooling, lasting several hundred years. Those two isotopes are the largest source of residual radioactivity in the area of the Chernobyl disaster. Because of the low capture rate, disposing of 137Cs through neutron capture is not feasible and the only current solution is to allow it to decay over time.
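Using the roughly 30-year half-life quoted above, the fraction of an initial 137Cs inventory that remains after a given time follows the usual exponential decay law; the time points in the sketch are arbitrary examples.

```python
# Remaining fraction of caesium-137 after t years, with the ~30-year half-life quoted above.
def cs137_remaining(t_years: float, half_life_years: float = 30.0) -> float:
    return 0.5 ** (t_years / half_life_years)

if __name__ == "__main__":
    for t in (30, 100, 300):
        print(f"after {t:3d} years: {cs137_remaining(t):.1%} of the original activity")
```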
Almost all caesium produced from nuclear fission comes from the beta decay of originally more neutron-rich fission products, passing through various isotopes of iodine and xenon. Because iodine and xenon are volatile and can diffuse through nuclear fuel or air, radioactive caesium is often created far from the original site of fission. With nuclear weapons testing in the 1950s through the 1980s, 137Cs was released into the atmosphere and returned to the surface of the earth as a component of radioactive fallout. It is a ready marker of the movement of soil and sediment from those times.
Occurrence
Caesium is a relatively rare element, estimated to average 3 parts per million in the Earth's crust. It is the 45th most abundant element and 36th among the metals. Caesium is 30 times less abundant than rubidium, with which it is closely associated, chemically.
Due to its large ionic radius, caesium is one of the "incompatible elements". During magma crystallization, caesium is concentrated in the liquid phase and crystallizes last. Therefore, the largest deposits of caesium are zone pegmatite ore bodies formed by this enrichment process. Because caesium does not substitute for potassium as readily as rubidium does, the alkali evaporite minerals sylvite (KCl) and carnallite () may contain only 0.002% caesium. Consequently, caesium is found in few minerals. Percentage amounts of caesium may be found in beryl () and avogadrite (), up to 15 wt% Cs2O in the closely related mineral pezzottaite (), up to 8.4 wt% Cs2O in the rare mineral londonite (), and less in the more widespread rhodizite. The only economically important ore for caesium is pollucite , which is found in a few places around the world in zoned pegmatites, associated with the more commercially important lithium minerals, lepidolite and petalite. Within the pegmatites, the large grain size and the strong separation of the minerals results in high-grade ore for mining.
The world's most significant and richest known source of caesium is the Tanco Mine at Bernic Lake in Manitoba, Canada, estimated to contain 350,000 metric tons of pollucite ore, representing more than two-thirds of the world's reserve base. Although the stoichiometric content of caesium in pollucite is 42.6%, pure pollucite samples from this deposit contain only about 34% caesium, while the average content is 24 wt%. Commercial pollucite contains more than 19% caesium. The Bikita pegmatite deposit in Zimbabwe is mined for its petalite, but it also contains a significant amount of pollucite. Another notable source of pollucite is in the Karibib Desert, Namibia. At the present rate of world mine production of 5 to 10 metric tons per year, reserves will last for thousands of years.
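The "thousands of years" figure follows from simple arithmetic on the numbers quoted above. The Python sketch below uses only the figures given in this section (ore tonnage, average content, and world production); it is a rough order-of-magnitude check, not a formal reserve estimate:

```python
# Rough lifetime of the Tanco pollucite reserve at current production rates,
# using only figures quoted in the text above.

ore_tonnes = 350_000            # estimated pollucite ore at Bernic Lake
average_cs_content = 0.24       # average caesium content, 24 wt%
production_t_per_year = (5, 10) # stated world mine production range

contained_cs = ore_tonnes * average_cs_content   # ~84,000 t of contained caesium
for rate in production_t_per_year:
    print(f"at {rate} t/yr: ~{contained_cs / rate:,.0f} years of supply")
# Even at 10 t/yr the contained caesium lasts on the order of 8,000+ years,
# consistent with "reserves will last for thousands of years".
```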
Production
Mining and refining pollucite ore is a selective process and is conducted on a smaller scale than for most other metals. The ore is crushed, hand-sorted, but not usually concentrated, and then ground. Caesium is then extracted from pollucite primarily by three methods: acid digestion, alkaline decomposition, and direct reduction.
In the acid digestion, the silicate pollucite rock is dissolved with strong acids, such as hydrochloric (HCl), sulfuric (H2SO4), hydrobromic (HBr), or hydrofluoric (HF) acids. With hydrochloric acid, a mixture of soluble chlorides is produced, and the insoluble chloride double salts of caesium are precipitated as caesium antimony chloride, caesium iodine chloride, or caesium hexachlorocerate. After separation, the pure precipitated double salt is decomposed, and pure CsCl is precipitated by evaporating the water.
The sulfuric acid method yields the insoluble double salt directly as caesium alum (CsAl(SO4)2·12H2O). The aluminium sulfate component is converted to insoluble aluminium oxide by roasting the alum with carbon, and the resulting product is leached with water to yield a caesium sulfate (Cs2SO4) solution.
Roasting pollucite with calcium carbonate and calcium chloride yields insoluble calcium silicates and soluble caesium chloride. Leaching with water or dilute ammonia () yields a dilute chloride (CsCl) solution. This solution can be evaporated to produce caesium chloride or transformed into caesium alum or caesium carbonate. Though not commercially feasible, the ore can be directly reduced with potassium, sodium, or calcium in vacuum to produce caesium metal directly.
Most of the mined caesium (as salts) is directly converted into caesium formate (HCOO−Cs+) for applications such as oil drilling. To supply the developing market, Cabot Corporation built a production plant in 1997 at the Tanco mine near Bernic Lake in Manitoba, with a capacity of per year of caesium formate solution. The primary smaller-scale commercial compounds of caesium are caesium chloride and nitrate.
Alternatively, caesium metal may be obtained from the purified compounds derived from the ore. Caesium chloride and the other caesium halides can be reduced at with calcium or barium, and caesium metal distilled from the result. In the same way, the aluminate, carbonate, or hydroxide may be reduced by magnesium.
The metal can also be isolated by electrolysis of fused caesium cyanide (CsCN). Exceptionally pure and gas-free caesium can be produced by thermal decomposition of caesium azide (CsN3), which can be produced from aqueous caesium sulfate and barium azide. In vacuum applications, caesium dichromate can be reacted with zirconium to produce pure caesium metal without other gaseous products.
Cs2Cr2O7 + 2 Zr → 2 Cs + 2 ZrO2 + Cr2O3
The price of 99.8% pure caesium (metal basis) in 2009 was about , but the compounds are significantly cheaper.
History
In 1860, Robert Bunsen and Gustav Kirchhoff discovered caesium in the mineral water from Dürkheim, Germany. Because of the bright blue lines in the emission spectrum, they derived the name from the Latin word caesius, meaning sky blue. Caesium was the first element to be discovered with a spectroscope, which had been invented by Bunsen and Kirchhoff only a year previously.
To obtain a pure sample of caesium, 44,000 litres of mineral water had to be evaporated to yield a concentrated salt solution. The alkaline earth metals were precipitated either as sulfates or oxalates, leaving the alkali metals in the solution. After conversion to the nitrates and extraction with ethanol, a sodium-free mixture was obtained. From this mixture, the lithium was precipitated by ammonium carbonate. Potassium, rubidium, and caesium form insoluble salts with chloroplatinic acid, but these salts show a slight difference in solubility in hot water, and the less-soluble caesium and rubidium hexachloroplatinate were obtained by fractional crystallization. After reduction of the hexachloroplatinate with hydrogen, caesium and rubidium were separated by the difference in solubility of their carbonates in alcohol. The process yielded both rubidium chloride and caesium chloride from the initial 44,000 litres of mineral water.
From the caesium chloride, the two scientists estimated the atomic weight of the new element at 123.35 (compared to the currently accepted one of 132.9). They tried to generate elemental caesium by electrolysis of molten caesium chloride, but instead of a metal, they obtained a blue homogeneous substance which "neither under the naked eye nor under the microscope showed the slightest trace of metallic substance"; as a result, they assigned it as a subchloride (). In reality, the product was probably a colloidal mixture of the metal and caesium chloride. The electrolysis of the aqueous solution of chloride with a mercury cathode produced a caesium amalgam which readily decomposed under the aqueous conditions. The pure metal was eventually isolated by the Swedish chemist Carl Setterberg while working on his doctorate with Kekulé and Bunsen. In 1882, he produced caesium metal by electrolysing caesium cyanide, avoiding the problems with the chloride.
Historically, the most important use for caesium has been in research and development, primarily in chemical and electrical fields. Very few applications existed for caesium until the 1920s, when it came into use in radio vacuum tubes, where it had two functions; as a getter, it removed excess oxygen after manufacture, and as a coating on the heated cathode, it increased the electrical conductivity. Caesium was not recognized as a high-performance industrial metal until the 1950s. Applications for nonradioactive caesium included photoelectric cells, photomultiplier tubes, optical components of infrared spectrophotometers, catalysts for several organic reactions, crystals for scintillation counters, and in magnetohydrodynamic power generators. Caesium is also used as a source of positive ions in secondary ion mass spectrometry (SIMS).
Since 1967, the International System of Measurements has based the primary unit of time, the second, on the properties of caesium. The International System of Units (SI) defines the second as the duration of 9,192,631,770 cycles at the microwave frequency of the spectral line corresponding to the transition between two hyperfine energy levels of the ground state of caesium-133. The 13th General Conference on Weights and Measures of 1967 defined a second as: "the duration of 9,192,631,770 cycles of microwave light absorbed or emitted by the hyperfine transition of caesium-133 atoms in their ground state undisturbed by external fields".
Applications
Petroleum exploration
The largest present-day use of nonradioactive caesium is in caesium formate drilling fluids for the extractive oil industry. Aqueous solutions of caesium formate (HCOO−Cs+)—made by reacting caesium hydroxide with formic acid—were developed in the mid-1990s for use as oil well drilling and completion fluids. The function of a drilling fluid is to lubricate drill bits, to bring rock cuttings to the surface, and to maintain pressure on the formation during drilling of the well. Completion fluids assist the emplacement of control hardware after drilling but prior to production by maintaining the pressure.
The high density of the caesium formate brine (up to 2.3 g/cm3, or 19.2 pounds per gallon), coupled with the relatively benign nature of most caesium compounds, reduces the requirement for toxic high-density suspended solids in the drilling fluid—a significant technological, engineering and environmental advantage. Unlike the components of many other heavy liquids, caesium formate is relatively environment-friendly. Caesium formate brine can be blended with potassium and sodium formates to decrease the density of the fluids to that of water (1.0 g/cm3, or 8.3 pounds per gallon). Furthermore, it is biodegradable and may be recycled, which is important in view of its high cost (about $4,000 per barrel in 2001). Alkali formates are safe to handle and do not damage the producing formation or downhole metals as corrosive alternative, high-density brines (such as zinc bromide solutions) sometimes do; they also require less cleanup and reduce disposal costs.
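As an illustration of how blending lowers the fluid density, the sketch below assumes simple volume-additive (ideal) mixing of a 2.3 g/cm3 caesium formate brine with a 1.0 g/cm3 base fluid; real formate brines deviate somewhat from this idealization, so the numbers are indicative only and the target densities are invented for the example:

```python
# Volume fraction of heavy caesium formate brine needed to hit a target
# density, assuming ideal volume-additive mixing (an approximation).

RHO_CS_FORMATE = 2.3   # g/cm3, saturated caesium formate brine
RHO_LIGHT = 1.0        # g/cm3, water-density base fluid (e.g. light formate blend)

def heavy_fraction(target_density: float) -> float:
    """Volume fraction of the heavy brine giving the desired mixture density."""
    return (target_density - RHO_LIGHT) / (RHO_CS_FORMATE - RHO_LIGHT)

for rho in (1.2, 1.6, 2.0):
    print(f"target {rho} g/cm3 -> {heavy_fraction(rho):.0%} heavy brine by volume")
```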
Atomic clocks
Caesium-based atomic clocks use the electromagnetic transitions in the hyperfine structure of caesium-133 atoms as a reference point. The first accurate caesium clock was built by Louis Essen in 1955 at the National Physical Laboratory in the UK. Caesium clocks have improved over the past half-century and are regarded as "the most accurate realization of a unit that mankind has yet achieved." These clocks measure frequency with an error of 2 to 3 parts in 10¹⁴, which corresponds to an accuracy of 2 nanoseconds per day, or one second in 1.4 million years. The latest versions are more accurate than 1 part in 10¹⁵, about 1 second in 20 million years. The caesium standard is the primary standard for standards-compliant time and frequency measurements. Caesium clocks regulate the timing of cell phone networks and the Internet.
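The accuracy figures above convert between fractional frequency error, nanoseconds per day, and years per accumulated second of error by straightforward arithmetic. A small Python sketch of that conversion (the error values are simply the ones quoted in the text):

```python
# Convert a fractional frequency error into time error per day and into
# the number of years needed to accumulate one second of error.

SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 31_557_600  # Julian year

def ns_per_day(fractional_error: float) -> float:
    return fractional_error * SECONDS_PER_DAY * 1e9

def years_per_second(fractional_error: float) -> float:
    return 1.0 / fractional_error / SECONDS_PER_YEAR

for err in (2e-14, 3e-14, 1e-15):
    print(f"{err:.0e}: {ns_per_day(err):.2f} ns/day, "
          f"one second in {years_per_second(err) / 1e6:.1f} million years")
# 2-3 parts in 10^14 gives roughly 1.7-2.6 ns/day and about 1-1.6 million
# years per second of error, matching the figures quoted above.
```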
Definition of the second
The second, symbol s, is the SI unit of time. The BIPM restated its definition at its 26th conference in 2018: "[The second] is defined by taking the fixed numerical value of the caesium frequency ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9,192,631,770 when expressed in the unit Hz, which is equal to s−1."
Electric power and electronics
Caesium vapour thermionic generators are low-power devices that convert heat energy to electrical energy. In the two-electrode vacuum tube converter, caesium neutralizes the space charge near the cathode and enhances the current flow.
Caesium is also important for its photoemissive properties, converting light to electron flow. It is used in photoelectric cells because caesium-based cathodes, such as the intermetallic compound , have a low threshold voltage for emission of electrons. The range of photoemissive devices using caesium include optical character recognition devices, photomultiplier tubes, and video camera tubes. Nevertheless, germanium, rubidium, selenium, silicon, tellurium, and several other elements can be substituted for caesium in photosensitive materials.
Caesium iodide (CsI), bromide (CsBr) and fluoride (CsF) crystals are employed for scintillators in scintillation counters widely used in mineral exploration and particle physics research to detect gamma and X-ray radiation. Being a heavy element, caesium provides good stopping power with better detection. Caesium compounds may provide a faster response (CsF) and be less hygroscopic (CsI).
Caesium vapour is used in many common magnetometers.
The element is used as an internal standard in spectrophotometry. Like other alkali metals, caesium has a great affinity for oxygen and is used as a "getter" in vacuum tubes. Other uses of the metal include high-energy lasers, vapour glow lamps, and vapour rectifiers.
Centrifugation fluids
The high density of the caesium ion makes solutions of caesium chloride, caesium sulfate, and caesium trifluoroacetate () useful in molecular biology for density gradient ultracentrifugation. This technology is used primarily in the isolation of viral particles, subcellular organelles and fractions, and nucleic acids from biological samples.
Chemical and medical use
Relatively few chemical applications use caesium. Doping with caesium compounds enhances the effectiveness of several metal-ion catalysts for chemical synthesis, such as acrylic acid, anthraquinone, ethylene oxide, methanol, phthalic anhydride, styrene, methyl methacrylate monomers, and various olefins. It is also used in the catalytic conversion of sulfur dioxide into sulfur trioxide in the production of sulfuric acid.
Caesium fluoride enjoys a niche use in organic chemistry as a base and as an anhydrous source of fluoride ion. Caesium salts sometimes replace potassium or sodium salts in organic synthesis, such as cyclization, esterification, and polymerization. Caesium has also been used in thermoluminescent radiation dosimetry (TLD): When exposed to radiation, it acquires crystal defects that, when heated, revert with emission of light proportionate to the received dose. Thus, measuring the light pulse with a photomultiplier tube can allow the accumulated radiation dose to be quantified.
Nuclear and isotope applications
Caesium-137 is a radioisotope commonly used as a gamma-emitter in industrial applications. Its advantages include a half-life of roughly 30 years, its availability from the nuclear fuel cycle, and having 137Ba as a stable end product. The high water solubility is a disadvantage which makes it incompatible with large pool irradiators for food and medical supplies. It has been used in agriculture, cancer treatment, and the sterilization of food, sewage sludge, and surgical equipment. Radioactive isotopes of caesium in radiation devices were used in the medical field to treat certain types of cancer, but emergence of better alternatives and the use of water-soluble caesium chloride in the sources, which could create wide-ranging contamination, gradually put some of these caesium sources out of use. Caesium-137 has been employed in a variety of industrial measurement gauges, including moisture, density, levelling, and thickness gauges. It has also been used in well logging devices for measuring the electron density of the rock formations, which is analogous to the bulk density of the formations.
Caesium-137 has been used in hydrologic studies analogous to those with tritium. As a daughter product of fission bomb testing from the 1950s through the mid-1980s, caesium-137 was released into the atmosphere, where it was absorbed readily into solution. Known year-to-year variation within that period allows correlation with soil and sediment layers. Caesium-134, and to a lesser extent caesium-135, have also been used in hydrology to measure the caesium output by the nuclear power industry. While they are less prevalent than either caesium-133 or caesium-137, these bellwether isotopes are produced solely from anthropogenic sources.
Other uses
Caesium and mercury were used as a propellant in early ion engines designed for spacecraft propulsion on very long interplanetary or extraplanetary missions. The fuel was ionized by contact with a charged tungsten electrode. But corrosion by caesium on spacecraft components has pushed development in the direction of inert gas propellants, such as xenon, which are easier to handle in ground-based tests and do less potential damage to the spacecraft. Xenon was used in the experimental spacecraft Deep Space 1 launched in 1998. Nevertheless, field-emission electric propulsion thrusters that accelerate liquid metal ions such as caesium have been built.
Caesium nitrate is used as an oxidizer and pyrotechnic colorant to burn silicon in infrared flares, such as the LUU-19 flare, because it emits much of its light in the near infrared spectrum. Caesium compounds may have been used as fuel additives to reduce the radar signature of exhaust plumes in the Lockheed A-12 CIA reconnaissance aircraft. Caesium and rubidium have been added as a carbonate to glass because they reduce electrical conductivity and improve stability and durability of fibre optics and night vision devices. Caesium fluoride or caesium aluminium fluoride are used in fluxes formulated for brazing aluminium alloys that contain magnesium.
Magnetohydrodynamic (MHD) power-generating systems were researched, but failed to gain widespread acceptance. Caesium metal has also been considered as the working fluid in high-temperature Rankine cycle turboelectric generators.
Caesium salts have been evaluated as antishock reagents following the administration of arsenical drugs. Because of their effect on heart rhythms, however, they are less likely to be used than potassium or rubidium salts. They have also been used to treat epilepsy.
Caesium-133 can be laser cooled and used to probe fundamental and technological problems in quantum physics. It has a particularly convenient Feshbach spectrum to enable studies of ultracold atoms requiring tunable interactions.
Health and safety hazards
Nonradioactive caesium compounds are only mildly toxic, and nonradioactive caesium is not a significant environmental hazard. Because biochemical processes can mistake caesium for potassium and substitute one for the other, excess caesium can lead to hypokalemia, arrhythmia, and acute cardiac arrest, but such amounts would not ordinarily be encountered in natural sources.
The median lethal dose (LD50) for caesium chloride in mice is 2.3 g per kilogram, which is comparable to the LD50 values of potassium chloride and sodium chloride. The principal use of nonradioactive caesium is as caesium formate in petroleum drilling fluids because it is much less toxic than alternatives, though it is more costly.
Caesium is one of the most reactive elements and is highly explosive in the presence of water. The hydrogen gas produced by the reaction is heated by the thermal energy released at the same time, causing ignition and a violent explosion. This can occur with other alkali metals, but caesium is so potent that this explosive reaction can be triggered even by cold water.
It is highly pyrophoric: the autoignition temperature of caesium is , and it ignites explosively in air to form caesium hydroxide and various oxides. Caesium hydroxide is a very strong base, and will rapidly corrode glass.
The isotopes 134 and 137 are present in the biosphere in small amounts from human activities, differing by location. Radiocaesium does not accumulate in the body as readily as other fission products (such as radioiodine and radiostrontium). About 10% of absorbed radiocaesium washes out of the body relatively quickly in sweat and urine. The remaining 90% has a biological half-life between 50 and 150 days. Radiocaesium follows potassium and tends to accumulate in plant tissues, including fruits and vegetables. Plants vary widely in the absorption of caesium, sometimes displaying great resistance to it. It is also well-documented that mushrooms from contaminated forests accumulate radiocaesium (caesium-137) in the fungal sporocarps. Accumulation of caesium-137 in lakes has been a great concern after the Chernobyl disaster. Experiments with dogs showed that a single dose of 3.8 millicuries (140 MBq, 4.1 μg of caesium-137) per kilogram is lethal within three weeks; smaller amounts may cause infertility and cancer. The International Atomic Energy Agency and other sources have warned that radioactive materials, such as caesium-137, could be used in radiological dispersion devices, or "dirty bombs".
See also
Acerinox accident, a caesium-137 contamination accident in 1998
Goiânia accident, a major radioactive contamination incident in 1987 involving caesium-137
Kramatorsk radiological accident, a 137Cs lost-source incident between 1980 and 1989
Notes
References
External links
Caesium or Cesium at The Periodic Table of Videos (University of Nottingham)
View the reaction of Caesium (most reactive metal in the periodic table) with Fluorine (most reactive non-metal) courtesy of The Royal Institution.
1860 introductions
Alkali metals
Chemical elements with body-centered cubic structure
Chemical elements
Glycine receptor agonists
Reducing agents
Articles containing video clips
Pyrophoric materials | Caesium | Physics,Chemistry,Technology | 7,981 |
38,375,237 | https://en.wikipedia.org/wiki/Server-based%20signatures | In cryptography, server-based signatures are digital signatures in which a publicly available server participates in the signature creation process. This is in contrast to conventional digital signatures, which are based on public-key cryptography and public-key infrastructure and which assume that signers use their personal trusted computing bases to generate signatures without any communication with servers.
Four different classes of server-based signatures have been proposed:
1. Lamport One-Time Signatures. Proposed in 1979 by Leslie Lamport. Lamport one-time signatures are based on cryptographic hash functions. For signing a message, the signer just sends a list of hash values (outputs of a hash function) to a publishing server, so the signature process is very fast, though the resulting signature is many times larger than that of ordinary public-key signature schemes (a minimal sketch of the construction is given after this list).
2. On-line/off-line Digital Signatures. First proposed in 1989 by Even, Goldreich and Micali in order to speed up the signature creation procedure, which is usually much more time-consuming than verification. In the case of RSA, signing may be one thousand times slower than verification. On-line/off-line digital signatures are created in two phases. The first phase is performed off-line, possibly even before the message to be signed is known. The second (message-dependent) phase is performed on-line and involves communication with a server. In the first (off-line) phase, the signer uses a conventional public-key digital signature scheme to sign a public key of the Lamport one-time signature scheme. In the second phase, a message is signed by using the Lamport signature scheme. Some later works have improved the efficiency of the original solution by Even et al.
3. Server-Supported Signatures (SSS). Proposed in 1996 by Asokan, Tsudik and Waidner in order to delegate the use of time-consuming operations of asymmetric cryptography from clients (ordinary users) to a server. For ordinary users, the use of asymmetric cryptography is limited to signature verification, i.e. there is no pre-computation phase as in the case of on-line/off-line signatures. The main motivation was the use of low-performance mobile devices for creating digital signatures, considering that such devices could be too slow for creating ordinary public-key digital signatures, such as RSA. Clients use hash chain based authentication to send their messages to a signature server in an authenticated way (a hash-chain sketch is given after this list), and the server then creates a digital signature by using an ordinary public-key digital signature scheme. In SSS, signature servers are not assumed to be Trusted Third Parties (TTPs) because the transcript of the hash chain authentication phase can be used for non-repudiation purposes. In SSS, servers cannot create signatures in the name of their clients.
4. Delegate Servers (DS). Proposed in 2002 by Perrin, Bruns, Moreh and Olkin in order to reduce the problems and costs related to individual private keys. In their solution, clients (ordinary users) delegate their private cryptographic operations to a Delegation Server (DS). Users authenticate to the DS and request it to sign messages on their behalf by using the server's own private key. The main motivation behind DS was that private keys are difficult for ordinary users to use and easy for attackers to abuse. Private keys are not memorable like passwords or derivable from persons like biometrics, and cannot be entered from keyboards like passwords. Private keys are mostly stored as files in computers or on smart-cards, which may be stolen by attackers and abused off-line. In 2003, Buldas and Saarepera proposed a two-level architecture of delegation servers that addresses the trust issue by replacing trust with threshold trust via the use of threshold cryptosystems.
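To make the Lamport construction in item 1 concrete, here is a minimal Python sketch, assuming SHA-256 as the hash function and signing the 256-bit digest of the message. The function names and example messages are invented for the illustration, and the interaction with the publishing server is omitted entirely:

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()
BITS = 256  # we sign the SHA-256 digest of the message

def keygen():
    # private key: one pair of random 32-byte secrets per digest bit
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(BITS)]
    # public key: the hash of every secret (this is what gets published)
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def msg_bits(message: bytes):
    digest = int.from_bytes(H(message), "big")
    return [(digest >> (BITS - 1 - i)) & 1 for i in range(BITS)]

def sign(sk, message: bytes):
    # reveal one secret of each pair, chosen by the corresponding digest bit
    return [sk[i][bit] for i, bit in enumerate(msg_bits(message))]

def verify(pk, message: bytes, signature) -> bool:
    return all(H(sig_i) == pk[i][bit]
               for i, (bit, sig_i) in enumerate(zip(msg_bits(message), signature)))

sk, pk = keygen()
sig = sign(sk, b"transfer 100 EUR to Alice")
assert verify(pk, b"transfer 100 EUR to Alice", sig)
assert not verify(pk, b"transfer 999 EUR to Mallory", sig)
# Note: each key pair may sign one message only, and the signature
# (256 x 32 bytes) is far larger than an RSA or ECDSA signature.
```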
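The hash-chain authentication used in SSS (item 3) can be sketched just as briefly. This is an illustrative sketch only: it assumes the server already holds the tip of the client's chain (delivered out of band) and that each request simply reveals the next preimage; the names are invented for the example and no actual signing step is shown:

```python
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def make_chain(seed: bytes, length: int):
    """Build a hash chain h_0 = seed, h_i = H(h_{i-1}); the tip h_n is public."""
    chain = [seed]
    for _ in range(length):
        chain.append(H(chain[-1]))
    return chain

class ChainVerifier:
    """Server-side state: checks each released preimage with a single hash."""
    def __init__(self, tip: bytes):
        self.expected = tip
    def check(self, token: bytes) -> bool:
        if H(token) == self.expected:
            self.expected = token   # move the expected value down the chain
            return True
        return False

chain = make_chain(b"client secret seed", 1000)   # kept by the client
server = ChainVerifier(chain[-1])                 # server knows only the tip

assert server.check(chain[-2])      # first authenticated request
assert server.check(chain[-3])      # second request reveals the next preimage
assert not server.check(b"forged")  # anything else fails the single-hash check
```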
References
Cryptography | Server-based signatures | Mathematics,Engineering | 787 |
695,046 | https://en.wikipedia.org/wiki/Quaternionic%20representation | In the mathematical field of representation theory, a quaternionic representation is a representation on a complex vector space V with an invariant quaternionic structure, i.e., an antilinear equivariant map j: V → V
which satisfies j² = −1.
Together with the imaginary unit i and the antilinear map k := ij, j equips V with the structure of a quaternionic vector space (i.e., V becomes a module over the division algebra of quaternions). From this point of view, a quaternionic representation of a group G is a group homomorphism φ: G → GL(V, H), the group of invertible quaternion-linear transformations of V. In particular, a quaternionic matrix representation of g assigns a square matrix of quaternions ρ(g) to each element g of G such that ρ(e) is the identity matrix and ρ(gh) = ρ(g)ρ(h) for all g, h in G.
Quaternionic representations of associative and Lie algebras can be defined in a similar way.
Properties and related concepts
If V is a unitary representation and the quaternionic structure j is a unitary operator, then V admits an invariant complex symplectic form ω, and hence is a symplectic representation. This always holds if V is a representation of a compact group (e.g. a finite group) and in this case quaternionic representations are also known as symplectic representations. Such representations, amongst irreducible representations, can be picked out by the Frobenius-Schur indicator.
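For reference, the Frobenius-Schur indicator mentioned above is usually written as follows (this is the standard textbook formula for an irreducible character χ of a finite group G, not anything specific to this article):

```latex
\nu(\chi) \;=\; \frac{1}{|G|}\sum_{g \in G} \chi\!\left(g^{2}\right),
\qquad
\nu(\chi)=
\begin{cases}
 +1 & \text{real (orthogonal) representation,}\\
 \phantom{+}0 & \text{complex representation,}\\
 -1 & \text{quaternionic (symplectic) representation.}
\end{cases}
```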
Quaternionic representations are similar to real representations in that they are isomorphic to their complex conjugate representation. Here a real representation is taken to be a complex representation with an invariant real structure, i.e., an antilinear equivariant map j: V → V
which satisfies j² = +1.
A representation which is isomorphic to its complex conjugate, but which is not a real representation, is sometimes called a pseudoreal representation.
Real and pseudoreal representations of a group G can be understood by viewing them as representations of the real group algebra R[G]. Such a representation will be a direct sum of central simple R-algebras, which, by the Artin-Wedderburn theorem, must be matrix algebras over the real numbers or the quaternions. Thus a real or pseudoreal representation is a direct sum of irreducible real representations and irreducible quaternionic representations. It is real if no quaternionic representations occur in the decomposition.
Examples
A common example involves the quaternionic representation of rotations in three dimensions. Each (proper) rotation is represented by a quaternion with unit norm. There is an obvious one-dimensional quaternionic vector space, namely the space H of quaternions themselves under left multiplication. By restricting this to the unit quaternions, we obtain a quaternionic representation of the spinor group Spin(3).
This representation ρ: Spin(3) → GL(1,H) also happens to be a unitary quaternionic representation because ρ(g)*ρ(g) = 1 for all g in Spin(3).
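The one-dimensional representation above can be checked numerically. The Python sketch below is illustrative only; quaternions are stored as 4-tuples (a, b, c, d) representing a + bi + cj + dk, the Hamilton product is written out explicitly, and the random test values carry no special meaning:

```python
import math
import random

def qmul(p, q):
    """Hamilton product of quaternions p = (a,b,c,d) and q = (e,f,g,h)."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qnorm(q):
    return math.sqrt(sum(x * x for x in q))

def random_unit_quaternion():
    q = [random.gauss(0, 1) for _ in range(4)]
    n = qnorm(q)
    return tuple(x / n for x in q)

g1, g2 = random_unit_quaternion(), random_unit_quaternion()
v = (0.5, -1.0, 2.0, 3.0)   # an arbitrary element of H, the representation space

# rho(g) acts by left multiplication; it preserves the norm ("unitary") ...
assert abs(qnorm(qmul(g1, v)) - qnorm(v)) < 1e-9
# ... and rho(g1 g2) = rho(g1) rho(g2), i.e. the action is a homomorphism
lhs = qmul(qmul(g1, g2), v)
rhs = qmul(g1, qmul(g2, v))
assert all(abs(x - y) < 1e-9 for x, y in zip(lhs, rhs))
```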
Another unitary example is the spin representation of Spin(5). An example of a non-unitary quaternionic representation would be the two dimensional irreducible representation of Spin(5,1).
More generally, the spin representations of Spin(d) are quaternionic when d equals 3 + 8k, 4 + 8k, and 5 + 8k dimensions, where k is an integer. In physics, one often encounters the spinors of Spin(d, 1). These representations have the same type of real or quaternionic structure as the spinors of Spin(d − 1).
Among the compact real forms of the simple Lie groups, irreducible quaternionic representations only exist for the Lie groups of type A4k+1, B4k+1, B4k+2, Ck, D4k+2, and E7.
References
See also
Symplectic vector space
Representation theory | Quaternionic representation | Mathematics | 853 |
48,853,442 | https://en.wikipedia.org/wiki/Roadometer%20%28odometer%29 | The roadometer was a 19th-century device like an odometer for measuring mileage, mounted on a wagon wheel. One such device was invented in 1847 by William Clayton, Orson Pratt, and Appleton Harmon, pioneers of the Church of Jesus Christ of Latter-day Saints.
History
Brass odometers were used by many pioneers making the westward trek in the 1840s. However, the design of Clayton, Pratt, and Harmon's odometer was new. In 1847, William Clayton accompanied the first expedition to the Utah Territory as a writer and record-keeper. He initially counted revolutions of a wagon wheel to calculate the distance they had travelled. He tired of counting wheel revolutions and wanted a device that could measure the distance a wagon travelled. It is possible he was familiar with the English viometers that measured distance using gears. Clayton asked Orson Pratt if it would be possible to make such a device, and Pratt created the design. Harmon carved the gears out of wood and may have further refined the design. They started using the roadometer around May 12. Three hundred and sixty revolutions of the wagon wheel equaled one mile. A piece on the hub of the wheel turned a shaft one revolution for every six revolutions of the wagon wheel. Then one revolution of that shaft moved a 60 tooth gear by one tooth. "The second gear wheel had forty teeth [and] overlaid the first gear and was turned by four teeth on the axle of that gear. One rotation of the second gear therefore represented ten miles each tooth being one quarter of a mile." Unfortunately, the small four-toothed gear swelled in the rain and was not functional for much of the journey. Clayton used their invention to provide an estimate of the distance their party traveled each day between Omaha, Nebraska, and Salt Lake City, Utah. William Clayton returned to Winter Quarters from the Salt Lake Valley. He had a new odometer built by William A. King that could measure a thousand miles for the return trip. Clayton published the distances and other helpful travel information in his popular The Latter-day Saints' Emigrants' Guide.
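The gear ratios quoted above are internally consistent, as a quick calculation shows. The Python sketch below uses only the figures from this paragraph; the wheel size is inferred from the 360 revolutions per mile and is an approximation, not a documented dimension:

```python
import math

# Check the roadometer gear train described above.
FEET_PER_MILE = 5280
revs_per_mile = 360

wheel_circumference_ft = FEET_PER_MILE / revs_per_mile   # ~14.67 ft
wheel_diameter_ft = wheel_circumference_ft / math.pi     # ~4.67 ft

wheel_revs_per_shaft_rev = 6   # hub piece turns the shaft once per 6 wheel revs
teeth_first_gear = 60          # advanced one tooth per shaft revolution
teeth_second_gear = 40         # advanced 4 teeth per rotation of the first gear

miles_per_first_gear_rotation = wheel_revs_per_shaft_rev * teeth_first_gear / revs_per_mile
miles_per_second_gear_rotation = miles_per_first_gear_rotation * teeth_second_gear / 4
miles_per_second_gear_tooth = miles_per_second_gear_rotation / teeth_second_gear

print(f"wheel diameter         ~{wheel_diameter_ft:.2f} ft")
print(f"first-gear rotation    = {miles_per_first_gear_rotation:.0f} mile")
print(f"second-gear rotation   = {miles_per_second_gear_rotation:.0f} miles")
print(f"each second-gear tooth = {miles_per_second_gear_tooth:.2f} mile")
# -> 1 mile per first-gear rotation, 10 miles per second-gear rotation,
#    and a quarter mile per tooth, as stated in the text.
```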
Later documentation of Clayton's odometer
A machine commonly displayed as Clayton's odometer is actually one built in 1876 by Thomas G. Lowe. Lowe created his odometer to calculate the distance between villages in northern Arizona. He gave his odometer to the Deseret Museum in Salt Lake City, and it was on display with accurate information from 1876 until it closed for a period in 1903. When the museum reopened in 1911, they displayed his odometer with the incorrect information that it had been made by Appleton Harmon and William Clayton. Lowe's odometer was visibly different from Clayton's. It had four toothed gears and a ratchet-like drive mechanism. Lowe attempted to correct the misinformation when he visited in 1921, but the information was not corrected until 1983. Steven Pratt created a replica of Clayton's odometer which was on display at the Museum of Church History and Art.
A 1921 news article in the Deseret News claimed that Clayton's original odometer was "the first of its kind". The paper published a correction from an engineer, who clarified that odometers existed as early as 12 B.C. in Rome. The incorrect idea that Clayton's odometer was the first persisted.
Brigham Young University engineering professor Larry Howell built a replica of the roadometer in 2006. He stated that his replica was more accurate than Steven Pratt's. He published information about the rebuild at the 2006 symposium for the American Society of Mechanical Engineers. According to Howell's calculations, the 60-tooth gear's diameter was 15 inches, the 40-tooth gear's diameter was 10 inches, and the 4-tooth gear's diameter was 1 inch.
References
Measuring instruments
Vehicle parts
Vehicle technology
Length, distance, or range measuring devices | Roadometer (odometer) | Technology,Engineering | 794 |
3,035,321 | https://en.wikipedia.org/wiki/Toilet-related%20injuries%20and%20deaths | There have been many toilet-related injuries and deaths throughout history and in urban legends.
Accidental injuries
Infants and toddlers have fallen headfirst into toilet bowls and drowned. Safety devices exist to help prevent such accidents. Injuries to adults include bruised buttocks and tailbones, as well as dislocated hips from unexpectedly sitting on the toilet bowl rim because the seat is up or loose. Injuries can also be caused by pinching due to splits in plastic seats, by splinters from wooden seats, or if the toilet itself collapses or shatters under the weight of the user. Older high-tank cast-iron cisterns have been known to detach from the wall when the chain is pulled to flush, causing injuries to the user. The 2000 Ig Nobel Prize in Public Health was awarded to three physicians from the Glasgow Western Infirmary for a 1993 case report on wounds sustained to the buttocks due to collapsing toilets. Furthermore, injuries are frequently sustained by people who stand on toilets to reach a height, then slip and fall. There are also instances of people slipping on a wet bathroom floor or from a bath and concussing themselves on the fixture.
Toilet-related injuries are surprisingly common, with some estimates ranging as high as 40,000 in the US every year. In the past, this number would have been much higher, due to the material from which toilet paper was made. This was shown in a 1935 Northern Tissue advertisement which depicted splinter-free toilet paper. In 2012, 2.3 million toilets in the United States, and about 9,400 in Canada, were recalled due to faulty pressure-assist flush mechanisms which put users at risk of the fixture exploding.
Injuries caused by animals
There are also injuries caused by animals. Some black widow spiders like to spin their web below the toilet seat because of insects that can exist in and around it. Therefore, several people have been bitten while using a toilet, particularly outhouse toilets. Although there is immediate pain at the bite site, these bites are rarely fatal. The danger of spiders living beneath toilet seats is the subject of Slim Newton's comic 1972 country song "The Redback on the Toilet Seat".
It has been reported that in some cases rats crawl up through toilet sewer pipes and emerge in the toilet bowl, so that toilet users may be at risk of having a rat bite their buttocks. Many rat exterminators doubt this, as the pipes, generally six inches (15 centimeters) wide, are too large for rats to climb and are also very slippery. Reports by janitors always concern the top floor, and could involve rats on the roof entering the soil pipe through the roof vent, lowering themselves into the pipe, and then into the toilet.
In May 2016, an 11-foot snake, a reticulated python, emerged from a squat toilet and bit the man using it on his penis at his home in Chachoengsao Province, Thailand. Both the victim and the python survived.
Self-induced injury
Some instances of toilet-related deaths are attributed to the drop in blood pressure due to the parasympathetic nervous system during bowel movements. This effect may be magnified by existing circulatory issues. It is further possible that people succumb on the toilet to chronic constipation, because the Valsalva maneuver is often dangerously used to aid in the expulsion of feces from the rectum during a bowel movement. According to Sharon Mantik Lewis, Margaret McLean Heitkemper and Shannon Ruff Dirksen, the "Valsalva maneuver occurs during straining to pass a hardened stool. If defecation is suppressed over long periods, problems can occur, such as constipation or stool impaction. Defecation can be facilitated by the Valsalva maneuver. This maneuver involves contraction of the chest muscles on a closed glottis with simultaneous contraction of the abdominal muscles." This means that people can die while "straining at stool." In chapter 8 of their Abdominal Emergencies, David Cline and Latha Stead wrote that "autopsy studies continue to reveal missed bowel obstruction as an unexpected cause of death".
A 2001 Sopranos episode "He is Risen" shows a fictional depiction of the risk, when the character Gigi Cestone has a heart attack on the toilet of his social club while straining to defecate.
Exploding toilets
In the Victorian era, there was a perceived risk of toilets exploding. These scenarios typically include a flammable substance (either accidentally or deliberately) being introduced into the toilet water, and a lit match or cigarette igniting and exploding the toilet. In 2014, Sloan's Flushmate pressure-assisted flushing system, which uses compressed air to force waste down the drain, was recalled after the company received reports of the air tank failing under pressure and shattering the porcelain.
Historical deaths
In 1945, the German submarine U-1206 was sunk after a toilet accident resulted in seawater flooding into the hull, which created chlorine gas upon contact with a battery and forced the submarine to resurface. At the surface the sub was discovered and attacked by Allied forces, causing the sub's captain to scuttle the sub so Allied forces could not capture it. This case may not have been due to a malfunction, but rather the possibility that the pressurized flushing system in the U-boats, which was extremely complex and required a training course to operate, may not have been properly operated.
Godfrey the Hunchback, Duke of Lower Lorraine (an area roughly coinciding with the Netherlands and Belgium), was murdered in 1076 when staying in the Dutch city of Vlaardingen. Supposedly, the assassin made sure which of the latrines, which were built and drained on the outer side of the wall, according to medieval building style, belonged to the duke's sleeping room, and took a position underneath. Some sources say that a sword was used for the assassination; others mention a sharp iron weapon, which could have been a sword but also a spear or a dagger, but a spear seems to be the most practical choice. After being stabbed in the bottom it took him several days to die from internal bleeding. The assassination was ordered by Dirk V, Count of Holland, and his ally Robrecht the Frisian, Count of Flanders.
The Erfurt latrine disaster of 1184 caused the death of at least 60 people, most of them being nobles.
George II of Great Britain died on the toilet on October 25, 1760, from an aortic dissection. According to Horace Walpole's memoirs, King George "rose as usual at six, and drank his chocolate; for all his actions were invariably methodic. A quarter after seven he went into a little closet. His German valet de chambre in waiting heard a noise, and running in, found the King dead on the floor." In falling he had cut his face.
Ioan P. Culianu was shot dead while on the toilet in the third-floor men's room of Swift Hall on the campus of the University of Chicago on 21 May 1991, in a possibly politically motivated assassination. His killer has never been caught.
The Abbasid's visier Al-Fadl ibn Sahl was found dead mysteriously in a bathroom in Sarakhs in Northern Khorasan. According to some rumors, the Abbasid Caliph Al-Ma'mun ibn Harun Ar-Rashid had ordered his assassination.
Elvis Presley died when using the toilet. "Most sources indicate that Elvis was likely sitting in the toilet area, partially nude, and reading when he collapsed." According to Dylan Jones, "Elvis Presley died aged 42 on August 16th, 1977, in the bathroom of the star's own Graceland mansion in Memphis. Sitting on the toilet, he had toppled like a toy soldier and collapsed onto the floor, where he lay in a pool of his own vomit. His light blue pajamas were around his ankles." In similar terms, Elvis biographer Joel Williamson writes, "For some reason — perhaps involving a reaction to the codeine and attempts to move his bowels — he experienced pain and fright while sitting on the toilet. Alarmed, he stood up, dropped the book he was reading, stumbled forward, and fell face down in the fetal position. He struggled weakly and drooled on the rug. Unable to breathe, he died." This led to the common saying, “The King died on the throne”.
Possible occurrences
Duke Jing of Jin (Ju), ruler of the State of Jin during the Spring and Autumn period of ancient China, died after falling into a toilet pit in summer 581 BC.
Edmund II of England died of natural causes on November 30, 1016, though some report that he was stabbed in the bowels while attending the outhouse. Similarly, Uesugi Kenshin, a warlord in Japan, died on April 19, 1578, with some reports stating that he was assassinated on the toilet.
Lenny Bruce died of a heroin overdose on August 3, 1966, while sitting on the toilet, with his arm tied off.
Air Canada Flight 797 was destroyed on June 2, 1983, with 23 fatalities after an in-flight fire began in or around the rear lavatory. Investigators could not determine the cause or exact point of origin for the fire.
Michael Anderson Godwin, a convicted murderer in South Carolina who had his sentence reduced from death by the electric chair, sat on the metal toilet in his cell while fixing his television. When he bit one of the wires, the resultant electric shock killed him. Another convicted murderer, Laurence Baker in Pittsburgh, was electrocuted while listening to the television on homemade earphones while sitting on a metal toilet.
A collision between a disabled Cessna 182 and a row of portable toilets on May 2, 2009, at Thun Field (south-east of Tacoma), despite an engine failure at altitude, ended without fatalities; the toilets "kind of cushioned things" for the 67-year-old pilot.
British businessman and Conservative politician Christopher Shale was found dead in a portable toilet at the Glastonbury Festival on June 26, 2011. It is suspected he died of a heart attack.
Aboard ships, the head (ship's toilet), and fittings associated with it are cited as one of the most common reasons for the sinking of tens of thousands of boats of all types and sizes. Heads typically have through-hull fittings located below the water line to draw flush water and eliminate waste. Boats are sunk when fittings fail or the toilet back siphons.
Urban legends
Urban legends have been reported regarding the dangers of using a toilet in a variety of situations. Several of them have been shown to be questionable. These include some cases of the presence of venomous spiders, but do not include the Australian redback spider, which has a reputation for hiding under toilet seats. These recent fears have emerged from a series of hoax emails originating in the Blush Spider hoax, which began circulating the internet in 1999. Spiders have also been reported to live under the seats of airplane toilets; however, the cleaning chemicals used in the toilets would make it difficult for spiders to survive there.
In large cities like New York City, sewer rats often have mythical status regarding size and ferocity, resulting in tales involving the rodents crawling up sewer pipes to attack an unwitting occupant. Of late, stories about terrorists booby trapping the seat to castrate their targets have begun appearing. Another myth is the risk of being sucked into an aircraft lavatory as a result of vacuum pressure during a flight.
See also
List of unusual deaths
Sanitation
List of people who died on the toilet
References
Technology hazards
Injury
Causes of death | Toilet-related injuries and deaths | Technology,Biology | 2,404 |
6,474,767 | https://en.wikipedia.org/wiki/Freundlich%20equation | The Freundlich equation or Freundlich adsorption isotherm, an adsorption isotherm, is an empirical relationship between the quantity of a gas adsorbed into a solid surface and the gas pressure. The same relationship is also applicable for the concentration of a solute adsorbed onto the surface of a solid and the concentration of the solute in the liquid phase. In 1909, Herbert Freundlich gave an expression representing the isothermal variation of adsorption of a quantity of gas adsorbed by unit mass of solid adsorbent with gas pressure. This equation is known as Freundlich adsorption isotherm or Freundlich adsorption equation. As this relationship is entirely empirical, in the case where adsorption behavior can be properly fit by isotherms with a theoretical basis, it is usually appropriate to use such isotherms instead (see for example the Langmuir and BET adsorption theories). The Freundlich equation is also derived (non-empirically) by attributing the change in the equilibrium constant of the binding process to the heterogeneity of the surface and the variation in the heat of adsorption.
Freundlich adsorption isotherm
The Freundlich adsorption isotherm is mathematically expressed as
x/m = K c^(1/n)
In Freundlich's notation (used for his experiments dealing with the adsorption of organic acids on coal in aqueous solutions), x/m signifies the ratio between the adsorbed mass (the adsorbate) x and the mass of the adsorbent m, which in Freundlich's studies was coal. In the figure above, the x-axis represents c, which denotes the equilibrium concentration of the adsorbate within the solvent.
Freundlich's numerical analysis of the three organic acids yielded values for the parameters K and n of the equation above.
Freundlich's experimental data can also be used in a contemporary computer-based fit; the fitted values allow the numerical work done in 1907 to be appreciated.
The ΔK and Δn values are the error bars of the computer-based fit, and the K and n values themselves are used to calculate the dotted lines in the figure.
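A contemporary computer-based fit of this kind is typically done by linear regression on the logarithmic form of the isotherm. The Python sketch below uses NumPy and synthetic data points invented purely to illustrate the procedure (they are not Freundlich's 1907 measurements):

```python
import numpy as np

# Synthetic equilibrium data: concentration c and loading x/m, invented
# purely to illustrate the fitting procedure.
c = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])        # equilibrium concentration
x_over_m = np.array([0.9, 1.2, 1.6, 2.1, 2.8, 3.7])  # adsorbed amount per unit mass

# log(x/m) = log K + (1/n) * log c  ->  a straight line in log-log coordinates
slope, intercept = np.polyfit(np.log10(c), np.log10(x_over_m), 1)
K = 10 ** intercept
n = 1 / slope

print(f"K = {K:.2f}, n = {n:.2f}")
# The uncertainties of such a fit (the ΔK and Δn mentioned above) would come
# from the regression covariance, e.g. np.polyfit(..., cov=True).
```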
The equation can also be written as
log(x/m) = log K + (1/n) log c
Sometimes this notation is also found for experiments in the gas phase:
x/m = K p^(1/n)
x = mass of adsorbate
m = mass of adsorbent
p = equilibrium pressure of the gaseous adsorbate in case of experiments made in the gas phase (gas/solid interaction with gaseous species/adsorbed species)
K and n are constants for a given adsorbate and adsorbent at a given temperature (from there, the term isotherm, needed to avoid significant gas pressure fluctuations due to uncontrolled temperature variations in the case of adsorption experiments of a gas onto a solid phase)
K = distribution coefficient
n = correction factor
At high pressure 1/n → 0, hence the extent of adsorption becomes independent of pressure.
The Freundlich equation is empirical and not unique to heterogeneous surfaces; consequently, if the data fit the equation, it is only likely, but not proved, that the surface is heterogeneous. The heterogeneity of the surface can be confirmed with calorimetry. Homogeneous surfaces (or heterogeneous surfaces that exhibit homogeneous adsorption (single site)) have a constant heat of adsorption. On the other hand, heterogeneous adsorption (multi-site) has a variable heat of adsorption depending on the percentage of sites occupied. When the adsorbate pressure in the gas phase (or the concentration in solution) is low, high-energy sites will be occupied first. As the pressure in the gas phase (or the concentration in solution) increases, the low-energy sites will then be occupied, resulting in a weaker heat of adsorption.
Limitation of Freundlich adsorption isotherm
Experimentally it was determined that the extent of gas adsorption varies directly with pressure, and then it varies with pressure raised to the power 1/n until the saturation pressure is reached. Beyond that point, the rate of adsorption saturates even after applying higher pressure. Thus, the Freundlich adsorption isotherm fails at higher pressure.
See also
Langmuir adsorption model
References
Further reading
External links
Chromatography | Freundlich equation | Chemistry | 886 |
11,360,494 | https://en.wikipedia.org/wiki/European%20Inter-University%20Association%20on%20Society%2C%20Science%20and%20Technology | The European Inter-University Association on Society, Science and Technology (ESST) is an association of universities that teach and research together in the field of social, scientific and technological developments. Universities from all over Europe are members of the association, which was founded in 1991 and is registered as a non-profit organisation in Belgium. The association was founded to strengthen education and research in Science and Technology Studies (STS).
Activities and ethos
ESST runs a programme of teaching and research devoted to science and technology studies, in both historical and contemporary perspectives. The ESST programme has affiliated faculty with strong interests in the intersections of science and technology with public policy, cultural change, and economic development.
The ESST programme is international in its outlook: it is a multicultural venture rooted in the teaching, research and scientific cultures of many European regions and countries, and in their wider social experience. The universities have developed a networked postgraduate programme focusing on the social, scientific and technological developments in Europe, which they teach in collaboration with each other. This involves substantial exchange of students and staff from the participating universities.
ESST Master's degree
The ESST Association offers a Master's programme "Society, Science and Technology in Europe" and contributes to the development of information resources and analytical concepts and skills for researchers and students in the field of science, technological change and innovation. The programme is designed to provide postgraduate training for academics from all disciplines: Social Sciences, Natural Sciences, Engineering and Humanities.
Aim of the programme
The overall aim of the MA ESST is to provide future researchers, innovation consultants, research managers and policy makers with a deep and critical understanding of the relationship between research and innovation, their specific socio-historical contexts of emergence and contemporary socio-economic embeddedness. The programme takes an interdisciplinary approach and provides opportunities for student and staff exchange.
Educational structure
The ESST Master's programme is a 60 ECTS programme and is divided into two parts: a first general introduction part and a second specialisation part. The ESST programme is organised in different ways by the participating universities: Some universities offer ESST as a one-year programme with 60 ECTS: Athens, Maastricht and Madrid. Other partners have embedded the 60 ECTS ESST programme in a local two-year programme: Klagenfurt, Strasbourg and Trento. Oslo offers the ESST programme as a 90 ECTS programme lasting one and a half years. Regardless of whether the ESST MA is organised in one, one and a half or two years, all these universities offer a 60 ECTS ESST programme and thus fulfil the degree requirements.
All ESST universities offering the first semester teach a common curriculum (with some local additions) in that semester, after which students choose a specialisation from the range offered by the different universities within ESST in the second semester. Students have the option of either transferring between universities (and countries) after the first semester or staying at one university for both semesters. The Master's thesis is supervised by supervisors at the second-semester university and graded by staff from two ESST universities, one of whom is always from the first-semester university.
Other ESST partners only offer a specialisation in the second semester: Aalborg, Lisbon, Louvain, Lund, Tallinn and Toruń.
Language
Some universities offer the first 30 ECTS in English: Athens and Maastricht. Others offer the first 30 ECTS of the programme in their national language: Madrid, Klagenfurt, Oslo, Strasbourg and Trento.
Title and degree
Upon successful completion, students receive the Master of Arts (M.A.) "Society, Science and Technology", which confers the right to the corresponding title. At most ESST higher education institutions, students receive both a local Master's title and the ESST MA diploma. Students who successfully complete course modules covering the common ESST curriculum for the first semester, but who do not take (or complete) an ESST specialisation in the second semester and do not write a thesis in accordance with ESST rules, receive an ESST certificate in addition to their local degree.
Master's programme universities
Alpen-Adria-University of Klagenfurt, Austria
Autonomous University of Madrid, Spain
Maastricht University, The Netherlands
NKUA, Athens, Greece
University of Oslo, Norway
University of Strasbourg, France
University of Trento, Italy
Second-semester specialisations
Aalborg University, Denmark
Lund University, Sweden
Nicolaus Copernicus University, Poland
University of Lisbon, Portugal
Tallinn University of Technology, Estonia
Université catholique de Louvain, Belgium
References
External links
The ESST website which contains information about universities, specialisations, the Association, alumni activities and the ESST Award.
InterESST the ESST Alumni organisation
InterESST LinkedIn pages
Oslo InterESST Facebook group
Maastricht InterESST Facebook group
Science and technology studies associations
College and university associations and consortia in Europe
Technology consortia | European Inter-University Association on Society, Science and Technology | Technology | 1,013 |
76,944,753 | https://en.wikipedia.org/wiki/Craig%20Dunn | Craig P. Dunn is an American professor in the fields of business and sustainability. He is a professor in the management department at Western Washington University, where from 2016 to 2023 he served as Wilder Distinguished Professor of Business and Sustainability, an endowed professorship. Dunn attended California State University, Long Beach for his Bachelor of Science degree in business administration, California State University, Bakersfield for his Master of Business Administration, and Indiana University Bloomington for his Doctor of Philosophy. He formerly worked for San Diego State University, where he is now an associate professor, emeritus.
At Western Washington University, he served as dean of the College of Business and Economics from 2013 to 2016 before gaining his professorship. He was succeeded as dean by Scott Young. Dunn also serves on the faculty of the Institute for Energy Studies, on the Graduate Faculty Governance Council, and on the Lesbian, Gay, Bisexual & Transgender Advocacy Council. In 2021, Dunn had the highest salary of any university employee other than the president, Sabah Randhawa.
References
External links
Craig Dunn – WWU News
Living people
Year of birth missing (living people)
Missing middle or first names
American academics
American businesspeople
American education businesspeople
American energy industry businesspeople
Business school deans
Businesspeople in education
Energy economists
Sustainability scientists
California State University, Long Beach alumni
California State University, Bakersfield alumni
Indiana University Bloomington alumni
San Diego State University faculty
Western Washington University faculty | Craig Dunn | Environmental_science | 282 |
18,766,223 | https://en.wikipedia.org/wiki/Frequent%20flyer%20program%20%28Guantanamo%29 | The frequent flyer program is a controversial technique used by the United States in the Guantanamo Bay detainment camps in Cuba. Guards deprived detainees of sleep by moving them from one cell to another, multiple times a day, for days or weeks on end.
The technique was used to "soften up" detainees prior to interrogation. Guantanamo guards were ordered to discontinue the use of the technique in March 2004, although the practice persisted until at least later that year.
Major David Frakt, USAF, defense counsel to a recipient of the program, Mohamed Jawad, said:
In August 2008, in testimony at Jawad's Guantanamo military commission trial, US Army officers confirmed the existence of the frequent flyer program. At least 17 detainees were subjected to the program.
In May 2012, Ramzi Kassem, a lawyer for detainee Shaker Aamer, said his client alleges the frequent flyer program was still being used as a punishment technique in the isolation block known as Camp Five Echo.
See also
Mohamed Jawad
Ghassan al-Shirbi
Torture
References
Guantanamo Bay detention camp
Interrogation techniques
Sleeplessness and sleep deprivation | Frequent flyer program (Guantanamo) | Biology | 231 |
4,551,386 | https://en.wikipedia.org/wiki/History%20of%20submarines | The history of the submarine goes back to antiquity. Humanity has employed a variety of methods to travel underwater for exploration, recreation, research and significantly, warfare. While early attempts, such as those by Alexander the Great, were rudimentary, the advent of new propulsion systems, fuels, and sonar, propelled an increase in submarine technology. The introduction of the diesel engine, then the nuclear submarine, saw great expansion in submarine use — and specifically military use — during World War I, World War II, and the Cold War. The Second World War use of the U-Boat by the Kriegsmarine against the Royal Navy and commercial shipping, and the Cold War's use of submarines by the United States and Russia, helped solidify the submarine's place in popular culture. The latter conflicts also saw an increasing role for the military submarine as a tool of subterfuge, hidden warfare, and nuclear deterrent. The military use of submarines continues to this day, predominantly by North Korea, China, the United States and Russia.
Beyond their use in warfare, submarines continue to have recreational and scientific uses. They are heavily employed in the exploration of the sea bed and the deepest parts of the ocean floor. They are used extensively in search and rescue operations for other submarines, surface vessels, and aircraft, and offer a means of descending to depths far beyond the reach of scuba diving, for both exploration and recreation. They remain a focus of popular culture and the subject of numerous books and films.
Early
The concept of underwater transport has roots deep in antiquity. There are images of men using hollow sticks to breathe underwater for hunting at the temples at Thebes, and the first known military use occurred during the siege of Syracuse (415–413 BC), where divers cleared obstructions, according to the History of the Peloponnesian War. At the siege of Tyre (332 BC), Alexander the Great used divers, according to Aristotle. Later legends suggested that Alexander descended into the sea using a primitive submersible in the form of a diving bell, as depicted in a 16th-century illustration in the works of the Mughal poet Amir Khusrau.
According to a report attributed to Tahbir al-Tayseer in Opusculum Taisnieri published in 1562:
There were various plans for submersibles or submarines during the Middle Ages. In Eastern Europe the Cossack Zaporozhian Host constructed underwater skiffs for use against Turkish positions.
The Englishman William Bourne designed a prototype submarine in 1578. This was to be a completely enclosed boat that could be submerged and rowed beneath the surface: a wooden vessel sheathed in waterproofed leather, submerged by means of hand-operated wooden screw-thread plungers pressing against flexible leather bags at the sides, which increased or decreased the volume of water taken on to adjust the buoyancy of the craft. Bourne's sketch suggests that depth was adjusted using a crankset projecting above the surface. There is no obvious accommodation for crew.
In 1596 the Scottish mathematician and theologian John Napier wrote in his Secret Inventions the following: "These inventions besides devises of sayling under water with divers, other devises and strategems for harming of the enemyes by the Grace of God and worke of expert Craftsmen I hope to perform."
It remains unclear whether Napier ever carried out his plans. Henry Briggs, professor of mathematics at Gresham College, London, and later at Oxford, was a friend of Napier, whom he visited in 1615 and 1616, and was also an acquaintance of Cornelius Van Drebbel, a Dutchman in the service of James I of England, who designed and built the first successful submarine in 1620. It is therefore possible that Napier's interest in the submarine is what brought Briggs into contact with Drebbel.
Drebbel's submarine was propelled by oars. The precise nature of this submarine is unclear; it may have resembled a bell towed by a boat. Two improved types were tested in the River Thames between 1620 and 1624. Of one of these tests Constantijn Huygens reports in his autobiography of 1651 the following:
On 18 October 1690, his son Constantijn Huygens, Jr. commented in his diary on how Drebbel was able to measure the depth to which his boat had descended (which was necessary to prevent the boat from sinking) utilizing a quicksilver barometer:
In order to solve the problem of the absence of oxygen, Drebbel was able to extract oxygen from saltpetre to refresh the air in his submarine. An indication of this can be found in Drebbel's own work: On the Nature of the Elements (1604), in the fifth chapter:
The introduction of Drebbel's submarine concept seemed beyond conventional expectations of the capability of contemporary science. Commenting on the scientific basis of Drebbel's claims, renowned German astronomer Johannes Kepler allegedly remarked in 1607: "If [Drebbel] can create a new spirit, by means of which he can move and keep in motion his instrument without weights or propelling power, he will be Apollo in my opinion."
Although the first submersible vehicles were tools for exploring underwater, it did not take long for inventors to recognize their military potential. The strategic advantages of submarines were first set out by Bishop John Wilkins of Chester in Mathematical Magick in 1648:
Between 1690 and 1692, the French physicist Denis Papin designed and built two submarines. The first design (1690) was a strong and heavy metallic square box, equipped with an efficient pump that pumped air into the hull to raise the inner pressure. When the air pressure reached the required level, holes were opened to let in some water. This first machine was destroyed by accident. The second design (1692) had an oval shape and worked on similar principles. A water pump controlled the buoyancy of the machine. According to some sources, a spy of German mathematician Gottfried Wilhelm Leibniz called Haes reported that Papin had met with some success with his second design on the River Lahn.
The Russian autodidact Yefim Nikonov designed and built military submarines in the decade from 1718 to 1728.
By the mid-18th century, over a dozen patents for submarines or submersible boats had been granted in England. In 1747, Nathaniel Symons patented and built the first known working example of the use of a ballast tank for submersion. His design used leather bags that could fill with water to submerge the craft; a mechanism twisted the water out of the bags and caused the boat to resurface. In 1749, the Gentleman's Magazine reported that a similar design had been proposed by Giovanni Borelli in 1680. From this point, design progress stagnated for over a century, until new industrial technologies for propulsion and stability emerged.
Early modern
The first American military submarine was Turtle in 1776, a hand-powered egg-shaped (or acorn-shaped) device designed by the American David Bushnell to accommodate a single man. It was the first submarine capable of independent underwater operation and movement, and the first to use screws for propulsion. However, according to British naval historian Richard Compton-Hall, the problems of achieving neutral buoyancy would have rendered the vertical propeller of the Turtle useless. The route that Turtle had to take to attack its intended target, HMS Eagle, was slightly across the tidal stream, which would, in all probability, have resulted in Ezra Lee becoming exhausted. There are also no British records of an attack by a submarine during the war. In the face of these and other problems, Compton-Hall suggests that the entire story around the Turtle was fabricated as disinformation and morale-boosting propaganda, and that if Ezra Lee did carry out an attack, it was in a covered rowing boat rather than in Turtle. Replicas of Turtle have been built to test the design. One replica (Acorn), constructed by Duke Riley and Jesse Bushnell (claiming to be a descendant of David Bushnell), used the tide to get close to a ship in New York City (a police boat stopped Acorn for violating a security zone).
Displays of replicas of Turtle which acknowledge its place in history appear in the Connecticut River Museum, the U.S. Navy's Submarine Force Library and Museum, Britain's Royal Navy Submarine Museum and Monaco's Oceanographic Museum.
In 1800, the French Navy built a human-powered submarine designed by Robert Fulton, the Nautilus. It also had a sail for use on the surface, and so exhibited the first known use of dual propulsion on a submarine. It proved capable of using mines to destroy two warships during demonstrations. The French eventually gave up on the experiment in 1804, as did the British, when Fulton later offered them the submarine design.
In 1834 the Russian Army general Karl Schilder demonstrated the first rocket-equipped submarine to Emperor Nicholas I.
The Submarino Hipopótamo, the first submarine built in South America, underwent testing in Ecuador on September 18, 1837. Its designer, Jose Rodriguez Lavandera, successfully crossed the Guayas River in Guayaquil accompanied by Jose Quevedo. Rodriguez Lavandera had enrolled in the Ecuadorian Navy in 1823, becoming a Lieutenant by 1830. The Hipopotamo crossed the Guayas on two more occasions, but it was abandoned because of lack of funding and interest from the government.
In 1851 a Bavarian artillery corporal, Wilhelm Bauer, took a submarine designed by him called the Brandtaucher (fire-diver) to sea in Kiel Harbour. Built by August Howaldt and powered by a treadwheel, Brandtaucher sank, but the crew of three managed to escape.
During the American Civil War both sides made use of submarines. Examples were the Alligator, for the Union, and the Hunley, for the Confederacy. The Hunley was the first submarine to successfully attack and sink an opposing warship (see below).
In 1863 the Sub Marine Explorer was built by the German American engineer Julius H. Kroehl, and featured a pressurized work chamber allowing the crew to exit and enter underwater. This prefigured modern diving arrangements such as the lock-out dive chamber, though the problems of decompression sickness were not well understood at the time. After its public maiden dive in 1866, the Sub Marine Explorer was used for pearl diving off the coast of Panama. It was capable of diving deeper than any other submarine built before it.
The Chilean government commissioned the Flach in 1865, during the Chincha Islands War (1864–1866), when Chile and Peru fought against Spain. Built by the German engineer Karl Flach, the submarine sank during tests in Valparaiso Bay on May 3, 1866, with the entire eleven-man crew.
During the War of the Pacific in 1879, the Peruvian government commissioned a submarine, the Toro Submarino, designed by the Peruvian engineer Federico Blume and built in Paita, Peru. It is considered the first operational submarine or submersible in Latin America. Manually operated by a crew of 11 and fitted with a ventilation system, it could submerge and remain underwater for a limited time. Fully operational and awaiting its opportunity to attack with naval mines during the Blockade of Callao, it was scuttled to avoid capture by Chilean troops on January 17, 1881, before the imminent occupation of Lima.
Mechanical power
The first submarine that did not rely on human power for propulsion was the French Navy submarine Plongeur, launched in 1863 and equipped with a reciprocating engine using compressed air stored in 23 tanks. In practice, the submarine was virtually unmanageable underwater, with very poor speed and maneuverability.
The first air independent and combustion powered submarine was the Ictineo II, designed by the Spanish engineer Narcís Monturiol. Originally launched in 1864 as a human-powered vessel, propelled by 16 men, it was converted to peroxide propulsion and steam in 1867. The craft was designed for a crew of two, could dive to , and demonstrated dives of two hours. On the surface, it ran on a steam engine, but underwater such an engine would quickly consume the submarine's oxygen. To solve this problem, Monturiol invented an air-independent propulsion system. As the air-independent power system drove the screw, the chemical process driving it also released oxygen into the hull for the crew and an auxiliary steam engine. Apart from being mechanically powered, Monturiol's pioneering double-hulled vessels also solved pressure, buoyancy, stability, diving and ascending problems that earlier designs had encountered.
The submarine became a potentially viable weapon with the development of the first practical self-propelled torpedoes. The Whitehead torpedo was the first such weapon, designed in 1866 by British engineer Robert Whitehead. His "mine ship" was a torpedo propelled by compressed air and carrying an explosive warhead. Many naval services procured the Whitehead torpedo during the 1870s, and it first proved itself in combat during the Russo-Turkish War when, on 16 January 1878, the Turkish ship Intibah was sunk by Russian torpedo boats carrying Whiteheads.
During the 1870s and 1880s, the basic contours of the modern submarine began to emerge, through the inventions of the English inventor and curate, George Garrett, and his industrialist financier Thorsten Nordenfelt, and the Irish inventor John Philip Holland.
In 1878, Garrett built a hand-cranked submarine of about 4.5 tons, which he named the Resurgam. This was followed by the second (and more famous) Resurgam of 1879, built by Cochran & Co. at Birkenhead, England. The construction was of iron plates fastened to iron frames, with the central section of the vessel clad with wood secured by iron straps; it carried a crew of 3. Resurgam was powered by a closed-cycle steam engine, which provided enough steam to turn the single propeller for up to 4 hours. It was designed to have positive buoyancy, and diving was controlled by a pair of hydroplanes amidships. At the time it cost £1,538.
Although his design was not very practical – the steam boiler generated intense heat in the cramped confines of the vessel, and it lacked longitudinal stability – it caught the attention of the Swedish industrialist Thorsten Nordenfelt. Discussions between the two led to the first practical steam-powered submarines, armed with torpedoes and ready for military use.
The first such boat was the Nordenfelt I of 1885, a 56-tonne vessel similar to Garrett's ill-fated Resurgam, armed with a single torpedo. Like Resurgam, Nordenfelt I operated on the surface by steam, then shut down its engine to dive. While submerged, the submarine released pressure generated while the engine had been running on the surface to provide propulsion for some distance underwater. Greece, fearful of the return of the Ottomans, purchased it. Nordenfelt commissioned the Barrow Shipyard in England in 1886 to build Nordenfelt II (Abdülhamid) and Nordenfelt III (Abdül Mecid) in 1887. They were powered by a coal-fired Lamm steam engine turning a single screw, and carried two 356 mm torpedo tubes and two 35 mm machine guns. Loaded with a total of 8 tons of coal as fuel, each boat was 30.5 m long and 6 m wide, weighed 100 tons, and carried a normal crew of 7. Abdülhamid became the first submarine in history to fire a torpedo submerged.
Nordenfelt's efforts culminated in 1887 with Nordenfelt IV, which had twin motors and twin torpedoes. It was sold to the Russians, but soon ran aground and was scrapped. Garrett and Nordenfelt made significant advances in constructing the first modern, militarily capable submarines and fired up military and popular interest around the world for this new technology. However, the solution to fundamental technical problems, such as propulsion, quick submergence, and the maintenance of balance underwater was still lacking, and would only be solved in the 1890s.
Electric power
A reliable means of propulsion for submerged vessels only became possible in the 1880s with the advent of the necessary electric battery technology. The first electrically powered submarines were built by the Polish engineer Stefan Drzewiecki, who had been designing and constructing submarines in Russia since 1881. Other engineers soon built electrically powered craft of their own: James Franklin Waddington and the team of James Ash and Andrew Campbell in England, Dupuy de Lôme and Gustave Zédé in France, and Isaac Peral in Spain.
In 1884, Drzewiecki converted two of his mechanical submarines, installing in each an electric motor fed by what was then a new source of energy: batteries. In tests, the submarines travelled underwater against the flow of the Neva River. They were the first submarines in the world with electric propulsion. Ash and Campbell constructed their craft, the Nautilus, in 1886, with an engine powered by 52 batteries. It was an advanced design for the time, but became stuck in the mud during trials and was discontinued. Waddington's Porpoise vessel showed more promise. Waddington had formerly worked in the shipyard in which Garrett had been active. Waddington's vessel was similar in size to the Resurgam, and its propulsion system used 45 accumulator cells with a capacity of 660 ampere-hours each. These were coupled in series to a motor driving a propeller at about 750 rpm, a speed the boat could sustain for at least 8 hours. The boat was armed with two externally mounted torpedoes as well as a mine torpedo that could be detonated electronically. Although the boat performed well in trials, Waddington was unable to attract further contracts and went bankrupt.
In France, the early electric submarines Goubet I and Goubet II were built by the civil engineer Claude Goubet. These boats were also unsuccessful, but they inspired the renowned naval architect Dupuy de Lôme to begin work on his own submarine – an advanced electric-powered vessel almost 20 metres long. He did not live to see his design constructed, but the craft was completed by Gustave Zédé in 1888 and named the Gymnote. It was one of the first truly successful electrically powered submarines, and was equipped with an early periscope and an electric gyrocompass for navigation. It completed over 2,000 successful dives using a 204-cell battery. Although the Gymnote was scrapped because of its limited range, its side hydroplanes became the standard for future submarine designs.
The Peral Submarine, constructed by Isaac Peral, was launched by the Spanish Navy in the same year, 1888. It had three Schwartzkopff torpedoes and one torpedo tube in the bow, new air systems, and a hull shape, propeller, and cruciform external controls anticipating much later designs. Peral was an all-electric submarine. After two years of trials the project was scrapped by naval officialdom, who cited, among other reasons, concerns over the range permitted by its batteries.
Many more designs were built at this time by various inventors, but submarines were not put into service by navies until the turn of the 20th century.
Modern
The turn of the 20th century marked a pivotal time in the development of submarines, with a number of important technologies making their debut, as well as the widespread adoption and fielding of submarines by a number of nations. Diesel electric propulsion would become the dominant power system and instruments such as the periscope would become standardized. Batteries were used for running underwater and gasoline (petrol) or diesel engines were used on the surface and to recharge the batteries. Early boats used gasoline, but quickly gave way to kerosene, then diesel, because of reduced flammability. Effective tactics and weaponry were refined in the early part of the century, and the submarine would have a large impact on 20th century warfare.
The Irish inventor John Philip Holland built a model submarine in 1876 and a full scale one in 1878, followed by a number of unsuccessful ones. In 1896, he designed the Holland Type VI submarine. This vessel made use of internal combustion engine power on the surface and electric battery power for submerged operations. Launched on 17 May 1897 at Navy Lt. Lewis Nixon's Crescent Shipyard in Elizabeth, New Jersey, the Holland VI was purchased by the United States Navy on 11 April 1900, becoming the United States Navy's first commissioned submarine and renamed USS Holland.
A prototype version of the A-class submarine (Fulton) was developed at Crescent Shipyard in 1900 under the supervision of Arthur Leopold Busch, a naval architect and shipbuilder from the United Kingdom, for the newly reorganized Electric Boat Company. The Fulton was never commissioned by the United States Navy and was sold to the Imperial Russian Navy in 1905. The submarines were built at two different shipyards, one on each coast of the United States. In 1902, Holland received a patent for his relentless pursuit of perfecting the modern submarine. Many countries became interested in Holland's designs and purchased the rights to build them during this time.
The Royal Navy commissioned the Holland-class submarine from Vickers, Barrow-in-Furness, under licence from the Holland Torpedo Boat Company during the years 1901 to 1903. Construction of the boats took longer than anticipated, with the first only ready for a diving trial at sea on 6 April 1902. Although the design had been purchased entirely from the US company, the actual design used was an untested improved version of the original Holland design using a new petrol engine.
Meanwhile, the French steam and electric Narval was commissioned in June 1900 and introduced the classic double-hull design, with a pressure hull inside the outer shell. These 200-ton ships had a range of over underwater. The French submarine Aigrette in 1904 further improved the concept by using a diesel rather than a gasoline engine for surface power. Large numbers of these submarines were built, with seventy-six completed before 1914.
By 1914, all the main powers had submarine fleets, though the development of a strategy for their use lay in the future.
At the start of World War I, the Royal Navy had the world's largest submarine service by a considerable margin, with 74 boats of the B, C and D classes, of which 15 were oceangoing, with the rest capable of coastal patrols. The D-class, built 1907–1910, were designed to be propelled by diesel motors on the surface to avoid the problems with petrol engines experienced with the A class. These boats were designed for foreign service, with greatly extended surface endurance and much-improved living conditions for a larger crew. They were fitted with twin screws for greater maneuverability and with innovative saddle tanks. They were also the first submarines to be equipped with deck guns forward of the conning tower. Armament also included three torpedo tubes (two vertically in the bow and one in the stern). The D-class was also the first class of submarine to be equipped with standard wireless transmitters. The aerial was attached to the mast of the conning tower and was lowered before diving. With their enlarged bridge structure, the boat profile was recognisably that of the modern submarine. The D-class submarines were considered to be so innovative that the prototype D1 was built in utmost secrecy in a securely guarded building shed.
The British also experimented with other power sources. Oil-fired steam turbines powered the British "K" class submarines built during the First World War and in following years, but these were not very successful.
The aim was to give them the necessary surface speed to keep up with the British battle fleet.
The Germans were slower to recognize the importance of this new weapon. A submersible was initially ordered by the Imperial Russian Navy from the Kiel shipyard in 1904, but cancelled after the Russo-Japanese War ended. One example was modified and improved, then commissioned into the Imperial German Navy in 1906 as its first U-boat, U-1. It had a double hull, was powered by a Körting kerosene engine and was armed with a single torpedo tube. The fifty percent larger SM U-2 had two torpedo tubes. A diesel engine was not installed in a German navy boat until the U-19 class of 1912–13. At the start of World War I, Germany had 20 submarines of 13 classes in service, with more under construction.
Interwar
Diesel submarines needed air to run their engines, and so carried very large batteries for submerged travel. These limited the speed and range of the submarines while submerged.
An early submarine snorkel was designed by James Richardson, an assistant manager at Scotts Shipbuilding and Engineering Company, Greenock, Scotland, as early as 1916. The snorkel allowed a submarine to avoid detection for long periods by running its diesel engines while travelling just below the surface, rather than relying on electric propulsion. Although the company received a British patent for the design, no further use was made of it: the British Admiralty did not accept it for use in Royal Navy submarines.
The first German U-boat to be fitted with a snorkel experimented with the equipment in the Baltic Sea during the summer of 1943. The technology was based on pre-war Dutch experiments with a device named a snuiver (sniffer). As early as 1938, a simple pipe system installed on Dutch submarines enabled them to travel at periscope depth operating on their diesels, with almost unlimited underwater range, while charging the propulsion batteries. U-boats began to use the snorkel operationally in early 1944. By June 1944, about half of the boats stationed in the French bases were fitted with snorkels.
Various new submarine designs were developed during the interwar years. Among the most notable were submarine aircraft carriers, equipped with a waterproof hangar and steam catapult to launch and recover one or more small seaplanes. The submarine and its plane could then act as a reconnaissance unit ahead of the fleet, an essential role at a time when radar was not available. The first example was the British HMS M2, followed by the French Surcouf, and numerous aircraft-carrying submarines in the Imperial Japanese Navy.
Early submarine designs put the diesel engine and the electric motor on the same shaft, which also drove a propeller with clutches between each of them. This allowed the engine to drive the electric motor as a generator to recharge the batteries and also propel the submarine as required. The clutch between the motor and the engine would be disengaged when the boat dived so that the motor could be used to turn the propeller. The motor could have more than one armature on the shaft – these would be electrically coupled in series for slow speed and parallel for high speed (known as "group down" and "group up" respectively).
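The penalty for "group up" running can be illustrated with a small calculation. The sketch below is purely illustrative, using invented figures rather than data for any historical boat; it simply shows why the high-speed (parallel) grouping drained the battery far faster than the slow-speed (series) grouping, and hence why submerged endurance fell sharply with speed.

```python
# Illustrative sketch only: invented figures, not data for any real submarine.
# "Group down" (armatures in series) draws a modest current for low speed;
# "group up" (armatures in parallel) draws far more current for high speed,
# so submerged endurance collapses as speed rises.

def endurance_hours(battery_capacity_ah: float, current_draw_a: float) -> float:
    """Hours the battery can sustain a given current draw (simple linear model)."""
    return battery_capacity_ah / current_draw_a

BATTERY_CAPACITY_AH = 5000.0   # hypothetical total battery capacity
GROUP_DOWN_DRAW_A = 250.0      # hypothetical slow-speed (series) current draw
GROUP_UP_DRAW_A = 1500.0       # hypothetical high-speed (parallel) current draw

print(f"Group down (slow): about {endurance_hours(BATTERY_CAPACITY_AH, GROUP_DOWN_DRAW_A):.0f} hours submerged")
print(f"Group up (fast): about {endurance_hours(BATTERY_CAPACITY_AH, GROUP_UP_DRAW_A):.1f} hours submerged")
```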
In the 1930s, the principle was modified for some submarine designs, particularly those of the U.S. Navy and the British U class. The engine was no longer attached to the motor/propeller drive shaft, but drove a separate generator, which would drive the motors on the surface and/or recharge the batteries. This diesel-electric propulsion allowed much more flexibility. For example, the submarine could travel slowly whilst the engines were running at full power to recharge the batteries as quickly as possible, reducing time on the surface, or use of its snorkel. Also, it was now possible to insulate the noisy diesel engines from the pressure hull, making the submarine quieter.
An early form of anaerobic propulsion had already been employed by the Ictineo II in 1864. The engine used a chemical mix containing a peroxide compound, which generated heat for steam propulsion while at the same time solving the problem of oxygen renewal in a hermetic container for breathing purposes. This system was not employed again until 1940, when the German Navy tested a system employing the same principles, the Walter turbine, on an experimental submarine and later on naval boats.
At the end of the Second World War, the British and Russians experimented with hydrogen peroxide/kerosene (paraffin) engines, which could be used both above and below the surface. The results were not encouraging enough for this technique to be adopted at the time, although the Russians deployed a class of submarines with this engine type code named Quebec by NATO. They were considered a failure. Today, several navies, notably Sweden, use air-independent propulsion boats, which substitute liquid oxygen for hydrogen peroxide.
Nuclear propulsion and missile platforms
For further information on nuclear powered submarines, see Nuclear submarine.
The first launch of a cruise missile (SSM-N-8 Regulus) from a submarine occurred in July 1953 from the deck of the USS Tunny, a World War II fleet boat modified to carry this missile with a nuclear warhead. Tunny and her sister boat Barbero were the United States' first nuclear deterrent patrol submarines. They were joined in 1958 by two purpose-built Regulus submarines, Grayback and Growler, and, later, by the nuclear-powered Halibut. So that no target would be left uncovered, four Regulus missiles had to be at sea at any given time. Thus, Barbero and Tunny, each of which carried two Regulus missiles, patrolled simultaneously. Growler and Grayback, with four missiles, or Halibut, with five, could patrol alone. These five submarines made 40 Regulus strategic deterrent patrols between October 1959 and July 1964. They were replaced beginning in 1961 by a greatly superior system: the Polaris missile, launched from nuclear-powered ballistic missile submarines (SSBNs). The Soviet Navy developed submarine-launched ballistic missiles launched from conventional submarines a few years before the US, and paralleled subsequent US development in this area.
In the 1950s, nuclear power partially replaced diesel-electric propulsion. The sailing of the first nuclear-powered submarine, the USS Nautilus, in 1955 was soon followed by similar British, French and Russian boats. Equipment was also developed to extract oxygen from sea water. These two innovations, together with inertial navigation systems, gave submarines the ability to remain submerged for weeks or months, and enabled previously impossible voyages such as the crossing of the North Pole beneath the Arctic ice cap by the USS Nautilus in 1958. Most of the naval submarines built since that time in the United States and in the Soviet Union and its successor state, the Russian Federation, have been powered by nuclear reactors. The limiting factors in submerged endurance for these vessels are food supply and crew morale in the space-limited submarine.
The Soviet Navy attempted to use a very advanced lead cooled fast reactor on Project 705 "Lira" (NATO Alfa class) beginning in the 1970s, but its maintenance was considered too expensive, and only six submarines of this class were completed. By removing the requirement for atmospheric oxygen all nuclear-powered submarines can stay submerged indefinitely so long as food supplies remain (air is recycled and fresh water distilled from seawater). These vessels always have a small battery and diesel generator installation for emergency use when the reactors have to be shut down.
While the greater endurance and performance of nuclear reactors mean that nuclear submarines are better for long distance missions or the protection of a carrier battle-force, both countries that do and countries that do not use nuclear power continue to produce conventional diesel-electric submarines, because they can be made stealthier, except when required to run the diesel engine to recharge the ship's battery. Technological advances in sound dampening, noise isolation and cancellation have substantially eroded this advantage. Though far less capable regarding speed and weapons payload, conventional submarines are also cheaper to build. The introduction of air-independent propulsion boats led to increased sales numbers of such types of submarines.
In 1958 the USN carried out a series of trials with the experimental submarine USS Albacore. Various hull and control configurations were tested to reduce drag and so allow greater underwater speed and maneuverability. The results of these trials were incorporated into the Skipjack class and later submarines. From the same era is the first SSBN, the USS George Washington.
Recent
The German Type 212 submarine was the first series production submarine to use fuel cells for air-independent propulsion. It is powered by nine 34-kilowatt hydrogen fuel cells.
Most small modern commercial submarines are not expected to operate independently; they use batteries that can be recharged by a mother ship after every dive.
Towards the end of the 20th century, some submarines were fitted with pump-jet propulsors, instead of propellers. Although these are heavier, more expensive, and often less efficient than a propeller, they are significantly quieter, giving an important tactical advantage.
A possible propulsion system for submarines is the magnetohydrodynamic drive, or "caterpillar drive", which has no moving parts. It was popularized in the movie version of The Hunt for Red October, written by Tom Clancy, which portrayed it as a virtually silent system. (In the book, a form of propulsor was used rather than an MHD.) Although some experimental surface ships have been built with this propulsion system, speeds have not been as high as hoped. In addition, the noise created by bubbles, and the higher power settings a submarine's reactor would need, mean that it is unlikely to be considered for any military purpose.
Associated technology
Sensors
The first submarines had only a porthole to provide a view to aid navigation. An early periscope was patented by Simon Lake in 1893. The modern periscope was developed by the industrialist Sir Howard Grubb in the early 20th century and was fitted onto most Royal Navy designs.
Passive sonar was introduced in submarines during the First World War, but active sonar ASDIC did not come into service until the inter-war period. Today, the submarine may have a wide variety of sonar arrays, from bow-mounted to trailing ones. There are often upward-looking under-ice sonars as well as depth sounders.
Early experiments with the use of sound to 'echo locate' underwater in the same way as bats use sound for aerial navigation began in the late 19th century. The first patent for an underwater echo ranging device was filed by English meteorologist Lewis Fry Richardson a month after the sinking of the Titanic. The First World War stimulated research in this area. The British made early use of underwater hydrophones, while the French physicist Paul Langevin worked on the development of active sound devices for detecting submarines in 1915 using quartz. In 1916, under the British Board of Invention and Research, Canadian physicist Robert William Boyle took on the active sound detection project with A B Wood, producing a prototype for testing in mid-1917. This work, for the Anti-Submarine Division of the British Naval Staff, was undertaken in utmost secrecy, and used quartz piezoelectric crystals to produce the world's first practical underwater active sound detection apparatus.
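The principle behind these active systems reduces to a single relationship: the range to a target is half the round-trip travel time of the ping multiplied by the speed of sound in seawater. The sketch below is a minimal illustration, assuming a typical sound speed of about 1,500 metres per second (the true value varies with temperature, salinity and depth).

```python
# Minimal sketch of active echo ranging (the principle behind ASDIC/sonar):
# range = speed of sound in seawater * round-trip time / 2.
# 1500 m/s is a typical approximation; the real value varies with
# temperature, salinity and depth.

SPEED_OF_SOUND_SEAWATER_M_S = 1500.0

def echo_range_m(round_trip_seconds: float) -> float:
    """Distance to the target in metres, given the ping's round-trip time."""
    return SPEED_OF_SOUND_SEAWATER_M_S * round_trip_seconds / 2.0

# A ping returning after 2 seconds indicates a target roughly 1500 m away.
print(f"{echo_range_m(2.0):.0f} m")
```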
By 1918, both France and Britain had built prototype active systems. The British tested their ASDIC at sea in 1920 and started production in 1922. The 6th Destroyer Flotilla had ASDIC-equipped vessels in 1923. An anti-submarine school, HMS Osprey, and a training flotilla of four vessels were established on the English Isle of Portland in 1924. The US Sonar QB set arrived in 1931.
Weapons and countermeasures
Early submarines carried torpedoes mounted externally to the craft. Later designs incorporated the weapons into the internal structure of the submarine. Originally, both bow-mounted and stern-mounted tubes were used, but the latter eventually fell out of favour. Today, only bow-mounted installations are employed. The modern submarine is capable of firing many types of weapon from its launch tubes, including UAVs. Special mine laying submarines were also built. Up until the end of the Second World War, it was common to fit deck guns to submarines to allow them to sink ships without wasting their limited numbers of torpedoes.
To aid in weapons targeting, mechanical calculators were employed to improve the fire control of the on-board weaponry. The firing solution was derived from the target's course and speed, estimated through periscope measurements of its bearing and range. Today, these calculations are performed by digital computers, with display screens providing the necessary information on torpedo and ship status.
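As a rough illustration of the geometry such calculators solved, the sketch below derives the deflection (lead) angle for a straight-running torpedo from the law of sines. It is a simplified model with invented inputs, not a reconstruction of any particular fire-control instrument.

```python
import math

# Simplified torpedo fire-control triangle. The deflection ("lead") angle d,
# measured ahead of the line of sight, satisfies
#     sin(d) = (target_speed / torpedo_speed) * sin(track_angle),
# where the track angle is the angle between the target's course and the
# line of sight from the target back to the firing submarine.
# All inputs below are invented for illustration.

def lead_angle_deg(target_speed_kn: float, torpedo_speed_kn: float,
                   track_angle_deg: float) -> float:
    """Deflection angle in degrees to aim ahead of the observed bearing."""
    ratio = (target_speed_kn / torpedo_speed_kn) * math.sin(math.radians(track_angle_deg))
    return math.degrees(math.asin(ratio))

# A 10-knot target crossing at a 90-degree track angle, engaged with a
# 45-knot torpedo, requires aiming about 12.8 degrees ahead of the bearing.
print(f"{lead_angle_deg(10.0, 45.0, 90.0):.1f} degrees")
```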
German submarines in World War II had rubber coatings and could launch chemical devices to provide a decoy when the boat came under attack. These proved to be ineffective, as sonar operators learned to distinguish between the decoy and the submarine. Modern submarines can launch a variety of devices for the same purpose.
Safety
After the sinking of the A1 submarine in 1904, lifting eyes were fitted to British submarines, and in 1908 air-locks and escape helmets were provided. The Royal Navy experimented with various types of escape apparatus, but it was not until 1924 that the "Davis Submerged Escape Apparatus" was developed for crew members. The USN used the similar "Momsen Lung". The French used "Joubert's apparatus" and the Germans used "Draeger's apparatus".
Rescue submarines for evacuating a disabled submarine's crew were developed in the 1970s. A British unmanned vehicle was used for recovering an entangled Russian submarine crew in 2005. A new NATO Submarine Rescue System entered service in 2007.
Communication and navigation
Wireless was used to provide communication to and from submarines in the First World War. The D-class submarine was the first submarine class to be fitted with wireless transmitters in 1907. With time, the type, range and bandwidth of the communications systems have increased. With the danger of interception, transmissions by a submarine are minimised. Various periscope-mounted aerials have been developed to allow communication without surfacing.
Early submarines navigated by eye, with the aid of a compass. The gyrocompass was introduced in the early part of the 20th century, and inertial navigation in the 1950s. Satellite navigation is of limited use to submarines, except at periscope depth or when surfaced.
Military
The first military submarine was Turtle in 1776. During the American Revolutionary War, Turtle (operated by Sgt. Ezra Lee, Continental Army) tried and failed to sink a British warship, HMS Eagle (flagship of the blockaders) in New York harbor on September 7, 1776. There is no record of any attack in the ships' logs.
During the War of 1812, in 1814 Silas Halsey died while using a submarine in an unsuccessful attack on a British warship stationed in New London harbour.
American Civil War
During the American Civil War, the Union was the first to field a submarine. The French-designed Alligator was the first U.S. Navy sub and the first to feature compressed air (for air supply) and an air filtration system. It was the first submarine to carry a diver lock, which allowed a diver to plant electrically detonated mines on enemy ships. Initially hand-powered by oars, it was converted after 6 months to a screw propeller powered by a hand crank. With a crew of 20, it was larger than Confederate submarines. Alligator was long and about in diameter. It was lost in a storm off Cape Hatteras on 1 April 1863 while uncrewed and under tow to its first combat deployment at Charleston.
The Intelligent Whale was built by Oliver Halstead and tested by the U.S. Navy after the American Civil War and caused the deaths of 39 men during trials.
The Confederate States of America fielded several human-powered submarines, including CSS H. L. Hunley (named for its designer and chief financier, Horace Lawson Hunley). The first Confederate submarine was the Pioneer, which sank a target schooner using a towed mine during tests on Lake Pontchartrain, but it was not used in combat. It was scuttled after New Orleans was captured and in 1868 was sold for scrap. The similar Bayou St. John submarine is preserved in the Louisiana State Museum. CSS Hunley was intended for attacking Union ships that were blockading Confederate seaports. The submarine had a long pole with an explosive charge in the bow, called a spar torpedo. The sub had to approach an enemy vessel, attach the explosive, move away, and then detonate it. It was extremely hazardous to operate, and had no air supply other than what was contained inside the main compartment. On two occasions, the sub sank; on the first occasion half the crew died, and on the second, the entire eight-man crew (including Hunley himself) drowned. On 17 February 1864, Hunley sank USS Housatonic off Charleston Harbor, the first time a submarine successfully sank another ship, though it sank in the same engagement shortly after signalling its success. Submarines did not have a major impact on the outcome of the war, but they did portend their coming importance to naval warfare and increased interest in their use.
Russo-Japanese War
On 14 June 1904, the Imperial Japanese Navy (IJN) placed an order for five Holland Type VII submersibles, which were built in Quincy, Massachusetts, at the Fore River Yard, and shipped to Yokohama, Japan in sections. The five machines arrived on 12 December 1904. Under the supervision of naval architect Arthur L. Busch, the imported Hollands were re-assembled, and the first submersibles were ready for combat operations by August 1905, but hostilities were nearing the end by that date, and no submarines saw action during the war.
Meanwhile, the Imperial Russian Navy (IRN) purchased German-constructed submersibles built by the Germaniawerft shipyard in Kiel. In 1903, Germany completed its first fully functional engine-powered submarine, Forelle (Trout). It was sold to Russia in 1904 and shipped via the Trans-Siberian Railway to the combat zone during the Russo-Japanese War.
Due to the naval blockade of Port Arthur, Russia sent its remaining submarines to Vladivostok, and by the end of 1904, seven subs were based there. On 1 January 1905, the IRN created the world's first operational submarine fleet around these seven submarines. The first combat patrol by the newly created IRN submarine fleet occurred on 14 February 1905, and was carried out by Delfin and Som, with each patrol normally lasting about 24 hours. Som first made contact with the enemy on 29 April, when it was fired upon by IJN torpedo boats, which withdrew shortly after opening fire; there were no casualties or damage to either combatant. A second contact occurred on 1 July 1905 in the Tartar Strait, when two IJN torpedo boats spotted the IRN sub Keta. Unable to submerge quickly enough, Keta could not obtain a proper firing position, and both combatants broke contact.
World War I
The first time military submarines had significant impact on a war was in World War I. Forces such as the U-boats of Germany operated against Allied commerce (Handelskrieg); the submarine's ability to function as a practical war machine relied on new tactics, their numbers, and submarine technologies such as combination diesel/electric power system that had been developed in the preceding years. More like submersible ships than the submarines of today, submarines operated primarily on the surface using standard engines, submerging occasionally to attack under battery power. They were roughly triangular in cross-section, with a distinct keel, to control rolling while surfaced, and a distinct bow.
Shortly before the outbreak of World War I, submarines were employed by the Italian Regia Marina during the Italo-Turkish War without seeing any naval action, and by the Greek Navy during the Balkan Wars, where notably the French-built Delfin became the first such vessel to launch a torpedo against an enemy ship (albeit unsuccessfully).
At the start of the war, Germany had 48 submarines in service or under construction, with 29 operational. These included vessels of the diesel-engined U-19 class, with the range and speed to operate effectively around the entire British coast. Initially, Germany followed the international "Prize Rules", which required a ship's crew to be allowed to leave before their ship was sunk. The U-boats saw action in the First Battle of the Atlantic.
After the British ordered transport ships to act as auxiliary cruisers, the German navy adopted unrestricted submarine warfare, generally giving no warning of an attack. During the war, 360 submarines were built, but 178 were lost. The rest were surrendered at the end of the war. A German U-boat sank the liner RMS Lusitania in 1915, an act often cited among the reasons for the entry of the United States into the war.
In August 1914, a flotilla of ten U-boats sailed from their base in Heligoland to attack Royal Navy warships in the North Sea in the first submarine war patrol in history. Their aim was to sink capital ships of the British Grand Fleet, and so reduce the Grand Fleet's numerical superiority over the German High Seas Fleet. Depending more on luck than strategy, the first sortie was not a success. Only one attack was carried out, when U-15 fired a torpedo (which missed) at HMS Monarch, while two of the ten U-boats were lost. The U-9 had better luck. On 22 September 1914, while patrolling the Broad Fourteens, a region of the southern North Sea, U-9 found three obsolescent British armoured cruisers (HMS Aboukir, HMS Hogue and HMS Cressy), which were assigned to prevent German surface vessels from entering the eastern end of the English Channel. The U-9 fired all six of its torpedoes, reloading while submerged, and sank the three cruisers in less than an hour.
The British had 77 operational submarines at the beginning of the war, with 15 under construction. The main type was the E class, but several experimental designs were built, including the K class, which had a reputation for bad luck, and the M class, which had a large deck-mounted gun. The R class was the first boat designed to attack other submarines. British submarines operated in the Baltic, North Sea and Atlantic, as well as in the Mediterranean and Black Sea. Over 50 were lost from various causes during the war.
France had 62 submarines at the beginning of the war, in 14 different classes. They operated mainly in the Mediterranean; in the course of the war, 12 were lost. The Russians started the war with 58 submarines in service or under construction. The main class was the Bars class, with 24 boats. Twenty-four submarines were lost during the war.
World War II
Germany
Although Germany was banned from having submarines by the Treaty of Versailles, construction started in secret during the 1930s. When this became known, the Anglo-German Naval Agreement of 1935 allowed Germany to achieve parity in submarines with Britain.
Germany started the war with only 65 submarines, with 21 at sea when war broke out. Germany soon built the largest submarine fleet of World War II. Because the Treaty of Versailles limited the surface navy, the rebuilding of the German surface forces had only begun in earnest a year before the outbreak of World War II. Having no hope of defeating the vastly superior Royal Navy decisively in a surface battle, the German High Command planned on fighting a campaign of "Guerre de course" (merchant warfare), and immediately stopped all construction on capital surface ships, save the nearly completed Bismarck-class battleships and two cruisers, switching the resources to submarines, which could be built more quickly. Though it took most of 1940 to expand production facilities and to start mass production, more than a thousand submarines were built by the end of the war.
Germany used submarines to devastating effect in World War II during the Battle of the Atlantic, attempting but ultimately failing to cut off Britain's supply routes by sinking more ships than Britain could replace. The supply lines were vital to Britain for food and industry, as well as armaments from Canada and the United States. Although the U-boats had been updated in the intervening years, the major innovation was improved communications, encrypted using the famous Enigma cipher machine. This allowed for mass-attack tactics or "wolfpacks" (Rudel), but was also ultimately the U-boats' downfall.
After putting to sea, the U-boats operated mostly on their own trying to find convoys in areas assigned to them by the High Command. If a convoy was found, the submarine did not attack immediately, but shadowed the convoy and radioed to the German Command to allow other submarines in the area to find the convoy. The submarines were then grouped into a larger striking force and attacked the convoy simultaneously, preferably at night while surfaced to avoid the ASDIC.
During the first few years of World War II, the Ubootwaffe ("U-boat force") scored unprecedented success with these tactics ("First Happy Time"), but were too few to have any decisive success. By the spring of 1943, German U-boat construction was at full capacity, but this was more than nullified by increased numbers of convoy escorts and aircraft, as well as technical advances like radar and sonar. High Frequency Direction Finding (HF/DF, known as Huff-Duff) and Ultra allowed the Allies to route convoys around wolfpacks when they detected radio transmissions from trailing boats. The results were devastating: from March to July of that year, over 130 U-boats were lost, 41 in May alone. Concurrent Allied losses dropped dramatically, from 750,000 tons in March to 188,000 in July. Although the Battle of the Atlantic continued to the last day of the war, the U-boat arm was unable to stem the tide of personnel and supplies, paving the way for Operation Torch, Operation Husky, and ultimately, D-Day. Winston Churchill wrote the U-boat "peril" was the only thing to ever give him cause to doubt eventual Allied victory.
By the end of the war, almost 3,000 Allied ships (175 warships, 2,825 merchantmen) were sunk by U-boats. Of the 40,000 men in the U-boat service, 28,000 (70%) died.
The Germans built some novel submarine designs, including the Type XVII, which used hydrogen peroxide in a Walter turbine (named for its designer, Hellmuth Walter) for propulsion. They also produced the Type XXI, which had a large battery capacity and mechanical torpedo handling.
Italy
Italy had 116 submarines in service at the start of the war, with 24 different classes. These operated mainly in the Mediterranean theatre. Some were sent to a base at Bordeaux in Occupied France. A flotilla of several submarines also operated out of the Eritrean colonial port of Massawa.
Italian designs proved to be unsuitable for use in the Atlantic Ocean. Italian midget submarines were used in attacks against British shipping near the port of Gibraltar.
Britain
The Royal Navy Submarine Service had 70 operational submarines in 1939. Three classes were selected for mass production, the seagoing S class and the oceangoing T class, as well as the coastal U class. All of these classes were built in large numbers during the war.
The French submarine fleet consisted of over 70 vessels (with some under construction) at the beginning of the war. After the Fall of France, the French-German Armistice required the return of all French submarines to German-controlled ports in France. Some of these submarines were forcibly seized by British forces.
The main operating theatres for British submarines were off the coast of Norway, in the Mediterranean, where a flotilla of submarines successfully disrupted the Axis replenishment route to North Africa from their base in Malta, as well as in the North Sea. As Germany was a Continental power, there was little opportunity for the British to sink German shipping in this theatre of the Atlantic.
From 1940, U-class submarines were stationed at Malta, to interdict enemy supplies bound for North Africa. Over a period of three years, this force sank over 1 million tons of shipping, and fatally undermined the attempts of the German High Command to adequately support General Erwin Rommel. Rommel's Chief of Staff, Fritz Bayerlein conceded that "We would have taken Alexandria and reached the Suez Canal, if it had not been for the work of your submarines". 45 vessels were lost during this campaign, and five Victoria Crosses were awarded to submariners serving in this theatre.
In addition, British submarines attacked Japanese shipping in the Far East during the Pacific campaign. The Eastern Fleet was responsible for submarine operations in the Bay of Bengal, the Strait of Malacca as far as Singapore, and the western coast of Sumatra to the Equator. Few large Japanese cargo ships operated in this area, and the British submarines' main targets were small craft operating in inshore waters. The submarines were deployed to conduct reconnaissance, interdict Japanese supplies travelling to Burma, and attack U-boats operating from Penang. The Eastern Fleet's submarine force continued to expand during 1944, and by October 1944 had sunk a cruiser, three submarines, six small naval vessels, a quantity of merchant shipping, and nearly 100 small vessels. In this theatre, the only documented instance of a submarine sinking another submarine while both were submerged occurred: HMS Venturer engaged the German U-864, and the Venturer crew manually computed a successful firing solution against a three-dimensionally maneuvering target using techniques which became the basis of modern torpedo computer targeting systems.
By March 1945, British boats had gained control of the Strait of Malacca, preventing any supplies from reaching the Japanese forces in Burma by sea. By this time, there were few large Japanese ships in the region, and the submarines mainly operated against small ships, which they attacked with their deck guns. The submarine HMS Trenchant torpedoed and sank the heavy cruiser Ashigara in the Bangka Strait, taking down some 1,200 Japanese army troops. Three British submarines were sunk by the Japanese during the war.
Japan
Japan had the most varied fleet of submarines of World War II, including manned torpedoes (Kaiten), midget submarines (Ko-hyoteki, Kairyu), medium-range submarines, purpose-built supply submarines (many for use by the Army), long-range fleet submarines (many of which carried an aircraft), submarines with the highest submerged speeds of the conflict (Sentaka I-200), and submarines that could carry multiple aircraft (World War II's largest submarine, the Sentoku I-400). These submarines were also equipped with the most advanced torpedo of the conflict, the oxygen-propelled Type 95 (what U.S. historian Samuel E. Morison postwar called "Long Lance").
Overall, despite their technical prowess, Japanese submarines – having been incorporated into the Imperial Navy's war plan of "Guerre D' Escadre" (Fleet Warfare), in contrast to Germany's war plan of "Guerre De Course" – were relatively unsuccessful. Japanese submarines were primarily used in offensive roles against warships, which were fast, maneuverable and well-defended compared to merchant ships. In 1942, Japanese submarines sank two fleet aircraft carriers, one cruiser, and several destroyers and other warships, and damaged many others, including two battleships. They were not able to sustain these results afterward, as Allied fleets were reinforced and became better organized. By the end of the war, submarines were instead often used to transport supplies to island garrisons. During the war, Japan managed to sink about 1 million tons of merchant shipping (184 ships), compared to 1.5 million tons for Great Britain (493 ships), 4.65 million tons for the U.S. (1,079 ships) and 14.3 million tons for Germany (2,840 ships).
Early models were not very maneuverable underwater, could not dive very deep, and lacked radar. Later in the war, units that were fitted with radar were in some instances sunk because U.S. radar sets could detect their emissions. For example, USS Batfish sank three such radar-equipped submarines in the span of four days. After the war, several of Japan's most original submarines were sent to Hawaii for inspection in "Operation Road's End" (I-400, I-401, I-201 and I-203) before being scuttled by the U.S. Navy in 1946, when the Soviets demanded access to the submarines as well.
United States
After the attack on Pearl Harbor, many of the U.S. Navy's front-line Pacific Fleet surface ships were destroyed or severely damaged. The submarines survived the attack and carried the war to the enemy. Lacking support vessels, the submarines were asked to independently hunt and destroy Japanese ships and submarines. They did so very effectively.
During World War II, the submarine force was the most effective anti-ship and anti-submarine weapon in the entire American arsenal. Submarines, though only about 2 percent of the U.S. Navy, destroyed over 30 percent of the Japanese Navy, including 8 aircraft carriers, 1 battleship and 11 cruisers. U.S. submarines also destroyed over 60 percent of the Japanese merchant fleet, crippling Japan's ability to supply its military forces and industrial war effort. Allied submarines in the Pacific War destroyed more Japanese shipping than all other weapons combined. This feat was considerably aided by the Imperial Japanese Navy's failure to provide adequate escort forces for the nation's merchant fleet.
Whereas Japanese submarine torpedoes of the war are considered the best, those of the U.S. Navy are considered the worst. For example, the U.S. Mark 14 torpedo typically ran too deep and was tipped with a Mk VI exploder, with both magnetic-influence and contact features, neither of them reliable. The faulty depth control mechanism of the Mark 14 was corrected in August 1942, but field trials for the exploders were not ordered until mid-1943, when tests in Hawaii and Australia confirmed the flaws. In addition, the Mark 14 sometimes suffered circular runs, which sank at least one U.S. submarine, USS Tullibee. Fully operational Mark 14 torpedoes were not put into service until September 1943. The Mark 15 torpedo used by U.S. surface combatants had the same Mk VI exploder and was not fixed until late 1943. One attempt to correct the problems resulted in a wakeless, electric torpedo (the Mark 18) being placed in submarine service. USS Tang was lost to a circular run by one of these torpedoes. Given the prevalence of circular runs, there were probably other losses among boats which simply disappeared.
During World War II, 314 submarines served in the United States Navy, of which nearly 260 were deployed to the Pacific. On 7 December 1941, 111 boats were in commission, and 203 submarines of the Gato, Balao, and Tench classes were commissioned during the war. During the war, 52 US submarines were lost to all causes, with 48 directly due to hostilities; 3,505 sailors were lost, the highest percentage killed in action of any US service arm in World War II. U.S. submarines sank 1,560 enemy vessels, a total tonnage of 5.3 million tons (55% of the total sunk), including 8 aircraft carriers, a battleship, three heavy cruisers, and over 200 other warships, and damaged several other ships, including two battleships. In addition, the Japanese merchant marine lost 16,200 sailors killed and 53,400 wounded, of some 122,000 at the start of the war, due to submarines.
Post-War
During the Cold War, the United States and the Soviet Union maintained large submarine fleets that engaged in cat-and-mouse games. This continues today, on a much-reduced scale. The Soviet Union suffered the loss of at least four submarines during this period: K-129 was lost in 1968 (the CIA attempted to retrieve it from the ocean floor with the specially built ship Hughes Glomar Explorer), K-8 in 1970, K-219 in 1986, and Komsomolets in 1989. Many other Soviet subs, such as K-19, were badly damaged by fire or radiation leaks. The United States lost two nuclear submarines during this time: Thresher and Scorpion. The Thresher was lost due to equipment failure, and the exact cause of the loss of the Scorpion is not known.
The sinking of the Pakistani submarine PNS Ghazi in the Indo-Pakistani War of 1971 was the first submarine casualty in the South Asian region.
The United Kingdom employed nuclear-powered submarines against Argentina during the 1982 Falklands War. The sinking of the cruiser ARA General Belgrano by HMS Conqueror was the first sinking by a nuclear-powered submarine in war. During this conflict, the conventional Argentine submarine ARA Santa Fe was disabled by a Sea Skua missile, and ARA San Luis claimed to have made unsuccessful attacks on the British fleet.
Major incidents
There have been a number of accidental sinkings, but also some collisions between submarines.
Up to August 1914, there were 68 submarine accidents: 23 collisions, 7 battery gas explosions, 12 gasoline explosions, and 13 sinkings due to hull openings not being closed. HMS Affray was lost in the English Channel in 1951 after its snort mast fractured, and USS Thresher in 1963 due to a pipe weld failure during a test dive. Many other scenarios have been identified as probable causes of sinkings, most notably a battery malfunction causing a torpedo to detonate internally; the loss of the Russian Kursk on 12 August 2000 was probably due to a torpedo explosion. An example of a collision between submarines was the incident between the Russian K-276 and USS Baton Rouge in February 1992.
Since 2000, there have been 9 major naval incidents involving submarines. There were three Russian submarine incidents, in two of which the submarines in question were lost, along with three United States submarine incidents, one Chinese incident, one Canadian, and one Australian incident. In August 2005, AS-28, a Russian Priz-class rescue submarine, was trapped by cables and/or nets off of Petropavlovsk, and saved when a British ROV cut them free in a massive international effort.
See also
List of submarine actions
List of submarine museums
List of sunken nuclear submarines
Depth charge and Depth charge (cocktail)
Nuclear navy
Nuclear submarine
Attack submarine
List of countries with submarines
Vessels
Nerwin (NR-1)
Vesikko (museum submarine)
ORP Orzeł
Ships named Nautilus
List of submarines of the Royal Navy
List of submarines of the United States Navy
List of Soviet submarines
List of U-boats of Germany
Kaikō ROV (deepest submarine dive)
Bathyscaphe Trieste (deepest manned dive)
Classes
List of submarine classes
List of submarine classes of the Royal Navy
List of Soviet and Russian submarine classes
List of United States submarine classes
References
Further reading
Blair, Clay Jr. Silent Victory: The U.S. Submarine War Against Japan.
Compton-Hall, Richard. Submarine Boats, the Beginnings of Underwater Warfare. Windward, 1983.
Fontenoy, Paul. Submarines: An Illustrated History of Their Impact. ABC-CLIO, 2007.
Harris, Brayton (Captain, USN ret.). The Navy Times Book of Submarines: A Political, Social, and Military History. Berkley Books, 1997.
Jentschura, Hansgeorg; Dieter Jung; Peter Mickel. Warships of the Imperial Japanese Navy, 1869–1945. United States Naval Institute, Annapolis, Maryland, 1977.
Lockwood, Charles A. (VAdm, USN ret.). Sink 'Em All: Submarine Warfare in the Pacific. 1951.
Polmar, Norman & Kenneth Moore. Cold War Submarines: The Design and Construction of U.S. and Soviet Submarines. Brassey's, Washington DC, 2004.
Preston, Antony. The World's Greatest Submarines. Greenwich Editions, 2005.
Showell, Jak. The U-Boat Century: German Submarine Warfare 1906–2006. Chatham Publishing, 2006.
External links
John Holland
German Submarines of WWII
Submarine Simulations
Seehund – German Midget Submarine
Submarines of WWI
Molch – German Midget Submarine
Developed for the NOVA television series.
Role of the Modern Submarine
Submariners of WWII – World War II Submarine Veterans History Project
German submarines using peroxide
record-breaking Japanese Submarines
German U-Boats 1935–1945
U.S. ship photo archive
Israeli missile trials
The Sub Report
The Invention of the Submarine
Submersibles and Technology by Graham Hawkes
Submarine of Karl Shilder
Royal Navy submarine history
A century of Royal Navy submarine operations
Royal Navy submarines
Still floating submarine Lembit (1936)
Submarines, the Enemy Unseen, History Today
American Society of Safety Engineers. Journal of Professional Safety. Submarine Accidents: A 60-Year Statistical Assessment. C. Tingle. September 2009. pp. 31–39. Ordering full article: https://www.asse.org/professionalsafety/indexes/2009.php; or Reproduction fewer graphics/tables: http://www.allbusiness.com/government/government-bodies-offices-government/12939133-1.html.
Submarine
History of submarines
Submarines
Dutch inventions | History of submarines | Technology | 13,181 |
589,548 | https://en.wikipedia.org/wiki/Prothrombin%20time | The prothrombin time (PT) – along with its derived measures of prothrombin ratio (PR) and international normalized ratio (INR) – is an assay for evaluating the extrinsic pathway and common pathway of coagulation. This blood test is also called protime INR and PT/INR. They are used to determine the clotting tendency of blood, in conditions such as the measure of warfarin dosage, liver damage (cirrhosis), and vitamin K status. PT measures the following coagulation factors: I (fibrinogen), II (prothrombin), V (proaccelerin), VII (proconvertin), and X (Stuart–Prower factor).
PT is often used in conjunction with the activated partial thromboplastin time (aPTT) which measures the intrinsic pathway and common pathway of coagulation.
Laboratory measurement
The reference range for prothrombin time depends on the analytical method used, but is usually around 12–13 seconds (results should always be interpreted using the reference range from the laboratory that performed the test), and the INR in absence of anticoagulation therapy is 0.8–1.2. The target range for INR in anticoagulant use (e.g. warfarin) is 2 to 3. In some cases, if more intense anticoagulation is thought to be required, the target range may be as high as 2.5–3.5 depending on the indication for anticoagulation.
Methodology
Prothrombin time is typically analyzed by a laboratory technologist on an automated instrument at 37 °C (as a nominal approximation of normal human body temperature).
Blood is drawn into a test tube containing liquid sodium citrate, which acts as an anticoagulant by binding the calcium in a sample. The blood is mixed, then centrifuged to separate blood cells from plasma (as prothrombin time is most commonly measured using blood plasma). In newborns, a capillary whole blood specimen is used.
A sample of the plasma is extracted from the test tube and placed into a measuring test tube (Note: for an accurate measurement, the ratio of blood to citrate needs to be fixed and should be labeled on the side of the measuring test tube by the manufacturing company; many laboratories will not perform the assay if the tube is underfilled and contains a relatively high concentration of citrate—the standardized dilution of 1 part anticoagulant to 9 parts whole blood is no longer valid).
Next an excess of calcium (in a phospholipid suspension) is added to the test tube, thereby reversing the effects of citrate and enabling the blood to clot again.
Finally, in order to activate the extrinsic / tissue factor clotting cascade pathway, tissue factor (also known as factor III) is added and the time the sample takes to clot is measured optically. Some laboratories use a mechanical measurement, which eliminates interferences from lipemic and icteric samples.
Prothrombin time ratio
The prothrombin time ratio is the ratio of a subject's measured prothrombin time (in seconds) to the normal laboratory reference PT. The PT ratio varies depending on the specific reagents used, and has been replaced by the INR. Elevated INR may be useful as a rapid and inexpensive diagnostic of infection in people with COVID-19.
International normalized ratio
The result (in seconds) for a prothrombin time performed on a normal individual will vary according to the type of analytical system employed. This is due to the variations between different types and batches of manufacturer's tissue factor used in the reagent to perform the test. The INR was devised to standardize the results. Each manufacturer assigns an ISI value (International Sensitivity Index) for any tissue factor they manufacture. The ISI value indicates how a particular batch of tissue factor compares to an international reference tissue factor. The ISI is usually between 0.94 and 1.4 for more sensitive and 2.0–3.0 for less sensitive thromboplastins.
The INR is the ratio of a patient's prothrombin time to a normal (control) sample, raised to the power of the ISI value for the analytical system being used.
PTnormal is established as the geometric mean of the prothrombin times (PT) of a reference sample group.
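The elided expression is the standard definition; writing PTtest for the patient's measured prothrombin time and PTnormal as defined above,
\[ \text{INR} \;=\; \left( \frac{\text{PT}_{\text{test}}}{\text{PT}_{\text{normal}}} \right)^{\text{ISI}} . \]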
Interpretation
The prothrombin time is the time it takes plasma to clot after addition of tissue factor (obtained from animals such as rabbits, or recombinant tissue factor, or from brains of autopsy patients). This measures the quality of the extrinsic pathway (as well as the common pathway) of coagulation. The speed of the extrinsic pathway is greatly affected by levels of functional factor VII in the body. Factor VII has a short half-life and the carboxylation of its glutamate residues requires vitamin K. The prothrombin time can be prolonged as a result of deficiencies in vitamin K, warfarin therapy, malabsorption, or lack of intestinal colonization by bacteria (such as in newborns). In addition, poor factor VII synthesis (due to liver disease) or increased consumption (in disseminated intravascular coagulation) may prolong the PT.
The INR is typically used to monitor patients on warfarin or related oral anticoagulant therapy. The normal range for a healthy person not using warfarin is 0.8–1.2, and for people on warfarin therapy an INR of 2.0–3.0 is usually targeted, although the target INR may be higher in particular situations, such as for those with a mechanical heart valve. If the INR is outside the target range, a high INR indicates a higher risk of bleeding, while a low INR suggests a higher risk of developing a clot. In patients on a vitamin K antagonist such as warfarin with supratherapeutic INR but INR less than 10 and no bleeding, it is enough to lower the dose or omit a dose, monitor the INR and resume the vitamin K antagonist at an adjusted lower dose when the target INR is reached. For people who need rapid reversal of the vitamin K antagonist – such as due to serious bleeding – or who need emergency surgery, the effects of warfarin can be reversed with vitamin K, prothrombin complex concentrate (PCC), or fresh frozen plasma (FFP).
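As a minimal computational sketch of the two preceding paragraphs (illustrative only; the function names are hypothetical, not from any clinical library, and the numbers are made up):

```python
# Illustrative sketch: compute an INR and check it against a target range.
from statistics import geometric_mean

def inr(pt_patient_sec: float, reference_pts_sec: list[float], isi: float) -> float:
    """INR = (patient PT / geometric mean of reference PTs) ** ISI."""
    pt_normal = geometric_mean(reference_pts_sec)
    return (pt_patient_sec / pt_normal) ** isi

def in_target_range(value: float, low: float = 2.0, high: float = 3.0) -> bool:
    """Check an INR against a target range (2.0-3.0 is the usual warfarin target)."""
    return low <= value <= high

example = inr(28.0, [12.1, 12.5, 13.0, 12.8], isi=1.1)
print(round(example, 2), in_target_range(example))
```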
Factors determining accuracy
Lupus anticoagulant, a circulating inhibitor predisposing for thrombosis, may skew PT results, depending on the assay used. Variations between various thromboplastin preparations have in the past led to decreased accuracy of INR readings, and a 2005 study suggested that despite international calibration efforts (by INR) there were still statistically significant differences between various kits, casting doubt on the long-term tenability of PT/INR as a measure for anticoagulant therapy. Indeed, a new prothrombin time variant, the Fiix prothrombin time, intended solely for monitoring warfarin and other vitamin K antagonists, has been invented and has recently become available as a manufactured test. The Fiix prothrombin time is only affected by reductions in factor II and/or factor X; this stabilizes the anticoagulant effect and appears to improve clinical outcomes according to an investigator-initiated randomized blinded clinical trial, the Fiix trial. In this trial thromboembolism was reduced by 50% during long-term treatment, while bleeding was not increased.
Statistics
An estimated 800 million PT/INR assays are performed annually worldwide.
Near-patient testing
In addition to the laboratory method outlined above, near-patient testing (NPT) or home INR monitoring is becoming increasingly common in some countries. In the United Kingdom, for example, near-patient testing is used both by patients at home and by some anticoagulation clinics (often hospital-based) as a fast and convenient alternative to the lab method. After a period of doubt about the accuracy of NPT results, a new generation of machines and reagents seems to be gaining acceptance for its ability to deliver results close in accuracy to those of the lab.
In a typical NPT set up, a small table-top device is used. A drop of capillary blood is obtained with an automated finger-prick, which is almost painless. This drop is placed on a disposable test strip with which the machine has been prepared. The resulting INR comes up on the display a few seconds later. A similar form of testing is used by people with diabetes for monitoring blood sugar levels, which is easily taught and routinely practiced.
Local policy determines whether the patient or a coagulation specialist (pharmacist, nurse, general practitioner or hospital doctor) interprets the result and determines the dose of medication. In Germany and Austria, patients may adjust the medication dose themselves, while in the UK and the US this remains in the hands of a health care professional.
A significant advantage of home testing is the evidence that patient self-testing with medical support and patient self-management (where patients adjust their own anticoagulant dose) improves anticoagulant control. A meta analysis which reviewed 14 trials showed that home testing led to a reduced incidence of complications (bleeding and thrombosis) and improved the time in the therapeutic range, which is an indirect measure of anticoagulant control. In 2022, a smartphone system was introduced by researchers to perform PT/INR testing in an inexpensive and accessible manner. It uses the vibration motor and camera ubiquitous on smartphones to track micro-mechanical movements of a copper particle and compute PT/INR values.
Other advantages of the NPT approach are that it is fast and convenient, usually less painful, and offers, in home use, the ability for patients to measure their own INRs when required. Among its problems are that quite a steady hand is needed to deliver the blood to the exact spot, that some patients find the finger-pricking difficult, and that the cost of the test strips must also be taken into account. In the UK these are available on prescription so that elderly and unwaged people will not pay for them and others will pay only a standard prescription charge, which at the moment represents only about 20% of the retail price of the strips. In the US, NPT in the home is currently reimbursed by Medicare for patients with mechanical heart valves, while private insurers may cover for other indications. Medicare is now covering home testing for patients with chronic atrial fibrillation. Home testing requires a doctor's prescription and that the meter and supplies are obtained from a Medicare-approved Independent Diagnostic Testing Facility (IDTF).
There is some evidence to suggest that NPT may be less accurate for certain patients, for example those who have the lupus anticoagulant.
Guidelines
International guidelines were published in 2005 to govern home monitoring of oral anticoagulation by the International Self-Monitoring Association for Oral Anticoagulation. The international guidelines study stated, "The consensus agrees that patient self-testing and patient self-management are effective methods of monitoring oral anticoagulation therapy, providing outcomes at least as good as, and possibly better than, those achieved with an anticoagulation clinic. All patients must be appropriately selected and trained. Currently, available self-testing/self-management devices give INR results which are comparable with those obtained in laboratory testing."
Medicare coverage for home testing of INR has been expanded in order to allow more people access to home testing of INR in the US. The release on 19 March 2008 said, "[t]he Centers for Medicare & Medicaid Services (CMS) expanded Medicare coverage for home blood testing of prothrombin time (PT) International Normalized Ratio (INR) to include beneficiaries who are using the drug warfarin, an anticoagulant (blood thinner) medication, for chronic atrial fibrillation or venous thromboembolism." In addition, "those Medicare beneficiaries and their physicians managing conditions related to chronic atrial fibrillation or venous thromboembolism will benefit greatly through the use of the home test."
History
The prothrombin time was developed by Armand J. Quick and colleagues in 1935, and a second method, also called the "p and p" or "prothrombin and proconvertin" method, was published by Paul Owren. It aided in the identification of the anticoagulants dicumarol and warfarin, and was used subsequently as a measure of activity for warfarin when used therapeutically.
The INR was invented in the early 1980s by Tom Kirkwood working at the UK National Institute for Biological Standards and Control (and subsequently at the UK National Institute for Medical Research) to provide a consistent way of expressing the prothrombin time ratio, which had previously suffered from a large degree of variation between centres using different reagents. The INR was coupled to Dr Kirkwood's simultaneous invention of the International Sensitivity Index (ISI), which provided the means to calibrate different batches of thromboplastins to an international standard. The INR became widely accepted worldwide, especially after endorsement by the World Health Organization.
See also
D-dimer
Partial thromboplastin time (PTT), or activated partial thromboplastin time (aPTT or APTT)
Thrombin time (TT)
Thrombodynamics test
Thromboelastography
Thrombus
References
External links
PT and INR – Lab Tests Online
Blood tests | Prothrombin time | Chemistry | 2,901 |
1,530,575 | https://en.wikipedia.org/wiki/Fungal%20infection | Fungal infection, also known as mycosis, is a disease caused by fungi. Different types are traditionally divided according to the part of the body affected; superficial, subcutaneous, and systemic. Superficial fungal infections include common tinea of the skin, such as tinea of the body, groin, hands, feet and beard, and yeast infections such as pityriasis versicolor. Subcutaneous types include eumycetoma and chromoblastomycosis, which generally affect tissues in and beneath the skin. Systemic fungal infections are more serious and include cryptococcosis, histoplasmosis, pneumocystis pneumonia, aspergillosis and mucormycosis. Signs and symptoms range widely. There is usually a rash with superficial infection. Fungal infection within the skin or under the skin may present with a lump and skin changes. Pneumonia-like symptoms or meningitis may occur with a deeper or systemic infection.
Fungi are everywhere, but only some cause disease. Fungal infection occurs after spores are either breathed in, come into contact with skin or enter the body through the skin such as via a cut, wound or injection. It is more likely to occur in people with a weak immune system. This includes people with illnesses such as HIV/AIDS, and people taking medicines such as steroids or cancer treatments. Fungi that cause infections in people include yeasts, molds and fungi that are able to exist as both a mold and yeast. The yeast Candida albicans can live in people without producing symptoms, and is able to cause both superficial mild candidiasis in healthy people, such as oral thrush or vaginal yeast infection, and severe systemic candidiasis in those who cannot fight infection themselves.
Diagnosis is generally based on signs and symptoms, microscopy, culture, sometimes requiring a biopsy and the aid of medical imaging. Some superficial fungal infections of the skin can appear similar to other skin conditions such as eczema and lichen planus. Treatment is generally performed using antifungal medicines, usually in the form of a cream or by mouth or injection, depending on the specific infection and its extent. Some require surgically cutting out infected tissue.
Fungal infections have a world-wide distribution and are common, affecting more than one billion people every year. An estimated 1.7 million deaths from fungal disease were reported in 2020. Several, including sporotrichosis, chromoblastomycosis and mycetoma are neglected.
A wide range of fungal infections occur in other animals, and some can be transmitted from animals to people.
Classification
Mycoses are traditionally divided into superficial, subcutaneous, or systemic, where infection is deep, more widespread and involving internal body organs. They can affect the nails, vagina, skin and mouth. Some types such as blastomycosis, cryptococcus, coccidioidomycosis and histoplasmosis, affect people who live in or visit certain parts of the world. Others such as aspergillosis, pneumocystis pneumonia, candidiasis, mucormycosis and talaromycosis, tend to affect people who are unable to fight infection themselves. Mycoses might not always conform strictly to the three divisions of superficial, subcutaneous and systemic. Some superficial fungal infections can cause systemic infections in people who are immunocompromised. Some subcutaneous fungal infections can invade into deeper structures, resulting in systemic disease. Candida albicans can live in people without producing symptoms, and is able to cause both mild candidiasis in healthy people and severe invasive candidiasis in those who cannot fight infection themselves.
ICD-11 codes
ICD-11 codes include:
1F20 Aspergillosis
1F21 Basidiobolomycosis
1F22 Blastomycosis
1F23 Candidosis
1F24 Chromoblastomycosis
1F25 Coccidioidomycosis
1F26 Conidiobolomycosis
1F27 Cryptococcosis
1F28 Dermatophytosis
1F29 Eumycetoma
1F2A Histoplasmosis
1F2B Lobomycosis
1F2C Mucormycosis
1F2D Non-dermatophyte superficial dermatomycoses
1F2E Paracoccidioidomycosis
1F2F Phaeohyphomycosis
1F2G Pneumocystosis
1F2H Scedosporiosis
1F2J Sporotrichosis
1F2K Talaromycosis
1F2L Emmonsiosis
Superficial mycoses
Superficial mycoses include candidiasis in healthy people, common tinea of the skin, such as tinea of the body, groin, hands, feet and beard, and malassezia infections such as pityriasis versicolor.
Subcutaneous
Subcutaneous fungal infections include sporotrichosis, chromoblastomycosis, and eumycetoma.
Systemic
Systemic fungal infections include histoplasmosis, cryptococcosis, coccidioidomycosis, blastomycosis, mucormycosis, aspergillosis, pneumocystis pneumonia and systemic candidiasis.
Systemic mycoses due to primary pathogens originate normally in the lungs and may spread to other organ systems. Organisms that cause systemic mycoses are inherently virulent. Systemic mycoses due to opportunistic pathogens are infections of people with immune deficiencies who would otherwise not be infected. Examples of immunocompromised conditions include AIDS, alteration of normal flora by antibiotics, immunosuppressive therapy, and metastatic cancer. Examples of opportunistic mycoses include candidiasis, cryptococcosis and aspergillosis.
Signs and symptoms
Most common mild mycoses often present with a rash. Infections within the skin or under the skin may present with a lump and skin changes. Less common deeper fungal infections may present with pneumonia like symptoms or meningitis.
Causes
Mycoses are caused by certain fungi; yeasts, molds and some fungi that can exist as both a mold and yeast. They are everywhere and infection occurs after spores are either breathed in, come into contact with skin or enter the body through the skin such as via a cut, wound or injection. Candida albicans is the most common cause of fungal infection in people, particularly as oral or vaginal thrush, often following taking antibiotics.
Risk factors
Fungal infections are more likely in people with weak immune systems. This includes people with illnesses such as HIV/AIDS, and people taking medicines such as steroids or cancer treatments. People with diabetes also tend to develop fungal infections. The very young and the very old are also at-risk groups.
Individuals being treated with antibiotics are at higher risk of fungal infections.
Children whose immune systems are not functioning properly (such as children with cancer) are at risk of invasive fungal infections.
COVID-19
During the COVID-19 pandemic some fungal infections have been associated with COVID-19. Fungal infections can mimic COVID-19, occur at the same time as COVID-19 and more serious fungal infections can complicate COVID-19. A fungal infection may occur after antibiotics for a bacterial infection which has occurred following COVID-19. The most common serious fungal infections in people with COVID-19 include aspergillosis and invasive candidiasis. COVID-19–associated mucormycosis is generally less common, but in 2021 was noted to be significantly more prevalent in India.
Mechanism
Fungal infections occur after spores are either breathed in, come into contact with skin or enter the body through a wound.
Diagnosis
Diagnosis is generally by signs and symptoms, microscopy, biopsy, culture and sometimes with the aid of medical imaging.
Differential diagnosis
Some tinea and candidiasis infections of the skin can appear similar to eczema and lichen planus. Pityriasis versicolor can look like seborrheic dermatitis, pityriasis rosea, pityriasis alba and vitiligo.
Some fungal infections such as coccidioidomycosis, histoplasmosis, and blastomycosis can present with fever, cough, and shortness of breath, thereby resembling COVID-19.
Prevention
Keeping the skin clean and dry, as well as maintaining good hygiene, will help prevent topical mycoses. Because some fungal infections are contagious, it is important to wash hands after touching other people or animals. Sports clothing should also be washed after use.
Treatment
Treatment depends on the type of fungal infection, and usually requires topical or systemic antifungal medicines. Pneumocystosis that does not respond to anti-fungals is treated with co-trimoxazole. Sometimes, infected tissue needs to be surgically cut away.
Epidemiology
Worldwide, every year fungal infections affect more than one billion people. An estimated 1.6 million deaths from fungal disease were reported in 2017. The figure has been rising, with an estimated 1.7 million deaths from fungal disease reported in 2020. Fungal infections also constitute a significant cause of illness and mortality in children.
According to the Global Action Fund for Fungal Infections, every year there are over 10 million cases of fungal asthma, around 3 million cases of long-term aspergillosis of lungs, 1 million cases of blindness due to fungal keratitis, more than 200,000 cases of meningitis due to cryptococcus, 700,000 cases of invasive candidiasis, 500,000 cases of pneumocystosis of lungs, 250,000 cases of invasive aspergillosis, and 100,000 cases of histoplasmosis.
History
Around 500 BC, an apparent account by Hippocrates of ulcers in the mouth may have described thrush. David Gruby, a Hungarian microscopist based in Paris, first reported in the early 1840s that human disease could be caused by fungi.
SARS 2003
During the 2003 SARS outbreak, fungal infections were reported in 14.8–33% of people affected by SARS, and it was the cause of death in 25–73.7% of people with SARS.
Other animals
A wide range of fungal infections occur in other animals, and some can be transmitted from animals to people, such as Microsporum canis from cats.
See also
Actinomycosis
Climate change and infectious diseases
References
Tropical diseases
Animal fungal diseases
Fungal diseases | Fungal infection | Biology | 2,189 |
62,047,186 | https://en.wikipedia.org/wiki/Kotzig%27s%20theorem | In graph theory and polyhedral combinatorics, areas of mathematics, Kotzig's theorem is the statement that every polyhedral graph has an edge whose two endpoints have total degree at most 13. An extreme case is the triakis icosahedron, where no edge has smaller total degree. The result is named after Anton Kotzig, who published it in 1955 in the dual form that every convex polyhedron has two adjacent faces with a total of at most 13 sides. It was named and popularized in the west in the 1970s by Branko Grünbaum.
More generally, every planar graph of minimum degree at least three either has an edge of total degree at most 12, or at least 60 edges that (like the edges in the triakis icosahedron) connect vertices of degrees 3 and 10.
If all triangular faces of a polyhedron are vertex-disjoint, there exists an edge with smaller total degree, at most eight.
Generalizations of the theorem are also known for graph embeddings onto surfaces with higher genus.
The theorem cannot be generalized to all planar graphs, as the complete bipartite graphs K_{1,n} and K_{2,n} have edges with unbounded total degree. However, for planar graphs with vertices of degree lower than three, variants of the theorem have been proven, showing that either there is an edge of bounded total degree or some other special kind of subgraph exists.
References
Planar graphs
Theorems in graph theory | Kotzig's theorem | Mathematics | 298 |
60,579,044 | https://en.wikipedia.org/wiki/Promenade%20architecturale | Promenade architecturale is a concept developed by Swiss-French architect Le Corbusier that refers to the implied "itinerary" of a built environment. Le Corbusier coined the term in reference to his houses: Villas La Roche and Savoye. In the study of architecture there is a longstanding tradition of walking to achieve spatial perception, of for example, a street, building or any spatial premises designed or otherwise. Throughout history the perception of spaces through movement, mainly by means of walking through or along them, has always been a recurring, yet often overlooked concept. Promenade architecturale refers literally to such a walk of perception, or in other words, an "Architectural walk".
References
Sources
Architectural terminology
Le Corbusier | Promenade architecturale | Engineering | 151 |
38,636,554 | https://en.wikipedia.org/wiki/C14H28 | The molecular formula C14H28 (molar mass: 196.37 g/mol, exact mass: 196.2191 u) may refer to:
Cyclotetradecane
Tetradecene
Molecular formulas | C14H28 | Physics,Chemistry | 60 |
30,954,342 | https://en.wikipedia.org/wiki/Peter%20Rheinstein | Peter Howard Rheinstein (born September 7, 1943) is an American physician, lawyer, author, and administrator (both private and governmental). He was an official of the Food and Drug Administration (FDA) from 1974 to 1999.
Education
Rheinstein, a General Motors Scholar, received a B.A. with high honors from Michigan State University in 1963, an M.S. in mathematics from Michigan State University in 1964, an M.D. from Johns Hopkins University in 1967, and a J.D. from the University of Maryland School of Law in 1973. At Michigan State University, Rheinstein was noted for his facility in mathematics.
Food and Drug Administration
Rheinstein was director of the Drug Advertising and Labeling Division of the Food and Drug Administration, Rockville (1974–1982). He was acting deputy director of the Office of Drugs (1982–83), acting director of the Office of Drugs (1983–84), director of the Office of Drug Standards (1984–90), and director of the medicine staff in the Office of Health Affairs (1990–99). While at the FDA, Rheinstein developed precedents for FDA regulation of prescription drug promotion, initiated the FDA's first patient medication information program, implemented the Drug Price Competition and Patent Term Restoration Act of 1984, and authored medication goals for Healthy People 2000 and 2010. Judy Woodruff interviewed Rheinstein about generic drug safety on the MacNeil/Lehrer NewsHour on December 11, 1985. Stone Phillips interviewed Rheinstein about drug labeling on Dateline NBC on March 31, 1992.
Later career
From 1999 to 2004, Rheinstein was senior vice president for medical and clinical affairs at Cell Works, Inc., in Baltimore. Among other projects, Cell Works wanted to develop a blood test for anthrax, similar to a system for cancer cells it produced. "It's something that companies like ours can incorporate into our diagnostic technology," Rheinstein told the Washington Times. Biodefense projects "create new technologies, the spin-offs of which can be commercialized into some pretty good things." In 2000 Rheinstein became president of Severn Health Solutions in Severna Park, Maryland. In 2010 Rheinstein was named president of the Academy of Physicians in Clinical Research and in 2011 was named chairman of the American Board of Legal Medicine. Rheinstein was named chairman of the United States Adopted Names Council in 2012. Rheinstein is a member of Phi Kappa Phi and vice president of the Intercultural Friends Foundation. Rheinstein is publisher of Discovery Medicine and chairman of MedData Foundation. He is past president of the Academy of Medicine of Washington, DC. Sarah Gonzalez interviewed Rheinstein for Planet Money, This Is Your Brain on Drug Ads, on September 8, 2021.
Publications
co-author of Human Organ Transplantation: Societal, Medical-Legal, Regulatory, and Reimbursement Issues. Health Administration Press, Ann Arbor, Michigan 1987.
special editorial advisor, Good Housekeeping Guide to Medicines and Drugs, 1977–80
member editorial board Legal Aspects Medical Practice, 1981–89
member editorial board Drug Information Journal, 1982–86
publisher of Discovery Medicine, 2001-
External links
Peter Rheinstein on Avvo.com
Peter Rheinstein's biography from Who's Who in America
Peter Rheinstein's recent AMA House of Delegate Articles
Peter Rheinstein publications on Google Scholar
Peter Rheinstein publications on Google Books
Peter Rheinstein publications on PubMed
Judy Woodruff interviews Peter Rheinstein about generic drug safety on the McNeil-Lehrer NewsHour, 11 Dec 1985
Stone Phillips interviews Peter Rheinstein about drug labeling, Dateline NBC, March 31, 1992
Sarah Gonzalez interviews Peter Rheinstein for Planet Money, This is Your Brain on Drug Ads, 8 Sept 2021
Peter Rheinstein. House drug bill dooms medical research. Detroit News 20 Nov 2019
Peter Rheinstein. Research protections will beat crises like COVID-19. Columbus Dispatch 11 Aug 2020
FHS student completes summer science course. Farmington Enterprise 8 Sept 1960
Peter Rheinstein's biography from American Men & Women of Science: A Biographical Directory of Today’s Leaders in Physical, Biological and Related Sciences(Vol. 12. 36th ed. 2018)
Peter Rheinstein interviewed Slate.com naming new drugs
Dr. Peter Rheinstein Receives the APCR 2024 President’s Award
References
1943 births
Living people
American public health doctors
Michigan State University alumni
University of Maryland Francis King Carey School of Law alumni
Johns Hopkins University alumni
American lawyers
Drug safety
American pharmacologists
Food and Drug Administration people
Johns Hopkins School of Medicine alumni
People from Severna Park, Maryland | Peter Rheinstein | Chemistry | 943 |
3,014,061 | https://en.wikipedia.org/wiki/NPH%20insulin | Neutral Protamine Hagedorn (NPH) insulin, also known as isophane insulin, is an intermediate-acting insulin given to help control blood sugar levels in people with diabetes. The words refer to neutral pH (pH = 7), protamine a protein, and Hans Christian Hagedorn, the insulin researcher who invented this formulation. It is designed to improve the delivery of insulin, and is one of the earliest examples of engineered drug delivery.
It is used by injection under the skin once to twice a day. Onset of effects is typically in 90 minutes and they last for 24 hours. Versions are available that come premixed with a short-acting insulin, such as regular insulin.
The common side effect is low blood sugar. Other side effects may include pain or skin changes at the sites of injection, low blood potassium, and allergic reactions. Use during pregnancy is relatively safe for the fetus. NPH insulin is made by mixing regular insulin and protamine in exact proportions with zinc and phenol such that a neutral-pH is maintained and crystals form. There are human and pig insulin based versions.
Protamine insulin was first created in 1936 and NPH insulin in 1946. It is on the World Health Organization's List of Essential Medicines. NPH is an abbreviation for "neutral protamine Hagedorn". In 2020, insulin isophane was the 221st most commonly prescribed medication in the United States, with more than 2 million prescriptions. In 2020, the combination of human insulin with insulin isophane was the 246th most commonly prescribed medication in the United States, with more than 1 million prescriptions.
Medical uses
NPH insulin is cloudy and has an onset of 1–3 hours. Its peak is 6–8 hours and its duration is up to 24 hours.
It has an intermediate duration of action, meaning longer than that of regular and rapid-acting insulin, and shorter than long-acting insulins (ultralente, glargine or detemir). A recent Cochrane systematic review compared the effects of NPH insulin to other insulin analogues (insulin detemir, insulin glargine, insulin degludec) in both children and adults with type 1 diabetes. Insulin detemir appeared to provide a lower risk of severe hyperglycemia compared to NPH insulin; however, this finding was inconsistent across included studies. In the same review, no other clinically significant differences were found between the different insulin analogues in either adults or children.
History
Hans Christian Hagedorn (1888–1971) and August Krogh (1874–1949) obtained the rights for insulin from Frederick Banting and Charles Best in Toronto, Canada. In 1923 they formed Nordisk Insulin laboratorium, and in 1926 with August Kongsted he obtained a Danish royal charter as a non-profit foundation.
In 1936, Hagedorn and B. Norman Jensen discovered that the effects of injected insulin could be prolonged by the addition of protamine obtained from the "milt" or semen of river trout. The insulin would be added to the protamine, but the solution would have to be brought to pH 7 for injection. The University of Toronto later licensed protamine zinc insulin (PZI) to several manufacturers. This mixture only needs to be shaken before injection. The effects of PZI lasted for 24–36 h.
In 1946, Nordisk was able to form crystals of protamine and insulin and marketed it in 1950, as neutral protamine Hagedorn (NPH) insulin. NPH insulin has the advantage that it can be mixed with an insulin that has a faster onset to complement its longer lasting action.
Eventually all animal insulins made by Novo Nordisk were replaced by synthetic, recombinant "human" insulin. Synthetic "human" insulin is also complexed with protamine to form NPH.
Timeline
The timeline is as follows:
1926 Nordisk receives Danish charter to produce insulin
1936 Hagedorn discovers that adding protamine to insulin prolongs the effect of insulin
1936 Canadians D.M. Scott and A.M. Fisher formulate zinc insulin mixture and license to Novo
1946 Nordisk crystallizes a protamine and insulin mixture
1950 Nordisk markets NPH insulin
1953 Nordisk markets "Lente" zinc insulin mixtures.
Society and culture
Names
Brand names include Humulin N, Novolin N, Novolin NPH, Gensulin N, SciLin N, Insulatard, and NPH Iletin II.
See also
Insulin analogue
References
Insulin receptor agonists
Human proteins
Recombinant proteins
Peptide hormones
Peptide therapeutics
Drugs developed by Eli Lilly and Company
World Health Organization essential medicines
Wikipedia medicine articles ready to translate
Medical mnemonics | NPH insulin | Biology | 971 |
11,817,652 | https://en.wikipedia.org/wiki/Loupe | A loupe ( ) is a simple, small magnification device used to see small details more closely. They generally have higher magnification than a magnifying glass, and are designed to be held or worn close to the eye. A loupe does not have an attached handle, and its focusing lens(es) are contained in an opaque cylinder or cone. On some loupes this cylinder folds into an enclosing housing that protects the lenses when not in use.
Optics
Three basic types of loupes exist:
Simple lenses, generally used for low-magnification designs because of high optical aberration.
Compound lenses, generally used for higher magnifications to control optical aberration.
Prismatic, multiple lenses with prisms.
Uses
Loupes are used in many professions where magnification enables precision work to be done with greater efficiency and ease. Examples include surgery, dentistry, ophthalmology, the jewelry trade, gemology, questioned document examination and watchmaking. Loupes are also sometimes used in photography and printing.
Jewellers and gemologists
Jewellers typically use a monocular, handheld loupe to magnify gemstones and other jewelry that they wish to inspect. A 10× magnification is good to use for inspecting jewelry and hallmarks and is the Gemological Institute of America's standard for grading diamond clarity. Stones will sometimes be inspected at higher magnifications than 10×, although the depth of field and field of view become too small to be instructive. The accepted standard for grading diamonds is therefore that inclusions and blemishes visible at 10× impact the clarity grade. The inclusions in VVS diamonds are hard to find even at 10×.
Watchmaking
Loupes are employed to assist watchmakers in assembling mechanical watches. Many aspects require the use of the loupe, in particular the assembly of the watch mechanism itself, the assembly and details of the watch dial, as well as the formation of the watch strap and installation of precious stones onto the watch face.
Photography
Analog (film) photographers use loupes to review, edit or analyze negatives and slides on a light table. Typical magnifications for viewing slides full-frame depend on image format; 35 mm frames (24×36 mm slides to 38×38 mm superslides) are best viewed at ca. 5×, while ca. 3× is optimal for viewing medium format slides (6×4.5 cm / 6×6 cm / 6×7 cm). Often, a 10× loupe is used to examine critical sharpness. Photographers using large format cameras may use a loupe to view the ground glass image to aid in focusing. Users of digital single-lens reflex cameras use loupes to help to identify dust and other particles on the sensor, in preparation for sensor cleaning.
Dentistry
Dentists, hygienists, and dental therapists typically use binocular loupe glasses since they need both hands free when performing dental procedures. The magnification helps with accurate diagnoses of oral conditions and enhances surgical precision when completing treatment. Additionally, loupes can improve dentists' posture which can decrease occupational strain.
Some dental loupes are flip-type, which take the form of two small cylinders, one in front of each lens of the glasses. Other types are inset within the lens of the glasses.
Dental caries, also known as cavities, are most accurately identified by visual and tactile examination of a clean, dry tooth. Magnification enables dentists to improve their ability to differentiate between a stain and a cavity. Cavities are rated and scored based on their visual presentation. If magnification is too high diagnosis becomes difficult due to the small field of view. Ideal magnification for diagnostic purposes is up to 2×. Treatment of dental caries, periodontal disease, and pulpal disease are all aided by magnification.
The dental specialty of endodontics has performed the vast majority of research regarding magnification in dentistry. Because the identification of accessory canals in addition to the primary pulp canals is essential to complete nonsurgical root canal therapy, magnification provides dentists enhanced visualization to locate and treat more obscured canals.
Treatment of periodontal disease is achieved by removing calculus deposits, plaque and therefore bacteria which causes inflammation and subsequently bone destruction. In severe cases, surgery to reduce pocket depth is indicated. Periodontists and hygienists must visualize plaque and calculus to remove it. Magnification can assist dentists and hygienists with identification and removal of plaque and calculus in addition to improving visualization for periodontal surgery.
Ergonomics have long been a pain point for doctors who need to physically strain, bending over and looking down, to treat their patients. Over time this posture results in discomfort, pain, and even neuromuscular disease. Some modern loupes address this by incorporating refractive prisms which alter the course of the light through the telescopes, so that the dentist can maintain a neutral, upright position with eyes relaxed and looking straight ahead.
A typical magnification for use in dentistry is 2.5×, but dental loupes can be anywhere in the range from 2× to 8×. Optimal magnification is a function of the type of work the doctor does - namely, how much detail he or she needs to see, taking into consideration that when magnification increases, the field of view decreases. As a tool that sits on the face and is used for hours at a time, weight is also a significant factor in considering the type of loupes to use.
Together with proper access to the oral cavity, light is an important part of performing precision dentistry. Because a dentist's head often eclipses the overhead dental lamp, loupes may be fitted with a light source. Loupe-mounted lights used to be fed by fiber optic cables that connected to either a wall-mounted or table-top light source. Newer models feature a more convenient LED lamp within the loupe-mounted light and an electric cord coming from either the conventional wall-mounted or table-top light source or a belt clip rechargeable battery pack. Options for loupe-mounted cameras and video recorders are also available.
Surgery
Surgeons in many specialties commonly use loupes when doing surgery on delicate structures. The loupes used by surgeons are mounted in the lenses of glasses and are custom made for the individual surgeon, taking into account their corrected vision, interpupillary distance and desired focal distance. Multiple magnification powers are available. They are most commonly used in otolaryngology, neurosurgery, ophthalmology, plastic surgery, cardiac surgery, orthopedic surgery, and vascular surgery.
Geology
The loupe is a vital geological field tool used to identify small mineral crystals and structures in rocks.
Collectables
Loupes are an essential tool in both numismatics, the study of currency, and the related practice of coin collection. Coin collectors frequently employ loupes for better evaluation of the quality of their coins, since identifying surface wear is vital when attempting to classify the grade of a coin. Uncirculated coins (coins without wear) can command a substantial premium over coins with slight wear. This wear cannot always be seen with the naked eye. Numismatists can also employ loupes to identify some counterfeit coins that would pass a naked-eye visual inspection. Loupes are similarly used for evaluating other collectable objects, such as trading cards and antiques.
Archival conservation
Conservators often use hand held loupes or head-mounted binocular magnifiers such as the Optivisor to examine artifacts and documents requiring cleaning or repair.
See also
Bioptic telescope
Loupe light
Pocket comparator
References
External links
Dental equipment
Magnifiers
Photography equipment
Watchmaking | Loupe | Technology,Engineering | 1,609 |
3,526,072 | https://en.wikipedia.org/wiki/Source%20field | In theoretical physics, a source is an abstract concept, developed by Julian Schwinger, motivated by the physical effects of surrounding particles involved in creating or destroying another particle. So, one can perceive sources as the origin of the physical properties carried by the created or destroyed particle, and thus one can use this concept to study all quantum processes including the spacetime localized properties and the energy forms, i.e., mass and momentum, of the phenomena. The probability amplitude of the created or the decaying particle is defined by the effect of the source on a localized spacetime region such that the affected particle captures its physics depending on the tensorial and spinorial nature of the source. An example that Julian Schwinger referred to is the creation of meson due to the mass correlations among five mesons.
Same idea can be used to define source fields. Mathematically, a source field \( J(x) \) is a background field coupled to the original field \( \phi(x) \) through a term of the form
\[ S_{\text{source}} \;=\; \int d^{4}x \; J(x)\,\phi(x) . \]
This term appears in the action in Richard Feynman's path integral formulation and responsible for the theory interactions. In a collision reaction a source could be other particles in the collision. Therefore, the source appears in the vacuum amplitude acting from both sides on the Green's function correlator of the theory.
Schwinger's source theory stems from Schwinger's quantum action principle and can be related to the path integral formulation, as the variation of the vacuum amplitude with respect to the source corresponds to the field itself, i.e.,
\[ \frac{\delta}{i\,\delta J(x)}\,\langle 0_{+}|0_{-}\rangle^{J} \;=\; \langle \phi(x)\rangle_{J}\;\langle 0_{+}|0_{-}\rangle^{J} . \]
Also, a source acts effectively in a region of the spacetime. As one sees in the examples below, the source field appears on the right-hand side of the equations of motion (usually second-order partial differential equations) for the field. When the field is the electromagnetic potential or the metric tensor, the source field is the electric current or the stress–energy tensor, respectively.
In terms of the statistical and non-relativistic applications, Schwinger's source formulation plays crucial rules in understanding many non-equilibrium systems. Source theory is theoretically significant as it needs neither divergence regularizations nor renormalization.
Relation between path integral formulation and source formulation
In Feynman's path integral formulation with a suitable normalization, the partition function is given by
\[ Z[J] \;=\; \int \mathcal{D}\phi \; \exp\!\left( i S[\phi] \;+\; i\!\int d^{4}x \; J(x)\,\phi(x) \right) . \]
One can expand the current term in the exponent
to generate Green's functions (correlators), where the fields inside the expectation value are taken in the Heisenberg picture. On the other hand, one can define correlation functions for higher-order interaction terms: the coupling constant of an interaction term is itself promoted to a spacetime-dependent source.
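In the usual path-integral convention (a standard textbook relation, stated here for orientation since the article's own symbols did not survive formatting), the n-point correlators follow from repeated functional differentiation of the partition function with respect to the source:
\[ \langle T\,\phi(x_{1})\cdots\phi(x_{n})\rangle \;=\; \frac{1}{Z[0]}\,\frac{\delta^{\,n} Z[J]}{i\,\delta J(x_{1})\cdots i\,\delta J(x_{n})}\bigg|_{J=0} . \]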
One implements the quantum variational methodology to realize that the source acts as an external driving term for the field. From the perspective of probability theory, the partition function can be seen as an expectation value of a functional of the field. This motivates considering the Hamiltonian of the forced harmonic oscillator as a toy model
where .
In fact, the current is real, that is . And the Lagrangian is . From now on we drop the hat and the asterisk. Remember that canonical quantization states . In light of the relation between partition function and its correlators, the variation of the vacuum amplitude gives
, where .
As the integral is in the time domain, one can Fourier transform it, together with the creation/annihilation operators, such that the amplitude eventually becomes
.
It is easy to notice that there is a singularity in the frequency integral. Then, we can exploit the \( i\epsilon \)-prescription and shift the pole so that the Green's function is revealed
The last result is Schwinger's source theory for interacting scalar fields and can be generalized to any spacetime region. The examples discussed below follow the flat-spacetime metric conventions.
Source theory for scalar fields
Causal perturbation theory explains how sources weakly act. For a weak source emitting spin-0 particles by acting on the vacuum state with a probability amplitude , a single particle with momentum and amplitude is created within certain spacetime region . Then, another weak source absorbs that single particle within another spacetime region such that the amplitude becomes . Thus, the full vacuum amplitude is given by
where the kernel connecting the two sources is the propagator (correlator) of the sources. The second term of the last amplitude defines the partition function of free scalar field theory. And for some interaction theory, the Lagrangian of a scalar field coupled to a current is given by
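The elided Lagrangian is presumably the free scalar Lagrangian with a linear source coupling; in the mostly-minus signature a standard form (a sketch of the conventional expression, not necessarily the article's original normalization) is
\[ \mathcal{L} \;=\; \tfrac{1}{2}\,\partial_{\mu}\phi\,\partial^{\mu}\phi \;-\; \tfrac{1}{2}\,m^{2}\phi^{2} \;+\; J\phi . \]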
If one adds a small imaginary part (\( -i\epsilon \)) to the mass term and then Fourier transforms both the field and the current to momentum space, the vacuum amplitude becomes
,
where the notation is as above. It is easy to notice that the quadratic term in the amplitude can be Fourier transformed back to configuration space, i.e., it encodes the equation of motion. As the variation of the free action yields the equation of motion, one can redefine the Green's function as the inverse of the corresponding kinetic operator, which is a direct application of the general rule for functional derivatives. Thus, the generating functional is obtained from the partition function: the partition function can be read as an exponential (Gaussian) functional of the source, namely the vacuum amplitude driven by the source. Consequently, the propagator is obtained by varying the partition function twice with respect to the source, as follows.
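For the free theory this variation can be written out explicitly. In one common convention (signs and factors of i depend on metric and normalization choices, which the surviving text does not fix), the normalized free partition function and the resulting two-point function read
\[ Z_{0}[J] \;\propto\; \exp\!\Big[ -\tfrac{1}{2}\!\int d^{4}x\, d^{4}y \; J(x)\,\Delta_{F}(x-y)\,J(y) \Big], \qquad \langle T\,\phi(x)\phi(y)\rangle \;=\; \frac{1}{Z_{0}[0]}\,\frac{\delta^{2} Z_{0}[J]}{i\,\delta J(x)\; i\,\delta J(y)}\bigg|_{J=0} \;=\; \Delta_{F}(x-y), \]
where \( \Delta_{F} \) is the Feynman propagator.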
This motivates discussing the mean field approximation below.
Effective action, mean field approximation, and vertex functions
Based on Schwinger's source theory, Steven Weinberg established the foundations of the effective field theory, which is widely appreciated among physicists. Despite the "shoes incident", Weinberg gave the credit to Schwinger for catalyzing this theoretical framework.
All Green's functions may be formally found via Taylor expansion of the partition sum considered as a function of the source fields. This method is commonly used in the path integral formulation of quantum field theory. The general method by which such source fields are utilized to obtain propagators in both quantum, statistical-mechanics and other systems is outlined as follows. Upon redefining the partition function in terms of Wick-rotated amplitude , the partition function becomes . One can introduce , which behaves as Helmholtz free energy in thermal field theories, to absorb the complex number, and hence . The function is also called reduced quantum action. And with help of Legendre transform, we can invent a "new" effective energy functional, or effective action, as
given below, with the transforms relating the source, the mean field, and the two functionals.
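Writing \( W[J] \) for the reduced quantum action (the generator of connected correlators) and \( \phi_{\rm cl} \) for the mean field — notation assumed here, since the original symbols did not survive — the standard Legendre-transform relations are
\[ \Gamma[\phi_{\rm cl}] \;=\; W[J] \;-\; \int d^{4}x \; J(x)\,\phi_{\rm cl}(x), \qquad \phi_{\rm cl}(x) \;=\; \frac{\delta W[J]}{\delta J(x)}, \qquad \frac{\delta \Gamma[\phi_{\rm cl}]}{\delta \phi_{\rm cl}(x)} \;=\; -\,J(x). \]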
The integration in the definition of the effective action is allowed to be replaced with sum over , i.e., . The last equation resembles the thermodynamical relation between Helmholtz free energy and entropy. It is now clear that thermal and statistical field theories stem fundamentally from functional integrations and functional derivatives. Back to the Legendre transforms,
The is called mean field obviously because , while is a background classical field. A field is decomposed into a classical part and fluctuation part , i.e., , so the vacuum amplitude can be reintroduced as
,
and any function is defined as
,
where the action appearing in the exponent is that of the free Lagrangian. The last two integrals are the pillars of any effective field theory. This construction is indispensable in studying scattering (LSZ reduction formula), spontaneous symmetry breaking, Ward identities, nonlinear sigma models, and low-energy effective theories. Additionally, this theoretical framework initiated a line of thought, publicized mainly by Bryce DeWitt, who was a PhD student of Schwinger, on developing a canonically quantized effective theory for quantum gravity.
Back to the Green's functions of the actions. Since the effective action is the Legendre transform of the reduced quantum action, and the latter generates the N-point connected correlators, the corresponding correlators obtained from the effective action are known as vertex functions. Consequently, in the language of one-particle irreducible graphs (usually abbreviated 1PI), the connected 2-point correlator is the functional inverse of the 2-point vertex function. For higher N, the general relations between the N-point connected correlators and the vertex functions follow by further functional differentiation of this identity.
Source theory for fields
Vector fields
For a weak source producing a massive spin-1 particle with a general current acting on different causal spacetime points , the vacuum amplitude is
In momentum space, the spin-1 particle with rest mass has a definite momentum in its rest frame, i.e. . Then, the amplitude gives
where and is the transpose of . The last result matches the propagator used in the vacuum amplitude in configuration space, that is,
.
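A sketch of the standard momentum-space form of this amplitude, written in a mostly-plus metric signature; the overall normalization and the iε prescription are assumptions here and differ between texts:

\langle 0_+|0_-\rangle = \exp\!\left[\frac{i}{2}\int \frac{d^4p}{(2\pi)^4}\;
J^{\mu}(p)^{*}\,\frac{\eta_{\mu\nu} + p_\mu p_\nu/m^2}{p^2 + m^2 - i\epsilon}\,J^{\nu}(p)\right],

so only the combination \theta_{\mu\nu} \equiv \eta_{\mu\nu} + p_\mu p_\nu/m^2, which annihilates p^\nu on the mass shell p^2 = -m^2 and therefore counts the three physical polarizations, couples the two currents.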
When , the chosen Feynman-'t Hooft gauge-fixing makes the spin-1 massless. And when , the chosen Landau gauge-fixing makes the spin-1 massive. The massless case is obvious, as studied in quantum electrodynamics. The massive case is more interesting, as the current is not required to be conserved. However, the current can be improved, in a way similar to how the Belinfante-Rosenfeld tensor is improved, so that it ends up being conserved. And to get the equation of motion for the massive vector, one can define
One can apply integration by parts to the second term and then single out to get a definition of the massive spin-1 field
Additionally, the equation above says that . Thus, the equation of motion can be written in any of the following forms
Massive totally symmetric spin-2 fields
For a weak source in a flat Minkowski background, producing and then absorbing a massive spin-2 particle with a general redefined energy-momentum tensor, acting as a current, , where is the vacuum polarization tensor, the vacuum amplitude in a compact form is
or
This amplitude in momentum space gives (the transpose is embedded)
And with the help of the symmetric properties of the source, the last result can be written as , where the projection operator, or the Fourier transform of the Jacobi field operator obtained by applying the Peierls bracket to Schwinger's variational principle, is .
In N-dimensional flat spacetime, 2/3 is replaced by 2/(N-1). And for massless spin-2 fields, the projection operator is defined as .
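A sketch of the corresponding structures, with the normalization chosen so that the trace coefficient matches the 2/3 and 2/(N-1) quoted above (the choice of normalization is an assumption; only the relative trace coefficients matter):

\Pi_{\mu\nu,\alpha\beta} = \theta_{\mu\alpha}\theta_{\nu\beta} + \theta_{\mu\beta}\theta_{\nu\alpha} - \frac{2}{N-1}\,\theta_{\mu\nu}\theta_{\alpha\beta},
\qquad \theta_{\mu\nu} = \eta_{\mu\nu} + \frac{p_\mu p_\nu}{m^2},

while in the massless case the trace coefficient 2/(N-1) is replaced by 2/(N-2); in four dimensions this is the gap between 2/3 and 1 that survives as m goes to zero, the van Dam-Veltman-Zakharov discontinuity alluded to below.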
Together with the help of the Ward-Takahashi identity, the projection operator is crucial for checking the symmetry properties of the field, the conservation law of the current, and the allowed physical degrees of freedom.
It is worth noting that the vacuum polarization tensor and the improved energy momentum tensor appear in the early versions of massive gravity theories. Interestingly, massive gravity theories were not widely appreciated until recently, due to apparent inconsistencies obtained in early-1970s studies of the exchange of a single spin-2 field between two sources. But in 2010 the dRGT approach of exploiting Stueckelberg field redefinition led to a consistent covariantized massive theory free of the ghosts and discontinuities obtained earlier.
If one looks at and follows the same procedure used to define massive spin-1 fields, then it is easy to define massive spin-2 fields as
The corresponding divergence condition reads , where the current is not necessarily conserved (it is not a gauge condition like that of the massless case). But the energy-momentum tensor can be improved as such that , according to the Belinfante-Rosenfeld construction. Thus, the equation of motion
becomes
One can use the divergence condition to decouple the non-physical fields and , so the equation of motion is simplified as
.
Massive totally symmetric arbitrary integer spin fields
One can generalize the source to a higher-spin source such that becomes . The generalized projection operator also helps generalize the electromagnetic polarization vector of the quantized electromagnetic vector potential as follows. For spacetime points , the addition theorem of spherical harmonics states that
.
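For reference, the familiar three-dimensional form of this theorem, which the construction here generalizes to N dimensions and to tensor indices, is

P_\ell(\hat{x}\cdot\hat{y}) = \frac{4\pi}{2\ell+1}\sum_{m=-\ell}^{\ell} Y_{\ell m}(\hat{x})\,Y_{\ell m}^{*}(\hat{y}).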
Also, the representation theory of the space of complex-valued homogeneous polynomials of degree on a unit (N-1)-sphere defines the polarization tensor as follows. Then, the generalized polarization vector is .
And the projection operator can be defined as .
The symmetric properties of the projection operator make it easier to deal with the vacuum amplitude in momentum space. Therefore, rather than expressing it in terms of the correlator in configuration space, we write
.
Mixed symmetric arbitrary spin fields
Also, it is theoretically consistent to generalize the source theory to describe hypothetical gauge fields with antisymmetric and mixed symmetric properties in arbitrary dimensions and arbitrary spins. But one should take care of the unphysical degrees of freedom in the theory. For example in N-dimensions and for a mixed symmetric massless version of Curtright field and a source , the vacuum amplitude is which for a theory in N=4 makes the source eventually reveal that it is a theory of a non physical field. However, the massive version survives in N≥5.
Arbitrary half-integer spin fields
For spin- fermion propagator and current as defined above, the vacuum amplitude is
In momentum space the reduced amplitude is given by
For spin- Rarita-Schwinger fermions, Then, one can use and the on-shell to get
One can replace the reduced metric with the usual one if the source is replaced with
For spin-, the above results can be generalized to
The factor is obtained from the properties of the projection operator, the tracelessness of the current, and the conservation of the current after being projected by the operator. These conditions can be derived from the Fierz-Pauli and the Fang-Fronsdal conditions on the fields themselves. The Lagrangian formulations of massive fields and their conditions were studied by Lambodar Singh and Carl Hagen. The non-relativistic version of the projection operators, developed by Charles Zemach, another student of Schwinger, is used heavily in hadron spectroscopy. Zemach's method could be relativistically improved to render the covariant projection operators.
See also
Keldysh-Schwinger formalism
Schwinger function
Wigner-Bargmann equations
Joos–Weinberg equation
References
Quantum field theory | Source field | Physics | 2,863 |
337,976 | https://en.wikipedia.org/wiki/Shirin%20Ebadi | Shirin Ebadi (; born 21 June 1947) is an Iranian Nobel laureate, lawyer, writer, teacher and a former judge and founder of the Defenders of Human Rights Center in Iran. In 2003, Ebadi was awarded the Nobel Peace Prize for her pioneering efforts for democracy and women's, children's, and refugee rights. She was the first Muslim woman and the first Iranian to receive the award.
She has lived in exile in London since 2009.
Life and early career as a judge
Ebadi was born in Hamadan into an educated Persian family. Her father, Mohammad Ali Ebadi, was the city's chief notary public and a professor of commercial law. Her mother, Minu Yamini, was a homemaker. She was of Jewish descent. When she was an infant, her family moved to Tehran. Before earning a law degree from the University of Tehran, Ebadi attended Anoshiravan Dadgar and Reza Shah Kabir schools.
She was admitted to the law department of the University of Tehran in 1965, and in 1969, upon graduation, she passed the qualification exams to become a judge. After a six-month internship period, she officially became a judge in March 1969. She continued her studies at the University of Tehran to pursue a doctorate in law; in 1971, one of her professors was Mahmoud Shehabi Khorassani. In 1975, she became the first female president of the Tehran city court and served until the Iranian Revolution. She was one of the first female judges in Iran.
After the 1979 Revolution, women were no longer allowed to serve as judges, and
she was dismissed and given a new job as a clerk in the court she had presided over.
Later, despite already having a law office permit, her applications were repeatedly rejected, and Ebadi was unable to practice law until 1993. She used this free time to write books and many articles in Iranian periodicals.
Ebadi as a lawyer
By 2004, Ebadi was lecturing law at the University of Tehran while practicing law in Iran. She is a campaigner for strengthening the legal status of children and women, and her work on women's rights played a key role in the May 1997 landslide presidential election of the reformist Mohammad Khatami.
As a lawyer, she is known for taking up pro bono cases of dissident figures who have fallen foul of the judiciary. Among her clients were the family of Dariush Forouhar, a dissident intellectual and politician who was found stabbed to death – along with his wife, Parvaneh Eskandari – in their home.
The couple was among several dissidents who died in a spate of gruesome murders that terrorized Iran's intellectual community. Suspicion fell on extremist hard-liners determined to stop the more liberal climate fostered by President Khatami, who championed freedom of speech. The murders were found to be committed by a team of employees of the Iranian Ministry of Intelligence, whose head, Saeed Emami, allegedly committed suicide in jail before being brought to court.
Ebadi also represented the family of Ezzat Ebrahim-Nejad, who was killed in the Iranian student protests in July 1999. In 2000 Ebadi was accused of manipulating the videotaped confession of Amir Farshad Ebrahimi, a former member of the Ansar-e Hezbollah. Ebrahimi confessed his involvement in attacks by the organization on the orders of high-level conservative authorities, including the killing of Ezzat Ebrahim-Nejad and attacks against members of President Khatami's cabinet. Ebadi claimed that she had only videotaped Amir Farshad Ebrahimi's confessions to present them to the court. This case was named "Tape makers" by hardliners who questioned the credibility of his videotaped deposition and his motives. Ebadi and another lawyer, Rohami, were sentenced to five years in jail and suspension of their law licenses for sending Ebrahimi's videotaped deposition to President Khatami and the head of the Islamic judiciary. The Islamic judiciary's supreme court later vacated the sentences, but they did not forgive Ebrahimi's videotaped confession and sentenced him to 48 months in jail, including 16 months in solitary confinement. This case brought an increased focus on Iran from human rights groups abroad.
Ebadi has also defended various child abuse cases, including the case of Arian Golshani, a child who was abused for years and then beaten to death by her father and stepbrother. This case gained international attention and caused controversy in Iran. Ebadi used this case to highlight Iran's problematic child custody laws, whereby custody of children in divorce is usually given to the father, even in the case of Arian, where her mother had told the court that the father was abusive and had begged for custody of her daughter. Ebadi also handled the case of Leila, a teenage girl who was gang-raped and murdered. Leila's family became homeless, trying to cover the costs of the execution of the perpetrators owed to the government because, in the Islamic Republic of Iran, it is the victim's family's responsibility to pay to restore their honor when a girl is raped by paying the government to execute the perpetrator. Ebadi was not able to achieve victory in this case. Still, she brought international attention to this problematic law. Ebadi also handled a few cases dealing with bans of periodicals (including the cases of Habibollah Peyman, Abbas Marufi, and Faraj Sarkouhi). She has also established two non-governmental organizations in Iran with Western funding, the Society for Protecting the Rights of the Child (SPRC) (1994) and the Defenders of Human Rights Center (DHRC) in 2001.
She also helped in the drafting of the original text of a law against physical abuse of children, which was passed by the Iranian parliament in 2002. Female members of Parliament also asked Ebadi to draft a law explaining how a woman's right to divorce her husband is in line with Sharia (Islamic Law). Ebadi presented the bill before the government, but the male members made her leave without considering the bill, according to Ebadi's memoir.
Political views
In her book Iran Awakening, Ebadi explains her political/religious views on Islam, democracy and gender equality:
In the last 23 years, from the day I was stripped of my judgeship to the years of doing battle in the revolutionary courts of Tehran, I had repeated one refrain: an interpretation of Islam that is in harmony with equality and democracy is an authentic expression of faith. Not religion binds women, but the selective dictates of those who wish them cloistered. That belief and the conviction that change in Iran must come peacefully and from within has underpinned my work.
At the same time, Ebadi expresses a nationalist love of Iran and has criticized the policies and actions of Western countries. She opposed the pro-Western Shah, initially supported the Islamic Revolution, and remembers the CIA's 1953 overthrow of prime minister Mohammad Mosaddeq with rage.
At a press conference shortly after the Peace Prize announcement, Ebadi explicitly rejected foreign interference in the country's affairs: "The fight for human rights is conducted in Iran by the Iranian people, and we are against any foreign intervention in Iran."
Subsequently, Ebadi openly defended the Islamic regime's nuclear development program:
Aside from being economically justified, it has become a cause of national pride for an old nation with a glorious history. No Iranian government, regardless of its ideology or democratic credentials, would dare to stop the program.
However, in a 2012 interview, Ebadi stated:
The [Iranian] people want to stop enrichment, but the government doesn't listen. Iran is situated on a fault line, and people are scared of a Fukushima type of situation happening. We want peace, security, and economic welfare, and we cannot forgo all of our other rights for nuclear energy. The government claims it is not making a bomb. But I am not a member of the government, so I cannot speak to this directly. The fear is that if they do, Israel will be wiped out. If the Iranian people are able to topple the government, this could improve the situation. [In 2009] the people of Iran rose up and were badly suppressed. Right now, Iran is the country with the most journalists in prison. This is the price people are paying.
Concerning the Israeli–Palestinian conflict, in 2010, Shirin Ebadi, was one of four Peace Prize laureates supporting legislation requiring the University of California to divest itself from any companies providing technology to the Israel Defense Forces, who (bill supporters declared) were engaged in war crimes. (The legislation was supported by the Associated Students of the University of California).
Since the victory of Hassan Rouhani in the 2013 Iranian presidential election, Shirin Ebadi has expressed her worry about the growing human rights violations in her homeland. In her December 2013 speech at a Human Rights Day seminar at Leiden University, Ebadi said angrily: "I will shut up, but the problems of Iran will not be solved".
In April 2015, speaking on the subject of the Western campaign against the Sunni extremist group ISIL in Syria and Iraq, Ebadi expressed her desire that the Western world spend money funding education and an end to corruption rather than fighting with guns and bombs. She reasoned that because the Islamic State stems from an ideology based on a "wrong interpretation of Islam", the physical force will not end ISIS because it will not end its beliefs.
In 2018, in an interview with Bloomberg, Ebadi stated her belief that the Islamic Republic has reached a point at which it is no longer reformable. Ebadi called for a referendum on the Islamic Republic.
Nobel Peace Prize
On 10 October 2003, Ebadi was awarded the Nobel Peace Prize for her efforts for democracy and human rights, especially for the rights of women and children. The selection committee praised her as a "courageous person" who "has never heeded the threat to her own safety". Now she travels abroad lecturing in the West. She is against a policy of forced regime change.
The decision of the Nobel committee surprised some observers worldwide. Pope John Paul II had been predicted to win the Peace Prize amid speculation that he was nearing death. The era in which her prize was granted has been called one "when there still seemed a chance of something resembling a détente" between the U.S. and Iran (according to the Associated Press).
She presented a book entitled Democracy, human rights, and Islam in modern Iran: Psychological, social and cultural perspectives to the Nobel Committee. The volume documents the historical and cultural basis of democracy and human rights from Cyrus and Darius, 2,500 years ago to Mohammad Mossadeq, the prime minister of modern Iran who nationalized the oil industry.
In her acceptance speech, Ebadi criticized repression in Iran and insisted that Islam was compatible with democracy, human rights and freedom of opinion. In the same speech she also criticized US foreign policy, particularly the War on terrorism. She was the first Iranian and the first Muslim woman to receive the prize.
Thousands greeted her at the airport when she returned from Paris after receiving the news that she had won the prize. The response to the Award in Iran was mixed—enthusiastic supporters greeted her at the airport upon her return, the conservative media underplayed it, and then-Iranian President Mohammad Khatami criticized it as political. In Iran, officials of the Islamic Republic were either silent or critical of the selection of Ebadi, calling it a political act by a pro-Western institution, and were also critical when Ebadi did not cover her hair at the Nobel award ceremony. IRNA reported the Nobel committee's decision in a few lines that the evening newspapers and the Iranian state media waited hours to report—and then only as the last item on the radio news update. Reformist officials are said to have "generally welcomed the award", but "come under attack for doing so." Reformist president Mohammad Khatami did not officially congratulate Ms. Ebadi and stated that although the scientific Nobels are important, the Peace Prize is "not very important" and was awarded to Ebadi on the basis of "totally political criteria". Vice President Mohammad Ali Abtahi, the only official to initially congratulate Ebadi, defended the president, saying "abusing the President's words about Ms. Ebadi is tantamount to abusing the prize bestowed on her for political considerations".
In 2009, Norway's Foreign Minister Jonas Gahr Støre, published a statement reporting that Ebadi's Nobel Peace Prize had been confiscated by Iranian authorities and that "This [was] the first time a Nobel Peace Prize ha[d] been confiscated by national authorities." Iran denied the charges.
Post-Nobel prize
Since receiving the Nobel Prize, Ebadi has lectured, taught and received awards in different countries, issued statements and defended people accused of political crimes in Iran. She has traveled to and spoken to audiences in India, the United States, and other countries; released her autobiography in an English translation. With five other Nobel laureates, she created the Nobel Women's Initiative to promote peace, justice, and equality for women. In 2019, Ebadi called for a treaty to end violence against women, in support of Every Woman Coalition.
Threats
In April 2008, she told Reuters news agency that Iran's human rights record had regressed in the past two years and agreed to defend Baháʼís arrested in Iran in May 2008.
In April 2008, Ebadi released a statement saying: "Threats against my life and security and those of my family, which began some time ago, have intensified", and that the threats warned her against making speeches abroad and to stop defending Iran's persecuted Baháʼí community. In August 2008, the IRNA news agency published an article attacking Ebadi's links to the Baháʼí Faith and accused her of seeking support from the West. It also criticized Ebadi for defending homosexuals, appearing without the Islamic headscarf abroad, questioning Islamic punishments, and "defending CIA agents". It accused her daughter, Nargess Tavassolian, of conversion to the Baháʼí faith, a capital offense in the Islamic Republic. However Shirin Ebadi has denied it, saying, "I am proud to say that my family and I are Shiites," Her daughter believes "the government wanted to scare my mother with this scenario." Ebadi believes the attacks are in retaliation for her agreeing to defend the families of the seven Baháʼís arrested in May.
In December 2008, Iranian police shut down the office of a human rights group led by her. Another human rights group, Human Rights Watch, has said it was "extremely worried" about Ebadi's safety, and in December 2009 issued a statement demanding the Islamic Republic "stop harassing" her. Among many other complaints, the group accused the IRI of detaining "Ebadi's husband and sister for questioning and threatened them with losing their jobs and eventual arrest if Ebadi continues her human rights advocacy."
Seizure
Ebadi said while in London in late November 2009 that her Nobel Peace Prize medal and diploma had been taken from their bank box alongside her and a ring she had received from Germany's association of journalists. She said they had been taken by the Revolutionary Court approximately three weeks previously. Ebadi also said her bank account was frozen by authorities. Norwegian Minister of Foreign Affairs Jonas Gahr Støre expressed his "shock and disbelief" at the incident. The Iranian foreign ministry subsequently denied the confiscation, and also criticized Norway for interfering in Iran's affairs.
Post-Nobel Prize timeline
2003 (November) – She declared that she would provide legal representation for the family of the murdered Canadian freelance photographer Zahra Kazemi. The trial was halted in July 2004, prompting Ebadi and her team to leave the court in protest that their witnesses had not been heard.
2004 (January) – During the World Social Forum in Bombay Ebadi, speaking at a small girls' school run by the NGO "Sahyog", proposed that 30 January (the day Mahatma Gandhi was assassinated) be observed as International Day of Non-Violence. This proposal was brought to her by school children in Paris by their Indian teacher Akshay Bakaya. Three years later, Sonia Gandhi and Archbishop Desmond Tutu relayed the idea at the Delhi Satyagraha Convention in January 2007, preferring however to propose Gandhi's birthday on 2 October. The UN General Assembly on 15 June 2007 adopted 2 October as the International Day of Non-Violence.
2004 – Ebadi was listed by Forbes magazine as one of the "100 most powerful women in the world". She is also included in a published list of the "100 most influential women of all time".
2005 Spring – Ebadi taught a course on "Islam and Human Rights" at the University of Arizona's James E. Rogers College of Law in Tucson, Arizona.
2005 (12 May) – Ebadi delivered an address on Senior Class Day at Vanderbilt University, Nashville, Tennessee. Vanderbilt Chancellor Gordon Gee presented Ebadi with the Chancellor's Medal for her human rights work.
2005 – Ebadi was voted the world's 12th leading public intellectual in The 2005 Global Intellectuals Poll by Prospect (UK).
2006 – Random House released her first book for a Western audience, Iran Awakening: A Memoir of Revolution and Hope, with Azadeh Moaveni. A reading of the book was serialized as BBC Radio 4's Book of the Week in September 2006. American novelist David Ebershoff was the book's editor.
2006 – Ebadi was one of the founders of The Nobel Women's Initiative along with sister Nobel Peace laureates Betty Williams, Mairead Corrigan Maguire, Wangari Maathai, Jody Williams and Rigoberta Menchú Tum. Six women representing North America and South America, Europe, the Middle East and Africa decided to bring together their experiences in a united effort for peace with justice and equality. The Nobel Women's Initiative aims to help strengthen work being done in support of women's rights worldwide.
2007 (17 May) – Ebadi announced that she would defend the Iranian American scholar Haleh Esfandiari, who is jailed in Tehran.
2008 (March) – Ebadi tells Reuters news agency that Iran's human rights record had regressed in the past two years.
2008 (14 April) – Ebadi released a statement saying, "Threats against my life and security and those of my family, which began some time ago, have intensified", and that the threats warned her against making speeches abroad and against defending Iran's persecuted Baháʼí community.
2008 (June) – Ebadi volunteered to be the lawyer for the arrested Baháʼí leadership of Iran in June.
2008 (7 August) – Ebadi announced via the Muslim Network for Baháʼí Rights that she would defend in court the seven Baháʼí leaders arrested in the spring.
2008 (1 September) – Ebadi published her book Refugee Rights in Iran exposing the lack of rights given to Afghan refugees living in Iran.
2008 (21 December) – Ebadi's office of the Center for the Defense of Human Rights was raided and closed.
2008 (29 December) – Islamic authorities close Ebadi's Center for Defenders of Human Rights, raiding her private office, seizing her computers and files. Worldwide condemnation of raid.
2009 (1 January) – Pro-regime "demonstrators" attack Ebadi's home and office.
2009 (12 June) – Ebadi was at a seminar in Spain at the time of the Iranian presidential election. "[W]hen the crackdown began colleagues told her not to come home" and as of October 2009 she had not returned to Iran.
2009 (16 June) – In the midst of nationwide protests against the very surprising and highly suspect election results giving incumbent President Mahmoud Ahmadinejad a landslide victory, Ebadi calls for new elections in an interview with Radio Free Europe.
2009 (24 September) – Touring abroad to lobby international leaders and highlight the Islamic regime's human rights abuses since June, Ebadi criticizes the British government for putting talks on the Islamic regime's nuclear program ahead of protesting its brutal suppression of opposition. Noting the British Ambassador attended President Ahmadinejad's inauguration, she said, "`That's when I felt that human rights were being neglected. ... Undemocratic countries are more dangerous than a nuclear bomb. It's undemocratic countries that jeopardize international peace.`" She calls for "the downgrading of Western embassies, the withdrawal of ambassadors and the freezing of the assets of Iran's leaders."
2009 (November) – The Iranian authorities seize Ebadi's Nobel medal together with other belongings from her safe-deposit box.
2009 (29 December) – Ebadi's sister Noushin Ebadi was detained apparently to silence Ebadi who is abroad. "She was neither politically active nor had a role in any rally. It's necessary to point out that in the past two months she had been summoned several times to the Intelligence Ministry, who told her to persuade me to give up my human rights activities. I have been arrested solely because of my activities in human rights," Ebadi said.
2010 (June) – Ebadi's husband denounced her on state television. According to Ebadi this was a coerced confession after his arrest and torture.
2012 (26 January) — in a statement released by the International Campaign for Human Rights in Iran, Ebadi called on "all freedom-loving people across the globe" to work for the release of three opposition leaders — Zahra Rahnavard, Mir Hossein Mousavi, and Mehdi Karroubi — who have been confined to house arrest for nearly a year.
Lawsuits
Lawsuit against the United States
In 2004, Ebadi filed a lawsuit against the U.S. Department of Treasury because of restrictions she faced over publishing her memoir in the United States. American trade laws prohibited the publication of works by writers from embargoed countries. The law also banned American literary agent Wendy Strothman from working with Ebadi. Azar Nafisi wrote a letter in support of Ebadi. Nafisi said that the law infringes on the First Amendment. After a lengthy legal battle, Ebadi won and was able to publish her memoir in the United States.
Other activities
Apne Aap Women Worldwide, Co-Chair of the International Advisory Board
Aurora Prize, Member of the Selection Committee (since 2015)
Business for Peace Award Committee, Member (2009)
Reporters Without Borders (RWB), Member of the Emeritus Board
Scholars at Risk (SAR), Member of the Ambassadors Council
Nuremberg International Human Rights Award, Member of the Jury (2004–2020)
Recognition
Awards
Awarded plate by Human Rights Watch, 1996
Official spectator of Human Rights Watch, 1996
Awarded Rafto Prize, Human Rights Prize in Norway, 2001
Nobel Peace Prize in October 2003
Women's eNews 21 Leaders for the 21st Century Award, 2004
International Democracy Award, 2004
James Parks Morton Interfaith Award from the Interfaith Center of New York, 2004
‘Lawyer of the Year’ award, 2004
UCI Citizen Peacebuilding Award, 2005
The Golden Plate Award by the American Academy of Achievement, 2005
Legion of Honor award, 2006
Toleranzpreis der Evangelischen Akademie Tutzing, 2008
Award for the Global Defence of Human Rights, International Service Human Rights Award, 2009
Wolfgang Friedmann Memorial Award, Columbia Journal of Transnational Law, 2013
Honorary degrees
Doctor of Laws, Williams College, 2004
Doctor of Laws, Brown University, 2004
Doctor of Laws, University of British Columbia, 2004
Honorary doctorate, University of Maryland, College Park, 2004
Honorary doctorate, University of Toronto, 2004
Honorary doctorate, Simon Fraser University, 2004
Honorary doctorate, University of Akureyri, 2004
Honorary doctorate, Australian Catholic University, 2005
Honorary doctorate, University of San Francisco, 2005
Honorary doctorate, Concordia University, 2005
Honorary doctorate, York University, Canada, 2005
Honorary doctorate, Université Jean Moulin in Lyon, 2005
Honorary doctorate, Loyola University Chicago, 2007
Honorary Doctorate The New School University, 2007
Honorary Doctor of Laws, Marquette University, 2009
Honorary Doctor of Law, University of Cambridge, 2011
Honorary Doctorate, School of Oriental and African Studies (SOAS) University of London, 2012
Honorary Doctor of Laws, Law Society of Upper Canada, 2012
Books published
Iran Awakening: One Woman's Journey to Reclaim Her Life and Country (2007)
Refugee Rights in Iran (2008)
The Golden Cage: Three brothers, Three choices, One destiny (2011)
Until We Are Free (2016)
See also
Iranian women
List of famous Persian women
List of peace activists
Intellectual movements in Iran
Persian women's movement
Islamic feminism
References
Further reading
Monshipouri, M. (2009). "Shirin Ebadi" in Encyclopedia of human rights. Volume 2. David Forsythe (Ed.). Oxford University Press.
External links
Shirin Ebadi's biography, Iowa State University
Interview With Iranian Nobel Prize Winner: Shirin Ebadi. PBS
Gruber Distinguished Lecture in Global Justice: Dr. Shirin Ebadi, Yale Law School
Nobel Women's Initiative
Quotes from Shirin Ebadi Speeches
TIME.com: 10 Questions for Shirin Ebadi
Shirin Ebadi, avocate pour les droits de l'homme en Iran Jean Albert, Ludivine Tomasso and edited by Jacqueline Duband, Emilie Dessens
Press interviews
Iranian elections – Nobel Peace Prize winner Shirin Ebadi talks to Euronews 2013 June 12
David Batty in conversation with Shirin Ebadi, "If you want to help Iran, don't attack", The Guardian, 13 June 2008
Nermeen Shaikh, AsiaSource Interview with Shirin Ebadi
"Iran's Quiet Revolution" Winter 2007 article from Ms. magazine about activism and feminism in Iran.
Video
Video: Shirin Ebadi on 'What's Ahead for Iran', Asia Society, New York, 3 March 2010
Shirin Ebadi Presses Iran on Human Rights and Warns Against International Sanctions – video by Democracy Now!
Pictures
Picture Gallery
1947 births
Living people
Iranian democracy activists
Iranian dissidents
Iranian human rights activists
Iranian women activists
Iranian women's rights activists
Iranian exiles
Children's rights activists
Iranian feminists
Iranian emigrants to the United Kingdom
Nobel Peace Prize laureates
Iranian Nobel laureates
Academic staff of the University of Tehran
University of Tehran alumni
People from Hamadan
Commanders of the Legion of Honour
Iranian women lawyers
Iranian women judges
Pacifist feminists
Women Nobel laureates
Iranian women writers
Iranian writers
Nonviolence advocates
Carnegie Council for Ethics in International Affairs
Members of the National Council for Peace | Shirin Ebadi | Technology | 5,590 |
8,861,618 | https://en.wikipedia.org/wiki/Patterson%20power%20cell | The Patterson power cell is a cold fusion device invented by chemist James A. Patterson, which he claimed created 200 times more energy than it used. Patterson claimed the device neutralized radioactivity without emitting any harmful radiation. Cold fusion was the subject of an intense scientific controversy in 1989, before being discredited in the eyes of mainstream science. Physicist Robert L. Park describes the device as fringe science in his book Voodoo Science.
Company formed
In 1995, Clean Energy Technologies Inc. was formed to produce and promote the power cell.
Claims and observations
Patterson variously said it produced a hundred or two hundred times more power than it used. Representatives promoting the device at the Power-Gen '95 Conference said that an input of 1 watt would generate more than 1,000 watts of excess heat (waste heat). This supposedly happened as hydrogen or deuterium nuclei fuse together to produce heat through a form of low energy nuclear reaction. The by-products of nuclear fusion, e.g. a tritium nucleus and a proton or an 3He nucleus and a neutron, were not detected in any reliable way, leading experts to think that no such fusion was taking place.
It was further claimed that if radioactive isotopes such as uranium were present, the cell enables the hydrogen nuclei to fuse with these isotopes, transforming them into stable elements and thus neutralizing the radioactivity. It was claimed that the transformation would be achieved without releasing any radiation to the environment and without expending any energy. A televised demonstration on June 11, 1997, on Good Morning America provided no proof for the claims. As of 2002, the neutralization of radioactive isotopes has only been achieved through intense neutron bombardment in a nuclear reactor or a large-scale high-energy particle accelerator, and at a large expense of energy.
Patterson has carefully distanced himself from the work of Fleischmann and Pons and from the label of "cold fusion", due to the negative connotations associated with them since 1989. Ultimately, this effort was unsuccessful, and not only did it inherit the label of pathological science, but it managed to make cold fusion look a little more pathological in the public eye. Some cold fusion proponents view the cell as a confirmation of their work, while critics see it as "the fringe of the fringe of cold fusion research", since it attempts to commercialize cold fusion on top of making bad science.
In 2002, John R. Huizenga, professor of nuclear chemistry at the University of Rochester, who was head of a government panel convened in 1989 to investigate the cold fusion claims of Fleischmann and Pons, and who wrote a book about the controversy, said "I would be willing to bet there's nothing to it", when asked about the Patterson Power Cell.
Replications
George H. Miley is a professor of nuclear engineering and a cold fusion researcher who claims to have replicated the Patterson power cell. During the 2011 World Green Energy Symposium, Miley stated that his device continuously produces several hundred watts of power. Earlier results by Miley have not convinced researchers.
On Good Morning America, Quintin Bowles, professor of mechanical engineering at the University of Missouri–Kansas City, claimed in 1996 to have successfully replicated the Patterson power cell. In the book Voodoo Science, Bowles is quoted as having stated: "It works, we just don't know how it works."
A replication has been attempted at Earthtech, using a CETI-supplied kit. They were not able to replicate the excess heat.
References
Further reading
Bailey, Patrick and Fox, Hal (October 20, 1997). A review of the Patterson Power Cell. Retrieved November 19, 2011. An earlier version of this paper appears in: Energy Conversion Engineering Conference, 1997; Proceedings of the 32nd Intersociety Energy Conversion Engineering Conference. Publication Date: Jul 27 – Aug 1, 1997. Volume 4, pages 2289–2294. Meeting Date: July 27, 1997 – January 8, 1997. Location: Honolulu, HI, USA.
Ask the experts, "What is the current scientific thinking on cold fusion? Is there any possible validity to this phenomenon?", Scientific American, October 21, 1999,(Patterson is mentioned on page 2). Retrieved December 5, 2007
Chemical equipment
Electrochemistry
Electrolysis
Fringe physics
Cold fusion | Patterson power cell | Physics,Chemistry,Engineering | 877 |
25,666,577 | https://en.wikipedia.org/wiki/Super-dense%20water | Super-dense water is water that has been contained in an environment with both molecular uniformity and extreme depth, which causes the molecules of water to be packed tightly together and thus gain a tougher solidity and higher density than regular ice. Super dense water is found on planets, such as the moons Tethys, Ganymede, Callisto, and Europa in the Solar System, which are covered entirely in water and have little to no landmass.
Speculation
Speculation exists that a planet located around 30 light-years from Earth may contain super-dense water. See ocean planet for more information on its formation.
References
Planetary science
Water | Super-dense water | Astronomy,Environmental_science | 132 |
393,055 | https://en.wikipedia.org/wiki/Index%20set | In mathematics, an index set is a set whose members label (or index) members of another set. For instance, if the elements of a set may be indexed or labeled by means of the elements of a set , then is an index set. The indexing consists of a surjective function from onto , and the indexed collection is typically called an indexed family, often written as .
Examples
An enumeration of a set gives an index set , where is the particular enumeration of .
Any countably infinite set can be (injectively) indexed by the set of natural numbers .
For r ∈ ℝ, the indicator function on r is the function 1_r : ℝ → {0, 1} given by 1_r(x) = 0 if x ≠ r and 1_r(x) = 1 if x = r.
The set of all such indicator functions, { 1_r : r ∈ ℝ }, is an uncountable set indexed by ℝ.
Other uses
In computational complexity theory and cryptography, an index set is a set for which there exists an algorithm that can sample the set efficiently; e.g., on input n, the algorithm can efficiently select a poly(n)-bit-long element from the set.
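A minimal Python sketch of an index set that is efficiently samplable in this sense; the choice of index set (the n-bit strings) and the function name are illustrative assumptions, not part of any standard beyond the secrets module:

import secrets

def sample(n: int) -> str:
    """Return a uniformly random n-bit string in time polynomial in n."""
    return format(secrets.randbits(n), "0{}b".format(n)) if n > 0 else ""

# The family of sets {0,1}^n, indexed by n, is an index set in this sense:
# sample(n) uses poly(n) random bits and poly(n) work to pick an element.
print(sample(8))   # e.g. '01101001'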
See also
Friendly-index set
References
Mathematical notation
Basic concepts in set theory | Index set | Mathematics | 219 |
58,877,790 | https://en.wikipedia.org/wiki/Xiaomi%20Mi%20MIX%203 | The Xiaomi Mi MIX 3 is an Android smartphone launched in Beijing on 25 October 2018. It is the successor for the Mi MIX 2 and Mi MIX 2S. This time Xiaomi uses a true bezel-less display with a magnetic sliding front camera setup.
The Xiaomi Mi MIX 3 has an overall score of 103 and a photo score of 108 on DxOMark.
Xiaomi unveiled a 5G version of the Mi MIX 3 on 24 February 2019 at MWC 2019. The Mi MIX 3 5G's hardware remains mostly the same, however, it has a newer Snapdragon 855 processor, a Qualcomm X50 5G modem and a larger 3800 mAh battery. The Mi Mix 3 5G is also more expensive at 600 euros, or $680 (the regular Mi Mix 3 retails for 560 euros, or $535). It went on sale in May 2019, but is not available in Jade Green or Forbidden City Blue, and there is no longer a variant with 10 GB of RAM.
Specifications
Display- The Xiaomi Mi MIX 3 comes with a 6.4-inch 2340 x 1080 Full HD+ OLED panel with an aspect ratio of 19.5:9 and a screen-to-body ratio of 93.4%.
Processor- The Mi MIX 3 is powered by the Qualcomm Snapdragon 845 octa-core processor and the Adreno 630 GPU.
Camera- The Mi MIX 3 has 4 cameras, two at the front and two at the rear. The back of the phone sports a 12 MP Sony IMX363 sensor with f/1.8 aperture and 1.4-micron pixels, and a 12 MP Samsung S5K3M3+ sensor with f/2.4 aperture and 1-micron pixels. It also supports optical image stabilization (OIS) and 960 fps slow-motion video. There is a 24 MP Sony IMX576 sensor with 1.8-micron pixels and a 2 MP depth sensor with AI features on the front slider area of the device.
RAM and Storage- The Mi MIX 3 has 4 variants: 6 GB RAM/128 GB, 8 GB RAM/128 GB, 8 GB RAM/256 GB and 10 GB RAM/256 GB.
Battery- The Mi MIX 3 has a 3200 mAh battery with 10 W wireless charging.
Software- The Xiaomi Mi MIX 3 runs on MIUI 10 based on Android 9 Pie.
SIM:
4G variant: dual nano-SIMs, with simultaneous 4G standby and VoLTE HD on both SIMs.
5G variant: single nano SIM.
References
Android (operating system) devices
Phablets
Mobile phones introduced in 2018
Mobile phones with multiple rear cameras
Mobile phones with 4K video recording
Discontinued flagship smartphones
Xiaomi smartphones
Slider phones | Xiaomi Mi MIX 3 | Technology | 582 |
64,050,069 | https://en.wikipedia.org/wiki/The%20Geometry%20of%20Musical%20Rhythm | The Geometry of Musical Rhythm: What Makes a "Good" Rhythm Good? is a book on the mathematics of rhythms and drum beats. It was written by Godfried Toussaint, and published by Chapman & Hall/CRC in 2013 and in an expanded second edition in 2020. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries.
Author
Godfried Toussaint (1944–2019) was a Belgian–Canadian computer scientist who worked as a professor of computer science for McGill University and New York University. His main professional expertise was in computational geometry, but he was also a jazz drummer, held a long-term interest in the mathematics of music and musical rhythm, and since 2005 held an affiliation as a researcher in the Centre for Interdisciplinary Research in Music Media and Technology in the Schulich School of Music at McGill. In 2009 he visited Harvard University as a Radcliffe Fellow in advancement of his research in musical rhythm.
Topics
In order to study rhythms mathematically, Toussaint abstracts away many of their features that are important musically, involving the sounds or strengths of the individual beats, the phasing of the beats, hierarchically-structured rhythms, or the possibility of music that changes from one rhythm to another. The information that remains describes the beats of each bar (an evenly-spaced cyclic sequence of times) as being either on-beats (times at which a beat is emphasized in the musical performance) or off-beats (times at which it is skipped or performed only weakly). This can be represented combinatorially as a necklace, an equivalence class of binary sequences under rotations, with true binary values representing on-beats and false representing off-beats. Alternatively, Toussaint uses a geometric representation as a convex polygon, the convex hull of a subset of the vertices of a regular polygon, where the vertices of the hull represent times when a beat is performed; two rhythms are considered the same if the corresponding polygons are congruent.
As an example, reviewer William Sethares (himself a music theorist and engineer) presents a representation of this type for the tresillo rhythm, in which three beats are hit out of an eight-beat bar, with two long gaps and one short gap between each beat. The tresillo may be represented geometrically as an isosceles triangle, formed from three vertices of a regular octagon, with the two long sides and one short side of the triangle corresponding to the gaps between beats. In the figure, the conventional start to a tresillo bar, the beat before the first of its two longer gaps, is at the top vertex, and the chronological progression of beats corresponds to the clockwise ordering of vertices around the polygon.
The book uses this method to study and classify existing rhythms from world music, to analyze their mathematical properties (for instance, the fact that many of these rhythms have a spacing between their beats that, like the tresillo, is near-uniform but not exactly uniform), to devise algorithms that can generate similar nearly uniformly spaced beat patterns for arbitrary numbers of beats in the rhythm and in the bar, to measure the similarity between rhythms, to cluster rhythms into related groups using their similarities, and ultimately to try to capture the suitability of a rhythm for use in music by a mathematical formula.
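A small Python sketch of one such generator; the simple modular rule below is one common way to space k onsets as evenly as possible among n pulses, and it reproduces the Euclidean (Bjorklund-style) patterns the book develops, up to rotation. The function name is illustrative:

def euclidean_rhythm(onsets: int, pulses: int) -> str:
    """Spread `onsets` beats as evenly as possible over `pulses` time slots.

    Slot i gets an onset ('x') whenever the running multiple i*onsets wraps
    around modulo `pulses`; otherwise it is a rest ('.').
    """
    return "".join(
        "x" if (i * onsets) % pulses < onsets else "." for i in range(pulses)
    )

# Three onsets in an eight-pulse bar reproduces the tresillo discussed above:
print(euclidean_rhythm(3, 8))    # -> x..x..x.
# Five onsets in sixteen pulses gives a rotation of the bossa-nova-like pattern:
print(euclidean_rhythm(5, 16))   # -> x...x..x..x..x..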
Audience and reception
Toussaint has used this book as auxiliary material in introductory computer programming courses, to provide programming tasks for the students. It is accessible to readers without much background in mathematics or music theory, and Sethares writes that it "would make a great introduction to ideas from mathematics and computer science for the musically inspired student". Reviewer Russell Jay Hendel suggests that, as well as being read for pleasure, it could be a textbook for an advanced elective for a mathematics student, or a general education course in mathematics for non-mathematicians. Professionals in ethnomusicology, music history, the psychology of music, music theory, and musical composition may also find it of interest.
Despite concerns with some misused terminology, with "naïveté towards core music theory", and with a mismatch between the visual representation of rhythm and its aural perception, music theorist Mark Gotham calls the book "a substantial contribution to a field that still lags behind the more developed theoretical literature on pitch". And although reviewer Juan G. Escudero complains that the mathematical abstractions of the book misses many important aspects of music and musical rhythm, and that many rhythmic features of contemporary classical music have been overlooked, he concludes that "transdisciplinary efforts of this kind are necessary". Reviewer Ilhand Izmirli calls the book "delightful, informative, and innovative". Hendel adds that the book's presentation of its material as speculative and exploratory, rather than as definitive and completed, is "exactly what [mathematics] students need".
References
Rhythm and meter
Mathematics books
2013 non-fiction books
Chapman & Hall books | The Geometry of Musical Rhythm | Physics | 1,015 |
403,326 | https://en.wikipedia.org/wiki/66%20%28number%29 | 66 (sixty-six) is the natural number following 65 and preceding 67.
In mathematics
66 is a sphenic number, a semiperfect number, and an Erdős–Woods number.
In computing
66 (more specifically 66.667) megahertz (MHz) is a common divisor for the front side bus (FSB) speed, overall central processing unit (CPU) speed, and base bus speed. On a Core 2 CPU, and a Core 2 motherboard, the FSB is 1066 MHz (~16 × 66 MHz), the memory speed is usually 666.67 MHz (~10 × 66 MHz), and the processor speed ranges from 1.86 gigahertz (GHz) (~66 MHz × 28) to 2.93 GHz (~66 MHz × 44), in 266 MHz (~66 MHz × 4) increments.
In motor vehicle transportation
The designation of the historic U.S. Route 66, dubbed the "Mother Road" by novelist John Steinbeck, and other roads.
References
External links
Integers | 66 (number) | Mathematics | 222 |
67,840,660 | https://en.wikipedia.org/wiki/NGC%201351 | NGC 1351 is a lenticular galaxy in the constellation Fornax. It has a redshift of z=0.00505, and its distance from Earth can be estimated as 21 million parsecs (68 million light-years). It is elongated in shape, and was discovered by John Herschel on October 19, 1835.
The diameter of the galaxy is about 33 kpc, which makes it a medium-sized galaxy, smaller than the Milky Way. It is a member of the Fornax Cluster, a cluster of approximately 200 galaxies. The galaxy possesses a bright nucleus at its center.
It is currently receding from the solar system at a velocity of 1514 km/s, and 1410 km/s from the cosmic microwave background.
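As a rough consistency check, using the low-redshift approximation v ≈ cz and an assumed Hubble constant of about 70 km/s/Mpc:

v \approx cz = 0.00505 \times 299{,}792\ \mathrm{km/s} \approx 1.5\times 10^{3}\ \mathrm{km/s},
\qquad
d \approx \frac{v}{H_0} \approx \frac{1514\ \mathrm{km/s}}{70\ \mathrm{km/s\,Mpc^{-1}}} \approx 22\ \mathrm{Mpc},

in agreement with the recession velocity and the roughly 21 Mpc distance quoted above.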
See also
NGC 1399
NGC 1365
References
Elliptical galaxies
013028
Fornax
1351 | NGC 1351 | Astronomy | 178 |
27,250,860 | https://en.wikipedia.org/wiki/Open%20Blueprint | Open Blueprint was an IBM framework developed in the early 1990s (and released in March 1992) that provided a standard for connecting network computers. The open blueprint structure reduced redundancy by combining protocols.
References
IBM software | Open Blueprint | Technology | 46 |
52,852,412 | https://en.wikipedia.org/wiki/Octopus%20stone | The octopus stone, Taiko-ishi 蛸石 (also called "Drum Rock") is a large stone at Osaka Castle in Japan. The stone is near Sakura Gate.
It is one of the largest of several megaliths at the castle (by face area), at 5.5×11.7 meters and over . Its name is derived from the octopus shape visible on its lower left corner.
See also
List of individual rocks
References
Bibliography
Stones
Osaka Castle | Octopus stone | Physics | 96 |
1,735,228 | https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%93Lov%C3%A1sz%20lattice%20basis%20reduction%20algorithm | The Lenstra–Lenstra–Lovász (LLL) lattice basis reduction algorithm is a polynomial time lattice reduction algorithm invented by Arjen Lenstra, Hendrik Lenstra and László Lovász in 1982. Given a basis with n-dimensional integer coordinates, for a lattice L (a discrete subgroup of Rn) with , the LLL algorithm calculates an LLL-reduced (short, nearly orthogonal) lattice basis in time where is the largest length of under the Euclidean norm, that is, .
The original applications were to give polynomial-time algorithms for factorizing polynomials with rational coefficients, for finding simultaneous rational approximations to real numbers, and for solving the integer linear programming problem in fixed dimensions.
LLL reduction
The precise definition of LLL-reduced is as follows: Given a basis
define its Gram–Schmidt process orthogonal basis
and the Gram-Schmidt coefficients
for any .
Then the basis is LLL-reduced if there exists a parameter in such that the following holds:
(size-reduced) For . By definition, this property guarantees the length reduction of the ordered basis.
(Lovász condition) For k = 2,3,..,n .
Here, estimating the value of the parameter, we can conclude how well the basis is reduced. Greater values of lead to stronger reductions of the basis. Initially, A. Lenstra, H. Lenstra and L. Lovász demonstrated the LLL-reduction algorithm for . Note that although LLL-reduction is well-defined for , the polynomial-time complexity is guaranteed only for in .
The LLL algorithm computes LLL-reduced bases. There is no known efficient algorithm to compute a basis in which the basis vectors are as short as possible for lattices of dimensions greater than 4. However, an LLL-reduced basis is nearly as short as possible, in the sense that there are absolute bounds such that the first basis vector is no more than times as long as a shortest vector in the lattice,
the second basis vector is likewise within of the second successive minimum, and so on.
Applications
An early successful application of the LLL algorithm was its use by Andrew Odlyzko and Herman te Riele in disproving the Mertens conjecture.
The LLL algorithm has found numerous other applications in MIMO detection algorithms and cryptanalysis of public-key encryption schemes: knapsack cryptosystems, RSA with particular settings, NTRUEncrypt, and so forth. The algorithm can be used to find integer solutions to many problems.
In particular, the LLL algorithm forms a core of one of the integer relation algorithms. For example, if it is believed that r=1.618034 is a (slightly rounded) root of an unknown quadratic equation with integer coefficients, one may apply LLL reduction to the lattice in spanned by and . The first vector in the reduced basis will be an integer linear combination of these three, thus necessarily of the form ; but such a vector is "short" only if a, b, c are small and is even smaller. Thus the first three entries of this short vector are likely to be the coefficients of the integral quadratic polynomial which has r as a root. In this example the LLL algorithm finds the shortest vector to be [1, -1, -1, 0.00025] and indeed x^2 − x − 1 has a root equal to the golden ratio, 1.6180339887....
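A sketch in Python of one concrete way to set up such an integer-relation lattice; the scaling factor 10000 used here is an assumption, chosen so that the residual r^2 − r − 1 shows up at the 0.00025 scale quoted above. The rows can then be fed to any LLL routine, such as the sketch given after the pseudocode below:

from fractions import Fraction

r = Fraction("1.618034")   # the suspected algebraic number, slightly rounded
scale = 10000              # assumed scaling of the "value" coordinate

# The rows span a lattice in Q^4; a short vector with entries (a, b, c, tiny)
# encodes integer coefficients for which a*r^2 + b*r + c is nearly zero.
basis = [
    [1, 0, 0, scale * r * r],
    [0, 1, 0, scale * r],
    [0, 0, 1, scale],
]

# The combination (1, -1, -1) already shows why [1, -1, -1, 0.00025] is short:
print(float(scale * (r * r - r - 1)))   # about 0.00025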
Properties of LLL-reduced basis
Let be a -LLL-reduced basis of a lattice . From the definition of LLL-reduced basis, we can derive several other useful properties about .
The first vector in the basis cannot be much larger than the shortest non-zero vector: . In particular, for , this gives .
The first vector in the basis is also bounded by the determinant of the lattice: . In particular, for , this gives .
The product of the norms of the vectors in the basis cannot be much larger than the determinant of the lattice: let , then .
LLL algorithm pseudocode
The following description is based on , with the corrections from the errata.
INPUT
a lattice basis b1, b2, ..., bn in Zm
a parameter δ with 1/4 < δ < 1, most commonly δ = 3/4
PROCEDURE
B* <- GramSchmidt({b1, ..., bn}) = {b1*, ..., bn*}; and do not normalize
μi,j <- InnerProduct(bi, bj*)/InnerProduct(bj*, bj*); using the most current values of bi and bj*
k <- 2;
while k <= n do
for j from k−1 to 1 do
if |μk,j| > 1/2 then
bk <- bk − ⌊μk,j⌉bj;
Update B* and the related μi,j's as needed.
(The naive method is to recompute B* whenever bi changes:
B* <- GramSchmidt({b1, ..., bn}) = {b1*, ..., bn*})
end if
end for
if InnerProduct(bk*, bk*) > (δ − μ2k,k−1) InnerProduct(bk−1*, bk−1*) then
k <- k + 1;
else
Swap bk and bk−1;
Update B* and the related μi,j's as needed.
k <- max(k−1, 2);
end if
end while
return B the LLL reduced basis of {b1, ..., bn}
OUTPUT
the reduced basis b1, b2, ..., bn in Zm
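A direct Python transcription of the pseudocode above, kept deliberately naive (exact rational arithmetic with fractions, and a full Gram–Schmidt recomputation after every basis change, exactly as the naive update rule suggests); this is a sketch for experimentation, not an optimized implementation such as fpLLL or NTL, and it assumes the input rows are linearly independent:

from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Return the unnormalized Gram-Schmidt vectors b_i* and the
    coefficients mu[i][j] = <b_i, b_j*> / <b_j*, b_j*>."""
    n = len(basis)
    ortho = []
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = list(basis[i])
        for j in range(i):
            mu[i][j] = dot(basis[i], ortho[j]) / dot(ortho[j], ortho[j])
            v = [vi - mu[i][j] * oj for vi, oj in zip(v, ortho[j])]
        ortho.append(v)
    return ortho, mu

def lll(basis, delta=Fraction(3, 4)):
    """LLL-reduce a list of basis vectors (given as rows) with parameter delta."""
    b = [[Fraction(x) for x in row] for row in basis]
    n = len(b)
    ortho, mu = gram_schmidt(b)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size reduction
            q = round(mu[k][j])                 # nearest integer to mu[k][j]
            if q != 0:
                b[k] = [bk - q * bj for bk, bj in zip(b[k], b[j])]
                ortho, mu = gram_schmidt(b)     # naive full recomputation
        lhs = dot(ortho[k], ortho[k])
        rhs = (delta - mu[k][k - 1] ** 2) * dot(ortho[k - 1], ortho[k - 1])
        if lhs >= rhs:                          # Lovasz condition holds
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]     # swap and step back
            ortho, mu = gram_schmidt(b)
            k = max(k - 1, 1)
    return b

# Example usage: reduce a small integer basis; the returned rows are short,
# nearly orthogonal vectors spanning the same lattice.
reduced = lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
print([[int(x) for x in row] for row in reduced])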
Examples
Example from Z3
Let a lattice basis , be given by the columns of
then the reduced basis is
which is size-reduced, satisfies the Lovász condition, and is hence LLL-reduced, as described above. See W. Bosma for details of the reduction process.
Example from Z[i]4
Likewise, for the basis over the complex integers given by the columns of the matrix below,
then the columns of the matrix below give an LLL-reduced basis.
Implementations
LLL is implemented in
Arageli as the function lll_reduction_int
fpLLL as a stand-alone implementation
FLINT as the function fmpz_lll
GAP as the function LLLReducedBasis
Macaulay2 as the function LLL in the package LLLBases
Magma as the functions LLL and LLLGram (taking a gram matrix)
Maple as the function IntegerRelations[LLL]
Mathematica as the function LatticeReduce
Number Theory Library (NTL) as the function LLL
PARI/GP as the function qflll
Pymatgen as the function analysis.get_lll_reduced_lattice
SageMath as the method LLL driven by fpLLL and NTL
Isabelle/HOL in the 'archive of formal proofs' entry LLL_Basis_Reduction. This code exports to efficiently executable Haskell.
See also
Coppersmith method
Notes
References
Theory of cryptography
Computational number theory
Lattice points | Lenstra–Lenstra–Lovász lattice basis reduction algorithm | Mathematics | 1,492 |
20,396,782 | https://en.wikipedia.org/wiki/Thymidine%20kinase%20from%20herpesvirus | Thymidine kinase from herpesvirus is a sub-family of thymidine kinases that catalyses the transfer of phospho group of ATP to thymidine to generate thymidine monophosphate, which serves as a substrate during viral DNA replication.
Its presence in herpesvirus-infected cells is used to activate a range of antivirals against herpes infection, and thus specifically target the therapy towards infected cells only.
Such antivirals include:
Purine analogues of guanine: Aciclovir, Famciclovir, Ganciclovir, Penciclovir, Valaciclovir, Valganciclovir
Vidarabine
Pyrimidine analogues of uridine: Idoxuridine, Trifluridine
Brivudine
Mutations in the gene coding thymidine kinase in herpes viruses can endow the virus with resistance to aciclovir. In these situations, alternative medications that are of use include other guanine analogues such as famciclovir, valaciclovir and penciclovir.
References
Protein families
Viral enzymes | Thymidine kinase from herpesvirus | Chemistry,Biology | 235 |
45,690,648 | https://en.wikipedia.org/wiki/Respiratory%20droplet | A respiratory droplet is a small aqueous droplet produced by exhalation, consisting of saliva or mucus and other matter derived from respiratory tract surfaces. Respiratory droplets are produced naturally as a result of breathing, speaking, sneezing, coughing, or vomiting, so they are always present in our breath, but speaking and coughing increase their number.
Droplet sizes range from < 1 μm to 1000 μm, and in typical breath there are around 100 droplets per litre of breath. So for a breathing rate of 10 litres per minute this means roughly 1000 droplets per minute, the vast majority of which are a few micrometres across or smaller. As these droplets are suspended in air, they are all by definition aerosols. However, large droplets (larger than about 100 μm, but depending on conditions) rapidly fall to the ground or another surface and so are only briefly suspended, while droplets much smaller than 100 μm (which is most of them) fall only slowly and so form aerosols with lifetimes of minutes or more, or at intermediate size, may initially travel like aerosols but at a distance fall to the ground like droplets ("jet riders").
These droplets can contain infectious bacterial cells or virus particles, so they are important factors in the transmission of respiratory diseases. In some cases, in the study of disease transmission a distinction is made between what are called "respiratory droplets" and what are called "aerosols", with only larger droplets referred to as "respiratory droplets" and smaller ones referred to as "aerosols", but this arbitrary distinction has never been supported experimentally or theoretically, and is not consistent with the standard definition of an aerosol.
Description
Respiratory droplets from humans include various cell types (e.g. epithelial cells and cells of the immune system), physiological electrolytes contained in mucus and saliva (e.g. Na+, K+, Cl−), and, potentially, various pathogens.
Droplets that dry in the air become droplet nuclei which float as aerosols and can remain suspended in air for considerable periods of time.
The traditional hard size cutoff of 5 μm between airborne and respiratory droplets has been criticized as a false dichotomy not grounded in science, as exhaled particles form a continuum of sizes whose fates depend on environmental conditions in addition to their initial sizes. However, it has informed hospital based transmission based precautions for decades.
Formation
Respiratory droplets can be produced in many ways. They can be produced naturally as a result of breathing, talking, sneezing, coughing, or singing. They can also be artificially generated in a healthcare setting through aerosol-generating procedures such as intubation, cardiopulmonary resuscitation (CPR), bronchoscopy, surgery, and autopsy. Similar droplets may be formed through vomiting, flushing toilets, wet-cleaning surfaces, showering or using tap water, or spraying graywater for agricultural purposes.
Depending on the method of formation, respiratory droplets may also contain salts, cells, and virus particles. In the case of naturally produced droplets, they can originate from different locations in the respiratory tract, which may affect their content. There may also be differences between healthy and diseased individuals in their mucus content, quantity, and viscosity that affects droplet formation.
Transport
Different methods of formation create droplets of different size and initial speed, which affect their transport and fate in the air. As described by the Wells curve, the largest droplets fall sufficiently fast that they usually settle to the ground or another surface before drying out, and droplets smaller than 100 μm will rapidly dry out, before settling on a surface.
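As a rough illustration of this size dependence, a simple Stokes-law estimate (assuming spherical water droplets settling in still room-temperature air, and neglecting evaporation and slip corrections) shows why droplets of about 100 μm settle within seconds while micrometre-sized droplets can stay aloft for hours:
# Terminal settling velocity of a small sphere under Stokes' law:
# v = (2/9) * (rho_droplet - rho_air) * g * r^2 / mu
RHO_WATER = 1000.0   # kg/m^3, droplet density (assumed pure water)
RHO_AIR = 1.2        # kg/m^3
G = 9.81             # m/s^2
MU_AIR = 1.8e-5      # Pa*s, dynamic viscosity of air at about 20 degC

def settling_velocity(diameter_m):
    r = diameter_m / 2.0
    return 2.0 / 9.0 * (RHO_WATER - RHO_AIR) * G * r ** 2 / MU_AIR

for d_um in (1, 10, 100):
    v = settling_velocity(d_um * 1e-6)
    # time to fall 1.5 m (roughly mouth height) through still air
    print(f"{d_um:>4} um droplet: v = {v:.2e} m/s, time to fall 1.5 m = {1.5 / v:.0f} s")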
Once dry, they become solid droplet nuclei consisting of the non-volatile matter initially in the droplet. Respiratory droplets can also interact with other particles of non-biological origin in the air, which are more numerous than them. When people are in close contact, liquid droplets produced by one person may be inhaled by another person; droplets larger than 10 μm tend to remain trapped in the nose and throat while smaller droplets will penetrate to the lower respiratory system.
Computational fluid dynamics (CFD) simulations have shown that, at wind speeds varying from 4 to 15 km/h, respiratory droplets may travel up to 6 meters.
Role in disease transmission
A common form of disease transmission is by way of respiratory droplets, generated by coughing, sneezing, or talking. Respiratory droplet transmission is the usual route for respiratory infections. Transmission can occur when respiratory droplets reach susceptible mucosal surfaces, such as in the eyes, nose or mouth. This can also happen indirectly via contact with contaminated surfaces when hands then touch the face. Respiratory droplets are large and cannot remain suspended in the air for long, and are usually dispersed over short distances.
Viruses spread by droplet transmission include influenza virus, rhinovirus, respiratory syncytial virus, enterovirus, and norovirus; measles morbillivirus; and coronaviruses such as SARS coronavirus (SARS-CoV-1) and SARS-CoV-2 that causes COVID-19. Bacterial and fungal infection agents may also be transmitted by respiratory droplets. By contrast, a limited number of diseases can be spread through airborne transmission after the respiratory droplet dries out. Humans continuously breathe out such droplets, and in addition some medical procedures, known as aerosol-generating medical procedures, also generate them.
Ambient temperature and humidity affect the survivability of bioaerosols because as the droplet evaporates and becomes smaller, it provides less protection for the infectious agents it may contain. In general, viruses with a lipid envelope are more stable in dry air, while those without an envelope are more stable in moist air. Viruses are also generally more stable at low air temperatures.
Measures taken to reduce transmission
In a healthcare setting, precautions include housing a patient in an individual room, limiting their transport outside the room and using proper personal protective equipment. It has been noted that during the 2002–2004 SARS outbreak, use of surgical masks and N95 respirators tended to decrease infections of healthcare workers. However, surgical masks are much less effective at filtering out small droplets and particles than N95 and similar respirators, so the respirators offer greater protection.
Also, higher ventilation rates can be used as a hazard control to dilute and remove respiratory particles. However, if unfiltered or insufficiently filtered air is exhausted to another location, it can lead to spreading of an infection.
History
German bacteriologist Carl Flügge in 1899 was the first to show that microorganisms in droplets expelled from the respiratory tract are a means of disease transmission. In the early 20th century, the term Flügge droplet was sometimes used for particles that are large enough to not completely dry out, roughly those larger than 100 μm.
Flügge's concept of droplets as primary source and vector for respiratory transmission of diseases prevailed into the 1930s until William F. Wells differentiated between large and small droplets. He developed the Wells curve, which describes how the size of respiratory droplets influences their fate and thus their ability to transmit disease.
See also
Basic reproduction number
Source control (respiratory disease)
References
Disease transmission
Particulates | Respiratory droplet | Chemistry | 1,484 |
1,038,753 | https://en.wikipedia.org/wiki/Cut%20rule | In mathematical logic, the cut rule is an inference rule of sequent calculus. It is a generalisation of the classical modus ponens inference rule. Its meaning is that, if a formula A appears as a conclusion in one proof and as a hypothesis in another, then another proof in which the formula A does not appear can be deduced. This applies to cases of modus ponens, such as how instances of "man" are eliminated from "Every man is mortal" and "Socrates is a man" to deduce "Socrates is mortal".
Formal notation
It is normally written in sequent calculus notation as
Γ ⊢ Δ, A    A, Σ ⊢ Π
─────────────────────  (cut)
Γ, Σ ⊢ Δ, Π
Elimination
The cut rule is the subject of an important theorem, the cut-elimination theorem. It states that any sequent that has a proof in the sequent calculus making use of the cut rule also has a cut-free proof, that is, a proof that does not make use of the cut rule.
References
Rules of inference
Logical calculi | Cut rule | Mathematics | 196 |
445,618 | https://en.wikipedia.org/wiki/IEEE%20802.7 | IEEE 802.7 is a sub-standard of the IEEE 802 which covers broadband local area networks. The working group did issue a recommendation in 1989, but is currently inactive and in hibernation.
IEEE 802.07
Working groups | IEEE 802.7 | Technology | 48 |
16,714 | https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov%20test | In statistics, the Kolmogorov–Smirnov test (also K–S test or KS test) is a nonparametric test of the equality of continuous (or discontinuous, see Section 2.2), one-dimensional probability distributions. It can be used to test whether a sample came from a given reference probability distribution (one-sample K–S test), or to test whether two samples came from the same distribution (two-sample K–S test). Intuitively, it provides a method to qualitatively answer the question "How likely is it that we would see a collection of samples like this if they were drawn from that probability distribution?" or, in the second case, "How likely is it that we would see two sets of samples like this if they were drawn from the same (but unknown) probability distribution?".
It is named after Andrey Kolmogorov and Nikolai Smirnov.
The Kolmogorov–Smirnov statistic quantifies a distance between the empirical distribution function of the sample and the cumulative distribution function of the reference distribution, or between the empirical distribution functions of two samples. The null distribution of this statistic is calculated under the null hypothesis that the sample is drawn from the reference distribution (in the one-sample case) or that the samples are drawn from the same distribution (in the two-sample case). In the one-sample case, the distribution considered under the null hypothesis may be continuous (see Section 2), purely discrete or mixed (see Section 2.2). In the two-sample case (see Section 3), the distribution considered under the null hypothesis is a continuous distribution but is otherwise unrestricted. However, the two sample test can also be performed under more general conditions that allow for discontinuity, heterogeneity and dependence across samples.
The two-sample K–S test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples.
The Kolmogorov–Smirnov test can be modified to serve as a goodness of fit test. In the special case of testing for normality of the distribution, samples are standardized and compared with a standard normal distribution. This is equivalent to setting the mean and variance of the reference distribution equal to the sample estimates, and it is known that using these to define the specific reference distribution changes the null distribution of the test statistic (see Test with estimated parameters). Various studies have found that, even in this corrected form, the test is less powerful for testing normality than the Shapiro–Wilk test or Anderson–Darling test. However, these other tests have their own disadvantages. For instance the Shapiro–Wilk test is known not to work well in samples with many identical values.
One-sample Kolmogorov–Smirnov statistic
The empirical distribution function Fn for n independent and identically distributed (i.i.d.) ordered observations Xi is defined as
Fn(x) = (1/n) · #{ i : Xi ≤ x } = (1/n) Σ_{i=1}^{n} 1_{(−∞, x]}(Xi),
where 1_{(−∞, x]}(Xi) is the indicator function, equal to 1 if Xi ≤ x and equal to 0 otherwise.
The Kolmogorov–Smirnov statistic for a given cumulative distribution function F(x) is
Dn = sup_x |Fn(x) − F(x)|,
where sup_x is the supremum of the set of distances. Intuitively, the statistic takes the largest absolute difference between the two distribution functions across all x values.
By the Glivenko–Cantelli theorem, if the sample comes from distribution F(x), then Dn converges to 0 almost surely in the limit when n goes to infinity. Kolmogorov strengthened this result, by effectively providing the rate of this convergence (see Kolmogorov distribution). Donsker's theorem provides a yet stronger result.
In practice, the statistic requires a relatively large number of data points (in comparison to other goodness of fit criteria such as the Anderson–Darling test statistic) to properly reject the null hypothesis.
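As a rough illustration, the statistic can be computed directly from sorted data and cross-checked against SciPy's scipy.stats.kstest; the simulated sample and the standard-normal reference distribution here are arbitrary example choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)          # sample to be tested
x_sorted = np.sort(x)
cdf = stats.norm.cdf(x_sorted)    # reference CDF F(x) evaluated at the ordered data

# D_n = sup_x |F_n(x) - F(x)|; the supremum is attained just before or at a jump of F_n
n = len(x_sorted)
d_plus = np.max(np.arange(1, n + 1) / n - cdf)
d_minus = np.max(cdf - np.arange(0, n) / n)
d_n = max(d_plus, d_minus)

print("hand-computed D_n:", d_n)
print("scipy kstest:     ", stats.kstest(x, "norm"))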
Kolmogorov distribution
The Kolmogorov distribution is the distribution of the random variable
K = sup_{t ∈ [0, 1]} |B(t)|,
where B(t) is the Brownian bridge. The cumulative distribution function of K is given by
Pr(K ≤ x) = 1 − 2 Σ_{k=1}^{∞} (−1)^{k−1} e^{−2k²x²},
which can also be expressed in terms of the Jacobi theta function. Both the form of the Kolmogorov–Smirnov test statistic and its asymptotic distribution under the null hypothesis were published by Andrey Kolmogorov, while a table of the distribution was published by Nikolai Smirnov. Recurrence relations for the distribution of the test statistic in finite samples are available.
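As a small numerical sketch, truncating this alternating series after a few terms and inverting it by bisection recovers the familiar critical values; the truncation length and bisection bounds below are arbitrary choices.
import math

def kolmogorov_cdf(x, terms=100):
    # Pr(K <= x) = 1 - 2 * sum_{k>=1} (-1)^(k-1) * exp(-2 k^2 x^2)
    if x <= 0:
        return 0.0
    s = sum((-1) ** (k - 1) * math.exp(-2 * k * k * x * x)
            for k in range(1, terms + 1))
    return 1.0 - 2.0 * s

def k_alpha(alpha, lo=0.0, hi=3.0):
    # Find K_alpha with Pr(K <= K_alpha) = 1 - alpha by bisection.
    for _ in range(60):
        mid = (lo + hi) / 2
        if kolmogorov_cdf(mid) < 1 - alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(k_alpha(0.05))   # approximately 1.358
print(k_alpha(0.01))   # approximately 1.628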
Under the null hypothesis that the sample comes from the hypothesized distribution F(x),
√n · Dn → sup_t |B(F(t))|
in distribution, where B(t) is the Brownian bridge. If F is continuous then under the null hypothesis √n · Dn converges to the Kolmogorov distribution, which does not depend on F. This result may also be known as the Kolmogorov theorem.
The accuracy of this limit as an approximation to the exact cdf of when is finite is not very impressive: even when , the corresponding maximum error is about ; this error increases to when and to a totally unacceptable when . However, a very simple expedient of replacing by
in the argument of the Jacobi theta function reduces these errors to
, , and respectively; such accuracy would be usually considered more than adequate for all practical applications.
The goodness-of-fit test or the Kolmogorov–Smirnov test can be constructed by using the critical values of the Kolmogorov distribution. This test is asymptotically valid as n goes to infinity. It rejects the null hypothesis at level α if
√n · Dn > Kα,
where Kα is found from
Pr(K ≤ Kα) = 1 − α.
The asymptotic power of this test is 1.
Fast and accurate algorithms to compute the cdf of Dn or its complement for arbitrary n and x are available:
for continuous null distributions, with code in C and Java;
for purely discrete, mixed or continuous null distributions, implemented in the KSgeneral package of the R project for statistical computing, which for a given sample also computes the KS test statistic and its p-value. An alternative C++ implementation is also available.
Test with estimated parameters
If either the form or the parameters of F(x) are determined from the data Xi, the critical values determined in this way are invalid. In such cases, Monte Carlo or other methods may be required, but tables have been prepared for some cases. Details for the required modifications to the test statistic and for the critical values for the normal distribution and the exponential distribution have been published, and later publications also include the Gumbel distribution. The Lilliefors test represents a special case of this for the normal distribution. The logarithm transformation may help to overcome cases where the Kolmogorov test data does not seem to fit the assumption that it came from the normal distribution.
Using estimated parameters, the question arises which estimation method should be used. Usually this would be the maximum likelihood method, but e.g. for the normal distribution MLE has a large bias error on sigma. Using a moment fit or KS minimization instead has a large impact on the critical values, and also some impact on test power. If we need to decide for Student-T data with df = 2 via KS test whether the data could be normal or not, then a ML estimate based on H0 (data is normal, so using the standard deviation for scale) would give much larger KS distance, than a fit with minimum KS. In this case we should reject H0, which is often the case with MLE, because the sample standard deviation might be very large for T-2 data, but with KS minimization we may get still a too low KS to reject H0. In the Student-T case, a modified KS test with KS estimate instead of MLE, makes the KS test indeed slightly worse. However, in other cases, such a modified KS test leads to slightly better test power.
Discrete and mixed null distribution
Under the assumption that is non-decreasing and right-continuous, with countable (possibly infinite) number of jumps, the KS test statistic can be expressed as:
From the right-continuity of F(x), it follows that the distribution of Dn depends on the null distribution F(x), i.e., it is no longer distribution-free as in the continuous case. Therefore, a fast and accurate method has been developed to compute the exact and asymptotic distribution of Dn when F(x) is purely discrete or mixed, implemented in C++ and in the KSgeneral package of the R language. The functions disc_ks_test(), mixed_ks_test() and cont_ks_test() also compute the KS test statistic and p-values for purely discrete, mixed or continuous null distributions and arbitrary sample sizes. The KS test and its p-values for discrete null distributions and small sample sizes are also computed as part of the dgof package of the R language. Major statistical packages, among which SAS PROC NPAR1WAY and Stata ksmirnov, implement the KS test under the assumption that F(x) is continuous, which is more conservative if the null distribution is actually not continuous.
Two-sample Kolmogorov–Smirnov test
The Kolmogorov–Smirnov test may also be used to test whether two underlying one-dimensional probability distributions differ. In this case, the Kolmogorov–Smirnov statistic is
D_{n,m} = sup_x |F_{1,n}(x) − F_{2,m}(x)|,
where F_{1,n} and F_{2,m} are the empirical distribution functions of the first and the second sample respectively, and sup_x is the supremum function.
For large samples, the null hypothesis is rejected at level α if
D_{n,m} > c(α) · √((n + m)/(n·m)),
where n and m are the sizes of the first and second sample respectively. The value of c(α) for the most common levels of α is approximately 1.22 for α = 0.10, 1.36 for α = 0.05 and 1.63 for α = 0.01, and in general it is given by
c(α) = √(−ln(α/2) · 1/2),
so that the condition reads
D_{n,m} > √(−ln(α/2) · (n + m)/(2·n·m)).
Here, again, the larger the sample sizes, the more sensitive the minimal bound: For a given ratio of sample sizes (e.g. ), the minimal bound scales in the size of either of the samples according to its inverse square root.
Note that the two-sample test checks whether the two data samples come from the same distribution. This does not specify what that common distribution is (e.g. whether it's normal or not normal). Again, tables of critical values have been published. A shortcoming of the univariate Kolmogorov–Smirnov test is that it is not very powerful because it is devised to be sensitive against all possible types of differences between two distribution functions. Some argue that the Cucconi test, originally proposed for simultaneously comparing location and scale, can be much more powerful than the Kolmogorov–Smirnov test when comparing two distribution functions.
Two-sample KS tests have been applied in economics to detect asymmetric effects and to study natural experiments.
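A brief usage sketch with SciPy's scipy.stats.ks_2samp follows; the synthetic samples and the 5% level are arbitrary example choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(loc=0.0, scale=1.0, size=300)
b = rng.normal(loc=0.3, scale=1.0, size=250)   # sample shifted in location

res = stats.ks_2samp(a, b)
print("D_{n,m} =", res.statistic, " p-value =", res.pvalue)

# Asymptotic rejection threshold at level alpha = 0.05:
# c(alpha) * sqrt((n + m) / (n * m)) with c(0.05) = sqrt(-ln(0.025) / 2), about 1.36
n, m = len(a), len(b)
c = np.sqrt(-np.log(0.05 / 2) / 2)
print("reject at the 5% level if D exceeds", c * np.sqrt((n + m) / (n * m)))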
Setting confidence limits for the shape of a distribution function
While the Kolmogorov–Smirnov test is usually used to test whether a given F(x) is the underlying probability distribution of Fn(x), the procedure may be inverted to give confidence limits on F(x) itself. If one chooses a critical value of the test statistic Dα such that P(Dn > Dα) = α, then a band of width ±Dα around Fn(x) will entirely contain F(x) with probability 1 − α.
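One simple way to construct such a band is sketched below; as an assumption, it uses the conservative Dvoretzky–Kiefer–Wolfowitz bound rather than exact Kolmogorov critical values, which gives a slightly wider band.
import numpy as np

def ecdf_band(sample, alpha=0.05):
    # Return sorted x, F_n(x) and a (1 - alpha) confidence band via the DKW inequality.
    x = np.sort(sample)
    n = len(x)
    fn = np.arange(1, n + 1) / n
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))   # DKW band half-width
    lower = np.clip(fn - eps, 0.0, 1.0)
    upper = np.clip(fn + eps, 0.0, 1.0)
    return x, fn, lower, upper

rng = np.random.default_rng(2)
x, fn, lo, hi = ecdf_band(rng.exponential(size=100))
print("band half-width for n = 100:", hi[0] - fn[0])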
The Kolmogorov–Smirnov statistic in more than one dimension
A distribution-free multivariate Kolmogorov–Smirnov goodness of fit test has been proposed by Justel, Peña and Zamar (1997). The test uses a statistic which is built using Rosenblatt's transformation, and an algorithm is developed to compute it in the bivariate case. An approximate test that can be easily computed in any dimension is also presented.
The Kolmogorov–Smirnov test statistic needs to be modified if a similar test is to be applied to multivariate data. This is not straightforward because the maximum difference between two joint cumulative distribution functions is not generally the same as the maximum difference of any of the complementary distribution functions. Thus the maximum difference will differ depending on which of Pr(X < x ∧ Y < y) or Pr(X < x ∧ Y > y) or any of the other two possible arrangements is used. One might require that the result of the test used should not depend on which choice is made.
One approach to generalizing the Kolmogorov–Smirnov statistic to higher dimensions which meets the above concern is to compare the cdfs of the two samples with all possible orderings, and take the largest of the set of resulting KS statistics. In d dimensions, there are 2d − 1 such orderings. One such variation is due to Peacock
(see also Gosset
for a 3D version)
and another to Fasano and Franceschini (see Lopes et al. for a comparison and computational details). Critical values for the test statistic can be obtained by simulations, but depend on the dependence structure in the joint distribution.
In one dimension, the Kolmogorov–Smirnov statistic is identical to the so-called star discrepancy D, so another native KS extension to higher dimensions would be simply to use D also for higher dimensions. Unfortunately, the star discrepancy is hard to calculate in high dimensions.
In 2021 the functional form of the multivariate KS test statistic was proposed, which simplified the problem of estimating the tail probabilities of the multivariate KS test statistic, which is needed for the statistical test. For the multivariate case, if Fi is the ith continuous marginal from a probability distribution with k variables, then
so the limiting distribution does not depend on the marginal distributions.
Implementations
The Kolmogorov–Smirnov test is implemented in many software programs. Most of these implement both the one and two sampled test.
Mathematica has KolmogorovSmirnovTest.
MATLAB's Statistics Toolbox has kstest and kstest2 for one-sample and two-sample Kolmogorov–Smirnov tests, respectively.
The R package "KSgeneral" computes the KS test statistics and its p-values under arbitrary, possibly discrete, mixed or continuous null distribution.
R's statistics base-package implements the test as ks.test {stats} in its "stats" package.
SAS implements the test in its PROC NPAR1WAY procedure.
In Python, the SciPy package implements the test in the scipy.stats.kstest function.
SYSTAT (SPSS Inc., Chicago, IL)
Java has an implementation of this test provided by Apache Commons.
KNIME has a node implementing this test based on the above Java implementation.
Julia has the package HypothesisTests.jl with the function ExactOneSampleKSTest(x::AbstractVector{<:Real}, d::UnivariateDistribution).
StatsDirect (StatsDirect Ltd, Manchester, UK) implements all common variants.
Stata (Stata Corporation, College Station, TX) implements the test in ksmirnov (Kolmogorov–Smirnov equality-of-distributions test) command.
PSPP implements the test in its KOLMOGOROV-SMIRNOV (or using KS shortcut function).
The Real Statistics Resource Pack for Excel runs the test as KSCRIT and KSPROB.
See also
Lepage test
Cucconi test
Kuiper's test
Shapiro–Wilk test
Anderson–Darling test
Cramér–von Mises test
Wasserstein metric
References
Further reading
External links
Short introduction
KS test explanation
JavaScript implementation of one- and two-sided tests
Online calculator with the KS test
Open-source C++ code to compute the Kolmogorov distribution and perform the KS test
Paper on Evaluating Kolmogorov's Distribution; contains C implementation. This is the method used in Matlab.
Paper on Computing the Two-Sided Kolmogorov–Smirnov Distribution; computing the cdf of the KS statistic in C or Java.
Paper powerlaw: A Python Package for Analysis of Heavy-Tailed Distributions; Jeff Alstott, Ed Bullmore, Dietmar Plenz. Among others, it also performs the Kolmogorov–Smirnov test. Source code and installers of powerlaw package are available at PyPi.
Statistical distance
Nonparametric statistics
Normality tests | Kolmogorov–Smirnov test | Physics | 3,369 |
55,557,578 | https://en.wikipedia.org/wiki/Hydrogen-deficient%20star | A hydrogen-deficient star is a type of star that has little or no hydrogen in its atmosphere.
Hydrogen deficiency is unusual in a star, as hydrogen is typically the most common element in a stellar atmosphere. Despite being rare, there are a variety of star types that display a hydrogen deficiency.
Observational history
Hydrogen-deficient stars had been noted prior to the discovery of their hydrogen deficiency. In 1797, Edward Pigott noted the profound variation in stellar magnitude of R Coronae Borealis (R CrB).
In 1867, Charles Wolf and Georges Rayet discovered unusual emission line structure in Wolf-Rayet stars.
Hydrogen deficiency in a star was first discovered in 1891 by Williamina Fleming, where she stated “the spectrum of υ Sgr is remarkable since the hydrogen lines are very faint and of the same intensity as the additional dark lines”. In 1906, Hans Ludendorff found that Hγ Balmer spectral lines were absent in R CrB.
It was widely believed at the time that all stellar atmospheres contain hydrogen, so these observations were discounted. Not until quantitative spectral measurements became available in 1935-1940 did astronomers begin to accept that stars such as R CrB and υ Sgr were hydrogen deficient. As of 1970, relatively few of these stars were known. Large-scale stellar surveys since then have greatly increased the number and variety of known hydrogen-deficient stars. As of 2008, about 2,000 hydrogen-deficient stars were known.
Classification
Despite being relatively rare, there are many different types of hydrogen-deficient stars. They can be grouped into five general classes: massive or upper-main-sequence stars, low-mass supergiants, hot subdwarf stars, central stars of planetary nebulae, and white dwarfs. There have been other classification schemes, such as one based on carbon content.
Massive stars
Wolf-Rayet stars show bright bands in continuous spectra that come from ionized atoms such as helium. Although there was some controversy, these were accepted as hydrogen-deficient stars in the 1980s. Helium-rich B stars, such as σ Orionis E, are chemically unusual spectral B or OB main sequence stars that show strong neutral helium lines. Hydrogen-deficient binaries, such as υ Sgr, have helium lines on a metallic spectrum and show large radial velocities that are thought to result from Population I stars orbiting the Galactic Center. Type Ib and Ic supernovae show no hydrogen absorption lines and are associated with stars that have lost their hydrogen envelope through supernova core collapse.
Low-mass supergiants
This type of hydrogen-deficient star occurs at late stages of stellar evolution. R CrB stars are hydrogen-deficient, carbon-rich stars that are notable for their light variation; they may dim by five stellar magnitudes over a period of days, then recover. These dimming events likely arise from stellar surface dynamics, rather than their exceptional chemical composition. Extreme helium stars have absent hydrogen emission or absorption lines, but have strong neutral helium lines and strong CII and NII lines. Born-again stars are stars that evolve over a period of years to migrate between the post-AGB and AGB regions of the Hertzsprung–Russell diagram. For example, Sakurai’s Object (V4334 Sgr) evolved from a faint blue star in 1994 to a yellow supergiant in 1996. One proposed mechanism for this migration is the final helium flash scenario.
Hot subdwarfs
He-sdB are subdwarfs with class B spectra with broader than usual H, HeI, and HeII lines. JL 87 in 1991 was the first He-sdB star to be reported. Since then this class of stars has been shown to have a wide range of hydrogen-to-helium ratios. Compact He-sdO stars have class O spectra, are typically nitrogen-rich, and may or may not be carbon-rich. Low-gravity He-sdO stars overlap with their compact cousins, but have lower surface gravity. It is hypothesized that R CrB and extreme Helium stars, if they evolve to become white dwarfs, would become similar to low-gravity He-sdO stars.
Central stars of planetary nebulae
Central stars of planetary nebulae are typically hot and compact. WC stars are massive Population I stars with broad emission lines for HeI, HeII, CII - CIV, NII, and NIII ions. They have surface temperatures from 14,000K to 270,000K. Of-WR(C) stars have strong carbon emission lines and also show hydrogen deficiency in the inner part of their nebulae. O(He) stars are characterized by HeII absorption while having CIV, NV and OVI emission lines. PG1159 stars, also termed O(C) stars, are dominated by carbon absorption line spectra. They are notable for complex pulsations and being among the hottest known stars.
White dwarfs
The first hydrogen-deficient white dwarfs were discovered by Milton Humason and Fritz Zwicky in 1947 and Willem Luyten in 1952. These stars had no hydrogen lines, but very strong HeI absorption lines. HZ 43 is such a star; early ultraviolet observations showed a temperature greater than 100,000K, but more recent measurements in far UV show an effective temperature of 50,400K. AM CVn stars are binary pairs of hydrogen-deficient white dwarfs with orbital sizes of only tens of Earth radii.
Formation and evolution
Hydrogen deficiency results from stellar evolution. Over the course of a star's evolution, both the consumption of hydrogen in nuclear fusion and the removal of hydrogen layers by explosive processes can lead to a deficiency of hydrogen in its atmosphere.
Detailed theoretical models are still in their infancy. Modeling of hydrogen-deficient star evolution involves either a single-star approach or a binary-star approach.
For example, there have been two theories put forward to explain the formation of extreme helium stars.
The helium final flash scenario is a single-star approach in which a helium flash serves to consume the hydrogen from the outer layer of the star. The double degenerate scenario is a binary-star approach in which a smaller degenerate helium white dwarf and a larger carbon-oxygen white dwarf orbit each other so closely that they eventually inspiral due to gravitational wave losses. At the Roche limit, mass transfer takes place from the helium to the carbon-oxygen star. The latter undergoes helium shell burning to form a supergiant and evolve to a hydrogen-deficient star. The double degenerate scenario provides a better fit to the observational data.
References
General references
Star types
Hydrogen
Helium | Hydrogen-deficient star | Astronomy | 1,360 |
2,816,523 | https://en.wikipedia.org/wiki/Canadian%20Nuclear%20Safety%20Commission | The Canadian Nuclear Safety Commission (CNSC; ) is the federal regulator of nuclear power and materials in Canada.
Mandate and history
Canadian Nuclear Safety Commission was established under the 1997 Nuclear Safety and Control Act with a mandate to regulate nuclear energy, nuclear substances, and relevant equipment in order to reduce and manage the safety, environmental, and national security risks, and to keep Canada in compliance with international legal obligations, such as the Treaty on the Non-Proliferation of Nuclear Weapons. It replaced the former Atomic Energy Control Board (AECB, French: Régie de energie atomique), which was founded in 1946.
The CNSC is an agency of the Government of Canada which reports to the Parliament of Canada through the Minister of Natural Resources.
In 2008, Linda Keen, the president and chief executive officer of the CNSC, was fired following a shortage of medical radioisotopes in Canada as a result of the extended routine shutdown of the NRU nuclear reactor at the Chalk River Laboratories.
Rumina Velshi joined the organisation in 2011, and in 2018 she became its President and CEO. In 2020 she also took on an international role, becoming Chairperson of the IAEA's Commission on Safety Standards. She was appointed to serve for four years.
Programs
The Participant Funding Program allows the public, Indigenous groups, and other stakeholders to request funding from the CNSC to participate in its regulatory processes.
In 2014, the CNSC launched the Independent Environmental Monitoring Program. The program verifies that the public and environment around licensed nuclear facilities are safe, helping to confirm their regulatory position and decision-making.
See also
Anti-nuclear movement in Canada
Canadian National Calibration Reference Centre
International Nuclear Regulators' Association
Nuclear industry in Canada
References
External links
2000 establishments in Canada
Federal departments and agencies of Canada
Government agencies established in 2000
Energy regulatory authorities of Canada
Nuclear regulatory organizations
Natural Resources Canada
Nuclear power in Canada | Canadian Nuclear Safety Commission | Engineering | 379 |
31,302,381 | https://en.wikipedia.org/wiki/Oven%20temperatures | Common oven temperatures (described by terms such as cool oven, very slow oven, slow oven, moderate oven, hot oven, fast oven, etc.) are set to control the effects of baking in an oven, for various lengths of time.
Standard phrases
{| class=wikitable style="float:right; margin-left:1em"
|-
! colspan=3 style="background-color:#F1CC66;" | Table of equivalent oven temperatures
|-
! Description || °F || °C
|-
| Cool oven || 200 °F || 90 °C
|-
| Very slow oven || 250 °F || 120 °C
|-
| Slow oven || 300–325 °F || 150–160 °C
|-
| Moderately slow || 325–350 °F || 160–180 °C
|-
| Moderate oven || 350–375 °F || 180–190 °C
|-
| Moderately hot || 375–400 °F || 190–200 °C
|-
| Hot oven || 400–450 °F || 200–230 °C
|-
| Very hot oven || 450–500 °F || 230–260 °C
|-
| Fast oven || 450–500 °F || 230–260 °C
|}
The various standard phrases used to describe oven temperatures include words such as "cool" to "hot" or "very slow" to "fast". For example, a cool oven has a temperature set to 200 °F (90 °C), and a slow oven has a temperature range from 300–325 °F (150–160 °C). A moderate oven has a range of 350–375 °F (180–190 °C), and a hot oven has a temperature set to 400–450 °F (200–230 °C). A fast oven has a range of 450–500 °F (230–260 °C) for the typical temperature.
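These descriptive setpoints follow from the ordinary Fahrenheit-to-Celsius conversion C = (F − 32) × 5/9, with the Celsius values in the table rounded to convenient figures; a minimal sketch:
def f_to_c(fahrenheit):
    return (fahrenheit - 32) * 5 / 9

# Representative setpoints for the descriptive phrases, taken from the table above.
OVEN_TERMS_F = {"cool": 200, "very slow": 250, "slow": 300,
                "moderate": 350, "hot": 400, "very hot": 450}

for term, f in OVEN_TERMS_F.items():
    print(f"{term:>9} oven: {f} degF is about {f_to_c(f):.0f} degC")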
Estimating oven temperature
Before ovens had thermometers or thermostats, these standard words were used by cooks and cookbooks to describe how hot an oven should be to cook various items. Custards, for example, require a slow oven, bread a moderate oven, and pastries a very hot oven. Cooks estimated the temperature of an oven by counting the number of minutes it took to turn a piece of white paper golden brown, or counting the number of seconds one could hold one's hand in the oven. Another method was to put a layer of flour or a piece of white tissue paper on a pan in the oven for five minutes. The resulting colors range from delicate brown in a slow oven through golden brown in a moderate oven to dark brown in a hot oven.
See also
Conversion of units
Gas mark
SI Units
References
Baking
Temperature | Oven temperatures | Physics,Chemistry | 556 |
65,667,384 | https://en.wikipedia.org/wiki/Detecting%20Earth%20from%20distant%20star-based%20systems | There are several methods currently used by astronomers to detect distant exoplanets from Earth. Theoretically, some of these methods can be used to detect Earth as an exoplanet from distant star systems.
History
In June 2021, astronomers identified 1,715 stars (with likely related exoplanetary systems) within 326 light-years (100 parsecs) that have a favorable positional vantage point—in relation to the Earth Transit Zone (ETZ)—of detecting Earth as an exoplanet transiting the Sun since the beginnings of human civilization (about 5,000 years ago); an additional 319 stars are expected to arrive at this special vantage point in the next 5,000 years. Seven known exoplanet hosts, including Ross 128, may be among these stars. Teegarden's Star and Trappist-1 may be expected to see the Earth in 29 and 1,642 years, respectively. Radio waves, emitted by humans, have reached over 75 of the closest stars that were studied. In June 2021, astronomers reported identifying 29 planets in habitable zones that may be capable of observing the Earth. Earlier, in October 2020, astronomers had initially identified 508 such stars within 326 light-years (100 parsecs) that would have a favorable positional vantage point—in relation to the Earth Transit Zone (ETZ)—of detecting Earth as an exoplanet transiting the Sun.
The transit method is the most popular tool used to detect exoplanets and the most common tool for spectroscopically analysing exoplanetary atmospheres. As a result, such studies based on the transit method will be useful in the search for life on exoplanets beyond the Solar System by the SETI program and the Breakthrough Listen Initiative, as well as upcoming exoplanetary TESS mission searches.
Detectability of Earth from distant star-based systems may allow for the detectability of humanity and/or analysis of Earth from distant vantage points such as via "atmospheric SETI" for the detection of atmospheric compositions explainable only by use of (artificial) technology like air pollution containing nitrogen dioxide from e.g. transportation technologies. The easiest or most likely artificial signals from Earth to be detectable are brief pulses transmitted by anti-ballistic missile (ABM) early-warning and space-surveillance radars during the Cold War and later astronomical and military radars. Unlike the earliest and conventional radio- and television-broadcasting which has been claimed to be undetectable at short distances, such signals could be detected from very distant, possibly star-based, receiver stations – any single of which would detect brief episodes of powerful pulses repeating with intervals of one Earth day – and could be used to detect both Earth as well as the presence of a radar-utilizing civilization on it.
Studies have suggested that radio broadcast leakage – with the program material likely not being detectable – may be a technosignature detectable at distances of up to a hundred light years with technology equivalent to the Square Kilometer Array if the location of Earth is known. Likewise, if Earth's location can be and is known, it may be possible to use atmospheric analysis to detect life or favorable conditions for it on Earth via biosignatures, including MERMOZ instruments that may be capable of remotely detecting living matter on Earth.
Experiments
In the 1980s, astronomer Carl Sagan persuaded NASA to perform an experiment to detect life and civilization on Earth using the instruments of the Galileo spacecraft. In December 1990, as the spacecraft flew past Earth, Galileo turned its instruments to observe the planet. Sagan's paper was titled "A search for life on Earth from the Galileo spacecraft"; he wrote that "high-resolution images of Australia and Antarctica obtained as Galileo flew overhead did not yield signs of civilization"; other measurements showed the presence of vegetation and detected radio transmissions.
See also
Earliest known life forms
List of exoplanet search projects
Lists of exoplanets
References
External links
Extrasolar Planets Encyclopaedia by the Paris Observatory
Astrobiology
Planetary science | Detecting Earth from distant star-based systems | Astronomy,Biology | 833 |
20,425,200 | https://en.wikipedia.org/wiki/NGC%2031 | NGC 31 is a spiral galaxy located in the constellation Phoenix. It was discovered on October 28, 1834 by the astronomer John Herschel. Its morphological type is SB(rs)cd, meaning that it is a late-type barred spiral galaxy.
References
External links
Galaxies discovered in 1834
0031
Barred spiral galaxies
Phoenix (constellation)
000751
18341028 | NGC 31 | Astronomy | 73 |
1,200,329 | https://en.wikipedia.org/wiki/Ben%20Mayer | Ben Mayer (1925 in Germany – 28 December 1999) was an amateur astronomer perhaps best known for the invention of the projection blink comparator (PROBLICOM), a low-cost version of the blink comparator. This inexpensive tool allowed amateur astronomers to contribute to some phases of serious research. Professionally, Mayer worked as an interior designer.
Mayer was the first ever to photograph a nova in its brightening phase. On the night of August 29, 1975, Mayer was using an automatic camera to photograph the sky, hoping to track meteors large enough to survive entry into the Earth's atmosphere. After learning of the nova, he realized it was in the part of the sky he was photographing. He retrieved his negatives from the trash (there were no meteors on the photographs) and found a series of images of Nova Cygni 1975 in several stages of brightening.
Mayer was a member of the American Association of Variable Star Observers and a frequent lecturer at the Riverside Telescope Makers Conference. In 1982 he won the Amateur Achievement Award of the Astronomical Society of the Pacific. Perhaps the best words summarizing Mayer's contributions and style appeared in a memorial written by Ed Krupp, Director of the Griffith Observatory, in the June 2000 edition of The Griffith Observer (Vol. 64, No. 6).
Death
Mayer died on December 28, 1999 at the age of 74.
References
American astronomers
Amateur astronomers
1925 births
1999 deaths
German emigrants to the United States | Ben Mayer | Astronomy | 298 |
20,943,440 | https://en.wikipedia.org/wiki/Bridge%20scour | Bridge scour is the removal of sediment such as sand and gravel from around bridge abutments or piers. Hydrodynamic scour, caused by fast flowing water, can carve out scour holes, compromising the integrity of a structure.
In the United States, bridge scour is one of the three main causes of bridge failure (the others being collision and overloading). It has been estimated that 60% of all bridge failures result from scour and other hydraulic-related causes. It is the most common cause of highway bridge failure in the US, where 46 of 86 major bridge failures resulted from scour near piers from 1961 to 1976.
Areas affected by scour
Water normally flows faster around piers and abutments making them susceptible to local scour. At bridge openings, contraction scour can occur when water accelerates as it flows through an opening that is narrower than the channel upstream from the bridge. Degradation scour occurs both upstream and downstream from a bridge over large areas. Over long periods of time, this can result in the lowering of the stream bed.
Causes
Stream channel instability resulting in river erosion and changing angles-of-attack can contribute to bridge scour. Debris can also have a substantial impact on bridge scour in several ways. A build-up of material can reduce the size of the waterway under a bridge causing contraction scour in the channel. A build-up of debris on the abutment can increase the obstruction area and increase local scour. Debris can deflect the water flow, changing the angle of attack and increasing local scour. Debris might also shift the entire channel around the bridge causing increased water flow and scour in another location.
The most frequently encountered bridge scour problems usually involve loose alluvial material that can be easily eroded. It should not be assumed that total scour in cohesive or cemented soils will not be as large as in non-cohesive soils; the scour simply takes longer to develop.
Many of the equations for scour were derived from laboratory studies, for which the range of applicability is difficult to ascertain. Most studies focussed on piers and pile formations, though most bridge scour problems are related to the more complex configuration of the bridge abutment. Some studies were verified using limited field data, though this is also difficult to accurately scale for physical modelling purposes. In field measurements made after a flood, a scour hole that had developed on the rising stage, or at the peak, may already have been filled in again on the falling stage. For this reason, the maximum depth of scour cannot simply be determined from measurements made after the event.
Scour can also cause problems with the hydraulic analysis of a bridge. Scour may considerably deepen the channel through a bridge and effectively reduce or even eliminate the backwater. This reduction in backwater should not be relied on, however, because of the unpredictable nature of the processes involved.
When considering scour it is normal to distinguish between non-cohesive or cohesionless (alluvial) sediments and cohesive material. The former are usually of most interest to laboratory studies. Cohesive materials require special techniques and are poorly researched.
The first major issue when considering scour is the distinction between clear-water scour and live-bed scour. The critical issue is whether or not the mean bed shear stress of the flow upstream of the bridge is less than or larger than the threshold value needed to move the bed material.
If the upstream shear stress is less than the threshold value, the bed material upstream of the bridge is at rest. This is referred to as the clear-water condition because the approach flow is clear and does not contain sediment. Thus, any bed material that is removed from a local scour hole is not replaced by sediment being transported by the approach flow. The maximum local scour depth is achieved when the size of the scour hole results in a local reduction in shear stress to the critical value such that the flow can no longer remove bed material from the scoured area.
Live-bed scour occurs where the upstream shear stress is greater than the threshold value and the bed material upstream of the crossing is moving. This means that the approach flow continuously transports sediment into a local scour hole. By itself, a live bed in a uniform channel will not cause a scour hole—for this to be created some additional increase in shear stress is needed, such as that caused by a contraction (natural or artificial, such as a bridge) or a local obstruction (e.g. a bridge pier). The equilibrium scour depth is achieved when material is transported into the scour hole at the same rate at which it is transported out.
Typically the maximum equilibrium clear-water scour is about 10% larger than the equilibrium live-bed scour. Conditions that favour clear-water scour include bed material being too coarse to be transported, the presence of vegetated or artificial reinforced channels where velocities are only high enough due to local scour, or flat bed slopes during low flows.
It is possible that both clear-water and live-bed scour can occur. During a flood event, bed shear stress may change as the flood flows change. It is possible to have clear-water conditions at the commencement of a flood event, transitioning to a live bed before reverting to clear-water conditions. Note that the maximum scour depth may occur under initial clear-water conditions, not necessarily when the flood levels peak and live-bed scour is underway. Similarly, relatively high velocities can be experienced when the flow is just contained within the banks, rather than spread over the floodplains at the peak discharge.
Urbanization has the effect of increasing flood magnitudes and causing hydrographs to peak earlier, resulting in higher stream velocities and degradation. Channel improvements or the extraction of gravel (above or below the site in question) can alter water levels, flow velocities, bed slopes and sediment transport characteristics and consequently affect scour. For instance, if an alluvial channel is straightened, widened or altered in any other way that results in an increased flow-energy condition, the channel will tend back towards a lower energy state by degrading upstream, widening and aggrading downstream.
The significance of degradation scour to bridge design is that the engineer has to decide whether the existing channel elevation is likely to be constant over the life of the bridge, or whether it will change. If change is probable then it must be allowed for when designing the waterway and foundations.
The lateral stability of a river channel may also affect scour depths, because movement of the channel may result in the bridge being incorrectly positioned or aligned with respect to the approach flow. This problem can be significant under any circumstances but is potentially very serious in arid or semi-arid regions and with ephemeral (intermittent) streams. Lateral migration rates are largely unpredictable. Sometimes a channel that has been stable for many years may suddenly start to move, but significant influences are floods, bank material, vegetation of the banks and floodplains, and land use.
Scour at bridge sites is typically classified as contraction (or constriction) scour and local scour. Contraction scour occurs over a whole cross-section as a result of the increased velocities and bed shear stresses arising from a narrowing of the channel by a construction such as a bridge. In general, the smaller the opening ratio the larger the waterway velocity and the greater the potential for scour. If the flow contracts from a wide floodplain, considerable scour and bank failure can occur. Relatively severe constrictions may require regular maintenance for decades to combat erosion. It is evident that one way to reduce contraction scour is to make the opening wider.
Local scour arises from the increased velocities and associated vortices as water accelerates around the corners of abutments, piers and spur dykes.
Flow pattern around a cylindrical pier
The approaching flow decelerates as it nears the cylinder, coming to rest at the centre of the pier. The resulting stagnation pressure is highest near the water surface where the approach velocity is greatest, and smaller lower down. The downward pressure gradient at the pier face directs the flow downwards. Local pier scour begins when the downflow velocity near the stagnation point is strong enough to overcome the resistance to motion of the bed particles.
During flooding, although the foundations of a bridge might not suffer damage, the fill behind abutments may scour. This type of damage typically occurs with single-span bridges with vertical wall abutments.
Bridge examination and scour evaluation
The examination process is normally conducted by hydrologists and hydrologic technicians, and involves a review of historical engineering information about the bridge, followed by a visual inspection. Information is recorded about the type of rock or sediment carried by the river, and the angle at which the river flows toward and away from the bridge. The area under the bridge is also inspected for holes and other evidence of scour.
Bridge examination begins by office investigation. The history of the bridge and any previous scour related problems should be noted. Once a bridge is recognized as a potential scour bridge, it will proceed to further evaluation including field review, scour vulnerability analysis and prioritizing. Bridges will also be rated in different categories and prioritized for scour risk. Once a bridge is evaluated as scour critical, the bridge owner should prepare a scour plan of action to mitigate the known and potential deficiencies. The plan may include installation of countermeasures, monitoring, inspections after flood events, and procedures for closing bridges if necessary.
Alternatively, sensing technologies are also being put in place for scour assessment. The scour-sensing level can be classified into three levels: general bridge inspection, collecting limited data and collecting detailed data. There are three different types of scour-monitoring systems: fixed, portable and geophysical positioning. Each system can help to detect scour damage in an effort to avoid bridge failure, thus increasing public safety.
Countermeasures and prevention
The Hydraulic Engineering Circular Manual No. 23 (HEC-23) contains general design guidelines as scour countermeasures that are applicable to piers and abutments. The numbering in the following table indicates the HEC-23 design guideline section:
Bend way weirs, spurs and guide banks can help to align the upstream flow while riprap, gabions, articulated concrete blocks and grout-filled mattresses can mechanically stabilize the pier and abutment slopes. Riprap remains the most common countermeasure used to prevent scour at bridge abutments. A number of physical additions to the abutments of bridges can help prevent scour, such as the installation of gabions and stone pitching upstream from the foundation. The addition of sheet piles or interlocking prefabricated concrete blocks can also offer protection. These countermeasures do not change the scouring flow and are temporary since the components are known to move or be washed away in a flood. The Federal Highway Administration (FHWA) recommends design criteria in HEC-18 and 23, such as avoiding unfavourable flow patterns, streamlining the abutments, and designing pier foundations resistant to scour without depending upon the use of riprap or other countermeasures.
Trapezoidal-shaped channels through a bridge can significantly decrease local scour depths compared to vertical wall abutments, as they provide a smoother transition through a bridge opening. This eliminates abrupt corners that cause turbulent areas. Spur dykes, barbs, groynes, and vanes are river training structures that change stream hydraulics to mitigate undesirable erosion or deposits. They are usually used on unstable stream channels to help redirect stream flow to more desirable locations through the bridge. The insertion of piles or deeper footings is also used to help strengthen bridges.
Estimating scour depth
Hydraulic Engineering Circular Manual No. 18 (HEC-18) was published by the FHWA, and includes several techniques of estimating scour depth. The empirical scour equations for live-bed scour, clear-water scour, and local scour at piers and abutments are shown in the Chapter 5 (General Scour) section. The total scour depth is determined by adding three scour components which includes the long-term aggradation and degradation of the river bed, general scour at the bridge and local scour at the piers or abutment. However, research has shown that the standard equations in HEC-18 over-predict scour depth for a number of hydraulic and geologic conditions. Most of the HEC-18 relationships are based on laboratory flume studies conducted with sand-sized sediments increased with factors of safety that are not easily recognizable or adjustable. Sand and fine gravel are the most easily eroded bed materials, but streams frequently contain much more scour resistant materials such as compact till, stiff clay, and shale. The consequences of using design methods based on a single soil type are especially significant for many major physiographic provinces with distinctly different geologic conditions and foundation materials. This can lead to overly conservative design values for scour in low risk or non-critical hydrologic conditions. Thus, equation improvements are continued to be made in an effort to minimize the underestimation and overestimation of scour.
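As an illustration only, a sketch of the commonly cited CSU pier-scour relationship associated with HEC-18 is given below; the form ys = 2.0·K1·K2·K3·K4·y1·(a/y1)^0.65·Fr1^0.43, the correction factors and the input values used here are assumptions and placeholders to be checked against the current edition of the manual, not design data.
def csu_pier_scour(y1, a, v1, k1=1.0, k2=1.0, k3=1.1, k4=1.0, g=9.81):
    """Local pier scour depth ys (m) from the commonly cited CSU/HEC-18 form:
    ys / y1 = 2.0 * K1*K2*K3*K4 * (a / y1)**0.65 * Fr1**0.43
    y1: approach flow depth (m), a: pier width (m), v1: approach velocity (m/s)."""
    fr1 = v1 / (g * y1) ** 0.5          # approach Froude number
    return 2.0 * k1 * k2 * k3 * k4 * y1 * (a / y1) ** 0.65 * fr1 ** 0.43

# Placeholder example: 3 m deep flow at 2 m/s approaching a 1.5 m wide pier.
print(round(csu_pier_scour(y1=3.0, a=1.5, v1=2.0), 2), "m of estimated local scour")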
Bridge disasters caused by scour
Custer Creek train wreck
Glanrhyd Bridge collapse
Hintze Ribeiro Bridge collapse
Schoharie Creek Bridge collapse
See also
List of bridge failures
Armor (hydrology)
Baer–Babinet law
Breakwater (structure)
Bridge maintenance
Fluid dynamics
Homochitto River
Kármán vortex street
MIKE 21C
References
Further reading
Boorstin, Robert O. (1987). Bridge Collapses on the Thruway, Trapping Vehicles, Volume CXXXVI, No. 47,101, The New York Times, April 6, 1987.
Huber, Frank. (1991). "Update: Bridge Scour". Civil Engineering, ASCE, Vol. 61, No. 9, pp. 62–63, September 1991.
Levy, Matthys and Salvadori, Mario (1992). Why Buildings Fall Down. W.W. Norton and Company, New York, New York.
National Transportation Safety Board (NTSB). (1988). "Collapse of New York Thruway (1-90) Bridge over the Schoharie Creek, near Amsterdam, New York, April 5, 1987". Highway Accident Report: NTSB/HAR-88/02, Washington, D.C.
Springer Netherlands. International Journal of Fracture, Volume 51, Number 1. September 1991. "The collapse of the Schoharie Creek Bridge: a case study in concrete fracture mechanics"
Palmer, R., and Turkiyyah, G. (1999). "CAESAR: An Expert System for Evaluation of Scour and Stream Stability". National Cooperative Highway Research Program (NCHRP) Report 426, Washington D. C.
Shepherd, Robin and Frost, J. David (1995). Failures in Civil Engineering: Structural, Foundation and Geoenvironmental Case Studies. American Society of Civil Engineers, New York, New York.
Thornton, C. H., Tomasetti, R. L., and Joseph, L. M. (1988). "Lessons From Schoharie Creek", Civil Engineering, Vol. 58, No. 5, pp. 46–49, May 1988.
Thornton-Tomasetti, P. C. (1987) "Overview Report Investigation of the New York State Thruway Schoharie Creek Bridge Collapse". Prepared for: New York State Disaster Preparedness Commission, December 1987.
Wiss, Janney, Elstner Associates, Inc., and Mueser Rutledge Consulting Engineers (1987) "Collapse of Thruway Bridge at Schoharie Creek", Final Report, Prepared for: New York State Thruway Authority, November 1987.
Richardson, E. V., and Davis, S. R. 1995. "Evaluating Scour at Bridges, Third Edition", US Department of Transportation, Publication No. FHWA-IP-90-017.
Sumer, B. M., and Fredsøe, J. (2002). "The Mechanics of Scour in the Marine Environment", World Scientific, Singapore.
External links
Bruce W. Melville, Stephen E. Coleman, Bridge Scour
Bridge scour study
USGS National Bridge Scour Project
USGS publications on bridge scour
USGS bridge scour study
USGS National bridge scour database
Mathematical formulas for various kinds of scour
Ascelibrary - Bridge Scour
Hydrology
Hydraulic engineering
Environmental engineering
Physical geography
Fluid dynamics
Fluid mechanics
Erosion | Bridge scour | Physics,Chemistry,Engineering,Environmental_science | 3,440 |
9,249,813 | https://en.wikipedia.org/wiki/Forward%20kinematics | In robot kinematics, forward kinematics refers to the use of the kinematic equations of a robot to compute the position of the end-effector from specified values for the joint parameters.
The kinematics equations of the robot are used in robotics, computer games, and animation. The reverse process, that computes the joint parameters that achieve a specified position of the end-effector, is known as inverse kinematics.
Kinematics equations
The kinematics equations for the series chain of a robot are obtained using a rigid transformation [Z] to characterize the relative movement allowed at each joint and a separate rigid transformation [X] to define the dimensions of each link. The result is a sequence of rigid transformations alternating joint and link transformations from the base of the chain to its end link, which is equated to the specified position for the end link:
[T] = [Z1][X1][Z2][X2] ... [Xn-1][Zn],
where [T] is the transformation locating the end-link. These equations are called the kinematics equations of the serial chain.
Link transformations
In 1955, Jacques Denavit and Richard Hartenberg introduced a convention for the definition of the joint matrices [Z] and link matrices [X] to standardize the coordinate frames for spatial linkages. This convention positions the joint frame so that it consists of a screw displacement along the Z-axis,
[Zi] = TransZ(di) RotZ(θi),
and it positions the link frame so it consists of a screw displacement along the X-axis,
[Xi] = TransX(ai,i+1) RotX(αi,i+1).
Using this notation, each transformation along the serial chain of the robot can be described by a single coordinate transformation,
where θi, di, αi,i+1 and ai,i+1 are known as the Denavit-Hartenberg parameters.
Kinematics equations revisited
The kinematics equations of a serial chain of n links, with joint parameters θi, are given by
[T] = [0T1][1T2] ... [n-1Tn],
where [i-1Ti] is the transformation matrix from the frame of link i-1 to link i. In robotics, these are conventionally described by Denavit–Hartenberg parameters.
Denavit-Hartenberg matrix
The matrices associated with these operations are:
Similarly,
The use of the Denavit-Hartenberg convention yields the link transformation matrix, [i-1Ti] as
known as the Denavit-Hartenberg matrix.
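A minimal sketch of the convention in code (the function names and the two-joint example values are assumptions for illustration): the classic Denavit–Hartenberg homogeneous transform is built from (θ, d, α, a), and successive link transforms are multiplied base-to-tip to obtain [T].

```python
import numpy as np

def dh_transform(theta, d, alpha, a):
    """Homogeneous link transform [i-1Ti] built from the classic
    Denavit-Hartenberg parameters: rotation theta and offset d about/along Z,
    followed by offset a and twist alpha about/along X."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Multiply the link transforms base-to-tip to obtain [T]."""
    T = np.eye(4)
    for theta, d, alpha, a in dh_rows:
        T = T @ dh_transform(theta, d, alpha, a)
    return T

# Illustrative two-joint chain (angles in radians, lengths in metres)
T = forward_kinematics([(np.pi / 4, 0.0, 0.0, 0.3),
                        (np.pi / 6, 0.0, 0.0, 0.2)])
print(T[:3, 3])  # position of the end-effector frame origin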
Computer animation
The forward kinematic equations can be used as a method in 3D computer graphics for animating models.
The essential concept of forward kinematic animation is that the positions of particular parts of the model at a specified time are calculated from the position and orientation of the object, together with any information on the joints of an articulated model. So for example if the object to be animated is an arm with the shoulder remaining at a fixed location, the location of the tip of the thumb would be calculated from the angles of the shoulder, elbow, wrist, thumb and knuckle joints. Three of these joints (the shoulder, wrist and the base of the thumb) have more than one degree of freedom, all of which must be taken into account. If the model were an entire human figure, then the location of the shoulder would also have to be calculated from other properties of the model.
Forward kinematic animation can be distinguished from inverse kinematic animation by this means of calculation - in inverse kinematics the orientation of articulated parts is calculated from the desired position of certain points on the model. It is also distinguished from other animation systems by the fact that the motion of the model is defined directly by the animator - no account is taken of any physical laws that might be in effect on the model, such as gravity or collision with other models.
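The arm example above can be reduced to a planar sketch; the link lengths, angles, and function names below are purely illustrative. The animator supplies joint angles for each frame, and forward kinematics returns the resulting hand position.

```python
import math

def arm_points(shoulder, elbow, upper=0.30, fore=0.25):
    """Planar forward kinematics for a shoulder-elbow chain: returns the
    shoulder, elbow and hand positions for the given joint angles."""
    ex = upper * math.cos(shoulder)
    ey = upper * math.sin(shoulder)
    hx = ex + fore * math.cos(shoulder + elbow)
    hy = ey + fore * math.sin(shoulder + elbow)
    return (0.0, 0.0), (ex, ey), (hx, hy)

# Animate by sweeping the elbow over five frames while the shoulder stays fixed
for frame in range(5):
    elbow = math.radians(10 + 20 * frame)
    _, _, hand = arm_points(math.radians(30), elbow)
    print(f"frame {frame}: hand at ({hand[0]:.3f}, {hand[1]:.3f})")
```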
See also
Inverse kinematics
Kinematic chain
Robot control
Mechanical systems
Robot kinematics
Kinematic synthesis
References
3D computer graphics
Computational physics
Robot kinematics | Forward kinematics | Physics,Engineering | 755 |
6,393,600 | https://en.wikipedia.org/wiki/Rhenium%20trioxide | Rhenium trioxide or rhenium(VI) oxide is an inorganic compound with the formula ReO3. It is a red solid with a metallic lustre that resembles copper in appearance. It is the only stable trioxide of the Group 7 elements (Mn, Tc, Re).
Preparation and structure
Rhenium trioxide can be formed by reducing rhenium(VII) oxide with carbon monoxide at 200 °C or elemental rhenium at 400 °C.
Re2O7 + CO → 2 ReO3 + CO2
3 Re2O7 + Re → 7 ReO3
Re2O7 can also be reduced with dioxane.
Rhenium trioxide crystallizes with a primitive cubic unit cell, with a lattice parameter of 3.742 Å (374.2 pm). The structure of ReO3 is similar to that of perovskite (ABO3), without the large A cation at the centre of the unit cell. Each rhenium center is surrounded by an octahedron defined by six oxygen centers. These octahedra share corners to form the 3-dimensional structure. The coordination number of O is 2, because each oxygen atom has 2 neighbouring Re atoms.
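As a consistency check on the structural data, the theoretical (X-ray) density follows from one ReO3 formula unit per primitive cubic cell and the quoted lattice parameter; standard values for the molar mass and Avogadro's number are assumed.

```python
# Theoretical density of ReO3 from the cubic lattice parameter
a_cm = 3.742e-8          # lattice parameter: 3.742 angstrom expressed in cm
Z = 1                    # formula units per primitive cubic cell
M = 186.21 + 3 * 16.00   # molar mass of ReO3 in g/mol
N_A = 6.022e23           # Avogadro's number, 1/mol

density = Z * M / (N_A * a_cm ** 3)
print(f"{density:.2f} g/cm^3")  # about 7.4 g/cm^3
```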
Properties
Physical properties
ReO3 is unusual for an oxide because it exhibits very low resistivity. It behaves like a metal in that its resistivity decreases as its temperature decreases. At 300 K its resistivity is 100.0 nΩ·m, whereas at 100 K it decreases to 6.0 nΩ·m, about one-seventeenth of the value at 300 K.
Chemical properties
Rhenium trioxide is insoluble in water, as well as dilute acids and bases. Heating it in base results in disproportionation to give and , while reaction with acid at high temperature affords . In concentrated nitric acid, it yields perrhenic acid.
Upon heating to 400 °C under vacuum, it undergoes disproportionation:
3 ReO3 → Re2O7 + ReO2
Rhenium trioxide can be chlorinated to give rhenium trioxide chloride:
2 ReO3 + Cl2 → 2 ReO3Cl
Uses
Hydrogenation catalyst
Rhenium trioxide finds some use in organic synthesis as a catalyst for amide reduction.
References
Rhenium compounds
Hydrogenation catalysts
Transition metal oxides | Rhenium trioxide | Chemistry | 482 |
16,592,105 | https://en.wikipedia.org/wiki/Kith%20%28Poul%20Anderson%29 | The Kith are a starfaring culture featured in a number of science fiction stories by American writer Poul Anderson. They are:
"Ghetto" (1954)
"The Horn of Time the Hunter" (also known as "Homo Aquaticus", 1963)
The novel Starfarers (1998) - John W. Campbell Memorial Award nominee, 1999
The Kith develop out of early interstellar explorers in the 21st and 22nd centuries. Because of the effects of time dilation associated with travel at near-light speeds, the Kith maintain separate settlements ("Kithtowns") in which care is taken to keep their language and culture consistent over the course of millennia. As Kith usually marry among themselves, they seek to avoid inbreeding through strict exogamy: Kith must find their mates on a ship other than their own, marriage between crew members of the same ship being considered a kind of incest.
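A rough sense of the scale of the effect (the speed and durations chosen below are illustrative, not taken from the stories): at 99.9% of light speed, a crossing that takes two centuries in the planet-bound frame passes in under a decade of ship time.

```python
import math

def dilation_factor(beta):
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2) for speed v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

coordinate_time_years = 200.0   # time elapsed in the planet-bound frame
beta = 0.999                    # 99.9% of light speed, purely illustrative
ship_time = coordinate_time_years / dilation_factor(beta)
print(f"{ship_time:.1f} years pass aboard ship")  # about 8.9 years
```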
Inevitably, Kith come to regard planet-bound cultures with aloof detachment, as an individual Kith may witness in his or her lifetime the passage of hundreds of years, the rise and fall of empires which can only seem ephemeral. To the ground-dwellers such attitudes come to seem superior and arrogant, and the Kith's apparent near-immortality arouses envy. Although the Kith are instrumental in maintaining the network of trade that makes human interstellar civilization possible, over time they become the object of derision, suspicion and ultimately persecution.
As set forth in Starfarers and "Ghetto", the Kithtowns ultimately become ghettos, and pogroms are launched against the Kith. "The Horn of Time the Hunter" suggests that the Kith are ultimately forced to flee human space altogether, and chronicles the return of one group of Kith to human space after hundreds of thousands of years' relativistic travel to the Galactic core.
References
External links
Works by Poul Anderson
Fictional species and races
Special relativity | Kith (Poul Anderson) | Physics | 402 |
29,548,278 | https://en.wikipedia.org/wiki/Minimum%20design%20metal%20temperature | MDMT is one of the design conditions for pressure vessels engineering calculations, design and manufacturing according to the ASME Boilers and Pressure Vessels Code. Each pressure vessel that conforms to the ASME code has its own MDMT, and this temperature is stamped on the vessel nameplate. The precise definition can sometimes be a little elaborate, but in simple terms the MDMT is a temperature arbitrarily selected by the user of type of fluid and the temperature range the vessel is going to handle. The so-called arbitrary MDMT must be lower than or equal to the CET (which is an environmental or "process" property, see below) and must be higher than or equal to the (MDMT)M (which is a material property).
Critical exposure temperature (CET) is the lowest anticipated temperature to which the vessel will be subjected, taking into consideration lowest operating temperature, operational upsets, autorefrigeration, atmospheric temperature, and any other sources of cooling. In some cases it may be the lowest temperature at which significant stresses will occur and not the lowest possible temperature.
(MDMT)M is the lowest temperature permitted according to the metallurgy of the vessel fabrication materials and the thickness of the vessel component, that is, according to the low temperature embrittlement range and the charpy impact test requirements per temperature and thickness, for each one of the vessel's components.
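The relationship between the three temperatures can therefore be summarised as (MDMT)M ≤ MDMT ≤ CET. A minimal sketch of that check, with illustrative values and function names:

```python
def mdmt_is_acceptable(mdmt, cet, mdmt_material):
    """The stamped MDMT must not exceed the CET (process side) and must not
    be below the material-based (MDMT)M (metallurgy/thickness side)."""
    return mdmt_material <= mdmt <= cet

# Example values in deg C, purely illustrative
print(mdmt_is_acceptable(mdmt=-20.0, cet=-15.0, mdmt_material=-29.0))  # True
print(mdmt_is_acceptable(mdmt=-10.0, cet=-15.0, mdmt_material=-29.0))  # False: warmer than the CET
```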
References
ASME, Boilers and Pressure Vessels Code
Dennis R. Moss, Pressure Vessel Design Manual, 1997 (2nd ed.)
Pressure vessels
Threshold temperatures | Minimum design metal temperature | Physics,Chemistry,Engineering | 316 |
22,116,598 | https://en.wikipedia.org/wiki/GHS%20precautionary%20statements | Precautionary statements form part of the Globally Harmonized System of Classification and Labelling of Chemicals (GHS). They are intended to form a set of standardized phrases giving advice about the correct handling of chemical substances and mixtures, which can be translated into different languages. As such, they serve the same purpose as the well-known S-phrases, which they are intended to replace.
Precautionary statements are one of the key elements for the labelling of containers under the GHS, along with:
an identification of the product;
one or more hazard pictograms (where necessary)
a signal word – either Danger or Warning – where necessary
hazard statements, indicating the nature and degree of the risks posed by the product
the identity of the supplier (who might be a manufacturer or importer)
Each precautionary statement is designated a code, starting with the letter P and followed by three digits. Statements which correspond to related hazards are grouped together by code number, so the numbering is not consecutive. The code is used for reference purposes, for example to help with translations, but it is the actual phrase which should appear on labels and safety data sheets. Some precautionary phrases are combinations, indicated by a plus sign "+". In several cases, there is a choice of wording, for example "Avoid breathing dust/fume/gas/mist/vapours/spray": the supplier or regulatory agency should choose the appropriate wording for the product concerned.
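A small sketch of the code format described above (the regular expression and helper name are assumptions; the example codes appear only to illustrate the "P" plus three digits pattern and the "+" combinations):

```python
import re

# A precautionary-statement code is "P" plus three digits; combined
# statements join several codes with "+", e.g. "P305+P351+P338".
P_CODE = re.compile(r"^P\d{3}(\+P\d{3})*$")

def split_p_codes(label_code):
    """Validate a (possibly combined) precautionary code and return its parts."""
    if not P_CODE.match(label_code):
        raise ValueError(f"not a valid precautionary statement code: {label_code}")
    return label_code.split("+")

print(split_p_codes("P102"))            # ['P102']
print(split_p_codes("P305+P351+P338"))  # ['P305', 'P351', 'P338']
```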
General precautionary statements
Note: "" = to be specified
Prevention precautionary statements
Response precautionary statements
Storage precautionary statements
Disposal precautionary statements
References
External links
("GHS Rev.10")
(the "CLP Regulation")
Chemical Hazard & Precautionary Phrases in 23 European Languages, machine-readable and versioned
Precautionary statements | GHS precautionary statements | Chemistry | 389 |
2,396,907 | https://en.wikipedia.org/wiki/PSC%20Inc. | PSC Inc. was a manufacturer of portable data terminals, mobile data terminals, wireless terminals, barcode readers, linear barcode verifiers, and RFID readers. It was founded in 1969 by John E. Blackert (Xerox) and Lawrence P. Albertson (Kodak) as Photographic Sciences Corporation in Webster, New York (a suburb of Rochester).
History
In 1996, PSC acquired Spectra-Physics Scanning Systems, Inc., and also acquired Percon Inc., a manufacturer of portable data terminals.
In 2002, PSC went through bankruptcy reorganization. Littlejohn & Co., a private equity firm, purchased all of the company's senior and subordinated debt of $124 million. During the reorganization, PSC spun off its software division, IntelliTrack, to private investors.
Acquisition by Datalogic
On October 24, 2005, Datalogic announced that it had signed a binding contract for the takeover of the entire capital stock of PSC Inc. The agreed price was set at approximately $195 Million. Datalogic retired the PSC brand name on April 2, 2007. The PSC legacy was catered to in the new corporate logo of Datalogic by adding a star representing PSC. Datalogic decided to retain some of the PSC product brand names including Magellan, Duet, Falcon, PowerScan, and QuickScan.
References
External links
Datalogic Group
Defunct computer companies of the United States
Defunct computer hardware companies
Radio-frequency identification | PSC Inc. | Engineering | 300 |
48,811,710 | https://en.wikipedia.org/wiki/Modified%20Uniformly%20Redundant%20Array | A modified uniformly redundant array (MURA) is a type of mask used in coded aperture imaging. They were first proposed by Gottesman and Fenimore in 1989.
Mathematical Construction of MURAs
MURAs can be generated in any length L that is prime and of the form
L = 4m + 1, with m = 1, 2, 3, ...,
the first five such values being L = 5, 13, 17, 29, 37. The binary sequence of a linear MURA is given by Ai, where Ai = 0 if i = 0, Ai = 1 if i is a quadratic residue modulo L (i ≠ 0), and Ai = 0 otherwise.
These linear MURA arrays can also be arranged to form hexagonal MURA arrays. One may note that if and , a uniformly redundant array (URA) is generated.
As with any mask in coded aperture imaging, an inverse sequence must also be constructed. In the MURA case, this inverse G can be constructed easily given the original coding pattern A: Gi = +1 if i = 0, Gi = +1 if Ai = 1 (i ≠ 0), and Gi = −1 if Ai = 0 (i ≠ 0).
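A minimal sketch of the linear construction and its decoding pattern, assuming the quadratic-residue rule described above; the function names are illustrative, and the comment on correlation is only an informal reminder of why the pair is useful.

```python
def linear_mura(L):
    """Linear MURA of prime length L = 4m + 1: A[i] = 1 when i is a non-zero
    quadratic residue mod L, else 0 (a sketch of the rule described above)."""
    residues = {(k * k) % L for k in range(1, L)}
    return [1 if (i in residues and i != 0) else 0 for i in range(L)]

def decoding_pattern(A):
    """Inverse pattern: G[0] = +1, otherwise +1 where A[i] = 1 and -1 where A[i] = 0."""
    return [1 if (i == 0 or a == 1) else -1 for i, a in enumerate(A)]

A = linear_mura(13)
G = decoding_pattern(A)
print(A)
print(G)
# The periodic correlation of A with G approximates a delta function,
# which is what makes the pattern useful as a coded-aperture mask.
```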
Rectangular MURA arrays are constructed in a slightly different manner, letting , where
and
The corresponding decoding function G is constructed as follows:
References
Radiation | Modified Uniformly Redundant Array | Physics,Chemistry | 181 |
51,008,855 | https://en.wikipedia.org/wiki/SX000i | SX000i - International guide for the use of the S-Series of Integrated Logistics Support (ILS) specifications, is a specification developed jointly by a multinational team from the AeroSpace and Defence Industries Association of Europe (ASD) and Aerospace Industries Association (AIA). SX000i is part of the S-Series of ILS specifications.
SX000i provides information, guidance and instructions to ensure compatibility and the commonality of Integrated Logistics Support (ILS) processes among the S-Series suite of ILS specifications jointly developed by both associations.
By defining common logistics processes to be used across all S-Series ILS specifications and the interactions of the current S-Series ILS specifications with the logistics processes, the SX000i forms the basis for sharing and exchanging data securely through the life of products and services, not only within the support domain, but also with other domains such as Engineering. The SX000i also provides governance for the maintenance of current S-Series ILS specifications and the development of new S-Series ILS specifications.
SX000i builds on existing standards and specifications so as to provide a unified view of sometimes contradictory ILS specifications and publications. A reference and mapping of SX000i to these documents has been provided in Chapter 6.
Purpose of the guide
SX000i provides a guide for the use of the S-series ILS specifications by ILS managers and practitioners, as well as for the management and future development of the specifications by the ILS specification Council and ILS specification Steering Committees (SC) and Working Groups (WG).
SX000i:
explains the vision and objectives for the suite of S-Series ILS specifications
provides a framework that documents the global ILS process and interactions
explains how the ASD/AIA S-Series ILS specifications interface with other standardization domains including program management, global supply chain management, engineering, manufacturing, security, safety, configuration management, quality, data exchange and integration, and life cycle cost
describes the global governance of the S-Series ILS specifications development
provides guidance on how to satisfy specific business requirements using an appropriate selection of defined processes and specifications
SX000i development history
During the development of the S-Series ILS specifications, the different ASD/AIA Steering Committees and Working Groups identified the need for an "umbrella" specification to ensure the compatibility and commonality of ILS processes among the S-Series ILS specifications.
In 2011, the decision was made to develop, publicize and maintain an Integrated Logistics Support Guide, named SX000i, so as to provide a compatible and common ILS process to be used in the other S-Series ILS specifications. Development of SX000i was viewed by the ILS Specifications Council as an essential step to achieve the vision for the S-Series ILS specifications.
In June 2011, the SX000i working group was formed and SX000i development started. The current title of SX000i, International guide for the use of the S-Series of Integrated Logistics Support (ILS) specifications, was approved by the ASD/AIA ILS Specifications Council in June 2012.
Following the creation of the SX000i working group, the ASD/AIA Data Model and Exchange Working Group (DMEWG) was formed under the ILS Specifications Council in October 2011. Working in close cooperation with the SX000i team, the DMEWG coordinates the data modeling activities that are performed within the respective S-Series ILS Specification SCs and WGs so as to harmonize and consolidate data requirements into one coherent data model.
Publication of SX000i, and continuing DMEWG coordination activities, enable the achievement of the vision for the suite of ILS specifications "to apply common logistics processes so as to share and exchange data securely through the life of products and services".
The companies and organizations that are currently participating in the development of SX000i are:
Airbus (France)
Airbus Defence and Space (Germany and Spain)
Boeing Defence Systems (USA)
Bundeswehr (Germany)
Elektroniksystem- und Logistik-GmbH (ESG) (Germany)
FACC AG (Austria)
HEME GmbH (Germany)
O’Neil (USA)
Rockwell Collins
Leonardo - Electronics, Defence & Security Systems (former Selex ES) (Italy)
Turkish Aerospace Industries (TAI) (Turkey)
Ministry of Defence (United Kingdom)
SX000i issue 1.0 was published in December 2015. An Issue 1.1 was published in July 2016.
The SX000i Steering Committee is currently co-chaired by the Spanish representative of Airbus Defence and Space, on behalf of ASD, and Boeing, on behalf of AIA.
Intended use
SX000i is intended:
To be a starting point for any potential users or new projects that would want to use the S-Series ILS specifications.
To be an overview and coordinating document for all members of the international ILS community, engaged in the use and development of the S-Series ILS specifications on existing projects.
In that context, SX000i was developed for three primary applications:
New Product development
Support of existing Products
ILS specification development and maintenance
Target audiences
The target audiences for SX000i are:
Contractors
SX000i can be used by prime contractors, original equipment manufacturers, and suppliers as a reference for initially establishing their Product support strategies and plans, and selecting specifications to support those plans. SX000i can also be used to evaluate existing Product support strategies and projects.
Customers
SX000i can be used by customers to determine support requirements for new Products they are acquiring, or fielded Products for which they are seeking support, and to identify ILS specifications to be cited in solicitations.
ILS specifications Council
The ILS specifications Council uses SX000i to promote a commonality and interoperability among the S-Series ILS specifications.
ILS specification steering committees and working groups
Steering committees and working groups developing specifications use SX000i as a basis for describing relationships and interfaces between the ILS element(s) that their specification covers and:
the other integrated logistic support elements
the standardization domains
Steering committees use SX000i to ensure the compatibility of their specification with the other ILS specifications.
The Data modeling and Exchange Working group (DMEWG) uses SX000i to harmonize and consolidate data requirements into one coherent data model supporting all of the ILS specifications.
Steering committees and working groups both use SX000i to ensure compliance with ILS specification Council governance requirements.
SX000i structure
SX000i consists of six chapters:
Chapter 1, (Introduction) provides background information on the S-Series ILS specifications and SX000i.
Chapter 2, (Integrated logistics support framework), documents a global ILS process and interactions at the ILS element level. This chapter establishes the foundation for the remainder of SX000i chapters and all of the S-Series ILS specifications.
Chapter 3, (Use of the S-Series ILS specifications in an ILS project), explains how the S-Series ILS specifications relate to the global ILS process and elements, and how to use them as part of an ILS project.
Chapter 4, (ILS specification governance), describes the structure of the S-Series ILS specifications organization and the processes used to manage the development and maintenance of those specifications. The target audience for this chapter is primarily the ILS specifications Council, and the SCs and WGs of the individual specifications.
Chapter 5, (Terms, abbreviations and acronyms), provides the definition of the main terms used in this specification, as well as a list of all the abbreviations and acronyms.
Chapter 6, (Comparison of specification terminology), provides a comparison of the terms, life cycle phases and ILS elements between SX000i and other international and military specifications, to enable users to better understand the underlying concepts.
Availability
SX000i can be downloaded for free from its project website
See also
Integrated logistics support
Associated specifications
The references below cover the specifications associated to the Integrated logistics support process described in SX000i, known as the ASD/AIA S-Series of ILS specifications:
SX000i - International guide for the use of the S-Series of Integrated Logistics Support (ILS) specifications
S1000D - International specification for technical publications using a common source database
S2000M - International specification for materiel management - Integrated data processing
S3000L - International specification for Logistics Support Analysis - LSA
S4000P - International specification for developing and continuously improving preventive maintenance
S5000F - International specification for in-service data feedback
S6000T - International specification for training needs analysis - TNA (definition on-going)
SX001G - Glossary for the Suite of S-specifications
SX002D - Common Data Model
References
Aerospace engineering
Military logistics
Systems engineering | SX000i | Engineering | 1,876 |
78,173,887 | https://en.wikipedia.org/wiki/Butralin | Butralin is a preemergent herbicide used to control suckers on tobacco in the United States, Australia, Mozambique and, for food crops also, China. It is a dinitroaniline, first registered in the US in 1976. It was used in the EU until a ban in 2009 due to its ecotoxicity.
Mode of action and effects
Butralin works by the HRAC mode of action Group D / K1 / 3 (Australian, global and numeric codes, respectively), which involves inhibition of microtubule formation by binding to tubulin, halting growth and causing depolymerization.
In ryegrass meristems, butralin-treated roots show reduced elongation but greater diameter. The cells' rate of mitosis falls by 36% after one hour, and they develop multiple nuclei. Butralin's effect is more similar to that of carbamate herbicides such as chlorpropham than to that of other dinitroanilines.
Usage
Butralin is sold in Mozambique as "Tobralin 36% EC", made in South Africa. Users are instructed to pour 10 mL on each tobacco plant by hand, and not to unclog nozzles with their mouths.
In China, over 100 tons per year are used as of 2022, on garlic, soybean, tomato, rice, peanut, pepper, cotton, eggplant, and watermelon. The maximum residue limit is 0.02 to 0.1 mg/kg. In the growing Chinese market it is sold as a 36% or 48% emulsifiable concentrate, with a 41% wettable powder in development as of 2012. The powder is promoted as more environmentally friendly, as it lacks the volatile organic solvents such as toluene and xylene used in EC formulations. It is recommended to be applied at 2,100 g/ha (active ingredient).
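For orientation only, the arithmetic below converts the stated active-ingredient rate into the amount of formulated product per hectare; applying the same 2,100 g/ha figure to every formulation strength is an assumption made purely to illustrate the conversion.

```python
# Converting an active-ingredient rate into product per hectare for the
# formulation strengths mentioned above (only 2,100 g/ha and the 36%/48%/41%
# strengths come from the text; the rest is illustrative arithmetic).
ai_rate_g_per_ha = 2100.0
for name, fraction in [("36% EC", 0.36), ("48% EC", 0.48), ("41% WP", 0.41)]:
    product = ai_rate_g_per_ha / fraction
    print(f"{name}: about {product:.0f} g of product per hectare")
```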
Health
Butralin is of low acute toxicity. There is no association with lung cancer.
It is very toxic to Daphnia.
In soil
Butralin is likely to be moderately persistent to persistent and relatively immobile in terrestrial environments. It is stable to abiotic hydrolysis and to photodegradation on soil. Its characteristics are unlike those of chemicals that leach to groundwater. Butralin's major soil metabolite is 4-tert-butyl-2,6-dinitroaniline; other major metabolites involve the loss of most or all of the carbons and hydrogens on the nitrogen, or loss of oxygen from the nitro groups. The principal residue in crops, however, is the parent butralin.
References
Links
Preemergent herbicides
Nitrotoluene derivatives
Anilines
Herbicides
Products introduced in 1976
Sec-Butyl compounds
Tert-butyl compounds | Butralin | Biology | 587 |
1,773,278 | https://en.wikipedia.org/wiki/Model%20of%20computation | In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model which describes how an output of a mathematical function is computed given an input. A model describes how units of computations, memories, and communications are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.
Categories
Models of computation can be classified into three categories: sequential models, functional models, and concurrent models.
Sequential models
Sequential models include:
Finite-state machines
Post machines (Post–Turing machines and tag machines).
Pushdown automata
Register machines
Random-access machines
Turing machines
Decision tree model
Functional models
Functional models include:
Abstract rewriting systems
Combinatory logic
General recursive functions
Lambda calculus
Concurrent models
Concurrent models include:
Actor model
Cellular automaton
Interaction nets
Kahn process networks
Logic gates and digital circuits
Petri nets
Process calculus
Synchronous Data Flow
Some of these models have both deterministic and nondeterministic variants. Nondeterministic models correspond to limits of certain sequences of finite computers, but do not correspond to any subset of finite computers; they are used in the study of computational complexity of algorithms.
Models differ in their expressive power; for example, each function that can be computed by a finite-state machine can also be computed by a Turing machine, but not vice versa.
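A small sketch of the distinction, using a deterministic finite-state machine (the transition table and helper name are illustrative): the machine below recognizes a regular language, something a Turing machine can also do by simulating the table, while problems such as checking that a string has equally many 0s and 1s lie beyond any finite-state machine.

```python
def run_dfa(transitions, start, accepting, word):
    """Simulate a deterministic finite-state machine on an input word."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting

# A two-state machine accepting binary strings with an even number of 1s.
transitions = {("even", "0"): "even", ("even", "1"): "odd",
               ("odd", "0"): "odd", ("odd", "1"): "even"}
print(run_dfa(transitions, "even", {"even"}, "10110"))  # False (three 1s)
print(run_dfa(transitions, "even", {"even"}, "1010"))   # True (two 1s)
```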
Uses
In the field of runtime analysis of algorithms, it is common to specify a computational model in terms of primitive operations allowed which have unit cost, or simply unit-cost operations. A commonly used example is the random-access machine, which has unit cost for read and write access to all of its memory cells. In this respect, it differs from the above-mentioned Turing machine model.
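A toy illustration of unit-cost accounting under a RAM-like model; the particular charging scheme (one memory read plus one addition per element) is an assumption made for the example.

```python
def sum_with_cost(values):
    """Sum a list while tallying unit-cost operations in a simple
    random-access-machine style of accounting."""
    total, cost = 0, 0
    for v in values:   # each iteration: one memory read ...
        total += v     # ... and one addition, both charged at unit cost
        cost += 2
    return total, cost

print(sum_with_cost(list(range(10))))  # (45, 20): Theta(n) unit-cost operations
```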
See also
Stack machine (0-operand machine)
Accumulator machine (1-operand machine)
Register machine (2,3,... operand machine)
Random-access machine
Abstract machine
Cell-probe model
Robertson–Webb query model
Chomsky hierarchy
Turing completeness
References
Further reading
Computational complexity theory
Computability theory | Model of computation | Mathematics | 445 |
17,081,128 | https://en.wikipedia.org/wiki/List%20of%20bioinformatics%20journals | This is a list of notable peer-reviewed scientific journals that focus on bioinformatics and computational biology.
Bioinformatics
Bioinformatics | List of bioinformatics journals | Engineering,Biology | 32 |